Guest Editorial: Advances in Systems, Modeling and Languages
The Special Section on Advances in Systems, Modeling and Languages was inspired by four events organized during 2014 in the domains of Information Systems, Formal Languages and Compilers, and Model Driven Software Development and Testing. These events were: (i) the World Conference on Information Systems and Technologies (WorldCIST) in Madeira, Portugal; (ii) the Symposium on Languages, Applications and Technologies (SLATE) in Bragança, Portugal; (iii) the Workshop on Model Driven Approaches in System Development (MDASD); and (iv) the International Workshop on Automating Test Case Design, Selection and Evaluation (ATSE), the latter two organized within the scope of the Federated Conference on Computer Science and Information Systems (FedCSIS) in Warsaw, Poland. After an open call for prospective authors to submit their papers, and a reviewing procedure as rigorous as that for regularly submitted papers, we finally accepted seven papers presenting both theoretical and practical contributions in the field of Advances in Systems, Modeling, and Languages.
Threats can trigger incidents in information systems, causing material damage or intangible losses to assets. A good selection of safeguards is critical for reducing the risks that threats pose. In the paper Fuzzified Risk Management: Selection of Safeguards to Minimize the Maximum Risk, Eloy Vicente, Alfonso Mateos, and Antonio Jiménez-Martín consider the selection of failure-transmission, preventive, and palliative safeguards that minimizes the maximum risk of an information system for a specified budget. They assume that all information system elements are valued on a linguistic scale, which can account for imprecision and/or vagueness in the inputs. Trapezoidal fuzzy numbers are associated with these linguistic terms, so risk analysis and management are based on trapezoidal fuzzy number arithmetic. The authors model and solve the resulting fuzzy optimization problem by means of the simulated annealing metaheuristic and give an example illustrating the safeguard selection process.
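The combination of trapezoidal fuzzy arithmetic and simulated annealing described above can be sketched in a few lines. The representation, the centroid-style defuzzification, the componentwise residual-risk rule, and all risk, reduction, and cost figures below are illustrative assumptions, not the authors' actual model:

```python
import math
import random

def defuzz(x):
    # simple centroid approximation for a trapezoidal fuzzy number (a, b, c, d)
    return sum(x) / 4.0

# toy data: each threat's fuzzy risk and, per safeguard, a fuzzy reduction
risks = [(6, 7, 8, 9), (4, 5, 6, 7), (7, 8, 9, 10)]
reductions = {  # safeguard -> (threat index, fuzzy reduction, cost)
    "S1": (0, (2, 3, 3, 4), 30),
    "S2": (1, (1, 2, 2, 3), 20),
    "S3": (2, (3, 4, 4, 5), 50),
}
BUDGET = 70

def max_risk(selection):
    # defuzzified maximum residual risk after applying the selected safeguards;
    # the componentwise clipped subtraction is a simplifying assumption
    residual = list(risks)
    for s in selection:
        i, red, _ = reductions[s]
        residual[i] = tuple(max(r - d, 0) for r, d in zip(residual[i], red))
    return max(defuzz(r) for r in residual)

def cost(selection):
    return sum(reductions[s][2] for s in selection)

def anneal(steps=2000, t0=5.0):
    # minimize the maximum risk over budget-feasible safeguard subsets
    current, best = set(), set()
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9          # linear cooling schedule
        cand = set(current)
        cand.symmetric_difference_update({random.choice(list(reductions))})
        if cost(cand) > BUDGET:                   # reject infeasible moves
            continue
        delta = max_risk(cand) - max_risk(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
            if max_risk(current) < max_risk(best):
                best = set(current)
    return best

random.seed(0)
choice = anneal()
print(sorted(choice), max_risk(choice))
```

With this toy data the annealer settles on a selection containing the hypothetical safeguard S3, since the third threat dominates the maximum risk and reducing it is affordable within the budget.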
Remote desktop connection (RDC) services offer clients the ability to access remote content and services, commonly in the context of accessing their working environment. In their paper Statistical User Behavior Detection and QoE Evaluation for Thin Client Services, Mirko Suznjevic, Lea Skorin-Kapov, and Iztok Humar aim to detect and analyze common user behavior when accessing RDC services, and to use this as input for making Quality of Experience (QoE) estimations and, subsequently, for effective QoE management mechanisms. They first identify different behavioral categories and conduct traffic analysis to determine a feature set to be used for classification purposes; they then propose a machine learning approach for classifying behavior and use it to classify a large number of real-world RDCs. The authors further conduct QoE evaluation studies to determine the relationship between different network conditions and subjective end-user QoE for all identified behavioral categories.
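The classification step can be illustrated with a minimal nearest-centroid classifier over per-session traffic features. The category names, the two-dimensional feature space, and every number below are invented for illustration; the paper's actual feature set and learning algorithm differ:

```python
# Hypothetical centroids: (downstream packets/s, downstream KB/s) per category.
CENTROIDS = {
    "idle":         (0.5,  1.0),
    "text_editing": (15.0, 8.0),
    "video":        (60.0, 400.0),
}

def classify(features):
    # assign a session to the behavioral category with the nearest centroid
    def dist2(c):
        return sum((f - ci) ** 2 for f, ci in zip(features, c))
    return min(CENTROIDS, key=lambda name: dist2(CENTROIDS[name]))

# toy sessions summarized by the same two features
sessions = [(0.8, 1.2), (14.0, 9.5), (55.0, 380.0)]
labels = [classify(s) for s in sessions]
print(labels)
```

A per-category QoE model could then be selected from the predicted label, which is the kind of coupling between behavior detection and QoE management the paper targets.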
Flávio Rodrigues, Nuno Oliveira, and Luís S. Barbosa, in their paper Towards an engine for coordination-based architectural reconfigurations, introduce an engine that statically applies reconfigurations to (formal) models of software architectures. Reconfigurations are specified in a domain-specific language, ReCooPLa, which targets the manipulation of software coordination structures typically used in service-oriented architectures. The engine is responsible for compiling ReCooPLa instances and applying them to the relevant coordination structures. The resulting configurations are amenable to formal analysis of qualitative and quantitative (probabilistic) properties. The authors argue that software reconfigurability has become increasingly relevant to the architectural process due to the growing dependency of modern societies on reliable and adaptable systems. Such systems are supposed to adapt to changes in their surrounding environment with minimal service disruption, if any.
In the paper Tuning a Semantic Relatedness Algorithm using a Multiscale Approach, José Paulo Leal and Teresa Costa start from the definition of a family of semantic relatedness algorithms. These algorithms depend on a semantic graph and on a set of weights assigned to each type of arc in the graph. Their research objective is to automatically tune the weights for a given graph in order to improve the proximity quality. The quality of a semantic relatedness method is usually measured against a benchmark data set: the results produced by the method are compared with the benchmark using a nonparametric measure of statistical dependence, such as Spearman's rank correlation coefficient. The presented methodology works the other way round and uses this correlation coefficient to tune the proximity weights. The tuning process is controlled by a genetic algorithm with Spearman's rank correlation coefficient as the fitness function. This algorithm has its own set of parameters, which also need to be tuned. Several ranges of parameter values were tested, and the results obtained are better than those of state-of-the-art methods for computing semantic relatedness over WordNet 2.1, with the advantage of not requiring any domain knowledge of the semantic graph.
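The fitness function at the core of such a tuning loop can be computed without external dependencies. The sketch below implements tie-aware Spearman rank correlation; the benchmark values and method scores are toy numbers, and the genetic algorithm wrapping this function is omitted:

```python
def ranks(xs):
    # rank values 1..n, averaging the ranks of tied values
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    # Pearson correlation of the rank vectors (handles ties correctly)
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# fitness of a candidate weight vector: correlation between the relatedness
# scores it induces and the human benchmark judgments (toy numbers)
benchmark = [9.0, 7.5, 3.0, 1.0]
method_scores = [0.92, 0.80, 0.35, 0.10]   # hypothetical algorithm output
print(spearman(method_scores, benchmark))
```

A genetic algorithm would evaluate this fitness for each candidate weight vector in its population and select the vectors whose induced rankings correlate best with the benchmark.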
Tomas Cerny, Miroslav Macik, Michael J. Donahoo, and Jan Janousek in their paper On Distributed Concern Delivery in User Interface Design consider the advantages brought by concern-separation approaches to user interface (UI) development from different perspectives. They propose an extension to aspect-oriented UI design with distributed concern delivery (DCD) for client-server applications. Such an extension lessens server-side involvement in UI assembly and reduces fragment replication in the provided UI descriptions. The server provides clients with individual UI concerns, and clients become partially responsible for the UI assembly. This change increases client-side concern reuse and extends caching opportunities, reducing the volume of information transmitted between client and server and thereby improving UI responsiveness and performance. The underlying aspect-oriented UI design automates the server-side derivation of concerns related to data presentations adapted to runtime context, security, conditions, etc. The approach is evaluated in a case study applying DCD to an existing production web application.
Model-driven software development (MDSD) is surrounded by numerous myths and misunderstandings that hamper its adoption. In the paper Teaching Pragmatic Model-Driven Software Development, Jaroslav Porubän, Michaela Bačíková, Sergej Chodarev, and Milan Nosál claim that students are sometimes victims of these myths, considering MDSD impractical and applicable only in academia. The authors present their experience with devising an MDSD course that motivates students to understand MDSD principles. The main contribution is a set of MDSD teaching guidelines that can make the course pragmatic in the eyes of students as programmers. These guidelines introduce MDSD from the viewpoint of a programmer, as a pragmatic tool for solving concrete problems in the development process. The proposed course demonstrates several techniques and principles of MDSD in multiple incremental iterations instead of concentrating on a single tool, and is implemented as an iterative, incremental MDSD case study.
Acceptance testing is highly dependent on the formulation of requirements, as the final system is tested against user requirements. It is thus highly desirable to automate the transition from requirements to acceptance tests. In the paper Model-Driven Acceptance Test Automation Based on Use Cases, Tomasz Straszak and Michał Śmiałek present a model-driven approach to this issue, in which detailed use case models are transformed into test cases. The proposed approach facilitates synchronizing functional test cases with other types of tests and introducing test data. This leads to a unified approach in which requirements models of various kinds drive the acceptance testing process. The process runs in parallel with the development process, which also involves automatic transformations from requirements models to software development artefacts (models and code).
Editor of the special section
Ivan Luković