Current Projects


Automated Decision Support for Security Requirements Analysis

Many IT systems, for example those deployed in banks, are subject to security requirements as a way to mitigate security risks and protect clients' data. Security requirements are often rooted in the recommendations and controls described in regulations, standards and best practices such as the GDPR, the ISO 27000 family of standards and the NIST SP 800 guidelines. Collectively, these regulations, standards and best practices induce hundreds of security requirements. For a specific IT system in a particular context, only a fraction of this large collection is pertinent. Manually selecting the appropriate security requirements for a given project from this vast set of potentially applicable requirements is a laborious task. This project aims at mining security information from past projects in order to provide analysts with automated decision support for selecting the most pertinent security requirements for new projects.

HITEC Luxembourg

Model-driven Run-time Enforcement of Role-based Access Control Properties

HITEC develops a hardware/software communication suite for the public safety and humanitarian community, designed to increase situational awareness and improve communication capabilities in emergency response scenarios. In these scenarios, rescue teams need technological support to minimize the risk to their own lives and maximize the victims' chances; the reliability and security of the communication system are absolute requirements for the effectiveness of the operations.
In particular, one of the main requirements is to guarantee that only designated people, at the right time and/or in the right location, can read and/or edit specific bits of information related to a certain mission; these data are available on a Web-based portal developed and operated by HITEC itself.

The goal of this project is to develop an approach for model-driven run-time enforcement of role-based access control (RBAC) properties. The approach is based on a generalized, comprehensive model for RBAC (called GemRBAC), which includes all the entities required to define the main types of RBAC policies presented in the literature. The idea is to formalize the semantics of RBAC policies as OCL constraints on instances of the GemRBAC model, and to evaluate these constraints at run time to enforce the corresponding policies.
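As a rough illustration of the idea (the entities and policy below are invented for this sketch and are far simpler than GemRBAC), a temporal role-activation policy can be enforced at run time by evaluating a constraint over the current model instance:

```python
from dataclasses import dataclass
from datetime import time

@dataclass(frozen=True)
class Role:
    name: str
    start: time   # earliest permitted activation time
    end: time     # latest permitted activation time

@dataclass
class User:
    name: str
    roles: frozenset

def may_activate(user: User, role: Role, now: time) -> bool:
    # Run-time evaluation of an OCL-like invariant, roughly:
    #   context Role inv: self.isActive() implies
    #       self.start <= now and now <= self.end
    return role in user.roles and role.start <= now <= role.end

doctor = Role("doctor", time(8, 0), time(18, 0))
alice = User("alice", frozenset({doctor}))

print(may_activate(alice, doctor, time(9, 30)))   # True: inside the window
print(may_activate(alice, doctor, time(22, 0)))   # False: outside the window
```

In the actual approach, such checks are generated from OCL constraints over GemRBAC model instances rather than hand-coded.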

Testing for Access Control Vulnerabilities

Broken access control is a widely recognized security issue in Web applications; it leads to unauthorized access to sensitive data and system resources. The consequences can be dramatic, from information leakage to business services being shut down. According to a recent report by OWASP, broken access control is reflected in three of the top ten vulnerability categories. As Web applications integrate and provide more services to more diverse entities (users, devices, or other service providers, which often have different functionalities and data), they are becoming increasingly complex, and access control engineering for them is becoming increasingly challenging. Broken access control is an imminent risk if access control models are not designed and documented properly, or if access control implementations are not adequately tested.

The main objectives of this project are three-fold. First, to deal with the problem that access control (AC) policy specifications are often missing or outdated, we investigate an approach to automate the extraction of AC policies implemented in Web applications. Second, we aim to propose a method to raise the abstraction level of the AC policies extracted from Web applications, so as to help security auditors identify the AC issues and vulnerabilities discussed above. Third, we strive to apply and evaluate the whole framework on DISP, a key product of HITEC.


Automated Requirements-Based Testing for Embedded Systems

IEE is a leader in automotive safety systems, specializing in systems for occupant detection and classification. These systems are used in a variety of automotive applications, for example determining whether a vehicle's air bag can be safely deployed in the event of a collision. Verifying these systems is a challenging process. The correctness of IEE systems is critical, as failure can potentially result in loss of life. Furthermore, there exist stringent guidelines for the verification process (namely ISO 26262) that must be satisfied before the product comes to market. Manually constructing the required test cases demands a great deal of effort during development. The collaboration between IEE and the SVV lab aims to reduce this manual effort by automating the construction of tests from the system requirements. The resulting test suites are intended to be both effective at detecting faults and capable of suitably demonstrating the connection between the verification process and the system requirements, thereby satisfying the verification guidelines.

Up to now the project has led to four main contributions: a set of coherent guidelines for the analysis and definition of functional and timing requirements, and three techniques (and tools) that enable the automated generation of functional and timeliness test cases. The guidelines for requirements analysis enable engineers to adopt the test automation techniques developed in the context of the project. The first test automation technique developed in this project, UMTG, integrates a natural language processing solution that enables the automated generation of test cases from use case specifications written in natural language. The second technique delivered by the project, TAUC, automatically generates executable test cases for testing software timeliness from high-level specifications of timing requirements given as timed automata. The third technique, STUIOS, automatically generates oracles for testing non-deterministic systems where the source of non-determinism is time uncertainty, which is typical in embedded systems. All three techniques have been successfully evaluated on the case study system provided by our industrial partner.

Effective Testing of Advanced Driver Assistance Systems

This project concerns the testing of Advanced Driver Assistance Systems. With the challenge of developing software for autonomous vehicles comes the challenge of testing this software to ensure vehicles' safety and reliability. Simulation, i.e., design-time testing of system models, is arguably the most effective way of testing the control and perception algorithms developed for autonomous driving. Rich simulation environments are able to replicate all kinds of weather conditions, different types of roads, vehicles and pedestrians, intersections, buildings, landscapes, etc. They enable engineers to generate and execute scenarios capturing the various ways in which pedestrians interact with road traffic or vehicles interact with one another. Recent years have seen major improvements in the computational performance and modeling accuracy of simulators. Their full exploitation, however, is hampered by the lack of guidance for identifying the most critical simulation scenarios among the sheer number of scenarios that can possibly be generated, and by the time it takes to generate these scenarios.

Our work in this project is currently focused on addressing the following challenges: (1) Identifying the most critical system behaviors and generating input data to simulate critical system behaviors, (2) developing methods to speed up the search for test input data generation, and (3) characterizing the conditions under which the critical system behaviors are likely to occur.

Supporting Change in Product Lines within the context of Use Case Centric Development and Testing

This project supports the product line engineering process. IEE, a supplier of sensing systems to multiple automotive manufacturers, needs to adopt product line engineering concepts (e.g., variation points and variants) to identify commonalities and variability early in requirements analysis. The project is divided into five main objectives: (1) definition of a modeling method for capturing variability information in product line use case models, (2) automated configuration of product-specific use case models, (3) definition of a change impact analysis approach for evolving configuration decisions, (4) definition of a change impact analysis approach for evolving product line use case and domain models, and (5) automated regression test selection for a family of products.

We have so far defined techniques that cover the first four project objectives. The modeling method guides software engineers in capturing variability information in product line use case and domain models, e.g., product line use case diagrams, use case specifications and domain models. Product line use case models that capture the variability information for a family of products are then used to derive product-specific use case models for each customer via a configurator. The change impact analysis approach is used to reconfigure the product-specific use case models when changes occur either in the product line models or in the configuration decisions made by the analysts. Automated regression test selection for a family of products can be performed by comparing the configuration decisions made for each product.
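The last step can be sketched as follows (the data model and names are hypothetical simplifications): a test case needs to be re-run only if it traces to a variation point whose configuration decision changed between products:

```python
# Select regression tests for a new product configuration by diffing its
# variation-point decisions against a previously tested configuration.

def select_regression_tests(old_config, new_config, trace):
    """old_config/new_config: dicts mapping variation points to chosen variants.
    trace: dict mapping variation points to the test cases covering them."""
    changed = {vp for vp in set(old_config) | set(new_config)
               if old_config.get(vp) != new_config.get(vp)}
    return sorted({t for vp in changed for t in trace.get(vp, [])})

old = {"SeatSensor": "capacitive", "Airbag": "dual-stage"}
new = {"SeatSensor": "resistive",  "Airbag": "dual-stage"}
trace = {"SeatSensor": ["TC1", "TC4"], "Airbag": ["TC2"]}

print(select_regression_tests(old, new, trace))  # ['TC1', 'TC4']
```

Only the tests tracing to the changed "SeatSensor" decision are selected; the unchanged "Airbag" decision contributes none.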


Model Testing and Model-based Testing of Satellite Control Systems

Increasingly, we are faced with systems that are untestable, meaning that traditional testing methods are expensive, time-consuming or infeasible to apply due to factors such as the systems' continuous interactions with the environment and the deep intertwining of software with hardware. In this project we focus on providing testing techniques for the Attitude Determination and Control Sub-System (ADCS), which monitors and controls satellite systems. ADCS is an example of an untestable system: it receives sensor data and, after processing the input measurements, produces control commands for actuators. It further has to implement remote, real-time interactions between the satellite and the ground station via telecommands and telemetry data. The development of ADCS software has two main stages: (1) functional development, where the focus is on developing control algorithms, and (2) operational development, where the telecommand and telemetry communications are implemented and the real-time design and implementation of ADCS is completed. The artifacts generated at each of these two stages have to be properly tested. For this purpose, two kinds of simulators have been developed at LuxSpace to support the testing of ADCS: (1) the Functional Engineering Simulator (FES), to support testing at the functional level, and (2) the Software Validation Facilities simulator (SVF), to support testing at the operational level.

Our goal in this project is to define test techniques for the functional and operational requirements of ADCS, relying on the FES and SVF simulators. Specifically, for functional-level testing, our goal is to automate the testing of the ADCS functional requirements under different environments (with uncertainty) and different hardware configurations. For operational-level testing, our goals are (1) guided exploration of the input space, checking whether, given an input scenario consisting of environment stimuli and (tele)commands, ADCS generates the expected output and telemetry, and (2) analysis of real-time and performance properties such as CPU-time usage and deadline misses.


Automated Extraction of Semantic Metadata from Legal Texts 

Legal informatics aims at improving the understanding and dissemination of legal texts, and providing ways to comply with the provisions in such texts more effectively. A key tenet of legal informatics is leveraging information technologies for making explicit the structure and semantic properties of legal texts. These properties constitute the metadata that needs to be recorded alongside the natural-language content of legal texts, both to facilitate the interpretation of legal texts by humans and to enable advanced analyses such as smart search, automated compliance verification and monitoring. Given the sheer scale of legal corpora amassed over decades and centuries, a fully manual creation of legal metadata is extremely laborious and requires a tremendous amount of resources. Nowadays, this task is still done largely manually. Consequently, important compromises have had to be made when specifying the metadata, in order to reduce the associated costs. These compromises often impact the quality of the specified metadata and hinder machine analyzability of legal texts. The aim of this project is to enable the extraction of several important pieces of semantic metadata from legal texts. These include modalities (rights, obligations, permissions, etc.), conditions and consequences.


Log-based Test Case Generation for Ground Control Systems

A Satellite Ground Control System (SGCS) generates various log files to keep track of events (e.g., sending commands to or receiving telemetry from the satellite) as well as other information useful for monitoring and debugging purposes. Domain experts have to manually analyze several log files, which are generated by different software components (e.g., clients, servers) and processes. These files are also used to manually identify regression faults and to ensure the reliability of new functionalities when a new version of the SGCS is built. Manually analyzing the log files generated by an SGCS is extremely challenging for several reasons. First, log files generated by different processes and/or different software components have different characteristics, such as format and type of log messages. Second, different log files can be linked to each other (e.g., a single process/component may generate more than one log file), but no explicit traceability is provided. Finally, an SGCS generates hundreds of log files per day, with redundant information.

This project aims at reducing this manual effort by automatically inferring an abstract model from the logged data, e.g., common patterns, invariants and observed failures. Model-based testing techniques will then be used to generate new tests with assertions (oracles) reproducing the observed behaviors. Such tests will help developers monitor the SGCS, detect new behaviors, and support regression testing.
Beyond passively learning from past log data, we will develop new test strategies aimed at exercising additional behaviors, to stress non-functional properties or to trigger exceptional situations. Finally, we will develop enhancement techniques to dynamically add log messages when the existing ones are not sufficient for generating meaningful abstract models.
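As a minimal illustration of the first step, pattern inference can start by abstracting raw log messages into templates, masking the variable parts (the log lines below are invented examples, not actual SGCS output):

```python
import re
from collections import Counter

def template(line):
    """Abstract a log message into a template by masking variable parts."""
    line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)  # mask hex ids first
    return re.sub(r"\d+", "<NUM>", line)             # then decimal numbers

log = [
    "sent command 17 to satellite 3",
    "sent command 42 to satellite 3",
    "received telemetry frame 0x1f2e",
    "received telemetry frame 0x99aa",
]

patterns = Counter(template(l) for l in log)
for p, n in patterns.most_common():
    print(n, p)
```

Four raw lines collapse into two templates, each seen twice; a model inference step can then work over these templates instead of the raw, redundant messages.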

Model-Based Simulation of Integrated Software Systems

Many industries, such as the telecommunications industry, are heavily reliant on Integrated Software Systems (ISSs) to provide their services and products. The main characteristic of an ISS is that it is made up of several large-scale and heterogeneous subsystems that have been developed by different suppliers and later put together to provide a set of advanced functions. To ensure that ISSs behave as intended and meet their functional, performance, and robustness requirements, these systems are subject to extensive Verification and Validation (V&V).

A key aspect of V&V for an ISS is simulation-based testing, which aims to detect defects at design time, before the ISS has been operationalized. In this project, we aim to develop an automated and effective technique for testing satellite systems. Testing a satellite is a critical activity; however, due to the high cost implications, there are often strict limits on the power and bandwidth usage and on the duration of testing. The smaller the power and bandwidth consumption and the shorter the duration of testing, the earlier the satellite can resume full operation. It is thus desirable to minimize as much as possible the usage of critical satellite resources during testing.
Given a test budget, for example in terms of power, bandwidth or time, our goal is to find an execution ordering for the satellite test procedures such that the whole testing activity runs within that budget. If a suitable ordering is not found within reasonable time, the goal becomes to select an optimal subset of the test cases that maximizes certain criteria, for example coverage criteria.

To address this problem, we plan to precisely define what constitutes the state of a satellite resource and what the dependencies between the test cases and the resources are, that is, the "requires" and "taints" relationships. We further need a mechanism, for example a rule language, to explicitly express the preconditions under which a test case can be run. Finally, we intend to develop a search-based optimization algorithm to compute an optimized ordering for running (a subset of) the test procedures.
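As a toy illustration of the optimization problem (the tests and resources are invented, and exhaustive enumeration stands in for the actual search-based algorithm), an ordering can be scored by how many resource resets it forces, where each test requires certain resources to be clean and taints others after it runs:

```python
import itertools

TESTS = {
    "T1": {"requires": {"battery"}, "taints": {"antenna"}},
    "T2": {"requires": {"antenna"}, "taints": set()},
    "T3": {"requires": {"battery"}, "taints": {"battery"}},
}

def resets_needed(order):
    """Count resets: each required-but-tainted resource must be reset first."""
    tainted, resets = set(), 0
    for t in order:
        spec = TESTS[t]
        resets += len(spec["requires"] & tainted)  # reset tainted prerequisites
        tainted -= spec["requires"]                # reset makes them clean again
        tainted |= spec["taints"]                  # running the test taints more
    return resets

best = min(itertools.permutations(TESTS), key=resets_needed)
print(best, resets_needed(best))  # an ordering with zero resets exists here
```

A real instance would replace the enumeration with a scalable metaheuristic (e.g., a genetic algorithm) and fold power, bandwidth and time costs into the fitness function.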

EU- and FNR-funded projects


CRC "Methods and Tools for Understanding and Controlling Privacy" is a European project in which the SVV lab collaborates with Saarland University and the Max Planck Institutes in Saarbrücken, Germany, and with the University of Lorraine and INRIA, France. The research centre was established in early 2016. The project aims to make scientific and technological contributions towards providing privacy in tomorrow's Internet. The SVV team participating in the CRC investigates methods for analyzing how Web and mobile applications handle private information, new techniques for determining potential privacy violations, and ways of assessing the privacy impact on users of online interactions and mobile encounters.


EDLAH2 "Enhanced Daily Living and Health of Older Adults" is a European project in which the SVV lab collaborates with KG&S (UK), EverdreamSoft (Switzerland), terzStiftung (Switzerland), and the University of Geneva. The goal of the project is to create a service system, based on the concept of gamification, that helps the elderly stay autonomous as long as possible. The project will develop a set of gamified tablet applications offering different services such as health and nutrition, social networking, medicine and object location. The SVV team participating in EDLAH2 investigates methods for capturing the security and privacy concerns raised by gamified applications, using restricted use case modeling techniques, as well as methods for automatically generating security test cases from use case models to ensure that the developed applications comply with applicable security and privacy regulations.

Completed Projects


Testing of Web Applications and Services for SQL Injection Vulnerabilities

The world wide web has been constantly evolving: many business sectors such as e-banking, online shopping, e-government, and social networking have made their services available on the web. In addition, the adoption of cloud-based systems and services has further accelerated this shift. However, this has also made the web the main target for malicious attackers. Recent studies found that the number of reported web vulnerabilities is growing sharply, and the latest statistics show that web applications experience up to 27 attacks per minute.

This project focuses on SQL injection vulnerabilities, which have consistently been top-ranked in recent years. We investigate a novel black-box testing approach that targets SQL injection vulnerabilities in the services of our industry partner, CETREL. The key challenges that we need to address include scalability, reachability, and executability. In terms of scalability, the systems under test can have many service interfaces comprising a large number of input parameters, and there is a potentially large number of test inputs (SQL injection attacks); as a result, the proposed approach must be efficient in test generation and execution. Regarding reachability, the systems under test can be protected by web application firewalls (WAFs) and, for this reason, generated test inputs need to bypass the WAFs to reach the target systems. Finally, once test inputs reach the systems, they should trigger SQL statements that are executable; otherwise, they will not be executed and the injection impact cannot be observed.

Testing of Web Applications and Services for XML Vulnerabilities

Over the last few years, the use of XML in software systems and services has significantly increased. For example, XML is the enabling technology used in most of today's web services for exchanging data between service providers and consumers. XML is also widely used to store data and configuration files that govern the operation of software systems.
However, a dozen XML vulnerabilities have recently been uncovered and reported; they provide opportunities for denial-of-service attacks or malicious data access and manipulation. As a consequence, many systems that rely on XML are at risk if they do not properly mitigate these vulnerabilities. Such systems include (i) web services, (ii) XML processors, and (iii) other systems that read XML input data.
There is an urgent need for software engineering techniques and tools, including software testing, to build security into software and services. However, very little (if any) research effort has been devoted to the problem of testing for XML vulnerabilities.

This project targets XML vulnerabilities in web services, with a first step focusing on XML parsers. We investigate a testing technique driven by a comprehensive list of indicators of XML-based attacks. The key challenge, for some categories of XML vulnerabilities, is that thorough domain knowledge is needed in order to generate effective tests. We plan to tackle this challenge by adopting a string-based constraint solver that turns domain constraints and test objectives into test inputs.


Model-driven Trace Checking of Temporal Properties of eGovernment Business Processes

Digital society has seen the introduction of electronic government (eGovernment) services to deliver digital public services to citizens and enterprises. The majority of eGovernment services are modeled in terms of business processes, which are usually defined by composing external services offered by third-party providers. The definition of a business process corresponding to an eGovernment service also has to take into account specific requirements and constraints, such as those prescribed by the law.

The compliance of a business process with respect to its requirements can be checked with different verification techniques, such as model checking, run-time monitoring, and offline trace checking. Offline trace checking is a procedure for evaluating requirements over a log of events recorded by a system. Traces can be produced at run time by a suitable monitoring/logging infrastructure and made available at the end of a business process execution for offline trace checking. Offline trace checking complements verification activities performed before the deployment of a system by allowing for the post-mortem analysis of the actual behaviors that emerged at run time and were recorded in a log. These behaviors include those of the business process itself as well as those arising from its interaction with the various third parties (e.g., other administrations, suppliers) involved in the execution of the process. Offline trace checking is thus also a way to check whether third-party providers fulfill their guarantees and to assess how they interact with the rest of the parties involved in the business process.
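As a minimal illustration (the event names and property are invented, and the project's actual solution relies on a domain-specific language and industry-strength checkers), an offline trace checker evaluates a temporal property, such as a bounded response, over a recorded trace:

```python
def holds_bounded_response(trace, trigger, response, bound):
    """Check: every 'trigger' event is followed by a 'response' event
    within 'bound' time units. trace: list of (timestamp, event) pairs,
    sorted by timestamp."""
    for i, (t, ev) in enumerate(trace):
        if ev == trigger:
            if not any(e == response and t < u <= t + bound
                       for u, e in trace[i + 1:]):
                return False
    return True

# A recorded trace of a (hypothetical) business process execution.
log = [(0, "request"), (30, "reply"), (100, "request"), (200, "reply")]

print(holds_bounded_response(log, "request", "reply", 60))  # False: the
                                                            # second reply
                                                            # arrives too late
```

Scaling such checks to large logs and expressive temporal specifications is precisely what makes the problem non-trivial in practice.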

The goal of this project is to develop a practical and scalable solution for the offline checking of the temporal requirements of an eGovernment system, in the context of a collaboration between CTIE and the SnT centre at the University of Luxembourg. The proposed solution can be used in contexts where model-driven engineering is already a practice, where temporal specifications should be written in a domain-specific language that does not require a strong mathematical background, and where relying on standards and industry-strength tools for property checking is a fundamental prerequisite.

Modeling, Verification and Testing of Legal Requirements

Developing a system that complies with legal requirements is a challenging task which, if not managed carefully, can lead to a hard-to-test and undocumented system that potentially violates legal obligations. Legal texts may contain ambiguities that make it difficult to extract well-defined, clear legal requirements. Once these requirements are extracted, the impact of changes to the legal texts on the developed system may be hard to trace if no effective support is provided for this activity. The inability to automatically identify the impact of changes would not only complicate the testing of the system after changes but could also introduce further inconsistencies between the legal texts, the legal requirements, and the system implementation. Different stakeholders of legal systems have different levels of technical expertise. Selecting one representation of legal requirements that is both useful and understandable to all stakeholders (e.g., lawyers, software engineers) may not be possible; instead, solutions are required for managing consistency across different representations.

The goal of this project is to develop suitable techniques to model legal requirements in such a way that they can be reviewed by legal experts, but also analyzed to derive test plans for systematic and automated system validation. In addition, traceability links will be maintained between the law, its interpretation, system requirements, system design, and the test plans.

A key prerequisite for achieving the above is the ability to model the legal and system requirements in a syntactically and semantically well-defined form, preferably relying on international modeling standards for software. Model-Driven Engineering (MDE) is already part of the practice at CTIE, but it will need to be further adapted and tailored where necessary to address these new requirements.


A SysML-Based Approach to Traceability Management in Support of Change Impact Analysis

The focus of this project is to improve change impact analysis for embedded software systems. Change impact analysis is a key activity in software maintenance and evolution, and is useful both (1) to estimate the risk and effort involved in applying a change and (2) to implement a change by identifying what needs to be modified to accomplish it. We study change impact analysis for embedded systems in the automotive domain that consist of both software and hardware components, where a change in hardware may have several unknown implications on software and vice versa. A solution to change impact analysis for embedded systems has to be defined at the system level and carried out within a system engineering group, rather than within engineering groups focused specifically on software or hardware. As a result, our approach builds on the Systems Modeling Language (SysML) and proposes a SysML-based methodology to help system engineers create requirements and design diagrams and, in addition, establish proper traceability links from requirements to design elements. Given SysML models developed with our methodology, we develop and evaluate automated change impact analysis techniques that, given a change in a requirement, help engineers effectively identify the set of impacted requirements and design elements.
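As a simplified sketch of the impact analysis step (the traceability graph below is hypothetical, and the actual techniques are more selective than a plain traversal), the impacted set can be estimated by transitively following traceability links from the changed element:

```python
from collections import deque

# Traceability links: element -> elements that depend on it
# (R* are requirements, D* are design elements).
LINKS = {
    "R1": ["R2", "D1"],
    "R2": ["D2"],
    "D1": ["D3"],
    "D2": [],
    "D3": [],
}

def impact_set(changed):
    """Breadth-first traversal collecting everything reachable via links."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for nxt in LINKS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen)

print(impact_set("R1"))  # ['D1', 'D2', 'D3', 'R2']
```

A naive transitive closure like this over-approximates; the value of a dedicated technique lies in pruning this set down to the elements that are actually impacted.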

Fault Localization in Simulink Models

The automotive software industry increasingly relies on Simulink to model, simulate, and develop embedded software. To ensure that Simulink models are reliable (i.e., free from faults), these models are subject to extensive testing to verify the logic and behavior of the software modules developed in them. During model testing, the occurrence of failures indicates that faults have manifested in the models. Due to the complex structure of Simulink models, finding the root cause of failures (i.e., the faults) is non-trivial and time-consuming. Hence, a technique able to automatically locate faults in Simulink models would significantly reduce engineers' effort in debugging them.
Fault localization has been an active research area focused on developing automated techniques to support software debugging. Although many techniques have been proposed to localize faults in programs, there has not been much research on fault localization for Simulink models. In this project, we investigate and develop a lightweight fault localization approach that is able to automatically and accurately locate faults in Simulink models, thereby minimizing engineers' effort in debugging them. To enhance the usability of our approach, we also investigate a visualization mechanism that can provide engineers with a better understanding of the faults in their models.
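To make the general idea concrete (this is a generic illustration, not the approach developed in the project), spectrum-based fault localization ranks elements by how strongly their coverage correlates with failing tests, e.g., using the well-known Tarantula formula; the same idea carries over from program statements to Simulink blocks:

```python
def tarantula(coverage, outcomes):
    """coverage: {test: set of covered blocks}; outcomes: {test: 'pass'|'fail'}.
    Returns a suspiciousness score in [0, 1] for each block."""
    total_f = sum(1 for o in outcomes.values() if o == "fail")
    total_p = len(outcomes) - total_f
    scores = {}
    for b in set().union(*coverage.values()):
        f = sum(1 for t, cov in coverage.items()
                if b in cov and outcomes[t] == "fail")
        p = sum(1 for t, cov in coverage.items()
                if b in cov and outcomes[t] == "pass")
        fr = f / total_f if total_f else 0.0   # fraction of failing tests hit
        pr = p / total_p if total_p else 0.0   # fraction of passing tests hit
        scores[b] = fr / (fr + pr) if fr + pr else 0.0
    return scores

# Hypothetical test runs over blocks B1-B3; B2 is covered by all failures.
cov = {"t1": {"B1", "B2"}, "t2": {"B2", "B3"}, "t3": {"B1", "B3"}}
out = {"t1": "fail", "t2": "fail", "t3": "pass"}

scores = tarantula(cov, out)
print(max(scores, key=scores.get))  # B2 is the most suspicious block
```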

Testing Simulink Models with Time-Continuous Behaviors

This project focuses on the problem of testing embedded software systems implemented as Simulink models. Simulink is a data-flow-based block diagram language for the modeling, simulation, and development of embedded software. Despite the proliferation of software testing automation techniques, the existing approaches largely fail to address the various challenges of Simulink model testing. Examples of such challenges include, among others, testing continuous or mixed discrete-continuous behaviors of Simulink models, handling the large numbers of configuration/calibration variables used in these models, and developing test oracles for Simulink models. Broadly, the goal of this project is to provide scalable and effective automated testing techniques for Simulink models by addressing these challenges.


Automated Model-Based Testing for Data Acquisition Systems

SES provides, among other things, satellite data acquisition systems that are customized for different clients based on their requirements. Testing these systems is time-consuming, especially as the test suite grows with each new release. The collaboration between the SVV lab and SES has focused on the automated generation of both test data and oracles. A significant challenge in this project is that the system is outsourced and, therefore, only black-box techniques can be applied.

The project has led to four major contributions that target different objectives: the generation of automated oracles, automated testing of functional requirements, automated robustness testing, and automated testing of updated data requirements. The first contribution is a model-based methodology to capture data requirements by relying upon UML models and constraints expressed in the Object Constraint Language. The methodology enables the automated validation of input data and the generation of oracles that check whether the system generates the expected outputs in the presence of invalid inputs. The second contribution is a model-based technique that automatically generates faulty test inputs for the purpose of robustness testing, relying upon generic mutation operators that alter data collected in the field. The third contribution is an evolutionary algorithm that automates the robustness testing of data processing systems through optimized test suites. Finally, the fourth contribution is an automated, model-based approach that reuses field data to generate test inputs that fit new data requirements (e.g., network packets including additional fields). All the approaches have been evaluated on the case study system provided by our industrial partner, and they have been detailed in papers published in top software engineering venues, including ACM TOSEM, the International Conference on Automated Software Engineering, and the International Conference on Software Testing, Validation and Verification.

Automated Requirements Quality Assurance

A second joint project between the lab and SES addresses requirements Quality Assurance (QA). Requirements defects, if left undetected, often ripple through the system design and implementation stages with major cost implications. The aim of this second project is to develop improved requirements QA methods and tools to automate complex and laborious requirements analysis tasks such as consistency checking, ambiguity detection, and change propagation.
There has been a lot of research on the automated analysis of natural language requirements over the years, but major gaps remain. In particular, the problem of requirements evolution has not been adequately addressed; very little empirical knowledge exists about the effectiveness of natural language analysis in industrial settings; and many recent advances in natural language processing (NLP) with potential applications to requirements engineering are yet to be harnessed.

In the context of this project we have developed NLP-based requirements quality assurance tools (RETA, REGICE, NARCIA), which are fully operational. SES has expressed interest in maturing these tools so that they can be deployed in aerospace production environments. We have further been approached by international companies from outside Luxembourg that have shown strong interest in our results and brought forward compelling business cases for the integration and exploitation of our requirements quality assurance technologies.