The Security Design and Validation Research Group – SERVAL – is headed by Professor Yves Le Traon.

SERVAL conducts research on software engineering, Machine Learning (ML), and their intersection. SERVAL is committed to making software- and ML-based systems more trustworthy (i.e., secure, safe, and correct) than they are today. Among other topics, the group works on (1) testing techniques (mutation, evolutionary algorithms, static analysis) to ensure that functional and security mechanisms (privacy, access control, usage control, encryption) are correctly implemented and deployed; (2) innovative methods to make ML models more robust against quality and security threats (e.g., adversarial attacks, failures), more generalizable, and more explainable (eXplainable AI); and (3) decision support systems for dynamic and uncertain environments (using, among other methods, deep reinforcement learning).

SERVAL applies its research in various domains, including financial services (FinTech), energy, Industry 4.0, and autonomous vehicles.

Research topics include:

  • Model Driven Engineering
  • Software Testing
  • AI Security
  • Effective and Trustworthy AI / ML
  • eXplainable AI (XAI)
  • Predictive Maintenance

Prof. Yves Le Traon (SnT)