
Decebal Constantin MOCANU

Associate Professor in Machine Learning

Department: Department of Computer Science
Postal Address: Université du Luxembourg
Maison du Nombre
6, Avenue de la Fonte
L-4364 Esch-sur-Alzette
Campus Office: MNO, E03 0325-060
Telephone: (+352) 46 66 44 6040


  • April 24: Invited talk titled "Sparse Neural Networks Training for Sustainable AI" at the TU Delft AI Energy Lab within the AI for E&S Think Tank
  • April 24: Ghada Sokar's paper titled "The Dormant Neuron Phenomenon in Deep Reinforcement Learning", an output of her internship at Google Brain, has been accepted at ICML 2023 (link)
  • April 23: Our tutorial "Sparse Training for Supervised, Unsupervised, Continual, and Deep Reinforcement Learning with Deep Neural Networks" has been accepted at IJCAI 2023 (link: coming soon)
  • April 22: Zahra Atashgahi's paper titled "Cost-effective Artificial Neural Networks" has been accepted at the IJCAI 2023 Doctoral Consortium
  • April 11: Invited talk titled "Dynamic Sparse Training: challenges and opportunities towards scalable, efficient, and sustainable AI" at the TRUST AI workshop during the Smart Diaspora 2023 conference in Timisoara, Romania
  • March 1: I have moved to the University of Luxembourg
  • February 7: Our paper "Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks" has been accepted in Transactions on Machine Learning Research (TMLR) (link)
  • January 23: Zahra Atashgahi is doing a three-month research visit at the University of Cambridge in the group of Prof. Mihaela van der Schaar
  • January 21: Our sparse training paper "More convnets in the 2020s: Scaling up kernels beyond 51x51 using sparsity" has been accepted at ICLR 2023 (link)
  • January 4: Our paper "Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning" has been accepted at AAMAS 2023 (link)
  • January 4: The third edition of the workshop "Sparsity in Neural Networks: On practical limitations and tradeoffs between sustainability and efficiency" - SNN Workshop 2023 will be co-located with ICLR 2023 (https://www.sparseneural.net/)


  • December 12: Our paper "You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets" received the best paper award at the Learning on Graphs (LoG 2022) conference (link)
  • November 24: One paper on sparsity and graph neural networks accepted at the LoG 2022 conference as a spotlight presentation (link)
  • November 16: One paper on human-robot cooperation and reinforcement learning accepted in NCAA journal (link)
  • September 14: Two sparse training papers (on feature selection and time series classification) accepted at NeurIPS 2022 (links: 1, 2)
  • September 1: Ghada Sokar is doing a five-month internship at Google Brain, Montreal
  • September 1: Aleksandra Nowak from the GMUM group, Jagiellonian University, is visiting us for three months
  • September 1: Shiwei Liu is moving to the University of Texas at Austin as a postdoctoral fellow to continue his research in sparse neural networks
  • July 15: Invited talk during the AI Seminar at the University of Alberta/Alberta Machine Intelligence Institute titled "Sparse training in supervised, unsupervised, and deep reinforcement learning" (link)
  • July 13: We are organising the second edition of the "Sparsity in Neural Networks: Advancing Understanding and Practice" - SNN Workshop 2022 (https://www.sparseneural.net/)
  • July 5: One paper on sparse training for high sparsity regimes accepted in Machine Learning (ECMLPKDD 2022 journal track) (link)
  • June 14: One paper on sparse training and continual learning accepted at ECMLPKDD 2022 (link)
  • June 10: Invited talk at Calgary AI, University of Calgary titled "Sparse training in supervised, unsupervised, and deep reinforcement learning"
  • May 21: I am doing a research visit to the group of Dr. Matthew Taylor at the University of Alberta
  • May 16: One sparse training paper accepted at UAI 2022 (link)
  • May 10: Our paper "Dynamic Sparse Training for Deep Reinforcement Learning" received the best paper award at ALA 2022, co-located with AAMAS 2022 (link)
  • April 25: We had the pleasure of hosting Utku Evci, Research Engineer at Google Brain Montreal, to give a very engaging in-person talk (link)
  • April 20: One paper on sparse training and deep reinforcement learning accepted at IJCAI-ECAI 2022 (link)
  • April 15: Our tutorial "Sparse Neural Networks Training" has been accepted at ECMLPKDD 2022 (link)
  • April 6: Shiwei Liu defended his outstanding PhD thesis cum laude (link)
  • Jan 28: Two sparse training papers accepted at ICLR 2022 (links: 1, 2)

Last updated on: 22 May 2023

Research interests

Artificial intelligence, machine learning, scalable and efficient deep learning, reinforcement learning, continual learning, sparse training, sparsity in artificial neural networks, evolutionary computing, network science.

Research highlights

Decebal and his co-authors have laid the groundwork (connected papers) for sparse training in deep learning (training sparse artificial neural networks from scratch), introducing both static sparsity (static sparse training) and dynamic sparsity (dynamic sparse training). Besides the expected computational benefits, sparse training in many cases achieves better generalisation than dense training. For more details, please see:

  • Static sparsity in A topological insight into restricted Boltzmann machines, Machine Learning (2016), preprint https://arxiv.org/abs/1604.05978 (2016);
  • Dynamic sparsity in Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science, Nature Communications (2018), preprint https://arxiv.org/abs/1707.04780 (2017);
  • Short survey/position paper: Sparse Training Theory for Scalable and Efficient Agents, AAMAS (2021), preprint https://arxiv.org/abs/2103.01636 (2021).
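To give a flavour of the dynamic sparse training idea behind the Nature Communications paper above, here is a minimal NumPy sketch of a prune-and-regrow topology update between training epochs: drop a fraction of the weakest active connections, then regrow the same number at random inactive positions. This is an illustrative simplification, not the authors' implementation; the function name, the `zeta` parameter, and the re-initialisation scale are assumptions made for the example.

```python
import numpy as np

def prune_and_regrow(weights, mask, zeta=0.3, rng=None):
    """One dynamic-sparse-training topology update (SET-style sketch):
    remove the zeta fraction of active connections with the smallest
    magnitude, then regrow the same number of connections at random
    currently-inactive positions with small random initial weights."""
    if rng is None:
        rng = np.random.default_rng()
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    # indices of the smallest-magnitude active weights
    drop = active[np.argsort(np.abs(weights.flat[active]))[:n_drop]]
    mask.flat[drop] = 0
    weights.flat[drop] = 0.0
    # regrow at random positions that are currently inactive
    inactive = np.flatnonzero(mask == 0)
    grow = rng.choice(inactive, size=n_drop, replace=False)
    mask.flat[grow] = 1
    weights.flat[grow] = rng.normal(0.0, 0.01, size=n_drop)
    return weights, mask

# usage: a layer at roughly 10% density whose topology evolves over time
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32))
m = (rng.random((64, 32)) < 0.1).astype(int)
w *= m  # zero out inactive connections
w, m = prune_and_regrow(w, m, zeta=0.3, rng=rng)
# the number of active connections (the sparsity level) is unchanged
```

Note that the update keeps the total number of active connections constant, so the memory and compute budget stays fixed while the connectivity pattern adapts during training.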

Decebal's short-term research interest is to conceive scalable deep artificial neural network models and their corresponding learning algorithms using principles from network science, evolutionary computing, optimisation, and neuroscience. Such models shall have sparse, evolving connectivity, make use of previous knowledge, and have strong generalisation capabilities, so that they can learn and reason from few examples in a continuous and adaptive manner.

Most science carried out throughout human history follows the traditional reductionist paradigm, which, although very successful, still has some limitations. Aristotle wrote in Metaphysics, "the totality is not, as it were, a mere heap, but the whole is something beside the parts". Inspired by this quote, in the long term Decebal would like to follow the alternative complex-systems paradigm and study the synergy between artificial intelligence, neuroscience, and network science for the benefit of science and society.


Google Scholar: https://scholar.google.com/citations?user=RlQgUwEAAAAJ

Last updated on: 12 May 2023

PhD students (ongoing)

  • Boqian Wu (2022 - present, University of Twente)
  • Bram Grooten (2021 - present, Eindhoven University of Technology)
  • Qiao Xiao (2021 - present, Eindhoven University of Technology)
  • Zahra Atashgahi (2019 - present, University of Twente)
  • Ghada Sokar (2019 - present, Eindhoven University of Technology)

PhD students (graduated)

  • Shiwei Liu (cum laude), Sparse Neural Network Training with In-Time Over-Parameterization (graduated 2022, Postdoctoral Researcher - UT Austin)
  • Anil Yaman, Evolution of biologically inspired learning in artificial neural networks (graduated 2019, Assistant Professor - VU Amsterdam)

PDEng student supervision (graduated)

  • Pranav Bhatnagar, Automatic Microscope Alignment via Machine Learning (at Thermo Fisher Scientific, Eindhoven), September 2019
  • Eleftherios Koulierakis, Detection of outbreak of infectious diseases: a data science perspective (at GDD, Eindhoven), July 2018

MSc student supervision (graduated)

  • Peter van der Wal (cum laude), Diversifying Multilayer Perceptron Ensembles in a Truly Sparse Context, March 2023
  • Xuhao Zhang, Design and Impact of Activation Functions for Sparse Neural Networks, January 2023
  • Emiel Steerneman, Exploring the effect of merging techniques on the performance of merged sparse neural networks in a highly distributed setting, July 2022
  • Anubrata Bhowmick, Markers of Brain Resilience (at Philips Research), July 2021
  • Samarjeet Singh Patil (3rd supervisor), Automated Vulnerability Detection in Java Source Code using J-CPG and Graph Neural Network, February 2021
  • Mickey Beurskens, Pass the Ball! - Learning Strategic Behavioural Patterns for Distributed Multi Agent Robotic Systems, November 2020
  • Sonali Fotedar (cum laude), Information Extraction on Free-Text Sleep Narratives using Natural Language Processing (at Philips Research, Eindhoven), November 2020
  • Selima Curci (cum laude), Large Scale Sparse Neural Networks, October 2020
  • Manuel Muñoz Sánchez (cum laude), Domain Knowledge-based Drivable Space Estimation (at TNO Helmond), September 2020
  • Chuan-Bin Huang, Novel Evolutionary Algorithms for Robust Training of Very Small Multilayer Perceptron Models, August 2020
  • Jeroen Brouns, Bridging the Domain-Gap in Computer Vision Tasks (at Philips Research, Eindhoven), December 2019
  • Daniel Ballesteros Castilla, Deep Reinforcement Learning for Intraday Power Trading (at ENGIE Global Markets, Brussels), December 2019
  • Mauro Comi (cum laude), Deep Reinforcement Learning for Light Transport Path Guiding, November 2019
  • Saurabh Bahulikar, Unsupervised Learning for Early Detection of Merchant Risk in Payments (at Payvision, Amsterdam), November 2019
  • Thomas Hagebols, Block-sparse evolutionary training using weight momentum evolution: training methods for hardware efficient sparse neural networks (at Philips Research, Eindhoven), March 2019
  • Bram Linders, Prediction and reduction of MRP nervousness by parameterization from a cost perspective (2nd supervisor, at Prodrive Technologies), February 2019
  • Joost Pieterse (cum laude), Evolving sparse neural networks using cosine similarity, July 2018

Last updated on: 22 Mar 2023

Decebal Mocanu is Associate Professor in Machine Learning within the Department of Computer Science, Faculty of Science, Technology and Medicine at the University of Luxembourg (UL); and Guest Faculty Member within the Data Mining group, Department of Mathematics and Computer Science at the Eindhoven University of Technology (TU/e). 

From 2020 until 2023, Decebal was Assistant Professor in Artificial Intelligence and Machine Learning within the DMB group, EEMCS faculty at the University of Twente. In the period 2017 - 2020, Decebal was Assistant Professor in Machine Learning within the Data Mining group, Department of Mathematics and Computer Science at TU/e and a member of TU/e Young Academy of Engineering. Previously, he worked as a PhD candidate at TU/e and as a software developer in industry. 

In 2017, Decebal received his PhD in Artificial Intelligence and Network Science from TU/e. During and after his doctoral studies, Decebal undertook four research visits: the University of Pennsylvania (2014), the Julius Maximilian University of Würzburg (2015), the University of Texas at Austin (2016), and the University of Alberta (2022). Decebal holds an MSc in Artificial Intelligence from Maastricht University, for which he received the "Best Master AI Thesis Award", and a BEng in Computer Science from the University Politehnica of Bucharest.


Last updated on: 27 Mar 2023