
SnT’s new Algorithm can Make you Move like a Pro

Published on Thursday, 19 October 2017

Filmgoers are already well accustomed to highly convincing animated figures, whether it be Andy Serkis as a chimpanzee in Planet of the Apes or Peter Cushing seemingly brought back to life for Star Wars: Rogue One. Digitally transferring the movements of one person to another, however, remains an expensive and labour-intensive process. Thanks to the work of a team of researchers at the University of Luxembourg, we are now one step closer to making this an everyday technology. Their algorithm takes a 3D mesh (a kind of digital figurine) of one person and animates it with the precise movements and subtle deformations of another.

The team presented the research in a paper at the IEEE International Conference on Image Processing (ICIP), held in September in Beijing, China, where it took second place in the conference's prestigious Best Paper Award, selected from some 2,400 submissions. The conference is the world's leading forum for the presentation of technological advances and research results in theoretical, experimental, and applied image and video processing.

The research has numerous potential applications in computer modelling and the animation industry. Dr. Abdelrahman Shabayek, Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), where the work was carried out, emphasises the potential emotive impact of their work. “This software can produce 3D animations that really make people happy and inspire them. Children or people with limited mobility can now see themselves moving with all the same mannerisms as their favourite sportsperson or popstar. Equally, it could be used in sports science to compare aspiring athletes to world class performers, zeroing in on where they can improve.”

The algorithm relies on three main assumptions, says Shabayek. “Firstly, 3D meshes must be available for the professional sportsperson’s movements, or at least for the key moves. Secondly, the first 3D mesh, representing the first key movement, must be easily reproducible for the target 3D mesh. And finally, all meshes from steps one and two must be registered (captured using a 3D scanner without obstruction by loose clothing).”
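For readers curious what those assumptions buy, the following is a minimal sketch, in Python with NumPy, of the simplest form of motion transfer between registered meshes: once every frame of the source motion shares point-to-point vertex correspondence with a target mesh reproduced in the same first key pose, per-vertex displacements can be carried across directly. This is an illustration of the setting only, not the team's published algorithm, and the function and variable names are invented for the example.

import numpy as np

def transfer_motion(source_rest, source_frames, target_rest):
    """Naive per-vertex motion transfer between registered meshes.

    All inputs are (N, 3) vertex arrays sharing the same vertex ordering
    (i.e. the meshes are registered, as the assumptions above require).
    source_rest   -- the source subject's first key pose
    source_frames -- list of (N, 3) arrays, the source subject's motion
    target_rest   -- the target subject reproduced in that first key pose
    Returns a list of (N, 3) arrays animating the target mesh.
    """
    animated = []
    for frame in source_frames:
        displacement = frame - source_rest           # how each vertex moved
        animated.append(target_rest + displacement)  # apply that motion to the target
    return animated

if __name__ == "__main__":
    # Toy example: a 4-vertex "mesh" and two frames of upward motion.
    rest = np.zeros((4, 3))
    frames = [rest + [0.0, 0.1 * t, 0.0] for t in range(1, 3)]
    target = np.random.rand(4, 3)   # a differently shaped target subject
    result = transfer_motion(rest, frames, target)
    print(result[0].shape)          # (4, 3)

In practice, carrying over raw displacements like this ignores differences in body proportions, which is where the subtler deformation modelling described by the researchers comes in; the sketch only shows why registration and a shared first key pose make the transfer well defined.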

Dr. Djamila Aouada and Dr. Abdelrahman Shabayek at the IEEE International Conference on Image Processing

Dr. Djamila Aouada, Research Scientist at SnT’s SIGCOM Research Group and head of its Computer Vision Lab, adds: “This work is part of our project on human body modelling and is directly linked to our Shapify campaign for 3D body data collection. We’ve been scanning people, gathering data to develop new mathematical methods for modelling the human body shape in 3D. This result, besides its strong theoretical basis, is a fantastic showcase for our research. It makes our work tangible for the general public and participants in the Shapify project.”

For the Shapify project earlier this year, the researchers acquired a 3D full-body scanner made by Artec 3D – the Artec Shapify Booth – and invited members of the public to be scanned in it. This scanning work formed the basis for the award-winning animation algorithm.

“This award is the reward for the hard work we all put in together in our team,” says Shabayek. “We will now continue to work with the results to ensure our algorithm culminates in a concrete and economically viable application.”

Contribute to Our Research
Our scientists are still busy collecting data for this research, and if you would like to help them take the next step in 3D body modelling, you can volunteer to be scanned in 3D. The process takes 20 minutes, and you will go home with a 3D digital selfie, electronic access to your 3D data, and a set of free sports clothing (worn during the scanning process). Scanning takes place on Tuesdays and Thursdays at the Maison du Nombre on the University’s Belval Campus. For further information, please email Shapify3D@uni.lu.

Notes:
ICIP is a prestigious conference sponsored by the IEEE Signal Processing Society. It is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2017 is the 24th in the series that has been held annually since 1994. It brings together leading engineers and scientists in image and video processing from around the world. Research frontiers in fields ranging from traditional image processing applications to evolving multimedia and video technologies are regularly advanced by results first reported in ICIP technical sessions.