From low-code geometric algebra to no-code geometric deep learning: computational models, simulation algorithms and authoring platforms for immersive scientific visualization, experiential visual analytics and the upcoming educational metaverse
More than 1 billion jobs, almost one-third of all jobs worldwide, are likely to be transformed by technology in the next decade, according to OECD and World Economic Forum estimates. In addition, 5 billion people today lack access to proper surgical and anesthesia care, due to the limited number of health professionals entering the workforce, itself a direct result of the lack of innovation in medical training over the last 150 years. This growing need for continuous upskilling and reskilling becomes even more critical in the post-COVID-19 era. Extended Reality (XR), together with 5G and spatial-computing enabling technologies, can serve as the next frontier in psychomotor and cognitive training and in education content creation. XR can provide the means for high-quality hands-on education (knowledge) and training (skills), using affordable technology with on-demand immersive scientific visualization techniques coupled with personalized, experiential visual analytics. As expectations for the upcoming educational metaverse rise, this talk reviews the fundamental analytic and neural geometric computational models that power the latest low-code and no-code content creation tools. Geometric algebra-based character animation and rendering algorithms, Entity-Component-System scenegraphs and graph neural networks are poised to make the difference in the evolution of experiential educational metaverse applications.
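To give a flavor of the geometric algebra machinery behind such character animation and rendering algorithms, the sketch below rotates a 3D vector with a rotor, i.e. via the sandwich product v' = R v R~. It is a minimal illustrative example, not code from the talk: the rotor is represented through the even subalgebra of G(3), which is isomorphic to the quaternions, and the function names (`rotor`, `rotate`) are hypothetical.

```python
import math

def rotor(axis, angle):
    # Unit rotor R = cos(angle/2) - sin(angle/2) * B, where B is the
    # unit bivector dual to the rotation axis. Stored as the four
    # coefficients of the even subalgebra of G(3) (quaternion-like).
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2) / n
    return (math.cos(angle / 2), ax * s, ay * s, az * s)

def rotate(R, v):
    # Sandwich product v' = R v R~, evaluated with the equivalent
    # expanded quaternion rotation formula.
    w, x, y, z = R
    vx, vy, vz = v
    # t = 2 * (bivector-dual axis part) x v
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

# Rotating e1 by 90 degrees about e3 yields e2 (up to rounding):
R = rotor((0.0, 0.0, 1.0), math.pi / 2)
v = rotate(R, (1.0, 0.0, 0.0))
```

Rotors compose by multiplication and interpolate smoothly, which is why GA-based formulations are attractive for skeletal animation blending in low-code authoring tools.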