Real-Time AI-Based Video Analytics: Theory and Applications

Professor Dr. Peerapon Vateekul

Nowadays, AI techniques have been advancing rapidly and are being applied to many kinds of data, especially video analytics. In this session, we aim to present several of our research works in real-time, deep learning-based video analytics. First, DeepGI is our innovation to assist endoscopists in detecting anomalies in the gastrointestinal (GI) tract in real time from various types of endoscopy videos. Our models can (i) detect polyps in colonoscopy videos to prevent colon cancer, (ii) segment gastric intestinal metaplasia (GIM) lesions in gastroscopy videos, and (iii) classify malignant scenes of bile duct strictures in cholangioscopy videos. Second, D-mind is an innovation from AI for Mental Health (AIMET) that helps detect depression from interview videos in real time through a mobile application. At the moment, more than 200,000 users are using the D-mind application. Third, we can identify the severity level of Parkinson's disease (PD) in patients using facial expression and gait videos. All of these works are good examples of how theory in the AI domain can be applied in real-world applications.


Peerapon Vateekul received a Ph.D. degree from the Department of Electrical and Computer Engineering, University of Miami (UM), FL, USA, in 2012. Currently, he is a professor at the Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Thailand. His research interests include machine learning, data mining, deep learning, text mining, and big data analytics. More specifically, his works include variants of classification, natural language processing, data quality management, video analytics, and applied deep learning and reinforcement learning techniques in various domains, namely healthcare, geoinformatics, hydrometeorology, transportation, and energy. Some examples of AI-assisted medical diagnoses are real-time polyp detection from colonoscopy videos, gastric intestinal metaplasia segmentation from gastroscopy videos, depression scoring from interview videos, Parkinson's face classification, and movement disorder diagnosis. He has been a certified SAS instructor for more than 10 years. Moreover, he is also a certified instructor for the NVIDIA Deep Learning Institute and has been affiliated with the NVIDIA AI Technology Center (NVAITC) since 2018.

Nature-inspired Robot Intelligence: From Nature to Advanced Robotics Technology

Professor Dr. Poramate Manoonpong

Living creatures can form their gaits within minutes of being born, because their neural locomotion control circuits are genetically encoded. They can quickly adapt their movement to traverse a variety of substrates and even take proactive steps to avoid colliding with an obstacle. Furthermore, beyond locomotion, they can also perform diverse complex autonomous behaviors, such as object transportation and navigation, with a high degree of energy efficiency. Biological studies reveal that these capabilities result from the coupling of their biomechanics (e.g., structures, muscles, and materials) and neural mechanisms with plasticity and memory (the brain).

In this talk, I will present "how we can realize biomechanics and neural mechanisms inspired by nature in robots so they can become more intelligent, like living creatures". I will also demonstrate that nature-inspired robotics can help us not only address scientific questions but also advance robotics technology for real-world (industrial) applications. It may even bring the goal of creating "true robot intelligence" a little bit closer.


He is a Professor at the School of Information Science & Technology, Vidyasirimedhi Institute of Science & Technology (VISTEC), located in Rayong, Thailand. In addition to his primary role, he holds an ancillary academic position as a Professor of Biorobotics at the University of Southern Denmark (SDU), Denmark. Furthermore, he serves as the head of the Research Center for Advanced Robotics and Intelligent Automation at VISTEC.

As author or co-author, he has published over 120 papers in journals (e.g., Nature Physics, Nature Machine Intelligence, IEEE Transactions on Cybernetics, IEEE Transactions on Neural Networks and Learning Systems, IEEE Robotics and Automation Letters) and conferences (e.g., IROS, ICRA), and his articles have received more than 3,100 citations in total. His h-index is 29 (Google Scholar).

He has been the Principal Investigator (PI) or co-Principal Investigator (co-PI) of more than 10 funded projects, including those funded by EU Horizon 2020, the Human Frontier Science Program (HFSP), and the Marie Skłodowska-Curie Actions Doctoral Networks. Currently, he serves as an associate editor of IEEE Robotics and Automation Letters, Frontiers in Neuroscience (Neurorobotics), and Adaptive Behavior (SAGE), a guest associate editor of Frontiers in Robotics and AI, and an associate editor for the IEEE International Conference on Soft Robotics (2020, 2021, 2022). He also serves on the editorial boards of Scientific Reports and the Journal of the Royal Society Interface.

The central goal of his research is to understand "how brain-like mechanisms and biomechanics can be realized in robots so they can become more intelligent, like living creatures". To this end, his team has developed bio-inspired behaving robots using general bio-inspired machine learning methods and has shown that these robots can acquire complex behaviors through learning and adaptation. In addition, his team also focuses on transferring the biomechanical and neural developments of robots to other real-world applications, such as inspection, healthcare, industry, and service.