Trung Thanh Nguyen
🔬 PhD Candidate @ Nagoya University | Student Researcher @ RIKEN

I am a PhD candidate in the Department of Intelligent Systems at Nagoya University. My research focuses on vision-language models, multimodal recognition, and video captioning, with applications to real-world problems.
Currently, I am a student researcher at RIKEN, working on the Guardian Robot Project, where I study open-world action detection and multi-view, multi-modal action recognition from multimodal sensory data.
Additionally, I work with the Center for Artificial Intelligence, Mathematical and Data Science, collaborating with Japanese corporations to develop practical AI solutions.
📩 Contact: nguyent (at) cs.is.i.nagoya-u.ac.jp
news
| Date | News |
| --- | --- |
| Jul 17, 2025 | I received a Letter of Appreciation from RIKEN in recognition of excellent research achievements. |
| Jul 16, 2025 | I received a Certificate of Achievement from Academia Sinica, Taiwan. |
| Jun 24, 2025 | I am on a research visit to Academia Sinica, Taiwan, through Jul 16. |
| Jun 05, 2025 | Our MMASL paper was accepted by the ACM TOMM journal (IF: 5.2). |
| May 29, 2025 | Our paper “MultiSensor-Home” won the Best Student Paper Award at IEEE FG2025, United States. |
| May 29, 2025 | I presented our MultiSensor-Home paper at IEEE FG2025, United States. |
| May 20, 2025 | I received a reviewer recognition certificate from the Elsevier journal Pattern Recognition. |
| May 07, 2025 | I was selected to participate in the Mediterranean Machine Learning program, Croatia. |
| Apr 28, 2025 | I received an invitation from Tsinghua University to the 2025 MAKE IT SHENZHEN program, China. |
| Apr 26, 2025 | I was selected for the Doctoral Consortium at the IEEE Biometrics Council’s conference, United States. |
selected publications
- [IEEE FG] MultiSensor-Home: A Wide-area Multi-modal Multi-view Dataset for Action Recognition and Transformer-based Sensor Fusion. In Proceedings of the 19th IEEE International Conference on Automatic Face and Gesture Recognition, 2025.