Dr. Shijie Li | Embodied AI | Best Researcher Award

Scientist | A*STAR Institute for Infocomm Research | Singapore

Dr. Shijie Li is a computer vision researcher specializing in 3D perception, embodied AI, and vision-language models, with a focus on building intelligent systems for real-world applications. He earned his Ph.D. in Computer Science from the University of Bonn under the supervision of Prof. Juergen Gall, following a master’s degree from Nankai University and a bachelor’s degree in Automation Engineering from the University of Electronic Science and Technology of China. His professional experience includes research positions and internships at A*STAR Singapore, Qualcomm AI Research in Amsterdam, Intel Labs in Munich, Alibaba DAMO Academy in China, and Technische Universität München in Germany, reflecting strong international collaboration and applied research expertise.

His research interests span 3D scene understanding, motion forecasting, vision-language integration, semantic segmentation, and novel view synthesis. He has published in leading journals and conferences such as ICCV, CVPR, IEEE TPAMI, IEEE TNNLS, WACV, BMVC, ICRA, and IROS. His academic excellence has been recognized through scholarships and awards including the Fortis Enterprise Scholarship, National Inspirational Scholarship, First Class Scholarship, and Outstanding Graduate Award. He also serves as a reviewer for top journals and conferences such as IEEE TPAMI, IJCV, CVPR, ICCV, ECCV, NeurIPS, and AAAI, underscoring his active role in the research community. His skills include deep learning, diffusion models, semantic segmentation and motion forecasting, vision-language modeling, and embodied AI, with an emphasis on interdisciplinary innovation. His research impact is reflected in 183 citations across 10 documents and an h-index of 7.

Profiles: Google Scholar | Scopus | ORCID | LinkedIn

Featured Publications

Li, S., Abu Farha, Y., Liu, Y., Cheng, M., & Gall, J. (2023). MS-TCN++: Multi-stage temporal convolutional network for action segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(6), 6647–6658.

Chen, X., Li, S., Mersch, B., Wiesmann, L., Gall, J., Behley, J., & Stachniss, C. (2021). Moving object segmentation in 3D LiDAR data: A learning-based approach exploiting sequential data. IEEE Robotics and Automation Letters, 6(4), 6529–6536.

Qiu, Y., Liu, Y., Li, S., & Xu, J. (2020). MiniSeg: An extremely minimum network for efficient COVID-19 segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(11), 13180–13187.

Li, S., Chen, X., Liu, Y., Dai, D., Stachniss, C., & Gall, J. (2022). Multi-scale interaction for real-time LiDAR data segmentation on an embedded platform. IEEE Robotics and Automation Letters, 7(2), 738–745.

Li, S., Zhou, Y., Yi, J., & Gall, J. (2021). Spatial-temporal consistency network for low-latency trajectory forecasting. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 10737–10746.