
Asst. Prof. Dr. Qinglan Wei | Artificial Intelligence | Best Researcher Award

Department Chair, Communication University of China

Dr. Qinglan Wei is a distinguished researcher and Department Chair at the Communication University of China, specializing in artificial intelligence and multimodal affective computing. He holds a PhD from the School of Artificial Intelligence at Beijing Normal University, with joint training at Carnegie Mellon University, and has contributed significantly to AI advances in group emotion analysis, video generation, and online public opinion analysis. His work has earned numerous accolades, including paper awards at ChinaMM 2024 and BDSC 2023, and third place in the EmotiW (Emotion Recognition in the Wild) challenge.

🔹Professional Profile:

Scopus Profile

Orcid Profile

🎓Education Background

  • PhD in Artificial Intelligence, School of Artificial Intelligence, Beijing Normal University, with joint training at Carnegie Mellon University.

💼 Professional Development

Dr. Wei has led and participated in multiple national and provincial-level scientific research projects, including those funded by the National Natural Science Foundation and various ministries. As a professor and department chair, he oversees cutting-edge research in AI and multimodal affective computing, advancing AI applications in media, national defense, and education.

🔬Research Focus

Dr. Wei’s research focuses on multimodal affective computing, large model intelligent agents, group emotion communication, video generation, communication simulation, and the intelligent analysis of online public opinion. His work is at the intersection of AI, human-computer interaction, and social sciences, with applications spanning from media to public sentiment governance.

📈Author Metrics:

  • Published 14 papers in SCI/CCF Class A/SSCI journals, including 6 in the IEEE Transactions series.

  • Three national invention patents authorized, focusing on video emotion recognition, group emotion analysis, and intelligent shot generation.

  • Led 8 national and provincial research projects, with individual articles receiving up to 75 citations in prominent journals.

🏆Awards and Honors:

  • ChinaMM 2024 and BDSC 2023 paper awards for contributions to AI in multimedia.

  • Achieved third place in the EmotiW competition.

  • Led the development of the first group emotion analysis system for audio-visual programs.

  • Numerous patents in the field of video emotion recognition and multimodal computing.

📝Publication Top Notes

1. MEAS: Multimodal Emotion Analysis System for Short Videos on Social Media Platforms

  • Published: January 2024
  • Journal: IEEE Transactions on Computational Social Systems
  • Co-authors: Yaqi Zhou, Shenlian Xiang, Yuan Zhang
  • Summary: This article proposes a novel affective computing system (MEAS) designed for short videos on social platforms. It addresses challenges such as inconsistent video resolutions and large-scale data collection. The system combines multiscale resolution adaptability with RoBERTa for efficient emotion analysis, strengthening the contribution of the text modality, and integrates automatic audio segmentation and transcription. MEAS outperforms the leading algorithm V2EM, achieving significant improvements in weighted accuracy and F1 scores. A new dataset, “Bili-news,” was also introduced to validate the system.

2. Public Emotional Atmosphere During Disasters: Understanding Emotions in Short Video Comments on the Zhengzhou Flood

  • Published: August 2023
  • Journal: Chinese Journal of Communication
  • Co-authors: Xiaohong Wang, Chen Zhang, Yichun Zhao
  • Summary: This paper explores how emotions in short video comments during disasters, specifically the Zhengzhou flood, can reflect and influence public sentiment. It offers insights into the emotional dynamics in online video commentaries during crises.

3. Influence and Philosophical Reflection on ChatGPT in the Media Industry

  • Published: August 2023
  • Type: Conference Paper
  • Co-authors: Yufan Xia, Beibei Wang
  • Summary: This paper analyzes the impact of ChatGPT on the media industry, considering its philosophical implications and the way AI influences communication and content creation.

4. FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference

  • Published: September 2022 (Preprint)
  • Co-authors: Xuling Huang, Yuan Zhang
  • Summary: The paper presents a fully multimodal system for fast yet effective emotion recognition from videos. It introduces a hierarchical attention method for sound spectra, a multi-scale approach for visual extraction, and a single-branch system for inference. The system significantly improves inference efficiency without compromising accuracy and reduces computational costs.

5. FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference

  • Published: January 2022
  • Journal: IEEE Transactions on Broadcasting
  • Co-authors: Xuling Huang, Yuan Zhang
  • Summary: This article addresses the challenges in video emotion analysis by proposing FV2ES, a system designed to efficiently process multimodal data from videos. It focuses on improving the contribution of acoustic modalities, optimizing visual extraction, and reducing computational costs for large-scale applications in social networks.

Conclusion:

Dr. Qinglan Wei stands out as a front-runner for the Best Researcher Award due to his:

  • Strong academic output,

  • Innovative research systems (MEAS, FV2ES),

  • Real-world societal relevance,

  • Recognition in high-profile venues,

  • Leadership in multimodal AI.

His work exemplifies a rare blend of technical excellence, academic leadership, and social impact. With continued efforts toward open science and international outreach, he is poised to become a global thought leader in affective computing and AI-driven media analytics.

Highly recommended for the Best Researcher Award.