Track Co-chairs

Renee Rui Chen
Assistant Professor
rchen@szu.edu.cn
Shenzhen University

Yangjun Li
Assistant Professor
liyangjun@bit.edu.cn
Beijing Institute of Technology
Brief Introduction
Cyberspace has evolved into a critical global information commons, yet its health is increasingly threatened by clickbait, disinformation, and rumors that undermine social stability and economic development. As the axiom "he who controls the network controls the world" suggests, governing this digital ecosystem has become a critical imperative. The rapid advancement of AI has introduced profound new challenges: deepfakes generate fabricated content, personalized recommendation algorithms trap users in "information cocoons," and malicious actors weaponize digital tools for cyberbullying, sophisticated fraud, and doxxing. In commercial contexts such as social commerce, these threats manifest as deepfake influencers, synthetic reviews, and AI-manipulated product demonstrations that deceive millions of consumers in real time. Such negative applications directly threaten individual safety, consumer rights, market fairness, and the integrity of digital platforms. This track aims to explore cyberspace content governance issues, uncover the mechanisms of content creation, propagation, and evolution, and seek innovative strategies to safeguard cyberspace, especially in the AI era.
Topics
- Information Cocoons and Echo Chambers: Investigating how personalized algorithms limit exposure to diverse products and reinforce consumer preferences.
- Misinformation in Online Marketplaces: Examining how fake reviews and AI-generated testimonials influence consumer trust and purchase decisions.
- Deviant Online Behaviors of Users: Understanding and combating online abuse, trolling campaigns, and the weaponization of personal data, such as privacy violations and doxxing.
- Human-AI Collaboration in Content Moderation: Exploring human-AI collaboration models for effective and ethical content review.
- Platform Accountability and Governance Frameworks: Developing regulatory models and industry standards for a safer digital future.
- Ethical Design of AI Systems: Embedding ethics and transparency into AI-powered content systems to prevent unintended harm to consumers and sellers.
- User Resilience and Digital Literacy: Designing interventions to help consumers identify deceptive content and make informed decisions in digital marketplaces.