
AI at the Forefront of Beijing’s Campaign Against the US

The TDR Three Takeaways on China:

  1. China leverages AI to produce propaganda highlighting societal issues in the US, questioning the reality of the ‘American Dream.’
  2. Through AI-generated videos, China’s state media CGTN disseminates anti-American messages globally, showcasing deep societal fractures.
  3. AI’s role in China’s influence campaigns signals a new era in digital propaganda, potentially reshaping perceptions of global power dynamics.

China is increasingly embracing artificial intelligence (AI) to bolster its propaganda efforts, particularly targeting the United States and its iconic ‘American Dream.’ This strategic move is evident in the “A Fractured America” animated series, broadcast by the Chinese state-run CGTN. The series uses AI to generate animated videos critiquing societal issues in the US, including drug addiction, wealth inequality, and the military-industrial complex. With its hyper-stylized aesthetic and synthetic audio, the series aims to present the US as a declining power failing to live up to its promises of equality and opportunity.

The use of AI in propaganda marks a significant shift in Beijing’s approach to influence campaigns. The technology enables the rapid creation of high-quality multimedia content, cutting the cost and time of traditional production. This automation not only allows broader dissemination of China’s narratives but also makes state-sponsored material harder to identify.

China’s internet propaganda, once dominated by the “wumao” troll army, has evolved to incorporate social media platforms and online influencers, further expanding its global reach. The state’s backing of movements like Black Lives Matter in the US, while suppressing criticism of its treatment of ethnic minorities, highlights the strategic use of social issues to project China as a rising leader and the US in decline.

Microsoft’s Threat Analysis Center warns that AI facilitates the generation of viral content, complicating the detection of state actors behind such materials. AI-generated content, especially as it becomes more sophisticated, poses risks of deepfakes and astroturfing, potentially enhancing the effectiveness of influence operations.

The RAND report emphasizes that AI could significantly improve astroturfing, creating the illusion of widespread consensus on specific issues. This is particularly concerning as more than 60 countries, including the US and Taiwan, approach critical elections. In Taiwan, deepfake videos targeting political figures have already surfaced, attributed to China’s Ministry of State Security, showcasing AI’s potential in spreading misinformation at scale.

As AI continues to advance, its implications for political discourse and democracy are profound. AI-generated content, difficult to distinguish from genuine material, opens new avenues for state actors to manipulate public opinion and interfere in elections. The CGTN video series, despite its occasionally awkward grammar, taps into real grievances voiced by US citizens, illustrating the complex landscape of AI in propaganda. Want to keep up to date with all of TDR’s research and news? Subscribe to our daily Baked In newsletter.
