OpenAI bans accounts linked to a suspected Chinese surveillance tool – but how safe is AI from political misuse?
OpenAI bans accounts tied to a Chinese surveillance tool, sparking debate on the misuse of AI by authoritarian regimes for political monitoring. Image/Illustration: ChicHue
San Francisco, USA — February 24, 2025:
OpenAI has banned several accounts allegedly involved in developing a social media surveillance tool designed to monitor anti-China protests in Western countries. The tool, which aimed to gather real-time data and deliver reports to Chinese authorities, raises significant questions about the ethical use of artificial intelligence for political purposes, reports the Star.
The banned accounts had reportedly used OpenAI’s ChatGPT to draft sales materials and debug code for the so-called "Qianyue Overseas Public Opinion AI Assistant." This software was designed to track online conversations about human rights demonstrations, particularly those related to China, and feed that data to Chinese intelligence agencies. The news comes amid growing global concern about how AI technologies can be exploited by authoritarian regimes for surveillance and control.
What prompted OpenAI’s decision to ban these accounts?
According to Ben Nimmo, a principal investigator at OpenAI, the company chose to address the situation publicly in order to shed light on a troubling trend: non-democratic actors attempting to turn technologies built in democracies to their own ends. Nimmo emphasized that such practices could have serious implications for privacy, security, and civil freedoms in democratic nations.
While OpenAI was unable to independently verify the claims about the tool, marketing materials obtained by the company outlined how the software would monitor social media platforms such as X (formerly Twitter), Facebook, and Instagram for specific content related to protests. The goal was to track public sentiment on Chinese human rights issues, data that Chinese authorities could potentially use for political purposes.
How can AI be protected from such misuse?
OpenAI’s report highlights the growing concern about US-developed AI tools being turned to non-democratic ends. As AI technology becomes more globally accessible, the risk of authoritarian regimes misusing it for political gain increases. Meta Platforms, developer of the open-source Llama model that the banned accounts claimed to have used, acknowledged that openly available AI tools can be put to many uses, but reiterated that it does not condone their use for surveillance or authoritarian purposes.
In response, OpenAI has stressed that using its AI for unauthorized monitoring of individuals or surveillance of communications is strictly against its policies. The company’s public disclosure of these bans serves as a warning about the need for more robust safeguards and regulations around the global use of AI, especially in politically sensitive contexts.
The revelation also points to broader concerns about AI’s role in geopolitics. As AI models continue to evolve and spread worldwide, how can global society ensure they are used ethically, without being exploited for state-sponsored surveillance or political manipulation? OpenAI’s action against these accounts is just one example of how the tech industry is grappling with the complex challenges at the intersection of AI and global politics.