
OpenAI has reported disrupting more than 20 operations run by cybercriminals since January. The threat actors used its AI models for a range of malicious activities, including debugging malware and generating fake content, such as long-form articles and social media comments intended to influence elections around the world, the ChatGPT creator revealed in a report.

In the report released Wednesday, less than a month before the U.S. presidential election, the company claims to have neutralized "more than 20 operations and deceptive networks from around the world that attempted to use our models."

The report noted no significant breakthroughs in the threat actors' "ability to develop new malware or build large viral audiences." It highlighted that most of the social media content focused on the elections in the U.S. and Rwanda, with less attention on elections in India and the European Union.

However, none of the election-related operations managed to achieve "viral engagement" or build "sustained audiences" using ChatGPT and OpenAI's other tools, the report stated.

The U.S. Department of Homeland Security has also raised alarms about a growing threat from Russia, Iran, and China attempting to influence the Nov. 5 presidential election via AI tools that spread fake or divisive information.

The OpenAI report further stated that AI companies themselves can be "targets of hostile activity," citing its disruption of a suspected China-based threat actor called "SweetSpecter" that "was unsuccessfully spear phishing OpenAI employees' personal and corporate email addresses."

The increase in AI-generated content has raised significant concerns about election misinformation. According to data from Clarity, a machine learning firm, the number of deepfakes has skyrocketed by 900% year over year, CNBC reported.

The rising popularity of AI since the launch of ChatGPT in 2022 has sparked global concerns, particularly regarding its influence on elections. According to the report, the malicious uses of its models "ranged in complexity from simple requests for content generation, to complex, multi-stage efforts to analyze and reply to social media posts."

In May, an Israeli company used ChatGPT to generate social media comments about elections in India. In June, OpenAI uncovered a covert operation that used its products to create comments on European Parliament elections in France and on politics in the U.S., Germany, Italy, and Poland. In July, the company banned ChatGPT accounts in Rwanda for posting election-related comments on X. And in late August, an Iranian operation used AI tools to generate long-form articles and social media posts related to the U.S. election, though it failed to gain audience engagement.

The company stated that while the posts or articles received little to no attention, some real people did engage with the AI-generated content.