OpenAI sees continued attempts by threat actors to use its models for election influence

OpenAI has seen a number of attempts in which its artificial intelligence models were used to generate fake content, including long-form articles and social media comments, aimed at influencing elections, the ChatGPT-maker said in a report on Wednesday (Oct 9).

Cybercriminals are increasingly using AI tools, including ChatGPT, to aid in their malicious activities such as creating and debugging malware, and generating fake content for websites and social media platforms, the start-up said.

So far this year, it has neutralised more than 20 such attempts, including a set of ChatGPT accounts in August that were used to produce articles on topics that included the United States elections, the company said.

It also banned a number of accounts from Rwanda in July that were used to generate comments about the elections in that country for posting on social media site X.

None of the activities that attempted to influence global elections drew viral engagement or sustainable audiences, OpenAI added.

There is increasing worry about the use of AI tools and social media sites to generate and propagate fake content related to elections, especially as the US gears up for its presidential election.

According to the US Department of Homeland Security, the US faces a growing threat from Russia, Iran and China attempting to influence the Nov 5 elections, including by using AI to disseminate fake or divisive information.

OpenAI cemented its position as one of the world’s most valuable private companies last week after a US$6.6 billion funding round.

ChatGPT has 250 million weekly active users.

Read the rest of the article at Channel News Asia.