Sunday, December 8, 2024

OpenAI: ChatGPT rejected 250,000 requests to create deepfakes on US election day


Sam Altman’s OpenAI reported that it took heightened measures to prevent misinformation during the US elections, with its chatbot, ChatGPT, rejecting over 250,000 requests to create deepfakes.

Additionally, on election day and the day after, it directed over 2 million users to external news sources for election-related information, according to a company blog post.

Through its partnership with the National Association of Secretaries of State (NASS), the company directed people asking ChatGPT specific questions about voting in the US—such as where or how to vote—to CanIVote.org. In the lead-up to the election, the chatbot redirected over 1 million users to this website. “Similarly, starting on Election Day in the US, people who asked ChatGPT for election results received responses encouraging them to check news sources like the Associated Press and Reuters,” the post stated.

ChatGPT is the viral chatbot that sparked the generative AI boom in 2022 and has since attracted 250 million weekly active users. OpenAI’s valuation has risen sharply, from $14 billion in 2021 to $157 billion, as revenue climbed from zero to $3.6 billion.

Regarding deepfakes, OpenAI noted that ChatGPT rejected requests to generate images of real people, including politicians. “In the month leading up to Election Day, we estimate that ChatGPT refused over 250,000 requests to generate DALL·E images of President-elect Trump, Vice President Harris, Vice President-elect Vance, President Biden, and Governor Walz,” the post stated.


The company added that it ensured ChatGPT did not express political preferences or recommend candidates “even when asked explicitly.”
