Here’s how ChatGPT maker OpenAI plans to deter election misinformation in 2024
ChatGPT maker OpenAI has outlined a plan to prevent its artificial intelligence tools from being used to spread election misinformation in 2024.
NEW YORK (AP) — ChatGPT maker OpenAI has outlined a plan to prevent its tools from being used to spread election misinformation as voters in more than 50 countries prepare to cast their ballots in national elections this year.
The safeguards spelled out by the San Francisco-based artificial intelligence startup in a blog post this week include a mix of preexisting policies and newer initiatives to prevent the misuse of its wildly popular generative AI tools. Those tools can create novel text and images in seconds but can also be weaponized to concoct misleading messages or convincing fake photographs.
The steps will apply only to OpenAI, just one player in an expanding universe of companies developing advanced generative AI tools. The company, which announced the moves Monday, said it plans to “continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency.”
It said it will ban people from using its technology to create chatbots that impersonate real candidates or governments, to misrepresent how voting works, or to discourage people from voting. It added that until more research can be done on the persuasive power of its technology, it won’t allow users to build applications for the purposes of political campaigning or lobbying.