As the 2024 Presidential Race officially kicks off, OpenAI is imposing new rules on the use of its products, including ChatGPT. Customers will not be allowed to use its tools to impersonate candidates or spread falsehoods about the voting process.
Discouraging turnout on the opposing side is a political tactic that every camp uses. The line between that and outright voter suppression is not always clear, but it does exist. The Verge reports:
The Wall Street Journal noted the new policy changes, which were first published to OpenAI’s blog. Users and makers of ChatGPT, DALL-E, and other OpenAI tools are now forbidden from using them to impersonate candidates or local governments, and they cannot use OpenAI’s tools for campaigns or lobbying either. Users are also not permitted to use OpenAI tools to discourage voting or misrepresent the voting process.
In addition to firming up its policies on election misinformation, OpenAI also plans to incorporate the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials into images generated by DALL-E “early this year.” Microsoft, Amazon, Adobe, and Getty are also working with C2PA to combat misinformation spread through AI image generation.
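For readers curious what “digital credentials” means in practice: under the C2PA specification, a signed provenance manifest is embedded directly in the image file itself (for JPEGs, inside APP11 marker segments as JUMBF boxes labeled “c2pa”). The following Python sketch, a rough illustration based only on that spec detail and using no third-party library, checks whether a JPEG appears to carry such credentials. It is not a verifier; real validation of the signatures would require a full C2PA implementation.

```python
# Minimal sketch: detect whether a JPEG appears to carry C2PA Content
# Credentials. Assumption: C2PA manifests in JPEGs live in APP11 (0xFFEB)
# marker segments as JUMBF boxes labeled "c2pa", per the C2PA spec.
import sys

def has_c2pa_credentials(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):  # JPEG Start-of-Image marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # lost marker sync; stop scanning
        marker = data[i + 1]
        if marker == 0xDA:  # Start-of-Scan: entropy-coded data follows
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 with "c2pa" label
            return True
        i += 2 + length  # advance past this marker segment
    return False

if __name__ == "__main__":
    print(has_c2pa_credentials(sys.argv[1]))
```

This only answers “is a manifest present,” which is the easy half; the point of C2PA is that the embedded manifest is cryptographically signed, so tampering with the image or the credentials is detectable by a full validator.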
Other major tech companies are also rolling out rules for the use of their own AI engines. Google and Meta will require political entities to disclose their use of AI in order to continue using their products. The Washington Post continues:
Political parties, state actors and opportunistic internet entrepreneurs have used social media for years to spread false information and influence voters. But activists, politicians and AI researchers have expressed concern that chatbots and image generators could increase the sophistication and volume of political misinformation.
OpenAI’s measures come after other tech companies have also updated their election policies to grapple with the AI boom. In December, Google said it would restrict the kind of answers its AI tools give to election-related questions. It also said it would require political campaigns that bought ad spots from it to disclose when they used AI. Facebook parent Meta also requires political advertisers to disclose if they used AI.
While official political actors will be limited in their use of AI, private citizens most likely will not be. And if an AI tool begins to discriminate against one political side, chances are that political groups will develop their own AI models to match their rivals’.