A prominent AI ethics organization filed a complaint with the Federal Trade Commission this week urging the agency to investigate ChatGPT-maker OpenAI and halt its development of future large language models. The complaint, filed by the Center for AI and Digital Policy (CAIDP), alleges OpenAI’s recently released GPT-4 model is “biased, deceptive, and a risk to privacy and public safety.”
CAIDP issued the complaint just one day after more than 500 AI experts signed an open letter demanding AI labs immediately pause development of LLMs more powerful than GPT-4 over concerns they could pose “profound risks to society and humanity.” Marc Rotenberg, CAIDP’s president, was among the letter’s signatories. That said, CAIDP’s complaint mostly steers clear of hyperbolic predictions of AI as an existential threat to humanity. Instead, it points to the FTC’s own stated guidance on AI systems, which says they should be “transparent, explainable, fair, and empirically sound while fostering accountability.” GPT-4, the complaint argues, fails to meet those standards.
The complaint claims GPT-4, which was released earlier this month, launched without any independent assessment and without any way for outsiders to replicate OpenAI’s results. CAIDP warned the system could be used to spread disinformation, contribute to cybersecurity threats, and potentially worsen or “lock in” biases already well documented in AI models.
“It is time for the FTC to Act,” the group wrote. “There should be independent oversight and evaluation of commercial AI products offered in the United States.”
The FTC confirmed to Gizmodo that it had received the complaint but declined to comment. OpenAI did not respond to our request for comment.
FTC sets its sights on AI
The FTC, to its credit, has been thinking out loud about the potential dangers new AI systems could pose to consumers. In a series of blog posts released in recent months, the agency explored the ways chatbots or other “synthetic media” can make it more difficult to parse out what’s real online, a potential boon for fraudsters and others looking to deceive people en masse.
“Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals,” the FTC wrote.
Those concerns, however, fall far short of the potential society-level crisis depicted in the letter released this week by the Future of Life Institute. AI experts, both those who signed the letter and others who did not, are deeply divided over how concerned to be about future LLMs. Though almost all concerned AI researchers agree policymakers need to catch up and draft smart rules and regulations to guide AI’s development, minds are split when it comes to ascribing human-level intelligence to what are essentially extremely good guessers built on potentially trillions of parameters.
“What we should be concerned about is that this type of hype can both over-exaggerate the capabilities of AI systems and distract from pressing concerns like the deep dependency of this wave of AI on a small handful of firms,” AI Now Institute Managing Director Sarah Myers West previously told Gizmodo.