Attorneys claim that, while training ChatGPT, OpenAI employees scoured the internet, harvesting the personal information of millions of consumers without obtaining informed consent or offering compensation for the data collection.
OpenAI, the creator of ChatGPT, is facing a recently filed lawsuit alleging that the company effectively stole and misappropriated consumer data to train its artificial intelligence tools.
According to CNN, the proposed class action lawsuit was filed earlier this week in a California federal court.
The complaint alleges that OpenAI covertly obtained “massive amounts of personal data from the internet,” including “essentially every piece of data exchanged on the internet [that ChatGPT] could take.”
“Once trained on stolen data, defendants saw the immediate profit potential and rushed the products to market without implementing proper safeguards or controls to ensure that they would not produce or support harmful or malicious content and conduct that could further violate the law, infringe rights and endanger lives,” said Clarkson, the public interest law firm behind the lawsuit. “Without these safeguards, the products have already demonstrated their ability to harm humans, in real ways.”
All of this data, attorneys say, was seized without first requesting permission or offering any form of “compensation.”
“By collecting previously obscure personal data of millions and misappropriating it to develop a volatile, untested technology, OpenAI put everyone in a zone of risk that is incalculable—but unacceptable by any measure of responsible data protection and use,” said Clarkson attorney Timothy K. Giordano.
Giordano and his colleagues further allege that OpenAI products broadly “use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge.”
The proposed class action is seeking injunctive relief, including a potential freeze on any further commercial use of OpenAI products.
The plaintiffs have also asked for “data dividends,” a form of financial compensation for people whose information was used to develop and train OpenAI’s tools.
Notably, Clarkson attorneys also said that OpenAI has not only disregarded consumer protection law but has also overlooked the potential risk that artificial intelligence poses to humanity.
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies,” OpenAI C.E.O. Sam Altman was quoted as saying.
The Washington Post notes that the lawsuit also claims that OpenAI is not transparent about its practices. For example, attorneys assert that the company does not clearly or adequately explain that anyone who signs up to use tools like ChatGPT could have their data retained and used to continue training its AI models.
Sources
ChatGPT maker OpenAI faces a lawsuit over how it used people’s data
Lawsuit says OpenAI violated US authors’ copyrights to train AI chatbot
OpenAI, maker of ChatGPT, hit with proposed class action lawsuit alleging it stole people’s data
OpenAI, Microsoft face class-action suit over internet data use for AI models