“OpenAI has been slapped with its first-ever defamation lawsuit after a ChatGPT ‘hallucination’ generated a bogus embezzlement complaint against a Georgia radio host,” the New York Post reports. “Mark Walters was shocked to learn ChatGPT created a false case that accused him of ‘defrauding and embezzling’ funds from the Second Amendment Foundation (SAF).”
That “chatbots” can “hallucinate” sounds like something out of science fiction. What that actually means, and how it differs from program corruption, coding errors, or simply bad source inputs, is still not clear: This is all new stuff.
Walters’ complaint against the proprietors of an emerging technology opens a unique chapter in American jurisprudence and raises questions about the moral responsibility and legal liability human beings have for the seemingly independent actions of their creations. When those creations turn destructive, Dr. Frankenstein’s culpability for the choices of his creature does not seem an inappropriate analogy.
By David Codrea