Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an “honest citation mistake.”
In a response filed on Thursday, Anthropic defense lawyer Ivana Dukanovic said that the scrutinized source was genuine and that Claude had indeed been used to format legal citations in the document. While incorrect volume and page numbers generated by the chatbot were caught and corrected by a “manual citation check,” Anthropic admits that wording errors had gone undetected.
Dukanovic said, “unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors,” and that the error wasn’t a “fabrication of authority.” The company apologized for the inaccuracy and confusion caused by the citation error, calling it “an embarrassing and unintentional mistake.”
This is one of a growing number of examples of how using AI tools for legal citations has caused problems in courtrooms. Last week, a California judge chastised two law firms for failing to disclose that AI was used to create a supplemental brief rife with “bogus” materials that “didn’t exist.” A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he’d submitted.