Two recent federal rulings appeared to hand major victories to the booming AI industry, but one may hint at more challenges to come.

Last Monday, federal judge William Alsup ruled that Anthropic's use of purchased books to train its Claude AI model constituted fair use under U.S. copyright law, likening the AI training process to a human reading and learning from books. Two days later, federal judge Vince Chhabria, in a similar case involving Meta, also ruled in favor of the AI company, but with different reasoning. Chhabria emphasized that while Meta won this lawsuit, the wider practice of training on copyrighted works without permission will often violate fair-use provisions.

Lawyers suggest the two rulings could have different effects on future AI-related lawsuits. Steve Kramarsky, a litigator at Dewey Pegno & Kramarsky, said Alsup's opinion "leaves very little room for future plaintiffs to bring suit unless they argue that the AI model can be prompted to generate specific infringing output." Chhabria's opinion, by contrast, "offers what amounts to a roadmap for the next set of plaintiffs," Kramarsky said, particularly those aiming to show that AI training can cause broader market harm even in the absence of direct copying. Chhabria also took a different tack on the difference between AI training and human learning, warning that the sheer scale of AI-generated output could pose economic risks to creators.

Only one thing is certain at this point: AI companies can expect plenty more litigation on the matter.