U.S. Lawmakers Introduce New Copyright and Privacy Bill that Will Criminalize Training AI On Copyrighted Content

U.S. Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) have introduced a bipartisan bill aimed at confronting what they call a "historic theft" of intellectual property and personal data by artificial intelligence companies.

The AI Accountability and Personal Data Protection Act, introduced this week, would make it illegal for AI companies to train their models on copyrighted content or personal information without explicit consent, and would grant individuals the right to sue over unauthorized use.

"AI companies are robbing people blind while leaving artists, writers, and other creators with no recourse," Hawley said. "It is time for Congress to give the American worker their day in court."

The proposed law would significantly change how generative AI companies such as OpenAI, Meta, Google, Anthropic, and others operate, requiring full disclosure of data use, strict consent protocols, and legal avenues for creators and individuals to claim damages or block improper use. The bill would also require companies to identify any third parties that receive the data, and allows for financial penalties and injunctive relief.

Blumenthal, a co-sponsor of the legislation, said the law is urgently needed to stop the unchecked collection and monetization of people's private and creative data.

"Tech companies must be held accountable, and legally liable, when they violate consumers' privacy by collecting, monetizing, or sharing personal information without express consent," he said.

Courts siding with AI companies intensify calls for legislation

The bipartisan proposal comes amid a growing wave of lawsuits against AI companies and a growing pattern of court decisions that have so far largely favored the tech companies. Legal experts say the legislation responds to widespread frustration among authors, musicians, publishers, and other content creators who argue the courts have been too lenient in interpreting copyright in the era of machine learning.

In June 2025, a federal judge in San Francisco ruled that Anthropic's use of copyrighted books to train its Claude AI models was "exceedingly transformative," meaning the company could claim protection under the fair use doctrine. Although the court acknowledged concerns about direct infringement in the storage of complete copies of copyrighted books, it stopped short of penalizing Anthropic for the training process itself. The final judgment on damages and potential remedies is still pending.

Meta likewise found support in court. Authors including Richard Kadrey and Christopher Golden sued the company, alleging that their books had been used without consent to train Meta's Llama models. In that case, the court also found the training process transformative, making it likely to fall under fair use, although the judge left open the possibility that retaining complete copies of copyrighted texts in a training dataset could still give rise to liability, depending on how they are used or stored.

These decisions have raised concerns across the creative industries. Many believe the fair use doctrine, as currently interpreted, was never intended to cover the mass ingestion of copyrighted material to build commercial AI products, and that the courts are allowing tech companies to bypass copyright protections that would apply in any other context.

Landmark cases fuel the debate

In one of the few legal victories for rights holders, Thomson Reuters sued Ross Intelligence, alleging that Ross used its proprietary Westlaw headnotes to build a legal research assistant. The federal court agreed, ruling that Ross had infringed Thomson Reuters' copyrighted material, a decision widely regarded as a watershed moment for AI legal liability. The case is currently in the damages phase.

The New York Times also filed a landmark lawsuit in December 2023 against OpenAI and Microsoft, accusing the companies of using its archived journalism, including paywalled content, to train GPT-4 and other models. Although the case is ongoing, early filings suggest OpenAI will likewise argue that its use of the content is transformative and protected by fair use, continuing a trend that lawmakers say underscores the urgent need for new rules.

Meanwhile, musicians, screenwriters, and visual artists have echoed similar concerns in lawsuits and congressional hearings, pointing to the wholesale scraping of social media posts, lyrics, and images as raw material for AI model training, all without compensation.

Legislation as the next battlefield

Given the legal momentum favoring AI developers, some lawmakers on both sides of the aisle maintain that legislation is now the only reliable route to protect American creators.

"This bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech's lawlessness," Hawley said.

The bill is expected to face strong opposition from the tech industry, which has long argued that scraping publicly available data for AI training is not only legal but essential for innovation and competitiveness. Companies are also likely to point to existing court decisions as validation of their data practices.

Whether the Hawley-Blumenthal bill becomes law or serves as a catalyst for broader AI regulation, it signals a sharp turn in Washington's posture toward Silicon Valley, one that places authors, journalists, artists, and everyday citizens at the center of the conversation about ownership and control in the age of AI.
