RISE Act Provides AI Guardrails but Not Enough Detail

Civil liability law rarely makes for good dinner conversation, but it can have an immense impact on the way emerging technologies like artificial intelligence evolve.

Badly drafted, liability rules can create barriers to future innovation by exposing entrepreneurs, in this case AI developers, to unnecessary legal risk. So argues US Senator Cynthia Lummis, who last week introduced the Responsible Innovation and Safe Expertise (RISE) Act.

The bill aims to shield AI developers from civil lawsuits so that doctors, lawyers, engineers and other professionals "can understand what AI can and cannot do before relying on it."

Initial reactions to the RISE Act from sources contacted by Cointelegraph were mostly positive, though some criticized the bill's limited scope, its shortcomings on transparency and questioned the very idea of offering AI developers a liability shield.

Most characterized RISE as a work in progress rather than a finished document.

Is the RISE Act a "gift" to AI developers?

According to Hamid Ekbia, professor at Syracuse University's Maxwell School of Citizenship and Public Affairs, the Lummis bill is "timely and needed." (Lummis called it the nation's "first targeted liability reform legislation for professional-grade AI.")

But the bill tilts too heavily in favor of AI developers, Ekbia told Cointelegraph. The RISE Act requires them to publicly disclose model specifications so professionals can make informed decisions about the AI tools they choose to use, but:

"It places the bulk of the burden of risk on 'learned professionals,' demanding of developers only 'transparency' in the form of technical specifications (model cards and specs) while otherwise providing them with broad immunity."

Unsurprisingly, some were quick to characterize the Lummis bill as a "gift" to AI companies. Democratic Underground, which describes itself as a "left of center political community," noted in one of its forums that "AI companies don't want to be sued for the failures of their tools, and this bill, if passed, will accomplish that."

Not all agree. "I wouldn't go so far as to call the bill a 'gift' to AI companies," Felix Shipkevich, principal at the Shipkevich law firm, told Cointelegraph.

The RISE Act's proposed immunity provision appears aimed at shielding developers from strict liability for the unpredictable behavior of large language models, Shipkevich explained, particularly when there is no negligence or intent to cause harm. From a legal perspective, that is a rational approach. He added:

"Without some form of protection, developers could face limitless exposure for outputs they have no practical way of controlling."

The proposed legislation's scope is fairly narrow. It focuses largely on scenarios in which professionals are using AI tools while dealing with their clients or patients. A financial adviser could use an AI tool to help develop an investment strategy for an investor, for instance, or a radiologist could use AI software to help interpret an X-ray.

Related: Senate passes GENIUS stablecoin bill amid concerns over systemic risk

The RISE Act does not, however, address cases in which there is no professional intermediary between the AI developer and the end user, as when chatbots are used as digital companions for minors.

Such a civil liability case arose recently in Florida, where a teenager died by suicide after engaging for months with an AI chatbot. The family of the deceased said the software had been designed in a way that was not reasonably safe for minors. "Who should be held responsible for the loss of life?" asked Ekbia. Such cases are not addressed in the proposed Senate legislation.

"There is a need for clear and unified standards so that users, developers and all stakeholders understand the rules of the road and their legal obligations," said Ryan Abbott, professor of law and health sciences at the University of Surrey School of Law.

But that is difficult because AI can create new kinds of potential harms, given the technology's complexity, opacity and autonomy. The healthcare arena will be particularly challenging in terms of civil liability, according to Abbott, who holds both medical and law degrees.

For example, physicians have historically outperformed AI software in medical diagnoses, but more recently, evidence is emerging that in certain areas of medical practice, keeping a human in the loop "actually achieves worse outcomes than letting the AI do all the work," Abbott said. "That raises all sorts of interesting liability questions."

Who will pay compensation if a grievous medical error is made when a physician is no longer in the loop? Will malpractice insurance cover it? Maybe not.

The AI Futures Project, a nonprofit research organization, has tentatively endorsed the bill (it was consulted while the bill was being drafted). But executive director Daniel Kokotajlo said the transparency disclosures demanded of AI developers fall short.

"The public deserves to know what goals, values, agendas, biases, instructions, etc., companies are attempting to give to powerful AI systems." The bill does not require such transparency and thus does not go far enough, Kokotajlo said.

Also, "companies can always choose to accept liability instead of being transparent, so whenever a company wants to do something that the public or regulators wouldn't like, it can simply opt out," said Kokotajlo.

The EU's "rights-based" approach

How does the RISE Act compare with liability provisions in the EU's AI Act of 2023, the first comprehensive regulation of AI by a major regulator?

The EU's AI liability stance has been in flux. An EU AI liability directive was first conceived in 2022, but it was withdrawn in February 2025, some say as a result of lobbying by the AI industry.

Still, EU law generally adopts a human-rights-based framework. As noted in a recent UCLA Law Review article, a rights-based approach "emphasizes the empowerment of individuals," especially end users such as patients, consumers or clients.

A risk-based approach, like that of the Lummis bill, by contrast builds on processes, documentation and assessment tools. It would focus more on bias detection and mitigation, for instance, than on providing affected people with concrete rights.

When Cointelegraph asked Kokotajlo whether a "risk-based" or "rights-based" approach to civil liability was more appropriate for the United States, he answered: "I think the focus should be risk-based and centered on those who create and deploy the technology."

Related: Crypto users vulnerable as Trump dismantles consumer watchdog

The EU takes a more proactive approach to these questions generally, added Shipkevich. "Their laws require AI developers to show upfront that they are following safety and transparency rules."

Clear standards needed

The Lummis bill will likely require some modifications before it is enacted into law (if it ever is).

"I view the RISE Act positively as long as this proposed legislation is viewed as a starting point," said Shipkevich. "It is reasonable, after all, to provide some protection to developers who are not acting negligently and have no control over how their models are used downstream." He added:

"If this bill evolves to include real transparency requirements and risk-management obligations, it could lay the groundwork for a balanced approach."

According to Justin Bullock, vice president of policy at Americans for Responsible Innovation (ARI), "the RISE Act puts forward some strong ideas, including federal transparency guidance, a safe harbor with limited scope and clear rules around liability for professional adopters of AI," though ARI has not endorsed the legislation.

But Bullock also had concerns about transparency and disclosures, namely ensuring that the required transparency evaluations are effective. He told Cointelegraph:

"Publishing model cards without robust third-party auditing and risk assessments may give a false sense of security."

Still, overall, the Lummis bill "is a constructive first step in the conversation over what federal AI transparency requirements should look like," said Bullock.

Assuming the legislation is passed and signed into law, it would take effect on Dec. 1, 2025.

Magazine: Bitcoin's invisible tug-of-war between suits and cypherpunks