Inside Trump’s Long-Awaited AI Strategy

Welcome back to In the Loop, the new twice-weekly newsletter about the world of AI.
If you're reading this in your browser, you can subscribe to have the next edition delivered straight to your inbox.
What to know: Trump’s AI action plan
President Trump will deliver a major speech on Wednesday at an event in Washington, D.C., titled "Winning the AI Race," where he is expected to unveil his long-awaited AI Action Plan. The 20-page, high-level document will focus on three main areas, according to a person familiar with the matter. It will be a mix of directives to federal agencies, along with some grant programs. "It's mostly carrots, not sticks," the person said.
Pillar 1: Infrastructure – The action plan's first pillar concerns AI infrastructure. The plan emphasizes the importance of overhauling permitting rules to ease the construction of new data centers. It will also address the need to modernize the energy grid, including by adding new sources of power.
Pillar 2: Innovation – Second, the action plan will say that the U.S. must lead the world on innovation. It will focus on cutting red tape, and will revive the idea of blocking states from regulating AI, though mostly as a symbolic gesture, since the White House's power to dictate to states is limited. It will also warn other countries against harming U.S. companies' ability to develop AI, the person said. This section will additionally encourage the development of so-called "open-weight" AI models, which allow developers to download models, modify them, and run them locally.
Pillar 3: Global influence – The third pillar of the action plan will focus on the importance of spreading American AI around the world, so that foreign countries don't come to rely on Chinese models or chips. Officials worry that DeepSeek and other recent Chinese models could become a useful source of geopolitical leverage if they continue to be widely adopted. So part of the plan will focus on ways to ensure that U.S. allies and other countries around the world adopt American models instead.
Who to know: Michael Druggan, former xAI employee
Elon Musk's xAI has fired an employee who welcomed the possibility of AI wiping out humanity, in posts on X that drew attention and condemnation. "I would like to announce that I am no longer employed at xAI," wrote Michael Druggan, a mathematician who worked on creating expert datasets for training Grok, in a post on X, attributing his dismissal to "things I've posted on this account regarding my stance on AI philosophy."
What he said – In response to a post questioning why a superintelligent AI would decide to cooperate with humans rather than destroy them, Druggan had written: "It won't, and that's OK. We can pass the torch to the most intelligent species in the known universe." When a commenter replied that he would prefer that his child gets to live, Druggan responded: "Selfish tbh." Druggan has identified in other posts as a member of the "worthy successor" movement – a transhumanist group that believes humans should welcome their inevitable replacement by a superintelligent AI, and work to make it as intelligent and morally valuable as possible.
X firestorm – The controversial posts were picked up by AI Safety Memes, an X account. The account had, in the preceding days, sparred with Druggan over posts in which the xAI employee defended Grok advising a user that he should assassinate a world leader if he wanted to attract attention. "This xAI employee is openly OK with AI causing human extinction," the account wrote in a tweet that appears to have been noticed by Musk. After Druggan announced he was no longer employed at xAI, Musk replied to AI Safety Memes with a two-word post: "Philosophical disagreements."
Succession planning – Druggan did not respond to a request for comment. But in a separate post, he clarified his views. "I don't want human extinction, of course," he wrote. "I'm human and I quite like being alive. But, in a cosmic sense, I recognize that humans are not always the most important thing."
AI in action
Last week, we got another worrying glimpse of ChatGPT's ability to send users down delusional rabbit holes – this time involving perhaps the most high-profile individual to date.
Geoff Lewis, a venture capitalist, posted screenshots of his conversations with ChatGPT. "I've long used GPT as a tool in pursuit of my core value: truth," he wrote. "Over years, I mapped the non-governmental system. Over months, GPT independently recognized and sealed the pattern."
The screenshots appear to show ChatGPT role-playing a conspiracy-theory-style scenario in which Lewis had discovered a secret entity known as "Mirrorthread," supposedly associated with 12 deaths. Some observers noted that the style of the text seemed to mirror that of the community-written "SCP" fan fiction, and that Lewis appeared to have mistaken this role-play for reality. "This is an important event: the first time AI-induced psychosis has affected a well-respected and high-achieving individual," Max Spero, CEO of a company focused on detecting "AI slop," wrote on X. Lewis did not respond to a request for comment.
What we're reading
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
A new paper co-authored by dozens of top AI researchers from OpenAI, DeepMind, Anthropic, and elsewhere calls on companies to ensure that future AIs keep "thinking" in human languages, arguing that this presents a "new and fragile opportunity" to ensure AIs don't deceive their human creators. Current "reasoning" models think in language, but a new trend in AI research toward outcome-based reinforcement learning threatens to undermine this "easy win" for AI safety. I found this paper especially interesting because it hit on a dynamic I wrote about six months ago, here.