AI has a trust problem — Decentralized privacy-preserving tech can fix it

Opinion by: Felix Xu, co-founder of Arpa Network and Bella Protocol

AI has been a dominant narrative since 2024, but users and businesses still cannot fully trust it. Whether in finance, personal data or healthcare decisions, hesitation around AI's reliability and integrity remains high.

This growing AI trust deficit is now one of the most significant obstacles to widespread adoption. Decentralized, privacy-preserving technologies are quickly being recognized as viable solutions, offering verifiability, transparency and stronger data protection without compromising AI's growth.

The trust deficit of ubiquitous AI

AI was the second most popular crypto narrative in 2024, capturing more than 16% of investor interest. Startups and multinational companies alike have allocated considerable resources to AI, extending the technology into people's finances, health and nearly every other aspect of life.

For example, the emerging DeFi x AI (DeFAI) sector shipped more than 7,000 projects with a peak market capitalization of $7 billion in early 2025 before the markets turned. DeFAI demonstrated AI's transformative potential to make decentralized finance (DeFi) more user-friendly with natural-language commands, execute complex multistep operations and conduct sophisticated market research.

Innovation alone has not resolved AI's core vulnerabilities: hallucinations, manipulation and privacy concerns.

In November 2024, a user convinced an AI agent on Base to send $47,000 despite being programmed never to do so. While the scenario was part of a game, it raised real concerns: Can AI agents be trusted with autonomy over financial operations?

Audits, bug bounties and red-teaming help, but they do not eliminate the risk of prompt injection, logic flaws or unauthorized data use. According to KPMG (2023), 61% of people still hesitate to trust AI, and even industry professionals share that concern. A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as AI's biggest obstacle.
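The reason prompt-level rules fail is structural: an instruction like "never send funds" lives in the same text channel an attacker can write to. A minimal sketch of the alternative, a deterministic policy check that runs outside the model, is below. All names here (Action, policy_check) are illustrative, not part of any real agent framework.

```python
# Hypothetical sketch: a hard-coded guardrail enforced outside the model.
# A rule stated in the prompt can be talked around via prompt injection,
# but a policy check applied to the model's *proposed action* cannot.
from dataclasses import dataclass


@dataclass
class Action:
    kind: str              # e.g., "transfer" or "reply"
    amount_usd: float = 0.0


def policy_check(action: Action) -> bool:
    """Deterministic policy: reject any fund transfer, whatever the model says."""
    return action.kind != "transfer"


# Simulate the model's output after a successful prompt injection:
injected = Action(kind="transfer", amount_usd=47_000)
assert policy_check(injected) is False   # blocked regardless of the prompt

harmless = Action(kind="reply")
assert policy_check(harmless) is True    # ordinary actions pass through
```

The design point is that the guard inspects structured actions, not text, so no clever wording in the conversation can change its verdict.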

That skepticism remains strong. A poll of The Wall Street Journal's CIO Network found that 61% of top US IT leaders are still only experimenting with AI agents. The rest were not using them or avoided them entirely, citing a lack of reliability, cybersecurity risks and data privacy as top concerns.

Industries like healthcare feel these risks most acutely. Sharing electronic health records (EHR) with LLMs to improve outcomes is promising, but it is also legally and ethically risky without ironclad privacy protections.