AI has a trust problem — Decentralized privacy-preserving tech can fix it
Opinion by: Felix Xu, co-founder of Arpa Network and Bella Protocol
AI has been a dominant narrative since 2024, yet users and businesses still cannot fully trust it. Whether in finance, personal data or healthcare decisions, hesitation around AI's reliability and integrity remains high.
This growing AI trust deficit is now one of the most significant obstacles to widespread adoption. Decentralized, privacy-preserving technologies are quickly being recognized as viable solutions, offering verifiability, transparency and stronger data protection without compromising AI's growth.
The trust deficit of omnipresent AI
AI was the second most popular crypto narrative in 2024, attracting more than 16% of investor interest. Startups and multinational companies alike have allocated considerable resources to AI, extending the technology into people's finances, health and nearly every other aspect of life.
For example, the emerging DeFi x AI (DeFAI) sector spawned more than 7,000 projects with a peak market capitalization of $7 billion in early 2025 before markets dipped. DeFAI demonstrated AI's transformative potential to make decentralized finance (DeFi) more user-friendly with natural-language commands, execute complex multistep operations and conduct sophisticated market research.
Innovation alone has not resolved AI's core vulnerabilities: hallucinations, manipulation and privacy concerns.
In November 2024, a user convinced an AI agent on Base to send $47,000 despite it being programmed never to do so. While the scenario was part of a game, it raised real concerns: Can AI agents be trusted with autonomy over financial operations?
Audits, bug bounties and red teams help, but they do not eliminate the risk of prompt injection, logic flaws or unauthorized data use. According to KPMG (2023), 61% of people still hesitate to trust AI, and even industry professionals share that concern. A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as AI's biggest obstacle.
That skepticism remains strong. A poll of the Wall Street Journal's CIO Network found that 61% of top US IT leaders are still only experimenting with AI agents. The rest were not experimenting at all or were avoiding agents altogether, citing lack of reliability, cybersecurity risks and data privacy as their top concerns.
Industries such as healthcare feel these risks most acutely. Sharing electronic health records (EHRs) with LLMs to improve outcomes is promising, but it is also legally and ethically risky without airtight privacy protections.
For example, the healthcare industry already suffers from data privacy breaches. The problem is compounded when hospitals share EHR data to train AI algorithms without safeguarding patient privacy.
Decentralized, privacy-preserving infrastructure
J.M. Barrie wrote in Peter Pan, "All the world is made of faith, and trust, and pixie dust." Trust is not just nice to have in AI; it is foundational. AI's projected economic boost of $15.7 trillion by 2030 could never materialize without it.
Enter decentralized cryptographic systems such as zero-knowledge succinct non-interactive arguments of knowledge (ZK-SNARKs). These technologies offer a new path: letting users verify AI decisions without revealing their personal data or the model's inner workings.
By applying privacy-preserving cryptography to machine-learning infrastructure, AI can be verifiable, trustworthy and privacy-respecting, particularly in sectors such as finance and healthcare.
ZK-SNARKs rely on advanced cryptographic proofs that let one party prove something is true without revealing how. For AI, this makes it possible to verify models for correctness without disclosing their training data, input values or proprietary logic.
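To make "prove without revealing" concrete, here is a minimal Schnorr-style proof of knowledge made non-interactive via the Fiat-Shamir heuristic. It is a toy precursor to ZK-SNARKs, which generalize the same idea to arbitrary computations; the parameters and names below are illustrative, not production-grade.

```python
# Toy Fiat-Shamir Schnorr proof: the prover shows knowledge of a secret x
# with y = G^x mod P, without revealing x. Demo-sized parameters only.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime (real systems use larger groups/curves)
G = 3            # public generator

def challenge(y: int, t: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing public values
    return int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")

def prove(x: int):
    """Prove knowledge of x such that y = G^x mod P, revealing only y."""
    y = pow(G, x, P)
    r = secrets.randbelow(P)        # fresh randomness hides x
    t = pow(G, r, P)                # commitment
    s = r + challenge(y, t) * x     # response (left unreduced for exactness)
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    # G^s == t * y^c (mod P) holds iff the prover knew x
    c = challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, t, s = prove(secrets.randbelow(P))
print(verify(y, t, s))   # True: the verifier never learns the secret
```

The verifier sees only `y`, `t` and `s`; recovering `x` from them would require solving a discrete logarithm. SNARKs add succinctness and support for general circuits on top of this basic pattern.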
Imagine a decentralized AI lending agent. Instead of reviewing full financial records, it checks encrypted proofs of creditworthiness to make autonomous loan decisions without ever accessing sensitive data. This protects user privacy and reduces institutional risk at the same time.
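A simple precursor to that lending agent can be sketched without zero-knowledge machinery at all: a credit bureau signs only the boolean claim "score meets the threshold," and the agent verifies the signature. Everything here (the bureau, the key, the claim format) is a hypothetical stand-in for illustration; an HMAC plays the role of the signature.

```python
# Hedged sketch: the agent decides on a signed attestation, never on raw data.
import hmac
import hashlib

BUREAU_KEY = b"demo-shared-key"   # stand-in for a real bureau signing key

def issue_attestation(applicant_id: str, score: int, threshold: int = 700):
    """Bureau-side: attest only to the boolean claim, not the raw score."""
    claim = f"{applicant_id}:score>={threshold}:{score >= threshold}"
    tag = hmac.new(BUREAU_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, tag

def agent_decides(claim: str, tag: str) -> str:
    """Agent-side: approve only if the attestation verifies and the claim holds."""
    expected = hmac.new(BUREAU_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return "reject: invalid attestation"
    return "approve" if claim.endswith("True") else "reject: below threshold"

claim, tag = issue_attestation("alice", score=742)
print(agent_decides(claim, tag))   # approve -- the agent never saw 742
```

A ZK version goes further: the applicant could prove "score >= 700" directly from committed data, removing even the trusted bureau from the loop.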
ZK technology also addresses the black-box nature of LLMs. Using dynamic proofs, it is possible to verify AI outputs while protecting both data integrity and model architecture. That is a win for users and companies alike: one no longer fears data misuse, while the other safeguards its IP.
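One lightweight ingredient of output verification is a commitment to the model itself: the provider publishes a hash of the weights up front, and an auditor can later confirm that a claimed output really came from the committed model. This hedged sketch uses a toy linear "model" and re-execution by the auditor; full ZK proofs would avoid revealing the weights or re-running the model at all.

```python
# Minimal model-commitment sketch (illustrative names, toy model).
import hashlib
import json

def commit(weights: dict) -> str:
    """Publish this digest up front; it binds the provider to one model."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def toy_model(weights: dict, x: float) -> float:
    # Stand-in for a real forward pass
    return weights["w"] * x + weights["b"]

def audit(commitment: str, weights: dict, x: float, claimed_y: float) -> bool:
    """Weights must match the commitment AND reproduce the claimed output."""
    return commit(weights) == commitment and toy_model(weights, x) == claimed_y

weights = {"w": 2.0, "b": 1.0}
c = commit(weights)
y = toy_model(weights, 10.0)       # provider publishes (x, y) alongside c
print(audit(c, weights, 10.0, y))  # True
print(audit(c, weights, 10.0, 99.0))  # False: not the committed model's output
```

Note the trade-off this sketch leaves open: the auditor sees the weights, which a ZK proof over the committed computation would hide.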
Decentralizing AI trust
We are entering a new phase of AI in which better models are not enough. Users demand transparency; companies need resilience; regulators expect accountability.
Decentralized, verifiable cryptography offers all three.
Technologies like ZK-SNARKs, threshold multiparty computation (MPC) and BLS-based verification systems are not just "crypto tools"; they are becoming the foundation of trustworthy AI. Combined with blockchain's transparency, they create a powerful new stack for privacy-preserving, verifiable and reliable AI systems.
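The multiparty-computation piece of that stack can be illustrated with its simplest building block, additive secret sharing: each party holds a random-looking share of a private input, no single share reveals anything, and parties can even compute on shares locally. This is a hedged, minimal sketch; real threshold MPC adds authentication, fault tolerance and malicious-party protections.

```python
# Additive secret sharing over a prime field (illustrative only).
import secrets

Q = 2**61 - 1   # prime modulus for share arithmetic

def share(secret: int, n: int = 3):
    """Split `secret` into n shares that sum to it modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)   # last share fixes the sum
    return shares

def reconstruct(shares) -> int:
    return sum(shares) % Q

# Two private inputs summed without any party seeing either one:
a_shares, b_shares = share(1200), share(34)
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]  # each party adds locally
print(reconstruct(sum_shares))   # 1234
```

The "threshold" variants mentioned above replace plain additive sharing with schemes (e.g., Shamir's) where any t-of-n shares suffice to reconstruct, so the system tolerates offline parties.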
Gartner has predicted that 80% of companies will be using AI by 2026. Adoption will not be driven by hype or resources alone. It will hinge on building AI that people and businesses can genuinely trust.
And it starts with decentralization.
Opinion by: Felix Xu, co-founder of Arpa Network and Bella Protocol.
This article is for general information purposes and is not intended to be and should not be taken as legal or investment advice. The views, thoughts and opinions expressed here are the author's alone and do not necessarily reflect or represent the views and opinions of Cointelegraph.