VIEWPOINT

The collapse of digital trust: what it means and how to respond

JAMES PEARSE: ATECH

Zero trust for AI is more than a theory. It provides a practical foundation for operating securely in a world where authenticity can no longer be assumed.

We're entering a moment in history where trusting anything 'digital' is becoming not just difficult, but dangerously naive. The accelerating use of AI in cyberattacks – particularly agentic AI systems that autonomously create adaptive, personalised and convincing malicious content – has pushed us into a new era of deception. Today, video, voice, email, documents and even 'live' video calls can no longer be assumed to be real. The fabric of trust that has underpinned the internet for decades is tearing, and the consequences for individuals, businesses and governments are profound.

Internal threat intelligence materials already highlight the scale of the problem. Threat actors now use AI to automate sophisticated phishing, create malware, poison data, generate new attack paths, and produce deepfake audio and video capable of convincing even experienced professionals. These deepfakes are being leveraged in highly targeted social engineering campaigns, impersonating trusted colleagues, executives or partners with almost flawless realism.

Cybercriminals are not just using AI; they are abusing it at scale. Attackers have launched massive social engineering campaigns targeting AI platform users, promoted fake AI services, deployed cybercrime tools such as FraudGPT to generate scam communications, and used deepfake impersonation to steal more than $25 million from a multinational corporation.

This convergence of internal observations and global intelligence paints a clear picture: AI is now the most potent force multiplier cybercriminals have ever had, and trust – digital trust – is the casualty. Deepfake technology has advanced so rapidly that the human eye and ear simply can't keep up.
Internal analysis already notes how AI can generate realistic fake audio and video to manipulate victims in social engineering attacks, often bypassing traditional scepticism. External reporting from cybersecurity firm Fortinet highlights deepfake-driven fraud increasing by over 1,300 per cent year-on-year, with attacks now infiltrating financial systems, corporate processes and political information flows.

Even mainstream media has recognised this collapse. NBC News describes a "complete erosion of trust online", where audiences can no longer easily distinguish manipulated media from authentic evidence.
RkJQdWJsaXNoZXIy NzQ1NTk=