Technology Record - Issue 28: Spring 2023

INTERVIEW

Facing down the threat of ChatGPT

While generative artificial intelligence presents exciting possibilities for the future, it also has the potential to be used to refine cyberattacks. We speak to Robert Blumofe of Akamai Technologies to find out more.

The launch of OpenAI's ChatGPT chatbot has quickly broken through into the popular imagination, as its detailed and coherent responses across all kinds of topics provide a glimpse into a future with generative artificial intelligence. While this presents many exciting opportunities, there are also new threats to be considered. When a tool like ChatGPT can craft messages that are sometimes almost indistinguishable from those written by a human, how can we reliably identify where a message came from?

This question has particular relevance for phishing, where attackers attempt to trick a target into revealing sensitive information or downloading malware. Personalised messages could be generated automatically by a generative AI tool far more quickly than any human attacker could write them, and far more convincingly than is possible with existing tools.

"As we see more advances in generative AI and tools like ChatGPT, it will become more difficult for people – even those with tech and security backgrounds – to spot phishing lures," says Dr Robert Blumofe, executive vice president and chief technology officer at Akamai Technologies. "Phishing attacks are already wreaking havoc on organisations with believable impersonation scams like business email compromise. Generative AI technology will inevitably make this problem even worse and create increased risk for enterprises and their employees."

Even experienced and careful employees therefore need to know about the potential threats they could face as generative AI technology matures. Education on best practice in dealing with suspicious communication remains a valuable tool for organisations in overcoming such threats.
"Generative AI tools can write phishing lures that are highly personalised and appear to come from someone you know," says Blumofe. "Such lures will be much harder to spot. But

BY ALEX SMITH

Photo: Unsplash/Emiliano Vittoriosi
