Robert Blumofe discusses the cybersecurity threat of ChatGPT

While generative artificial intelligence presents exciting possibilities, it could also be used to refine cyberattacks, according to the executive vice president and chief technology officer of Akamai

Alex Smith


The launch of OpenAI’s ChatGPT chatbot has quickly broken through into the popular imagination, as its detailed and coherent responses across all kinds of topics provide a glimpse into a future with generative artificial intelligence. While this presents many exciting opportunities, it also creates new threats to consider. When a tool like ChatGPT can craft messages that are sometimes almost indistinguishable from those written by a human, how can we reliably identify where a message really came from?

This question has particular relevance for phishing, in which attackers attempt to trick a target into revealing sensitive information or downloading malware. A generative AI tool could produce personalised messages far more quickly than any human attacker and write them far more convincingly than is possible with existing tools.

“As we see more advances in generative AI and tools like ChatGPT, it will become more difficult for people – even those with tech and security backgrounds – to spot phishing lures,” says Dr Robert Blumofe, executive vice president and chief technology officer at Akamai Technologies. “Phishing attacks are already wreaking havoc on organisations with believable impersonation scams like business email compromise. Generative AI technology will inevitably make this problem even worse and create increased risk for enterprises and their employees.” 

Even experienced and careful employees therefore need to know about the potential threats they could face as generative AI technology becomes more mature. Education on best practice in dealing with suspicious communication remains a valuable tool for organisations in overcoming such threats. 

“Generative AI tools can write phishing lures that are highly personalised and appear to come from someone you know,” says Blumofe. “Such lures will be much harder to spot. But there is still value in training people to identify suspicious links, reminding them of company protocol to verify requests that will directly impact core operations, and outlining how to report potential phishing attempts.” 
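As a concrete illustration of the checks such training typically teaches, the sketch below encodes a few common link red flags as simple heuristics. This is a minimal sketch in Python: the allow-listed domains, the example URLs and the specific rules are all assumptions made for illustration, not a complete detector or any particular vendor's tooling.

```python
# Minimal sketch of the link checks that phishing-awareness training
# often teaches, expressed as heuristics. The allow-list and rules are
# illustrative assumptions, not a complete or authoritative detector.
from urllib.parse import urlparse

# Hypothetical corporate allow-list of known-good domains.
KNOWN_DOMAINS = {"example.com", "intranet.example.com"}


def suspicious_link_reasons(url: str) -> list[str]:
    """Return human-readable reasons a link deserves a second look."""
    reasons = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        reasons.append("not served over HTTPS")
    if host.replace(".", "").isdigit():
        reasons.append("raw IP address instead of a domain name")
    if host not in KNOWN_DOMAINS and "example.com" in host:
        reasons.append("lookalike of a known domain")
    if host.count(".") >= 3:
        reasons.append("unusually deep subdomain chain")
    return reasons


if __name__ == "__main__":
    for link in ("https://intranet.example.com/hr",
                 "http://example.com.login.attacker.net/reset"):
        print(link, "->", suspicious_link_reasons(link) or "no obvious red flags")
```

The point of expressing the rules this way is that they mirror what employees are asked to do by eye: check the protocol, the real domain behind a familiar-looking name, and the overall shape of the address before clicking.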

However, given the sophistication that such attacks could attain, education can only go so far in preventing them. Employees cannot be reasonably expected to spot every single attack they might be targeted with. 

“It is most important to consider what you are hoping to achieve with training, while also acknowledging no training will stop all threats,” says Blumofe. “Enterprises need to seriously consider what security tools, technology and processes they are going to pair with education and training to best protect their systems.” 

According to Blumofe, organisations should therefore put contingency plans in place that limit the damage an attack can cause if it succeeds. Strategies that value mitigation alongside prevention will be the most effective.

“There are tools that can help identify phishing lures and block access to phishing sites, but while such tools are very effective, they are not perfect,” he says. “Organisations should also look at protections – including microsegmentation and zero-trust network access – that can help block the spread of malware if and when it does make it past that first line of defence.” 
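To make the deny-by-default idea behind microsegmentation concrete, here is a minimal sketch in Python. The segment names, the policy table and the flow fields are all hypothetical, invented for this example; it is not Akamai's product or a real enforcement engine, only the core allow-list logic.

```python
# Minimal sketch of a deny-by-default microsegmentation check.
# Hypothetical illustration: segment names, policy table and flow
# fields are invented for this example, not a real product API.
from dataclasses import dataclass

# Allow-list of (source segment, destination segment, port) tuples.
# Anything not listed is denied, which is the zero-trust default.
ALLOWED_FLOWS = {
    ("web-frontend", "api-gateway", 443),
    ("api-gateway", "orders-db", 5432),
}


@dataclass
class Flow:
    src_segment: str
    dst_segment: str
    dst_port: int


def is_allowed(flow: Flow) -> bool:
    """Permit a flow only if it matches an explicit allow rule."""
    return (flow.src_segment, flow.dst_segment, flow.dst_port) in ALLOWED_FLOWS


if __name__ == "__main__":
    # A compromised workstation trying to reach the database directly
    # is denied, limiting lateral movement after a successful phish.
    print(is_allowed(Flow("workstation", "orders-db", 5432)))   # False
    print(is_allowed(Flow("api-gateway", "orders-db", 5432)))   # True
```

The design point is that the policy enumerates what is permitted and denies everything else, so malware that lands on one machine cannot freely reach other segments even after it gets past the first line of defence.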

Ultimately, the advent of new methods of cyberattack will require organisations to adapt the way in which they secure every part of their working processes and communication. It will become more important than ever for them to be able to authenticate the source of any communication before taking any action that could be used as part of an attack.  

“Ask yourself: if someone sent a message to the right person, asking that person to shut down a critical service, claiming to be the chief information officer and demanding urgency, would they do it?” says Blumofe. “It’s all too easy – and even easier now with generative AI – for an attacker to forge such a message and make it very convincing. In fact, the same is true of phone calls and even video calls. It is paramount, therefore, that no critical business process can be triggered by an email, message, phone call, video call, or anything else that cannot be reliably authenticated.”  
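A minimal sketch of what "reliably authenticated" can mean in practice is shown below, using an HMAC signature check before a critical action is allowed to run. The shared key, message format and action are placeholders assumed for illustration; a real deployment would pair cryptographic verification with out-of-band approval and proper key management.

```python
# Minimal sketch of authenticating a critical request before acting
# on it. The key, message format and action are placeholder
# assumptions for illustration, not a production design.
import hashlib
import hmac

SHARED_KEY = b"example-key-provisioned-out-of-band"  # placeholder secret


def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    """Compute an HMAC-SHA256 signature over the request message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()


def execute_if_authentic(message: bytes, signature: str) -> None:
    """Refuse to act unless the request carries a valid signature.

    An email, chat message or phone call claiming to be from the CIO
    carries no valid signature, so it can never trigger this path.
    """
    if not hmac.compare_digest(sign(message), signature):
        raise PermissionError("unauthenticated request rejected")
    print(f"executing authenticated request: {message.decode()}")


if __name__ == "__main__":
    request = b"shutdown critical-service-42"
    execute_if_authentic(request, sign(request))      # runs
    try:
        execute_if_authentic(request, "forged-sig")   # rejected
    except PermissionError as err:
        print(err)
```

The essential property is that the trigger for a critical process is a verifiable artefact rather than the apparent identity of a sender, which is exactly what generative AI makes easy to forge.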

Find out more on Akamai’s website.

This article was originally published in the Spring 2023 issue of Technology Record. To get future issues delivered directly to your inbox, sign up for a free subscription.
