Technology Record - Issue 28: Spring 2023

VIEWPOINT

Bringing GPT to cybersecurity

Anand Raghavan, Armorblox

The time is right for the cybersecurity industry to adopt large language models more widely as a means to protect against today's sophisticated cyberattacks.

With the announcement of GPT-4, it has become clear that transformer-based models can now perform increasingly complex tasks with improved precision. While this will help advance many fields of research, it also poses grave challenges should threat actors gain access to such technologies. Attention has focused on GPT's potential for human-like interaction, but we should not overlook the risk that cybercriminals will put the technology to malicious use. For example, attackers can use it to quickly craft messaging for targeted phishing, social engineering, executive impersonation, financial fraud attacks, and more.

As the latest IC3 report from the FBI shows, business email compromise (BEC), investment fraud, and ransomware remain the three biggest categories of email-based threats targeting organisations. All of these weaponise the contents and language of an email to compromise a person's trust, manipulating them into taking an action that harms both them and their organisation.

The critical business workflows that attackers target usually fall into four categories: those involving money (wire transfers, invoice payments, payroll), credentials (password resets, SharePoint emails), sensitive data (personally identifiable information, payment card industry data), and confidential data (intellectual property, internal strategy documents). What is common across all of these workflows is that well-trained language models can uniquely categorise emails into these buckets based on their contents.
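The categorisation idea can be sketched as follows. This is an illustrative toy, not Armorblox's model: a production system would use a trained language model, whereas this stand-in simply scores a handful of hypothetical keywords per workflow category (the category names follow the article; the keywords and scoring are assumptions for illustration only).

```python
from typing import Optional

# Hypothetical keyword lists standing in for a trained language model's
# understanding of each high-value business workflow.
WORKFLOW_KEYWORDS = {
    "money": ["wire transfer", "invoice", "payroll", "payment"],
    "credentials": ["password reset", "sharepoint", "verify your account"],
    "sensitive_data": ["social security number", "card number", "date of birth"],
    "confidential_data": ["internal strategy", "intellectual property", "roadmap"],
}

def categorise_email(body: str) -> Optional[str]:
    """Assign an email body to the workflow bucket its contents best match.

    Returns None when the email touches none of the four workflows.
    """
    text = body.lower()
    # Count how many of each category's keywords appear in the email.
    scores = {
        category: sum(keyword in text for keyword in keywords)
        for category, keywords in WORKFLOW_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

In a real deployment the keyword matcher would be replaced by an LLM classifier, but the interface is the same: email contents in, workflow bucket out.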
By looking at additional signals such as user identities, behaviour and communication patterns, the model can then determine whether an email in one of these categories is genuine or compromised.

Armorblox has done this by building on our close partnership with Microsoft to bolster its native email security with controls that stop targeted attacks, protect sensitive data, and save time on phishing incident response. Armorblox is part of the Microsoft Intelligent Security Association (MISA) and can run on Azure and connect to Exchange on-premises, as well as Microsoft 365 environments, to protect organisations against BEC, financial fraud, email account compromise, ransomware, data loss, social engineering, graymail, and vendor fraud. Account takeover attempts can be detected and stopped, and responses to user-reported phishing emails can be automated, saving significant time and resources for security teams.

Customers can save time on security operations by routing threats detected by Armorblox to Azure Sentinel, reducing the mean time to respond to threats. The integration also enables users to automate away between 75 and 97 per cent of manual

"I have no doubt that GPT and large language models will change cybersecurity forever"
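A minimal sketch of how a workflow category might be combined with identity and behaviour signals to flag a compromised email. Every signal name, weight and threshold here is a hypothetical assumption for illustration; this is not Armorblox's actual scoring logic.

```python
from dataclasses import dataclass, field

@dataclass
class EmailSignals:
    """Hypothetical per-email signals beyond the raw contents."""
    category: str                        # workflow bucket, e.g. "money"
    sender_domain: str
    usual_domains: set = field(default_factory=set)  # domains this sender normally uses
    first_time_sender: bool = False
    display_name_matches_exec: bool = False          # name mimics an executive

def risk_score(s: EmailSignals) -> float:
    """Combine signals into a 0.0-1.0 compromise risk (illustrative weights)."""
    score = 0.0
    if s.category in {"money", "credentials"}:
        score += 0.4  # email targets a high-value workflow
    if s.usual_domains and s.sender_domain not in s.usual_domains:
        score += 0.3  # sender is writing from an unusual domain
    if s.first_time_sender:
        score += 0.2  # no prior communication history
    if s.display_name_matches_exec:
        score += 0.3  # possible executive impersonation
    return min(score, 1.0)
```

An invoice email from a first-time sender on an unfamiliar domain whose display name mimics an executive would score at the top of the range, while routine mail from a known correspondent scores zero.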