Bringing artificial intelligence and GPT to cybersecurity

The cybersecurity industry should adopt large language models to protect against sophisticated cyberattacks, says Anand Raghavan at Armorblox

Guest contributor


With the announcement of GPT-4, it has become clear that transformer-based models can now perform increasingly complex tasks with improved precision. While this will help advance many fields of research, it also poses grave challenges as threat actors gain access to the same technology.

Much of the attention has focused on GPT’s potential for human-like interaction, but we should not overlook the risk of cybercriminals using this technology maliciously. For example, attackers can use it to quickly craft convincing messaging for targeted phishing, social engineering, executive impersonation, financial fraud attacks, and more.

As the latest IC3 report from the FBI shows, business email compromise (BEC), investment fraud, and ransomware remain the three biggest categories of email-based threats targeting organisations. All of these involve weaponising the contents and language of an email to compromise a human’s trust, manipulating the recipient into taking an action that harms both them and the organisation. The critical business workflows that attackers target usually fall into four categories: those that involve money (wire transfers, invoice payments, payroll), credentials (password resets, SharePoint emails), sensitive data (personally identifiable information, payment card industry data), and confidential data (intellectual property, internal strategy documents).

What is common across all of these workflows is that well-trained language models can categorise emails into these buckets based on their contents. By looking at additional signals such as user identities, behaviour and communication patterns, a model can then determine whether an email in one of these categories is genuine or compromised.
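To make the idea concrete, here is a minimal sketch of that two-stage approach: bucket an email by the business workflow it touches, then cross-check a behavioural signal before trusting it. The keyword lists, category labels’ matching rules and helper names are illustrative stand-ins for a trained language model, not Armorblox’s implementation.

```python
# Two-stage triage sketch: (1) bucket an email by workflow, (2) flag
# workflow emails whose sender has no prior history. Keyword matching
# stands in for a trained language model here.
from dataclasses import dataclass

WORKFLOW_KEYWORDS = {
    "money": ["wire transfer", "invoice", "payroll", "payment"],
    "credentials": ["password reset", "sharepoint", "verify your account"],
    "sensitive_data": ["ssn", "card number", "date of birth"],
    "confidential_data": ["roadmap", "term sheet", "strategy document"],
}

@dataclass
class Email:
    sender: str
    body: str

def classify_workflow(email: Email) -> str | None:
    """Return the workflow bucket the email touches, if any."""
    text = email.body.lower()
    for bucket, keywords in WORKFLOW_KEYWORDS.items():
        if any(k in text for k in keywords):
            return bucket
    return None

def is_suspicious(email: Email, known_senders: set[str]) -> bool:
    """Flag workflow emails coming from senders with no prior history."""
    bucket = classify_workflow(email)
    return bucket is not None and email.sender not in known_senders

# Example: a payment request from a lookalike, never-before-seen sender.
mail = Email("ceo@examp1e.com", "Urgent: please process this wire transfer today")
print(is_suspicious(mail, known_senders={"ceo@example.com"}))  # True
```

In production, the keyword matcher would be replaced by a language model and the single sender check by richer identity and communication-pattern baselines, but the control flow is the same.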

Armorblox has done this by building on its close partnership with Microsoft to bolster native email security with controls that stop targeted attacks, protect sensitive data, and save time on phishing incident response. Armorblox is part of the Microsoft Intelligent Security Association (MISA) and can run on Azure, connecting to both on-premises Exchange and Microsoft 365 environments to protect organisations against BEC, financial fraud, email account compromise, ransomware, data loss, social engineering, graymail, and vendor fraud. It can detect and stop account takeover attempts and automate responses to user-reported phishing emails, saving significant time and resources for security teams.

Customers can save time on security operations by routing threats detected by Armorblox to Azure Sentinel, reducing the mean time to respond. The integration also enables users to automate between 75 and 97 per cent of the manual work associated with user-reported phishing emails without creating any rules, and Armorblox removes malicious emails across all mailboxes with a single click. In addition, customers can apply forward-looking remediation actions that automatically protect against similar attacks in the future, freeing security teams to focus on threats that require human review.
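As an illustration of this kind of routing, the sketch below pushes a detection event into a Log Analytics workspace behind Azure Sentinel using the HTTP Data Collector API. The workspace ID, shared key, table name and event fields are placeholders; Armorblox’s actual integration may work differently.

```python
# Hedged sketch: send a custom detection record to a Sentinel-backed
# Log Analytics workspace via the HTTP Data Collector API.
import base64, hashlib, hmac, json
import urllib.request
from datetime import datetime, timezone

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder
SHARED_KEY = "<workspace-primary-key>"         # placeholder (base64)
LOG_TYPE = "ArmorbloxDetection"                # lands as ArmorbloxDetection_CL

def post_detection(event: dict) -> None:
    body = json.dumps([event]).encode("utf-8")
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    # Signature format required by the Data Collector API.
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\nx-ms-date:{date}\n/api/logs"
    )
    signature = base64.b64encode(
        hmac.new(base64.b64decode(SHARED_KEY),
                 string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode()
    req = urllib.request.Request(
        f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs"
        "?api-version=2016-04-01",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
            "Log-Type": LOG_TYPE,
            "x-ms-date": date,
        },
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on non-2xx responses

post_detection({"threat": "BEC", "mailbox": "cfo@contoso.com", "verdict": "quarantined"})
```

Once the records land in the workspace, Sentinel analytics rules and playbooks can act on them like any other alert source.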

Armorblox connects to Microsoft 365 over the Graph APIs, runs in parallel with any Microsoft licence, and complements an organisation’s native Microsoft investments. As organisations complete their digital transformation strategies and go all-cloud with Microsoft 365, they can cut down on redundant investments in secure email gateways.
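For context, the sketch below shows the kind of Microsoft Graph call an API-based integration relies on to read mail without sitting inline in the mail flow. It assumes an app registration with the Mail.Read application permission and a token obtained via the client credentials flow; the mailbox address is illustrative.

```python
# Minimal Microsoft Graph example: list the newest messages in a mailbox.
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<oauth2-access-token>"  # placeholder; acquire via client credentials

def list_messages(user: str, top: int = 10) -> list[dict]:
    """Fetch recent messages from a user's mailbox over the Graph API."""
    req = urllib.request.Request(
        f"{GRAPH}/users/{user}/messages"
        f"?$top={top}&$select=subject,from,receivedDateTime",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]

for msg in list_messages("cfo@contoso.com"):
    print(msg["receivedDateTime"],
          msg["from"]["emailAddress"]["address"],
          msg["subject"])
```

Because the integration reads and remediates mail over APIs rather than rewriting MX records, it can be deployed alongside existing mail flow without disruption.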

Large language models also fundamentally change how data protection is done inside organisations. Today’s data loss prevention solutions are riddled with false positives that drain valuable time and resources from security teams. Armorblox combines the traditional approach of custom regular expressions and identifiers with natural language techniques, matching a wide array of sensitive and custom policy identifiers while reducing false positives tenfold. It also leverages existing Microsoft investments to dynamically encrypt outbound emails based on configured policy actions, using the SmartHost configuration rather than acting as an inline gateway with its own encryption logic. In addition, Armorblox can use language models to automatically tag and classify data and documents, feeding this information back into Azure Information Protection over application programming interfaces.
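The sketch below illustrates that hybrid approach in miniature: a regular expression proposes candidate matches, and a second, context-aware pass suppresses those that are clearly benign. A crude keyword heuristic stands in for the language model here, and the pattern and phrases are illustrative only.

```python
# Hybrid DLP sketch: regex proposes candidates, a context check (standing
# in for a language model) suppresses likely false positives.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
BENIGN_CONTEXT = ("order number", "ticket", "tracking", "case id")

def dlp_hits(text: str) -> list[str]:
    """Return regex matches that also survive the context check."""
    hits = []
    for match in SSN_PATTERN.finditer(text):
        # Examine a window of surrounding text, as a language model would.
        window = text[max(0, match.start() - 40):match.end() + 40].lower()
        if not any(phrase in window for phrase in BENIGN_CONTEXT):
            hits.append(match.group())
    return hits

print(dlp_hits("Employee SSN 123-45-6789 attached."))         # flagged
print(dlp_hits("Your order number is 123-45-6789, thanks!"))  # suppressed
```

The regex alone would flag both strings; the context pass is what cuts the false-positive rate, which is the core of the hybrid argument above.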

I have no doubt that GPT and large language models will change cybersecurity forever, and new use cases will continue to arise. The application possibilities are endless: policy-creation copilots, summarisation of logs, detecting drift in configuration settings across the network. Organisations should look to adopt large language models and AI in their security stack if they are to stay ahead of bad actors, who will be equally quick to leverage these advanced AI tools.

The GPT chapter is just beginning, and much more will be written and said about regulation, compliance, GPT-generated threats, and beyond. It is an exciting time to be part of the cybersecurity sphere as we usher in a new era of AI.

Anand Raghavan is co-founder and chief product officer at Armorblox   

This article was originally published in the Spring 2023 issue of Technology Record. To get future issues delivered directly to your inbox, sign up for a free subscription.
