Technology Record - Issue 29: Summer 2023

INTERVIEW

Avoiding an AI apocalypse

CyberProof's Yuval Wollman explains why global collaboration and regulations are key to mitigating the risks associated with the exponential growth of generative artificial intelligence technology

BY REBECCA GIBSON

From stories about deepfakes and disinformation to catastrophic privacy violations, widespread job losses, autonomous weapons and robots posing an existential threat to humanity, alarmist media headlines about the risks of artificial intelligence technology have been growing over the past couple of years as the technology gains traction in various industry sectors. One of the latest subfields of AI to dominate the headlines is generative AI, which uses machine learning algorithms to generate text, images, music, code, graphics and other outputs that resemble human-created content.

"Generative AI has the potential to transform how we live and work, but it's growing exponentially and now that more people are experimenting with the technology, it's impossible to predict how it will evolve and what the consequences will be," says Yuval Wollman, a former Israeli intelligence officer and now president of advanced managed detection and response service provider CyberProof, a UST company. "We've heard many stories of people using generative AI to do everything from creating marketing campaigns to writing code, translating text and improving customer services and interactions. However, we've also seen instances where chatbots have generated racist or sexist comments, as well as threat actors using platforms built on large language models to steal data from consumers. It's this uncertainty that makes people fearful of AI technology."

According to Wollman, one of the biggest challenges related to generative AI is that it poses significant new cybersecurity risks for organisations and their customers.

"The technology has altered and expanded the attack surface and introduced new threats that are coming from both internal and external sources," he explains. "Employees or third-party partners using generative AI to carry out daily tasks may unintentionally introduce new vulnerabilities to an organisation's IT and security infrastructure. For example, they might create a new data risk by sharing information with OpenAI's ChatGPT or using the code the platform generates without analysing it in a secure environment first."

Meanwhile, the open-source model favoured by generative AI developers makes it easy for malicious threat actors to access the tools and start training the models to build new threat capabilities and attack vectors. This is accelerating what Wollman refers to as the "asymmetric arms race" between threat actors capitalising on the technology to launch cyberattacks and those aiming to defend organisations against these strikes.

"Over the past 20 years we've seen a rapid and intentional spillover of state-level knowledge to threat actors, often proxies…"

"It's crucial to discuss the responsible use of AI on a global scale"

Image: Vectorstock.com/ElizaLIV
