How to avoid an AI apocalypse, according to Yuval Wollman

The president of managed detection and response provider CyberProof explains why global collaboration and regulation are key to mitigating the risks posed by the exponential growth of generative artificial intelligence technology

By Rebecca Gibson


From deepfakes and disinformation to catastrophic privacy violations, widespread job losses, autonomous weapons and the prospect of machines posing an existential threat to humanity, alarmist media headlines about the risks of artificial intelligence have multiplied over the past couple of years as the technology gains traction across industry sectors. One of the latest subfields to dominate those headlines is generative AI, which uses machine learning algorithms to produce text, images, music, code, graphics and other outputs that resemble human-created content.

“Generative AI has the potential to transform how we live and work, but it’s growing exponentially and, now that more people are experimenting with the technology, it’s impossible to predict how it will evolve and what the consequences will be,” says Yuval Wollman, a former Israeli intelligence officer and now president of CyberProof, a UST company that provides advanced managed detection and response services.

“We’ve heard many stories of people using generative AI to do everything from creating marketing campaigns to writing code, translating text and improving customer services and interactions. However, we’ve also seen instances where chatbots have generated racist or sexist comments, as well as threat actors using platforms built on large language models to steal data from consumers. It’s this uncertainty that makes people fearful of AI technology.”

According to Wollman, one of the biggest challenges related to generative AI is that it poses significant new cybersecurity risks for organisations and their customers.

“The technology has altered and expanded the attack surface and introduced new threats that are coming from both internal and external sources,” he explains. “Employees or third-party partners using generative AI to carry out daily tasks may unintentionally introduce new vulnerabilities to an organisation’s IT and security infrastructure. For example, they might create a new data risk by sharing information with OpenAI’s ChatGPT or using the code the platform generates without analysing it in a secure environment first.”
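One way security teams can hedge against the second risk is to vet generated code before anyone runs it. The sketch below is a hypothetical illustration, not a CyberProof or OpenAI tool: it uses Python’s standard ast module to flag calls and imports that commonly warrant review, and its deny-lists are illustrative assumptions rather than a complete policy.

```python
import ast

# Hypothetical deny-lists for illustration only; a real review process
# would be far broader and tuned to the organisation's environment.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}
RISKY_MODULES = {"os", "subprocess", "socket", "ctypes"}

def flag_risky_constructs(source: str) -> list[str]:
    """Return human-readable warnings for risky constructs in generated code."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag direct calls to dangerous built-ins such as eval() or exec().
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                warnings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Flag imports of modules that grant shell, network or low-level access.
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            if isinstance(node, ast.Import):
                modules = [alias.name for alias in node.names]
            else:
                modules = [node.module or ""]
            for module in modules:
                if module.split(".")[0] in RISKY_MODULES:
                    warnings.append(f"line {node.lineno}: import of {module}")
    return warnings

if __name__ == "__main__":
    snippet = 'import subprocess\nsubprocess.run(["curl", "http://example.com"])'
    for warning in flag_risky_constructs(snippet):
        print(warning)
```

Static checks like this only catch the obvious cases; the safer pattern Wollman alludes to is to analyse and execute generated code in an isolated environment before it touches production systems.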

Meanwhile, the open-source model favoured by generative AI developers makes it easy for malicious threat actors to access the tools and start training the models to build new threat capabilities and attack vectors. This is accelerating what Wollman refers to as the “asymmetric arms race” between threat actors capitalising on the technology to launch cyberattacks and those aiming to defend organisations against these strikes.

“Over the past 20 years we’ve seen a rapid and intentional spillover of state-level knowledge to threat actors, often proxies acting on behalf of governments to carry out covert state-sponsored attacks, such as disinformation campaigns during democratic elections,” says Wollman. “Generative AI makes it even easier for these malicious actors to quickly access the latest knowledge and devise new attack vectors, which is changing the threat surface more rapidly than ever before.

“This puts the cyber defence community in an even more challenging position, as the frameworks we currently use to identify and analyse threat actors’ tactics and techniques are becoming less effective. It will therefore become increasingly difficult to predict which threats will arise and to develop solutions that protect against them. How can defenders prepare for a threat that doesn’t exist yet?”

Improving collaboration between governments, public sector organisations, academia and the private sector will be key to helping the defence community catch up in the arms race.

“We’ve recently seen very good signs that the public and private sectors are willing to join forces and share cybersecurity and threat data for the greater good,” says Wollman. “Governments are sharing intelligence data with the private sector, and experts who were once part of state-level intelligence communities are using their knowledge to create their own cybersecurity solutions for private and public sector organisations.”

Technology leaders are sharing their intelligence information and cybersecurity knowledge and resources too. “Microsoft and Google, for example, have publicly shared intelligence information with the Ukrainian government since the Russian invasion, while some private sector companies have provided cyber defence training to the country,” says Wollman. “This type of information and knowledge sharing should be taking place at a wider scale.”

He adds that it is also essential to establish a solid regulatory framework to govern the use of generative AI.

“Allowing everyone to use AI freely could have far-reaching negative consequences,” says Wollman. “To ensure it transforms our world for the better, it’s crucial to discuss the responsible use of the technology on a global scale. We must develop policies and regulations at state, national and global levels.”

Wollman is reassured by the fact that technology leaders, industry bodies, governments and other entities have openly voiced concerns about the potential misuse of AI and are advocating for such regulations to be implemented at scale.

“The USA, the UK, the European Union and others in the Western world are already considering how they can regulate AI to protect privacy and are proactively collaborating with technology leaders like Microsoft and OpenAI,” he says. “There’s currently less transparency about AI development in the Eastern world, but ideally everyone needs to be involved in this discussion because we need a global solution.”

While organisations wait for these regulations to be established, they must implement their own internal policies to protect their critical assets and data from the security risks posed by generative AI.

“The policies should outline how employees can use generative AI in the workplace to ensure they interact with it in a compliant, ethical and responsible way,” says Wollman. “They should state which specific technologies or platforms employees can use, what data can and can’t be shared, and more. Organisations should also educate employees about the risks of generative AI, as well as the measures they can take to protect the business from attack.”
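A policy like the one Wollman describes can be partially enforced in software. The following is a minimal sketch under stated assumptions: the approved-platform allow-list, the sensitive-data patterns and the internal project-tag convention are all invented for illustration and would in practice come from the organisation’s own governance process.

```python
import re
from urllib.parse import urlparse

# Hypothetical policy values; real ones would be set by the organisation.
APPROVED_HOSTS = {"approved-genai.example.com"}
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal project tag": re.compile(r"\bPROJ-\d{4}\b"),  # assumed convention
}

def check_prompt(endpoint: str, prompt: str) -> list[str]:
    """Return policy violations for a prompt bound for a generative AI service."""
    violations = []
    host = urlparse(endpoint).hostname or ""
    # Block any platform that is not on the organisation's approved list.
    if host not in APPROVED_HOSTS:
        violations.append(f"platform not approved: {host}")
    # Flag prompts that appear to contain restricted data classes.
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"prompt appears to contain a {label}")
    return violations

if __name__ == "__main__":
    issues = check_prompt(
        "https://chat.unvetted.example.net/api",
        "Summarise PROJ-1234 and email the result to alice@corp.example",
    )
    print("\n".join(issues) if issues else "prompt allowed")
```

Automated checks of this kind complement, rather than replace, the employee education Wollman recommends.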

Wollman also encourages businesses and cybersecurity solutions providers to explore the innovative ways generative AI could be used to help build better cyber defences.

“The technology can help us to develop more robust systems for capturing data and identifying, triaging and responding to threats,” says Wollman. “At CyberProof, we’re already exploring how it can be used to analyse data more thoroughly, making it quicker and easier for us to uncover common threat patterns and develop more effective detection and prevention playbooks to mitigate specific risks. We’re testing several new use cases and we hope to start deploying them for our customers soon.”
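CyberProof has not published details of these use cases, but a generic sketch of LLM-assisted alert triage might look like the following. It uses the OpenAI Python SDK; the model name, prompt and alert schema are illustrative assumptions, not CyberProof’s implementation.

```python
# A generic sketch of LLM-assisted alert triage using the OpenAI Python SDK.
# Model name, prompt and alert schema are assumptions made for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def triage_alert(alert: dict) -> str:
    """Ask a language model to summarise an alert and suggest next steps."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a SOC analyst assistant. Classify the alert's "
                        "severity, note matching threat patterns and suggest a "
                        "playbook step. Be concise."},
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_alert({
        "source": "edr",
        "host": "finance-ws-12",
        "event": "powershell spawned by excel.exe; outbound connection to rare domain",
    }))
```

In a production setting, output like this would feed an analyst’s triage queue rather than trigger an automated response, keeping a human in the loop.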

Despite the unpredictability of generative AI, Wollman is cautiously optimistic that the benefits of the technology will outweigh the risks in the long term.

“Beyond cybersecurity, one of people’s biggest fears is that AI and robots will replace humans in the workplace, but history shows us that technological revolutions tend to change the structure of the workforce rather than shrink it,” he says. “AI is transforming multiple operational processes, which will eliminate some basic manual jobs, but it will open up new roles too. For example, we’re likely to need fewer code developers but more systems engineers to help us connect and integrate generative AI tools. The market will therefore rebalance over time, so organisations must focus on upskilling or reskilling the workforce with the capabilities and knowledge they need to succeed in an AI-led world.

“By working together and establishing strict policies around its use, we can control the development of generative AI and implement it securely and responsibly so it becomes a valuable tool that revolutionises how we live and work.”

This article was originally published in the Summer 2023 issue of Technology Record.
