Synthetic media: the new cybersecurity threat facing businesses

With misinformation and AI-based scams on the rise, businesses will need to consider how they can defend themselves and tackle these issues

By David Macfarlane on 18 August 2022

The pandemic proved to be fertile ground for cybercriminals. With remote working becoming the norm, organisations found themselves having to deal with unsecured home connections and increased use of personal devices for work purposes – stretching IT teams and leaving networks exposed.

As we enter a new era of hybrid working, things will only get more complicated – not least because cyberattacks, and the technology used to carry them out, are getting more sophisticated.

Just as thousands of businesses are adopting artificial intelligence (AI) to automate processes, deploy chatbots and personalise experiences, malicious actors are using it to undermine governments, businesses and individuals. AI is quite literally becoming a weapon, and its ammunition is misinformation.

Nina Schick, author, advisor and deepfake expert, recently spoke at Frontiers, Gamma’s annual flagship event, which explores future trends and technologies in the communications and collaboration industry. For its 2022 agenda, Gamma examined stories and insights on the growing misinformation campaigns that can destroy reputations and influence national elections.

Schick dubbed this new global threat the “Infocalypse” – a severe crisis of misinformation that threatens the way our society works.

“The incredible capability of AI to create fake media – commonly known as deepfakes – has only been around for the past five years or so, but it’s already reshaping our information ecosystem,” says Schick. “Media manipulation is becoming smarter and more accessible. Deepfakes are emerging at a time when video is becoming one of the most important means of communication.” 

It’s predicted that in the next couple of years, video streaming and downloads will make up more than 82 per cent of all internet traffic, and over 5.6 billion people will be producing and sharing video content online. 

This trend will only speed up the spread of deepfakes, to the point they will become accessible to anyone – from TikTokers to cybercriminals. “By 2030, a TikTok user could create deepfake content as convincing as that of a Hollywood studio,” says Schick. “Entire industries, from marketing to communications, are going to be completely transformed as synthetic media becomes the norm. 

“We are at the start of an AI-powered revolution, and I can say, with full confidence, that the future is synthetic.” 

In 2019, the Wall Street Journal reported an “unusual cybercrime case” where cybercriminals used AI-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of $243,000. Unusual as it might have seemed at the time, the use of deepfakes to scam businesses has become increasingly common. 

The good news is that experts like Schick, governments and technology organisations are doing all they can to raise awareness, introduce regulations around the use of deepfakes, and create technological solutions to the problem.

Schick says: “When we consider the technological solutions, there are two ways to look at it: detection and provenance, or authentication. On the authentication side, some major technology giants are hoping to set industry standards to authenticate media from the point of capture. 

“On the detection side, while there are many efforts already under way, given the nature of the AI behind synthetic media, every time the detection capability gets better, so does the generation. We still don’t even know if AI will become so smart that it will be able to fool any detector.” 

While that question can’t be answered yet, Microsoft has been at the forefront of this fight. In 2020, it released a video authenticator tool that analyses whether a piece of content has been manipulated, as well as a built-in Microsoft Azure tool that enables content producers to add digital certificates to their content, thus ensuring its authenticity.
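The provenance approach Schick and Microsoft describe rests on a simple cryptographic idea: sign a hash of the media at the point of capture, then verify later that the bytes are unchanged. The sketch below is a hypothetical illustration of that principle only – it is not Microsoft’s tool, and real provenance systems use public-key certificates rather than the shared-secret HMAC used here to keep the example self-contained.

```python
import hashlib
import hmac

# Stand-in for a publisher's signing key (hypothetical; real systems use
# public-key certificates issued to the content producer).
SECRET = b"publisher-signing-key"

def sign_content(media_bytes: bytes) -> str:
    """Sign the SHA-256 digest of the content, returning a hex signature."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, signature: str) -> bool:
    """True only if the content is byte-for-byte unchanged since signing."""
    return hmac.compare_digest(sign_content(media_bytes), signature)

# A signed clip verifies; any tampering – even one byte – breaks the check.
original = b"\x00\x01example-video-frame-data"
tag = sign_content(original)
print(verify_content(original, tag))         # True
print(verify_content(original + b"x", tag))  # False
```

The design point is that the signature travels with the content: a deepfake derived from the clip no longer matches the signed digest, so a verifier can flag it without needing to detect the manipulation itself.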

For businesses, this new threat raises questions about cybersecurity practices and approaches. Investing in detection software to scan video and audio communications will become paramount, especially for enterprises in the financial, legal and public sectors.

While detection and authentication technology might not be an option for everyone, something as simple as staff training and awareness could make a noticeable difference in protecting businesses against such scams.

“As our lives become increasingly video-led, we’re going to struggle to know what’s real,” says Schick. “The only way to prepare for this change is to understand the enormity of the paradigm shift that’s under way and raise awareness of the threat we’re facing.” 

One thing is certain – the future is synthetic, whether we’re ready for it or not.

Watch all the Frontiers sessions at: www.gamma.co.uk/gx 

David Macfarlane is a managing director at Gamma 

This article was originally published in the Summer 2022 issue of Technology Record. To get future issues delivered directly to your inbox, sign up for a free subscription.
