Technology Record - Issue 29: Summer 2023

were once part of state-level intelligence communities are using their knowledge to create their own cybersecurity solutions for private and public sector organisations.”

Technology leaders are also sharing intelligence information, cybersecurity knowledge and resources. “Microsoft and Google, for example, have publicly shared intelligence information with the Ukrainian government since the Russian invasion, while some private sector companies have provided cyber defence training to the country,” says Wollman. “This type of information and knowledge sharing should be taking place on a wider scale.”

He adds that it is also essential to establish a solid regulatory framework to govern the use of generative AI. “Allowing everyone to use AI freely could have far-reaching negative consequences,” says Wollman. “To ensure it transforms our world for the better, it’s crucial to discuss the responsible use of the technology on a global scale. We must develop policies and regulations at a state, country and global level.”

Wollman is reassured by the fact that technology leaders, industry bodies, governments and other entities have openly voiced concerns about the potential misuse of AI and are advocating for such regulations to be implemented at scale. “The USA, the UK, the European Union and others in the Western world are already considering how they can regulate AI to protect privacy and are proactively collaborating with technology leaders like Microsoft and OpenAI,” he says. “There’s currently less transparency about AI development in the Eastern world, but ideally everyone needs to be involved in this discussion because we need a global solution.”

While organisations wait for these regulations to be established, they must implement their own internal policies to protect their critical assets and data from the security risks posed by generative AI. “The policies should outline how employees can use generative AI in the workplace to ensure they interact with it in a compliant, ethical and responsible way,” says Wollman. “They should state which specific technologies or platforms employees can use, what data can and can’t be shared, and more. Organisations should also educate employees about the risks of generative AI, as well as the measures they can take to protect the business from attack.”

Wollman also encourages businesses and cybersecurity solutions providers to explore the innovative ways generative AI could be used to help build better cyber defences.

“By working together and establishing strict policies around its use, we can control the development of generative AI”
