VIEWPOINT

Plugging humans into the AI playground

ADAM LIEBERMAN: FINASTRA

Financial software solution provider Finastra is leveraging generative artificial intelligence to enhance creativity and productivity by adopting Microsoft's Bing Chat Enterprise for its employees.

"Bing Chat acts as our own digital assistant, providing us with vast amounts of knowledge at our fingertips"

OpenAI's ChatGPT has created a completely new artificial intelligence arena. This marks the first time a large language model (LLM) and its generative capabilities have become available for mass consumption. The surge in popularity of ChatGPT has compelled businesses worldwide to consider how it affects their workforce.

However, with any new technology come new potential risks. Whether the concerns involve security and data, the blurred lines of intellectual property (IP), accentuated bias or 'hallucinations', all companies need to implement meticulous risk mitigation strategies for any potential scenario.

At Finastra, we recognise the value of generative AI in enhancing creativity and productivity when rolled out in a safe and secure way. We searched high and low for the right solutions to help us unlock our true potential. That's why we chose to become an early adopter of Microsoft 365 Copilot and roll out Bing Chat Enterprise to all our employees. So, how did we do it and what are we using it for?

For our customers, our security is their security. They rely on us to provide industry-leading solutions without compromising on security. As a long-standing partner of Microsoft, with many of our solutions deployed on Microsoft Azure, we have always valued the robust level of security, flexibility and scalability it offers.

Microsoft's Bing Chat Enterprise is an AI-powered chat tool. Trained on data from the web, it searches the internet for information in real time. Yet it operates as an internal tool: our user and business data remain within our own networks.
Chat data is not saved, nor is it used to further train any underlying models. We set our own parameters to ensure responsible usage and to protect sensitive information.

Every employee plays a role in safeguarding our security, which is why we also published our internal Enterprise Generative AI policy. This includes ensuring that we all validate information, do not share restricted data (such as personal, Finastra IP or customer data) and do not generate content that infringes IP rights. We have also emphasised the importance of monitoring outputs for potential biases and taking corrective action to mitigate them.

Our committees and working groups oversee and manage our governance frameworks, risks, compliance and data management. I personally lead our AI and machine learning (ML) team, known as GenAI Labs. We support generative AI tools at Finastra and help to develop additional standards, procedures and guidelines alongside the risk and legal teams.