In AI we must trust, says Gartner’s Avivah Litan

Analyst details the importance of trust and security in the future of generative artificial intelligence  

Alice Chambers


The progress of generative artificial intelligence continues to steamroll through many aspects of life, and with it, both lawmakers and those responsible for developing the technology have called for a pause to consider the safety of AI-based systems.

But “the reality is that generative AI development is not stopping”, according to Avivah Litan, vice president analyst at Gartner. “Organisations need to act now to formulate an enterprise-wide strategy for AI trust, risk and security management (AI TRiSM),” she says, in a Q&A on the Gartner website. “There is a pressing need for a new class of AI TRiSM tools to manage data and process flows between users and companies who host generative AI foundation models.” 

Litan adds: “AI developers must urgently work with policymakers, including new regulatory authorities that may emerge, to establish policies and practices for generative AI oversight and risk management.”  

Some of the new risks raised by generative AI, according to Litan, are ‘hallucinations’ (AI responses that do not seem to be justified by data), fabrications, deepfakes, data privacy, copyright issues and cybersecurity concerns. 

But there are actions enterprise leaders can take now. 

“It’s important to note that there are two general approaches to leveraging ChatGPT and similar applications,” says Litan. “Out-of-the-box model usage leverages these services as-is, with no direct customisation. A prompt engineering approach uses tools to create, tune and evaluate prompt inputs and outputs. 

“For out-of-the-box usage, organisations must implement manual reviews of all model output to detect incorrect, misinformed or biased results. Establishing a governance and compliance framework for enterprise use of these solutions is key, including clear policies that prohibit employees from asking questions that expose sensitive organisational or personal data.  

“Organisations should monitor unsanctioned uses of ChatGPT and similar solutions with existing security controls and dashboards to catch policy violations. For example, firewalls can block enterprise user access, security information and event management systems can monitor event logs for violations, and secure web gateways can monitor disallowed API calls.  
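As a rough illustration of that last point, the Python sketch below scans a gateway or proxy log for calls to generative AI hosts that fall outside a corporate allow-list. Everything in it, from the log columns to the host names and file paths, is a hypothetical example rather than a Gartner recommendation or any vendor's implementation.

```python
# Illustrative sketch only: flag proxy/gateway log entries that call
# generative AI endpoints outside a hypothetical corporate allow-list.
import csv

# Hypothetical policy: only the sanctioned enterprise endpoint is permitted.
ALLOWED_HOSTS = {"api.sanctioned-ai.example.com"}
GENAI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_policy_violations(log_path: str) -> list[dict]:
    """Return log rows where a user called an unsanctioned generative AI host."""
    violations = []
    with open(log_path, newline="") as f:
        # Assumed log format: CSV with columns timestamp, user, host, url.
        for row in csv.DictReader(f):
            host = row.get("host", "")
            if host in GENAI_HOSTS and host not in ALLOWED_HOSTS:
                violations.append(row)
    return violations

if __name__ == "__main__":
    for hit in find_policy_violations("gateway_log.csv"):
        print(f"{hit['timestamp']} {hit['user']} -> {hit['host']}")
```

In practice this kind of check would live inside existing security information and event management tooling rather than a standalone script; the point is simply that policy violations can be surfaced from logs the organisation already collects.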

“For prompt engineering usage, all of these risk mitigation measures apply. Additionally, steps should be taken to protect internal and other sensitive data used to engineer prompts on third-party infrastructure. Create and store engineered prompts as immutable assets.  

“These assets can represent vetted engineered prompts that can be safely used. They can also represent a corpus of fine-tuned and highly developed prompts that can be more easily reused, shared or sold.”  
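The idea of storing engineered prompts as immutable, reusable assets lends itself to a simple sketch. The Python below is purely illustrative and assumes nothing about any particular toolchain: each prompt is captured as a frozen record with a content hash and appended to a write-once store, so teams can verify that a vetted prompt has not been altered before reusing, sharing or selling it.

```python
# Illustrative sketch only: treat an engineered prompt as an immutable,
# content-addressed asset that can be vetted once and reused safely.
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen makes the record immutable after creation
class PromptAsset:
    name: str
    template: str
    version: str
    created_at: str
    # Hash of the template text lets consumers verify the prompt is unmodified.
    sha256: str = field(default="")

def create_prompt_asset(name: str, template: str, version: str) -> PromptAsset:
    digest = hashlib.sha256(template.encode("utf-8")).hexdigest()
    return PromptAsset(
        name=name,
        template=template,
        version=version,
        created_at=datetime.now(timezone.utc).isoformat(),
        sha256=digest,
    )

def append_to_store(asset: PromptAsset, path: str = "prompt_assets.jsonl") -> None:
    """Append-only JSONL store: existing records are never rewritten."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(asset)) + "\n")

if __name__ == "__main__":
    asset = create_prompt_asset(
        name="summarise-earnings-call",
        template="Summarise the following transcript in five bullet points: {transcript}",
        version="1.0.0",
    )
    append_to_store(asset)
    print(asset.sha256)
```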

Read the full Q&A with Avivah Litan.
