Securing agentic AI in retail

Zebra Technologies’ Srikrishna Shankavaram says that while agents bring a wealth of opportunity to retailers, they should be deployed safely and responsibly   

By Guest contributor


Agentic AI systems go beyond traditional machine learning and chatbots: they are capable of intelligently automating real-world workplace processes end to end. These systems operate based on goals, not just instructions. They reason, decide and act, which is why industries like retail are putting agents into the hands of their frontline employees. AI agents are making those employees more connected, giving them greater visibility into inventory, sales opportunities and customer requests, and intelligently automating tasks on the shop floor.

For example, an agent can interpret a customer’s return request and automatically trigger the associated logistics workflows: initiating return approval, notifying the warehouse and updating stock levels in near real time. In effect, the agent acts as an autonomous digital worker that augments the frontline employee.
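The return flow described above can be sketched as a simple pipeline. This is a minimal, illustrative sketch: the system names, approval rules and function signatures are assumptions for the example, not a real retail API.

```python
from dataclasses import dataclass

@dataclass
class ReturnRequest:
    order_id: str
    sku: str
    quantity: int
    reason: str

def handle_return(request: ReturnRequest, inventory: dict) -> list[str]:
    """Hypothetical end-to-end return workflow; records each step taken."""
    steps = []
    # 1. Initiate return approval: auto-approve routine reasons,
    #    escalate anything else to a human (a hard boundary for the agent).
    if request.reason in {"wrong size", "changed mind", "damaged"}:
        steps.append(f"approved return for order {request.order_id}")
    else:
        steps.append(f"escalated order {request.order_id} for human review")
        return steps
    # 2. Notify the warehouse to expect the incoming item.
    steps.append(f"notified warehouse: expect {request.quantity} x {request.sku}")
    # 3. Update stock levels in near real time.
    inventory[request.sku] = inventory.get(request.sku, 0) + request.quantity
    steps.append(f"stock for {request.sku} now {inventory[request.sku]}")
    return steps
```

Note the early return on escalation: the agent completes what it is authorised to do and hands anything ambiguous to a person, rather than improvising.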

Agents can also monitor inventory across locations and autonomously place supplier orders based on current trends. This kind of real-time optimisation helps prevent overstocking or stockouts without manual intervention. 

In addition, agents can coordinate promotional activity by updating pricing across e-commerce, point-of-sale and marketing systems. This ensures consistency and rapid responses during time-sensitive campaigns. Some agents can even pull data from internal dashboards and external tools to create concise executive summaries, surfacing key insights for decision-makers.  

Agentic AI unlocks massive value, but it also increases the system’s responsibility. These agents take actions that can directly affect operations, revenue and customer experience. There are real-world concerns that developers, IT and operational technology (OT) leaders need to be aware of and address. For instance, malicious inputs from customers or attackers can cause agents to behave unpredictably, altering orders or issuing refunds. Agents may also repeat failed tasks, reveal confidential data or interfere with firewalls and access controls. These are not theoretical concerns; they are practical risks that grow with autonomy. The solution is not to avoid agentic AI but to deploy it with secure, observable limits and governed design, and to collaborate with partners who can provide the necessary agents, implementation, IT and developer support.

To deploy agentic AI responsibly, IT and OT leaders in industries like retail must embrace a lifecycle approach that balances innovation with control. This begins with defining clear agent boundaries: be explicit about what the agent is allowed to do autonomously, what requires human authorisation, and what it should never attempt.

Whether it’s initiating refunds, accessing customer data, or editing product listings, hard lines need to be drawn around the agent’s authority to prevent scope creep. 
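Those hard lines can be enforced in code as an explicit authority check. The action names and the three-tier split below are illustrative assumptions; the key design choice is the default-deny fallthrough, which is what actually prevents scope creep.

```python
# Illustrative authority tiers for a retail agent (names are assumptions).
ALLOWED_AUTONOMOUS = {"check_inventory", "update_listing_copy"}
REQUIRES_HUMAN = {"issue_refund", "access_customer_data"}
FORBIDDEN = {"modify_access_controls", "delete_order_history"}

def authorise(action: str, human_approved: bool = False) -> bool:
    """Return True only if the action is within the agent's authority."""
    if action in FORBIDDEN:
        return False              # hard line: never attempt, even if approved
    if action in REQUIRES_HUMAN:
        return human_approved     # human-in-the-loop gate
    if action in ALLOWED_AUTONOMOUS:
        return True               # safe to act without a human
    return False                  # default deny: unknown actions are refused
```

Checking `FORBIDDEN` before the human-approval gate matters: some actions should stay off-limits even when a person signs off.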

Next, threat-model early in the design process using industry standards, and think like an adversary: how could someone trick the agent? Could it be misused internally or externally? Could it escalate its access? Mapping out abuse scenarios in advance helps organisations identify controls before the agent ever sees production.

Look at hardening the prompts and internal logic the agent relies on. Avoid building agents that are overly general or capable of improvising beyond their business intent. Introducing guardrails for how agents interpret instructions, reason through tasks and make decisions is critical for safe autonomy.

Organisations should also test agents collaboratively across teams before going live. Involve AI developers, operations, business stakeholders and security teams. This cross-functional testing helps uncover blind spots and ensures the agent performs safely across real-world use cases. 

Finally, monitor and retrain agents post-launch. Behaviour can drift over time, even in systems without direct learning loops. Set up real-time monitoring and observability pipelines, performance thresholds, and retraining checkpoints. Treat agents like evolving operational systems, not static deployments. 
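One way to make "performance thresholds and retraining checkpoints" concrete is a rolling success-rate monitor. This is a minimal sketch; the window size, threshold and the notion of "success" are assumptions that a real deployment would define per use case.

```python
from collections import deque

class AgentMonitor:
    """Flags an agent for retraining when its rolling success rate drifts
    below a threshold. Window and threshold values are illustrative."""

    def __init__(self, window: int = 100, threshold: float = 0.95):
        self.outcomes = deque(maxlen=window)  # rolling window of task outcomes
        self.threshold = threshold

    def record(self, success: bool) -> None:
        """Log the outcome of one completed agent task."""
        self.outcomes.append(success)

    def needs_retraining(self) -> bool:
        """True once the rolling success rate falls below the threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet to judge drift
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate < self.threshold
```

A rolling window matters here because drift is gradual: a lifetime average would dilute a recent decline, while the window surfaces it quickly.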

Developers, IT and OT leaders in industries like retail who partner with AI providers and design agentic AI with security and governance from day one will lead – not just in innovation, but also in trust and resilience. 

Seeking out AI partners who can provide industry-trained, ready-to-use AI agents delivers a faster return on investment. It also gives organisations the scope to develop and add more agents, using the partner’s AI platform and tooling to create, deploy and maintain agentic solution components across a product portfolio, which makes it easier to build AI applications and solutions.

Businesses should prioritise partners with industry knowledge and a track record of working closely with developers to discover which tools are useful for a full end-to-end AI pipeline. Such partners can support developers and software partners in collecting data, training AI models and deploying them across customer devices with an AI software development kit and pre-trained models. AI APIs for cloud, hybrid and edge solutions provide an easy-to-use ecosystem that integrates into any business application.

Srikrishna Shankavaram

Srikrishna Shankavaram is principal cybersecurity architect for the chief technology office at Zebra Technologies 

Discover insights from these partners and more in the Autumn 2025 issue of Technology Record.



  • ©2025 Tudor Rose. All Rights Reserved. Technology Record is published by Tudor Rose with the support and guidance of Microsoft.