VIEWPOINT

Securing agentic AI in retail

SRIKRISHNA SHANKAVARAM: ZEBRA TECHNOLOGIES

While agents bring a wealth of opportunity to retailers, they should be deployed safely and responsibly.

Agentic AI systems go beyond traditional machine learning and chatbots: they are capable of intelligently automating real-world workplace processes end to end. These systems operate based on goals, not just instructions. They reason, decide and act, which is why industries like retail are putting agents into the hands of their frontline employees. AI agents are making those employees more connected, giving them greater visibility into inventory management, sales opportunities and requests, and intelligently automating tasks on the shop floor.

For example, an agent can interpret a customer's return request and automatically trigger the associated logistics workflows. This includes initiating return approval, notifying the warehouse and updating stock levels in near real time. The agent becomes a digital autonomous worker augmenting the frontline worker.

Agents can also monitor inventory across locations and autonomously place supplier orders based on current trends. This kind of real-time optimisation helps prevent overstocking or stockouts without manual intervention. In addition, agents can coordinate promotional activity by updating pricing across e-commerce, point-of-sale and marketing systems, ensuring consistency and rapid responses during time-sensitive campaigns. Some agents can even pull data from internal dashboards and external tools to create concise executive summaries, surfacing key insights for decision-makers.

Agentic AI unlocks massive value, but it also raises the level of responsibility placed on the system. These agents take steps that can directly affect operations, revenue and customer experience. There are some real-world concerns that developers, IT and operational technology (OT) leaders need to be aware of and address.
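The return-request workflow described above can be sketched in a few lines. This is a minimal illustration, not Zebra's implementation: every function, field and reason code here is a hypothetical assumption chosen to show the shape of an end-to-end agent action.

```python
# Hypothetical sketch of an agent handling a return request end to end.
# All names (ReturnRequest, handle_return, the reason codes) are
# illustrative assumptions, not a real retail API.
from dataclasses import dataclass


@dataclass
class ReturnRequest:
    order_id: str
    sku: str
    quantity: int
    reason: str


def handle_return(request: ReturnRequest) -> dict:
    """Interpret a return request and trigger the downstream workflow:
    approval, warehouse notification and stock update, or escalation."""
    # Auto-approve only well-understood reasons; everything else goes
    # to a human, reflecting the bounded autonomy discussed later.
    approved = request.reason in {"damaged", "wrong_item", "unwanted"}
    actions = []
    if approved:
        actions.append(f"approve_return:{request.order_id}")
        actions.append(f"notify_warehouse:{request.sku}x{request.quantity}")
        actions.append(f"restock:{request.sku}+{request.quantity}")
    else:
        actions.append(f"escalate_to_human:{request.order_id}")
    return {"approved": approved, "actions": actions}
```

In a real deployment the string "actions" would be calls into order-management, warehouse and inventory systems; the point of the sketch is that one customer input fans out into several coordinated, auditable steps.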
For instance, malicious inputs from customers or attackers can cause agents to behave unpredictably, altering orders or issuing refunds. Agents may also repeat failed tasks, reveal confidential data or interfere with firewalls and access controls. These are not theoretical concerns; they are practical risks that increase with autonomy.

The solution is not to avoid agentic AI but to deploy it with secure, observable limits and governed design, and to collaborate with partners who can provide the necessary agents, implementation, IT and developer support. To deploy agentic AI responsibly, IT and OT leaders in industries like retail must embrace a lifecycle approach that balances innovation with control.

This begins with defining clear agent boundaries: being explicit about what the agent is allowed to do autonomously, what requires human authorisation, and, equally, what it should never attempt. Whether it is initiating refunds, accessing customer data or editing product listings, hard lines need to be drawn around the agent's authority to prevent scope creep.

Next, carry out threat modelling early in the design process using industry standards, and think like an adversary: how could someone trick the agent? Could it be misused internally or externally? Could it escalate its access? Mapping out abuse
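The boundary-setting guidance above can be made concrete as an explicit authorisation gate. The sketch below is an assumption-laden illustration: the action names, the three-way split and the default-deny rule are hypothetical choices, but they capture the pattern of autonomous actions, human-approved actions and hard lines.

```python
# Minimal sketch of explicit agent boundaries (all action names are
# hypothetical): an allow-list of autonomous actions, a set requiring
# human authorisation, and a default-deny for everything else.
AUTONOMOUS = {"check_stock", "update_listing_copy"}
HUMAN_APPROVAL = {"issue_refund", "access_customer_data"}


def authorise(action: str, human_approved: bool = False) -> str:
    """Gate every agent action before it executes."""
    if action in AUTONOMOUS:
        return "allow"
    if action in HUMAN_APPROVAL:
        return "allow" if human_approved else "pending_human"
    # Hard line: anything unlisted is outside the agent's authority,
    # which is what prevents scope creep.
    return "deny"
```

Default-deny matters here: an agent that can only do what is explicitly listed cannot quietly acquire new capabilities, which directly addresses the escalation questions raised in the threat-modelling step.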