Rubrik, a data security company, has released “Agent Rewind,” a solution that helps organisations undo mistakes made by AI agents. The release comes as enterprises weigh the risks of deploying autonomous AI in critical operations.
The launch follows Rubrik’s acquisition of AI infrastructure company Predibase. The company said Agent Rewind integrates Predibase’s technology with Rubrik’s existing security and recovery tools, giving organisations visibility into AI agent actions and allowing them to reverse changes to data and applications caused by unintended or faulty decisions.
“In highly regulated sectors in India, such as banking, financial services, insurance, and healthcare, maintaining transparency is critical,” said Satish Murthy, Chief Technology Officer for India and Asia Pacific & Japan at Rubrik. “Agent Rewind offers audit trails for AI-driven actions and allows businesses to experiment with generative and agentic AI without fear of irreversible mistakes.”
The software arrives as more businesses experiment with AI agents, which can carry out complex tasks autonomously. While such agents promise greater efficiency, they have occasionally caused operational problems. Examples cited by Rubrik include deleted databases, flawed multi-step executions, and legal complications triggered by AI errors.
“As AI agents gain autonomy and optimise for outcomes, unintended errors can lead to business downtime,” said Anneka Gupta, Chief Product Officer at Rubrik. “With Agent Rewind, we’re giving companies the ability to trace, audit, and safely rewind AI actions.”
Unlike traditional observability tools, Agent Rewind provides context-enriched visibility, tracing agent behaviour back to root causes such as prompts, plans, and tool use. It also enables safe rollback via Rubrik Security Cloud and supports broad compatibility with platforms including Microsoft Copilot Studio, Amazon Bedrock Agents, and custom agent frameworks.
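To make the idea concrete, the sketch below shows what tracing an agent action back to its root cause and rewinding it might look like in principle. This is a minimal Python illustration, not Rubrik’s actual API: the `AgentActionRecord` structure, the `rewind` function, and all field names and identifiers are hypothetical, assumed here purely to show how an audit-trail entry could link a change to the prompt, plan step, and tool call that produced it, alongside a pre-action snapshot for rollback.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """One entry in a hypothetical audit trail for an AI agent action."""
    agent_id: str      # which agent acted
    prompt: str        # the instruction that triggered the action
    plan_step: str     # the step in the agent's plan being executed
    tool_call: str     # the tool or API call the agent made
    target: str        # the data or application the action changed
    snapshot_id: str   # backup snapshot taken before the change
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def rewind(record: AgentActionRecord) -> None:
    """Restore the affected target from its pre-action snapshot
    and report the root cause recorded in the audit trail."""
    print(f"Restoring {record.target} from snapshot {record.snapshot_id}")
    print(f"Root cause: prompt={record.prompt!r}, step={record.plan_step!r}")

# Example: an agent dropped a table while "optimising" storage costs.
record = AgentActionRecord(
    agent_id="billing-agent-01",
    prompt="Reduce storage costs in the billing database",
    plan_step="drop unused tables",
    tool_call="sql.execute('DROP TABLE invoices_2023')",
    target="billing_db.invoices_2023",
    snapshot_id="snap-8421",
)
rewind(record)
```

The point of the pairing is that each destructive action carries both its causal context (for the audit trails Murthy describes) and a recovery handle (for the rollback Gupta describes), so an operator can answer “why did this happen?” and “how do I undo it?” from the same record.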
IDC Research Manager Johnny Yu noted that as AI becomes more integrated into business systems, tools that address “non-human error” will become essential. “Organizations should explore solutions that allow them to correct potentially catastrophic mistakes made by agentic AI,” he said.