The hype around agentic artificial intelligence (AI) has been strong, and the potential business benefits are real. However, the sheer autonomy of agents means they can go off the rails, so introducing guardrails from the start reduces risk and avoids cost blowouts.

Ev Kontsevoy, chief executive at identity management platform Teleport, says the good news is that we already have access control theory, backed by solid mathematics: “So, we know how this needs to be done and we do not need to invent anything new.”

For instance, AI agents in the datacentre need constraints on information access. From a guardrails perspective, this can be “a much nastier problem” than the success or failure of a copilot-type laptop implementation, for example.

First, figure out the identity the AI will have: agents cannot be anonymous. Indeed, Kontsevoy’s view is that AI agents should have the same identity types already used for human engineers, machines running workloads, and software applications.

“When access control theory is violated, it is usually because of identity fragmentation,” Kontsevoy says. “For example, fragmenting identity in datacentres creates an opportunity for hackers to exploit and for AI agents to misbehave.”

To answer questions, an AI agent needs access to data that is current, appropriate and available. It needs to talk to databases and understand their contents. Restrictions – or guardrails – should be applied accordingly. Human resources, for instance, may get access to ask questions about employee compensation (or not, depending on the jurisdiction). Identity fragmentation makes enforcing such policies and compliance a struggle.
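The role-scoped access described above can be sketched as a simple policy check applied before an agent touches a data source. The role names, resource names and policy table here are illustrative assumptions, not from any specific product:

```python
# Minimal sketch of a role-based guardrail on agent data access.
# Roles, resources and the policy table are hypothetical examples.

ALLOWED = {
    "hr_agent": {"employee_directory", "compensation"},
    "support_agent": {"employee_directory"},
}

def check_access(agent_role: str, resource: str) -> bool:
    """Grant access only if the policy explicitly lists the resource."""
    return resource in ALLOWED.get(agent_role, set())

def fetch(agent_role: str, resource: str) -> str:
    """Fetch data on an agent's behalf, enforcing the guardrail first."""
    if not check_access(agent_role, resource):
        raise PermissionError(f"{agent_role} may not read {resource}")
    return f"<rows from {resource}>"
```

The default-deny shape matters: an unlisted role or resource yields no access, which is what makes a fragmented identity (one not in the policy at all) fail closed rather than open.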

The second need is standardisation of how agents access information. Anthropic’s Model Context Protocol (MCP), announced in November 2024, standardises how applications furnish context to large language models (LLMs), including for building agents and complex workflows on top, and for interoperability.

“MCP has been extremely rapidly adopted,” says Kontsevoy. “And although [MCP] did not come with a reference implementation, the specification itself is open enough to add access control on top.”

So, companies do not necessarily need, for instance, in-house security expertise to set a security guardrail. If their agents “speak” MCP, they can deploy a technology solution to set those guardrail authorisations. The method also works for other kinds of guardrail, including cost control, Kontsevoy says.
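The idea of layering authorisation and cost control on top of MCP-style tool calls can be sketched as a proxy that inspects each request before forwarding it. This is a simplification under stated assumptions: real MCP servers speak JSON-RPC and expose typed tools, and the request shape, policy table and budget counter here are all hypothetical:

```python
# Sketch of a guardrail proxy in front of MCP-style tool calls.
# Request fields, policy and budget structures are illustrative only.

def guardrail_proxy(request: dict, policy: dict, budget: dict) -> dict:
    """Check authorisation and cost budget before allowing a tool call."""
    identity, tool = request["identity"], request["tool"]
    # Authorisation guardrail: only tools the policy lists are callable.
    if tool not in policy.get(identity, set()):
        return {"error": f"identity not authorised for tool '{tool}'"}
    # Cost guardrail: refuse once this identity's budget is exhausted.
    if budget.get(identity, 0) <= 0:
        return {"error": "cost budget exhausted"}
    budget[identity] -= 1
    return {"ok": True, "tool": tool}
```

Because the proxy sits at the protocol boundary, the same interception point serves both guardrail types without any changes to the agent itself.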

Early days of adoption

So far, few examples are running in production. For many organisations, agentic AI has not gone beyond a conversation.

Consider that AI agents may feed one AI model’s outputs into another and work towards a goal without full oversight. According to IBM’s video series on AI agents, guardrails must be considered at the model, tooling and orchestration layers.

Peter van der Putten, AI lab head at workflow automation specialist Pegasystems, says that many organisations do not feel they will get to grips with agentic challenges such as governance and safety this year. “Some go, ‘They can’t even pass a captcha.’ Then you have the believers saying, ‘Create as many agents as you want and let them run amok.’ Both views are flawed,” he says.

Start with selected single-agent use cases, see how well they perform, and ground agents in your enterprise architecture artefacts: workflows, business rules, appropriate context, user access, and so on.

Then contrast with reality – are the agents doing the right thing and are they achieving their goals? Those are the kinds of strategies a business might apply to enable agentic AI.

“Throw in a bunch of requirements, use process mining to see the actual process (versus what people tell you the process should be). Clean that up, put in other requirements, and then feed that as input into something more like design agents that can help you,” Van der Putten says.

Then the human is in the loop, so you can see what you agree with or not. Only then do you build an application that can run very predictably at run time. Of course, if you cannot “automate things away” and need human oversight of everything, agents might not be the right answer, Van der Putten adds.

Choose the right agents or LLMs for each aspect and build on that. In insurance, one agent might assess risks, another claims, while yet another interacts with other employees or even an end customer. And then, is a sales-focused agent the right answer in that circumstance? That also depends – you need the exact agent for the context.

Afterwards, an agent layered on top might operate by “understanding” the individual steps or specific workflows to call – or not – in a process; one right at the end might check previous work. And when you hit a roadblock, you “escalate back to the human”.
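The layered pattern above – an orchestrator running steps, a checker reviewing each result, and rejected work escalated back to a human – can be sketched with stubbed agent logic. The step names and the checker’s rejection rule are invented for illustration:

```python
# Sketch of layered orchestration with human escalation.
# run_step and checker stand in for real worker and reviewer agents.

def run_step(step: str) -> str:
    """Stub worker agent: pretend to execute one workflow step."""
    return f"result of {step}"

def checker(result: str) -> bool:
    """Stub reviewer agent: reject anything flagged as risky."""
    return "risky" not in result

def orchestrate(steps: list[str]) -> list[str]:
    """Run each step; collect steps whose results need a human."""
    escalations = []
    for step in steps:
        if not checker(run_step(step)):
            escalations.append(step)  # escalate back to the human
    return escalations
```

The point of the structure is that the checker sits outside the worker: even if a worker agent misbehaves, its output still passes through an independent gate before anything is accepted.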

Only down the track might you consider layering multi-agent systems on top, where specialised agents for particular tasks talk to each other.

Van der Putten says: “The tools need clear processes, rules, policies and maybe non-generative predictive models that assess the likelihood of fraud or similar. Pull the context, get a full picture of the situation and the request.”

Measuring the benefits

Think about it as slightly smarter robotic process automation (RPA), says Simon James, data strategy and AI managing director at Publicis Sapient. Start with mapping processes and determining which might benefit from AI agents versus human judgement or traditional automation. Devising a clear, compliant framework can help.

The more choices you introduce, the more scope the AI has for things to simply go wrong, and the more difficult it becomes to govern. “There’s a wheel of death going on somewhere where several agents are talking to one another, even in highly optimised machine-readable code, and not in English, adding latency across 20 systems,” says James.

Because agentic AI is so new and people often don’t have the skills, industry is still figuring things out. Maybe an agent can run three different routines or functions and has a choice between them, but there’s not much choice there, James warns. “And it’s about how a Salesforce version, for example, connects to ERP or whatever else, so they can pass the logic between each other, and the handoff is painful.”

Dominic Wellington, AI and data product marketing director at platform SnapLogic, reiterates that many people are still finding things out “the hard way” in agentic AI: “Lawyers and compliance are invariably involved, and can ask tough questions before sign-off on going into production. Not everything will make it to production.”

Often the subset of information that powered the pilot to success will not work writ large. When you want to connect to “crown jewels” – such as the corporate database or CRM – you may need to reconsider access to that data and more complete enforcement of policy and permissions.

“If you’re AstraZeneca, for example, you don’t want your pharma pipeline winding up in some model’s training data,” he says. “And having ‘ground truth’ is critical. I never have to go back more than a couple of days in my news feed to find an example.”

Of course, with retrieval augmented generation (RAG), for example, you can vectorise approved information into the data store, with the LLM responding based on what is in that particular data store, giving control over what it sees or can respond with. With data masking, quality of service (QoS) and role-based access control, you can go far, Wellington agrees.
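The combination Wellington describes – retrieval restricted to an approved store, filtered by role, with sensitive terms masked before they reach the LLM – can be sketched as follows. The document store, role tags and masking rule are toy assumptions, and vector search is replaced by substring matching to keep the example self-contained:

```python
# Sketch of RAG retrieval with role filtering and data masking.
# DOCS, role tags and the masking rule are illustrative only.

DOCS = [
    {"text": "Q3 pipeline: drug X in phase 2", "roles": {"research"}},
    {"text": "office wifi password policy", "roles": {"research", "sales"}},
]

def retrieve(query: str, role: str) -> list[str]:
    """Return matching passages the role may see, masked for safety."""
    hits = [d["text"] for d in DOCS
            if role in d["roles"] and query.lower() in d["text"].lower()]
    # Mask sensitive terms before the text is placed in an LLM prompt.
    return [h.replace("drug X", "[REDACTED]") for h in hits]
```

Because filtering and masking happen at retrieval time, the model never sees content the requesting role is not entitled to, regardless of how the prompt is phrased.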

That said, considerations run the gamut from ethical challenges to compounding errors, security risks and scalability. Agentic AI needs transparency, but that is not easy to achieve.

This all sounds familiar from early-days cloud adoption – but with AI, the cycle from hype to disillusionment has accelerated. However, there are early adopters that can be learned from. “It can be the quieter second wave that actually points the way,” Wellington adds.

Sunil Agrawal, chief information security officer (CISO) at AI platform Glean, says it is worth the fight. AI agents can reshape how work is done, helping to surface and make sense of needed data. But scaling these systems securely and responsibly is critical.

Agents must respect user roles and data governance policies from day one, especially in highly regulated environments, and observability of what is going on is crucial. This covers what data they access, how they reason and which models they rely on.
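That kind of observability is usually implemented as a structured audit trail: every data access, reasoning step and model call is appended to a log that can be exported for review. This is a minimal sketch; the field names and event types are illustrative assumptions:

```python
# Sketch of a structured audit trail for agent observability.
# Field and event names are hypothetical examples.

import json
import time

AUDIT_LOG: list[dict] = []

def audit(agent_id: str, event: str, detail: str) -> None:
    """Record one observable agent action with a timestamp."""
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "event": event,   # e.g. "data_access", "model_call", "reasoning"
        "detail": detail,
    })

def export_log() -> str:
    """Serialise the trail for compliance review or external tooling."""
    return json.dumps(AUDIT_LOG)
```

In production this append-only record would go to a tamper-evident store rather than a list in memory, but the shape – who, what, when, against which model or data source – is the part that matters for governance.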

“AI agents are only as reliable as the data they’re built on,” Agrawal says. “Ground them in accurate, unified internal knowledge. And threats like prompt injection, jailbreaking and model manipulation require dedicated defences, so agents operate safely, ethically and aligned with organisational policy.”
