
From Automation to Transformation: What Agentic AI Really Demands from Your Business

Wayne Jones, Former COO Watson Customer Engagement at IBM
Linda Leopold, Former Head of AI Strategy at H&M Group

Wayne Jones, a senior executive with more than 20 years of experience in machine learning and AI and former Chief Operating Officer of Watson Customer Engagement at IBM, and Linda Leopold, a strategic advisor and former Head of AI Strategy at H&M Group, joined a recent GLG webcast to discuss the findings of GLG’s Agentic AI Playbook, which surveyed more than 100 senior business leaders deploying the technology. Their conversation moved beyond the hype to offer a clear-eyed view of where the technology stands, what it demands from organizations, and what comes next.

Defining the Moment: What Agentic AI Actually Means

Before diving into strategy, the panelists took care to ground the conversation in a precise definition of agentic AI.

For Linda, the key word is agency: the capacity of a system to act independently. Unlike a chatbot responding to a prompt, an agentic AI system can plan, make decisions, and execute multi-step actions across tools, databases, and other software environments – often with minimal human oversight. “AI stops being a tool you use,” she said, “and becomes a system that acts on your behalf.”

Wayne added important texture by noting that agentic AI doesn’t live in a single corner of the tech stack. It’s emerging from within ERP systems, CRM platforms, and cloud application suites, not just from standalone LLM deployments. The common thread, he argued, is autonomy: the system continues to act based on its own reasoning beyond the original trigger.

Linda was also careful to describe agentic AI as a spectrum. Some agents perform simple, repetitive tasks in predefined workflows, while others tackle complex, multi-step problems across domains. Understanding where a given deployment falls on that spectrum is essential.

The Efficiency Trap: Why Most Organizations Aren’t Thinking Big Enough

GLG’s research found that most organizations are using agentic AI to automate existing processes rather than reimagine them. Wayne sees this as understandable but potentially limiting.

“There is an agentic AI possibility that says these technologies can drive a lot of the blocking and tackling,” he explained, but the real breakthrough comes when organizations stop asking how to replicate their current processes faster and start asking what would be possible if those steps didn’t need to happen in their traditional order.

The problem, Wayne pointed out, is that companies often don’t understand their own processes well enough to reimagine them. Customer service and IT have seen the most traction with agentic AI precisely because those processes are well documented, with defined outcomes and concrete measurements. Sales, R&D, and HR – areas with more art than science – present a harder challenge, not because they’re less valuable, but because the tacit knowledge that drives them hasn’t been captured anywhere.

“The self-knowledge of businesses,” he said, “is actually the limiting factor in many cases.”

The Foundations that Matter

Both panelists were asked whether prior experience with automation technologies like RPA gives organizations a meaningful head start. Their answer was nuanced: it’s not about any particular technology but about the organizational discipline that comes with it.

Wayne outlined what successful companies have in common: clean, well-labeled data; clear documentation of processes and failure conditions; governance structures for managing data quality; IT systems that operate well with APIs; and a security posture moving toward zero trust.

Linda added that knowing which skills are currently present in the organization is equally foundational. Together, these elements constitute the infrastructure that agentic technologies need to perform well. Without them, even the most capable AI tools will produce poor outcomes.

“It’s not necessarily the fault of the technology,” Wayne said. “This is about companies not taking those preliminary, fundamental steps to understand themselves.”

Risk, Governance, and Where to Draw the Line on Autonomy

Among the concerns most cited in GLG’s playbook: erroneous outputs and misuse of sensitive data. Linda framed this carefully: these aren’t irrational fears. Agentic AI carries all the risks of the generative models it’s built on (hallucinations, bias, data exposure), plus a new set that emerges specifically from autonomy.

A small error in a multi-step agentic workflow can cascade across systems before any human notices. There are also new cybersecurity vulnerabilities, including prompt injection attacks, and genuinely difficult questions about accountability: when a chain of autonomous actions produces a bad outcome, who is responsible?

Linda’s practical recommendation: use a risk-based approach. For low-stakes tasks, higher autonomy is appropriate. For high-stakes decisions – those affecting customers, finances, or legal standing – human oversight remains essential. She also raised a dimension often overlooked in governance discussions: employee trust. Do your people feel comfortable handing tasks to an agent? Do they fear it as a threat to their expertise or their role? These cultural questions, Linda argued, are not soft considerations but strategic ones.

The Consumer Side: Preparing for the Age of the AI Buyer

Organizations thinking only about internal deployment are missing half the picture. Linda flagged a significant behavioral shift already underway on the consumer side: people are increasingly beginning their search and discovery journeys not with an internet search or a brand website, but with an AI assistant.

This has immediate implications for how companies think about visibility. Web traffic from traditional search is declining for some brands; traffic from AI-referred sources is rising. The emerging discipline of “generative engine optimization” – how companies make themselves visible to and preferred by AI systems – is already taking shape.

The further horizon is more dramatic. As AI assistants become more agentic, they won’t just surface recommendations; they’ll complete transactions. A consumer could ask an AI assistant to research, compare, and purchase a product without ever leaving the chat interface. For B2B companies, the same logic applies: studies already show that many B2B buyers use AI assistants for vendor research. Companies that aren’t thinking about how they appear to AI intermediaries risk being bypassed by them.

Moving from Pilot to Scale: Think Smaller to Go Further

One of the audience’s most practical questions: how do organizations move beyond proof-of-concept and achieve enterprise-scale impact? Wayne’s answer was counterintuitive: the problem is usually that companies don’t think small enough.

Trying to eliminate an entire contact center is too large an initial goal. Starting instead with something like improving responsiveness during off-hours for a specific region is a concrete, measurable objective that builds the institutional knowledge needed to tackle larger transformations. “In doing that very small, concrete thing,” Wayne said, “you’ll learn a lot of lessons that make it possible to start looking at more processes.”

Linda added that the harder challenge is often change management, not technology. Study after study, she noted, shows that the biggest barrier to AI success is people-related: leadership buy-in, cultural resistance, and the difficulty of bringing employees along on a journey that may fundamentally change how they work.

The Company of One

The panel closed on a forward-looking question: how close is the so-called “company of one” – a single founder, supported by an army of AI agents, running an entire business end to end?

Linda noted that for physical goods, significant complexity around logistics and supplier relationships makes this unlikely at scale. For digital products and services, she sees it as realistic. But she also reframed the thought experiment productively: rather than asking whether it’s achievable, organizations should ask how they would build their current company from scratch today, and how many people they would actually need.

“The follow-up question,” she said, “is how can you be sure that someone else isn’t doing exactly that right now?”

Wayne agreed the framing is more useful as an innovation exercise than as a near-term prediction. The real value is in identifying the points of friction: the missing technologies, the regulatory gaps, the cultural assumptions that turn out not to be load-bearing, and those that are. Not everything benefits from optimization to its absolute endpoint. Companies will need to decide, intentionally, how far they want to go and what they’d lose if they went too far.

Register to watch the webcast replay here.
