Gartner's Blunt Assessment: AI Needs to Stop Summarizing and Start Doing
In a candid address at Gartner's Data & Analytics Summit in Sydney, Erick Brethenoux, the firm's global chief of AI research, delivered a stark message: "AI is not doing its job today and should leave us alone." This provocative statement cuts through the pervasive hype surrounding artificial intelligence, particularly generative AI, and challenges the industry to refocus on practical, user-centric automation.
Brethenoux's core criticism centers on the current applications of AI, which he argues often add to users' workloads rather than alleviating them. He cited the ubiquitous AI-generated meeting summary as a prime example. While seemingly helpful, these summaries frequently list action items that still require human effort to execute. "I didn't have time to read summaries of meetings two or five years ago," Brethenoux remarked, referring to the pre-generative AI era. Now, with AI providing summaries, the problem persists: "I don't have time to do the five actions in the summary."
This observation highlights a fundamental disconnect between the promise of AI as a productivity enhancer and its current reality in many enterprise applications. Instead of merely summarizing information or identifying tasks, Brethenoux argues that AI should be designed to perform those tiresome tasks automatically. "Just go and do it already," he urged, calling for AI systems that proactively simplify users' lives by handling routine, undesirable chores.
The Power of 'Empathy AI': Automating the Annoying
Brethenoux introduced the concept of "Empathy AI" as a more effective and user-accepted approach to automation. This strategy involves identifying the tasks that employees dread or find most bothersome and then using AI to automate those specific activities. He shared a compelling use case from US healthcare company Vizient. The company's CTO surveyed thousands of employees to pinpoint their most disliked recurring tasks – the kind of chores that make Monday mornings feel particularly heavy. By automating these specific, high-friction activities based on employee feedback, Vizient achieved remarkable results.
"Instant adoption, zero change management problems," Brethenoux reported. This success wasn't just about efficiency; it built trust and buy-in among employees, who then became enthusiastic proponents of AI and began suggesting further automation opportunities. This demonstrates that focusing AI on genuinely improving the daily work experience, rather than imposing potentially unwanted or unhelpful tools, can be a powerful driver of successful AI adoption.
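The "Empathy AI" selection step described above can be sketched in a few lines: rank candidate automations by how much employees dislike a task and how often it recurs, then automate from the top of the list. The task names and scores below are purely illustrative, not from the Vizient survey.

```python
# Minimal sketch of survey-driven automation triage: score each task by
# dread x weekly frequency, so the highest-friction chores surface first.
# All task names and scores are hypothetical examples.

def automation_priority(tasks):
    """Sort surveyed tasks by dread x per-week frequency, worst first."""
    return sorted(tasks, key=lambda t: t["dread"] * t["per_week"], reverse=True)

survey = [
    {"task": "expense report reconciliation", "dread": 9, "per_week": 1},
    {"task": "status slide assembly",         "dread": 7, "per_week": 3},
    {"task": "meeting-summary triage",        "dread": 8, "per_week": 5},
]

for t in automation_priority(survey):
    print(t["task"], t["dread"] * t["per_week"])
# meeting-summary triage scores 40 and tops the list
```

The point of the exercise is not the scoring formula, which any team could refine, but that the automation backlog is ordered by employee pain rather than by technical convenience.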
Another example of this principle in action comes from a real estate company. Their process for assessing prospective tenants involved a lengthy, 17-step sequence. A candidate's failure at step 16 meant the company had wasted considerable time and resources on the preceding 15 steps. By implementing AI to automate and process all 17 steps in parallel, the company could quickly identify unsuitable candidates, saving significant time and effort. This shift from a sequential, human-driven process to a parallel, AI-accelerated one exemplifies how targeted automation can deliver tangible business value and improve operational efficiency.
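The shift from sequential to parallel screening can be illustrated with a small sketch. The individual checks here are hypothetical stand-ins for the real estate company's 17 steps, which the article does not enumerate; the structural point is that running them concurrently surfaces a failing candidate without waiting for every prior step to finish.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical screening checks; each returns True if the candidate passes.
def check_credit(candidate):
    return candidate["credit_score"] >= 650

def check_income(candidate):
    # A common heuristic: monthly income at least 3x the rent.
    return candidate["income"] >= 3 * candidate["rent"]

def check_references(candidate):
    return candidate["references_ok"]

CHECKS = [check_credit, check_income, check_references]  # stand-ins for the 17 steps

def screen_parallel(candidate):
    """Run every check concurrently instead of one after another,
    so failure at any step is known without the cost of the rest."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda check: check(candidate), CHECKS))
    return all(results)

candidate = {"credit_score": 700, "income": 6000, "rent": 1500, "references_ok": True}
print(screen_parallel(candidate))  # True: all checks passed
```

In the sequential version, a candidate failing the last check wastes the work of every earlier one; in the parallel version, total wall-clock time is bounded by the slowest single check.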
Agentic AI: Hype Versus the Hard Reality of Software Engineering
While "Empathy AI" focuses on automating specific, identified pain points, the broader industry narrative is increasingly dominated by the concept of AI agents – autonomous bots capable of performing tasks independently. Enterprise tech vendors are heavily promoting the vision of AI agents automating everything from IT operations to serving as personal assistants that manage schedules and workflows. Brethenoux, however, urges caution, suggesting tech buyers should take this vision "with a large pinch of salt."
His skepticism stems from two main points. Firstly, AI agents themselves are not a new invention. Industrial sectors have utilized agents for decades, typically within relatively closed and controlled systems. While effective for specific, well-defined tasks in these environments, these historical examples have rarely demonstrated the ability to handle highly complex or unpredictable tasks autonomously.
Secondly, the current vendor narrative often portrays personal AI agents seamlessly interacting with vast, disparate data sources across an entire enterprise. The vision includes agents making decisions like automatically accepting meeting invitations and adding them to a user's calendar. Brethenoux highlights the enormous, often unaddressed, challenge this presents. "Now you have 50,000 agents running around the enterprise," he posited. "How do you orchestrate this? How do they negotiate?"
He recounted asking vendors how such automated scheduling systems would account for the competing priorities and needs of an employee's boss, partner, or children – factors that heavily influence real-world scheduling decisions. The response, he noted, was telling: "silence."
This silence, according to Brethenoux, reveals a significant gap in the current discourse around agentic AI. Vendors and users alike have not given sufficient consideration to the complex software engineering required to build and manage these multi-agent systems effectively and safely. "This is a software engineering problem," he asserted. Building robust agentic systems demands expertise in decomposing complex systems, defining communication protocols, determining appropriate levels of autonomy for each agent, and specifying what information agents can perceive, control, and execute upon.
These are not trivial challenges. They involve intricate design considerations, robust error handling, security implications, and mechanisms for human oversight and intervention. Yet, despite knowing the technical hurdles, Brethenoux suggests vendors continue to promote the idea that an "agentic nirvana" is just around the corner. This, he argues, contributes to unrealistic expectations and potential disappointment for organizations investing heavily in AI technologies.
The Danger of Misnaming and Fuzzy Definitions
Part of the problem, Brethenoux believes, is the conflation of terms like "AI agent" and "generative AI." The rapid advancements and widespread attention given to large language models and generative AI have fueled a new wave of AI hype. However, generative AI, while powerful for tasks like content creation and summarization, is distinct from the concept of autonomous agents capable of independent action and complex task execution.
Using fuzzy or interchangeable definitions for these terms contributes to confusion and unrealistic expectations about what current AI can actually achieve. Brethenoux quoted French philosopher Albert Camus to underscore the point: "To misname things is to contribute to the world's miseries." In the context of AI, misnaming capabilities and conflating different technologies can lead to misguided investments, failed projects, and ultimately, a loss of trust in AI's potential.
The current focus on generative AI's ability to produce text or summaries, while impressive, distracts from the more fundamental challenge of building AI systems that can truly automate complex workflows and make decisions in dynamic, real-world environments. The technical leap from generating text based on a prompt to an agent autonomously navigating enterprise systems, negotiating conflicts, and executing multi-step tasks is substantial and requires addressing deep software engineering challenges that are not yet solved at scale.
Beyond the Hype: A Call for Practical, Problem-Solving AI
Brethenoux's critique serves as a necessary reality check in an era of intense AI enthusiasm. While the potential of AI, including advanced agents, remains vast, the path to realizing that potential is fraught with technical and implementation challenges that require serious attention, not just marketing hype. The focus, he suggests, should shift from creating impressive demos of generative capabilities or promising futuristic autonomous agents to building practical AI solutions that address real, identified problems for users and businesses today.
The "Empathy AI" approach offers a blueprint for this more grounded strategy. By starting with the user's pain points – the tedious, time-consuming tasks that drain productivity and morale – organizations can deploy AI in ways that are immediately valuable and welcomed by employees. This bottom-up approach, focused on automating specific, well-understood processes, builds confidence and provides a solid foundation for potentially more complex automation efforts in the future.
Moreover, the challenges Brethenoux raises regarding agent orchestration are critical for the industry to confront. As organizations contemplate deploying numerous AI agents, they must grapple with how these agents will interact, prioritize tasks, handle conflicts, and operate within the existing IT infrastructure and business processes. This requires significant investment in software architecture, integration platforms, and governance frameworks. Ignoring these complexities in pursuit of an overly simplistic vision of autonomous agents is a recipe for failure.
The conversation needs to move beyond the capabilities of individual AI models (like large language models) to the challenges of building reliable, scalable, and manageable systems composed of multiple interacting AI components and integrating them effectively into human workflows. This is where the expertise of software engineers, system architects, and domain experts becomes paramount.
The Future of Work and AI's Role
The debate over AI's role in the workplace is intrinsically linked to discussions about the future of work itself. Will AI replace jobs wholesale, or will it augment human capabilities? Brethenoux's perspective suggests that the most immediate and beneficial impact of AI lies in augmentation – specifically, in freeing up human workers from the most tedious and repetitive aspects of their jobs. By automating these tasks, AI can allow employees to focus on more creative, strategic, and fulfilling work.
However, achieving this requires a deliberate and thoughtful approach to AI deployment. It necessitates understanding the human element – identifying what tasks are truly burdensome and how automation can genuinely improve the employee experience. It also requires acknowledging the technical complexities involved in building reliable automation systems, especially as they become more autonomous.
The vision of highly autonomous AI agents remains a powerful one, but realizing it requires overcoming significant hurdles in areas like multi-agent coordination, safety, security, and explainability. Vendors have a responsibility to be transparent about these challenges and work with customers to develop realistic deployment strategies. Organizations, in turn, must look beyond the hype and evaluate AI solutions based on their ability to deliver tangible value and address specific business needs, starting perhaps with the "Empathy AI" approach of automating the tasks that employees wish would just disappear.
Ultimately, Brethenoux's message is a call for pragmatism and a renewed focus on the fundamental purpose of enterprise technology: to make work easier and more productive for humans. Current AI, while impressive in its generative capabilities, still has a long way to go before it can truly "do its job" of seamless, helpful automation, especially when it comes to the complex orchestration required for effective AI agents across the enterprise.
The path forward involves not just developing more powerful AI models, but also investing heavily in the software engineering and system design required to integrate these models into reliable, manageable, and truly helpful automation solutions that employees will embrace rather than resent.
The challenges of deploying AI at scale in the enterprise are well documented: data quality, integration, and the complexity of managing models in production. Autonomous agents add new layers on top, particularly around how agents interact with each other and with existing systems. Organizations therefore need a robust data and infrastructure foundation before attempting advanced deployments, whether simple automation or sophisticated agentic systems.
Furthermore, the ethical implications of deploying autonomous agents are significant. Questions around accountability, bias, and control become even more pressing when systems are designed to act independently. Ensuring that AI agents operate safely and ethically requires careful design and governance, including mechanisms for human oversight and the ability to understand and audit agent decisions. This is another area where the software engineering challenges are substantial and cannot be overlooked.
The current wave of AI hype, while driving innovation and investment, risks another "AI winter" if expectations are not managed: businesses that invest heavily on the promise of fully autonomous agents and fail to see tangible results may sour on the technology, slowing future progress. Setting timelines against the typical technology hype cycle, and focusing on incremental value delivery as the "Empathy AI" concept suggests, helps maintain momentum and demonstrate ROI.
AI agents remain an active area of research and development, with companies exploring various architectures and frameworks for building and orchestrating them. The leap from controlled environments and limited tasks to widespread, complex enterprise deployment is significant, however, and current capabilities may not match the ambitious visions being promoted. Businesses should evaluate agentic solutions on proven capability and on the vendor's ability to address the underlying engineering complexity.
Ultimately, the success of AI in the enterprise will depend not just on the power of the algorithms, but on the thoughtful application of these technologies to solve real problems, the robust engineering of the systems that deploy them, and a clear-eyed understanding of their current limitations versus future potential. Brethenoux's call for AI to "leave us alone" from generating useless summaries and instead focus on automating the tasks we truly want to offload is a powerful reminder of what practical, user-centric AI should aim to achieve.
The journey towards truly effective and widespread AI automation, particularly with autonomous agents, is a marathon, not a sprint. It requires patience, significant technical effort, and a willingness to prioritize user needs and engineering reality over marketing narratives. In the short to medium term, solutions that combine multiple AI techniques with conventional workflow automation are often more impactful than generative AI alone.
Orchestration challenges are not unique to AI agents; large-scale distributed systems have always posed hard engineering problems. But the dynamic, potentially unpredictable behavior of autonomous agents adds new dimensions: coordinating many agents, preventing conflicts, and managing their interactions with human users and legacy systems remain open research and development areas. Vendors promoting agentic solutions should demonstrate clear plans for addressing them.
Brethenoux's perspective serves as a valuable counterpoint to the prevailing optimism. It encourages a more critical evaluation of AI tools and a focus on building solutions that genuinely improve productivity and the working lives of employees. By prioritizing "Empathy AI" and acknowledging the significant engineering work required for advanced agentic systems, organizations can navigate the AI landscape more effectively and realize the technology's true potential.
It's time for AI to move beyond being a sophisticated summarization tool or a source of futuristic hype and become a reliable partner in automating the mundane, freeing humans to focus on what they do best.
The conversation around AI needs to shift from 'what can it generate?' to 'what can it *do* for us?' This practical, task-oriented view, championed by analysts like Brethenoux, is essential for moving AI from a fascinating technology to a truly transformative force in the workplace. The success stories of automating dreaded tasks highlight the path forward – one focused on solving real problems and delivering tangible benefits, grounded in solid software engineering principles rather than just ambitious visions.
The future of AI in the enterprise will likely involve a combination of specialized models, generative capabilities, and increasingly sophisticated agents. However, the pace of adoption and the ultimate success will depend on the industry's ability to address the complex integration, orchestration, and human-centric design challenges that Brethenoux so pointedly highlighted.
In conclusion, while generative AI has captured the public imagination, Gartner's Erick Brethenoux reminds us that the true measure of AI's success in the enterprise will be its ability to automate tasks that genuinely improve productivity and employee satisfaction. The hype around agentic AI is premature, given the significant, unaddressed software engineering challenges involved in building and orchestrating such systems at scale. A pragmatic approach, focusing on "Empathy AI" and acknowledging the technical realities, is crucial for navigating the current AI landscape and realizing its long-term potential.