
OpenAI's AGI Definition Sparks Conflict with Microsoft Over Partnership Terms

7:42 AM | 28 June 2025

The AGI Clause: How OpenAI's Pursuit of Advanced AI Is Straining Its Microsoft Partnership

In the rapidly evolving landscape of artificial intelligence, few partnerships matter more than the deep alliance between OpenAI and Microsoft. Microsoft has poured more than $13 billion into OpenAI and integrated its groundbreaking models into products like Azure and Copilot. The collaboration has been a cornerstone of Microsoft's AI strategy and has given OpenAI the vast computational resources and funding its ambitious research agenda requires. However, even the strongest alliances can face unexpected challenges, and for OpenAI and Microsoft, that challenge centers on a seemingly small, once-hypothetical clause within their foundational contract: the definition and declaration of Artificial General Intelligence (AGI).

This clause stipulates that if OpenAI's board formally declares that the company has achieved AGI, it would significantly alter the terms of their agreement, specifically limiting Microsoft's contracted access to OpenAI's subsequent, most advanced technologies. What was initially perceived as a distant possibility has reportedly become a flashpoint in ongoing negotiations between the two companies. According to reports, Microsoft is now actively seeking the removal of this clause and has even considered the drastic step of walking away from the partnership entirely, underscoring the immense stakes involved.

The Unreleased Paper and Internal Debate

Adding complexity to this already fraught situation is an internal OpenAI research paper titled “Five Levels of General AI Capabilities.” This paper, which has not been publicly released, outlines a framework for classifying AI systems along a spectrum of increasing capability rather than a simple binary of “not AGI” or “AGI.” Sources familiar with the matter claim that this paper became a subject of intense debate within OpenAI late last year. The concern was that by proposing a detailed, multi-stage classification system, the paper could inadvertently complicate OpenAI's ability to make a clear, unilateral declaration of having achieved AGI, potentially weakening its position in negotiations with Microsoft.

An OpenAI spokesperson, Lindsay McCallum, commented on the paper, stating, “We’re focused on developing empirical methods to evaluate AGI progress—work that is reproducible, measurable, and useful to the broader field.” McCallum characterized the “Five Levels” paper as “an early attempt at classifying stages and terminology to describe general AI capabilities,” emphasizing that it “was not a scientific research paper.” Microsoft has declined to comment on the matter.

Defining AGI: The Heart of the Conflict

At the core of the tension is the definition of AGI itself. OpenAI's corporate structure blog post notes that AGI “is excluded from IP licenses and other commercial terms with Microsoft.” The company publicly defines AGI as “a highly autonomous system that outperforms humans at most economically valuable work.” However, the contractual reality appears more nuanced.

Sources familiar with the partnership discussions say the contract includes at least two distinct AGI-related provisions:

  • Board Declaration: OpenAI's board can unilaterally decide the company has reached AGI under its charter definition. Such a declaration would immediately terminate Microsoft's rights to access or generate revenue from the AGI technology, although Microsoft would retain rights to all technology developed before that milestone.
  • Sufficient AGI: A concept added in 2023 that defines AGI by a system's ability to generate a specified level of profit. If OpenAI asserts it has reached this benchmark, Microsoft must approve the determination.

The contract also reportedly includes a provision preventing Microsoft from pursuing AGI independently or through third parties using OpenAI's intellectual property. These contractual intricacies reveal that the simple public definition of AGI is far less important than the specific triggers and definitions embedded within the partnership agreement.
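
To make the reported structure concrete, the two triggers can be sketched as simple decision logic. The following Python sketch is purely illustrative: every name, field, and threshold is hypothetical, since neither the actual contract language nor the profit benchmark has been made public.

```python
from dataclasses import dataclass

# Illustrative model of the two reported AGI triggers. All names, fields,
# and numbers are hypothetical; the real contract terms are not public.

@dataclass
class PartnershipState:
    board_declared_agi: bool      # unilateral board declaration under OpenAI's charter
    cumulative_profit_usd: float  # profit attributed to the system in question
    microsoft_approved: bool      # sign-off required on the "sufficient AGI" path

SUFFICIENT_AGI_PROFIT_USD = 1e11  # placeholder benchmark; the real figure is unreported

def microsoft_access_to_new_tech(state: PartnershipState) -> bool:
    """Whether Microsoft retains access to technology developed after the milestone.

    Path 1: a unilateral board declaration immediately ends access to
            post-milestone technology (pre-milestone rights are retained).
    Path 2: the profit-based "sufficient AGI" determination, added in 2023,
            takes effect only with Microsoft's approval.
    """
    if state.board_declared_agi:
        return False
    if (state.cumulative_profit_usd >= SUFFICIENT_AGI_PROFIT_USD
            and state.microsoft_approved):
        return False
    return True

# A board declaration alone is enough to cut off access to subsequent models.
print(microsoft_access_to_new_tech(PartnershipState(True, 0.0, False)))  # False
```

Under this reading, the board-declaration path is the leverage the reporting describes: it requires no counterparty consent, which is precisely why Microsoft is said to be pushing for the clause's removal.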

The Stakes for Both Giants

The ongoing renegotiation of the partnership agreement comes as OpenAI is reportedly preparing for a corporate restructuring. Microsoft's primary objective is to ensure continued access to OpenAI's cutting-edge models, even if OpenAI believes it has reached AGI before the partnership's scheduled end in 2030. One perspective suggests that Microsoft doesn't believe AGI will be achieved by 2030 anyway, making the clause less of an immediate threat, while another views the clause as OpenAI's ultimate leverage in the relationship.

The situation has become so sensitive that, according to reports, OpenAI has considered whether its progress, perhaps exemplified by an advanced AI coding agent, might be sufficient to invoke the AGI clause. The tensions have reportedly escalated to the point where OpenAI has even discussed the possibility of publicly accusing Microsoft of anticompetitive behavior, highlighting the depth of the disagreement.

Sam Altman, OpenAI's CEO, has publicly expressed optimism about the timeline for achieving AGI, stating in one interview that he expects to see it during the current US presidential term. This forward-looking perspective from OpenAI's leadership naturally puts pressure on the contractual terms governing their most significant partnership.

The “Five Levels” Framework: A Spectrum vs. a Threshold

The unreleased “Five Levels of General AI Capabilities” paper offers a glimpse into OpenAI's internal thinking about measuring AI progress. Bloomberg previously reported on the existence of this framework, noting that OpenAI planned to share it with investors as a “work in progress.” Sam Altman and chief research officer Mark Chen have referenced similar concepts in public discussions.

A version of the paper from September 2024 reportedly details a five-step scale. At that time, many OpenAI models were classified at Level 1, described as “An AI that can understand and use language fluently and can do a wide range of tasks for users, at least as well as a beginner could and sometimes better.” Some models were approaching Level 2, defined as “An AI that can do more advanced tasks at the request of a user, including tasks that might take an hour for a trained expert to do.”

Crucially, the paper deliberately avoids providing a single, definitive definition of AGI. Instead, it advocates for using a spectrum to describe AI systems that are increasingly general and capable. This approach contrasts sharply with the binary nature of the AGI clause in the Microsoft contract, which is triggered by a declaration or a specific benchmark.
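
The contrast between the two framings can be shown in a few lines. In this hypothetical sketch, a graded classifier reports where a system falls on a multi-level scale, while the contractual framing collapses everything into a single yes/no answer. The Level 1 and 2 descriptions paraphrase the reported September 2024 draft; Levels 3 through 5 have not been publicly described, and the scoring scale and cutoffs are invented for illustration.

```python
# Spectrum vs. threshold, sketched. The scoring scale and all cutoffs are
# invented; only the Level 1 and 2 descriptions come from reporting.

LEVELS = {
    1: "Uses language fluently; handles a wide range of tasks at least "
       "as well as a beginner, and sometimes better.",
    2: "Handles more advanced tasks on request, including tasks that might "
       "take a trained expert about an hour.",
    3: "(not publicly described)",
    4: "(not publicly described)",
    5: "(not publicly described)",
}

def classify_on_spectrum(capability_score: float) -> int:
    """Map a hypothetical capability score in [0, 1] to a level on the scale."""
    cutoffs = [0.2, 0.4, 0.6, 0.8]  # invented for illustration
    return 1 + sum(capability_score >= c for c in cutoffs)

def binary_agi_flag(capability_score: float, agi_cutoff: float = 0.99) -> bool:
    """The contractual framing: one declaration-style yes/no answer."""
    return capability_score >= agi_cutoff

score = 0.45  # hypothetical system
print("Spectrum: Level", classify_on_spectrum(score))  # a middling, partial reading
print("Binary:  ", binary_agi_flag(score))             # still simply 'not AGI'
```

The point of the sketch is that a spectrum yields partial, contestable progress readings, while the contract's framing demands a single moment at which everything changes.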

The paper also explores the potential societal impacts of reaching each level of AI capability, discussing changes in education, employment, scientific research, and politics. It includes warnings about the new risks that emerge as AI tools become more powerful and autonomous.

Altman himself has publicly downplayed the importance of the term AGI, stating at a conference, “I think mostly the question of what AGI is doesn’t matter. It is a term that people define differently; the same person often will define it differently.” This perspective aligns with the paper's approach of using a spectrum, but it also highlights the potential for definitional ambiguity to become a point of contention in high-stakes legal and business agreements.

Why Was the Paper Not Released?

The timing and potential implications of the “Five Levels” paper's publication became a point of internal discussion at OpenAI. Sources suggest the paper was nearing completion late last year, with copy editors hired and visuals prepared for a potential blog post announcement. However, the paper was not released.

Multiple sources familiar with the situation indicated that OpenAI's partnership with Microsoft was cited internally as a reason to delay or hold off on publishing the paper. Discussions with Microsoft were reportedly “mentioned as a blocker for putting the paper out.” The concern was that publicly outlining a detailed, multi-level framework for AI capabilities could complicate future AGI declarations or provide Microsoft with specific points to challenge such a declaration based on the paper's own criteria.

OpenAI spokesperson Lindsay McCallum, however, offered a different explanation, stating that “it’s not accurate to suggest we held off from sharing these ideas to protect the Microsoft partnership.” Another source suggested the paper was not released because it did not meet internal technical standards for publication. These differing accounts highlight the sensitivity surrounding the paper and its potential implications for the crucial Microsoft relationship.

[Image: Sam Altman, chief executive officer of OpenAI Inc., during a Senate Commerce, Science, and Transportation Committee hearing in Washington, DC, US, on Thursday, May 8, 2025. Photograph: Nathan Howard/Getty Images, via Wired.]

The Broader Implications of the Conflict

The dispute between OpenAI and Microsoft over the AGI clause and the implications of the “Five Levels” paper is more than just a corporate squabble; it reflects fundamental questions about the nature of AI progress and the future of the tech industry. It underscores the difficulty in defining and measuring advanced intelligence, particularly when billions of dollars and strategic control are at stake.

The conflict reveals the inherent tension in partnerships between a research-focused entity like OpenAI, with its stated mission to ensure AGI benefits all of humanity, and a commercial giant like Microsoft, focused on integrating AI into its vast product ecosystem and generating revenue. While their goals have been aligned in accelerating AI development, the potential arrival of AGI introduces a divergence of interests explicitly addressed (and now complicated) by the contract.

Microsoft's reported pushback on the AGI clause demonstrates its desire to secure its investment and continued access to the most advanced AI capabilities, regardless of how OpenAI classifies its progress. OpenAI, on the other hand, views the AGI clause as essential to its mission and potentially its independence, providing leverage as its technology approaches unprecedented levels of capability.

The debate also touches upon broader industry discussions about AI safety, governance, and the responsible development of increasingly powerful systems. The “Five Levels” paper, even in its unreleased state, highlights the need for frameworks to understand and manage the societal impact of AI as it advances through different stages of capability.

Navigating the Future

As OpenAI and Microsoft continue their negotiations, the definition and timing of AGI will remain central. The outcome of these discussions could set a precedent for how future partnerships in the AI space are structured, particularly as other companies also pursue increasingly general and capable AI systems. Will Microsoft succeed in removing the clause, securing its long-term access? Or will OpenAI maintain this leverage, potentially altering the dynamics of one of tech's most important alliances?

The situation underscores that the path to AGI is not just a technical challenge but also a complex interplay of corporate strategy, contractual obligations, and differing visions for the future of artificial intelligence. The unreleased “Five Levels” paper, intended perhaps as a tool for clarity, has instead become entangled in the very corporate tensions it might have sought to illuminate, highlighting how the abstract concept of AGI has concrete, multi-billion-dollar implications in the real world.

The resolution of this conflict will likely depend on finding a middle ground that satisfies both companies' strategic imperatives while navigating the uncertain waters of defining and achieving truly general artificial intelligence. It serves as a powerful reminder that in the race for AI dominance, the fine print of a contract can be just as impactful as the code itself.