Status
As of February 2026, no major AI company or research laboratory has made a formal, official declaration that it has achieved Artificial General Intelligence. The term AGI remains contested, with no consensus definition among researchers, companies, or policymakers.
However, the rhetoric has shifted dramatically. In late 2025, OpenAI CEO Sam Altman stated on the Big Technology Podcast that "AGI kinda went whooshing by" and that OpenAI had "built AGIs," while simultaneously arguing the term had become "very sloppy" and proposing the industry move on to defining superintelligence [s1]. This was not a formal corporate declaration but rather a rhetorical reframing — an attempt to move the goalposts toward a bigger target.
In February 2026, a Nature commentary by Eddy Keming Chen, Mikhail Belkin, Leon Bergen, and David Danks argued that "the vision of human-level machine intelligence laid out by Alan Turing in the 1950s is now a reality," citing LLM performance at the International Mathematical Olympiad, theorem proving, scientific hypothesis generation, and PhD-level exam solving [s2]. This represents the strongest academic claim to date, though it remains a commentary by researchers — not a declaration by an AI company.
In January 2026, Sequoia Capital partners Pat Grady and Sonya Huang published "2026: This is AGI," arguing that long-horizon agents — AI systems capable of autonomous work over extended periods — constitute functional AGI. They defined AGI pragmatically as "the ability to figure things out," citing METR tracking data showing agent performance on long-horizon tasks doubling roughly every seven months [s12]. This marks a notable shift: a major venture capital firm declaring AGI has arrived, albeit under a functional rather than cognitive definition.
The gap between informal claims and formal declarations matters. Under the October 2025 Microsoft-OpenAI partnership restructuring, any AGI declaration by OpenAI must now be verified by an independent expert panel before it takes effect — a mechanism that underscores how consequential (and legally significant) a formal declaration would be [s3]. Meanwhile, in mid-February 2026, OpenAI quietly revised its mission statement, dropping the word "safely" and shifting from "ensure that artificial general intelligence benefits all of humanity" to language centered on "building" beneficial AI — a change critics view as reflecting the company's transition from safety-focused nonprofit to profit-driven corporation [s13].
Leading Candidates
OpenAI
OpenAI defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work." The company operates a five-level internal framework: Level 1 (Chatbots), Level 2 (Reasoners), Level 3 (Agents), Level 4 (Innovators), Level 5 (Organizations). OpenAI has released GPT-5 (August 2025) and subsequent iterations through GPT-5.3-Codex, along with reasoning models o3 and o4-mini. GPT-5.3-Codex set new industry highs on SWE-Bench Pro and Terminal-Bench for coding and agentic tasks [s14]. The o3 reasoning model scored 87.7% on GPQA Diamond (expert-level science questions) but only 3% on ARC-AGI-2, the harder novel-reasoning benchmark [s4][s15].
OpenAI's published roadmap targets AI "research interns" by 2026 and fully automated AI researchers by March 2028 [s5]. Altman has said he believes AGI will "probably get developed during [Trump's] term" [s1]. In February 2026, OpenAI completed its restructuring into a for-profit public benefit corporation, with a valuation exceeding $500 billion and up to $60 billion in combined investment from Amazon, Nvidia, and Microsoft [s13]. Any formal AGI declaration still triggers review by the independent expert panel established in the October 2025 partnership agreement with Microsoft [s3].
Anthropic
Anthropic CEO Dario Amodei stated at the World Economic Forum in Davos in January 2026 that AI models would "replace the work of all software developers within one year" and reach "Nobel-level" scientific research within two years, with 50% of white-collar jobs disappearing within five years. He suggested that predictions of human-level AI by 2026–27 might not be "that far off" [s6][s16]. Anthropic has not published a formal AGI definition or framework comparable to OpenAI's five levels.
Anthropic's Claude Opus 4.5 (with thinking mode) scored 37.6% on ARC-AGI-2, the top verified result among commercial models [s15]. The company's focus remains on agent capabilities — systems that take actions and operate independently. Amodei noted that inside Anthropic, AI is already changing hiring needs, with the company requiring fewer people at junior and intermediate levels [s16].
Google DeepMind
Google DeepMind CEO Demis Hassabis has progressively shortened his AGI timeline, from "as soon as 10 years" in autumn 2024 to "probably three to five years away" by January 2025. At the World Economic Forum 2026, Hassabis estimated about a 50 percent chance of AI systems matching all human cognitive capabilities by the end of the decade (2030) — the most conservative major-lab estimate. Notably, Hassabis said current AI systems are "nowhere near" human-level AGI and identified critical gaps: learning from few examples, continuous learning, long-term memory, and improved reasoning. He argued AGI requires "one or two more breakthroughs" [s6][s16].
DeepMind co-authored a notable 2023 paper proposing a levels-of-AGI framework that emphasizes both breadth (versatility across tasks) and depth (proficiency within them), measured across ten cognitive domains. Under this framework, current frontier models score well below the threshold for AGI [s8]. Google's most capable model topped the LMArena Leaderboard in early 2026 with a score of 1501 Elo, achieving 37.5% on Humanity's Last Exam and 91.9% on GPQA Diamond [s17].
Other Players
Elon Musk's xAI has claimed its Grok models will achieve superhuman intelligence by 2026 and that "by the end of 2026, programming may disappear, with AI directly generating optimized software without traditional coding steps." The company has not published a formal AGI definition or framework [s9].
Yann LeCun left Meta in December 2025 after 12 years — reportedly over tensions with the company's LLM-focused strategy under Meta Superintelligence Labs — and launched AMI Labs, raising €500 million at a €3 billion valuation. AMI Labs is pursuing "world models" built on LeCun's Joint Embedding Predictive Architecture (JEPA), which trains AI to understand physics and spatial dynamics rather than predicting text. LeCun argues the industry is dangerously "LLM-pilled" and that current language models will never achieve AGI without fundamental architectural changes [s18][s10].
What Would Constitute AGI
There is no agreed-upon definition of AGI. Major proposals differ significantly:
OpenAI's definition: "A highly autonomous system that outperforms humans at most economically valuable work." This is an economic framing — it ties AGI to labor market displacement rather than cognitive benchmarks [s4].
DeepMind's levels framework: Measures performance across ten cognitive domains (language, reasoning, spatial, mathematical, etc.) inspired by the Cattell-Horn-Carroll theory of intelligence. Requires both breadth and depth. Current estimates place GPT-4 at roughly 27% and GPT-5 at 58% of the AGI threshold [s8].
ARC-AGI benchmark: Designed by François Chollet to test novel reasoning — tasks that are "easy for humans, hard for AI." ARC-AGI-2, released in 2025, proved dramatically harder: OpenAI's o3 scored only 3% (versus 60% for the average human), and the best frontier model result is GPT-5.2 at 54% on the Semi-Private Test Set. ARC-AGI-3 (early 2026) adds interactive environments requiring exploration, planning, persistent memory, and goal inference — the first major format change since ARC was introduced in 2019 [s11][s15].
Turing-style functional tests: The February 2026 Nature commentary argues for inference to the best explanation — if a system produces outputs indistinguishable from a generally intelligent agent across diverse domains, it is generally intelligent. Critics counter that this conflates competence with performance and ignores underlying mechanisms [s2].
Functional/economic definitions: Sequoia Capital's January 2026 essay defines AGI as "the ability to figure things out," arguing that long-horizon agents already meet this bar. METR tracking data shows agent task performance doubling every seven months, projecting agents capable of tasks requiring a full human workday by 2028 [s12]. This pragmatic framing sidesteps the cognitive debates but is contested — as MIT Technology Review noted, achieving a one-hour time horizon on METR evaluations does not mean replacing one hour of real-world human work [s19].
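The doubling claim above implies a simple back-of-the-envelope projection, sketched below in Python. The seven-month doubling period comes from the text; the one-hour starting horizon, the January 2026 reference date, and the eight-hour "workday" target are illustrative assumptions, not METR figures.

```python
# Sanity check of the METR-style doubling projection described above.
# Assumed for illustration only: a ~1-hour agent task horizon at the
# start of 2026. Only the 7-month doubling period comes from the text.
import math
from datetime import date, timedelta

start = date(2026, 1, 1)      # assumed reference point
start_horizon_h = 1.0         # assumed task horizon (hours)
doubling_months = 7           # doubling period cited in the text
target_h = 8.0                # a full human workday

# Doublings needed: 2**n = target / start  =>  n = log2(8 / 1) = 3
doublings = math.log2(target_h / start_horizon_h)
months_needed = doublings * doubling_months          # 3 * 7 = 21 months
eta = start + timedelta(days=months_needed * 30.44)  # avg. month length

print(f"{doublings:.0f} doublings over ~{months_needed:.0f} months -> {eta}")
# Roughly late 2027 under these assumptions, consistent with "by 2028".
```

Under these assumptions, three doublings of seven months each land in late 2027, which is why the essay's "full workday by 2028" projection follows from the doubling rate; changing the assumed starting horizon shifts the date accordingly.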
A credible AGI declaration would likely need to satisfy multiple criteria: consistent expert-level performance across diverse domains (not just language), ability to learn novel tasks without retraining, robust reasoning under uncertainty, and independent verification by researchers outside the declaring organization. No current system meets all of these criteria.
References
- [s1] OpenAI CEO Sam Altman claims AGI might have already "whooshed by" — Windows Central
- [s2] Does AI already have human-level intelligence? The evidence is clear — Nature, February 2026
- [s3] Microsoft says once AGI is declared by OpenAI it will be verified by independent experts — TechRadar
- [s4] Understanding OpenAI's Five Levels of AI Progress Towards AGI — QuinteFT
- [s5] OpenAI roadmap revealed: AI research interns by 2026, full-blown AGI researchers by 2028 — TechRadar
- [s6] WEF Davos 2026: Google DeepMind, Anthropic CEOs Debate AGI Timelines And Jobs — BW Businessworld
- [s7] In 2026, AI will move from hype to pragmatism — TechCrunch
- [s8] Levels of AGI for Operationalizing Progress on the Path to AGI — DeepMind / arXiv
- [s9] AGI Approaches: 2026 May Be AI Turning Point, Musk Warns — TradingKey
- [s10] AGI Needs World Models and State of World Models — NextBigFuture
- [s11] ARC-AGI In 2026: Why Frontier Models Still Don't Generalize — Adaline Labs
- [s12] 2026: This is AGI — Sequoia Capital, January 2026
- [s13] The evolution of OpenAI's mission statement — Simon Willison, February 2026
- [s14] Introducing GPT-5.3-Codex — OpenAI, February 2026
- [s15] ARC Prize 2025 Results and Analysis — ARC Prize
- [s16] AI luminaries at Davos clash over how close human-level intelligence really is — Fortune, January 2026
- [s17] AI Updates Today (February 2026) — Latest AI Model Releases — LLM Stats
- [s18] Yann LeCun's new venture is a contrarian bet against large language models — MIT Technology Review, January 2026
- [s19] This is the most misunderstood graph in AI — MIT Technology Review, February 2026