In an exclusive interview with CNBC on August 11, OpenAI CEO Sam Altman sharply criticized "artificial general intelligence (AGI)," a key term in the field, arguing that the concept is gradually losing its practical meaning. He noted that as AI technology advances rapidly, the definition of AGI has grown increasingly vague, ranging from "capable of all human tasks" to "accomplishing a vast amount of work." With standards that vary and resist quantification, he argued, the term can no longer serve as an effective measure of technological progress.
The view quickly resonated across the industry. Altman emphasized that the current focus should be on AI's concrete capabilities in vertical fields such as healthcare and programming, rather than on abstract debates over whether AGI has been achieved. Nick Patience, Vice President at The Futurum Group, further noted that AGI's science-fiction connotations easily fuel speculation, obscuring real breakthroughs in practical deployment. Indeed, global AI funding exceeded $82 billion in 2024, but some companies' misuse of the AGI concept to inflate valuations has drawn regulatory attention.
Notably, Altman's criticism reflects a cognitive shift underway in the AI industry. With models like ChatGPT now reportedly capable of handling 80% of everyday office tasks, the industry is beginning to rethink how "human-level intelligence" should be assessed. CNBC analysis suggests that establishing quantifiable performance benchmarks, such as error rates and generalization levels, will become a new trend, helping direct resources toward innovation in real-world scenarios. As Altman put it, "We don't need new labels; we need an honest assessment of the boundaries of technology."