Foundation Models

Apple Uncovers Major Gaps in AI Reasoning Models

By Megha Pathak
2025-06-16
Apple’s study reveals that top AI models collapse on complex reasoning tasks, questioning their ability to truly “think” like humans.

A new study from Apple’s Machine Learning Research team has raised significant concerns about the true reasoning capabilities of large language models (LLMs) such as OpenAI’s o1 and the “thinking” variants of Anthropic’s Claude. The study challenges the notion that these models possess genuine reasoning abilities, revealing limitations that could fundamentally change how we understand AI’s cognitive processes.

Apple's Approach to AI Reasoning Testing

The Apple researchers designed controlled puzzle environments, such as the Tower of Hanoi and River Crossing, to evaluate the models’ reasoning abilities. Because these puzzles have verifiable solutions and tunable difficulty, they allowed a clear analysis of both the final answers and the internal reasoning traces across varying levels of complexity. The study found that all tested models, including o3-mini, DeepSeek-R1, and Claude 3.7 Sonnet, collapsed in accuracy once problems crossed a complexity threshold. This collapse was not caused by running out of computational budget: the models actually reduced their reasoning effort as problems grew harder, pointing to an inherent limit in their ability to scale reasoning.
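To make the setup concrete, here is a minimal sketch, not taken from Apple’s paper, of how a Tower of Hanoi environment can check a model-proposed move sequence step by step; the function name and the (source, target) move encoding are illustrative assumptions, but the legality rules are the standard ones.

```python
# Minimal Tower of Hanoi checker (illustrative; not Apple's harness).
# Difficulty scales cleanly with num_disks, which is what makes such
# puzzles useful for probing reasoning at controlled complexity.

def verify_hanoi(num_disks: int, moves: list[tuple[int, int]]) -> bool:
    """Return True if `moves` legally transfers every disk from peg 0 to peg 2."""
    # Disks are numbered 1 (smallest) to num_disks; peg 0 starts full.
    pegs = [list(range(num_disks, 0, -1)), [], []]
    for src, dst in moves:
        if not pegs[src]:
            return False                     # illegal: source peg is empty
        if pegs[dst] and pegs[dst][-1] < pegs[src][-1]:
            return False                     # illegal: larger disk on smaller
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(num_disks, 0, -1))

# The optimal 7-move solution for 3 disks passes the check.
solution = [(0, 2), (0, 1), (2, 1), (0, 2), (1, 0), (1, 2), (0, 2)]
print(verify_hanoi(3, solution))  # True
```

Because every intermediate move can be validated this way, researchers can pinpoint where in a long reasoning trace a model first goes wrong, not just whether its final answer is correct.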


Striking Inconsistencies and Fundamental Limitations

Even when the researchers supplied a complete solution algorithm in the prompt, the models still failed at roughly the same complexity thresholds, indicating that the issue was not with devising a problem-solving strategy but with reliably executing basic logical steps. Stranger still, on easier problems the models often found correct solutions early in their reasoning traces yet kept exploring incorrect alternatives afterward, a wasteful “overthinking” behavior.
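To illustrate what being handed the algorithm means here, the canonical recursive procedure for the Tower of Hanoi is shown below; this rendering is ours, not quoted from the paper. Executing it is pure bookkeeping: 2^n - 1 mechanical steps with no search required.

```python
# Canonical recursive Tower of Hanoi solver (a standard textbook
# algorithm, written out here for illustration). Given this procedure,
# producing a correct move list requires only faithful step execution.

def hanoi(n: int, src: int, dst: int, aux: int, out: list) -> None:
    """Append the optimal move sequence for n disks from src to dst."""
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, out)  # park the n-1 smaller disks
    out.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, dst, src, out)  # restack the smaller disks on top

moves: list[tuple[int, int]] = []
hanoi(10, 0, 2, 1, moves)
print(len(moves))  # 1023, i.e. 2**10 - 1 moves
```

The point of the experiment is that correctness in this setting demands no insight, only disciplined execution of known steps, and that is precisely where the models broke down as instance size grew.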

The Implications for AI's Future and Apple's Focus

The findings suggest that current reasoning models rely more on sophisticated pattern matching than on true logical reasoning, which raises questions about how far this approach to AI reasoning can scale. The study landed just ahead of Apple’s WWDC 2025 event, where AI took a backseat to new software updates, and it could mark a pivotal moment in the ongoing debate about the capabilities of AI reasoning models.

Related Topics

Foundation Models, LLMs
