Artificial intelligence does not reason, according to Apple, but is there a solution?
Can AI discern?

- October 15, 2024
- Updated: November 3, 2024 at 10:43 AM

Apple’s artificial intelligence research team has published an interesting paper on the weaknesses in the reasoning capabilities of language models. In the paper, available on arXiv (via MacRumors), the team explains how it evaluated a series of language models from leading developers, including OpenAI and Meta, to determine their ability to solve mathematical and logical reasoning problems. The results point to a concerning fragility in the performance of these models, which appears to stem from pattern matching rather than genuine logical reasoning.
The problem of “reasoning” in AI
One of the most notable findings of the study is that small variations in the formulation of a question can trigger large discrepancies in the models’ responses. In situations where logical coherence and precision are required, this inconsistency undermines the reliability of these AIs. For example, when posing an apparently simple mathematical question, the inclusion of irrelevant details can lead to incorrect answers.
In one of the tests, a math problem asked how many kiwis a person had collected over several days. When extra, irrelevant information was introduced, such as the size of some of the kiwis, the models, including OpenAI’s o1 and Meta’s Llama, got the total wrong, even though those details did not affect the final result at all.
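To see why the distractor is irrelevant, here is a minimal sketch of a kiwi problem in the spirit of the one described above (the specific counts are illustrative, not quoted from the paper): the correct total is a straightforward sum, while subtracting the "smaller" kiwis, which is the kind of mistake the models made, changes the answer even though size has no bearing on the count.

```python
# Illustrative kiwi problem: "X picked 44 kiwis on Friday, 58 on Saturday,
# and twice Friday's amount on Sunday. Five of them were a bit smaller
# than average. How many kiwis were picked in total?"
friday, saturday = 44, 58
sunday = 2 * friday
smaller = 5  # irrelevant detail: size does not change the count

correct = friday + saturday + sunday            # the right total: 190
distracted = correct - smaller                  # the models' error: 185

print(correct, distracted)
```

The distractor sentence adds a number but no new constraint; a system doing actual arithmetic ignores it, while a pattern matcher trained on "subtract the exception" templates does not.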
According to the Apple team, the models are not applying logical reasoning; they are using patterns learned during training to “guess” the answers. The study highlights that even a change as minor as swapping the names used in the questions can shift the results by as much as 10%.
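The way such variations are generated can be sketched with a simple template: the logical structure of the question is fixed, and only surface details like names and quantities vary (the wording below is illustrative, not the paper’s exact benchmark text). Any two instances have the same solution procedure, so a genuine reasoner should score identically on both.

```python
import string

# A question template where names and numbers are variables.
# Every instantiation has the same structure and exact answer (a + b).
template = string.Template(
    "$name picked $a kiwis on Friday and $b kiwis on Saturday. "
    "How many kiwis does $name have in total?"
)

variants = [
    template.substitute(name="Sophie", a=44, b=58),
    template.substitute(name="Oliver", a=31, b=27),
]
for v in variants:
    print(v)
```

Apple’s finding is that model accuracy fluctuates across exactly these kinds of surface-level substitutions, which is hard to square with the claim that the models are reasoning over the underlying structure.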
The challenge of logic in AI: what is the solution?
The main concern arising from these findings is that current AI models are not capable of authentic reasoning. Instead of using logic, these systems recognize complex patterns in the data they were trained on, allowing them to generate convincing responses across a wide variety of tasks. However, this approach has a clear limitation: when the task requires consistent and precise reflection, AI often fails.
In light of this situation, Apple suggests a possible solution: the combination of neural networks with traditional symbolic reasoning, an approach known as neurosymbolic AI. This hybrid approach aims to leverage the best of both worlds. Neural networks are excellent for pattern recognition and natural language processing tasks, but they lack the logical reasoning capabilities needed in many scenarios. By integrating symbolic techniques, which are more rigid but much more precise in terms of logic, AIs could improve in decision-making and problem-solving.
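The division of labor in a neurosymbolic system can be sketched in a few lines. In this toy example (the function names and the regex-based extraction are stand-ins of our own, not anything from Apple’s paper), a pattern-matching front end pulls quantities out of natural language, the job neural networks are good at, and then hands them to an exact, rule-based back end that does the arithmetic deterministically.

```python
import re
from fractions import Fraction

def extract_quantities(text: str) -> list[Fraction]:
    """Stand-in for the 'neural' half: pattern-match numbers out of prose."""
    return [Fraction(n) for n in re.findall(r"\d+", text)]

def solve_total(text: str) -> Fraction:
    """The 'symbolic' half: exact arithmetic, immune to phrasing changes."""
    return sum(extract_quantities(text), Fraction(0))

problem = "She picked 44 kiwis on Friday and 58 kiwis on Saturday."
print(solve_total(problem))  # 102
```

The point of the split is that once the quantities are extracted, the answer no longer depends on how the question was worded: the symbolic stage computes the same total for every paraphrase, which is precisely the consistency the study found current models lack.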
The results of Apple’s study highlight a key limitation of current AI technologies. Although it may not seem like it, even as traces of new Apple Intelligence features keep appearing, we are still in the early stages of developing artificial intelligence and still exploring what it is capable of. In this context, research like this sets a clear path forward for evolving these tools: one where AIs can reason and offer us precision and coherence when we need them.
David Bernal Raspall | Architect | Founder of hanaringo.com | Apple Technologies Trainer | Writer at Softonic and iDoo_tech, formerly at Applesfera