Researchers propose a more human approach to evaluating AIs

New research suggests AI should be evaluated based on brain-inspired structures, promoting efficiency, transparency, and a deeper understanding of cognition.

Agencias

  • June 13, 2025
  • Updated: July 1, 2025 at 9:24 PM

A new study from Rensselaer Polytechnic Institute and City University of Hong Kong suggests rethinking how we build and evaluate artificial neural networks. Instead of focusing solely on scaling models outward with more layers and data, the researchers propose a more biologically inspired and introspective approach, which could transform the efficiency and intelligence of AI systems.

A vertical leap in AI architecture

Current AI models rely heavily on expanding horizontally—adding more layers and parameters to boost performance. But this new framework introduces a vertical dimension and feedback loops, mimicking how the human brain processes information in three dimensions. This internal structure allows networks to relate, reflect, and refine outputs—leading to smarter, more adaptable systems with lower resource demands.
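The paper's architecture is not specified in enough detail here to reproduce, but the contrast the paragraph draws can be sketched with a toy, hypothetical example: a purely feedforward pass applies each layer once, while a feedback pass re-feeds its own output through the same stack several times, letting the system revise its answer. The functions, weights, and blending factor below are illustrative assumptions, not the researchers' model.

```python
def feedforward(x, weights):
    """One horizontal pass: apply each scalar 'layer' once, in sequence."""
    for w in weights:
        x = max(0.0, w * x)  # ReLU-style nonlinearity on a scalar signal
    return x

def with_feedback(x, weights, steps=3):
    """Vertical refinement (toy version): repeat the same stack, blending
    each new output with the previous one so the result is iteratively
    revised rather than produced in a single shot."""
    out = feedforward(x, weights)
    for _ in range(steps):
        out = 0.5 * out + 0.5 * feedforward(out, weights)
    return out

print(feedforward(1.0, [0.5, 1.5]))    # single pass: 0.75
print(with_feedback(1.0, [0.5, 1.5]))  # same input, refined over several passes
```

The point of the sketch is structural, not numerical: the feedback variant reuses the same parameters across iterations, which is why such designs can improve outputs without growing the parameter count the way adding layers does.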

Toward brain-inspired intelligence

Inspired by biological cognition, this new architecture could allow neural networks to learn and adapt more efficiently, making them not just faster but more insightful. It paves the way for real-time applications in fields like healthcare, robotics, and education, while also helping researchers better understand neurological conditions such as epilepsy or Alzheimer’s.

More sustainable and explainable AI

Beyond performance, the study emphasises the importance of creating sustainable and accessible AI technologies. By requiring fewer computational resources, these brain-like models could reduce environmental impact and expand global access. Additionally, the feedback mechanisms offer greater transparency, moving toward more explainable and trustworthy AI.

A new standard for AI evaluation

The researchers argue that we need new ways to evaluate these advanced systems—ones that reflect their internal reasoning and capacity for self-improvement. This shift could mark a turning point, not just in how AI works, but in how we understand and trust its decisions.
