
Three features Apple Intelligence should add

Apple Intelligence needs to close the gap with rivals. These three features—audio overviews, protocol-based automation, and screen sharing—could make all the difference.


Agencias

  • May 15, 2025
  • Updated: July 1, 2025 at 9:38 PM

Despite all the buzz surrounding AI at last year’s WWDC, Apple still lags behind key rivals like Google, OpenAI, and Anthropic. As WWDC approaches again, the pressure is mounting for Apple to demonstrate real progress in AI integration. If Apple Intelligence is to live up to its name, here are three features it should adopt.

Safari should offer audio overviews like NotebookLM

Google’s NotebookLM lets users generate podcast-style audio digests from technical papers, notes, or saved links. For Apple, implementing something similar in Safari or Apple Notes would transform how we consume content: imagine listening to summarized articles while commuting or walking, without ever looking at your screen.
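As a rough sketch of how this could work on-device: the snippet below pairs a placeholder summarize(_:) function (hypothetical, standing in for whatever model Apple would use) with Apple’s real AVFoundation speech-synthesis API, which already ships on iPhone and Mac.

```swift
import AVFoundation

// Hypothetical: stands in for whatever on-device summarization
// model Apple might ship. Here it just truncates the text.
func summarize(_ articleText: String) -> String {
    String(articleText.prefix(280))
}

// AVSpeechSynthesizer is Apple's real speech API; keep a reference
// alive for as long as playback should run.
let synthesizer = AVSpeechSynthesizer()

func playAudioOverview(of articleText: String) {
    let utterance = AVSpeechUtterance(string: summarize(articleText))
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

playAudioOverview(of: "Apple Intelligence needs to close the gap with rivals...")
```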

Apple needs an MCP-like protocol for Siri

Anthropic’s Model Context Protocol (MCP) is quickly becoming a foundational standard for connecting AI models to external tools. Apple already has a partial foundation in Siri Shortcuts and the App Intents framework, but neither was designed for model-driven automation. Introducing an MCP-like layer would let users ask Siri to generate a Keynote from a Pages document, or to automate complex tasks across apps, unlocking new possibilities for power users and for people who rely on accessibility features.
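To make that concrete, here is a minimal sketch of how such an action might be declared with Apple’s real App Intents framework today. The ExportToKeynoteIntent type and its behavior are hypothetical, but the protocol and annotations are the actual API that Siri and Shortcuts already consume; an MCP-like layer would let a model discover and chain intents like this one.

```swift
import AppIntents

// Hypothetical intent: the name and behavior are illustrative,
// but AppIntent, @Parameter, and IntentFile are Apple's real
// App Intents API.
struct ExportToKeynoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Create Keynote from Pages Document"

    @Parameter(title: "Pages Document")
    var document: IntentFile

    func perform() async throws -> some IntentResult & ReturnsValue<IntentFile> {
        // Placeholder: a real implementation would convert the
        // Pages document to a Keynote presentation here.
        let keynoteFile = document // stand-in for the converted file
        return .result(value: keynoteFile)
    }
}
```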

Visual Intelligence must support screen sharing

ChatGPT can already view and respond to what’s on a user’s screen during a live session, a game-changer for real-time assistance. Apple’s Visual Intelligence, while promising, is still limited to static images. Adding live screen sharing and interactive AI help would make iPhones and Macs dramatically more useful, especially in contexts like travel, shopping, or learning.
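On the Mac, the capture half of that feature is already possible with Apple’s real ScreenCaptureKit API. The sketch below streams display frames to a hypothetical sendFrameToAssistant(_:) function that stands in for the missing AI piece.

```swift
import CoreMedia
import ScreenCaptureKit

// Hypothetical: stands in for the on-device model that would
// interpret what is on screen and offer help.
func sendFrameToAssistant(_ sampleBuffer: CMSampleBuffer) {
    // A real implementation would hand pixels to a vision model here.
}

// Receives live frames from the capture stream.
final class ScreenAssistantOutput: NSObject, SCStreamOutput {
    func stream(_ stream: SCStream,
                didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard type == .screen else { return }
        sendFrameToAssistant(sampleBuffer)
    }
}

// ScreenCaptureKit is Apple's real macOS screen-capture API.
// The caller must retain the returned stream (and the output object)
// for as long as the assistant session runs.
func startAssistantSession(output: ScreenAssistantOutput) async throws -> SCStream? {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { return nil }

    let filter = SCContentFilter(display: display, excludingWindows: [])
    let config = SCStreamConfiguration()
    config.width = display.width
    config.height = display.height

    let stream = SCStream(filter: filter, configuration: config, delegate: nil)
    try stream.addStreamOutput(output, type: .screen, sampleHandlerQueue: .main)
    try await stream.startCapture()
    return stream
}
```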
