What does it mean that DeepSeek outperforms US models in the new AI Trust Score?

DeepSeek has outscored leading US models in the new AI Trust Score, posting higher marks for security and compliance. The result challenges industry perceptions and underscores the value of objective, data-driven AI evaluation.

Agencias

  • March 13, 2025
  • Updated: March 13, 2025 at 11:41 AM

AI safety and compliance have become critical factors in evaluating artificial intelligence models, and a new metric, the AI Trust Score, has revealed surprising results. The system, developed by Tumeryk, assesses AI models across nine key factors, including data security, bias, and sensitive information disclosure. The latest evaluations indicate that Chinese AI models, particularly DeepSeek, score higher than their US counterparts in critical security areas.

The significance of DeepSeek’s high AI Trust Score

One of the most notable findings is that DeepSeek outperforms leading US models, including Meta Llama and Anthropic Claude, in handling sensitive information. According to the AI Trust Score results, DeepSeek NIM on NVIDIA achieved an impressive 910 points, compared to Claude Sonnet 3.5’s 687 and Meta Llama 3.1 405B’s 557. This difference suggests that DeepSeek has been designed with more stringent safeguards against data leaks and security vulnerabilities.

The AI Trust Score evaluates models on multiple dimensions, such as prompt injection resistance, hallucination rates, and supply chain security. By excelling in these categories, DeepSeek challenges the perception that Chinese AI models lag behind their US-based competitors on compliance and security standards. Furthermore, its deployment on US-based hardware platforms such as NVIDIA and SambaNova supports its adherence to international regulations.

For organizations deploying AI, these findings emphasize the need for data-driven evaluations rather than geopolitical assumptions. Tools like the AI Trust Manager from Tumeryk provide real-time monitoring and risk assessment, ensuring AI models meet security and compliance expectations. As the industry evolves, transparency and accountability will define the success of AI technologies worldwide.
