
Be careful about sharing false or AI-generated content on LinkedIn: the responsibility will be yours alone

LinkedIn will continue to offer features that generate automated content, but responsibility for what gets shared now rests with the user

Chema Carvajal Sarabia

  • October 9, 2024
  • Updated: July 1, 2025 at 10:52 PM
LinkedIn is shifting responsibility for sharing misleading or inaccurate information created by its own AI tools onto users rather than the tools themselves, a move clearly designed to shield the company from liability.

A November 2024 update to its Service Agreement will hold users accountable for sharing any AI-generated misinformation that violates the agreement.

Since no one can guarantee that AI-generated content is truthful or correct, companies protect themselves by placing the responsibility on users to moderate the content they share.

The blame will always be on the users, not the companies

The update follows in the footsteps of LinkedIn’s parent company, Microsoft, which in early 2024 updated its terms of service to remind users not to take AI services too seriously, and to address the limitations of AI, warning that “it is not designed to be used as a substitute for professional advice.”

LinkedIn will continue to offer features that can generate automated content, but with the warning that it may not be reliable.

The new policy reminds users that they must verify all information and edit it when necessary to comply with community guidelines.

“Please review and edit such content before sharing it with others. As with all the content you share on our Services, you are responsible for ensuring that it complies with our Professional Community Policies, including not sharing misleading information,” says LinkedIn.

The social network is likely hoping that its generative AI models will improve over time, especially since it now uses member data to train those models by default, requiring users to opt out if they do not want their data used.

Chema Carvajal Sarabia

Journalist specialized in technology, entertainment and video games. Writing about what I'm passionate about (gadgets, games and movies) allows me to stay sane and wake up with a smile on my face when the alarm clock goes off. PS: this is not true 100% of the time.
