Ethics and big companies don’t usually get along, and Meta has shown this once again by dissolving its Responsible AI (RAI) team. The company has since reallocated those resources to bolster the development of generative-AI projects.
According to The Information, some members of the team will transition to working on generative-AI products, while others will focus on Meta’s AI infrastructure. Contrary to appearances, the conglomerate led by Zuckerberg maintains that it still intends to build responsible AI (as explained on its own website).
Earlier this year, a restructuring had already significantly reduced the team’s capacity. The RAI team had been active since 2019, but its initiatives always required lengthy negotiations before they could be implemented.
The RAI team’s primary purpose was to oversee the AI training process: it reviewed the training data to ensure it was diverse and appropriate. Even so, despite advances in these systems, many still end up causing problems. Among the most notable is the case of Instagram, whose algorithm promoted accounts with sexually abusive content to minors.
Among Meta’s ongoing projects is Llama, its own rival to ChatGPT, which is available for download and can be used in a variety of fields. Meanwhile, Meta is embroiled in legal battles in the United States: the company is being sued over the psychological harm its platforms (Instagram, Facebook, etc.) allegedly cause younger users.