Even though AI has been creeping into more of our daily lives for a while now, 2022 feels like the year it truly broke through into the public consciousness. Interestingly, the breakthrough did not come from scarily inaccurate, equally dystopian biometric facial recognition, or from the growing delegation of public governance to automated decision-making systems that struggle to understand who most of us are. Rather, everybody is talking about AI because of the creative flourishes it has shown this year, as major generative models such as DALL-E 2, Stable Diffusion, and most recently ChatGPT have come into the public domain.
These powerful AI engines, combined with the almost infinite creativity of the internet, have produced some truly breathtaking works of art and literature, as well as all sorts of unique creative and useful endeavors, piquing everybody's interest. It is in this context, then, that we are peering into 2023 to see what we can expect from AI. Behind us lies a year of impressive commercial AI applications overlaid onto a complicated mix of broader and more insidious uses that are starting to have real sway over how the world around us is run and what our place is in it. Let's take a look.
AI vs Search Engines?
One of the big things to come out of the rapid rise of ChatGPT is its ability to offer rich and impressive responses to queries and prompts that look a lot like the search terms you'd type into Google. In fact, the New York Times recently described this as a "code red" moment for search, and developers have even built Chrome plugins that automatically overlay ChatGPT responses onto the Google Search results page every time you search for something. It looks, then, as though 2023 will be a battleground year between traditional search engines and generative AI chatbots.
It is key here, however, to point out that as impressive as these AI tools have been this year, and as good as their responses can look, there have been plenty of examples of responses that are nothing more than fancy-looking nonsense. They are not fact-checked and therefore cannot be relied upon in the same way a traditional search engine can, which is exactly what OpenAI's CEO Sam Altman recently said himself. This doesn't mean there won't be a battle, however: the likes of Altman know that this lack of factual reliability is one of the major current drawbacks of their tools, and they will be working to address it. How much progress they make next year will be very interesting to see.
With everybody getting so excited about ChatGPT, it is easy to forget that it is actually a refinement of the Generative Pre-trained Transformer 3 (GPT-3) model, which launched back in June 2020. GPT-3, however, was initially released only to researchers and other noted professionals rather than the general public. This meant that while news of its capabilities broke through into the public consciousness, the reaction was nothing like what we've seen over the last few weeks.
This might be different, however, when those same researchers and noted professionals get their hands on GPT-4, which is heavily rumored to be launching early next year. Little is known about what to expect from GPT-4, but following the massive fame achieved by ChatGPT, it is almost certain that more people will be paying attention this time around.
Text-to-video AI to grab some headlines
We’ve already seen a text-to-video AI model in 2022, thanks to Meta’s impressive Make-A-Video tool. The tool converts short descriptions into short GIF-like clips, with users able to choose between three video styles: surreal, realistic, and stylized. Make-A-Video adds a layer of unsupervised learning to understand motion and applies it to traditional text-to-image generation, addressing the challenge of generating video, rather than static images, from text.
However, professors at Stanford University’s Institute for Human-Centered Artificial Intelligence have highlighted generative video as one of their most anticipated AI developments for 2023. Associate professor of computer science Percy Liang said:
“We may be getting to a point next year where we won’t be able to distinguish whether a human or computer generated a video. Up to today, if you watch a video, you expect it to be real, but we’re seeing that hard-line start to evaporate.”
Furthermore, with OpenAI also having recently unveiled Point-E, which can generate 3D point clouds from text prompts, we may even see AI-generated items turning up in video games, as well as being used as background props in movies.
Troublesome implementations of unsafe AI products
Sticking with the Stanford professors for a moment, another thing they expect to see in AI in 2023 relates to the buzz that currently surrounds the subject. With so much interest swirling around the topic, many developers are likely to rush their products to market in order to capitalize on the hype.
This is a big deal because AI is shaping up to be a truly transformational technology. Even if much of the buzz whipped up by ChatGPT in recent weeks sounds like hyperbole, it is an early iteration of a product that has been put out into the wild so that its developers can learn how it works and what its potential problems may be. It is going to get better, and it will transform many aspects of modern life. This means that if people rush out products without fully testing them or considering their effects on society, things could get messy very quickly.
More talk of regulation
Governments and lawmakers stepping in to regulate AI and ensure it is used safely has been on the cards for a while now. The most prominent example is the EU AI Act, currently being debated by the European Union. If passed, the Act will be the first horizontal law to regulate all uses of AI, categorizing applications into three risk levels. The first category covers applications that pose an unacceptable risk, such as government social scoring, which would be banned outright. The second covers high-risk applications, such as CV-scanning tools, which would be subject to specific legal requirements. The third covers applications that are neither banned nor listed as high-risk, which would remain largely unregulated. The EU AI Act will likely come into force in 2023 or 2024, and even then there will be a 24-36 month grace period before the main requirements take effect.
In the United States, the White House Office of Science and Technology Policy has released a Blueprint for an AI Bill of Rights, which outlines five principles to guide the design, use, and deployment of automated systems: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback. However, the blueprint is not binding legislation; it is more of an educational tool to raise awareness about AI issues. This makes it something to build on rather than something to rely on.
In addition to the EU AI Act and the AI Bill of Rights, a number of other national regulations are being developed in countries such as Canada, China, the UK, and Brazil. These laws may address human rights issues and protect citizens from potential AI-based abuses, but it remains to be seen how effective they will be in practice. There is also a concern that AI laws and regulations may not keep pace with the rapid advancement of the technology, leaving gaps in the legal framework. Overall, it is clear that developing and implementing laws to govern AI is a complex and ongoing process that will continue well into 2023 and beyond.