Sam Altman, the CEO of rising star tech company OpenAI, says that people are setting themselves up for disappointment in terms of the capabilities of GPT-4. In a recent interview with StrictlyVC, Altman answered questions about the new language model and expanded upon why he believes it’s going to disappoint people.
The biggest question on everyone’s lips is when GPT-4 will be released. To this, Altman responded, ‘It’ll come out at some point when we are confident we can do it safely and responsibly.’
There’s a lot of speculation about when the next generation of the GPT language model will arrive, and people are incredibly excited about it. However, Altman gave no release time frame, and he believes that people will ultimately be disappointed by the model when it drops.
‘The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from. People are begging to be disappointed, and they will be. The hype is just like… We don’t have an actual AGI, and that’s sort of what’s expected of us.’
For clarification, AGI means artificial general intelligence. AGI refers to software with generalized human cognitive abilities: a system that can think, process information, and make decisions much as a human does. This type of system, to our knowledge, doesn’t yet exist. But this is what people are expecting of GPT-4.
In general, Altman was very deliberate in his language during the interview, breaking from the industry habit of overselling upcoming tech. He was careful not to make promises and kept to the facts. The prevailing facts here are that GPT-4 is not coming out until it can be released safely and responsibly, and that it will not be an example of artificial general intelligence, consciousness, or sentience.
The interviewer also posed questions about the more practical side of GPT-4 and other large language models, all of which Altman answered with candor and the same measured care as before.
When asked whether OpenAI is making money, Altman responded, ‘Not much. We’re very early.’ This presumably refers to profit from the company’s products, as we know Microsoft is looking to invest $10 billion in OpenAI. Tech startups commonly attract massive investment while remaining unprofitable for years, so the statement makes sense.
Altman was also asked how AI should handle differing viewpoints. He responded that everyone should be able to decide how they want their AI to behave: ‘If you want the super never-offend, safe-for-work model, you should get that, and if you want an edgier one that is creative and exploratory but says some stuff you might not be comfortable with, or some people might not be comfortable with, you should get that.’
He also added: ‘And really what I think — but this will take longer — is that you, as a user, should be able to write up a few pages of ‘here’s what I want; here are my values; here’s how I want the AI to behave’ and it reads it and thinks about it and acts exactly how you want because it should be your AI.’
The interviewer also asked for Altman’s views on whether AI text generators are a benefit or a detriment to learning and society. He stated that ‘There may be ways we can help teachers be a little bit more likely to detect output of a GPT-like system, but a determined person will get around them, and I don’t think it’ll be something society can or should rely on long term.’
He added that ‘Generated text is something we all need to adapt to, and that’s fine. We adapted to calculators and changed what we tested in maths class, I imagine. This is a more extreme version of that, no doubt. But also the benefits of it are more extreme as well.’
It’s true that systems like GPT-3, ChatGPT, and the unreleased GPT-4 offer massive benefits. However, whether their growing use proves beneficial or detrimental to society will be up to companies like OpenAI, governments and policymakers, and individual users.
We’ll keep you updated on the highly anticipated release of GPT-4, but we caution you against getting your hopes up too high. In all likelihood, it’ll be an incremental improvement over GPT-3, as has been the case with previous language models.