OpenAI considers usage-based pricing for ChatGPT
OpenAI is exploring usage-based pricing for ChatGPT, according to CEO Sam Altman.
In a recent Bloomberg interview, Altman admitted that ChatGPT's current pricing strategy isn't exactly sophisticated. When OpenAI first launched paid tiers, it simply tested two price points – $20 and $42 per month. Users balked at $42 but seemed happy with $20, so that's what the company went with. No focus groups, no market research – just a gut call made in late 2022.
Now OpenAI is considering a more flexible approach. "A lot of customers are telling us they want usage-based pricing," Altman explained. "Some months I might need to spend $1,000 on compute, some months I want to spend very little." He specifically ruled out time-based billing, though, calling it an AOL-era relic the company wants to avoid.
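As a rough illustration of the difference, here is a minimal sketch of flat-rate versus usage-based billing. The rates, unit names, and functions are invented for the example and are not anything OpenAI has announced:

```python
# Hypothetical comparison of flat-rate vs. usage-based billing.
# All prices and unit names below are made up for illustration.

FLAT_MONTHLY_FEE = 20.00          # assumed fixed subscription price
PRICE_PER_COMPUTE_UNIT = 0.002    # assumed per-unit compute rate

def flat_rate_bill(units_used: float) -> float:
    """Same charge every month, regardless of how much compute is used."""
    return FLAT_MONTHLY_FEE

def usage_based_bill(units_used: float) -> float:
    """Charge scales directly with the compute actually consumed."""
    return units_used * PRICE_PER_COMPUTE_UNIT

# A heavy month and a light month, echoing Altman's example:
for units in (500_000, 2_000):
    print(f"{units:>9} units -> flat: ${flat_rate_bill(units):.2f}, "
          f"usage-based: ${usage_based_bill(units):.2f}")
```

Under these invented numbers, the heavy month works out to roughly $1,000 while the light month costs only a few dollars, which is the asymmetry Altman describes.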
This shift makes sense given OpenAI's new "o" models, where more computing power can lead to better results – and higher operating costs. The company has already moved into the premium segment with ChatGPT Pro, which uses a more capable o-model and provides expanded access to computing power for $200 monthly – about ten times the price of the regular $20 subscription.
Despite the premium pricing, Altman today revealed on X that ChatGPT Pro is actually losing money. He admits he personally set the price thinking it would be profitable for OpenAI – a miscalculation that likely adds urgency to their pricing strategy revision, given how capital-intensive the business is.
AGI and superintelligence remain core goals
Despite all this talk about pricing and products, Altman insists OpenAI hasn’t lost sight of its ultimate goal: building Artificial General Intelligence (AGI) and superintelligence (ASI).
Currently, OpenAI’s research team works from a separate building a few miles from the rest of the company – though Altman says this was just a logistical space-planning decision. They plan to eventually bring everyone together on one campus, where research will still have its own dedicated area. “Protecting the core of research is really critical to what we do,” Altman explains.
What counts as AGI? According to Altman, it's when AI can replace highly skilled human workers. "If you could hire AI as a remote employee to be a great software engineer, I think a lot of people would say, 'OK, that's AGI-ish,'" he says. He thinks we could see something like that within four years.
But Altman admits "AGI" has become a fuzzy term. Questions about autonomy remain unanswered, and the goalposts keep moving as AI advances. This is why OpenAI has begun discussing AI development in terms of specific levels to better represent progress. According to Altman, one potential indicator of superintelligence would be AI's ability to accelerate scientific progress.
Over the past year, OpenAI has faced significant internal criticism for allegedly lax safety precautions, particularly from its AGI and ASI teams. The company currently has three safety oversight bodies: the Safety Advisory Group (SAG) for technical studies, a board-level Safety and Security Committee (SSC), and a joint Deployment Safety Board (DSB) with Microsoft. Altman says this three-tiered structure creates confusion within the company, and OpenAI is working on streamlining it.