OpenAI and Anthropic lose billions on AI development and operations

Summary

An analysis by The Information, based on internal financial data, shows that OpenAI could lose as much as $5 billion this year. Its rival, Anthropic, also faces significant losses in the billions.

OpenAI’s costs for training AI models and running the inference system could reach $7 billion, The Information reports. Inference costs are expected to rise further when Apple introduces its ChatGPT integration. Additionally, personnel costs could reach up to $1.5 billion.

OpenAI spends nearly $4 billion on renting Microsoft servers alone, despite paying a discounted rate of about $1.30 per Nvidia A100 chip per hour. This supports the idea that Microsoft’s interest in AI investments is primarily tied to the growth of its Azure cloud platform; Microsoft’s own AI products, such as Copilot or the Bing integrations, are underperforming by comparison.
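For a sense of scale, here is a rough back-of-the-envelope sketch, assuming, purely for illustration, that the entire $4 billion server bill were charged at the quoted A100 rate (the actual mix of hardware and pricing is not public):

```python
# Illustrative back-of-the-envelope, not figures from The Information:
# what a ~$4B annual server bill would imply at ~$1.30 per A100 per hour.
server_rent_usd = 4_000_000_000   # reported annual spend on Microsoft servers
a100_usd_per_hour = 1.30          # reported discounted A100 rate
hours_per_year = 365 * 24

gpu_hours = server_rent_usd / a100_usd_per_hour
equivalent_a100s = gpu_hours / hours_per_year

print(f"Implied A100-hours per year: {gpu_hours:,.0f}")            # ~3.1 billion
print(f"A100s running around the clock: {equivalent_a100s:,.0f}")  # ~350,000
```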

OpenAI’s AI training costs, including data payments, could rise to $3 billion this year. The company currently employs about 1,500 people and plans to grow further. According to The Information, personnel expenses could reach $1.5 billion by year-end.

In total, OpenAI’s operating costs could amount to $8.5 billion this year, compared to revenues of $3.5 to $4.5 billion, depending on second-half sales.
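The headline loss figure follows from simple arithmetic on the reported numbers; a minimal sketch using the cost and revenue figures cited above:

```python
# Illustrative reconciliation of the reported figures (all values in $ billions).
compute_costs = 7.0      # training + inference, per The Information
personnel_costs = 1.5    # staff costs
total_costs = compute_costs + personnel_costs   # ~8.5

revenue_low, revenue_high = 3.5, 4.5            # this year's projected revenue range
loss_low = total_costs - revenue_high           # 4.0
loss_high = total_costs - revenue_low           # 5.0

print(f"Implied loss: ${loss_low:.1f}B to ${loss_high:.1f}B")
# Consistent with the reported loss of up to $5 billion.
```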

Anthropic’s situation appears worse, albeit on a smaller scale. A source familiar with the figures says Anthropic expects to spend over $2.7 billion this year, with revenue only a fifth to a tenth of OpenAI’s. The startup estimated $2.5 billion for computing costs alone.

By the end of the year, Anthropic expects to generate approximately $800 million in annualized revenue, or $67 million per month. However, Anthropic will have to share that revenue with Amazon.

Doubts grow about the economic viability of large AI models

Developing and operating large AI models is expensive, and competition is fierce, not just between Anthropic and OpenAI: Meta is entering the market with open-source models, and smaller companies like Mistral and Cohere also want a piece of the pie, whether regionally in Europe or in niches such as B2B data chat.

Organizations struggle to measure the value of generative AI in their processes, especially when they roll out chatbot systems such as Microsoft’s Copilot or OpenAI’s ChatGPT Enterprise as a “general purpose technology” for all employees without clear use cases. In these cases, it is hard to systematically track who is using AI for what, and whether it actually improves quality or speed.

Accordingly, initial doubts are emerging about the economic viability of the current AI market. This doesn’t negate the value of generative AI in general, but questions whether the investments are proportionate to the benefits.

Potential growth areas include new products like OpenAI’s SearchGPT, but whether they can replicate ChatGPT’s success is uncertain. Competing products, such as Google’s Gemini subscription service, have so far failed to make a significant impact. ChatGPT’s success could turn out to be a one-off.

More versatile multimodal models could create new use cases, leading to new applications, increased usage, and higher revenue. If efficiency gains arrive at the same time, margins could ultimately improve. However, many questions remain about the eventual quality of these capabilities and the cost of producing multimodal content such as video.

To get to the next level, the AI market may need a major breakthrough in scaling general reasoning capabilities. This would open up new automation and business opportunities, and potentially solve fundamental problems with current AI systems, such as generating bullshit.

For OpenAI CEO Sam Altman and others, this is likely the ultimate bet, explaining why large companies continue to invest billions in research and development.
