No, they didn't raise $122B as the HN title implies. A big chunk of that $122B is a "maybe" that depends on various things that need to happen in the future.
Oh, man... I can't wait to see where this is going. Might not be pretty after all.
I've wondered how many announced fundraising rounds were like this. It's in everyone's interest (VCs and entrepreneurs) if the message to the outside world is "this company is amazing so they've raised a boatload of cash". But VCs might not want to give it all up front, or unconditionally.
It makes it hard to say what the valuation of a company is. If the milestones are unlikely to be hit, then it's anyone's guess.
This is a common structure. It's confusing to people who don't know finance or startups when they first see it.
Even VCs don't get all of their fund money delivered into their bank account when they raise a fund. It's inefficient and undesirable for everyone involved to have to move all of the money up-front, at once.
If you talk to anyone in startup funding or finance they'll be familiar with the term "capital call" which describes how committed capital obligations are delivered at a later date than the initial deal: https://en.wikipedia.org/wiki/Capital_call
I think more people are aware that VCs raise commitments for a fund that they can pull in via capital calls than are aware that startup funding from VCs comes with hurdles to clear.
This is perhaps because the most common round to raise is a small/early one, and these tend not to have hurdles. Founders that only ever raised these rounds wouldn't necessarily know what happens in later/bigger rounds.
Also, I wonder if capital calls come with hurdles as well? That is, can an LP refuse to put in more money if the VC's recent investments have not done well? I would think not, since it typically takes many years to determine whether investments were good or not.
Gotta hit that high IRR as a fund manager, and the clock starts when the cash comes in, so capital calls are appreciated by fund managers. Unless they are emerging managers (finance's equivalent of a startup) and their LPs are less than institutional and ghost them when the capital call hits.
Their ~$50 million total Alibaba investment turned into ~$70 billion. As of two years ago they were still liquidating out of it.
January 26, 2024 - "Japanese investment holding firm SoftBank Group Corp has largely cleared its ownership in e-commerce giant Alibaba Group Holding, concluding one of the most successful deals in China's internet industry and a holding that spanned about 23 years."
"SoftBank, which invested US$20 million into Alibaba when it was still a start-up in 2000, said in a corporate filing on Thursday that it was set to book a gain of 1.26 trillion yen (US$8.5 billion) - about 425 times the value of its initial outlay - for the Tokyo-based firm's 2024 financial year after divesting its [remaining] shares via subsidiary Skybridge."
It just makes funding rounds hard to compare. Money in the bank is money in the bank, while a lot of the "committed capital if you reach a milestone" is capital that would have been easy to get anyway once you hit that milestone. If the milestone is sufficiently far off, and the commitment has enough outs, you may as well have just raised another round in the future.
Note that even that "money in the bank" of traditional venture firm is not really money in the bank. VC, PE, and hedge fund managers usually don't have all the cash for the fund sitting in the bank at all times. Rather, their agreement with the LPs that fund the fund is structured as a series of capital calls: it gives the fund the right to demand that their LPs deposit cash in their bank accounts within 10-30 days, which can then be used to fund the investments that the VC firm makes. The capital calls are backed by legal documents enforceable in court, with pretty stiff penalties for failing to meet a capital call.
Such a funding structure here isn't all that different: the funding agreement gives OpenAI the right to call on their backers to make certain cash deposits, contingent upon milestones being met. Deep down inside, "money in the bank" doesn't actually exist, it's just mutual agreements backed by force of law.
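For the curious, the committed-vs-called distinction can be sketched in a few lines. This is a toy model only; all names and numbers are invented:

```python
# Toy model of committed capital vs. cash actually in the bank.
# An LP commits a total amount up front; the fund draws it down
# via capital calls as deals close. (Illustrative only.)
class Fund:
    def __init__(self, commitments):
        self.commitments = dict(commitments)            # LP -> committed total
        self.called = {lp: 0.0 for lp in commitments}   # LP -> amount drawn so far
        self.cash = 0.0                                 # actual money in the bank

    def capital_call(self, lp, amount):
        remaining = self.commitments[lp] - self.called[lp]
        if amount > remaining:
            raise ValueError("call exceeds LP's remaining commitment")
        self.called[lp] += amount
        self.cash += amount  # LP must wire this within the notice period

fund = Fund({"LP_A": 100.0, "LP_B": 50.0})
fund.capital_call("LP_A", 30.0)
print(fund.cash)  # 30.0 in the bank; the other 120.0 is only committed
```

The headline number is the sum of commitments; the bank balance is only what has been called.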
That’s logically inconsistent. If the company was performing poorly enough that they couldn’t meet their funding milestones from a previous round, they’re not going to have an easy time raising the same money in a future round.
The milestones aren’t a hard-stop that forbids the previous funding round participants from providing the money if they still choose. It’s just an out.
What I am saying is that if you do meet the milestones from your previous round, you're going to have an easy time fundraising anyway, so funding contingent on milestones isn't that different than just saying "well, if we need more money we can do another round"
Fundraising rounds are difficult, laborious, and distracting. It would be extremely different to try to multiply the number of rounds by 3-5X. There's nothing easy about that.
You're also ignoring that the market changes frequently. If you only raised as much money as you needed for the next 4-6 months with plans to re-raise all the time, you'd have to constantly be sizing your growth plans up or down based on how the market felt about startup investing that month.
Imagine the company having to either do speed hiring or large layoffs every few months to adjust to the size of the fundraising round they were able to get this time around.
Nothing about what you're suggesting would be easier, or easy at all
The funds are committed under the terms of the deal (share price, things like board seats, and other details). There are legal obligations to provide it.
This is a common structure for large investments. It would be really inefficient for all of these investors and companies to have to have the money sitting in cash to do a deal and then transfer it into the company's bank where it sits and earns interest for years until they can deploy it.
Even VC firms who raise funds work this way. The capital is "committed" but investors don't wire all of the money over right away so it can sit in the VC firm's bank accounts, waiting. The VCs do what's called a "capital call" through which they're legally bound to provide the money they committed when requested, under the terms of the deal.
It's splashier this way, and is meant to shape the narrative, make other companies fear their warchest, and make hiring easier. Of course, those who are in-the-know won't be fooled, but the perception of the general public will be set in stone by the PR framing.
With NASDAQ and NYSE looking to reduce the timelines for new public companies to be included into indices (“fast entry” rule), I have a feeling that OpenAI and SpaceX and Anthropic are mostly looking to dump their inflated shares into the public’s retirement accounts by force.
Michael Burry called out this structural manipulation play recently:
$2b/month which is $24b/year. Not as much as I expected considering they were at $20b by end of 2025.[0] They only added $4b since?
Anthropic had $19b by end of February 2026 and they added $6b in February alone.[1] This means if they added another $6b in March, they're higher than OpenAI already.
However, I heard that OpenAI and Anthropic report revenue in a different way. OpenAI takes 20% of revenue from Azure sales and reports revenue on that 20%. Anthropic reports all revenue, including AWS's share.[2]
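If that's right, the two conventions diverge a lot for the same underlying sales. A toy illustration with invented numbers (not actual OpenAI/Anthropic figures):

```python
# Same underlying partner-channel sales, reported two ways.
# Numbers are hypothetical, purely to show the gap between conventions.
gross_sales = 10.0   # $B sold through the cloud partner
rev_share = 0.20     # model provider's contractual cut

net_revenue = rev_share * gross_sales   # report only your share
gross_revenue = gross_sales             # report the full amount (GMV-style)
print(net_revenue, gross_revenue)       # 2.0 vs 10.0
```

Identical business activity, a 5x difference in the reported top line.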
They aren't reporting anything yet. What we're hearing is just from news media, who get their leaks/info from investors, who get some form of IR reports/presentations.
Both will do public reporting only when they IPO[4] and have regulatory requirement to do so every quarter.
For private companies[1] reporting to investors, there are no fixed rules really[3]
Even for public companies, there is a fair amount of leeway in how GAAP[2] expects revenue to be recognized. The two ways you highlight are how you account for GMV - Gross Merchandise Value.
The operating margin becomes much thinner, so multiples on absolute revenue get distorted when you count GMV as revenue.
For example, if you count GMV in revenue, then AMZN trades at only ~3x (~$2.25T / ~$800B) compared to, say, MSFT ($2.75T / $300B) and GOOG ($3.4T / $400B), which both trade at ~9x their revenue.
While roughly similar in maturity, size, growth potential, and even with a large overlap of directly competing businesses, there is a huge (3x vs 9x) difference because AMZN's number includes retail GMV that GOOG and MSFT do not have at the same scale in theirs.
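Just to make the arithmetic explicit, here is the same calculation in a few lines (market caps and revenues are the rough figures quoted above, in $B, not exact filings):

```python
# Rough revenue multiples: market cap / annual revenue, both in $ billions.
# Figures are the approximate ones quoted in the comment, for illustration.
companies = {
    "AMZN (revenue incl. GMV-heavy retail)": (2250, 800),
    "MSFT": (2750, 300),
    "GOOG": (3400, 400),
}
for name, (market_cap, revenue) in companies.items():
    print(f"{name}: {market_cap / revenue:.1f}x")
# AMZN comes out ~2.8x, MSFT ~9.2x, GOOG ~8.5x
```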
---
[1] There are still a lot of rules for reporting to the IRS and other government entities, but the information we (and the news media) get is from investors, not leaks from government reporting - which would typically be private and illegal to disclose to the public.
[2] And how the Big 4, who sign off on the audit, prefer to account for it.
[3] As long as it is not explicit fraud or cooking the books, i.e. they are transparent about their methods.
[4] Strictly, this would be covered in the prospectus (S-1) a few weeks before going public, and that is the first real look we get into the details.
Does the GAAP accounting matter if everyone passively buys shares due to the new fast entry rules, which corruptly will force us all to buy into these companies? The fundamentals and true value seem less relevant than ever:
> They aren't reporting anything yet. What we're hearing is just from news media, who get their leaks/info from investors, who get some form of IR reports/presentations.
The $24b figure is literally in OpenAI's announcement.
The $19b ARR and $6b added in Feb came directly from Anthropic CEO recently.
Except it's not 100x revenues, and it's not 17% growth. I don't know where you got those numbers from?
The numbers OpenAI gave in the post would mean a 30x multiple pre-money. And the $20B -> $24B run-rate growth since the start of the year could plausibly mean anything from 110% to 200% annualized growth rate, depending on whether that happened over two or three months. The $24B is a lower bound as well, since they only gave us one significant digit for the monthly revenue.
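The annualization is easy to check directly. This is just the arithmetic, assuming the $20B and $24B run-rate figures from the thread:

```python
# Annualizing the jump from a $20B to a $24B revenue run rate.
# If the jump took n months, the implied annual growth is (24/20)**(12/n) - 1.
for months in (2, 3):
    growth = (24 / 20) ** (12 / months) - 1
    print(f"over {months} months: {growth:.0%} annualized")
# over 2 months -> ~199% annualized; over 3 months -> ~107% annualized
```

So the "110% to 200%" range in the comment is roughly right, with the low end closer to ~107%.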
You're right, I was thinking about 100x revenues and forgot to confirm the math. Updated to reflect your point. ChatGPT itself provided the 17% number (its most recently available growth rate)...
And that is revenue only. For the past 15 or so years, most US companies (and especially startups) have talked about revenue only, whereas only profit should matter.
E.g. what good is $20 billion per year when "OpenAI is targeting roughly $600 billion in total compute spending through 2030"? That is $150 billion per year.
Why are we treating OpenAI and Anthropic differently than say, Amazon or Uber? Both companies invested in growth for many years before making a profit. Most tech companies in the last 2-3 decades lost money for years before making a profit.
Why are we saying that OpenAI and Anthropic can't do the same?
How did Uber ever break even? They lost $34b before making a profit.
Uber was only on a path to monopoly in the US, not world wide. It’s lost to local competitors in most countries. And it can get disrupted by self driving cars soon.
OpenAI’s SOTA LLM training smells like a natural monopoly or duopoly to me. The cost to train the smartest models keep increasing. Most competitors will bow out as they do not have the revenue to keep competing. You can already see this with a few labs looking for a niche instead of competing head on with Anthropic and OpenAI.
It's not as much as you think. Google is spending $185b on data centers this year alone. Amazon is spending $200b this year. Total capex for big tech is ~$700b in 2026 and we're not including neo clouds, Chinese clouds, and other sovereign data centers.
Since everyone is trying to get compute from anywhere they can, including OpenAI going to Google, it's hard to tell what is used internally vs externally.
For example, it's entirely possible that Google's internal roadmap for Gemini sees it using $600b of compute through 2030 as well. In that case, OpenAI needs to match since compute is revenue.
why should only profits matter? if i had a killer product today that i just need to sell tomorrow, wouldn't you still invest today knowing i'll probably only start to make money tomorrow (or perhaps next week)?
the expectation is that they'll eventually make money. they can't raise forever. startups are only unprofitable for a few years; most companies that have existed for a long while have been profitable
and since they're expected to make a LOT of money, everyone wants a piece of that future pie, pushing up the valuation and amount raised to admittedly somewhat delusional levels like here
And why do you think twenty competitors can stay competitive for years to come?
Industries always consolidate and winners emerge. SOTA LLMs look like a natural monopoly or duopoly to me because the cost to train the next model keeps going up such that it won't make sense for 20 competitors to compete at the very high end.
TSMC is a perfect example of this. Fab costs double every 4 years (Rock’s Law). It's almost impossible to compete against TSMC because no one has the customer base to generate enough revenue to build the next generation of fabs - except those who are propped up by governments such as Intel and Rapidus. Samsung is basically the SK government.
I don’t see how companies can catch OpenAI or Anthropic without the strong revenue growth.
Profit is money you couldn’t figure out how to spend. During growth, you want positive operating margins with nominal profits. When the company/market matures, you want pure profits because shareholders like money. If you can find a way to invest those profits in new areas of growth, that’s better.
Everyone wants to treat OpenAI like a car wash business where they need to make a profit almost immediately. I don’t know why people can’t understand that the industry is in a rapid growth stage and investing the money is more important than making a profit now. The profits will come later.
> Today, we closed our latest funding round with $122 billion in committed capital at a post money valuation of $852 billion.
A couple things that stand out to me about this is the use of the phrase "committed capital", which only sounds like a promise that could break from various circumstances, and the valuation of their funding keeps changing so it sounds like a max rather than the valuation every investor invested at.
Probably a lot? It would be much more tax-advantageous to do it this way, $50B worth of credits != $50B worth of spend on Amazon's part, and they might meet in the middle about how much equity that translates to.
Situation A:
You're Amazon. You give OpenAI $50B cash investment, they then hand you back the $50B because they buy $50B worth of Amazon AWS services (they would use AWS or other equivalent compute anyway). OpenAI pays an additional $1-5B in sales taxes. You have $25B opex, $25B profits, you pay 21% corporate taxes on the profits.
Situation B:
You're Amazon. You let OpenAI use your services by handing them API credentials that unlock what would normally cost $50B worth of services. You have zero revenue from the transaction, write off the $25B opex as a tax writeoff on your other profits. OpenAI also doesn't have to pay sales tax.
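Rough arithmetic for the two scenarios, using the comment's own round numbers. This is a back-of-envelope sketch, not tax advice, and all figures are the comment's assumptions:

```python
# Naive comparison of the two structures described above.
# All figures are in $ billions and come from the comment's assumptions.
CORP_TAX = 0.21

# Scenario A: cash investment, OpenAI spends $50B on AWS services.
revenue_a = 50.0              # AWS revenue recognized from the deal
opex = 25.0                   # assumed cost of delivering the services
profit_a = revenue_a - opex   # 25.0 of taxable profit
tax_a = CORP_TAX * profit_a   # ~5.25 paid in corporate tax

# Scenario B: service credits instead of cash. No revenue is recognized,
# and the $25B delivery cost offsets other taxable profits.
tax_b_shield = CORP_TAX * opex   # ~5.25 of tax avoided elsewhere

swing = tax_a + tax_b_shield     # ~10.5 difference, before sales tax
print(tax_a, tax_b_shield, swing)
```

On these assumptions the credit structure swings roughly $10B of corporate tax, plus whatever sales tax OpenAI avoids.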
That’s typical. Large funding rounds usually aren’t delivered as one single giant lump sum into the bank account. The capital is committed in stages that can depend on hitting milestones or goals.
This is done even in smaller startup funding rounds sometimes.
Fair, I think a lot of what I've been perceiving is the gymnastics in how funding and valuation and deals get reported. There ends up being a ton of asterisks that makes the headline news deviate quite significantly from reality, e.g. https://arstechnica.com/information-technology/2026/02/five-...
I'm old enough to remember when companies worth $1 billion were called "unicorns." Now we have a company raising 122 times that? Valued at nearly 1000 times that...?
At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
I think this is reality-distortion field rivaling that of Jobs', and a crisis of faith. Nobody apparently believes that capital is worth investing into anything but AI.
> Nobody apparently believes that capital is worth investing into anything but AI.
This is the main reason we see this insane investment into AI imo. If you imagine having lots of money, where should you invest that currently?
Housing market: Seems very overvalued (at least in Germany). Also, with the current uncertainty and inflation, it's hard to make an investment that pays back over 20-30 years. So building is also difficult.
Stocks: Very volatile currently, and not only since Iran. It seems to me that since the 2008 financial crisis, investors don't enjoy stocks as much as before.
Gold: Only if you are paranoid about the collapse of society. It doesn't make sense to invest in something that pays no yield.
Crypto: Same as gold, but better if you like gambling. I would assume most people who are very rich don't gamble with most of their fortune.
Looking around, and especially forward, it would be military tech, e.g. [1], and its supply chain, e.g. [2] :-\ Valuations are not as crazy, but I bet there's going to be a lot of demand in the coming decade, unfortunately.
Chip production, too, of course, but it's overflowing with money already, apparently. It's growing though, because there are real actual shortages of stuff like RAM and SSDs, there's money to be made immediately if you can. Chinese RAM manufacturers are building out like crazy.
Would you be fine with the ethical implications of funding the industry to fight WWII? Would you consider funding Ukrainian military unethical? Or Taiwanese?
This is, sadly, not theoretical, and I'm afraid we'll soon see more of such choices, not fewer.
It's the result of too much echo chambered bullshit floating around daily about how capable LLMs really are. It's literally crypto/blockchain all over again. It's one big lie that a lot of people have bought into which causes it to self-perpetuate, like religion.
Also, the valuation for such a debt laden company should be viewed with great skepticism. I'm afraid a lot of mutual funds will end up holding the bags.
> At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
It is deliberate. Period.
It's always been known that you make money in the private markets and pre-IPO companies and retail is the final exit for insiders and early investors.
Retail is not allowed to be early into these companies (Because that would ruin the point of being an insider) and this "exposure" has to be at the near top.
> The broad consumer reach of ChatGPT creates a powerful distribution channel into the workplace
They mention this line in different forms a couple of times in the article. It’s clear they’re pretty rattled about Anthropic’s momentum in enterprise, I wonder how confident they really are in this rationale.
Kind of makes me wonder how 'accelerated' the timeline of publishing this article was based upon the Claude Code leak today. Considering everyone has gotten a sneak peek at what Anthropic is working on OpenAI might be a little worried. This could also just be coincidence, but this piece really does read like self-encouraging fluff.
This announcement completes the betrayal of their founding principles.
"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
- Not advancing digital intelligence
- While locking people into a superapp
- Because they are further constrained to generating financial returns
There is a lot of talk about the AI bubble. I think there are comparisons to the late 90's/early 00's here with early stars rising quickly but, ultimately, falling. Since essentially everything touches the internet now it is clear that the 'internet bubble' was more of a shakeup of companies than a real over-hype of the internet. That, I think, is at play right now too with the 'AI bubble'. AI isn't going away but some of the early stars may not make it.
So, the real question here is: Is OpenAI Netscape, or are they Google?
The title is incorrect. The $122B includes previous promises. They raised an additional $12B of promises:
"The round totaled $122 billion of committed capital, up from the $110 billion figure that the company announced in February. SoftBank co-led the round alongside other investors, including Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said."
This IPO, if anyone underwrites it, is going to fleece retail so hard. Better make it a SPAC with the help of Chamath and Cantor Fitzgerald.
This all smells fishy. They didn’t “raise” $122B. Raise means someone put funds in your bank account and said send us the next quarterly report to tell us how our investment is doing.
They have pieces of paper from folks saying they may put up funds or goods and services in that amount. But it's important to remember that:
1. While they are “raising” commitments others are backing out of deals (see Disney, various data center things). Big deals announced to major fanfare are falling through.
2. They slashed capital expenditure plans after previously boasting about all the commitments. This is turning into bonkers math of X + Y - X + Z + W - 1/2 of Y = ? when trying to keep track of what's actually "raised / real" vs what was PR puffery that folks ran away from later.
3. Circular financing still seems to be going on. Big difference of here’s cash, have fun and various “commitments” and balance sheet games that seem to still be going on.
Net net this all still looks very scary and iffy at best.
If OpenAI goes down their investors will lose any chance at getting their money back. They need to keep pretending things are going great for as long as possible.
Are we truly arguing semantics on HN, a news aggregator for startups, where everyone knows what a "raise" is and that it is obviously not funds sitting in your bank account? I don't disagree with the rest of your comment, and the core thesis is valid that OpenAI is very much doing circular financing.
Edit: A raise comes with stipulations on what you can use the money for. I don't know if I was being too mean in responding to the parent, but before you comment, just google what a raise entails.
I can't help but think building an "everything" app is so.. both unbelievably ambitious, and a folly. I am not personally convinced that people want all the things that this super app purports to do.
I am from a generation that still sits behind a desktop computer when making "big purchases." I can't even buy a flight on my phone. I am so much less likely to want to have an AI agent do that for me.
Then the idea that daily consumption of these products will drive people to use them more at work... I have a very different life outside of work. My use of AI outside of work is exceedingly different to what I use it for at work.
I sometimes feel wildly out of touch. But sometimes I view this as the VR moment. To me there are some things that I think may always be preferable to do outside of that ecosystem. And for me, a lot of tasks that 'agents' enable are small enough or important enough that I want to do them myself.
I don't think I'll ever be comfortable allowing an agent to call me a taxi, or order food on my behalf. Because the convenience of asking for food isn't worth the chance it'll mess up, and opening an app and looking at a menu is simpler.
I also think we're coming to a moment where we can start identifying the markers of AI generated content on sight. And I think there's a growing animosity to it. I might be comfortable asking AI something, but when I am looking for or searching for other content, seeing AI content markers make me angry at this point.
To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
I think you lack imagination. This is going to be the future because it is legitimately a step up from the previous ways of doing things. I can do things that were way more difficult before.
It doesn't have to be AI all the way - no one's asking AI to book things and make the payments on its own. What does work is: AI does the research, you verify, and you do the payment. Human in the loop.
To me this is clearly the future - AI has access to all the data sources and can translate your intent by accessing these tools in a loop and use intelligence to automate things.
Maybe there's a scenario where that is useful. But again, I don't know why I'd want an AI to do this research for me. I hop on Skyscanner. I type my location, and where I'd like to go. It presents me with a list of options, and I can then use the filters to find times that work best for me.
I see a flight that isn't in my time frame, but is actually like 400 euros cheaper. And I decide in that moment that waking up at 5am is worth the savings.
I'd have not typed that into a prompt. I made that decision at the moment I saw the possibility. I didn't even know that it was an option prior to that moment.
Then I go look at hotels. I have a list of requirements, but I see that one of the hotels that I just glanced at has a really nice long pool, and the amenities look nicer from the images. I change my mind at that exact moment, I can walk 15 minutes more to the beach.
Now it should be even clearer why this is important for food.
>> To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
Do you feel that any technology is comparable in its impact?
Most of modern medicine, by which I mean each discovery and invention in their own right, stand alongside electricity. Particularly vaccines.
AI isn’t there yet. You could turn off AI tomorrow and there’d be a shock but people would quickly switch back. You could not do the same for electricity, medicine, combustion engines (or steam engines/turbines), computers, the internet, modern building materials, etc. You try to swap back off any of those and the modern world (literally and figuratively) collapses. Turn off AI, and there’d be a financial collapse but afterwards everything would return relatively easily to an earlier way of doing things (ye know, the way from just 4 years ago, and which is still 99% of how people do things :) )
Sure, but compare this to "turn[ing] off" combustion engines a mere four years after commercial adoption rather than 162 years later (now). Back then, going back to horses wouldn't have been as big of a deal as it would be now.
I think the Internet is the more apt analogy. But even with electricity, you could have taken it away within the first couple decades of its popularity and society would have shrugged it off. Once they got used to that telegraph thing, not so much.
Yeah, I agree, but AI isn’t there yet. It’s too early to call it one way or the other. There’s plenty else that’s as important as electricity in my view, and maybe AI will join those ranks in 15 years or so when it’s gone through the hype loop and when the economy has recovered from the now-basically-inevitable AI- and war-fueled turmoil of the next decade.
That's primarily a function of the time for adoption, though, not the utility of the technology. In 20 years, people would not be able to so easily say that they could turn off AI with no impact.
That..what..no. The question was whether there are any comparable to electricity, of which I have put forth a number of examples. And also offered my opinion that it is too early to judge whether AI will be as significant or not.
There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.
I hate to read this line when academics and graduate students who work in basic and hard sciences have their funding cut. The grant funding that pays minimum wage to grad students is treated as a burden on this society, yet a company that took all its valuable data from sources that never got credit raises billions of dollars. "Open" says the name, but closed it is in operation. Sorry for this rant, but the priorities of this world suck.
Or all of the people that they didn’t ask, let alone compensate, that made all of the stuff they munged up for training data, so they could sell cheap knockoffs in the same markets.
feels like an insult to readers to pretend that their monthly revenue growth is comparable to Google's or Apple's growth when the funding environment is absurdly different, not to mention inflation itself.
I am very much onboard with AI within my workflow. I just don't really see a future where openai/anthropic are the absolute front runners for devs though. Maybe OpenAI does just have the better vision by targeting the general public instead, and just competing to become the next google before google can just stay google?
What is their next step to ensure local models never overtake them? If i could use Opus 4.6 as a local model instead and wrap it in someone else's CLI tool, i'd 100% do it today. are the future models gonna be so far beyond in capability that this sounds foolish? the top models are more than enough to keep up with my own features before i can think of more... so how do they stretch further than that?
A side note i keep thinking about: how impossible is a world where open source base models are collectively trained, similar to a proof-of-work style pool, and then smaller companies simply spin off their own finishing touches based on that base model? am i thinking of things too simplistically? is this not a possibility?
Anthropic is definitely gaining ground over OpenAI in the business world. Cowork is the absolute hotness right now, and even prompted MSFT to drop their own variant yesterday
Codex and Gemini CLI seem 1-2 months behind Claude Code. They will catch up. This race will eventually be won by whoever can come up with the cheapest compute.
> how impossible is a world where open source base models are collectively trained similar to a proof of work style pool
Current multi-GPU training setups assume much higher bandwidth (and lower latency) between the GPUs than you can get with an internet connection. Even cross-datacenter training isn't really practical.
LLM training isn't embarrassingly parallel, not like crypto mining is for example. It's not like you can just add more nodes to the mix and magically get speedups. You can get a lot out of parallelism, certainly, but it's not as straightforward and requires work to fully utilize.
It's hard to train models in the open. All the big players are using lots of "dodgy" training data. Like books, video, code, destinations. If you did that in the open, the lawyers would shut you down.
Though I think these companies are wildly overvalued, I don't see LLMs as a service going away in the future. The value in OpenAI is that it provides extra compute, data access, etc. My money is on local AI becoming more of a thing, while services like OpenAI still exist for local AIs to consult with. If a local model can somehow know that it's out of its depth on a question/prompt, it can ask an OpenAI model if it's available, but otherwise still work locally if OpenAI fails to respond or goes out of business. To me that makes a lot more sense than the future being either-or.
> What is their next step to ensure local models never overtake them?
As someone who experiments with local models a lot, I don’t see this as a threat. Running LLMs on big server hardware will always be faster and higher quality than what we can fit on our laptops.
Even in the future when there are open weight models that I can run on my laptop that match today’s Opus, I would still be using a hosted variant for most work because it will be faster, higher quality, and not make my laptop or GPU turn into a furnace every time I run a query.
If your laptop overheats when you push your GPU, you can buy purpose-built "gaming" laptops that are at least nominally intended to sustain those workloads with much better cooling. Of course, running your inference on a homelab platform deployed for that purpose, without the thermal constraints of a laptop, is also possible.
I didn't say it overheats. It gets hot and the fans blow, neither of which are enjoyable.
MacBook Pro laptops are preferred over "gaming" laptops for LLM use because they have large unified memory with high bandwidth. No gaming laptop can give you as much high-bandwidth LLM memory as a MacBook Pro or an AMD Strix Halo integrated system. The discrete gaming GPUs are optimized for gaming with relatively smaller VRAM.
$122B in "committed capital" (read: pinky promises) for a company whose entire thesis is "scaling laws hold forever and nobody figures out efficiency." DeepSeek and Google already proved that's shaky. Twice.
I ship code every day. I use Claude, I use GPT, I run llama locally. The gap between frontier models and what fits on a 4090 shrinks every six months. Building a "super app" in response isn't vision — it's panic. You don't consolidate into an everything-app when you're winning. You do it when your core product is commoditizing and you need to lock people in before they notice.
Also love the electricity comparison. Electricity doesn't hallucinate, doesn't need $300B in cumulative funding to turn on the lights, and never told me a function exists that doesn't.
Hope it works out. Competition is good. But "flywheel" is just VC for "trust me bro."
isn't it weird that there is no attribution to a human here? i mean, eventually, they have to dropkick sama and install GPT itself as king, right? EOQ seems as good a time as any
Anthropic doesn't have anything else other than the Claude models.
But notice that there is not a single mention of DeepSeek, which tells me they are preparing to scare everyone again. Which is why Dario continues to scare-monger on local models.
Sometimes you do not need hundreds of billions of dollars for inference when it can be done locally with efficient software, as Google proved. But where is the money in that? So continues the flawed belief in infinitely buying GPUs to scale, which Nvidia needs you to do.
Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.
You have a GPU already (at least an iGPU and an NPU on most newer platforms) as part of your computer, might as well get some use out of it with local inference. And trying to do inference on a larger model with an undersized GPU will have you idling a lot less than 99% - but that still makes a lot of sense for most casual users who will only rarely need a genuine "Pro" class answer from AI. Doing that locally is way less hassle than paying for a subscription or messing with API spend.
They have to focus on the distant future (where they are frankly unlikely to exist) because they are falling further and further behind in the immediate future.
Their latest desperate bid for relevance is a plugin for Claude Code that uses Codex as a second opinion. Please clap.
This is a big exaggeration. Codex is probably one of the top two LLM programming tools, along with Claude Code. GPT-5.4 models are strong, unlike the initial GPT-5 ones, which were comparatively bad, and can hold up against Opus 4.6. In my experience, they are better at analytical work.
I cannot really see how they are "far behind," or how some plugin for Claude Code is a "last desperate bid." The tools are close enough to each other that I regularly use Codex one month and Claude Code the next without much disruption, just to try out any new models or features that might be available.
I do not have much visibility into the non-code applications, so maybe it is stickier there.
If/when the AI bubble pops and takes OpenAI down with it, I would not expect Anthropic to come out unscathed either.
I'm seeing diminishing returns, though in fairness we have no idea yet how to integrate properly with existing good practices and principles. I suspect improvement is going to come mainly from improved tool usage rather than more impressive models.
I feel that too, every technology has its limits.
I use AI daily. But I can’t see the “intelligence“.
All I see is fine tuning and bigger datasets.
Yesterday I asked Claude to fix the color issues of a graph. It failed miserably.
Opus 4.6 wasn’t able to figure out why the text was grey. It made something up, instead of realizing the problem was simple, oklch wrapped inside a hsl color. hsl(oklch(…))
I easily figured this out by just looking at the css and adding some logs to js.
This is not intelligence. This is a tool that’s smart. Not sentient. AGI won’t be achieved by scaling alone.
No mention of "AGI" this time. Since we all knew it was a scam. But this is the most damning of them all:
> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow.
FTX had a "flywheel". It fell off. Being saddled with hundreds of billions of debt makes this situation ten times worse.
> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow. That gives us the ability to reinvest and deliver intelligence more efficiently to consumers, enterprises, and builders around the world.
-x-
In short, the musical chairs are still playing... Keep on walkin' round, y'all, till the music stops.
What what? Are you surprised it's that low, that high, that they can tell what their revenue is, that they report it on a monthly rather than annual basis, or something totally different?
It's going to be pretty hard to get a good answer to whatever you're having difficulties understanding if you can't be bothered to write more than a word.
> Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month.
They raised $122B.
122 / (2 × 12) ≈ 5 years to get your money back (I simplify, I know revenue ≠ profit)
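Spelling out that payback arithmetic (using the figures from the post, and with the same caveat that this is revenue, not profit):

```python
committed = 122       # $B, "committed capital" from the announcement
monthly_revenue = 2   # $B/month, per the post

# Years of gross revenue needed to equal the committed capital.
years = committed / (monthly_revenue * 12)
print(f"{years:.1f} years")  # -> 5.1 years
```

Since costs are nowhere in this calculation, the real payback period on a profit basis would be far longer.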
They are so big that almost no one can afford to acquire them. It is similar to someone trying to acquire MSFT or AAPL.
No, they didn't raise $122B as the HN title implies. A big chunk of that $122B is a "maybe" that depends on various things that need to happen in the future.
Oh, man... I can't wait to see where this is going. Might not be pretty after all.
I've wondered how many announced fundraising rounds were like this. It's in everyone's interest (VCs and entrepreneurs) if the message to the outside world is "this company is amazing so they've raised a boatload of cash". But VCs might not want to give it all up front, or unconditionally.
It makes it hard to say what the valuation of a company is. If the milestones are unlikely to be hit, then it's anyone's guess.
This is a common structure. It's confusing to people who don't know finance or startups when they first see it.
Even VCs don't get all of their fund money delivered into their bank account when they raise a funding round. It's inefficient and undesirable for everyone involved to have to move all of the money up-front, at once.
If you talk to anyone in startup funding or finance they'll be familiar with the term "capital call" which describes how committed capital obligations are delivered at a later date than the initial deal: https://en.wikipedia.org/wiki/Capital_call
I think more people are aware that VCs raise commitments for a fund that they can pull in via capital calls than are aware that startup funding from VCs comes with hurdles to clear.
This is perhaps because the most common round to raise is a small/early one, and these tend not to have hurdles. Founders that only ever raised these rounds wouldn't necessarily know what happens in later/bigger rounds.
Also, I wonder if capital calls come with hurdles as well? That is, can an LP refuse to put in more money if the VC's recent investments have not done well? I would think not, since it typically takes many years to determine whether investments were good or not.
Gotta hit that high IRR as a fund manager, and the clock starts when the cash comes in, so capital calls are appreciated by fund managers. Unless they are emerging managers (the startup equivalent in finance) and their LPs are less than institutional and ghost them when the capital call hits.
Don't let reality get in the way of vibes
that being said, how can Softbank keep throwing around all these astronomical numbers after so many bad investments? Leftover iPhone money?
Most people know Softbank as the company that lost billions on WeWork, not the company that made several more billions on the ARM IPO.
with these swings, I'm not sure how Son-san keeps himself from getting an ulcer
At some point it must just be Monopoly money.
They borrowed $40B from JP Morgan. They literally did not have the money otherwise.
also they need to pay that back in one year, so if OpenAI doesn't IPO this year they are screwed
Was curious about the source here. Seems widely reported and I just missed it. This is an unpaywalled source I found:
https://techcrunch.com/2026/03/27/why-softbanks-new-40b-loan...
Saudi oil money
Which might not be a thing anymore soon the way things are going…
Other way around - the Saudis are making bank.
cant stop winning!
Their ~$50 million total Alibaba investment turned into ~$70 billion. As of two years ago they were still liquidating out of it.
January 26, 2024 - "Japanese investment holding firm SoftBank Group Corp has largely cleared its ownership in e-commerce giant Alibaba Group Holding, concluding one of the most successful deals in China's internet industry and a holding that spanned about 23 years."
"SoftBank, which invested US$20 million into Alibaba when it was still a start-up in 2000, said in a corporate filing on Thursday that it was set to book a gain of 1.26 trillion yen (US$8.5 billion) - about 425 times the value of its initial outlay - for the Tokyo-based firm's 2024 financial year after divesting its [remaining] shares via subsidiary Skybridge."
https://finance.yahoo.com/news/japans-softbank-concludes-run...
Having large funding rounds contingent on meeting milestones is common. Always has been.
It just makes funding rounds hard to compare, since money in the bank is money in the bank, while a lot of the "committed capital if you reach a milestone" is capital that would have been easy to raise anyway once you hit that milestone. If the milestone is sufficiently far out, and has enough outs, you may as well just raise another round in the future.
Note that even that "money in the bank" of traditional venture firm is not really money in the bank. VC, PE, and hedge fund managers usually don't have all the cash for the fund sitting in the bank at all times. Rather, their agreement with the LPs that fund the fund is structured as a series of capital calls: it gives the fund the right to demand that their LPs deposit cash in their bank accounts within 10-30 days, which can then be used to fund the investments that the VC firm makes. The capital calls are backed by legal documents enforceable in court, with pretty stiff penalties for failing to meet a capital call.
Such a funding structure here isn't all that different: the funding agreement gives OpenAI the right to call on their backers to make certain cash deposits, contingent upon milestones being met. Deep down inside, "money in the bank" doesn't actually exist, it's just mutual agreements backed by force of law.
When a startup raises money without contingencies, typically they do get a large amount of money in the bank all at once.
If investments are not tranched then the money is not delivered in tranches, yes.
The first rule of tautology club is...
That’s logically inconsistent. If the company was performing poorly enough that they couldn’t meet their funding milestones from a previous round, they’re not going to have an easy time raising the same money in a future round.
The milestones aren’t a hard-stop that forbids the previous funding round participants from providing the money if they still choose. It’s just an out.
sure they can. that's the whole point of the "pivot"
What I am saying is that if you do meet the milestones from your previous round, you're going to have an easy time fundraising anyway, so funding contingent on milestones isn't that different than just saying "well, if we need more money we can do another round"
Fundraising rounds are difficult, laborious, and distracting. It would be extremely different to try to multiply the number of rounds by 3-5X. There's nothing easy about that.
You're also ignoring that the market changes frequently. If you only raised as much money as you needed for the next 4-6 months with plans to re-raise all the time, you'd have to constantly be sizing your growth plans up or down based on how the market felt about startup investing that month.
Imagine the company having to either do speed hiring or large layoffs every few months to adjust to the size of the fundraising round they were able to get this time around.
Nothing about what you're suggesting would be easier, or easy at all
Why not announce the funding after the milestones have been met?
The funds are committed under the terms of the deal (share price, things like board seats, and other details). There are legal obligations to provide it.
This is a common structure for large investments. It would be really inefficient for all of these investors and companies to have to have the money sitting in cash to do a deal and then transfer it into the company's bank where it sits and earns interest for years until they can deploy it.
Even VC firms who raise funds work this way. The capital is "committed" but investors don't wire all of the money over right away so it can sit in the VC firm's bank accounts, waiting. The VCs do what's called a "capital call" through which they're legally bound to provide the money they committed when requested, under the terms of the deal.
It's splashier this way, and is meant to shape the narrative, make other companies fear their warchest, and make hiring easier. Of course, those who are in-the-know won't be fooled, but the perception of the general public will be set in stone by the PR framing.
One of the stipulations is that OpenAI achieves "AGI"... Need I say more?
Also a lot of this "money" is in cloud compute and credits not cash so...
The assumption that's conveniently left out is that the milestones are realistic
Seems like all of OpenAI's "deals" are announcement fodder with no real contract, primed to quietly fall through later.
With NASDAQ and NYSE looking to reduce the timelines for new public companies to be included into indices (“fast entry” rule), I have a feeling that OpenAI and SpaceX and Anthropic are mostly looking to dump their inflated shares into the public’s retirement accounts by force.
Michael Burry called out this structural manipulation play recently:
https://www.benzinga.com/markets/tech/26/03/51248353/michael...
Ok, let's switch to the HTML doc title above.
"Here comes another bubble..."
$2b/month which is $24b/year. Not as much as I expected considering they were at $20b by end of 2025.[0] They only added $4b since?
Anthropic had $19b by end of February 2026 and they added $6b in February alone.[1] This means if they added another $6b in March, they're higher than OpenAI already.
However, I heard that OpenAI and Anthropic report revenue in a different way. OpenAI takes 20% of revenue from Azure sales and reports revenue on that 20%. Anthropic reports all revenue, including AWS's share.[2]
[0]https://www.reuters.com/business/openai-cfo-says-annualized-...
[1]https://finance.yahoo.com/news/anthropic-arr-surges-19-billi...
[2]https://x.com/EthanChoi7/status/2036638459868385394
They aren't reporting anything yet. What we're hearing is just from news media who get their leaks/info from investors, who get some form of IR reports/presentations.
Both will do public reporting only when they IPO[4] and have a regulatory requirement to do so every quarter. For private companies[1] reporting to investors, there are no fixed rules really.[3]
Even for public companies, there is a fair amount of leeway in how GAAP[2] expects you to recognize revenue. The two ways you highlight are how you account for GMV (Gross Merchandise Value).
The operating margin becomes much lower, so multiples on absolute revenue get distorted when you count GMV as revenue.
For example, if you count GMV as revenue, then AMZN trades at only ~3x revenue ($2.25T / ~$800B), compared to, say, MSFT ($2.75T / $300B) and GOOG ($3.4T / $400B), which both trade at ~9x their revenue.
While the three are roughly similar in maturity, size, growth potential, and even have a large overlap of directly competing businesses, there is a huge (3x vs 9x) difference because AMZN's number includes retail GMV that GOOG and MSFT do not have at the same scale in theirs.
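The multiples from the figures quoted above, worked out (market caps and revenues are the commenter's round numbers, not current data):

```python
# Commenter's round figures: market cap and GMV-inclusive revenue.
caps = {"AMZN": 2.25e12, "MSFT": 2.75e12, "GOOG": 3.4e12}
revs = {"AMZN": 800e9,   "MSFT": 300e9,   "GOOG": 400e9}

for ticker in caps:
    multiple = caps[ticker] / revs[ticker]
    print(f"{ticker}: {multiple:.1f}x revenue")
```

This gives roughly 2.8x for AMZN versus ~9x for MSFT and GOOG, matching the 3x vs 9x contrast in the comment: the GMV denominator is what drags Amazon's multiple down.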
---
[1] There are still a lot of rules for reporting to the IRS and other government entities, but the information we (and news media) get is from investors, not leaks from government reporting, which would typically be private and illegal to disclose to the public.
[2] And how the Big 4, who sign off on company audits, prefer to account for it.
[3] As long as it is not explicit fraud or cooking the books, i.e. they are transparent about their methods.
[4] Strictly, this would be covered in the prospectus (S-1) a few weeks before going public, and that is the first real look we get into the details.
Does the GAAP accounting matter if everyone passively buys shares due to the new fast entry rules, which corruptly will force us all to buy into these companies? The fundamentals and true value seem less relevant than ever:
https://www.benzinga.com/markets/tech/26/03/51248353/michael...
The $19b ARR and $6b added in Feb came directly from Anthropic CEO recently.
Announcing isn't reporting. Am I the only one old enough to remember Enron?
30x revenues at 17% revenue growth is... aggressive.
Except it's not 100x revenues, and it's not 17% growth. I don't know where you got those numbers from?
The numbers OpenAI gave in the post would mean a 30x multiple pre-money. And the $20B -> $24B run-rate growth since the start of the year could plausibly mean anything from 110% to 200% annualized growth rate, depending on whether that happened over two or three months. The $24B is a lower bound as well, since they only gave us one significant digit for the monthly revenue.
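Sketching that annualization (using the $20B and $24B run-rate figures from the post; whether the jump took two or three months is the unknown):

```python
# Annualize a $20B -> $24B run-rate jump under two timing assumptions.
start, end = 20.0, 24.0   # $B annualized run-rate

for months in (2, 3):
    # Compound the observed growth out to a full year.
    annualized = (end / start) ** (12 / months) - 1
    print(f"over {months} months -> {annualized:.0%}/yr")
```

This gives roughly 199%/yr for two months and 107%/yr for three, consistent with the rough "110% to 200%" range in the comment.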
You're right, I was thinking about 100x revenues and forgot to confirm the math. Updated to reflect your point. ChatGPT itself provided the 17% number (its most recently available growth rate)...
And that is revenue only. For the past 15 or so years, most US companies (and especially startups) have talked about revenue only, whereas only profit should matter.
E.g. what good is 20 billion per year when "OpenAI is targeting roughly $600 billion in total compute spending through 2030". That is $150 billion per year?
Give me a billion and I'll have 500M of revenue in no time by selling dollars at 50 cents.
Why are we treating OpenAI and Anthropic differently than say, Amazon or Uber? Both companies invested in growth for many years before making a profit. Most tech companies in the last 2-3 decades lost money for years before making a profit.
Why are we saying that OpenAI and Anthropic can't do the same?
Two reasons. They somewhat broke even, and kept getting investment. The potential for quasi monopoly was obvious.
OpenAI can't claim either.
How did Uber somewhat break even? They lost $34b before making a profit.
Uber was only on a path to monopoly in the US, not world wide. It’s lost to local competitors in most countries. And it can get disrupted by self driving cars soon.
OpenAI’s SOTA LLM training smells like a natural monopoly or duopoly to me. The cost to train the smartest models keep increasing. Most competitors will bow out as they do not have the revenue to keep competing. You can already see this with a few labs looking for a niche instead of competing head on with Anthropic and OpenAI.
The cost of copying SOTA models though is super cheap and doesn’t take super long.
It's not as much as you think. Google is spending $185b on data centers this year alone. Amazon is spending $200b this year. Total capex for big tech is ~$700b in 2026 and we're not including neo clouds, Chinese clouds, and other sovereign data centers.
Since everyone is trying to get compute from anywhere they can, including OpenAI going to Google, it's hard to tell what is used internally vs externally.
For example, it's entirely possible that Google's internal roadmap for Gemini sees it using $600b of compute through 2030 as well. In that case, OpenAI needs to match since compute is revenue.
why should only profits matter? if i had a killer product today that i just need to sell tomorrow, wouldn't you still invest today knowing i'll probably only start to make money tomorrow (or perhaps next week)?
the expectation is that they'll eventually make money. they can't raise forever. startups are unprofitable only for their first few years; most companies that have existed for a long while have been profitable
and since they're expected to make a LOT of money, everyone wants a piece of that future pie, pushing up the valuation and amount raised to admittedly somewhat delusional levels like here
> why should only profits matter?
In this case because it's not clear that anybody has actually figured out how to sell inference for more than it costs
not if your product is selling two dollars for one dollar and as soon as you'll start to charge more I'll switch to one of your twenty competitors
profit isn't a function of having a killer product, it's a function of having no competition
And why do you think twenty competitors can stay competitive for years to come?
Industries always consolidate and winners emerge. SOTA LLMs look like a natural monopoly or duopoly to me because the cost to train the next model keeps going up such that it won't make sense for 20 competitors to compete at the very high end.
TSMC is a perfect example of this. Fab costs double every 4 years (Rock’s Law). It's almost impossible to compete against TSMC because no one has the customer base to generate enough revenue to build the next generation of fabs - except those who are propped up by governments such as Intel and Rapidus. Samsung is basically the SK government.
I don’t see how companies can catch OpenAI or Anthropic without the strong revenue growth.
no competition is a bit extreme. Limited competition, yes, due to competitive advantages.
> Whereas only profit should matter
Profit is money you couldn’t figure out how to spend. During growth, you want positive operating margins with nominal profits. When the company/market matures, you want pure profits because shareholders like money. If you can find a way to invest those profits in new areas of growth, that’s better.
Not sure why you’re downvoted.
Everyone wants to treat OpenAI like a car wash business where they need to make a profit almost immediately. I don’t know why people can’t understand that the industry is in a rapid growth stage and investing the money is more important than making a profit now. The profits will come later.
> Today, we closed our latest funding round with $122 billion in committed capital at a post money valuation of $852 billion.
A couple of things stand out to me here: the phrase "committed capital", which sounds like a promise that could break under various circumstances, and the fact that the reported valuation keeps changing, which makes it sound like a maximum rather than the valuation every investor actually invested at.
I do wonder how much of Amazon's $50B share (per last press release) is in AWS credits rather than money in the bank.
To then claim that Trainium is “selling” and not a dud? I’d bet a lot.
Probably a lot? It would be much more tax-advantageous to do it this way: $50B worth of credits != $50B worth of spend on Amazon's part, and they might meet in the middle about how much equity that translates to.
I can see a lot of advantages for Amazon, but I don't see why it would be tax-advantageous.
Situation A:
You're Amazon. You give OpenAI $50B cash investment, they then hand you back the $50B because they buy $50B worth of Amazon AWS services (they would use AWS or other equivalent compute anyway). OpenAI pays an additional $1-5B in sales taxes. You have $25B opex, $25B profits, you pay 21% corporate taxes on the profits.
Situation B:
You're Amazon. You let OpenAI use your services by handing them API credentials that unlock what would normally cost $50B worth of services. You have zero revenue from the transaction and write off the $25B opex against your other profits. OpenAI also doesn't have to pay sales tax.
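The arithmetic behind the two situations above, spelled out. Note that the treatment of Situation B (credits producing no revenue while the delivery cost still offsets other profits) is the commenter's claim, not settled accounting, and every figure here is illustrative:

```python
# Illustrative comparison of cash-investment vs credits (commenter's framing).
credits = 50e9        # face value of AWS credits / services
cost_to_serve = 25e9  # assumed opex to actually deliver the compute
tax_rate = 0.21       # US corporate tax rate

# Situation A: cash invested, billed back as revenue -> profit is taxed.
profit_a = credits - cost_to_serve
tax_a = profit_a * tax_rate

# Situation B: credits granted, no revenue booked; delivery opex
# offsets other profits, i.e. a tax saving rather than a tax bill.
tax_b = -cost_to_serve * tax_rate

print(f"A: pays ${tax_a / 1e9:.2f}B tax; B: saves ${-tax_b / 1e9:.2f}B")
```

Under these assumptions the swing between the two structures is about $10.5B of tax, which is the intuition behind "it would be much more tax-advantageous to do it this way."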
That’s typical. Large funding rounds usually aren’t delivered as one single giant lump sum into the bank account. The capital is committed in stages that can depend on hitting milestones or goals.
This is done even in smaller startup funding rounds sometimes.
Fair, I think a lot of what I've been perceiving is the gymnastics in how funding and valuation and deals get reported. There ends up being a ton of asterisks that makes the headline news deviate quite significantly from reality, e.g. https://arstechnica.com/information-technology/2026/02/five-...
good catch! "committed capital" is not the same as "we raised".
It makes sense for such a huge amount to be "committed", not sitting idle in a bank somewhere.
that's why they have to open through banks and other less valuable more sliced share system.
I'm old enough to remember when companies worth $1 billion were called "unicorns." Now we have a company raising 122 times that? Valued at nearly 1000 times that...?
At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
I think this is reality-distortion field rivaling that of Jobs', and a crisis of faith. Nobody apparently believes that capital is worth investing into anything but AI.
> Nobody apparently believes that capital is worth investing into anything but AI.
This is the main reason we see this insane investment into AI imo. If you imagine having lots of money, where should you invest that currently?
Housing market: Seems very overvalued (at least in Germany). Also, with the current uncertainty and inflation, it's hard to make an investment that pays back over 20-30 years. So building is also difficult.
Stocks are very volatile currently, and not only since Iran. To me it seems that since the 2008 financial crisis, investors don't enjoy stocks as they did before.
Gold: Only if you are paranoid about the collapse of society. It doesn't make sense to invest in something that pays no interest.
Crypto: Same as gold, but better if you like gambling. I would assume most people who are very rich don't gamble with most of their fortune.
Looking around, and especially forward, it would be military tech, e.g. [1], and its supply chain, e.g. [2] :-\ Valuations are not as crazy, but I bet there's going to be a lot of demand in the coming decade, unfortunately.
Chip production, too, of course, but it's overflowing with money already, apparently. It's growing though, because there are real actual shortages of stuff like RAM and SSDs, there's money to be made immediately if you can. Chinese RAM manufacturers are building out like crazy.
[1]: https://www.ultimamarkets.com/academy/anduril-stock-price-ho...
[2]: https://www.marketscreener.com/quote/stock/RHEINMETALL-AG-43...
> Looking around, and especially forward, it would be military tech, e.g. [1], and its supply chain, e.g. [2]
Only viable if you’re okay with the ethical implications of funding war.
Would you be fine with the ethical implications of funding the industry to fight WWII? Would you consider funding Ukrainian military unethical? Or Taiwanese?
This is, sadly, not theoretical, and I'm afraid we'll soon see more of such choices, not fewer.
> Stocks are very volatile currently, and not only since Iran. To me it seems that since the 2008 financial crisis, investors don't enjoy stocks as they did before.
These returns do not qualify as “enjoying stocks”?
https://investor.vanguard.com/investment-products/etfs/profi...
The returns are higher than before 2008; the past 15 years are unprecedented.
https://www.macrotrends.net/2526/sp-500-historical-annual-re...
I wonder what is not getting invested in because AI has been crowding out everything else since '22.
It has to be brutal out there for everybody else, if all the money is going to AI.
But they're really cagey about actually handing money over to them today
It's the result of too much echo chambered bullshit floating around daily about how capable LLMs really are. It's literally crypto/blockchain all over again. It's one big lie that a lot of people have bought into which causes it to self-perpetuate, like religion.
The money is worth much much less than it was before, we live in times of global hyper inflation.
https://www.ark-funds.com/funds/arkvx
The fund is invested in most of the hot tech companies.
VCX (Fundrise) has way more exposure than ARKVX
I would not call an effective 2.9% expense ratio "throwing a bone".
Also, the valuation for such a debt laden company should be viewed with great skepticism. I'm afraid a lot of mutual funds will end up holding the bags.
> At least they're throwing consumers a bone via the ARK deal. It's crazy how little AI exposure is available to anyone who isn't already wealthy and/or connected.
It is deliberate. Period.
It's always been known that you make money in the private markets and pre-IPO companies and retail is the final exit for insiders and early investors.
Retail is not allowed in early on these companies (because that would ruin the point of being an insider), so this "exposure" has to come near the top.
Who are "these" companies? Did retail get into Google, Facebook, Amazon, Tesla, etc before the top?
Also, aren't AI businesses losing a lot of money each year? Pretty sure there is some risk involved that is not good for retail.
There are ways now for retail to get into these companies; check out hiive or equityzen... just beware of massive dilution.
> The broad consumer reach of ChatGPT creates a powerful distribution channel into the workplace
They mention this line in different forms a couple of times in the article. It’s clear they’re pretty rattled about Anthropic’s momentum in enterprise, I wonder how confident they really are in this rationale.
Kind of makes me wonder how 'accelerated' the timeline of publishing this article was based upon the Claude Code leak today. Considering everyone has gotten a sneak peek at what Anthropic is working on OpenAI might be a little worried. This could also just be coincidence, but this piece really does read like self-encouraging fluff.
The timing of this coming out today is because today is the last day of the month/quarter, and has nothing to do with Claude.
Ah, yeah that makes way more sense, I always forget about financial quarter timings.
This announcement completes the betrayal of their founding principles.
"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
Funny how quickly they have become like every other tech company. There is basically no hint of OpenAI the non-profit anymore.
Edit: Why did this go from their press release to a news story?
There is a lot of talk about the AI bubble. I think there are comparisons to the late 90's/early 00's here with early stars rising quickly but, ultimately, falling. Since essentially everything touches the internet now it is clear that the 'internet bubble' was more of a shakeup of companies than a real over-hype of the internet. That, I think, is at play right now too with the 'AI bubble'. AI isn't going away but some of the early stars may not make it.
So, the real question here is: Is OpenAI Netscape, or are they Google?
> This is not just product simplification. It is a distribution and deployment strategy.
iykyk
Are you suggesting this was written by AI?
You are absolutely right.
It's a very frequently used structure by LLMs especially ones writing for LinkedIn.
Why would they not use their own AI?
It’s not just a suggestion. It’s a demonstration.
It's the demonstration layer
This has to be just an extension of their previous raise, right? This was a month ago: https://openai.com/index/scaling-ai-for-everyone/
Doesn't look like it, that previous round was entirely Softbank + Nvidia + Amazon, while this one is VC + private investors.
The valuation seems odd though, you'd expect $840B post-money from that earlier round?
Maybe? Previous valuation is $730B + $122B raised in this announcement = $852B valuation in this announcement (no actual increase in valuation)
Previous was $730B pre money. This one is $852B post money. So yeah it's the same one. Good catch.
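A quick sanity check on that arithmetic (figures in $B, taken from the comments above):

```python
# Post-money valuation = pre-money valuation + capital raised in the round.
pre_money = 730    # $730B pre-money from the earlier announcement
raised = 122       # $122B of committed capital in this round
post_money = pre_money + raised

print(post_money)  # 852 -> matches the $852B post-money figure
```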
Yup, and begging for retail's money.
Each passing day the lie that markets efficiently allocate capital is further exposed
The title is incorrect. The $122B includes previous promises. They raised an additional $12B of promises:
"The round totaled $122 billion of committed capital, up from the $110 billion figure that the company announced in February. SoftBank co-led the round alongside other investors, including Andreessen Horowitz and D. E. Shaw Ventures, OpenAI said."
This IPO, if anyone underwrites it, is going to fleece retail so hard. Better make it a SPAC with the help of Chamath and Cantor Fitzgerald.
With $122B what are you going to build next? Spaceships?
Rather than advancing the state of the art, they'll use it to slow down competition by starving them of resources. In the style of a monopoly.
All the high performance RAM, frontier node wafer capacity, flash and disk drives on Earth. Also, all the gas turbines.
If they buy all the jet engines too (to turn them into gas turbines) they can be carbon neutral. More gas burned, but less jet fuel. It's a win-win.
Universal paperclips. And here I am asking ChatGPT which type of onion to use for coq au vin.
This all smells fishy. They didn’t “raise” $122B. Raise means someone put funds in your bank account and said send us the next quarterly report to tell us how our investment is doing.
They have pieces of paper from folks saying they may put up funds or goods and services in that amount. But it's important to remember that:
1. While they are “raising” commitments others are backing out of deals (see Disney, various data center things). Big deals announced to major fanfare are falling through.
2. They slashed capital expenditure for the future after previously boasting about all the commitments. This is turning into bonkers math of X + Y - X + Z + W - 1/2 of Y = ? when trying to keep track of what's actually "raised"/real vs. what was PR puffery that folks ran away from later.
3. Circular financing still seems to be going on. Big difference of here’s cash, have fun and various “commitments” and balance sheet games that seem to still be going on.
Net net this all still looks very scary and iffy at best.
If OpenAI goes down their investors will lose any chance at getting their money back. They need to keep pretending things are going great for as long as possible.
Are we truly arguing semantics on HN, a news aggregator for startups, where everyone knows what a "raise" is? It is obviously not funds in your bank account. I don't disagree with the rest of your comment, and the core thesis is valid that OpenAI is very much doing circular financing.
Edit: A raise comes with stipulations on what you can use the money for. I don't know if I was being too mean in responding to the parent, but before you comment, just google what stipulations a raise comes with.
Other than funds in a bank account I do not know what it would mean.
Stipulations on what you can use the funds for.
*hypothetical future funds
So when my coworker tells me he got a "raise", they're not talking about money that will end up in their bank account?
Right because, what does even "buy" mean?
https://thedeepdive.ca/openai-locked-up-40-of-global-ram-wit...
I can't help but think building an "everything" app is so.. both unbelievably ambitious, and a folly. I am not personally convinced that people want all the things that this super app purports to do.
I am from a generation that still sits behind a desktop computer when making "big purchases." I can't even buy a flight on my phone. I am so much less likely to want to have an AI agent do that for me.
Then the idea that daily consumption of these products will drive people to use them more at work... I have a very different life outside of work. My use of AI outside of work is exceedingly different to what I use it for at work.
I sometimes feel wildly out of touch. But sometimes I view this as the VR moment. To me there are some things that I think may always be preferable to do outside of that ecosystem. And for me, a lot of tasks that 'agents' enable are small enough or important enough that I want to do them myself.
I don't think I'll ever be comfortable allowing an agent to call me a taxi, or order food on my behalf. Because the convenience of asking for food isn't worth the chance it'll mess up, and opening an app and looking at a menu is simpler.
I also think we're coming to a moment where we can start identifying the markers of AI generated content on sight. And I think there's a growing animosity to it. I might be comfortable asking AI something, but when I am looking for or searching for other content, seeing AI content markers make me angry at this point.
To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
I've worked for 3 different startups where the CEO at one point gave us the talk of "we're building a super app".
Admittedly OpenAI is in a better position to do it, but not by much.
Everyone wants to be WeChat in China. No user wants that from them.
WeChat is not a super app. It's a browser. Tencent WeChat is the equivalent of Google Chrome.
cries in Musk
I think you lack imagination. This is going to be the future because it is legitimately a step up from the previous ways of doing things. I can do things that were way more difficult before.
It doesn't have to be AI all the way. No one's asking AI to book things and make payments on its own. What does work is: make AI do the research, and you verify and make the payment. Human in the loop.
To me this is clearly the future - AI has access to all the data sources and can translate your intent by accessing these tools in a loop and use intelligence to automate things.
Maybe there's a scenario where that is useful. But again, I don't know why I'd want an AI to do this research for me. I hop on Skyscanner. I type my location, and where I'd like to go. It presents me with a list of options, and I can then use the filters to find times that work best for me.
I see a flight that isn't in my time frame, but is actually like 400 euros cheaper. And I decide in that moment that waking up at 5am is worth the savings.
I'd have not typed that into a prompt. I made that decision at the moment I saw the possibility. I didn't even know that it was an option prior to that moment.
Then I go look at hotels. I have a list of requirements, but I see that one of the hotels that I just glanced at has a really nice long pool, and the amenities look nicer from the images. I change my mind at that exact moment, I can walk 15 minutes more to the beach.
Now it should be even clearer why this is important for food.
>> To finish, I do just sort of straight up hate the idea that we're comparing this moment to the invention of electricity. It's on the face of it absurd.
Do you feel that any technology is comparable in its impact?
Most of modern medicine, by which I mean each discovery and invention in its own right, stands alongside electricity. Particularly vaccines.
AI isn’t there yet. You could turn off AI tomorrow and there’d be a shock but people would quickly switch back. You could not do the same for electricity, medicine, combustion engines (or steam engines/turbines), computers, the internet, modern building materials, etc. You try to swap back off any of those and the modern world (literally and figuratively) collapses. Turn off AI, and there’d be a financial collapse but afterwards everything would return relatively easily to an earlier way of doing things (ye know, the way from just 4 years ago, and which is still 99% of how people do things :) )
Sure, but compare this to "turn[ing] off" combustion engines a mere four years after commercial adoption rather than 162 years later (now). Back then, going back to horses wouldn't have been as big of a deal as it would be now.
I think the Internet is the more apt analogy. But even with electricity, you could have taken it away within the first couple decades of its popularity and society would have shrugged it off. Once they got used to that telegraph thing, not so much.
Yeah, I agree, but AI isn’t there yet. It’s too early to call it one way or the other. There’s plenty else that’s as important as electricity in my view, and maybe AI will join those ranks in 15 years or so when it’s gone through the hype loop and when the economy has recovered from the now-basically-inevitable AI- and war-fueled turmoil of the next decade.
That's primarily a function of the time for adoption, though, not the utility of the technology. In 20 years, people would not be able to so easily say that they could turn off AI with no impact.
That..what..no. The question was whether there are any comparable to electricity, of which I have put forth a number of examples. And also offered my opinion that it is too early to judge whether AI will be as significant or not.
There are loads of technologies that, despite being decades old, do not qualify. So, no, it’s not “primarily a function of time”. It absolutely is about the utility. We can only be in a position to judge utility when sufficient time has passed, and AI ain’t had enough time yet to prove its utility. Given enough time, it might prove as useful as electricity, or it might just sit alongside computer operating systems - never quite making it onto anyone’s “this changed the world” list, even if it has as much utility as an OS.
I thought they needed $7 trillion or they'd be unable to keep training new models?
I hate to read this line when academics and graduate students who work in basic and hard sciences have their funding cut. The grant funding that pays minimum wage to grad students is treated as a burden on this society, yet a company that took all its valuable data from sources that never got credit raises billions of dollars. "Open" says the name, but closed it is in operation. Sorry for this rant, but the priorities of this world suck.
Or all of the people that they didn’t ask, let alone compensate, that made all of the stuff they munged up for training data, so they could sell cheap knockoffs in the same markets.
Are there any Polymarket / Kalshi bets on the over-under for the price? I wonder when the music will stop.
https://polymarket.com/event/openai-ipo-closing-market-cap-a...
I don't gamble but if I did, I'd bet a lot on $1.6t.
Feels like an insult to readers to try to pretend that their revenue per month is comparable to Google's or Apple's growth when the funding is absurdly different, not to mention inflation itself.
I am very much onboard with AI within my workflow. I just don't really see a future where openai/anthropic are the absolute front runners for devs though. Maybe OpenAI does just have the better vision by targeting the general public instead, and just competing to become the next google before google can just stay google?
What is their next step to ensure local models never overtake them? If I could use Opus 4.6 as a local model instead and wrap it in someone else's CLI tool, I'd 100% do it today. Are future models going to be so far beyond in capability that this sounds foolish? The top models are more than enough to keep up with my own features before I can think of more... so how do they stretch further than that?
A side note I keep thinking about: how impossible is a world where open source base models are collectively trained, similar to a proof-of-work-style pool, and then smaller companies simply spin off their own finishing touches based on that base model? Am I thinking of things too simplistically? Is this not a possibility?
Anthropic is definitely gaining ground over OpenAI in the business world. Cowork is the absolute hotness right now, and even prompted MSFT to drop their own variant yesterday
Ask anybody you know that works in Big Tech. They're all pushing hard for Claude Code adoption.
Codex and Gemini CLI seem 1-2 months behind Claude Code. They will catch up. This race will eventually be won by whoever can come up with the cheapest compute.
And that's a dangerous game because the cheaper compute gets, the more likely consumers are to self-host rather than pay a subscription.
Apple could figure out a way to neatly package it into their ecosystem.
Not really. Most people won't self host.
The general public will self-host once it's built into your next phone or laptop straight out of the box, or maybe installed from the App Store.
> how impossible is a world where open source base models are collectively trained similar to a proof of work style pool
Current multi-GPU training setups assume much higher bandwidth (and lower latency) between the GPUs than you can get with an internet connection. Even cross-datacenter training isn't really practical.
LLM training isn't embarrassingly parallel, not like crypto mining is for example. It's not like you can just add more nodes to the mix and magically get speedups. You can get a lot out of parallelism, certainly, but it's not as straightforward and requires work to fully utilize.
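A rough back-of-envelope sketch of the bandwidth gap. All figures below are illustrative assumptions, not measured numbers: a 70B-parameter model, fp16 gradients, an NVLink-class link vs a 1 Gbps home connection, and a naive one-full-gradient-transfer-per-step model that ignores compression, sharding, and communication/compute overlap:

```python
# Naive lower bound on gradient synchronization time per training step:
# one full fp16 gradient transfer, with no compression or overlap.
PARAMS = 70e9               # assumed model size (70B parameters)
GRAD_BYTES = PARAMS * 2     # fp16 -> 2 bytes per gradient value

def sync_seconds(bandwidth_gbps: float) -> float:
    """Seconds to move one full gradient over a link of the given speed."""
    bytes_per_second = bandwidth_gbps * 1e9 / 8
    return GRAD_BYTES / bytes_per_second

datacenter = sync_seconds(7200)  # NVLink-class, ~900 GB/s (assumed)
internet = sync_seconds(1)       # a good 1 Gbps home connection

print(f"datacenter: {datacenter:.2f} s/step")      # ~0.16 s
print(f"internet:   {internet / 60:.0f} min/step")  # ~19 min
```

Even under this generous model (no latency, no stragglers), internet-connected peers would spend minutes per step just moving gradients, which is why pooled training across the internet doesn't work like a mining pool.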
It's hard to train models in the open. All the big players are using lots of "dodgy" training data: books, video, code, documentation. If you did that in the open, the lawyers would shut you down.
Though I think these companies are wildly overvalued, I don't see LLMs as a service going away in the future. The value in OpenAI is that it provides extra compute, data access, etc. My money is on local AI becoming more of a thing, while services like OpenAI still exist for local AIs to consult. If a local model can somehow know that it's out of its depth on a question/prompt, it can ask an OpenAI model if one is available, but otherwise still work locally if OpenAI fails to respond or goes out of business. To me that makes a lot more sense than the future being either-or.
Models not being able to reliably know when they are out of their depth is a foundational limitation of the current generation of models, though.
Best they can do is to somewhat reliably react to objective signals that they've failed at something (like test failures).
> What is their next step to ensure local models never overtake them?
As someone who experiments with local models a lot, I don’t see this as a threat. Running LLMs on big server hardware will always be faster and higher quality than what we can fit on our laptops.
Even in the future when there are open weight models that I can run on my laptop that match today’s Opus, I would still be using a hosted variant for most work because it will be faster, higher quality, and not make my laptop or GPU turn into a furnace every time I run a query.
If your laptop overheats when you push your GPU, you can buy purpose-built "gaming" laptops that are at least nominally intended to sustain those workloads with much better cooling. Of course, running your inference on a homelab platform deployed for that purpose, without the thermal constraints of a laptop, is also possible.
I didn't say it overheats. It gets hot and the fans blow, neither of which are enjoyable.
MacBook Pro laptops are preferred over "gaming" laptops for LLM use because they have large unified memory with high bandwidth. No gaming laptop can give you as much high-bandwidth LLM memory as a MacBook Pro or an AMD Strix Halo integrated system. The discrete gaming GPUs are optimized for gaming with relatively smaller VRAM.
You can host a website on any rackmount server for pennies compared to AWS. But people still use AWS.
The market for local models is always gonna be a small niche, primarily for the paranoid.
"The market for local models is always gonna be a small niche, primarily for the paranoid."
Have you ever heard of industrial espionage? Or privacy regulations? Or military applications?
(Also the US military runs claude as a local model)
>"But people still use AWS"
I do not; I self-host. My current client also got rid of AWS, pocketing nice savings as a result.
Are we going to see a trillion dollar IPO?
The only thing that's really accelerating is how fast you get rate-limited on ChatGPT.
The big AI firms are all heavily compute-constrained, so that shouldn't be much of a surprise.
Looking forward to the movie about this absolute scammer Sam Altman when this is all said and done.
$122B in "committed capital" (read: pinky promises) for a company whose entire thesis is "scaling laws hold forever and nobody figures out efficiency." DeepSeek and Google already proved that's shaky. Twice.
I ship code every day. I use Claude, I use GPT, I run llama locally. The gap between frontier models and what fits on a 4090 shrinks every six months. Building a "super app" in response isn't vision — it's panic. You don't consolidate into an everything-app when you're winning. You do it when your core product is commoditizing and you need to lock people in before they notice.
Also love the electricity comparison. Electricity doesn't hallucinate, doesn't need $300B in cumulative funding to turn on the lights, and never told me a function exists that doesn't.
Hope it works out. Competition is good. But "flywheel" is just VC for "trust me bro."
Don't know why so many down votes for something that's clearly just another opinion
Probably because it reads a little like an LLM and also has an emdash
isn't it weird that there is no attribution to a human here? i mean, eventually, they have to dropkick sama and install GPT itself as king, right? EOQ seems as good a time as any
Wow. I doubt Anthropic can raise that. Are they more efficient, can they do with less?
Given how all of Big Tech (except Google obviously) is going all in on Claude Code, I wouldn't be surprised if Anthropic becomes profitable first.
Anthropic doesn't have anything else other than the Claude models.
But notice: not a single mention of DeepSeek. That tells me they are preparing to scare everyone again, which is why Dario continues to scare-monger about local models.
Sometimes you do not need hundreds of billions of dollars for inference when it can be done locally with efficient software; Google proved that. But where is the money in that? So continues the flawed belief in infinitely buying GPUs to scale, which Nvidia needs you to do.
Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.
> Only a matter of time for local models to reach Opus level. We are 1 or at most 2 years behind that and Anthropic knows that.
Can confirm. Kimi K2.5 is pretty intelligent and most of the time there's no difference between Opus and Kimi.
Local models just make no economic sense since the GPU will idle 99% of the time.
You have a GPU already (at least an iGPU and an NPU on most newer platforms) as part of your computer, might as well get some use out of it with local inference. And trying to do inference on a larger model with an undersized GPU will have you idling a lot less than 99% - but that still makes a lot of sense for most casual users who will only rarely need a genuine "Pro" class answer from AI. Doing that locally is way less hassle than paying for a subscription or messing with API spend.
well we do need at least two powerful AI companies, so they can cross checkout each other when I use them.
Good luck competing with Google which has "unlimited" budget.
Inflation is a hell of a drug.
Nah, this raise (commitment) is... blah.
The last announcement before the IPO and the inevitable collapse, I reckon.
They have to focus on the distant future (where they are frankly unlikely to exist) because they are falling further and further behind in the immediate future.
Their latest desperate bid for relevance is a plugin for Claude Code that uses Codex as a second opinion. Please clap.
This is a big exaggeration. Codex is probably one of the top two LLM programming tools, along with Claude Code. GPT-5.4 models are strong, unlike the initial GPT-5 ones, which were comparatively bad, and can hold up against Opus 4.6. In my experience, they are better at analytical work.
I cannot really see how they are "far behind," or how some plugin for Claude Code is a "last desperate bid." The tools are close enough to each other that I regularly use Codex one month and Claude Code the next without much disruption, just to try out any new models or features that might be available.
I do not have much visibility into the non-code applications, so maybe it is stickier there.
If/when the AI bubble pops and takes OpenAI down with it, I would not expect Anthropic to come out unscathed either.
They are trying very hard to convince themselves that it's going to work, when we see all the models plateauing... it's clearly hitting the ceiling
I don't know anyone using these models every day who thinks they are hitting a ceiling.
If anything there's a plateau between each model release.
I'm seeing diminishing returns, though in fairness we have no idea yet how to integrate properly with existing good practices and principles. I suspect improvement is going to come mainly from improved tool usage rather than more impressive models.
I feel that too, every technology has its limits. I use AI daily. But I can’t see the “intelligence“. All I see is fine tuning and bigger datasets.
Yesterday I asked Claude to fix the color issues of a graph. It failed miserably. Opus 4.6 wasn't able to figure out why the text was grey. It made something up, instead of realizing the problem was simple: an oklch color wrapped inside an hsl color, hsl(oklch(…)). I easily figured this out by just looking at the CSS and adding some logs to the JS.
This is not intelligence. This is a tool that’s smart. Not sentient. AGI won’t be achieved by scaling alone.
No mention of "AGI" this time. Since we all knew it was a scam. But this is the most damning of them all:
> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow.
FTX had a "flywheel". It fell off. Being saddled with hundreds of billions of debt makes this situation ten times worse.
Money has lost all meaning in tech. 122 Billion raise! This is some kind of dream.
didn't they just raise last month?
I think that runway has run out /s
This is the most blatant pump-and-dump scam I've ever seen. It's going to crash like a meteor.
"Despite unprecedented capital investment in our R&D, our core product isn't getting meaningfully better so now we're building an app."
Doesn't really strike me as the kind of statement that comes out of a company that can sustain a ~$1T market cap...
> The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow. That gives us the ability to reinvest and deliver intelligence more efficiently to consumers, enterprises, and builders around the world.
-x-
In short, the musical chairs are still playing... Keep on walkin' round, y'all, till the music stops.
/s
"This is not just product simplification. It is a distribution and deployment strategy."
I am so sick of AI writing.
The snowclone is annoying, but comparisons are sometimes necessary. The problem here is the actual content is sloppy.
corporate speech existed long before AI
Mmm, it’s almost like OpenAI built statistical models using pre-existing corporate speech as the target data... ;)
Get used to it. All PR statements nowadays get AI treatment before going public.
> We are now generating $2B in revenue per month
What??
What what? Are you surprised it's that low, that high, that they can tell what their revenue is, that they report it on a monthly rather than annual basis, or something totally different?
It's going to be pretty hard to get a good answer to whatever you're having difficulties understanding if you can't be bothered to write more than a word.
They got a very sweet deal from the Pentagon, it seems.
Word on the street is that Anthropic is roughly at half that. Hard to know what they include and not, and what their real, non-subsidized costs are.
> Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month.
They raised $122B.
122 / (2 × 12) ≈ 5 years to get your money back (I simplify, I know revenue ≠ profit)
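The same arithmetic, spelled out (simplified as above, treating revenue as if it were profit):

```python
# Years for $2B/month in revenue to cover $122B of committed capital.
committed = 122          # $B raised (committed)
monthly_revenue = 2      # $B per month
years = committed / (monthly_revenue * 12)

print(f"{years:.1f} years")  # ~5.1
```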
They are so big that almost no one can afford to acquire them. It is similar to someone trying to acquire MSFT or AAPL.
WCGW?
Correction: no one can afford to acquire them.