I'm not sure what people are on in the comments. It doesn't beat the other models, but it sure competes despite its size.
GLM 5.1 is an excellent model, but even at Q4 you're looking at ~400GB. Kimi K2.5 is really good too, and at Q4 quantization you're looking at almost ~600GB.
This model? You can run it at Q4 with 70GB of VRAM. This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).
For the Claude-pilled people, I don't know if you only run Opus but when I was on the Pro plan Sonnet was already extremely capable. This beats the latest Sonnet while running locally, without anyone charging you extra for having HERMES.md in your repo, or locking you out of your account on a whim.
Mistral has never been competitive at the frontier, but maybe that is not what we need from them. Having Pareto models that get you 80% of the frontier at 20% of the cost/size sounds really good to me.
I didn't know about HERMES.md... (??) - found information here for others who are curious: https://github.com/anthropics/claude-code/issues/53262
The competition is DeepSeek v4 Flash, at a similar size / deployment target.
> This model? You can run it at Q4 with 70GB of VRAM. This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).
The one thing I would want everyone curious about local LLMs to know is that being able to run a model and being able to run a model fast are two very different thresholds. You can get these models to run on a 128GB Mac, but you first need to check whether Q4 retains enough quality (models have different sensitivities to quantization) and how fast it actually runs.
For running async work and background tasks the prompt processing and token generation speeds matter less, but a lot of Mac Studio buyers have discovered the hard way that it's not going to be as responsive as working with a model hosted in the cloud on proper hardware.
For most people without hard requirements for on-site processing, the best option for this model would be going through one of the OpenRouter-hosted providers and paying by token.
> This beats the latest Sonnet while running locally
Almost every open weight model launch this year has come with claims that it matches or exceeds Sonnet. I've been trying a lot of them and I have yet to see it in practice, even when the benchmarks show a clear lead.
Cloud hardware is not inherently more "proper" than what's being proposed here; there's nothing wrong per se with targeting slower inference speeds in an on-prem, single-user context.
> Cloud hardware is not inherently more "proper" than what's being proposed here
Cloud hardware can run the original model. Quantization will reduce quality. The quality drop to Q4 is not trivial.
Cloud hardware is also massively faster in time to first token and token generation speed.
> there's nothing wrong per se about targeting slower inference speeds in a local single-user context.
If that's what the user wants and expects, then it's fine.
Most people working interactively with an LLM would suffer from slower turns.
Quantization can be very detrimental for some models, and their quality can drop considerably from the posted benchmarks, which were probably run at bf16. This is why having considerable RAM can be important.
It has a similar SWE-bench score to Qwen 3.6 27B [1]. No one is comparing it to the frontier.
[1]: There is no other common benchmark in the blog.
> This model? You can run it at Q4 with 70GB of VRAM.
> This beats the latest Sonnet while running locally

Not sure it will beat Sonnet at Q4.

> This is approaching consumer level territory (you can get a Mac Studio with 128GB of RAM for ~3500 USD).
For $3500 I can get 7-8 years of GLM using coding plans, have a faster model and much better code quality.
The point is it's open weight and is tiny compared to a lot of its competitors. 4 GPUs for world-class performance - sweet!
It's a 128B dense model. Good luck getting more than 3 t/s out of a Mac. It doesn't matter if it fits or not.
You could run it on a single Mac Studio with an M3 Ultra, or on two Mac Studios with M4 Max at higher perf than that. And lightly quantizing this could give us modern dense models in the ~80GB range, which is a very compelling target.
Wouldn't matter much still. The M3 Ultra has 819GB/s of unified memory bandwidth. That means the theoretical max token rate is 819/128 ≈ 6.4 t/s. At 80GB (5-bit quantization) it's still only about 10 t/s... far from a good coding experience. Also, these are theoretical maxima; real-world token generation rates would be at least 15-20% lower.
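If you want to redo this arithmetic for other machines or quantizations, here's a minimal sketch in Python (same decode-is-bandwidth-bound assumption as above; real throughput will be meaningfully lower):

    # Dense-model decode is memory-bandwidth bound: every generated token
    # has to stream all weights from memory, so t/s <= bandwidth / weight size.
    def max_tokens_per_sec(bandwidth_gb_s, params_b, bits_per_param):
        weights_gb = params_b * bits_per_param / 8  # GB moved per token
        return bandwidth_gb_s / weights_gb

    # M3 Ultra: ~819 GB/s unified memory bandwidth, 128B dense params
    print(max_tokens_per_sec(819, 128, 8))  # 8-bit, ~128 GB -> ~6.4 t/s
    print(max_tokens_per_sec(819, 128, 5))  # 5-bit, ~80 GB  -> ~10.2 t/s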
I was hoping for a lot from it... but this one is not up to that mark. For example, here is its comparison with a 4.7x smaller model, qwen3.6-27b.
https://chatgpt.com/share/69f239e8-7414-83a8-8fdd-6308906e5f...
Tldr: qwen3.6-27b, a 4.7x smaller model, has similar performance.
That's a ChatGPT summary. Actual usage would be a better test.
Yep... until then, this is good enough, since the tests are standard and the results are numeric and can be compared without any doubt.
To be fair, MoE models from Qwen itself had the same "problem": the 3.5 122B MoE was the same or worse than the 3.5 27B. Yet to see a 122B 3.6.
UPD: NVM, Mistral Medium 3.5 is dense. So yes, it is worse in every way.
As always, rooting for these guys — model and national diversity is great. This looks like a solid foundation to build on; hopefully the 3.6/3.7 will dial in more gains. It looks like maybe from the computer use benchmarks that their vision pipeline could use improvement, but that’s just speculation.
The different results on some benchmarks vibe as if this is truly an independently trained model, not just exfiltrated frontier logs, which I think is also really important - having a genuinely different weight architecture in a given model seems like a benefit on its own when viewed from a global systems architecture perspective.
With most OSS releases being MoEs, and modern GPUs optimized for MoEs, can somebody with knowledge of the topic explain or speculate why Mistral might have opted for a dense model?
It's okay, nothing exceptional, but any news from non-US and non-Chinese models is still good news.
This is the bar for Europe, huh?
Where are the competitive models from Singapore, Japan, Taiwan, Korea, Russia, Canada, India, the UK? From anywhere that isn't China or the US?
There are none. Mistral Small 4 is pareto-competitive in its pricing bracket at $0.15/$0.60; at worst it's second to Gemma 4 26B A4B. The above countries have never had a model that is even close to being so.
This particular Mistral Medium looks to be uncompetitive at that pricing. I'm surprised it's so expensive given its size. Wonder if we'll see other providers offer it for cheaper.
But that doesn't mean Mistral has never produced anything useful.
DeepMind, which is headquartered in London, probably had a significant role in the development of the Gemini and Gemma models.
Yes, it might be a problem that the UK allows companies like this to be bought up by foreign countries.
Without Google's funding it's not obvious DeepMind would have gone anywhere.
Unless they moved to the US for funding while keeping a back office in the UK.
It's strange to expect anything significant to come out of Europe when VCs there are either very risk-averse and/or don't have enough cash to begin with. It's not like government or EU funding can replace that, since it's almost always wasted or misdirected.
> Korea
EXAONE from LG AI Research https://huggingface.co/LGAI-EXAONE
They had one of the best small models a few months ago and they released a new model just last week.
There's also HyperCLOVA X (haven't tested it, but maybe it is also good) https://huggingface.co/naver-hyperclovax
> India
India has the Sarvam model series, which admittedly are not SotA, but they have pretty good voice capabilities https://huggingface.co/sarvamai
The UAE (not part of the list above) also has a few noteworthy models: https://huggingface.co/tiiuae
I'm familiar with those models. They're nowhere near competitive. Miles away from Mistral or (obviously) Chinese models.
> (haven't tested it, but maybe it is also good)
I have. It is not.
You mentioned "pareto-competitive", and EXAONE certainly was that. The statement that the "above countries have never had a model that is even close to being so" is simply too broad.
They should ask Unsloth to follow them. For my use cases locally w/128GB, Qwen3.5-Coder-Next is SOTA.
Although the Manus decision might change things for AI, Singapore-washing is quite rampant among Chinese companies, so I wouldn't call this place of origin an alternative market.
This is the bar for anybody that's not the frontier labs.
> This is the bar for Europe, huh?
A few months ago China was being criticized left and right for somehow not being able to compete, and once DeepSeek showed up, all the hatred shifted to how China was actually competing but exploiting unfair competitive advantages.
Funny how that works.
Also, aren't the likes of OpenAI burning through over $2 of investment for each $1 of revenue?
China is not competing, it is distilling US models. Where are the Chinese models that are blowing US ones out of the water? There aren't any. The US continues to innovate, China replicate, Europe regulate. As is tradition.
>Also, aren't the likes of OpenAI burning through over $2 of investment for each $1 of revenue?
Yes, innovation costs money.
Edit: In response to below, EUV machines use tech licensed from the US, so yes, the US worked on them.
Two businesses working to get money from the same customers in the same field is competition. Kellogg's is competing with store-brand cereal. People are choosing to use these Chinese AI APIs because they are good enough for some workflows and cheaper. If they didn't exist, the money would go to the frontier labs. There is no world where this would not be defined as competition.
> China is not competing, it is distilling US models
China are cheating by using data obtained without permission to train their models in an evil commie way!
They should have done what the US did instead and trained models on data obtained without permission in a fair and freedum way!
> Where are the Chinese models that are blowing US ones out of the water?
Kimi K2 blows every US model out of the water in any comparison that includes both cost and performance.
Qwen3.6 runs on a single GPU and beats Claude Sonnet, in benchmarks and in real-world tests from humans. Kimi is awesome, but most people won't be able to host it themselves.
A lot of people are slowly realizing the moat of 1T closed-source models is gone as of the last few weeks. It's going to change the industry. April was a huge month for open models; it'll be interesting to see if that continues.
This Mistral submission is another nail in the coffin.
> beats Claude Sonnet
Based on benchmarks which don’t mean that much these days.
> models is gone as of the last few weeks.
Yes, that's exactly what people were saying after every major release for the past year or so. It's always a couple of weeks away.
I run Qwen 3.6. You need to drink some settle-down juice.
No way it's awesome.
Theft is quite a slippery-slope argument, and not one in your favor, in the context of US-based LLMs and how/what they were trained on.
> China is not competing, it is distilling US models.
I think you should check your notes. The likes of Kimi K2 Thinking show up as high as the second-best general-purpose model currently in existence. It seems they compete just fine.
If you believe "distilling" is all it takes to put together a model at the top of any synthetic benchmark then I wonder what you would have to say about all US models that greatly underperform in comparison and still manage to be used extensively in professional settings.
But your argument is an emotional one and not a rational one, isn't it?
> high as the second best general purpose model
According to benchmarks, which are gamed to the extreme these days. Trusting them blindly isn't exactly rational either. They don't necessarily translate that well to real-world tasks.
It's obviously not "distilling" as such, but there are reasons why Chinese models are consistently several months behind OpenAI/Anthropic.
Ah yes, like those EUV machines America and China have worked on.
I don't mind the Chinese, but the US under Trump is pretty much a fascist state based on ethnic and theological grounds, or soon will be if the electorate doesn't decide otherwise.
China and the rest of the world have sane leadership that isn't mentally retarded.
Yes, China is a much freer and more democratic country than the USA. It's not like you can get a Uygur killed to order for a new kidney or anything.
You wouldn't happen to be a Trump supporter by any chance, would you?
A 1000B model - can we call it a 1KB model?
Compared to all the other hosted LLMs I have tested, Mistral seems to be the only one with rather strict CSP headers. When you ask it to create a website with some JavaScript library, it will not preview, even though Le Chat offers a canvas mode.
Sometimes when a new release comes around from any provider, I just want to test it a bit on the web, without paying and without using an agent harness.
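If anyone wants to check those headers themselves, here's a minimal sketch (the front-page URL is just an example; the actual preview pane may be served from a different origin):

    # Print the Content-Security-Policy header the page is served with.
    import urllib.request

    req = urllib.request.Request("https://chat.mistral.ai/", method="HEAD")
    with urllib.request.urlopen(req) as resp:
        print(resp.headers.get("Content-Security-Policy", "<no CSP header>"))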
Why are they like this ;_;
Edit: Christ on a bike, it's bad at drawing SVGs: https://chat.mistral.ai/chat/23214adb-5530-4af9-bb47-90f5219...
I have never wanted, needed, or hoped to draw SVGs with an LLM. All of the models suck at it; some are just more fun or something.
Given what Vibe already did in the previous versions with codestral-v2, that's great news. Keep up the good work! I don't want to depend on the world's two hungry superpowers.
I can't figure out if this is available in the official Mistral API or not.
Their model listing API returns this:
So that has the alias "mistral-medium-latest", but the official ID is "mistral-medium-2508", which suggests it's the model they released in August 2025.

But... that 1777479384 timestamp decodes to Wednesday, April 29, 2026 at 04:16:24 PM UTC.
So is that the new Mistral Medium?
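For anyone who wants to reproduce that decode, a quick sketch (assuming the field is seconds since the Unix epoch):

    # Decode the timestamp from the model listing as Unix epoch seconds.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1777479384, tz=timezone.utc))
    # -> 2026-04-29 16:16:24+00:00 (a Wednesday)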
Some poking around in the source code for https://github.com/mistralai/mistral-vibe got me to this:
Which did work: https://gist.github.com/simonw/f3158919b18d2c47863b0a5dc257a... - it's pretty disappointing. Weird that it doesn't show up in the model list:
I also did some SVG tests, it's really bad.
https://chat.mistral.ai/chat/897fbe7d-b1ae-4109-9b29-f3ccc4f...
Wow. I get that "how well can it make SVGs" isn't the (or a) gold standard for how useful a model is or isn't, but the fact the Gemma 4 26B A4B I'm running locally can blow it out of the water doesn't give me high confidence for the model. Maybe an unfair comparison, but...
It's so bad I don't want to spend the 18 EUR just to test it for a month. It can't even create an SVG of the Facebook logo. There should be plenty of examples of that around.
Gemini fast could do that in under 5 seconds.
It sounds like they focused performance on not drawing SVGs. Which, honestly, makes a lot of sense to me.
I'm curious: are you doing a real apples-to-apples comparison, or are you running a harness that already curates prompts? There's a wide margin in how any of these models respond based on already-loaded context. Most models are pretty much hot garbage until their context is curated appropriately.
I just copied and pasted each prompt as specified by Mashimo and simonw into a chat interface, using a 4-bit Unsloth quantization of Gemma 4 26B, with the default sampler settings recommended by Google, and a system prompt of "You are a helpful assistant". The results are miles ahead of what the Mistral model output.
I've gotten a lot of use out of Mistral models, and I imagine this model is pretty good at other things, but it really feels like a 128B parameter dense model should be at least a little better than this.
I'm using mistral-medium-2508 for some text transformation operations. It's giving me better results than mistral-large for my use cases. Looking forward to testing this new model, although I'm not sure it's really meant to replace the previous medium model, since it's a lot more expensive and presented more as a coding / agentic model (mistral-medium-2508 was priced at $0.4/$2 per 1M tokens; mistral-medium-3.5 is $1.5/$7.5).
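To put that price jump in concrete terms, a quick sketch with a made-up monthly workload (the token counts are purely illustrative):

    # Monthly cost = tokens (in millions) times the per-1M price, in and out.
    IN_TOK, OUT_TOK = 50_000_000, 10_000_000  # assumed monthly volume

    def monthly_cost(in_per_m, out_per_m):
        return IN_TOK / 1e6 * in_per_m + OUT_TOK / 1e6 * out_per_m

    print(monthly_cost(0.4, 2.0))  # mistral-medium-2508 -> $40
    print(monthly_cost(1.5, 7.5))  # mistral-medium-3.5  -> $150

So roughly 3.75x the bill for the same traffic.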
It's funny that 128B is now considered Medium. I remember back in the day when 355M parameters was considered medium with GPT-2.
And GPT-2 1.5B was considered too dangerous to release.
They were perhaps right.
This Mistral release really reminds you of the gap between the frontier labs and everyone else.
Pre-agent, there wasn't always an obvious difference between models. Various models had their charms. Nowadays, I don't want to entertain anything less than the frontier models. The difference in capability is enormous and choosing anything less has a real cost in terms of productivity.
I've been a big fan of the smaller labs like Mistral and especially Cohere but it's been a while since I've been excited by a release by either company.
That said, I'm using mistral voxtral realtime daily – it's great.
Can't agree at all. The productivity gap just one year ago was much larger for frontier vs. non-frontier models. Let alone two years ago.
When I was thinking pre-agentic, I was actually thinking more pre-"coding seen as the main use case for these models".
Coding has always been the main real-world business use case since day one. There has been no point since the very first public availability of GPT-3.5 in November 2022 that it wasn't.
A lot of us have been agentic coding for almost two years now, since mid-2024. I have. The productivity gap between the best, second-best, and third-best model was biggest back then and has slowly been shrinking ever since.
> Pre-agent, there wasn't always an obvious difference between models. Various models had their charms. Nowadays, I don't want to entertain anything less than the frontier models. The difference in capability is enormous and choosing anything less has a real cost in terms of productivity.
It's just apples to oranges.
There is not a clear, across the board, winner on non-agentic tasks between Gemini, ChatGPT, and Claude - the simple chatbot interface.
But Claude Code is substantially better than Codex which itself is notably better than Gemini-cli.
In this vein, it should not be surprising that Claude Code is way better than non-frontier models for agentic coding... It's substantially better than other frontier models at specialized agentic tasks.
I've been comparing Claude Code and Codex extensively side by side over the past couple of weeks with my favorite prompting framework, superpowers…
From my perspective, Claude Code is decidedly not better than Codex. They’re slightly different and work better together. I would have no issues dropping CC entirely and using codex 100%.
If you’re working off of “defaults”, in other words no custom prompting, Claude Code does perform a lot better out of the box. I think this matters, but if you’re a professional software developer, I’d make the case that you should be owning your tools and moving beyond the baked in prompts.
I think there's a fair amount of evidence that the heavy harnesses actually drag down performance compared to bare harnesses.
CC is not better than Codex, nor is it better than OpenCode, Crush, Pi etc…
> Pre-agent, there wasn't always an obvious difference between models. Various models had their charms. Nowadays, I don't want to entertain anything less than the frontier models.
This is a very naive and misguided opinion. In most tasks, including complex coding tasks, you can hardly tell the difference between a frontier model and something like GPT-4.1. You need to really focus on areas such as context window, tool calling, and specific aspects of reasoning steps to start noticing differences. To make matters worse, frontier models are taking a brute-force approach to results, which ends up making them far more expensive to run, both in terms of what shows up on your invoice and how long you have to wait to get any semblance of output.
And I won't even go into the topic of local models.
> You need to really focus on areas such as context window, tool calling and specific aspects of reasoning steps to start noticing differences.
This is like saying "the current models and the old models are the same if you ignore every important advance they've made"
This is a very interesting strategy that might pay off. This model is a very good option for enterprise self host. I would argue a lot of companies are VRAM constrained rather than compute constrained. You could fit 4-5 running instances on one H100 cluster where you can only fit 1-2 Kimi K2 or GLM5.
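A back-of-envelope version of that claim, with loudly assumed numbers (one node = 8x H100 80 GB, 8-bit weights, a flat 20 GB per instance for KV cache and activations):

    # How many replicas fit on one node, counting weights plus a KV allowance.
    NODE_VRAM_GB = 8 * 80   # assumed 8x H100 80 GB node
    KV_GB = 20              # assumed per-instance KV cache / activations

    def instances_per_node(params_b, bits_per_param):
        weights_gb = params_b * bits_per_param / 8
        return int(NODE_VRAM_GB // (weights_gb + KV_GB))

    print(instances_per_node(128, 8))   # ~128 GB weights -> 4 per node
    print(instances_per_node(1000, 8))  # ~1 TB weights   -> 0 (needs multiple nodes)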
I like the idea of Mistral, but the last time I evaluated Mistral Vibe it was really nice for $15/month but not as effective as Gemini Plus with AntiGravity and gemini-cli. I am currently running Gemini Ultra on a 3 month 'special deal' and AntiGravity with Opus 4.7 tokens is pretty much fantastic.
That said, when I stop spending money on Gemini Ultra, I will give Mistral Vibe another 1-month test.
I like the entire business model and vibe of Mistral so much more than OpenAI/Anthropic/Google but I also have stuff to get done. I am curious if Mistral Vibe for $15/month is a stable business model (i.e., can they make a profit).
I'm testing it right now and it seems very buggy and unstable, just like before.
I use Mistral Le Chat quite a bit.
One thing in particular I was disappointed in was its bad explanations when asking about French grammar. It made multiple mistakes and the other models got it right, even Qwen 3.6 27b!
Anyway, I'm hoping they catch up some more.
There's a good chance that they'll catch up. The "AI race" is a race to the bottom, with the leaders blowing huge wads of cash on capabilities that get replicated months later by the competition at a fraction of the cost.
The only benefit of leading is mindshare. OpenAI is doubling down on that, by investing in communication companies. That's their pathetic attempt at a "moat".
They catch up by distilling frontier models. They will eventually figure out how to prevent that from happening. No one has any interest in investing tens of billions if the product can be copied and sold for less.
>No one has any interest in investing tens of billions if the product can be copied and sold for less.
That is what has happened until now though
I'm rooting for Mistral. It seems they are making a big bet that smaller models will win over larger ones, and I can see it happening. I was running some simple chat and tool-calling benchmarks for small models, and Mistral Small 4 performed well for its price ($0.15/$0.60). Seeing this today got me excited, and the benchmarks seem solid compared to much larger models, but it's priced higher than Haiku, 5.4 mini, and all the Chinese models it's comparing itself to. It's not even winning those benches, just staying competitive with them, which is great given those models are 5x+ the size, but they are also half the price. Hard to be excited about that.
TLDR: Mistral Medium 3.5, text-only, 128B dense model, 256k context window, modified MIT license. Model is ~140G ...
https://huggingface.co/mistralai/Mistral-Medium-3.5-128B
They more or less claim this exceeds Claude Sonnet 3.5 on most things, but is worse than Sonnet 3.6, and exceeds all other open models.
Oh and they have a cloud service that will code your apps "in the cloud". But, yeah, at this point, so does my cat.
And, yes, unsloth is on it: https://huggingface.co/unsloth/Mistral-Medium-3.5-128B-GGUF (but the 4-bit quant is 75G)
Sonnet 4.5 and 4.6*
There is no way it exceeds “all other” open models - but it does exceed all of Mistral’s past models.
You can see it getting blown past by GLM 5.1 and Kimi in this.
Still excited to give it a try
It looks like Qwen 3.6 is winning, and it's smaller, in the April small-model rollout.
Unfortunately they only compare to an old set of "all other open models". There are probably over 10 other open models better than it by now.
You mean Sonnet 4.5 and 4.6 riight
right
Oh, they are still a thing?! Completely forgot about Mistral. I am assuming they are still burning through investor money.
What's better than Voxtral for locally processed voice input? More competition is always better.
I want to believe it's gonna be good, but after trying GPT-5.5 even the most advanced Chinese models seem depressing.
This is a French model sir
Évidemment (obviously)
Funny detail: Google AI (the one they use in search) can't spell évidemment correctly.
What's French for 'goblin'...?
Then you’ll be happy to learn it’s not Chinese
GP is stating that the second best in the field, the Chinese, is so far behind the best in the field, GPT 5.5, that it is not even worth testing anything else.
Thanks for the translation, I did not express it very clearly. Anything that I try is so much worse.
Is GPT 5.5 the best in the field? I think Opus is still better despite Anthropic's recent stumbling.
I am not following this obsession with SOTA and benchmark rankings.
I have been using DeepSeek and GLM models with OpenCode, Codex, and Claude side by side.
I have not found the Chinese models lacking. I enjoy them for coding and like to maintain full control of my codebase, and I deeply care about the GoF patterns. So I am very stringent in terms of what I want the LLM to code and how to code it.
So from my perspective, they are all about the same.
That I agree with, but for more complex autonomous changes the differences are considerable. However, it seems that most models will reach a saturation point at which they are useful for almost everything, and the differences will show up only in increasingly niche and specialized tasks.
Honestly, it depends on the context whether this performance matters. Mistral is quite cheap.
Looks at the first graph. It's SWE-Bench Verified, a benchmark OpenAI stopped using two months ago due to contamination.
Doesn't look too promising. Is there any reason to consider Mistral other than it not being US?
They did not stop using it due to contamination. They said it's flawed and indirectly said Anthropic's results were impossible. It's very possible they are sore losers.
If it's not US and it's within a few percent of SOTA, that might be good enough for a lot of people (e.g. Europeans).
Gemma has been better for us at EU languages than Mistral (for comparably sized models) :/ so ... dunno. What Mistral does well, and where others are lagging behind, is deploying on-prem with their engineers and know-how, offering tuned models for your tasks and finetuning on your own data. (I expect Google to start offering this next.)
It's sad that despite their strength in this for on-prem, they're so behind on it in the cloud. No publicly available cloud SFT at all. Meanwhile Google has been offering that for years - though it remains to be seen if they will for Gemini 3 when it goes GA.
And on top of that, a range of providers like Fireworks offer it for Chinese models. This seems such an obvious thing for Mistral to offer.
Price and speed.