The 2nd order effect that not a lot of people talk about is price: the fact that model scaling at this pace also correlates with price is amazing.
I think this is just as important to distribution of AI as model intelligence is.
AFAIK there are no fundamental "laws" that prevent price from continuing to fall, at least in line with Moore's law (or whatever the current AI/Nvidia chip development cycle is called right now). Each new generation of hardware is significantly faster and cheaper than the last, so will we see a ChatGPT-5 model at half the price in a year? (Yes, I know that thinking models cost more, but I mean on a per-token basis.)
You are vastly underestimating the price decline. To cherry-pick one article: in the first two years since GPT-3.5, inference price for the same amount of intelligence decreased 10x per year, according to a study by Andreessen Horowitz (https://a16z.com/llmflation-llm-inference-cost/). So even in a stark slowdown scenario, say roughly 4x per year instead of 10x, we could still see a 1000x decrease in the next 5 years.
Price deflation is not tied to Moore's law right now because much of the performance gain comes from model optimization, high-bandwidth-memory supply chains, and electrical capacity build-out, not FLOP density.
True! I just know that model optimization gains are much less guaranteed than, say, FLOP density, even though model optimization has so far provided way more gains than hardware advancements.
Part of me is optimistic that when the AI bubble bursts the excess data center capacity is going to be another force driving the cost of inference down.
> I just know that model optimization gains are much less guaranteed than say, FLOP density, even though model optimization has so far provided way more gains than hardware advancements.
Performance gained from model improvements has outpaced performance gained from hardware improvements for decades.
Haha, I love how delusional everyone is about AI.
Yeppers, when that bubble bursts - that's hilarious. This is the kinda stuff grandkids won't believe someday.
> has decreased 10x per year according to a study by Andreessen Horowitz
I believe you but that's not exactly an unbiased source of information.
We are heading into the future of very low-cost AI inference. It's a good thing, and expected.
Happy to see Chinese OSS models keep getting better and cheaper. It also comes with a 50% API price drop for an already cheap model, now at:
Input: $0.28/M ($0.028/M cache hit)
Output: $0.42/M
This price drop is nice, but I wonder how long it will last. Their prices used to be very low, then they almost doubled, and now they've dropped again.
I don't know if it will stay this low but the whole point of v3.2 is to be cheaper to run than <= v3.1.
(The inference costs are cheaper for them now as context grows because of the Sparse attention mechanism)
I was using it daily, but after the price jump, using codex and claude was much cheaper than using deepseek.
What was the price before? I thought they had just increased their prices.
Input: $0.07/M (cache hit), $0.56/M (cache miss)
Output: $1.68/M
https://api-docs.deepseek.com/news/news250929
https://openrouter.ai/deepseek/deepseek-v3.2-exp
Strange - the model is marked as "Trains on data" ("To our knowledge, this provider may use your prompts and completions to train new models. This provider is disabled, but it can be re-enabled by changing your data policy.").
This is usually not the case for paid models -- is Openrouter just marking this model incorrectly or do Deepseek actually train on submitted data?
It is no longer the case that paid providers don't train on your data on Openrouter. You can exclude such sources in the settings.
Yep I have that setting disabled so the number of providers for that model on Openrouter currently is 0 for me.
I guess I'll wait for a 3rd party provider on Openrouter that doesn't log DS 3.2.
https://cdn.deepseek.com/policies/en-US/deepseek-privacy-pol...
https://openrouter.ai/docs/features/privacy-and-logging#data...
It seems so.
Is OpenRouter really open? I see their "main" repo is archived, plus various smaller projects.
Is it just the API client bindings that are open, while the core routing service is closed?
I don't know why they need to claim to be open. Their job is to connect you to providers on the basis of price and various metrics they track. Open or closed makes no difference to me.
I always interpreted it as "open" as in "open market".
It's a frictionless marketplace connecting inference providers and customers, creating a more competitive market. Or a more open market, if you play a bit fast and loose with terminology.
It's in the name. Why not name themselves ModelRouter or something similar?
If they lead the market, they'll extract value in lots of ways that an open company could at least be compelled not to. Plus there won't be competition.
They're probably selling your data to LLM companies and you don't even see what they're doing.
Without competition, they'll raise their rates.
If they were open, you could potentially run the offering on-prem. You could bolt on new providers or use it internally for your own routing.
Lots of reasons.
Here's an open source alternative you can self-host: https://llmgateway.io/
I think it's just called OpenRouter because the founder previously started OpenSea (an NFT marketplace), and also probably to sound a bit similar to OpenAI. It's like companies calling their products "natural" or "organic" or "artisan" when they can get away with it, just a marketing strategy of using words that conjure up vaguely positive connotations in your mind.
Fun fact, we own closedrouter.ai and redirects to llmgateway.io
They can't raise their prices much because providers have the upper hand, so users will always be able to go directly to the source. I use OpenRouter and OpenAI, Anthropic, Google, etc.
Not sure if I get it correctly:
They trained a component to mimic the full attention distribution while keeping only the top-k (k=2048) most important tokens, so as the context window grows, the expensive attention over [query, key] pairs no longer scales with the full length. A lightweight "indexer" still scans the entire context, which is why the graph still grows linearly (that part is O(L)), but it does so very roughly and cheaply; the heavy attention then only runs over the selected top-k tokens.
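Something like this, if I'm reading it right. A minimal sketch of the two-stage idea (illustrative only, not DeepSeek's actual kernel; the dimensions, the indexer projections, and the numpy implementation are made up, only the top-k = 2048 cutoff comes from their description):

```python
# Rough sketch: a cheap "indexer" scores every cached token in O(L), then
# full attention runs only over the top-k scored tokens instead of all L.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sparse_attention(q, K, V, idx_q, idx_K, k=2048):
    """q: (d,) query; K, V: (L, d) cached keys/values.
    idx_q, idx_K: small "indexer" projections used only for scoring."""
    L = K.shape[0]
    # Stage 1: lightweight relevance score for ALL L tokens (cheap, O(L)).
    scores = idx_K @ idx_q                      # (L,)
    # Stage 2: keep only the top-k most relevant positions.
    top = np.argsort(scores)[-min(k, L):]       # indices of selected tokens
    # Full attention restricted to the selected tokens: O(k*d), not O(L*d).
    attn = softmax(K[top] @ q / np.sqrt(q.shape[0]))
    return attn @ V[top]

# Toy usage with random data (all dims arbitrary):
L, d, d_idx = 10_000, 64, 16
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=(d,)), rng.normal(size=(L, d)), rng.normal(size=(L, d))
idx_q, idx_K = rng.normal(size=(d_idx,)), rng.normal(size=(L, d_idx))
out = sparse_attention(q, K, V, idx_q, idx_K, k=2048)   # (64,)
```

The training trick, as I understand it, is getting the indexer's cheap scores to agree with what full attention would have picked, so dropping the rest of the context loses little quality.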
wow...gigantic reduction in cost while holding the benchmarks mostly steady. Impressive.
One huge problem with these "cheap" models is that they happen to be more expensive in the typical agent workflow if the provider does not support caching.
Input and output costs are peanuts compared to the order of magnitude(or more) amount of tokens that hit the cache.
At that point you might as well use GPT-5. It will be the same price or cheaper, and more capable.
> One huge problem with these "cheap" models is that they happen to be more expensive in the typical agent workflow if the provider does not support caching.
DeepSeek supports caching and cache hits are a tenth of the cost.
$0.028/M for cache hit
$0.28/M for cache miss
$0.42/M for output
— https://api-docs.deepseek.com/news/news250929
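For a rough sense of why caching dominates agent costs, here's a back-of-the-envelope calculation using those list prices; the 5M-token session and the 90% hit rate are made-up numbers for illustration, not measurements:

```python
# Hypothetical agent session: 5M input tokens total, 200k output tokens.
input_tokens, output_tokens = 5_000_000, 200_000
hit_rate = 0.90                                      # assumed cache-hit ratio

cache_hit, cache_miss, output = 0.028, 0.28, 0.42    # $/M tokens (DeepSeek list prices)

with_cache = (input_tokens * hit_rate * cache_hit
              + input_tokens * (1 - hit_rate) * cache_miss
              + output_tokens * output) / 1e6
without_cache = (input_tokens * cache_miss + output_tokens * output) / 1e6

print(f"with caching:    ${with_cache:.2f}")    # ~$0.35
print(f"without caching: ${without_cache:.2f}") # ~$1.48
```

Under those assumptions the cached run is roughly 4x cheaper, which is why a provider without caching can erase the model's headline price advantage.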
I auto-disqualify the Chinese first-party endpoints.
If they are okay for you, then sure go ahead. Enjoy the caching.
What other provider is going to support it?
> I auto disqualify the chinese first party endpoints.
Why?
I’m guessing it’s something along the lines of this: https://youtu.be/kYiUY07TzS4
By your logic you then have to disqualify the OpenAI and Anthropic first-party endpoints for testing GPT and Claude...
There is no bug in my logic. Anthropic and OpenAI are not Chinese first-party providers.
you declared a huge problem and followed up with an IF.
The DeepSeek API supports caching; stop manufacturing problems where there are none.
https://api-docs.deepseek.com/guides/kv_cache
Sure. But there is no way I'm going to use the deepseek endpoint.
Openrouter says they might use your data for training.
First you complained about lack of caching. When you were informed that the model supports caching, instead of admitting your error you switched to an unrelated complaint. I hope that you do not use similar strategies for discussion in your personal and work life.
Your broad attack on me as a person is unnecessary.
If you read my post carefully, you will realize that I did not make any contradictory statements.
Not a broad attack, it is specifically targeted at your proud xenophobia.
Absolutely ridiculous.
My wife is Chinese.
Caching is not a function of the model but of the provider; any model can be cached. The provider serving the model decides whether to cache it. OpenRouter is not a provider but a middleman between providers, so some of their providers for DeepSeek might offer caching and some might not; if you just use any of them, you might run into the issue. Some of their providers might use your data for training, some might not. You have to look at the list, and you can cherry-pick ones that won't train on your data and that also provide caching.
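If you'd rather do that filtering in the API call than in the dashboard, something like this should work; the provider-routing fields are from memory of OpenRouter's docs, so double-check the names before relying on it:

```python
# Hedged sketch: ask OpenRouter to skip providers that may train on prompts.
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
    json={
        "model": "deepseek/deepseek-v3.2-exp",
        "messages": [{"role": "user", "content": "hello"}],
        "provider": {
            "data_collection": "deny",   # exclude providers that may train on your data
            "allow_fallbacks": True,     # still route among the remaining providers
        },
    },
)
# OpenAI-compatible response shape
print(resp.json()["choices"][0]["message"]["content"])
```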
I was under the impression that this model does support caching. The pricing page says the cost of input tokens (cache hit) is $0.028/M.
Interesting that models still evolve fast enough that dedicated model-specific hardware isn't a big contender right now. We're still seeing major scaling gains on mostly generic platforms.
Google TPUs, Groq, and Cerebras need to be mentioned, even if they are optimized for more general architectures.
Looks like DeepSeek Sparse Attention can help with code (structured and long-file reasoning).
Prices fall, benchmarks remain stable. Maybe in the future, LLM providers will spend most of their money on electricity.
You guys rock! I'm very curious how this will perform against real-world data, where small nuances matter. Also, have you tested it beyond the 128K context window?
Awesome that sparse attention is being used in a real-world setting.
What happened to Meta's open-weights models? Lately I keep hearing more about DeepSeek than Llama.
Weren't Llama 4 Maverick and Scout a flop?