I think the main decider here could be financial.
If backside power requires, say, 10 more imaging steps, then for a given number of imaging machines you can produce fewer wafers per month.
If there is strong demand for the new process (and since it'll be industry leading, I'm sure there will be), you make more money by producing more wafers per month, even if they're missing a feature.
As demand eases and more imaging machines get delivered, it makes sense to add backside power back in.
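To put rough numbers on the throughput argument, here's a back-of-envelope sketch. Every figure (fleet size, exposures per hour, step counts) is an illustrative assumption, not a real fab number:

```python
# Back-of-envelope: how extra imaging steps cut wafer output for a
# fixed scanner fleet. All numbers below are illustrative assumptions.

def wafers_per_month(scanners, exposures_per_hour, steps_per_wafer,
                     hours_per_month=720):
    """Wafer starts per month supportable by the scanner fleet."""
    total_exposures = scanners * exposures_per_hour * hours_per_month
    return total_exposures // steps_per_wafer

base = wafers_per_month(scanners=20, exposures_per_hour=100,
                        steps_per_wafer=60)
with_bspd = wafers_per_month(scanners=20, exposures_per_hour=100,
                             steps_per_wafer=70)  # +10 steps for backside power

print(base, with_bspd)          # 24000 20571
print(round(with_bspd / base, 3))  # ~0.857: roughly 14% fewer wafers
```

With these made-up numbers, 10 extra steps on a 60-step flow costs about a seventh of your monthly output, which is exactly the kind of revenue hit the comment describes.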
The main decider here is probably risk.
Intel tried to do too many things at one time with 10nm and it set them back over half a decade.
As it is likely that the number of launch customers for this node is one (continuing the trend), this may be necessary simply to reach agreed-upon volumes.
In practical terms, how long does an imaging step take? A few seconds? Half an hour? Just wondering about the sense of scale.
I assume there'd also be yield implications, since each step would add risk that something gets screwed up.
This probably depends upon the steps before and after.
Removal of solvents and unused dopants might be more complex depending upon the step in processing.
Does the step require EUV? That laser-pulsed tin isn't free for the asking.
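The yield concern a few comments up can be sketched with a simple independent-defect model: if each step has some per-step yield, total yield decays geometrically with step count. The 0.995 per-step figure is a made-up illustrative number:

```python
# Compounding yield risk: assuming each process step independently
# succeeds with probability per_step_yield, total yield decays
# geometrically with the number of steps. 0.995 is illustrative.

def stack_yield(per_step_yield, n_steps):
    return per_step_yield ** n_steps

y_60 = stack_yield(0.995, 60)  # baseline flow
y_70 = stack_yield(0.995, 70)  # with 10 extra steps

print(round(y_60, 3), round(y_70, 3))  # ~0.740 vs ~0.704
```

Even a 0.5% per-step risk turns 10 extra steps into a few points of lost yield, on top of the raw throughput cost.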
Making multilayer nanosheets means O(n) additional multi-patterning imaging steps: one for each nanosheet layer. That's what's possible with the current "brute force" approach.
A nanosheet device will have to get thicker to match FinFET performance if the number of layers is to stay sane.
This is why it's accepted that the first generations of multilayer GAA devices will be inferior to FinFETs of equivalent density.
Everybody is, of course, searching for a way to do multiple nanosheets with a single imaging step.
Twenty-year-old experimental nanosheet designs won't do it in production today. E.g., cutting multiple sheets with RIE is far too coarse and destructive for modern feature sizes, and depositing sheets into a negative pattern on a sacrificial poly layer brings alignment issues.
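The scaling argument above can be spelled out in a few lines. Pass counts here are illustrative assumptions, not real process data:

```python
# Why O(n) matters: with the current "brute force" flow, each nanosheet
# layer needs its own multi-patterning pass, so imaging work grows
# linearly with layer count. Exposure counts are illustrative.

def brute_force_imaging_steps(layers, exposures_per_pass=2):
    # one multi-patterning pass per nanosheet layer: O(n) in layers
    return layers * exposures_per_pass

def single_step_imaging(layers):
    # the approach everyone is chasing: one imaging step for the
    # whole stack, independent of layer count: O(1)
    return 1

for n in (2, 3, 4):
    print(n, brute_force_imaging_steps(n), single_step_imaging(n))
```

The gap between the two columns is the whole incentive for finding a single-step multi-sheet process.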
Is my memory mistaken? I thought N2 was originally supposed to be released this year, and because it was pushed to next year instead, it disrupted Apple's plans for the M2.
But looking at the timeline, it says N2 isn't available until 2025.
Is my memory wrong that N2 was supposed to be 2022, then pushed to 2023?
https://images.anandtech.com/doci/17469/tsmc-roadmap-june-20...
The first sentence of TFA says N2 was announced “earlier this month” (in 2022), so it must be mistaken memory, or you were reading unreliable rumors.
I think you're thinking of N3. M2 is still on N5.
> M2 is still on N5
That's my point. M2 was intended to be on N3, IIRC.
I'm watching the chip-engineer version of "Who's On First?" evolve in real time on Hackernews. Awesome.
But N2 being pushed back is orthogonal to N3. It's a full node.
How is it orthogonal?
If N3 gets pushed, it'll inevitably push N2 back as well.
I would say it's neither orthogonal nor inevitable.
But either way, N2 was never scheduled for 2022.
The full Anandtech article link:
https://www.anandtech.com/show/17452/tsmc-readies-five-3nm-p...
Covers a lot of variants of N3 as well.
Everything about your recollection is close to correct if you substitute N3 for N2.
I hope Intel get their act together and start manufacturing ARM chips designed by Apple for Apple. Competition will push TSMC even harder to innovate faster to stay ahead.
Hmm, I wouldn't bet on it. Apple's chip designers are obviously great, but I bet a lot of their industry-leading performance/watt comes from being a node or two ahead of everyone else thanks to TSMC.
Bingo. Colocated RAM and the bespoke bridges are a massive part of this too, but buying out what is effectively the entire global capacity for a node is why Apple chips are where they are.
It's a main reason the iPod won so big too: they bought exclusive rights to the only existing small hard-drive tech for music-player applications (you could still buy the drives for things like PDAs). Everyone else had to use laptop-sized drives.
> buying out what is effectively the entire global capacity for a node
Apple has the money so Apple gets the node which makes Apple more money and everybody else has to wait.
Similar to watching Intel scalp the entire industry, I don't think the US government cares who is in the lead, as long as that company is in their pocket. Apple's move to leapfrog the industry was smart, but it also calls into question how efficient the chip actually is. If you compare the Apple A12Z (their 7nm predecessor to the M1) to Intel's 7nm offerings, the comparison starts to tilt in Intel's favor. In SIMD benchmarks, Intel can be 10-100x faster than ARM's NEON stopgap.
In other words, I'm really curious what the landscape will look like once Apple, Intel and AMD are all on the same node. I reckon x86 still has some life in it yet, a processor with AMD's IPC enhancements and Intel's process enhancements would cleanly smoke the ARM competition. It just remains to be seen if Intel or AMD will be the first to get there...
I don’t know if their position is quite so secure as Intel’s used to be.
Production can continue to scale up, and there’s massive spending to increase US capability due to the Taiwan situation. Apple obviously still have the money to throw around but I’m sure TSMC want to tread incredibly carefully here with regards to making their products available to more customers. Being an Apple-exclusive fab is lucrative in the short term but encourages your other customers to look for supply elsewhere, which is an existential risk to your dominance.
There are many factors that come together to make the M chips so efficient. The node is one. Another is full integration and optimization of all the systems together. Another is the inherent efficiency of ARM instruction decoding.
I believe this is what Intel is betting, maybe not the bank, but at least their bailout package on.
Power is an interesting part of the trend for TSMC -- 4 generations of 30% power reduction should compound to roughly a 75% reduction, so where you burned 10W before, you'd now burn only about 2.5W. But does that mean you get the lower power and the better perf as well? I would assume it's a 75% power reduction at the performance of the 7nm node -- not positive, though.
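The compounding math in that comment checks out. A quick sketch, assuming each generation's 30% figure is an iso-performance claim (you get lower power at the same speed, or higher speed at the same power, not both at once):

```python
# Compounding four generations of "30% lower power at the same
# performance". Foundry numbers are typically iso-performance claims.

power = 1.0
for _ in range(4):
    power *= 1 - 0.30  # each generation cuts power by 30%

print(round(power, 4))       # 0.2401: about a 76% total reduction
print(round(10 * power, 1))  # a 10 W budget drops to roughly 2.4 W
```

So the "10W becomes ~2.5W" figure is about right, but only at fixed performance; you can't bank the full power saving and the full speed gain simultaneously.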
Maybe Intel is correct in trying to do this in one go. Maybe TSMC knows something about EUVL that Intel doesn't.
Judging by TSMC's reliability and consistency in delivering nodes for the past decade, I'd bet on them instead of Intel, who are still delaying products today because of node issues. Chances are sadly high that Intel will end up biting off more than they can chew, yet again.
Strictly speaking, Intel isn't trying to do this in one go. They have an entire internal node which they are developing exclusively to disentangle the risk of GAAFET transistors and backside power delivery. The internal node uses FinFET transistors based on their Intel 4/3 process but also includes backside power delivery, letting them work out the kinks one step at a time. This development node will never see any volume production and exists exclusively to reduce the risk of moving to GAAFET and backside power delivery at 2nm.
I think they hired people from TSMC and Samsung to copy their processes, and these new plants are the intended result. You can't really stay ahead of a much richer competitor, and Intel still has tons of money, especially since they're backed by the US state. So there's probably not a lot of risk here: they just bring in enough foreign talent and let them do the same thing there.
https://www.theregister.com/2022/07/11/intel_exec_hiring/
Intel legitimately will never be allowed to die, as a US company that owns its own fabs and is fairly removed from foreign geopolitics.
I don't think so - this is exactly why Intel started to lag behind.
Changing more than one risky thing at a time is a rookie engineering mistake that management types just won't understand.
Why is a month and a half old article trending on HN?
Decades old articles can trend here, HN is not limited to brand new content.