To add to that, I recently heard that Intel is about to jump the Synopsys ship for Cadence.
I was completely dazzled, thinking "what do they get from that?", until it hit me that only Cadence has 7nm TSMC and Samsung workflows ready.
This got me thinking: is Intel finally considering doing 7nm tapeouts at TSMC or Samsung?
I remember reading that Intel was having TSMC run off the lowest end Altera FPGAs because Intel's fabs were just too expensive.
Other possibilities off the top of my head: Cadence may be able to do things that are now required for these smaller, EUV-based nodes, and/or it's a backup in case their "10nm" never makes it and their "7nm" has problems or also fails. Embarrassment should be preferable to the worst-case alternatives, and we have no reason to believe Intel will regain its ability to move to smaller nodes.
> only Cadence has 7nm TSMC and Samsung workflow ready.
I don't think that's right, unless I'm missing something.
https://news.synopsys.com/2018-04-30-TSMC-Certifies-Synopsys...
https://news.synopsys.com/2018-06-22-Synopsys-Custom-Design-...
Well, nice to know. Nevertheless, Cadence was first to get to 7nm, and is still ahead with newer versions of 7nm processes like 7HPC.
To my knowledge, they are the only ones who have anything workable for EUV-based processes.
That's pretty wild. Most atoms are 0.1-0.2nm in diameter.
It is indeed pretty wild, but the "3 nm" name is not an actual distance; it's a marketing term meant to represent transistor density. The actual transistors are much larger than that.
For example, Intel's 10nm is equivalent to Samsung's 7nm; they both have a pitch of around 40nm.
Is there any comparison of transistor density (and power consumption) across manufacturers/generations? Though they probably hide this as hard as possible, throwing only marketing nonsense at consumers.
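For a rough answer: approximate peak logic transistor densities have circulated in public analyst reporting. The figures below are ballpark estimates from that reporting, not vendor-confirmed numbers, but they make the "Intel 10nm ~ foundry 7nm" equivalence concrete:

```python
# Approximate peak logic transistor densities (million transistors / mm^2),
# from public analyst estimates -- ballpark only, not vendor-confirmed.
DENSITY_MTR_MM2 = {
    "Intel 14nm":   37.5,
    "Intel 10nm":  100.8,
    "TSMC 7nm":     91.2,
    "Samsung 7nm":  95.3,
    "TSMC 5nm":    171.3,
}

# Print nodes from least to most dense; note how the marketing names
# don't line up with the actual density ordering.
for node, d in sorted(DENSITY_MTR_MM2.items(), key=lambda kv: kv[1]):
    print(f"{node:12s} ~{d:6.1f} MTr/mm^2")
```

By these estimates Intel's "10nm" is actually denser than both foundries' "7nm", which is exactly why the node names shouldn't be compared at face value.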
At this point I don't think they physically measure anything on the chip to get that number. It's just an indication of how tightly you can space the transistors, plus a bit of marketing hype. The pics in the article make me wonder if the new gate design is even wider than before, but allows for stacking up transistors which ends up increasing the density to the equivalent of 3nm.
> At this point I don't think they physically measure anything on the chip to get that number.
They are measuring the smallest feature size - the smallest shape that is reliably replicated across the wafer. Actually pretty impressive when you think about the lithography behind it.
Not in this case. It's mostly just a marketing term. If I wanted to be pedantic: Intel's first-gen 10nm has a fin width at the tip of 7nm, while 14nm had a fin tip width of 8nm.
https://www.anandtech.com/show/13405/intel-10nm-cannon-lake-...
Also critical: the crystal lattice spacing between atoms in silicon is about 0.5nm.
Kinda makes me wonder what the absolute limit is. Is a 3 atom transistor possible?
Go much smaller than we have right now and you run up against some exotic issues: if you decrease the dielectric thickness, you risk breakdown between the gate and source/drain under normal operating conditions. Your steady-state current rating also drops dramatically as the other feature sizes shrink. I would say a 3-atom MOSFET is not possible, since at that scale you can't really have an effective dielectric between the atoms, and the conductors connected to the "single-atom terminals" will have a bulk effect on the behavior.
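The breakdown concern can be made concrete with a back-of-the-envelope field calculation. The 0.7 V supply voltage and the ~10 MV/cm SiO2 breakdown field below are my own illustrative assumptions, not figures from the comment:

```python
def oxide_field_mv_per_cm(v_gate: float, t_ox_nm: float) -> float:
    """Electric field across the gate dielectric, in MV/cm."""
    t_ox_cm = t_ox_nm * 1e-7       # 1 nm = 1e-7 cm
    return v_gate / t_ox_cm / 1e6  # V/cm -> MV/cm

SIO2_BREAKDOWN_MV_CM = 10.0  # rough bulk SiO2 breakdown field (assumed)

# Same 0.7 V gate voltage across ever-thinner oxides:
for t in (2.0, 1.0, 0.5):  # oxide thickness in nm
    e = oxide_field_mv_per_cm(0.7, t)
    status = "OK" if e < SIO2_BREAKDOWN_MV_CM else "breakdown risk"
    print(f"t_ox = {t} nm -> {e:5.1f} MV/cm ({status})")
```

At a couple of atoms of oxide (~0.5 nm), even a modest 0.7 V already exceeds the assumed breakdown field, which is the point the comment above is making.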
> if you decrease the dielectric thickness
This is why people are moving away from planar transistors to FinFETs, and eventually to "gate-all-around" designs.
You can go pretty small: https://www.nature.com/articles/nnano.2012.21
https://en.wikipedia.org/wiki/Single-atom_transistor
3nm (or any "node" measurement) is not a measure of the entire transistor, only of the smallest feature size.
What do you mean by "feature size"?
A semiconductor is made of multiple surfaces/areas. The lithographic process allows some of them to be 3nm in size, but a transistor can be the sum of many parts.
A very rough approximation would be to describe a wind-up watch (processor) by the smallest tooth of the smallest gear (component of the transistor)
Can someone elaborate on how this translates to actual GPU performance? What is the performance jump compared to the current GPU generation (will this allow running a 4k x 4k per-eye VR headset)?
Today, even with Nvidia 1080 cards, running 2k x 2k per-eye headsets is not an easy task, and a smooth VR experience is out of reach.
It's right there in the article: 1.35x. That sounds like raw speed though, with actual changes probably depending more on what people do with the new technology.
It is a comparison against 7nm, along with a 0.65x die area reduction.
Current Nvidia is 12nm I believe, so the question is what the possible pixel count bump is.
The article and slides don't give baselines for those percentages: is it 35% more at fixed frequency? At fixed power?
"The headline PPA values that Samsung is announcing are also impressive: compared to 7nm, 3GAE will offer 1.35x performance, 0.5x power, with a 0.65x die area. [...] Samsung states that these performance numbers are based on using larger width cells for critical paths where frequency is important, and smaller width cells for non-critical paths where power savings are crucial. Technically Fmax of the widest cells is listed as 1.5x, while power at Fmax is 0.6x. Power at iso-performance is where the 0.5x number comes from."
I don't think 35% at fixed frequency makes sense; 35% is the frequency boost (with power savings too).
Nearly every existing VR game runs perfectly smooth with a 2080. You won’t get 90fps 4K even with a Ti card, but for FHD we’re well served, price aside.
GPUs scale really well with transistor count, and you can expect about double the performance with the best of 7nm, and likely another doubling at 3nm. So you are looking at 4x the performance.
Of course this does not take into account TDP, clock speed, memory speed, die size cost, etc. What is technologically possible may not be economically possible. We are going to need much faster memory (GDDR7? or HBM3?), and how much would those cost?
And a 3nm Nvidia GPU (whether from Samsung or TSMC) will likely be 2022 or 2023 at the earliest.
More transistors require more energy. GPUs are already at 300W. Doubling the number of transistors naively means double the power consumption, i.e. 600W. Smaller transistors are more energy efficient, but they can't cut the energy per transistor by 50% each generation. So you end up with something like a 400W GPU. That's still too high. Energy efficiency is now far more important than raw transistor count.
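The arithmetic above as a parametric sketch. The 0.65x per-transistor energy factor is an illustrative assumption chosen to land near the comment's ~400 W figure, not a measured number:

```python
def next_gen_power_w(base_w: float, transistor_mult: float,
                     energy_per_transistor: float) -> float:
    """Naive power estimate: more transistors, each using relatively less energy."""
    return base_w * transistor_mult * energy_per_transistor

BASE_W = 300.0  # today's high-end GPU board power, W

print(next_gen_power_w(BASE_W, 2.0, 1.0))   # no efficiency gain: 600 W
print(next_gen_power_w(BASE_W, 2.0, 0.5))   # ideal 50% energy cut: 300 W
print(next_gen_power_w(BASE_W, 2.0, 0.65))  # a more realistic factor: ~390 W
```

Only if per-transistor energy halves does power stay flat while transistor count doubles; anything short of that pushes board power up.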
> And a 3nm ( Whether that is from Samsung or TSMC ) Nvidia GPU will likely be 2022 or 2023.
That seems really optimistic to me.
Edited a bit: Nvidia has always been late to a leading node (or waits for it to mature). So in reality we are looking at 2024/2025 for a mainstream part.
That still sounds quite optimistic to me.
There is heat dissipation to consider as well so you can't just make a die 4x more dense.
Well hopefully the smaller transistors will each take less power too.
I'm super curious about the semiconductor technology used to create 3nm layers and beyond. There aren't many processes to choose from to begin with that are scalable with adequate uniformity. Do they use some sort of MBE process?!
The rumours I heard were that Apple's 7nm tapeout had tragically low yields, between 30 and 40% (throwing away more than half of each wafer), but also that TSMC eventually got 7nm yields to sane values.
New processes almost always have a yield drop followed by a recovery, but anything below 50% is really unprecedented.
The Apple A12 is not that huge a chip. It is big for a mobile SoC, but smaller than, say, a mainstream gaming-grade GPU.
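Die size matters enormously here. A standard first-order model is the Poisson yield model, Y = exp(-D*A). Below, the defect density is back-solved to match the rumoured ~35% yield on an ~83 mm^2 A12-class die, and the ~450 mm^2 GPU area is an assumed figure for a big gaming die:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """First-order Poisson die-yield model: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

# Defect density that would give ~35% yield on an ~83 mm^2 mobile SoC:
D = -math.log(0.35) / (83.0 / 100.0)  # ~1.26 defects/cm^2

print(f"~83 mm^2 SoC yield:  {poisson_yield(D, 83.0):.0%}")
print(f"~450 mm^2 GPU yield: {poisson_yield(D, 450.0):.1%}")
```

At the same defect density, the big GPU die yields well under 1%, which is why a new node can be viable for mobile SoCs long before anyone dares fab a large GPU on it.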
I greatly doubt they use MBE; most likely some regular kind of epitaxy plus a lot of process control and swearing to get it right. MBE would increase cycle times way too much (it needs ultra-high vacuum and zero impurities).
The information in the article is incorrect. The PDK gives more performance OR lower power, but not both. Other reporters have this right.
Samsung's own slides state:
Fmax: +50%. Power @ Fmax: -40%. Power @ iso-performance: -50%.
Different parts of Samsung aren't talking to each other.
I was under the impression that at this scale you have quantum tunneling issues.
You have "quantum tunneling" well before this process node. Otherwise, flash memories wouldn't work.
Why is nobody talking about the fact that the ultimate limiting factor of the 7/5/3nm nodes is the type of lithography being used (i.e. only EUV-equipped foundries can go beyond 7nm), and that the only serious manufacturer of cutting-edge EUV tools is ASML (which has only managed to develop powerful EUV light sources with tremendous R&D expenses and years of delays)? Once you have an EUV system like the NXE:3400B (just one of these EUV tools is $200M and up, plus high running expenses), the rest is "easy". And by easy I mean "possible" or "up to the foundry". I feel like the R&D expenses and the history of EUV development paint a bleak picture of the future of semiconductor miniaturization. BTW: as far as I know, commercial 3nm EUV tools do not exist yet. If you're looking forward to 3nm products, you should expect to wait another 5 years. 5nm to 3nm is going to take a lot longer than 7nm to 5nm.