That's quite an extensive tour through the world of image signal processing.
I can't quite grok the filter added to the DDS to generate twiddle factors for FFTs. I'll have to re-read that section a few times for it to sink in.
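For now, my rough mental model of the DDS half is just a phase accumulator whose top bits index a complex-exponential LUT. A quick numpy sketch (the function name and bit widths are made up, and it leaves out whatever filter the chapter adds to deal with phase-truncation spurs):

    import numpy as np

    def dds_twiddles(N, phase_bits=12, lut_bits=10):
        # Phase-accumulator DDS: step the accumulator by a fixed increment and
        # use its top bits to index a complex-exponential lookup table.
        acc_mod = 1 << phase_bits
        inc = acc_mod // N                    # assumes N divides 2**phase_bits
        lut_size = 1 << lut_bits
        lut = np.exp(-2j * np.pi * np.arange(lut_size) / lut_size)

        acc = 0
        out = np.empty(N, dtype=complex)
        for k in range(N):
            out[k] = lut[acc >> (phase_bits - lut_bits)]   # phase truncation
            acc = (acc + inc) & (acc_mod - 1)              # wrap on overflow
        return out

    # Sanity check: exact twiddles are exp(-2j*pi*k/N)
    N = 64
    exact = np.exp(-2j * np.pi * np.arange(N) / N)
    print(np.max(np.abs(dds_twiddles(N) - exact)))

With the increment set to 2**phase_bits / N, each step advances the phase by exactly 1/N of a turn, so in this toy case the truncation error is zero; presumably the filter earns its keep when the mapping isn't that clean.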
> 3.5 Hardware Implementation of Flat-Field Correction
Shouldn't one use some hardware description language (HDL) in such chapters? Or I've overlooked where the code is placed?
It's left as an exercise to the student^H^H^H^H^H^H^H LLM.
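To be fair, the arithmetic behind flat-field correction is compact enough that the HDL is close to a transcription of it. This isn't the book's code, just a numpy sketch of the usual two-point scheme, offset subtract plus fixed-point per-pixel gain (the bit widths, clamping and 12-bit sensor are my assumptions):

    import numpy as np

    def flat_field_correct(raw, dark, flat, gain_frac_bits=12):
        # corrected = (raw - dark) * m / (flat - dark), with m = mean(flat - dark)
        flat_minus_dark = np.maximum(flat.astype(np.int32) - dark.astype(np.int32), 1)
        m = np.mean(flat_minus_dark)
        # Per-pixel gains as unsigned fixed point (what would sit in a gain RAM).
        gain_fx = np.round((m / flat_minus_dark) * (1 << gain_frac_bits)).astype(np.uint32)

        diff = np.maximum(raw.astype(np.int32) - dark.astype(np.int32), 0)
        corrected = (diff * gain_fx) >> gain_frac_bits     # shift back out of fixed point
        return np.clip(corrected, 0, 4095).astype(np.uint16)  # 12-bit sensor assumed

    # Toy example: a synthetic 4x4 frame with column-wise vignetting.
    rng = np.random.default_rng(0)
    dark = rng.integers(50, 60, (4, 4)).astype(np.uint16)
    flat = dark + np.array([[800, 900, 900, 800]] * 4, dtype=np.uint16)
    raw  = dark + (np.array([[800, 900, 900, 800]] * 4) * 0.5).astype(np.uint16)
    print(flat_field_correct(raw, dark, flat))

In hardware the per-pixel gains would normally live in a calibration RAM, so the streaming datapath reduces to a subtract, a multiply and a shift per pixel.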
FPGAs? I assumed this kind of processing had long since moved to ASICs.
what do you mean? in image processing or in general? in general they're used a lot where performance is important, especially in high frequency trading, crypto-mining, etc. also they're quite important for image processing in defence/medical imaging.
Every digital ASIC design is simulated on FPGAs first.
Every? On which FPGA did Nvidia simulate the 5090?
If you go to Nvidia's jobs website today you'll find they're actively hiring FPGA developers for ASIC prototyping. Obviously they're not dumping their 5090 RTL straight into some 10-metre-wide FPGA chip. First, they grab the largest FPGAs they can get their hands on - the FPGA vendors tend to have a couple of comically expensive, comically large SKUs for specifically this purpose. Then they pop a few of them onto a development board and partition the design across the cluster of FPGAs with some custom interconnect, orchestration and DFT logic. FPGAs offer quite a compelling way of getting test mileage vs simulation/emulation in software.
Every. If I were to guess, Nvidia probably uses Cadence Palladium/Protium solutions[1]. They're basically industry standard, and essentially everyone uses Cadence design tools for circuit design.
[1]: https://www.cadence.com/en_US/home/tools/system-design-and-v...
You don't need to simulate the design completely/simultaneously. The FPGA sim implementation might only contain 1 CUDA SM, for instance.
For large ASIC designs like this, companies often use numerous (12+) FPGAs connected via transceivers on dedicated simulation boards.
Yes, many SoCs already have an ISP core. Maybe FPGAs are used in low-volume specialized cameras.
But ASICs are a big $$$ ask.
Sure. If you buy like a million units.