So the single place where we can buy this is already showing no stock, and it's not clear whether it will even ship given all the customs and tariffs churn. I must say, after waiting for months on the 'almost ready to ship' DGX Spark (with multiple partners, no less), I'm getting strong announce-ware vibes from this already.
https://www.arrow.com/en/products/945-14070-0080-000/nvidia?...
My assumption would be that it will suffer the same delays as the DGX Spark, since it is a very similar chipset, so maybe December?
My naive first reaction was that a unit like that would consume way too much power to be practical on a robot, but then I remembered how many calories our own brains need vs the rest of our body (Google says about 20% of total body needs).
Looks like power consumption for the Thor T5000 is between 30W and 140W. The Unitree G1 (https://www.unitree.com/g1) has a 9Ah battery that lasts 2hrs under normal operation. Assuming an operating voltage of 48V (13s battery), that implies the robot's actuator and sensor power usage is ~216W.
Assuming average power usage lands somewhere in the middle (85W), a Thor unit would consume about 28% of the robot's total power budget. This doesn't account for the fact that the robot would have to carry around the additional weight of the compute unit, though. Can't say if that's good or bad, just interesting to see that it's in the same ballpark.
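A quick back-of-envelope sketch of that split in Python, carrying over the figures above (48V 13s pack, 9Ah, 2-hour runtime, and the 85W midpoint for Thor):

    # Back-of-envelope: what share of the G1's power budget would a Thor module take?
    battery_voltage_v = 48        # assumed 13s pack (from the comment above)
    battery_capacity_ah = 9       # Unitree G1 battery
    runtime_h = 2                 # "normal operation" runtime

    robot_power_w = battery_voltage_v * battery_capacity_ah / runtime_h  # ~216 W for actuators + sensors
    thor_power_w = (30 + 140) / 2                                        # ~85 W, midpoint of the quoted range

    share = thor_power_w / (robot_power_w + thor_power_w)
    print(f"robot: {robot_power_w:.0f} W, Thor: {thor_power_w:.0f} W, Thor share: {share:.0%}")  # ~28%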
Can self-driving cars be framed as robots?
An electric car would have no issue sustaining this level of power; a gas-powered car doubly so.
Autonomous vehicles are indeed robots, but they have power constraints (that Thor can reasonably fit within). Most industrial robots aren't meaningfully power constrained though.
It was a bit of a culture shock the first time I was involved with industrial robots because of how much power constraints had impacted the design of previous systems I worked on.
I tried to look up human wattage as a comparison and I'm very surprised that it lands in the same ballpark: around 145W as a daily average and around 440W as an approximate hourly average during exercise.
I thought current gen robots would be an order of magnitude less efficient. Maybe I'm misunderstanding something.
I happen to have an envelope handy:
2000 kilocalories convert to about 8.4 megajoules. This should be the amount of energy consumed per day.
8.4 megajoules / 24 hours is about 97 watts. This should be the average rate of energy expenditure.
97 watts * 20% is about 19 watts. This should be the portion your brain uses out of that average.
19 watts * 24 hours is about 465 watt-hours. This should be the amount of energy your brain uses in a day.
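The same envelope as a tiny script, using the 2000 kcal/day intake and 20% brain share assumed above:

    # Human energy envelope: daily intake -> average power -> brain share
    kcal_per_day = 2000
    joules_per_day = kcal_per_day * 4184        # 1 kcal = 4184 J, so ~8.4 MJ/day
    avg_power_w = joules_per_day / (24 * 3600)  # ~97 W average metabolic rate
    brain_power_w = avg_power_w * 0.20          # ~19 W for the brain
    brain_wh_per_day = brain_power_w * 24       # ~465 Wh/day
    print(f"{avg_power_w:.0f} W average, {brain_power_w:.0f} W brain, {brain_wh_per_day:.0f} Wh/day")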
This is why I've never found "AI" to be particularly competitive with human beings. The level of energy efficiency that our brains operate at is amazing. Our electrical and computer engineering is several orders of magnitude behind the achievements of nature and biology.
Calculate how much energy needs to be input into agriculture and transport to provide that wattage.
To be fair we'd have to consider how much of this same secondary energy would be required to build, operate and maintain the power grid. The grid itself is not 100% efficient either so we'd need to calculate how much power is directly wasted every single day just in inserting and extracting power from those overhead lines.
That's way off the envelope though.
Electric motors are very energy efficient. I believe they are actually far more efficient on a per-joint movement basis, and the equivalence between us and them is largely due to inefficient locomotion.
Where we excel is energy storage. Far less weight, far higher density.
We do a whole lot of things a robot doesn't have to do, like filtering blood, digesting, keeping warm.
Body maintenance.
Every hardware piece of such a robot can do a few things. Our body parts do orders of magnitude more, including growing and regeneration.
> too much power to be practical on a robot
A robot could be useful even when permanently plugged into the grid.
From a UAV perspective, even at 140W it's not too bad. For a multi-rotor, that's about the same power needed to lift around 750g-1kg of payload.
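A rough sanity check on that payload figure; the hover-power-per-kg number is my own assumption for a typical consumer multirotor, not anything from a spec sheet:

    # Rough check: how much payload does 140 W of compute "cost" a hovering multirotor?
    thor_power_w = 140
    hover_power_w_per_kg = 160   # assumed typical figure for a consumer multirotor; varies a lot with prop size
    print(f"~{thor_power_w / hover_power_w_per_kg:.2f} kg of payload-equivalent power")  # ~0.9 kg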
The efficacy to weight ratio of meat vs. rocks and metal is freakin' absurd. We don't know how to build a robot that's as strong and damage-resistant as a human body and weighs only as much as one. Similarly we don't know how to build something as energy-efficient as a human brain that thinks anywhere near as well. Artificial superintelligence may well be a thing in the coming decades, but it will be profoundly energy-greedy; I fear the first thing it will resolve to do is secure fuel for itself by stealing our energy supplies like out of Superman III.
Here's NVIDIA's blog post on this:
NVIDIA Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI
https://blogs.nvidia.com/blog/jetson-thor-physical-ai-edge/
What are the variables that prefer local GPUs vs cloud inference? Is connectivity the dividing line or are there other variables that influence the choice?
Anduril submersibles probably need local processing, but does my laundry/dishes robot need local processing? Or machines in factories? Or delivery drones?
Any sort of continuous video processing, especially low-latency.
Imagine you were tracking items on video at a self-service checkout. Sure, you could compress the video down to 15 Mbps or so and send it to the cloud. But now, a store with 20 self-checkouts needs 300 Mbps of upload bandwidth. That's one more problem making it harder for Wal-Mart to buy and roll out your product.
Also, if you know you need an NVIDIA L4 dedicated to you 24/7 for a year, a g6.xlarge will cost $7,000/year on-demand or $4,300/year reserved [1] while you can buy the card for $2,500.
Of course, for many other use cases the cloud is a fine choice: if you only need a fraction of a GPU, if you only need a monster GPU a tiny fraction of the time, or if you need an enormous LLM that demands water cooling and easily tolerates latency.
[1] https://instances.vantage.sh/aws/ec2/g6.xlarge?currency=USD&...
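Spelling out the arithmetic above (the stream rate, checkout count, and prices are the figures quoted in that comment):

    # Bandwidth: per-store upload if every self-checkout streams video to the cloud
    stream_mbps, checkouts = 15, 20
    print(f"upload needed: {stream_mbps * checkouts} Mbps per store")        # 300 Mbps

    # Cost: renting a dedicated L4 (g6.xlarge) vs buying the card outright
    on_demand_per_year, reserved_per_year, card_price = 7000, 4300, 2500
    print(f"card pays for itself in ~{card_price / reserved_per_year:.1f} years at reserved pricing")
    print(f"or ~{card_price / on_demand_per_year:.1f} years at on-demand pricing")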
Anything latency sensitive. Anything bandwidth constrained.
Simple example: security cameras that only use bandwidth when they've detected something. The cost of live-streaming 20 cameras over 5G is very high. The cost of sending text messages with still images when you see a person is reasonable.
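A minimal sketch of that pattern, running detection locally and only pushing a still over the network when something fires; the detector, alert sink, and camera index are placeholders, not a real deployment:

    # Edge pattern: detect locally, only upload when something fires (placeholders throughout)
    import cv2  # pip install opencv-python

    def person_detected(frame) -> bool:
        # placeholder: in practice a TensorRT / YOLO-style detector running on the Jetson
        return False

    def send_alert(jpeg_bytes: bytes) -> None:
        # placeholder: SMS/MMS gateway, MQTT publish, webhook, etc.
        pass

    cap = cv2.VideoCapture(0)                  # assumed camera index
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if person_detected(frame):
            ok, jpeg = cv2.imencode(".jpg", frame)
            if ok:
                send_alert(jpeg.tobytes())     # a few hundred KB, instead of a constant 5G video stream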
Why the hell would a dishwasher need to be connected, or smart for that matter?
I just want clean dishes/clothes, not to be upsold into some stupid shit that fails when it can’t ping google.com or gets bricked when the company closes.
I would pay a premium for certified mindless products.
Anecdotally, I don't have any direct physical evidence or written evidence to support this. But I talked to someone in the industry over a decade ago when "run it on a GPU" was just heating up. It's drones. Not DJI ones, military ones with surveillance gear and weapons.
Mining, remote construction, remote power station inspection, battlefields. There are many, many places where a stable network connection can't be taken for granted.
If I had to guess there is significant interest in this product from a certain Eastern European nation. I don’t think they are intending to use it for “robotics” though.
I want local processing for my local data. That includes my photos, documents and surveillance camera feeds.
It depends on whether the plates were expensive.
The GMKtec EVO-X2 AI Mini PC (AMD Ryzen AI Max+ 395) seems pretty similar and only $2000.
Would be interested to see head to head benchmarks including power usage between those mini PCs and the Nvidia Thor.
Wow: notably a more advanced CPU than DGX GB200! 14 Neoverse V3AE cores, whereas Grace Hopper is 72x Neoverse V2. Comparing versus big GB100: 2560/96 CUDA/Tensor cores here vs big Blackwell's 18432/576 cores.
> Compared to NVIDIA Jetson AGX Orin, it provides up to 7.5x higher AI compute and 3.5x better energy efficiency.
I could really use a table of all the various options Nvidia has! Jetson AGX Orin (2023) seems to start at ~$1700 for a 32GB system, with 204GB/s bandwidth, 1792 Ampere, 56 Tensor, & 8 A78AE ARM Cores, 200 TOPS "AI Performance", 15-45W. Slightly bigger model of 2048/64/12 cores/275 TOPS, 15-60W available. https://en.wikipedia.org/wiki/Nvidia_Jetson#Performance
Now the Jetson T5000 is 2070 TFLOPS (but FP4, sparse! Still roughly double-ish). 2560-core Blackwell GPU, 96 Tensor cores, 14 Neoverse V3AE CPU cores. 273GB/s, 128GB. 4x25GbE is a neat new addition. 40-130W. There's also a lower-spec T4000.
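A quick sanity check on NVIDIA's "7.5x compute, 3.5x efficiency" claim using the figures in this thread (sparse INT8 TOPS vs sparse FP4 TFLOPS isn't strictly apples-to-apples, so treat it as rough; the ~$1,700 and $3,499 prices are the ones mentioned elsewhere here):

    # Rough ratio check: Jetson AGX Orin 64GB vs Jetson Thor T5000, using figures quoted in this thread
    orin_tops, orin_max_w, orin_price = 275, 60, 1700    # sparse INT8 TOPS, max W, approx USD
    thor_tops, thor_max_w, thor_price = 2070, 130, 3499  # sparse FP4 TFLOPS, max W, USD

    print(f"compute ratio: {thor_tops / orin_tops:.1f}x")                                # ~7.5x
    print(f"perf/W ratio:  {(thor_tops / thor_max_w) / (orin_tops / orin_max_w):.1f}x")  # ~3.5x
    print(f"price ratio:   {thor_price / orin_price:.1f}x")                              # ~2.1x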
Seems like a pretty in-line leap at 2x the price!
Looks like a physically pretty big unit. Big enough to make me scratch my head during the intro video of robots opening up the package and wonder: where are they going to fit their new brain? But man, the breakdown diagram: it's (unsurprisingly) half heatsink.
> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence
> The Jetson Thor chips are equipped with 128GB of memory, which is essential for big AI models.
Just put it into a robot and run some unhinged model on it, that should be fun.
The models that run on robots do things like "where is the road" or "is this package damaged"; people will run LLMs on this thing, but that's not its primary bread and butter.
The future of advanced robotics likely requires LLM-scale models. With more bias towards vision and locomotion than the usual LLM, of course.
> CEO Jensen Huang has said robotics is the company’s largest growth opportunity outside of artificial intelligence
Does "robotics outside of AI" imply they want to get into making actual robots (beyond the GPU "brains")?
There's already this hilarious bot. It's able to use people's outfits to woo them, or insult them. It's pretty good!
https://www.instagram.com/rizzbot_official/
AMD should jump on this immediately.
Edge compute has not yet been won. There is no ecosystem for CUDA for it yet.
Someone else but Nvidia please pay attention to this market.
Robots can't deal with the latency of calls back to the data center. Vision, navigation, 6DOF, articulation all must happen in real time.
This will absolutely be a huge market in time. Robots, autonomous cars, any sort of real time, on-prem, hardware type application.
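To put rough numbers on the latency point (the control rate and round-trip time here are assumptions for illustration, not measurements):

    # Why a cloud round trip doesn't fit a tight control loop
    control_rate_hz = 100                # assumed rate for balance/articulation loops
    budget_ms = 1000 / control_rate_hz   # 10 ms per tick
    cloud_rtt_ms = 40                    # assumed round trip to a nearby cloud region, before any inference
    print(f"loop budget: {budget_ms:.0f} ms/tick vs ~{cloud_rtt_ms} ms of network RTT alone")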
This sounds very similar to the DGX Spark, which still hasn't shipped AFAIK.
Orin was pretty expensive at $2,000; now Thor is significantly more.
Thor is a pretty big jump in power and the current prices are a bargain compared to what else is out there if you need the capabilities. I wish there was a competitive alternative, because Nvidia is horrible to work with.
And now everyone's $2000 Orins will be stuck forever on Ubuntu 24.04 just like the Xaviers were abandoned on 20.04 and the TX1/2 on 18.04.
Nothing like explaining to your ML engineers that they can only use Python 3.6 on an EOL operating system because you deployed a bunch of hardware shortly before the vendor released a new shiny thing and abruptly lost interest in supporting everything that came before.
And yes, TX2 was launched in 2017, but Nvidia continued shipping them until the end of 2024, so it's absurd they never got updated software: https://forums.developer.nvidia.com/t/jetson-tx2-lifecycle-e...
Same experience here, plus serial port drivers that don't work, bootloader bugs causing bricked machines in the field. This on a platform nearly a decade old! The hardware is great but the software quality is abysmal, when compared to other industrial SoC manufacturers.
I think what's most galling about it is that Nvidia gets away with behaving like this because even a decade later they're still basically the only game in town if you want a low power embedded GPU solution for edge AI stuff.
AMD has managed to blunder multiple opportunities to launch something into this space and earn the trust of developers. And no, NUC form factor APU machines are not the answer, both because of power/heat concerns and because the software integration story is an incomplete patchwork.
Ahhhh I see there's someone else who has experienced the serial port driver bugs :). I was responsible for helping them figure out and fix the one related to DMA buffers but still encounter the "sometimes it just stops sending data" one often enough.
128 GB for $3,499 doesn't sound bad at all.
Can these be used for local inference on large models? I'm assuming the 128G of memory is like system memory, not like GPU VRAM.
It has a unified memory architecture, so the 128G is shared directly between CPU and GPU. Though it's slower than dGPU VRAM.
Yes, but it is substantially cheaper and usually faster to buy a Jetson Orin chip or build an x86 homelab.
Has anyone deployed Jetson or similar in production? What's the BOM like at scale?
AGX Thor + TensorRT + SDXL-Turbo (or SD 1.5 LCM) + ControlNet (depth/canny) + ROS 2 + Isaac ROS + CUDA zero-copy camera feeds = fun!!
Or "VR Chat + Mostly human art and control but digital representation" ends up being more fun, engaging, cheap and/or humanizing. Wonder that the best accurate full-body tracking + VR headset one could put together today for that? Feels like it could be cheaper than just the "hardware brain" part of that.
You don’t need Thor nor ROS for this, but it can certainly help.
The Strix 395+ or whatever it's called is $2k with 128GB, but I think with less performance.
What exactly does this chip do?
It has ARM CPU cores and an Nvidia GPU so it can do whatever you want but it's optimized for AI video analysis. Great for factory robots or self-driving cars.
So this is the thing that's going to go into the next generation of Russian and Ukrainian drones.
Looks very similar to the DGX Spark.
It is, some small differences. Also rumoured this will become a laptop chipset in future, although probably at lower power.
Can it run doom? Can it make doom come true?
The shovel seller's new shovels.
No, at least the Muskian end of the MAGA spectrum is very much into humanoid robots. I'm afraid it's because they imagine themselves at the head of a giant robot slave army.
Maybe more practically they see robots taking over the jobs that immigrants now do in America.
What makes you think MAGA hates those things? Texas and Florida are right behind California for adoption of EVs and green energy.
Trump administration halts work on an almost-finished wind farm: https://www.npr.org/2025/08/23/nx-s1-5513919/trump-stops-off...
EV subsidies ending in a month or two: https://www.kiplinger.com/taxes/ev-tax-credit
Tons of examples
Trump is using environmental law to halt green energy projects. (Not just cutting unnecessary subsidies, declaring projects that are already under construction illegal.)
Are there massive subsidies for humanoid robots?
I don't think subsidies are the issue. Farmers get billions in subsidies every year, and they don't get that much hate from the MAGA world.
The cost needs to be reduced by 90% to be viable.
Serious question, what comparable hardware can you buy for 10% of the cost?
I meant that for a dev kit it's fine, but it's not viable for anything beyond that. It shouldn't cost 100x an RPi if you're going to use it as part of a robot.
Presumably prices will come down as this market segment matures; it's not unreasonable to assume performance will double and the price will halve within a decade. A $2000 brain in a $20,000 robot is 10% of the total cost, and at that price point it's not prohibitively expensive for the market they're catering to. The Unitree G1 can allegedly be had for as little as $16,000 USD, but capable models can be north of $40,000.
If you're buying a durable good like a warehouse robot or household chores robot that costs as much as a car this doesn't seem like that high of a starting point for the market segment to me.
Pretty much everyone in my part of the industry is either working with thor-family chips already or actively investigating whether they should switch to them, with very few exceptions. The pricing seems completely viable based on that alone.
Anyone who can use an RPi (or one of many other SoCs in that class) should absolutely consider them, but that's not the market this is competing in. RPis are more comparable to the Jetson Nano line, which had sub-$100 dev kits. Slightly above that are the Orin-based Tegras like the SoC in the Switch 2, which are still clearly viable.
If I were Jensen Huang, the first thing I'd do...well, the _first_ thing I'd do is ditch the silly leather jacket and dress like an adult. But the first thing I'd do with Nvidia is make sure the company's product line is well diversified for the coming AI winter.
> the _first_ thing I'd do is ditch the silly leather jacket and dress like an adult
Given what he has accomplished, he has more than earned the right to wear a leather jacket if he wants to. People didn't complain about Steve Jobs wearing a turtleneck for the same reason. When you have accomplished as much as these guys, then you can dish out fashion advice and maybe someone will listen.
From his biography, the turtleneck was kind of an accident:
> “So I called Issey [Miyake] and asked him to design a vest for Apple,” Jobs recalled. “I came back with some samples and told everyone it would be great if we would all wear these vests. Oh man, did I get booed off the stage. Everybody hated the idea.”
> “So I asked Issey to make me some of his black turtlenecks that I liked, and he made me like a hundred of them.” Jobs noticed my surprise when he told this story, so he gestured to them stacked up in the closet. “That’s what I wear,” he said. “I have enough to last for the rest of my life.”
I wonder if Issey thought the turtlenecks might be the Apple uniform, and that's why he made lots.
First thing I'd do is keep making more expensive shovels until gold miners stop buying them.