That title is most definitely a swipe at OpenAI. It seems there’s a theme of two types of platform companies: the closed one that has the “premium” platform, and the open one that gets market share by commodifying the platform.
macOS vs Windows
iOS vs Android
OpenAI vs Meta AI?
Edit: Another observation is Meta seems to open things defensively. Facebook had a rich developer platform until Facebook stopped needing to grab market share. Meta’s VR platform was closed until Apple challenged them with Vision Pro. Then Meta announced open sourcing Horizon OS. I wonder if Meta will truly keep things open if they win, or if it’s more like a case of EEE?
Meta's VR wasn't really that much more closed, to be honest. They've just dropped the deep review process, and anyone can now publish to the main store what was previously 'App Lab'. They've been heading this way for a long time; a couple of years ago they didn't even have App Lab.
And the OS being open isn't really that relevant unless you're a system integrator. I really doubt this is in response to Apple; they seem to operate in entirely different markets. I love VR, but I could never buy a Vision Pro. It costs twice what I've ever spent on my most expensive car.
Oculus devices and AVP are presently in different categories from a cost perspective, but my understanding is the two companies were chasing the same eventual market. Apple intended (intends?) to release a lower-end device. Arguably the AVP was meant as a dev device anyway, so devs could start building apps. And Meta was working on a higher-end device until they nixed it after AVP sales tanked. Zuck himself went on a PR attack campaign about the superiority of their technology vs Apple's.
> “I finally tried Apple’s Vision Pro,” Zuckerberg says. “And I have to say, before this, I expected that Quest would be the better value for most people since it’s really good and seven times less expensive but after using it, I don’t just think Quest is the better value. I think Quest is the better product period.”
The longer term horizon here seems to be that Reality Labs will release AR glasses that may have more mass appeal, and I would expect Apple will too.
Apple is more about luxury branding than premium offering. I don't think OpenAI is trying to position its services as projecting wealth or style.
Facebook's hardware designs don't really help them differentiate since they don't run a public cloud. If they release the open source hardware and successfully drum up interest, then it will be cheaper for them to procure hardware.
Releasing Llama guards against Google AI dominance. Google and Facebook are longtime competitors and Google controls the OS on most phones Facebook needs to run its ads on.
Open sourcing AI hardware guards against a potentially dominant Nvidia, especially if they team up with one of the cloud companies.
I'm sure they're also worried about OpenAI, but at this point it doesn't look like OpenAI is on Facebook's turf.
Regardless, I meant premium as in better or dominant in the market. It's hard to overcome first mover advantage with a "me too" product, but "commoditizing your complement" seems to be a pretty effective disrupter. I think Meta is worried about Google, OpenAI, Anthropic, and anyone else with gen AI models. It seems to be related to where they're going.
For many, the terms 'luxury' and 'premium' are interchangeable, signifying both high quality and exclusivity. Most Apple users may not be able to list the specific specs of their devices, a testament to the brand's focus on overall user experience rather than technical details. Ask any Apple user if their device is superior to the competition's, and the majority will affirm it is, because it just feels premium.
OCP has been around for _years_, almost since Facebook was a thing.
Facebook the site had a rich platform until they had to secure it against people stealing your data and your mates' data, then pretending they had a really rich dataset on most of the Western world's adult population (i.e. Cambridge Analytica).
Horizon OS is a fucking mess and isn't anything really to do with Apple; it's more to do with making sure Samsung don't use Android for whatever shit they produce.
OpenAI aren't going to compete with Meta, as they aren't in the same game. OpenAI has to make money from its offerings, Meta's AI shit is a byproduct of other things (see massive spending on "reality labs")
Windows and Android took the commodity approach but they always had a business model, while Meta AI is currently running on Underpants Gnomes economics.
1. Spend billions on a product, then give it away for free.
2. ???
3. Profit!
> Then Meta announced open sourcing Horizon OS.
Open sourcing isn't really the right term, they're allowing third party hardware vendors to use it but it's still proprietary. Horizon OS is built on top of Android and they're following the Android playbook where the core is technically open source but the version nearly everyone actually uses has a bunch of proprietary Google (or Meta) software layered on top, and Google (or Meta) dictates the terms of using that software, which lets them ensure that revenue always flows back to Google (or Meta) regardless of who made the hardware.
Meta and Google are the companies most able to monetize AI-generated or AI-enhanced[1] content on their respective properties.
1. Meta showed off automatic audio translation that preserves speakers' voices. Content creators can now expand their following beyond their spoken languages, generating more ad impressions.
>Meta’s business model is about building the best experiences and services for people. To do this, we must ensure that we always have access to the best technology, and that we’re not locking into a competitor’s closed ecosystem where they can restrict what we build.
I guess that would translate to
1. Spend billions on AI, give away open source version
2. Use AI to get people to click on ads in their "best experiences and services for people"
3. $39bn profit in 2023
It's a bit like the real Underpants Gnomes business. Phase one: gnomes collect underpants. Phase two: Parker and Stone sign a $900m deal with Paramount. The money is made by an associated business.
Not exactly "sell", but I do recall Zuckerberg saying that they have a revenue sharing agreement with platforms like AWS Bedrock that offer Llama inference.
Maybe not "sell" but if you look at my other comment on this post you'll see their valuation has increased 162 billion since they released their first open source model. They definitely are looking to profit, they just take a different approach to monetization. Same thing in my book.
Meta knows it can't be the top player in this space; their best play, therefore, is to control the second tier. Ironically, I think they aren't even the best in the second tier - Mistral is.
I've used OpenAI's tech, and I've used their competitors' tech, but I've never used any of Facebook's AI tech yet. Is there something I'm missing out on?
OpenAI does not have its own platform yet but depends on Azure? If so, they probably should build their own platform to squeeze out as much cost as possible.
macOS isn't a premium product over Windows nor is iOS over Android. Windows isn't really an open platform either. OpenAI is more open as it doesn't require logging in.
A simple narrative doesn't exist connecting these three product battles. The only thing that comes to mind is macOS, iOS and OpenAI are designed for the novice. You could probably add Coinbase vs Binance or Uber vs Lyft or Facebook vs Google Plus and be able to keep that narrative.
Microsoft was anti open source until the world changed. Windows was never open source or source available; it's ironic that they own GitHub. It's more surprising that macOS is source available.
Apple is undeniably a premium product for consumers, and its OSes get the halo effect from that. Whether they're "premium" to develop for or even to use is an entirely different question.
I'm kind of split about this. Yes, Facebook has done a lot of great Open Source in the past, and I'm sure they'll do more great Open Source in the future.
But it's really hard to see them in a positive light when they keep misleading people about Llama: they publish blog posts about how important Open Source is, yet refuse to actually release Llama as Open Source, refuse to elaborate on why they see it as Open Source when no one else does, and refuse to take a step back and understand how the FOSS community feels when they actively mislead people like this.
What a lot of people complain about with Llama is the fact that the weights are open but not the training data and training code. That feels like a red herring to me—code is data and data is code, and we shouldn't require someone to be developing entirely in the open in order for the output to be open source.
The weights are the "preferred form of the work for making modifications to it", to quote the GPL. The rest is just the infrastructure used to produce the work.
Where "open source" is misleading with Llama is that it's restricted to companies under a certain size and has restrictions on what you can and can't do with it. That kind of restriction undermines the freedoms promised by the phrase "open source", and it's concerning to me that people have gotten so fixated on weights vs data when there's a big gap in the freedoms offered on the weights.
> The weights are the "preferred form of the work for making modifications to it", to quote the GPL. The rest is just the infrastructure used to produce the work.
I disagree with this. If you want to actually have a fundamental impact on the model, the real work and innovation go into how the training is done and the architecture of the model. That's the "source", and what Meta et al. are currently trying to protect and keep private.
Guess why they're so adamant at keeping the training code secret?
To provide an example, the recent Llama3.2 release includes a clause in their acceptable use policy that says any individual or business located in EU has no rights to use their multimodal models. This is discrimination against persons/groups which violates most open source definitions. They even went as far as getting Huggingface to implement georestrictions so that EU users would get an error message and be unable to download the weights.
From my understanding this comes from a feud with EU privacy law because Facebook wants to train models on EU users data, but GDPR makes that complicated (see their letter at euneedsai.com). So they made this license change to "punish" the EU.
Not to mention that they discriminate against fields of endeavor, and also require anyone that uses Llama in any shape or form to display "Built with Llama" prominently.
As far as I know, no one does that (https://ollama.com as an example of a platform that breaks Llama's terms and conditions) and Meta isn't enforcing it. But who knows, they might do so in the future.
Not sure how any sane person can claim Llama is Open Source after realizing these things.
Zuckerberg does what’s good for Zuckerberg. We should all have zero reservations about what kind of person this guy is at this point. If open source is beneficial he’ll do that, but when it stops being beneficial you can count on him to do what’s best for himself at the expense of the general public.
Zuckerberg is just following the Bezos strategy of someone else’s margin being his opportunity. This open source move is predatory.
I personally think our academia should be training and curating these kinds of models and the data they are based on, but this is an acceptable second best.
Spending $100 million one time on a GPT4-level model that is open-source would help with a lot of that research. Especially after all the 3rd party groups fine-tuned it or layered their tools on it.
I think the Copilot-equivalent tools alone would make it quickly pay itself off in productivity gains. Research summaries, PDF extraction, and OCR would add more to that.
Basically no one in the entire world was willing to spend the kind of money on massive compute and data centers that Meta did spend, is spending, and will spend. The actual numbers are (I think) rare to find and so large that they're hard to comprehend.
VR also gets a lot of hate, and they definitely dropped the ball on user-facing software, but Meta is doing very substantial, deep and valuable long-term R&D on VR hardware. They are also doing a lot on systems software, with their OS and all the low-key AI that enables excellent real-time tracking, rendering and AR.
It might not all be open-source, and they are doing it with an expectation of long-term profit, but they are earnestly pushing the horizons (pun intended) of the field and taking-on lots of risk for everyone else.
It's undeniable now that they are a serious and innovative engineering organization, while Google is rapidly losing that reputation.
Most new products fail at Meta because they become a "priority": thousands of engineers get thrown at the problem, and the effort gets bogged down in managing a massive oversubscription of engineers relative to useful work.
Threads happened because a few people managed to convince each other to take a risk and build an Instagram/Mastodon chimera. They managed to show enough progress to continue without getting fucked in the performance review, but not enough for an exec to get excited about building an empire around it.
> "This effort pushed our infrastructure to operate across more than 16,000 NVIDIA H100 GPUs, making Llama 3.1 405B the first model in the Llama series to be trained at such a massive scale."
So at $20k a pop (assuming Meta has a decent wholesale price from Nvidia), they spent $320 MILLION on the 405B model (not including probably $5-10 million in electricity for the training process, water, staff, and infra).
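A quick sanity check of that hardware figure, using the assumed $20k unit price (a guess about Meta's wholesale deal, not a published number):

```python
# Back-of-envelope for the parent's GPU purchase estimate.
# Both inputs are assumptions: ~$20k/H100 at wholesale, 16,000 GPUs.
gpus = 16_000
price_per_gpu = 20_000  # assumed wholesale USD per H100

hardware_cost = gpus * price_per_gpu
print(f"${hardware_cost:,}")  # $320,000,000
```

That's the purchase price of the fleet, though, not the cost attributable to one training run (see the rental-rate objection further down the thread).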
Do we think that brings more than 400+ million in value to Meta? I think so. I don't want to do the math, so I'll ask Perplexity to look it up:
> "How much has Meta's valuation increased since they released their first open source model"
Answer (edited):
> Closing price on February 23, 2023: $509.50
> Closing price on October 11, 2024: $573.68
> The increase in stock price is $64.18 per share.
> Total increase = Price increase per share × Number of outstanding shares
> Total increase = $64.18 × 2,534,000,000 = $162,632,100,000
> Meta's stock valuation has increased by approximately $162.63 billion since the release of their first open source model on February 24, 2023.
> Do we think that brings more than 400+ million in value to Meta?
Tough to tell, given nobody is turning a net profit on LLMs yet.
Companies have a tendency to develop neuroses, though, just like people. Apple's near miss with bankruptcy fuelled cash hoarding. For Facebook, their disastrous IPO and near miss of mobile seems to have made them hyper aware of the Innovator's Dilemma. $400mm spent on a defensive move is certainly wiser than tens of billions on the metaverse.
Correct. We know these models are producing fucktonnes of revenue. At least some of them can be run at a gross profit, i.e. where marginal power costs and capital costs are less than marginal revenues. (Put another way: if OpenAI were an absolute monopoly and stopped training new models, could it turn a profit?)
What’s unclear is if this is a short-term revenue benefit from people fucking around with the newest, shiniest model, or recurring revenue that is only appearing unprofitable because the frontier is advancing so quickly.
From the little we know about OpenAI's inference infra, I feel like I can confidently say that if training stopped today and they were cut off from Azure subsidies, their $20.00 subscription model would probably not cover the cost of inference.
I know nothing about the enterprise side of OpenAI but I'm sure they're profitable there. I doubt the subscription cost of a single power user of ChatGPT Plus covers the water they consume as a single user (This is probably an exaggeration, but I think I'm in the ballpark).
It may be that extra-large LLMs don’t make sense for ChatGPT. They’re for enterprise use, like supercomputers. The reason I say “at least some” is I’ve found use in running a local instance of Llama, which seems to imply there is some niche of (legal) activities AI can support sustainably. (Versus, e.g. crypto.)
> Tough to tell, given nobody is turning a net profit on LLMs yet.
I suspect in the case of Meta and other big players, profit isn't necessarily required to bring substantial value. Imagine their model being able to help them moderate more fairly and accurately. This alone could prevent potential legal actions from individuals, companies, and governments.
> profit isn't necessarily required to bring substantial value
They’re private companies. If they can’t tie it to profit, it’s not adding value.
> being able to help them moderate more fairly and accurately
This reduces legal costs and increases strategic flexibility. Sort of like HR or legal departments: cost centres add value by controlling costs, a critical component of profitability.
> They’re private companies. If they can’t tie it to profit, it’s not adding value.
Not true, even strictly from an accounting perspective. If you spend $400M and build an asset that is worth >$400M, then you have increased the value of the company without modifying profit. For example, if a company were to buy land, and build a building, that building has a value regardless of if it is associated with any revenue.
> * If you spend $400M and build an asset that is worth >$400M, then you have increased the value of the company without modifying profit. For example, if a company were to buy land, and build a building, that building has a value regardless of if it is associated with any revenue*
Why does it have value? It’s because it can be rented, occupied for productive use or sold to someone who can do either of those. Revenue-free assets are essentially money. (Companies aren’t in the business of non-revenue non-monetary assets—that’s the domain of society at large.)
> Providing toilets to employees does not tie to profit. Toilets in all offices are now closed. We are saving over $10k per day
Right. How do those companies tend to wind up?
I didn't say short-term profits. I said that, ultimately, the value of non-monetary assets is tied to profitability. Particularly financial assets, e.g. C corporations. That doesn't mean that's the only measure of value. But for a company it's damn close.
Put another way: when a for-profit company starts arguing that profits don't matter, it's a little bit curious.
> They’re private companies. If they can’t tie it to profit, it’s not adding value.
Smart companies understand other types of value exist.
If a democratic population hates you, it is harder to convince politicians to do your bidding. (Not impossible, just harder!)
If potential employees don't think kindly of you, it is harder to recruit.
Llama is a constant source of good PR for Meta in the developer community. Compared to just a couple of years ago when they were mostly laughed at by devs for metaverse stuff. Now it is "holy cow Zuck is standing up to Microsoft and Amazon and democratizing AI!"
With Llama, Meta has got great PR, and also developed cutting edge tech.
They also get to benefit from thousands of developers trying to make Meta's models run more efficiently.
Sure. But they intermediate to profit. I'm not suggesting leadership should be justifying everything in those terms. But if an entire product line doesn't have a solid long-term net revenue generating or cost saving consequence, it's a flag for governance.
> potential employees don't think kindly of you, it is harder to recruit
And you get higher churn. Cost imperative. Not the sole reason--you have to work with the people, after all. But that's an agent benefit. Companies treat high-value employees well because it makes sense to, and they'd probably cease to exist if they stopped doing it.
Do you think that the 16k GPUs get used once and then are thrown away? Llama 405B was trained over 56 days on the 16k GPUs; if I round that up to 60 days and assume the current mainstream hourly rate of $2/H100/hour from the Neoclouds (which are obviously making margin), that comes out to a total cost of ~$47M. Obviously Meta is training a lot of models using their GPU equipment, and would expect it to be in service for at least 3 years, and their cost is obviously less than what the public pricing on clouds is.
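The rental-rate framing above can be sketched the same way; the $2/H100/hour rate and the 60-day round-up are the parent's assumptions:

```python
# Pricing the Llama 405B training run at public cloud rental rates
# instead of as an outright hardware purchase.
gpus = 16_000
days = 60          # 56 days of training, rounded up per the parent
rate_per_hour = 2.0  # assumed $/H100/hour from "Neocloud" providers

rental_cost = gpus * days * 24 * rate_per_hour
print(f"${rental_cost:,.0f}")  # $46,080,000, i.e. the ~$47M figure above
```

Roughly $46M at retail rates, an order of magnitude below the $320M purchase figure, and Meta's internal cost per run is presumably lower still since the fleet is reused across models for years.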
> I don't want to do the math, so I'll ask Perplexity to look it up
These numbers are totally wrong, and it takes about 30 seconds to look it up. It closed at 589.95 on 2024-10-11 and 172.04 on 2023-02-23. The other numbers appear to be wrong too.
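Rerunning the parent comment's arithmetic with these corrected closing prices (keeping its ~2.534B share count, which I haven't independently verified) gives a much larger figure:

```python
# Corrected version of the valuation-increase calculation upthread.
# Share count is carried over from the earlier comment, unverified.
price_2023_02_23 = 172.04
price_2024_10_11 = 589.95
shares_outstanding = 2_534_000_000  # assumed, from the parent thread

increase = (price_2024_10_11 - price_2023_02_23) * shares_outstanding
print(f"${increase / 1e9:,.0f}B")
```

That works out to roughly $1.06 trillion of market-cap increase over the period, though, as the sibling comment notes, attributing any of that to Llama rather than the broader market rally is dubious.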
SPY has gone from 404 to 579 in that timeframe - so meta lost value and performance due to their choices? Or you're using a terrible metric to judge things by.
I wonder how this math works in light of Moore's law? E.g say it cost meta $320 million to train this model this year. How much does it cost to train that model next year or the year after instead? Is it significantly cheaper? Are the returns on investment the same? Makes me think there is a business case in watching someone spend a pile of money to train model X, waiting to see if there's interest in the market in this model X, then spend a comparatively smaller pile of money on the same model X yourself taking advantage of lower future costs of compute and undercut the original company hand over fist.
Meta seems to - in my view correctly - understand that the risk to Facebook and other Meta properties is someone else walking away with great success, capturing the market for generative AI, and Meta being left out.
They don't need to make money or increase their value; they need to ward off existential risk. The best way to ensure they aren't locked out of the future is to keep the future open. That's the only hope they have of maintaining the locks they already have on much of the information technology world.
One of the things driving this crazy spend on AI infrastructure by big tech is excess capital.
Big tech is getting increasing scrutiny by regulators, etc for monopolistic/anticompetitive practices/etc[0]. From S&P:
"Regulatory scrutiny is likely to translate into fewer mega deals, however. Adobe Inc. recently abandoned its $20 billion acquisition of Figma Inc. in the face of ongoing opposition from the UK's Competition and Markets Authority and the EU's European Commission. While Microsoft Corp. was able to close its $68.7 billion Activision Blizzard buy, it took nearly two years to complete the deal amid an intense fight with the regulators.
The harsh regulatory environment will continue to have a chilling effect on large tech M&A, but strategic buyers are likely to focus on smaller tuck-ins that do not tempt regulators to intervene."
So instead of growing further by spend on acquiring/rolling out new platforms to draw further scrutiny, they're shoveling piles of cash towards other capital expenses (with accounting tricks) that don't get further attention/scrutiny from regulators.
Meta doesn't use this infrastructure just for LLaMA - they get to use these hundreds of millions of dollars of spend to extract more value (deeper, not wider) from the users and platforms they already have.
Case in point: AI generated ads[1]. There's a not-too-distant future where the ads displayed on Facebook are an AI generated video of you in a new Toyota (or whatever). In terms of even open models Meta is much more than LLaMA. They have done a lot with speech (seamless), vision (segment anything), and much more.
That model training, inference, etc is running on these Nvidia GPUs as well.
There is also a take from Reid Hoffman saying that buying 100,000 GPUs or whatever is basically "table stakes" at this point[2]. Boards, analysts, the market, etc says "What I know for sure is this AI stuff needs Nvidia GPUs. Elon has 100k. How many do we have?".
So it's also a Cold War arms race/missile gap scenario. Plus, don't discount ego/"male anatomy measuring contest" that goes on with these guys. Consider this story where a bunch of centi-billionaires got together at Nobu to fight over Nvidia GPU shipments[3].
Likely not tense as it seems pretty clear where everyone stands on this. OpenAI and Anthropic are likely having conversations along the lines of 'move faster and build the moat!' while Meta is having a conversation along the lines of 'move faster and destroy the moat!'
It's not that Meta has anything against moats, it's just that they've seen how it works when they try to build their moat on top of someone else's (i.e. Apple and Google re: mobile) and it didn't work out too well.
Also the EU is far more aggressive and opinionated about what they want to see in the market than during the smartphone era. And this is spreading across the world.
So being committed to an open, standards-based strategy is an easy way for them to avoid most of the risks.
I wonder when Meta, Microsoft and OpenAI will partner on an open chip design to compete with NVIDIA.
They’re all blowing billions of dollars on NVIDIA hardware with like 70% margin, and with Triton backing PyTorch it shouldn't be that hard to move off of the CUDA stack.
It would require a fairly big bet on AI models not changing structure all that much, I guess
I think if the tooling and supply chain falls into place it would surprise me if meta and friends didn't make their own chips, assuming it was a good fit of course.
Note: AMD's missed opportunity here is so bad that people jump to "make their own chip" rather than "buy AMD". Although watch that space.
What is a large amount of money to you is not that significant to some of these companies. I suspect for the vast majority of them, it still represents a small expense. The general sentiment is that there is probably overspending in the area, but it's better to spend it than risk being left behind.
Half of NVIDIA's 2nd quarter revenue (30 billion) came from 4 customers, with Microsoft and Meta already having spent 40-60 billion each on GPU data centers (of which most goes to NVIDIA). "Open"AI just raised a few billion and is supposedly planning on building their own training clusters soon.
For a small fraction of that they could poach a ton of people from NVIDIA and publish a new open chip spec that anyone could manufacture.
That again underestimates the challenges in that undertaking. After all, these costs are still drops in the bucket. Why distract yourself from your business to go and build chips?
They all use SFDC; should they go and create an open source sales platform?
That's exactly what they did with their server design.
I'm saying come up with an open standard for tensor processing chips, with open drivers and core compute libraries, then let hardware vendors innovate and compete to drive down the price.
Each of them is designing their own hardware. The goal isn't really to compete with nvidia though, whose market is general purpose GPU compute. Instead they're customizing hardware for inference to drive down product cost.
And to power all those fused-multiply-add circuits, Meta will likely soon be involved in the construction or restart or life-extension of nuclear fission reactors, just like Microsoft and Google and Amazon (and Oracle, allegedly).
Quoting Yann LeCun (Vice-President, Chief AI Scientist at Meta):
> AI datacenters will be built next to energy production sites that can produce gigawatt-scale, low-cost, low-emission electricity continuously. Basically, next to nuclear power plants. The advantage is that there is no need for expensive and wasteful long-distance distribution infrastructure.
> Note: Yes, solar and wind are nice and all, but they require lots of land and massive-scale energy storage systems for when there is too little sun and/or wind. Neither simple nor cheap.
The article talks about Nvidia racks, but also about their DSF [0], which is Ethernet-based (as opposed to InfiniBand), built on switches using Cisco and Broadcom chipsets, plus custom xPU accelerator ASICs such as their own MTIA [1], which is built for them by Broadcom. So they are taking more than one approach simultaneously.
Sort of. While yes, this uses Nvidia, one of Nvidia's moats or big advantages is its rack-scale integration. AMD and other providers just can't scale up easily; they are behind in terms of connecting tons of GPUs together effectively. So doing this part themselves instead of buying Nvidia's much-hyped (deservedly) NVL72 solution, a nonpareil rack system with 72 GPUs in it, and then open-sourcing it, opens the door to possibly integrating AMD GPUs in the future, and that hurts Nvidia's moat.
Having used and learned from the architectures that Facebook has published via OCP over the past decade, this is not a "meme" but actual real information and designs that Facebook uses to commoditize their supply chain and that others can use too.
I've not noticed a change in the amount of usage of "open" or any mental shortcuts about it since it was introduced via the term "open source" in the 1990s. If anything, there's less discussion and abuse of the term now. The only memery that I see is from OpenAI, which had aspirations to openness in the past but which today is a sham.
Facebook's open efforts here are completely great, IMHO, and have benefited me personally and professionally.
> Apple is more about luxury branding than premium offering. I don't think OpenAI is trying to position its services as projecting wealth or style.
What do you think OpenAI hired Jony Ive to do? ;)
https://www.theverge.com/2024/9/21/24250867/jony-ive-confirm...
Regardless, I meant premium as in better or dominant in the market. It's hard to overcome first mover advantage with a "me too" product, but "commoditizing your complement" seems to be a pretty effective disrupter. I think Meta is worried about Google, OpenAI, Anthropic, and anyone else with gen AI models. It seems to be related to where they're going.
For many, the terms 'luxury' and 'premium' are interchangeable, signifying both high quality and exclusivity. Most Apple users may not be able to list specific specs of their devices — a testament to the brand's focus on overall user experience rather than technical details. Ask any Apple user if their device is superior to that of the competition, and the majority will affirm that it is, because it just feels premium.
Just as most Mercedes owners don’t really care how many horsepower the car has. It just has “enough”.
Great analogy.
OCP has been about for _years_. Almost since facebook was a thing.
Facebook the site had a rich platform until they had to secure it against people stealing your data and your mates' data, then pretending they had a really rich dataset on most of the western world's adult population (i.e. Cambridge Analytica).
Horizon OS is a fucking mess and isn't really anything to do with Apple; it's more to do with making sure Samsung doesn't use Android for whatever shit they produce.
OpenAI aren't going to compete with Meta, as they aren't in the same game. OpenAI has to make money from its offerings, Meta's AI shit is a byproduct of other things (see massive spending on "reality labs")
Windows and Android took the commodity approach but they always had a business model, while Meta AI is currently running on Underpants Gnomes economics.
1. Spend billions on a product, then give it away for free.
2. ???
3. Profit!
> Then Meta announced open sourcing Horizon OS.
Open sourcing isn't really the right term, they're allowing third party hardware vendors to use it but it's still proprietary. Horizon OS is built on top of Android and they're following the Android playbook where the core is technically open source but the version nearly everyone actually uses has a bunch of proprietary Google (or Meta) software layered on top, and Google (or Meta) dictates the terms of using that software, which lets them ensure that revenue always flows back to Google (or Meta) regardless of who made the hardware.
Meta and Google are the companies most able to monetize AI-generated or AI-enhanced[1] content on their respective properties.
1. Meta showed off automatic audio translation that preserves speaker's voices. Content creators can now expand their following beyond their spoken languages, generating more ad impressions.
To quote Zuck:
>Why Open Source AI Is Good for Meta
>Meta’s business model is about building the best experiences and services for people. To do this, we must ensure that we always have access to the best technology, and that we’re not locking into a competitor’s closed ecosystem where they can restrict what we build.
I guess that would translate to
1. Spend billions on AI, give away open source version
2. Use AI to get people to click on ads in their "best experiences and services for people"
3. $39bn profit in 2023
It's a bit like the real Underpants Gnomes business. Phase one: gnomes collect underpants. Phase two: Parker and Stone sign a $900m deal with Paramount. The money is made by an associated business.
Don't they charge cloud vendors who sell Llama models on their platform? My understanding is that is part of the licensing agreement. It's more like:
1. Spend billions on a product.
2. Make it free to work with and charge to commercialize it.
3. Profit
They use the models they opensource in their own products, no? I don’t think Meta is looking to profit selling LLMs…
Not exactly "sell", but I do recall Zuckerberg saying that they have a revenue sharing agreement with platforms like AWS Bedrock that offer Llama inference.
> "I don’t think Meta is looking to profit selling LLMs…"
Said no one ever.
OP said it… I also don’t think meta is looking to sell LLMs.
Probably LLM backed products. Likely marketing tools if anything.
But not the same way OAI sells LLMs
Maybe not "sell" but if you look at my other comment on this post you'll see their valuation has increased 162 billion since they released their first open source model. They definitely are looking to profit, they just take a different approach to monetization. Same thing in my book.
So you’re saying Meta stock grew this year because they released an open source LLM?
It's probably part of it oddly enough as investors want to stick money in stocks that can be classified as 'AI'.
Excellent move to juice the stock price but it’s not profit. I think it’s more a hope for profit one day, a ticket to the game.
> They definitely are looking to profit
Since when? Lol
Meta knows it can't be the top player in this space; their best play, therefore, is to control the second tier. Ironically I think they aren't even the best in the second tier - Mistral is.
I've used OpenAI's tech, and I've used their competitors' tech, but I've never used any of Facebook's AI tech yet. Is there something I'm missing out on?
Meta = LLaMA
I think the bet is that many other things will use their llama that you then use. Perhaps without you even knowing this.
Ever heard of pytorch?
Never used it myself, but that's a good call out and precisely why I asked.
Running a (405B) flagship model locally on a Mac Studio? Otherwise, no. Most of these products are fairly similar.
OpenAI does not have its own platform yet but depends on Azure? If so, they probably should build their own platform to squeeze out as much cost as possible.
They have their own platform - a platform to build AI tools on top of their API. The hardware layer below is abstracted away.
> That title is most definitely a swipe at OpenAI.
Eh, 'OpenAI' is a swipe at OpenAI.
macOS isn't a premium product over Windows nor is iOS over Android. Windows isn't really an open platform either. OpenAI is more open as it doesn't require logging in.
A simple narrative doesn't exist connecting these three product battles. The only thing that comes to mind is macOS, iOS and OpenAI are designed for the novice. You could probably add Coinbase vs Binance or Uber vs Lyft or Facebook vs Google Plus and be able to keep that narrative.
Ironically macOS is more "open" (as in source available) than Windows.
https://opensource.apple.com/releases/
Microsoft was anti open source until the world changed. Windows was never open source or source available. It's ironic they own GitHub. It is more shocking macOS is source available.
> It is more shocking macOS is source available.
Nitpick: It's actual Open Source, not Source Available. Not the whole thing, but what code they release is under permissive open source licenses.
Sorry, mostly permissive licenses; there are a tiny number of copyleft things in there (famously, the last GPLv2 version of bash)
Apple is undeniably a premium product for consumers, and its OSes get the halo effect from that. Whether they're "premium" to develop for or even use is an entirely different question.
What is EEE?
Embrace, Extend, Extinguish.
Famously the Microsoft strategy against Java.
Yes this is what I meant. Sorry I was being lazy at the keyboard
Somewhat related to their FUD policy towards linux
I assume it’s embrace, extend, extinguish:
https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...
Zuckerberg and Facebook get a lot of hate, but at least they invest a lot into engineering and open source.
> and open source
I'm kind of split about this. Yes, Facebook has done a lot of great Open Source in the past, and I'm sure they'll do more great Open Source in the future.
But it's really hard to see them in a positive light when they keep misleading people about Llama, and publish blog posts that say how important Open Source is etc etc, then refuse to actually release Llama as Open Source, refuse to elaborate on why they see it as Open Source while no one else does, and refuse to take a step back and understand how the FOSS community feels when they actively mislead people like this.
What a lot of people complain about with Llama is the fact that the weights are open but not the training data and training code. That feels like a red herring to me—code is data and data is code, and we shouldn't require someone to be developing entirely in the open in order for the output to be open source.
The weights are the "preferred form of the work for making modifications to it", to quote the GPL. The rest is just the infrastructure used to produce the work.
Where "open source" is misleading with Llama is that it's restricted to companies under a certain size and has restrictions on what you can and can't do with it. That kind of restriction undermines the freedoms promised by the phrase "open source", and it's concerning to me that people have gotten so fixated on weights vs data when there's a big gap in the freedoms offered on the weights.
> The weights are the "preferred form of the work for making modifications to it", to quote the GPL. The rest is just the infrastructure used to produce the work.
I disagree with this. If you want to actually have a fundamental impact on the model, the real work and innovation goes into how the training is done and the architecture of the model. That's the "source", and what Meta et al are currently trying to protect and keep private.
Guess why they're so adamant at keeping the training code secret?
Besides that, I agree wholeheartedly with you :)
To provide an example, the recent Llama3.2 release includes a clause in their acceptable use policy that says any individual or business located in EU has no rights to use their multimodal models. This is discrimination against persons/groups which violates most open source definitions. They even went as far as getting Huggingface to implement georestrictions so that EU users would get an error message and be unable to download the weights.
From my understanding this comes from a feud with EU privacy law because Facebook wants to train models on EU users data, but GDPR makes that complicated (see their letter at euneedsai.com). So they made this license change to "punish" the EU.
Not to mention that they are discriminating against fields of endeavor, and also requiring anyone that uses Llama in any shape or form to prominently display "Built with Llama".
As far as I know, no one does that (https://ollama.com as an example of a platform that breaks Llama's terms and conditions) and Meta isn't enforcing it. But who knows, they might do so in the future.
Not sure how any sane person can claim Llama is Open Source after realizing these things.
Zuckerberg does what’s good for Zuckerberg. We should all have zero reservations about what kind of person this guy is at this point. If open source is beneficial he’ll do that, but when it stops being beneficial you can count on him to do what’s best for himself at the expense of the general public.
Zuckerberg is just following the Bezos strategy of someone else’s margin being his opportunity. This open source move is predatory.
I personally think our academia should be training and curating these kinds of models and the data they are based on, but this is an acceptable second best.
IMO there are much better ways to spend 300 million in research beyond firing up a cluster for 60 days to train on internet content.
Spending $100 million one time on a GPT4-level model that is open-source would help with a lot of that research. Especially after all the 3rd party groups fine-tuned it or layered their tools on it.
I think the Copilot-equivalent tools alone would make it quickly pay itself off in productivity gains. Research summaries, PDF extraction, and OCR would add more to that.
Basically no one in the entire world was willing to spend the kind of money on massive compute and data centers that Meta did spend, is spending, and will spend. The actual numbers are (I think) rare to find and so large that they are hard to comprehend.
there's already https://www.goody2.ai/chat
Aggressively commoditizing the complement has been a good strategy for them.
> good strategy for them.
For everyone, besides OpenAI etc.
Also the fact that he is delivering on Fediverse integration with Threads.
I don't think most people expected that to happen so quickly or frankly at all.
VR also gets a lot of hate, and they definitely dropped the ball on user-facing software, but Meta is doing very substantial, deep and valuable long-term R&D on VR hardware. They are also doing a lot on systems software, with their OS and all the low-key AI that enables excellent real-time tracking, rendering and AR.
It might not all be open-source, and they are doing it with an expectation of long-term profit, but they are earnestly pushing the horizons (pun intended) of the field and taking-on lots of risk for everyone else.
It's undeniable now that they are a serious and innovative engineering organization, while Google is rapidly losing that reputation.
Neither did the company.
Most new products fail at Meta because they become a "priority": thousands of engineers get thrown at the problem, and everything bogs down managing a massive oversubscription of engineers relative to useful work.
Threads happened because a few people managed to convince each other to take a risk and build an Instagram/Mastodon chimera. They managed to show enough progress to continue without getting fucked in the performance review, but not enough for an exec to get excited about building an empire around it.
The hate is mostly from people who don't use their products. Plenty of people are happy users
Do you realize that's a tautology? Why would you use their products if you hate them?
Why not?! I absolutely hate reddit but still can't stop using it.
Why, is it for work?
> "This effort pushed our infrastructure to operate across more than 16,000 NVIDIA H100 GPUs, making Llama 3.1 405B the first model in the Llama series to be trained at such a massive scale."
So at 20k a pop (assuming Meta has a decent wholesale price from Nvidia), they spent $320 MILLION on the 405B model (not including probably 5-10 million in electricity for the training process, water, staff, infra).
Do we think that brings more than 400+ million in value to Meta? I think so. I don't want to do the math, so I'll ask Perplexity to look it up:
> "How much has Meta's valuation increased since they released their first open source model"
Answer (edited):
> Closing price on February 23, 2023: $509.50
> Closing price on October 11, 2024: $573.68
> The increase in stock price is $64.18 per share.
> Total increase = Price increase per share × Number of outstanding shares
> Total increase = $64.18 × 2,534,000,000 = $162,632,100,000
> Meta's stock valuation has increased by approximately $162.63 billion since the release of their first open source model on February 24, 2023.
They seem to be making the right choices!
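A sanity check on the $320M hardware figure above. Both inputs are taken from the comment itself: the 16,000 H100s cited in Meta's announcement and the assumed $20k wholesale price per unit (a guess, not a confirmed Nvidia price):

```python
# Back-of-the-envelope hardware cost for the 405B training cluster.
num_gpus = 16_000        # H100s cited in Meta's announcement
price_per_gpu = 20_000   # assumed wholesale price per H100, USD (a guess)

hardware_cost = num_gpus * price_per_gpu
print(f"${hardware_cost:,}")  # $320,000,000
```

Note this treats the cluster as a one-time expense for a single model, which a reply below disputes.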
> Do we think that brings more than 400+ million in value to Meta?
Tough to tell, given nobody is turning a net profit on LLMs yet.
Companies have a tendency to develop neuroses, though, just like people. Apple's near miss with bankruptcy fuelled cash hoarding. For Facebook, their disastrous IPO and near miss of mobile seem to have made them hyper aware of the Innovator's Dilemma. $400mm spent on a defensive move is certainly wiser than tens of billions on the metaverse.
> Tough to tell, given nobody is turning a net profit on LLMs yet
Perhaps, but Meta is definitely getting some money back from ad impressions supported by AI-generated content.
Correct. We know these models are producing fucktonnes of revenue. At least some of them can be run at a gross profit, i.e. where marginal power costs and capital costs are less than marginal revenues. (Put another way: if OpenAI were an absolute monopoly and stopped training new models, could it turn a profit?)
What’s unclear is if this is a short-term revenue benefit from people fucking around with the newest, shiniest model, or recurring revenue that is only appearing unprofitable because the frontier is advancing so quickly.
From the little we know about OpenAI's inference infra, I feel like I can confidently say that if training stopped today, and they got cut off from Azure subsidies, their $20.00 subscription model would probably not cover the cost of inference.
I know nothing about the enterprise side of OpenAI but I'm sure they're profitable there. I doubt the subscription cost of a single power user of ChatGPT Plus covers the water they consume as a single user (This is probably an exaggeration, but I think I'm in the ballpark).
It may be that extra-large LLMs don’t make sense for ChatGPT. They’re for enterprise use, like supercomputers. The reason I say “at least some” is I’ve found use in running a local instance of Llama, which seems to imply there is some niche of (legal) activities AI can support sustainably. (Versus, e.g. crypto.)
> Tough to tell, given nobody is turning a net profit on LLMs yet.
I suspect in the case of Meta and other big players, profit isn't necessarily required to bring substantial value. Imagine their model being able to help them moderate more fairly and accurately. This alone could prevent potential legal actions from individuals, companies, and governments.
> profit isn't necessarily required to bring substantial value
They’re private companies. If they can’t tie it to profit, it’s not adding value.
> being able to help them moderate more fairly and accurately
This reduces legal costs and increases strategic flexibility. Sort of like HR or legal departments: cost centres add value by controlling costs, a critical component of profitability.
> They’re private companies. If they can’t tie it to profit, it’s not adding value.
Not true, even strictly from an accounting perspective. If you spend $400M and build an asset that is worth >$400M, then you have increased the value of the company without modifying profit. For example, if a company were to buy land, and build a building, that building has a value regardless of if it is associated with any revenue.
> *If you spend $400M and build an asset that is worth >$400M, then you have increased the value of the company without modifying profit. For example, if a company were to buy land, and build a building, that building has a value regardless of if it is associated with any revenue*
Why does it have value? It’s because it can be rented, occupied for productive use or sold to someone who can do either of those. Revenue-free assets are essentially money. (Companies aren’t in the business of non-revenue non-monetary assets—that’s the domain of society at large.)
Providing toilets to employees does not tie to profit. Toilets in all offices are now closed. We are saving over $10k per day.
> Providing toilets to employees does not tie to profit. Toilets in all offices are now closed. We are saving over $10k per day
Right. How do those companies tend to wind up?
I didn't say short-term profits. I said ultimately, the value of non-monetary assets is tied to profitability. Particularly financial assets, e.g. C corporations. That doesn't mean that's the only measure of value. But for a company it's damn close.
Put another way: when a for-profit company starts arguing that profits don't matter, it's a little bit curious.
Some people are gonna shit in their pants hearing this new policy.
> They’re private companies. If they can’t tie it to profit, it’s not adding value.
Smart companies understand other types of value exist.
If a democratic population hates you, it is harder to convince politicians to do your bidding. (Not impossible, just harder!)
If potential employees don't think kindly of you, it is harder to recruit.
Llama is a constant source of good PR for Meta in the developer community. Compared to just a couple of years ago when they were mostly laughed at by devs for metaverse stuff. Now it is "holy cow Zuck is standing up to Microsoft and Amazon and democratizing AI!"
With Llama, Meta has got great PR, and also developed cutting edge tech.
They also get to benefit from thousands of developers trying to make Meta's models run more efficiently.
> other types of value exist
Sure. But they intermediate to profit. I'm not suggesting leadership should be justifying everything in those terms. But if an entire product line doesn't have a solid long-term net revenue generating or cost saving consequence, it's a flag for governance.
> potential employees don't think kindly of you, it is harder to recruit
And you get higher churn. Cost imperative. Not the sole reason--you have to work with the people, after all. But that's an agent benefit. Companies treat high-value employees well because it makes sense to, and they'd probably cease to exist if they stopped doing it.
Do you think that the 16k GPUs get used once and then are thrown away? Llama 405B was trained over 56 days on the 16k GPUs; if I round that up to 60 days and assume the current mainstream hourly rate of $2/H100/hour from the Neoclouds (which are obviously making margin), that comes out to a total cost of ~$47M. Obviously Meta is training a lot of models using their GPU equipment, and would expect it to be in service for at least 3 years, and their cost is obviously less than what the public pricing on clouds is.
And Meta is using a lot of GPUs for offline ML and online ML features on Instagram, FB etc. So nothing is "wasted".
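Spelling out that rental-equivalent estimate. The $2/H100/hour rate and the 60-day rounding are the parent's assumptions (public Neocloud pricing, which includes the cloud's margin), not Meta's actual internal cost:

```python
# Rental-equivalent cost of the 405B training run at public cloud rates.
num_gpus = 16_000
days = 60                 # 56 actual training days, rounded up
rate_per_gpu_hour = 2.00  # assumed public H100 price, USD/hour

cost = num_gpus * days * 24 * rate_per_gpu_hour
print(f"${cost:,.0f}")  # $46,080,000, i.e. roughly the ~$47M quoted above
```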
> I don't want to do the math, so I'll ask Perplexity to look it up
These numbers are totally wrong, and it takes about 30 seconds to look it up. It closed at 589.95 on 2024-10-11 and 172.04 on 2023-02-23. The other numbers appear to be wrong too.
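Redoing the upthread valuation math with these corrected closing prices, keeping the ~2.534B share count quoted above (itself an approximation, and ignoring buybacks/dilution over the period):

```python
# Market-cap change between the two dates, using the corrected prices.
price_2023_02_23 = 172.04
price_2024_10_11 = 589.95
shares_outstanding = 2_534_000_000  # figure quoted upthread, approximate

increase = (price_2024_10_11 - price_2023_02_23) * shares_outstanding
print(f"${increase / 1e12:.2f} trillion")  # ~$1.06 trillion, not $162B
```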
I did a double-take at the quoted $500 price in 2023. The real numbers strengthen the case for AI as a pump-the-stock investment.
Another good reason not to use perplexity
Please don't post auto-generated text without checking it yourself. It is almost always wrong, as it is here.
SPY has gone from 404 to 579 in that timeframe - so meta lost value and performance due to their choices? Or you're using a terrible metric to judge things by.
The stock price is completely wrong, stock was around $170 back in February 2023
I wonder how this math works in light of Moore's law? E.g. say it cost Meta $320 million to train this model this year. How much does it cost to train that model next year, or the year after instead? Is it significantly cheaper? Are the returns on investment the same? Makes me think there is a business case in watching someone spend a pile of money to train model X, waiting to see if there's market interest in model X, then spending a comparatively smaller pile of money training the same model X yourself, taking advantage of lower future compute costs, and undercutting the original company hand over fist.
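A toy version of that "train it later for less" idea. The two-year halving period is purely hypothetical; real cost declines depend on hardware generations, software efficiency gains, and market pricing, and the $320M starting figure is the estimate from upthread:

```python
# Hypothetical cost of reproducing a training run N years later,
# assuming compute cost per unit of work halves every 2 years.
initial_cost = 320e6   # estimated cost of the original run, USD
halving_years = 2.0    # hypothetical cost-halving period

def cost_after(years: float) -> float:
    """Cost to reproduce the same training run `years` later."""
    return initial_cost * 0.5 ** (years / halving_years)

print(f"${cost_after(2) / 1e6:.0f}M")  # $160M two years later
print(f"${cost_after(4) / 1e6:.0f}M")  # $80M four years later
```

The catch, of course, is that the first mover has two years of revenue and mindshare before the fast follower's discount kicks in.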
Meta seems to - in my view correctly - understand that the risk to Facebook & other Meta properties is someone else walking away with great success, capturing the market for generative AI & being left out.
They don't need to make money or increase their value. They need to ward off existential risk. The best way to ensure they aren't locked out from the future is to keep the future open. That's the only hope they have of maintaining the locks they already have on much of the information technology world.
> Meta's stock valuation has increased by approximately $162.63 billion
All thanks to their investments in VR! Wait guys where are you going
They also didn't open an Alpaca farm in Tuvalu during that time period.
One of the things driving this crazy spend on AI infrastructure by big tech is excess capital.
Big tech is getting increasing scrutiny by regulators, etc for monopolistic/anticompetitive practices/etc[0]. From S&P:
"Regulatory scrutiny is likely to translate into fewer mega deals, however. Adobe Inc. recently abandoned its $20 billion acquisition of Figma Inc. in the face of ongoing opposition from the UK's Competition and Markets Authority and the EU's European Commission. While Microsoft Corp. was able to close its $68.7 billion Activision Blizzard buy, it took nearly two years to complete the deal amid an intense fight with the regulators.
The harsh regulatory environment will continue to have a chilling effect on large tech M&A, but strategic buyers are likely to focus on smaller tuck-ins that do not tempt regulators to intervene."
So instead of growing further by spend on acquiring/rolling out new platforms to draw further scrutiny, they're shoveling piles of cash towards other capital expenses (with accounting tricks) that don't get further attention/scrutiny from regulators.
Meta doesn't use this infrastructure just for LLaMA - they get to use these hundreds of millions of dollars of spend to extract more value (deeper, not wider) from the users and platforms they already have.
Case in point: AI generated ads[1]. There's a not-too-distant future where the ads displayed on Facebook are an AI generated video of you in a new Toyota (or whatever). In terms of even open models Meta is much more than LLaMA. They have done a lot with speech (seamless), vision (segment anything), and much more.
That model training, inference, etc is running on these Nvidia GPUs as well.
There is also a take from Reid Hoffman saying that buying 100,000 GPUs or whatever is basically "table stakes" at this point[2]. Boards, analysts, the market, etc says "What I know for sure is this AI stuff needs Nvidia GPUs. Elon has 100k. How many do we have?".
So it's also a Cold War arms race/missile gap scenario. Plus, don't discount ego/"male anatomy measuring contest" that goes on with these guys. Consider this story where a bunch of centi-billionaires got together at Nobu to fight over Nvidia GPU shipments[3].
[0] - https://www.spglobal.com/marketintelligence/en/news-insights...
[1] - https://www.theverge.com/2024/10/8/24265065/meta-ai-edited-v...
[2] - https://www.theinformation.com/articles/openai-investor-hoff...
[3] - https://finance.yahoo.com/news/elon-musk-oracles-larry-ellis...
Free and open LLMs will be compelling to many users that cannot or do not want to use online services.
It’s good that a large company like Meta seems to be pushing it forward with Llama 3.2.
Open, quality LLMs must lead to some tense discussions at OpenAI and Anthropic about whether they should be open or closed.
OpenAI Whisper is open source and it’s extremely good although it appeared to me the online version is much faster.
Likely not tense as it seems pretty clear where everyone stands on this. OpenAI and Anthropic are likely having conversations along the lines of 'move faster and build the moat!' while Meta is having a conversation along the lines of 'move faster and destroy the moat!'
It's not that Meta has anything against moats, it's just that they've seen how it works when they try to build their moat on top of someone else's (i.e. Apple and Google re: mobile) and it didn't work out too well.
Also the EU is far more aggressive and opinionated about what they want to see in the market than during the smartphone era. And this is spreading across the world.
So being committed to an open, standards-based strategy is an easy way for them to avoid most of the risks.
So they’re trying to be what Android is to the iPhone?
Precisely, and that extends to their XR ambitions as well: https://www.meta.com/blog/quest/meta-horizon-os-open-hardwar...
I wonder when Meta, Microsoft and OpenAI will partner on an open chip design to compete with NVIDIA.
They’re all blowing billions of dollars on NVIDIA hardware with like 70% margin, and with Triton backing PyTorch it shouldn’t be that hard to move off of the CUDA stack.
https://ai.meta.com/blog/next-generation-meta-training-infer...
https://azure.microsoft.com/en-us/blog/azure-maia-for-the-er...
They're not teaming up and neither design is open, but they definitely have their own designs.
It would require a fairly big bet on AI models not changing structure all that much, I guess
I think if the tooling and supply chain falls into place it would surprise me if meta and friends didn't make their own chips, assuming it was a good fit of course.
Note: AMD's missed opportunity here is so bad people jump to "make their own chip" rather than "buy AMD". Although watch that space.
Aren't they basically safe as long as future AI models are still basically multiplying tensors of floats?
Training uses other operations for normalizing, etc.
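To illustrate the exchange above: inference is dominated by the matmuls that tensor accelerators target, but training and modern architectures also lean on reductions and elementwise math such as normalization. A minimal NumPy sketch, with layer norm chosen as a representative example (not any specific model's code):

```python
import numpy as np

x = np.random.randn(4, 8).astype(np.float32)  # toy activations
w = np.random.randn(8, 8).astype(np.float32)  # toy weights

# The fused-multiply-add workhorse that fixed-function hardware optimizes.
y = x @ w

# A normalization step: reductions, subtraction, division, sqrt --
# not just tensor multiplies, which is the parent's point.
mean = y.mean(axis=-1, keepdims=True)
var = y.var(axis=-1, keepdims=True)
y_norm = (y - mean) / np.sqrt(var + 1e-5)

print(y_norm.shape)  # (4, 8)
```

In practice these "other" ops are a small fraction of the FLOPs but still need hardware support, which is why accelerators ship with vector units alongside their matmul engines.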
What is a large amount of money to you is not that significant to some of these companies. I suspect for the vast majority of these companies, it still represents a small expense. General sentiment is that there is probably overspending in the area, but it's better to spend it than risk being left behind.
Half of NVIDIA's 2nd quarter revenue (30 billion) came from 4 customers, with Microsoft and Meta already having spent 40-60 billion each on GPU data centers (of which most goes to NVIDIA). "Open"AI just raised a few billion and is supposedly planning on building their own training clusters soon.
For a small fraction of that they could poach a ton of people from NVIDIA and publish a new open chip spec that anyone could manufacture.
https://www.fool.com/investing/2024/09/12/46-nvidias-30-bill...
That again underestimates the challenges in that undertaking. After all, these costs are still drops in the bucket. Why distract yourself from your business to go and build chips?
They all use SFDC; should they go and create an open source sales platform?
https://developers.facebook.com/blog/post/2021/09/07/eli5-op...
That's exactly what they did with their server design.
I'm saying come up with an open standard for tensor processing chips, with open drivers and core compute libraries, then let hardware vendors innovate and compete to drive down the price.
Meta spent like 10% of their revenue on ML hardware, it's not a drop in a bucket and with model scaling and large scale deployment these costs are not going down. https://www.datacenterdynamics.com/en/news/meta-to-operate-6...
Each of them is designing their own hardware. The goal isn't really to compete with nvidia though, whose market is general purpose GPU compute. Instead they're customizing hardware for inference to drive down product cost.
I can't see Meta and MSFT getting into the chip design business. Maybe OpenAI.
Apple's already not using nVidia chips to train its models.
Meta is already in the chip design business, albeit in collaboration with Broadcom who build it for them: https://ai.meta.com/blog/next-generation-meta-training-infer...
I think Microsoft have dabbled in the space already.
Very impressive. Not sure if worth $40 Billion. But very impressive nonetheless.
Beats buying Twitter, that's for sure.
Twitter was always very overvalued because its real value can't be expressed in dollar terms.
That made me laugh. $40 billion is absurd.
And to power all those fused-multiply-add circuits, Meta will likely soon be involved in the construction or restart or life-extension of nuclear fission reactors, just like Microsoft and Google and Amazon (and Oracle, allegedly).
Quoting Yann LeCun (Vice-President, Chief AI Scientist at Meta):
https://x.com/ylecun/status/1837875035270263014

They've already gone after OpenAI, are they after Nvidia now?
No, this is a rack built on NVIDIA’s platform, so this is just more $$$ for them.
The article talks about NVidia racks, but also about their DSF [0], which is Ethernet-based (as opposed to Infiniband), built on switches using Cisco and Broadcom chipsets and custom ASIC xPU accelerators such as their own MTIA [1], which is built for them by Broadcom. So they are taking more than one approach simultaneously.
[0] https://engineering.fb.com/2024/10/15/data-infrastructure/op...
[1] https://ai.meta.com/blog/next-generation-meta-training-infer...
There's also an AMD rack, and Meta is big enough that they won't get blacklisted for it.
Yeah this is... nothing. At least nothing anyone worth less than a few billion could ever care about.
Would be far more interesting to see MTIA in an edge compute PCIe form.
Sort of. Yes, this uses Nvidia, but one of Nvidia's moats is its rack-scale integration: AMD and other providers just can't scale up as easily, because they're behind in connecting large numbers of GPUs together effectively. By doing this part themselves instead of buying Nvidia's much-hyped (deservedly) NVL72, a nonpareil rack system with 72 GPUs in it, and then open-sourcing it, Meta opens the door to possibly integrating AMD GPUs in the future, and that hurts Nvidia's moat.
"Open" is a meme at this point, isn't it?
Having used and learned from the architectures that Facebook has published via OCP over the past decade, this is not a "meme" but actual real information and designs that Facebook uses to commoditize their supply chain and that others can use too.
Some designs such as OCP NIC have become industry standard.
... but hasn't that been a thing ever since the "Open" moniker came into use? I just feel "Open" is a mental shortcut that's being misused a bit.
See, e.g., [0] for an example of Facebook's (now Meta's) admirable dissemination of knowledge.
[0] https://engineering.fb.com/2015/06/26/security/fighting-spam...
I've not noticed a change in the amount of usage of "open" or any mental shortcuts about it since it was introduced via the term "open source" in the 1990s. If anything, there's less discussion and abuse of the term now. The only memery that I see is from OpenAI, which had aspirations to openness in the past but which today is a sham.
Facebook's open efforts here are completely great, IMHO, and have benefited me personally and professionally.
Open as in you get to open the box.
Or maybe Open as in “open for business”.