bottlepalm 1 day ago

How much money did Y Combinator invest to get that 0.6% stake? I hope it was more than zero. Funny how in 2019 they just started doling out shares in a previously shareless entity.

ucyo 1 day ago

Sam Altman was president of Y Combinator from 2014 to 2019. Of course YC has a stake in OpenAI. My surprise is that it's so low…

  • JumpCrisscross 1 day ago

    > of course YC has a stake in OpenAI

    Sam has worn multiple hats for ages. Not all of his holdings are cross-invested. (And they don’t have to be, as long as everything is disclosed.)

    It’s absolutely material that Sam’s statements about having no equity are a lie, granted an unsurprising one. It’s a bit more surprising to me that Graham would publicly defend Altman without disclosing this bias. And it’s actually shocking to me that Livingston, a journalist, was in on the fix.

  • financetechbro 23 hours ago

    Every time they raise new $, old shareholders get diluted. OAI has raised a lot of money

  • dgellow 22 hours ago

    From what I see they had 13 funding rounds; that’s quite a lot of dilution to be expected if you were early

rvz 1 day ago

Greg Brockman (President of OpenAI) also said that OpenAI is around 80% of the way to achieving "AGI", but it was disclosed that his stake in OpenAI is worth around $30BN.

So what does the true definition of "AGI" actually mean? It depends on who you ask.

It appears to many to mean "A Great IPO" or "A Gigantic IPO" at this point rather than "Artificial General Intelligence", which has clearly been hijacked to mean something else.

  • wg0 1 day ago

    > So what does the true definition of "AGI" actually mean?

    "If your stake is > $30 billion" seems a more reasonable and realistic criterion to me.

  • lukan 1 day ago

    > "Artificial General Intelligence" which has been clearly hijacked to mean something else

    I mean, the goalposts shifted. The game Go used to be considered to require true AI; so did passing the Turing test. Scanning, analyzing, and improving complex codebases largely on their own would have been considered some sort of AGI by me 6 years ago.

    Now sure, we all know they lack true understanding. But it gets blurry at times what that actually means.

    But I don't buy that there will be a magic point where self-improving AGI explodes towards singularity. The current approach is very, very energy- and compute-intensive, and that is unlikely to change.

    • sevenzero 1 day ago

      Maybe the dystopian AI development will result in energy funding and advancements that actually benefit most of us. I really hope all this turns out to be a net positive for humanity. If we won't get true "AGI", which we are far, far away from, we could at least make some advancements in different areas.

      • lukan 1 day ago

        Well, I surely hope so, but I feel less positive if that means a nuclear power plant parked next to every new rushed datacenter.

        https://www.scmp.com/news/china/science/article/3351721/chin...

        But in general I do believe AI has the potential to be a great positive for humanity on its own - if the open models stay strong and not only a few people control them.

        • sevenzero 1 day ago

          I can see your reasoning. Unfortunately I see and experience everything wrong with AI in my daily life. People ask it what gifts to buy for their loved ones or use it as a therapist substitute. Humans are not ready for this technology. A lot of us are even losing the ability to read properly (even though that's related to technology in general). It's extremely scary. The only advantage humans have is an extraordinarily big brain and a pair of thumbs; we can't afford to use our brains less.

          • lukan 1 day ago

            I mean, people have been doing dumb shit since the beginning of time, and I've considered this society messed up since way before LLMs.

            And yes, humans as a whole are not even ready for cars or nuclear weapons. We built and used them anyway.

            But my brain is still pretty busy, and I don't think the younger generation is getting dumber because of LLMs, rather from mindlessly consuming TikTok and co.

            LLMs are also a great learning tool, and anyone using them should know their limits quickly. Not all do, though. That is obvious.

  • giancarlostoro 1 day ago

    That's the trick, right? What do they really mean by AGI? Depending on how narrow you go, it sounds like we've already achieved it. However, if they keep saying they'll achieve it without ever defining what it is before making such statements, they can keep saying it endlessly to create hype.

    One key thing I've heard about AGI, which I think would be the most determining factor for me, is a model that learns on the fly. That could be done one way or another, but when you consider that LLMs basically run like "ROM" files, it makes it a little complicated.

    I think we need to re-imagine how LLMs are built, trained, and run. But also figure out how to drastically lower the cost of running them.

    • bitexploder 1 day ago

      I think they would not be LLMs then.

      • giancarlostoro 21 hours ago

        Agreed. It feels like LLMs are just one piece of the whole final solution towards AGI. I can foresee an "LLM-flavored AGI" where it does all those things via tool calling, RAG, and other techniques. The real AGI, in my eyes, will be more than just an LLM though.

  • avazhi 1 day ago

    One of the random tidbits I can remember from the New Yorker Altman deep dive was Brockman being obsessed with making $1B. It was memorable because I actually cringed reading it.

  • christkv 1 day ago

    AGI is defined as whatever it takes for stock holders to make $$$ I guess?

  • fodkodrasz 1 day ago

    > So what does the true definition of "AGI" actually mean? It depends on who you ask.

    AGI - Automatically Generating Income

  • Galanwe 1 day ago

    > So what does the true definition of "AGI" actually mean?

    No worries, there will be a startup creating "AGI Bench"; >=80% means you're AGI, and they will be valued at $50B.

    • in-silico 1 day ago

      The ARC-AGI benchmark is basically this already

      • jononor 1 day ago

        That is not at all the intention of the ARC team. By the ARC team's definition, passing any single ARC-AGI benchmark does not mean that AGI has been achieved. Instead, AGI would be considered achieved when we are no longer able to come up with new benchmarks that AI systems do not immediately do well on.

  • jimnotgym 1 day ago

    First do 80%, then do the remaining 150%, I would imagine

  • wiseowise 1 day ago

    > So what does the true definition of "AGI" actually mean? It depends on who you ask.

    When Greg Brockman makes a lot of money from the deal.

keybored 1 day ago

News at Y Combinator used to be my preferred reading diversion: interesting technical stories, debates on political topics, learning things, my comfort food of the same topics repeating the same arguments over the span of a decade. Now it’s that, but also 65% AI doomscrolling.

  • benterix 1 day ago

    Same with me. I found a new hobby: reading pre-LLM HN. It turns out I missed so many interesting projects and discussions. Some are a bit funny in hindsight, some are inspiring.

    At the same time, the current version of HN is still usable; you just need to mentally filter out the LLM-related stuff. It was similar with cryptocurrencies, TBH.

    • nozzlegear 23 hours ago

      I just press the "Hide" button under most stories related to AI; it removes the temptation for me to jump into the thread, and surfaces more interesting (i.e. not AI bullshit) submissions.

      You could probably automate it with a browser extension and a regex that looks for words like "AI", "LLM", and the names of any popular companies or projects.
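      A rough sketch of the kind of title filter such an automation could use (in Python rather than an actual extension, and with an illustrative rather than exhaustive word list):

```python
import re

# Case-insensitive whole-word match; \b keeps "AI" from matching inside
# words like "said" or "Air". The keyword list here is illustrative only.
AI_PATTERN = re.compile(
    r"\b(AI|LLM|GPT|OpenAI|Anthropic|Claude|Gemini|Copilot)\b",
    re.IGNORECASE,
)

def should_hide(title: str) -> bool:
    """Return True if a story title looks AI-related."""
    return AI_PATTERN.search(title) is not None
```

      An extension would then run something like `should_hide` over each story title and click the "Hide" link for the matches.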

  • big_toast 18 hours ago

    The AI bothered me less, but I got a little frustrated with less-than-substantive comments on the front page.

    Oddly, I made an extension* to use the site more the way I wanted, and now I find it a little easier to get a higher SNR past the front page, and I'm enjoying that. I didn't really get past post rank 60 for two decades and now generally get much further.

    *(It's basically vim-keys support for two functions: one to "highlight" stories/comment threads I think will be promising, and a hide function for the rest.)

globalnode 1 day ago

I always thought there were two reasons for AI interest on HN.

1. Since AI has captured the imagination of capitalists and they think this is the next industrial revolution, they gotta be in it to win it. Combined with the fact that I believe most people here are wealthy, or at least aspirationally so, that explains half of it.

2. The other half is that AI as a tech is interesting from a mathematical and compsci point of view, though certainly not interesting enough to justify the proportion of topics about it here.

I guess I should add a 3rd reason.

3. YC has a financial stake in spreading the news about how wonderful this tech is!

lolol

  • tomhow 1 day ago

    The only thing that should be surprising to anyone who knows about the early history of OpenAI is how little of it YC owns, given how much it leveraged YC’s credibility to get started (early employees joined an institution called “YC Research”, operating from YC’s office space). Once that stake is divided up among all the LPs and small unit holders, it’s not a huge outcome.

    Also: nothing gets sustained attention on HN unless good hackers find it interesting. Our entire objective is to be the website that attracts the best hackers, serves them the most interesting content and facilitates the most interesting discussions. That can’t happen if we’re nefariously pushing a commercial agenda.

    • robocat 1 day ago

      Rhymes with reddit.com at IPO:

      - Sam Altman: ~9%

      - Y Combinator: <5%

      - Steve Huffman: ~3%, although he had ~4% voting power via Class B shares.

      - Alexis Ohanian: minimal

      - Advance Publications: ~30%

      - Tencent: ~11%

      The original founders (Steve Huffman and Alexis Ohanian) were massively diluted when they sold Reddit to Advance Publications in 2006 for $10 to $20 million.

      Numbers above are vaguely accurate. See https://www.untaylored.com/post/who-owns-reddit

  • ValentineC 1 day ago

    One more (for me, and definitely for many others since I've seen similar posts):

    It's letting me build stuff, way more cheaply, that I probably wouldn't be able to build by myself without raising lots of money, at least until GitHub Copilot gets incredibly nerfed next month.

  • greggsy 1 day ago

    …or many people are using the products day to day in their work as IT professionals or developers?

    I think it’s mostly the above, rather than a capitalist conspiracy or its relevance as a scientific curiosity.

  • globalnode 1 day ago

    sorry everyone, sometimes i go down these rabbit holes

  • dgellow 22 hours ago

    Even if Y Combinator doesn’t have ownership in OpenAI, they do have ownership in a lot of AI startups and would still be incentivized to spread AI news.

  • an0malous 22 hours ago

    The interest in AI is global and spans nearly every corner of the Internet; it’s not something exclusive to HN. The root cause is #1, by a wide margin. Our society is governed by money: the investor class sees an opportunity to become trillionaires, the labor class is afraid of becoming the permanent underclass, and all of these things are defined by money.

  • aurareturn 22 hours ago

    I'll present an alternative set of reasons:

    1. AI is tremendously useful at the current intelligence level and people here like to be more productive.

    2. AI is exciting - both in the potential applications and new models getting smarter.

    3. Many workers here have either transitioned to building agents or they're heavily using AI for their work.

  • tim333 21 hours ago

    It also can give insights into natural intelligence.

FergusArgyll 1 day ago

"well-known AI expert Gary Marcus"

  • tomhow 1 day ago

    Please don’t post snark on HN. Gary is, objectively, an AI expert. He’s been a leading researcher for decades and sold an AI company to Uber. He obviously sees things differently from the current generation of AI company leaders and has concerns about the direction of the AI industry. That doesn’t mean it’s fine to disrespect someone like this here. The first rule of the “In Comments” section of the guidelines is be kind.

    https://news.ycombinator.com/newsguidelines.html

oliculipolicula 1 day ago

Despite not publicly moving away from what has been said about Sam*

Jessica Livingston's personal stake in OpenAI is maybe 0.1% at most, and Paul Graham's, AFAIK, is 0.

So the bias doesn't seem as large as OP thinks.

*https://xcancel.com/paulg/status/2041366050693173393

And "toughness, adaptability, and determination" >>> "ambition", frankly

  • crowcroft 1 day ago

    What is 0.1% of a trillion? I think that's quite a large number still.

    • bitmasher9 1 day ago

      OpenAI’s last post-money valuation was less than a trillion. They’ll probably cross that point in the future, but let’s not get ahead of ourselves.

      • kibibu 1 day ago

        It was $852 billion - 0.1% of which is $852 million

        • hhh 1 day ago

          85.2m*

          • crowcroft 1 day ago

            Are you sure?

            • hhh 6 hours ago

              sorry, i misread 0.01% :)

        • epolanski 1 day ago

          Can only buy a luxury mega yacht, a few mansions, and private jets, but let's be real, after this you're lucky if you're left with just enough to buy yourself a European football club.
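          For anyone double-checking the subthread's arithmetic, a quick sanity sketch (the $852B figure comes from the comment above; integer math avoids float rounding):

```python
valuation = 852_000_000_000  # $852 billion, per the comment above

stake_0_1_pct = valuation // 1_000    # 0.1%  of $852B
stake_0_01_pct = valuation // 10_000  # 0.01% of $852B

print(f"0.1%  -> ${stake_0_1_pct:,}")   # prints "0.1%  -> $852,000,000"
print(f"0.01% -> ${stake_0_01_pct:,}")  # prints "0.01% -> $85,200,000"
```

          So kibibu's $852 million is right for a 0.1% stake, and $85.2 million corresponds to 0.01%.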

  • kibibu 1 day ago

    Does Paul Graham no longer have a stake in Y Combinator?

  • chis 1 day ago

    Such suspicious phrasing lol. So you’re saying Paul Graham and his wife Jessica have 800 MILLION dollars worth of OpenAI stock, and that’s not so significant?

    • oliculipolicula 1 day ago

      We're forced to decide whether 0.8B is enough to risk her credibility over, or, if it matters to us, gather more information first

      • anewhnaccount2 1 day ago

        Exactly! It's only $0.0008T. Pocket change really...

      • JumpCrisscross 1 day ago

        Has The Information broken any critical news about OpenAI? I never connected the dots around why I started finding it increasingly not worth paying for over the last year or two, but editorial bias feels correct.

8ig8 1 day ago

Seems to be an unusually quiet post for something posted 3 hours ago.

  • iambateman 1 day ago

    Do you have something to say about it?

  • roxolotl 1 day ago

    My understanding is dang has said in the past they do some anti-moderation (I’m sure he has a better term) for posts related to Y Combinator. That is to say, they moderate less and might, do not quote me here, even boost a tad. So an upvoted story from a well-reputed source, even without many comments, is likely to hang onto the front page for a bit.

    • dang 1 day ago

      You're thinking of the principle I've explained here over the years: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu... - that we moderate HN less, not more, when YC or a YC-funded startup are part of the story.

      "Less" doesn't mean "not at all", of course—that would be too big a loophole. But it does mean strictly less, and we stick to that, despite its various downsides, because the upside is bigger.

      In the present case, it means we haven't applied any moderation downweights to this post, even though it's obviously the sort of thing we would downweight under other circumstances, since it's neither particularly substantive nor intellectually interesting (though it could be some other kind of interesting, at least to some readers).

  • pdpi 1 day ago

    The actual content of the post is straightforward and not particularly novel: YC has a stake in OpenAI, that creates a conflict of interest, and The New Yorker is negligent (in the informal sense) for not putting that in their piece.

    It’s a sobering reminder and worthy of being on the front page on that basis alone, but I don’t see much of a discussion to be had. “Unusually quiet for a front page post” is probably where this post is meant to be.

    • gyomu 1 day ago

      > not particularly novel

      As far as I know this is the first time anyone has publicly claimed to know, quoting insider sources, what YC's actual stake in OpenAI is.

wg0 1 day ago

Nothing unusual. There's not an AI company (mostly AI wrappers) on the planet in which Y Combinator hasn't sprinkled their cash already.

I'd go as far as to say it's impossible at this point to form an AI company without Y Combinator investing in it.

  • Vandit296 1 day ago

    I disagree. There are tons of early-stage investors who invest even before YC; you can find them on OpenVC.

geuis 1 day ago

Could someone (non-AI) summarize this? I'm sorry but I just literally don't have time to even read long posts from very reputable sources. I know I need the info but time just isn't there in my life right now.

  • FabHK 1 day ago

    Ronan Farrow and Andrew Marantz had a critical investigative report in The New Yorker on Sam Altman and OpenAI last month asking whether Altman could be trusted.

    Paul Graham of Y Combinator tweeted some positive things about Altman in response, emphasising that they didn't fire him as CEO of YC (though not going as far as declaring him trustworthy).

    Now John Gruber of Daring Fireball (an Apple blog) has added context by claiming that YC owns a 0.6% stake in OpenAI, worth around $5bn, which might colour Graham's judgement.

  • pixel_popping 1 day ago

    Why non-AI? If AI is arguably great at anything, it's this.

  • dgellow 22 hours ago

    Just skip and ignore it if you don’t have the time; you likely have more important things to do.