inimino 13 hours ago

This research process is the same as what I've been doing.

https://cmpr.ai/hutter/archive/

Claude does the research, like having an army of very fast but forgetful grad students working for free.

The individual paper doesn't make much incremental improvement but once you get a pipeline of results building in a particular direction it's powerful.

lrc a day ago

If you click through to the linked Vannevar Bush article and scroll down, there are a bunch of vintage ads around the prose that are kind of interesting. And some of the predictions have been well overtaken by events!

lawrenceyan a day ago

Biggest update I see is that he thinks AI 2027 is actually going to happen.

  • emp17344 14 hours ago

    We’re going to see all talk of AI 2027 quietly disappear as folks realize how out-of-touch with reality it is. I have no idea why people take that crap seriously.

munificent a day ago

> We are entering a golden age in which all computer science problems seem to be tractable, insomuch as we can get very useful approximations of any computable function.

Alternatively, we are entering a dark age where the billionaires who control most of the world's capital will no longer need to suffer the indignity of paying wages to humans in order to generate more revenue from information products and all of the data they've hoarded over the past couple of decades.

> the real kicker is that we now have general-purpose thinking machines that can use computers and tackle just about any short digital problem.

We already have those thinking machines. They're called people. Why haven't people solved many of the world's problems already? Largely because the people who can afford to pay them to do so have chosen not to.

I don't see any evidence that the selfishness, avarice, and short-term thinking of the elites will be improved by them being able to replace their employees with a bot army.

  • skybrian a day ago

    I don't think you've read those quotes very closely? He's writing about all computer science problems. And "just about any short digital problem" is not the same as solving the world's problems.

    AI ghosts can do a lot of things, but they're limited by being non-physical.

    • munificent a day ago

      He does also say:

      > The entire global economy is re-organizing around the scale-up of AI models.

      > Software engineering is just the beginning; ...

      > Air conditioning currently consumes 10% of global electricity production, while datacenter compute less than 1%. We will have rocks thinking all the time to further the interests of their owners. Every corporation with GPUs to spare will have ambient thinkers constantly re-planning deadlines, reducing tech debt, and trawling for more information that helps the business make its decisions in a dynamic world.

      > Militaries will scramble every FLOP they can find to play out wargames, like rollouts in a MCTS search. What will happen when the first decisive war is won not by guns and drones, but by compute and information advantage? Stockpile your thinking tokens, for thinking begets better thinking.

      So he is extending this to more than just computer science.
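
      A concrete way to picture "rollouts in an MCTS search": score each candidate move by playing many random games to the end and keeping the move with the best win rate. Here's a toy, purely illustrative sketch on one-pile Nim (not anything from the essay) showing just the rollout half of MCTS, without the tree:

```python
import random

# Rollout evaluation on one-pile Nim (take 1-3 stones; whoever takes the
# last stone wins). This is the "rollout" part of MCTS, without the tree:
# plan by simulating many random games and keeping win statistics.

def random_playout(pile, rng):
    """Play random legal moves to the end; return True if the player
    to move from this position ends up winning."""
    player_to_move_wins = True
    while True:
        pile -= rng.randint(1, min(3, pile))
        if pile == 0:
            return player_to_move_wins
        player_to_move_wins = not player_to_move_wins

def choose_move(pile, rollouts, rng):
    """Pick the move whose resulting position is worst for the opponent."""
    best_move, best_score = None, -1.0
    for move in range(1, min(3, pile) + 1):
        remaining = pile - move
        if remaining == 0:
            return move  # taking the last stone wins immediately
        # We win exactly when the opponent (to move next) loses.
        wins = sum(not random_playout(remaining, rng)
                   for _ in range(rollouts))
        score = wins / rollouts
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# From a pile of 5, rollouts favor taking 1, leaving the opponent a
# multiple of 4 (a losing position under optimal play).
print(choose_move(5, 4000, random.Random(0)))
```

      The point of the analogy: the quality of the decision scales with how much compute you can spend on simulations, which is what "stockpile your thinking tokens" is gesturing at.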

      • skybrian a day ago

        Yeah, I think that's magical thinking about how much better war planning will help.

  • Centigonal a day ago

    I don't understand why you're being downvoted. This is a topic worth discussing.

    Like every previous invention that improves productivity (cf. copiers, steam power, the wheel), this wave of AI is making certain forms of labor redundant, creating or further enriching a class of industrialists, and enabling individuals to become even more productive.

    This could create a golden age, or a dark age -- most likely, it will create both. The industrial revolution created Dickensian London, the Luddite rebellion & ensuing massacres, and Blake's "dark satanic mills," but it also gave me my wardrobe of cool $30 band T-shirts and my beloved Amtrak train service.

    Now is the time to talk about how we predict incentive structures will cause this technology to be used, and what levers we have at our disposal to tilt it toward "golden age."

    • sunsunsunsun a day ago

      Considering the use of LLMs by many people as a sort of friend or psychologist, we also get to look forward to a new form of control over people. These things earn people's "trust," and there's no reason they couldn't be used to sway people's opinions. Not to mention the devious and subtle ways they can advertise to people.

      Also, these productivity gains aren't used to reduce working time for the same number of people, but instead to reduce the number of people needed to do the same amount of work. Working people get to see the productivity benefits in the form of worsening material conditions.

      • SR2Z 13 hours ago

        People need to develop memetic immunity to AI flattery. It's exactly like how conspiracy sites on the Internet worked: a lot of people get one-shot in the beginning, but 10 years later nearly everyone understands that you can't just believe what you read on the Internet.

        • munificent 11 hours ago

          People have had several thousand years to develop immunity to flattery and yet here we are with a President where aides have to put his name in every paragraph of a memo to get him to read it.

          https://www.independent.co.uk/news/world/americas/donald-tru...

          At an individual level, we have a lot of psychological plasticity and can work to overcome our limitations. At societal scale, though, we are social primates and any system that takes advantage of natural social primate behavior is likely to succeed indefinitely.

        • soco 12 hours ago

          You'd be surprised. I'm sorry if I sound condescending, I just don't know how to rephrase this: please, please look around at how effective the internet is nowadays (I'd dare say more and more effective) at pushing "alternative truth" with the obvious goal of covering up dirty businesses, wars, and even worse crimes.

    • beeflet a day ago

      Unlike every previous invention that improves productivity, it is making every form of labor redundant.

      • zozbot234 a day ago

        AIUI, in most lines of work AI is being used to replace/augment pointless paper-pushing jobs. It doesn't seem to be all that useful for real, productive work.

        Coding may be a limited exception, but even then the AI's job is to be basically a dumb (if sometimes knowledgeable) code monkey. You still need to do all the architecture and detailed design work if you want something maintainable at the end of the day.

        • munificent a day ago

          > It doesn't seem to be all that useful for real, productive work.

          Even the most pointless bullshit job accomplishes a societal function by transferring wages from a likely wealthy large corporation to an individual worker who has bills to pay.

          Eliminating bullshit jobs might be good from an economic efficiency perspective, but people still gotta eat.

          • uoaei a day ago

            The logic of American economic policy relies on a large velocity of money driven by consumer habits. It is tautological, and it is obsolete in the face of the elite trying to minimize wage expenses.

            • SR2Z 13 hours ago

              How is it obsolete? If everyone is unemployed and a few AI barons are obscenely wealthy, the velocity of money will be low because most people will be broke.

              Seems to me like that's still a worthy target if chasing it fights that outcome.

          • DennisP a day ago

            If the only point is distributing money, then the pointless bullshit job is an unnecessary complication.

            • munificent a day ago

              It's not unnecessary to the person who uses it to pay their bills.

              • xg15 a day ago

                I think GP meant that the money could be distributed directly without the job in between, i.e. UBI.

                Of course that comes with its own set of problems, e.g. that you will lose training, connections, the ability to exert influence through the job or any hope of building a career.

                • munificent 11 hours ago

                  That's certainly true.

                  But one is well-advised to inflate and test the new lifeboat before jumping out of the current one, not after.

        • beeflet a day ago

          Real productive work like what? What do you think all this hubbub with robotics is about?

          I mean, I know what you are getting at. I agree with you on the current state of the art. But advancements beyond this point threaten everyone's job. I don't see a moat for 95% of human labor.

          There's no reason why you couldn't get an AI to handle "the architecture and detailed design work". I mean, I hope the state of the art stays like this forever; I'm just not counting on it.

          • zozbot234 a day ago

            Robotics is nothing new, we had robots in factories in the 1980s. The jobs of modern factory workers are mostly about attending to robots and other automated systems.

            > There's no reason why you couldn't figure out an AI to assemble "the architecture and detailed design work".

            I'd like to see that, because it would mean that AIs have managed to stay at least somewhat coherent over longer work contexts.

            The closest you get to this (AIUI) is with AIs trying to prove complex math theorems, where the proof checking system itself enforces the presence of effective large-scale structure. But that's an outside system keeping the AI on a very tight leash with immediate feedback, not letting it go off-track.

    • keybored a day ago

      People fought back. Who is fighting back now?

      Capitalists have openly gloated in public about wanting to replace at least one profession. That was months or years ago. What are people doing in response? Discussing incentive structures?

      SC coders paid hundreds of thousands a year are just letting this happen to them. “Nothing to be done about another 15K round of layoffs, onlookers say”

      • zozbot234 a day ago

        > Capitalists have openly gloated in public about wanting to replace at least one profession. That was months or years ago. What are people doing in response?

        Great, let them try. They'll find out that AI makes the human SC coder more productive, not less. Everyone knows that AI has little to nothing to do with the layoffs; it's just a silly excuse to give their investors better optics. Nobody wants to admit that maybe they overhired a bit after the whole COVID mess.

      • AndrewKemendo a day ago

        This is exactly it: nobody is going to do anything about it.

      • CamperBob2 a day ago

        Buggy-whip makers inconsolable!

  • keeda a day ago

    Here are my thoughts, which are not fully formed because AI is still so new. But taking this line of thought reductio ad absurdum, it becomes apparent that the elites have a critical dependency on us plebs:

    Almost all of their wealth is ultimately derived from people.

    The rich get richer by taking a massive cut of the economy, and the economy is basically people providing and paying for services and goods. If all the employees are replaced and can earn no money, there is no economy. Now the elite have two major problems:

    a) What do they take a cut of to keep getting richer?

    b) How long will they be safe when the resentment eventually boils over? (There's a reason the doomsday bunker industry is booming.)

    My hunch is, after a period of turmoil, we'll end up in the usual equilibrium where the rest of the world is kept economically growing just enough to keep (a) us stable enough not to revolt and (b) them getting richer. I don't know what that looks like, could be UBI or something. But we'll figure it out because our incentives are aligned: we all want to stay alive and get richer (for varying definitions of "richer" of course.)

    However, I suspect a lot will change quickly, because a ton of things that made up the old world order are going to be upended. Like, you used to need millions in funding to hire a team to launch any major software product; this ultimately kept the power in the hands of those with capital. Now a single person with an AI agent and a cloud platform can do it themselves for pocket change. This pattern will repeat across industries.

    The power of capital is being disintermediated, and it's not clear what the repercussions will be.

  • denkmoon a day ago

    A labouring proletariat with bread and circuses is a distracted proletariat. Billionaires are still flesh and blood, much like Louis XVI and Charles I.

    • AndrewKemendo a day ago

      Are you actually doing anything in that direction or is this “tough guy on the internet?”

      I see literally zero people doing the equivalent of “breaking the factories” like the luddites attempted

      • denkmoon a day ago

        We're not there yet. The luddite movement formed and acted over decades not years.

        Do you not see the overwhelmingly negative response to AI produced goods and services from the average westerner?

        • AndrewKemendo a day ago

          So, no then. Like I said upstream, nobody is going to do anything about it.

          At a certain point it’s too late.

      • tejohnso a day ago

        I think we'd need a lot more suffering before we have enough people to start that kind of action. If we see 35% unemployment over the next 5 years with insufficient time to adjust, then maybe the pitchforks come out.

        • AndrewKemendo a day ago

          So then we should just go slightly slower?

          What if it’s over 10 years?

          • tejohnso a day ago

            Well, time is one aspect but we'd also need motivation and proper execution for a reasonable chance at successful adaptation. My guess is we'll coast along the boundary. I don't imagine things will move so fast as to cause the sort of general upheaval that I think you're talking about. But I do think things will move fast enough to cause significant harm on a larger scale than we've seen recently in the West.

            • AndrewKemendo a day ago

              Yeah, I agree that’s the most likely future.

  • measurablefunc a day ago

    What you fail to understand Bob is that as long as we let the billionaires do what they want then we all automatically win. That's just how the system is designed to work, we can't lose as long as Musk & his buddies are at the helm.

    • munificent a day ago

      Gazing up at them adoringly, mouth open, waiting for it all to trickle down on my face.

      • measurablefunc a day ago

        It's the only thing us plebeians can hope for. When all is said & done the people at the top are the only ones that can truly create wealth w/ their innovative genius. The rest of us should just shut up & follow their orders for our own good.

        • drdaeman a day ago

          That would be a thing if wealth correlated with innovation. I’m afraid the correlation is inverse in way too many cases.

          • munificent a day ago

            This comment thread is being sarcastic.

esafak a day ago

This looks like a survey. Is there a thesis; any claim?

  • tejohnso a day ago

    Sounds like OP is happy to be alive at this moment, reveling in the wonder of it all, and wanting to share.

TacticalCoder a day ago

> As Rocks May Think

I thought they meant the plural of ASRock as in "ASRocks May think" and thought this was about ASRock motherboards getting a BIOS/UEFI with an integrated LLM or something.

macrocosmos a day ago

> AI generated videos are indistinguishable from reality.

alsetmusic a day ago

This person doesn't understand how LLMs work.

  • pradeesh a day ago

    Not sure how you could read this essay and come to that conclusion. It definitely aligns with my own understanding, and his conclusions seem pretty reasonable (though the AI 2027/Situational Awareness part might be arguable)

    • alsetmusic a day ago

      Absolutely:

      > In order to predict where thinking and reasoning capabilities are going, it's important to understand the trail of thought that went into today's thinking LLMs.

      No. You don't understand at all. They don't think. They don't reason. They are statistical word generators. They are very impressive at doing things like writing code, but they don't work the way that is being implied here.

      • DennisP 5 hours ago

        This is an outdated view.

        People get the idea that just because LLMs are initially trained to predict word sequences, that's all they can do. This is not the case.

        Transformers are general-purpose learning mechanisms. Word sequences are just the first thing we teach them. Then we train them more with human feedback. Then we sometimes hook them up to math and logic engines to train them some more, so they learn logical and mathematical reasoning. The article describes this process in a bit of detail.

        With "reasoning models" we also let the model have some internal monologue before generating output, so it can "think through" the problem.

        We didn't do all that with early LLMs. Those were just word predictors. But now we do those things, and that's why our lowly LLMs are writing huge software projects that actually work, and solving famous math problems that have been open challenges for half a century.

        The weirdest part of all this is that LLMs started showing signs of reasoning with an internal world model even before we trained them for it specifically. Microsoft showed this in a famous paper back in (iirc) 2023. They showed that, for example, you could give GPT4 a list of odd-shaped objects that probably wouldn't appear together in any particular source text, ask GPT4 how to stack them so they wouldn't fall over, and GPT4 would come up with a good solution.

        Finally, don't overlook the multimodal models, which work explicitly with images, video, and 3D world models in addition to text.
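
        The "train them more with feedback" stage can be previewed with the simplest possible stand-in: draw several samples and let a reward signal pick the winner (best-of-n selection). Everything here is a toy illustration, not a real RLHF pipeline; all names are made up:

```python
import random

# Best-of-n selection: a crude stand-in for "a reward signal steers a raw
# sampler toward better outputs." The generator just emits random numbers;
# the reward prefers values near a target. Purely illustrative.

def sample_output(rng):
    return rng.random()          # stand-in for sampling a model answer

def reward(x, target=0.5):
    return -abs(x - target)      # stand-in for a learned reward model

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    samples = [sample_output(rng) for _ in range(n)]
    return max(samples, key=reward)

# With a fixed seed, best_of_n(100) picks from a superset of the
# candidates best_of_n(1) saw, so its selected reward can only be >=.
```

        Real post-training replaces this selection step with gradient updates, but the shape of the idea is the same: a raw predictor plus a scorer yields better outputs than the raw predictor alone.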

  • DennisP a day ago

    Care to be more specific?

    • alsetmusic a day ago

      See reply in related thread.

measurablefunc a day ago

Yes, yes, we are all going to be living in an automated & luxurious communist utopia. Here are some material facts to ground the exuberance: 1) the lifecycle of a typical GPU in a data center is 1-3 years, 2) buildout is already limited by production capacity & will hit production walls by 2027-2028 when turnover matches & exceeds production capacity, 3) TSMC's projected capacity is ~130k wafers/month & it is not keeping up w/ demand, which is more than doubling, 4) these "geniuses" & "thinking rocks" in data centers require lots of power, & grid capacity was saturated in 2025, 5) so along w/ production capacity limitations, power production is now another gating factor.

Anyway, like data centers in space, there are lots of material limitations that these exuberant "ZOMG rocks can think now" essays sweep under the rug to drive a very biased narrative about what is actually happening. All those binary bits are produced by real materials that have lifecycles & production limits not visible in the digital artifacts.

  • zozbot234 a day ago

    GPU lifecycle is 1-3 years because GPUs are becoming obsolete for cutting-edge work (especially re: power use) in that timeframe. This is good news if you'd like to see expanded use of AI. Production walls at fabs will matter little since future silicon dies will be capable of far more per unit area than current ones, so there will be plenty of incentive to upgrade.

    • measurablefunc a day ago

      I'm just stating facts. I don't care whether it's a good or bad thing. You can theorycraft about future utopias w/ computronium as much as you want but the facts as they stand today are what I stated.

      • groby_b a day ago

        "I don't care whether it's a good or bad thing" is not really a believable statement given your polemic closings.

        • measurablefunc a day ago

          If you think the facts are "good" or "bad" then take it up w/ the people who can do something about it to make them "better". Typical discussions about stuff like this become nonsensical & incoherent b/c whether you think the facts are "good" or "bad" makes no difference to the material reality, &, again, the facts are as I have stated them.

mynameisjody a day ago

I'm still waiting for one of these articles to be written by someone with nothing to gain directly from the hype. Eric Jang, VP of AI at 1X.

  • johnfn a day ago

    The previous post in this blog is titled "Leaving 1X". So your wait is over!

    • Paracompact a day ago

      Very, very likely he is remaining in the AI space.

    • RealityVoid a day ago

      Unrelated, but it seems his previous company, 1X, was initially named Halodi and was located in Norway. Eventually it was moved, with all employees, to Silicon Valley. How does that work? It sounds like a logistical nightmare. Do you upend all those people's lives? Do you fire those who refuse? How many Norwegians even want to go to the US? Sounds crazy to me.

      • xg15 a day ago

        Did they actually move or is it just a "remote-first" company now?

        (Or even just registered in SV but still physically in Norway?)

        Edit: Seems like a mix of all of it:

        > I joined Halodi Robotics in 2022 (prior name of the company) as the only California-based employee. At the time, we were about 40 based out of Norway and 2 in Texas.

      • jrmg a day ago

        How do they all get work visas?

xyzsparetimexyz a day ago

[flagged]

  • zozbot234 a day ago

    Nah, the ugliest prose is clanker prose and this definitely isn't. This stuff comes 100% from an actual carbon-based lifeform.

    • akovaski a day ago

      I think that Gemini regularly generates inane metaphors like the above. As an example, here's a message that it sent me when I was attempting to get it to generate a somewhat natural conversation:

      ----

      Look, if you aren't putting salt on your watermelon, you’re basically eating flavored water. It’s the only way to actually wake up the sweetness. People who think it’s "weird" are the same ones who still buy 2-in-1 shampoo.

      Anyway, I saw a guy at the park today trying to teach a cat to walk on a leash. The cat looked like it was being interrogated by the FBI, just dead-weighting it across the grass while he whispered "encouragement."

      Physical books are vastly superior to Kindles solely for the ability to judge a stranger's taste from across a coffee shop. You can’t get that hit of elitism from a matte gray plastic slab.

      ----

      This was with a prompt telling it to skip Reddit-style analogies.

      • wtetzner a day ago

        I buy 3-in-1 shampoo, conditioner and body wash.

    • beeflet a day ago

      Who is the wise guy that gave water the ability to think

  • appellations a day ago

    Author is Vice President of AI, 1X Technologies.

  • kagol a day ago

    Curious about the root of your distaste. Just a bad analogy/visualization?

  • netsharc a day ago

    The article goes from philosophical (what AI will do to society) to jargony blowhard, then to an even deeper technical look (I think; I flicked my thumb past several screens of text), and back out again.

    Come on author, learn to write properly. Or tell your LLM to not mix a philosophical article with a technical one.

kalterdev a day ago

> Chief among all changes is that machines can code and think quite well now.

They can’t and never will.

  • johnfn a day ago

    Are you really claiming that there isn't a machine in existence that can code? And that that is never possible?

    • kalterdev a day ago

      It can code in an autocomplete sense. In the serious sense, if we don’t distinguish between code and thought, it can’t.

      Observe that modern coding agents rely heavily on heuristics. An LLM excels at its training datasets, at analyzing existing knowledge, but it can’t generate new knowledge at the same scale; its thinking (a process of identification and integration) is severely limited at the conscious level (the context window), where being rational is most valuable.

      Because it doesn’t have volition, it cannot choose to be logical and not irrational, it cannot commit to attaining the full non-contradictory awareness of reality. That’s why I said “never.”

      • johnfn a day ago

        > It can code in an autocomplete sense.

        I just (right before hopping on HN) finished up a session where an agent rewrote 3000 lines of custom tests. If you know of any "autocomplete" that can do something similar, let me know. Otherwise, I think saying LLMs are "autocomplete" doesn't make a lot of sense.

        • emp17344 14 hours ago

          That’s neat, but it’s important to note that agentic systems are more than just the LLM. You have to take into account all the various tools the system has access to, as well as the agentic harness used to keep the LLM from going off the rails. And even with all this extra architecture, which AI firms have spent billions to perfect, the system is still just… fine. Not even as good as a junior SWE.

        • kalterdev a day ago

          That’s impressive. I don’t object to the fact that they make humans phenomenally productive. But “they code and think” makes me cringe. Maybe I’m confusing lexicon differences for philosophic battles.

          • johnfn 12 hours ago

            Yes, I think it is probably a question of semantics. I imagine you don't really take issue with the "they code" part, so it's the "they think" thing that bothers you? But what would you call it if not "thinking"? "Reasoning"? Maybe there is no verb for it?

      • libraryofbabel a day ago

        Some of that is true, sure, but nobody who claims LLMs can code and reason about problems is claiming that they operate like humans. Can you give concrete examples of actual specific coding tasks that LLMs can’t do and never will be able to do as a consequence of all that?

        • kalterdev a day ago

          I think it can solve just about any leetcode problem. I don’t think it can build an enterprise-grade system. It can be trained on an existing one, but these systems are not closed, and no past knowledge seems to predict the future.

          That’s not very specific but I don’t have another answer.

    • wavemode a day ago

      I think "quite", "well", and "now" are the objectionable parts of the quote.