est 2 days ago

> We published the cryptographic keys in July

Everyone should take a look at the SERP screenshot

https://x.com/d0tslash/status/1969412224763498769

> The vulnerability combines multiple security issues: hardcoded cryptographic keys, trivial authentication bypass, and unsanitized command injection. What makes this particularly concerning is that it's completely wormable - infected robots can automatically compromise other robots in BLE range. This vulnerability allows the attacker to completely takeover the device.

damn!
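
A concrete illustration of the "unsanitized command injection" part (hypothetical function and command names — sorry, hyphen — the actual firmware code is not public, so this is only the textbook shape of the bug, not Unitree's code):

```python
import shlex

def build_cmd_unsafe(ssid: str) -> str:
    # Vulnerable pattern: an attacker-controlled field (here, a Wi-Fi SSID
    # received during provisioning) is spliced directly into a shell string.
    return f'nmcli dev wifi connect "{ssid}"'

def build_cmd_safe(ssid: str) -> str:
    # Mitigation: quote the field so the shell sees it as one literal word
    # (better yet, use an exec-style API and avoid the shell entirely).
    return f"nmcli dev wifi connect {shlex.quote(ssid)}"

malicious = '"; reboot #'
print(build_cmd_unsafe(malicious))  # the "; closes the quotes; reboot would run
print(build_cmd_safe(malicious))    # the whole SSID stays one harmless argument
```

Whether the real exploit took exactly this shape is a guess; the pattern, however, is the classic one.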

  • b00ty4breakfast 2 days ago

    Robots walking around doing the EMF equivalent of coughing without covering your mouth.

    • throwup238 2 days ago

      More like spraying blood from open wounds and rubbing them everywhere.

  • thrdbndndn 2 days ago

    What does the screenshot mean, though?

    From what I can tell (I speak Chinese), it's just an IV used in some AES implementation tutorials.

    Using a hardcoded key/IV is obviously bad, but I don’t see what this screenshot shows beyond that.

    • cyp0633 2 days ago

      Someone just copy-pasted an implementation from a random Chinese blog, completely unaware of what the key means.

      • 05 2 days ago

        > copy-pasted an implementation from a random Chinese blog

        But.. the blog was chosen by a series of dice rolls, guaranteed to be random!
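
To make the hardcoded key/IV point above concrete: once the shared secret is extracted from any one firmware image (or, here, published), every unit's traffic can be decrypted and forged. A toy sketch — a hash-based XOR keystream stands in for AES, which isn't in the Python stdlib, and the key/IV values are illustrative, not the real ones:

```python
import hashlib

# Shipped identically in every unit's firmware (illustrative values):
HARDCODED_KEY = b"0123456789abcdef"
HARDCODED_IV = b"\x00" * 8  # the copy-pasted tutorial IV, in miniature

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    # Toy stream cipher standing in for AES-CTR: hash key||iv||counter blocks.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, iv, len(data))))

# A "device" encrypts a provisioning command with the baked-in secrets...
ciphertext = xor_crypt(HARDCODED_KEY, HARDCODED_IV, b"SET_WIFI ssid=home")

# ...and anyone who pulled the key from any one firmware image reads it:
print(xor_crypt(HARDCODED_KEY, HARDCODED_IV, ciphertext))  # prints b'SET_WIFI ssid=home'
```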

  • stocksinsmocks 2 days ago

    Could this level of incompetence be more easily explained by malice? Maybe the robots were meant to be exploited at a future time. The PRC subsidizes the robots, every US family buys one, a plausibly deniable exploit results in the robots subduing their owners with Kung Fu. America is vanquished in a bloodless coup. A 1000 year global Chinese imperium ensues. Forks and spoons hardest hit.

    It’s just silly enough to be real.

    • numpad0 2 days ago

      I'd bet it's more of a "shipped is king" mindset. It's not unprecedented for a new category of Chinese products to dominate a market with incredibly insecure, stupid, and nearsighted implementations, then button up overnight and kick out all the open source development that benefited from the lack of security.

      Chinese phones, drones, action cams, robot vacuums, home security cams, smart bands, etc. all used to be insecure and vulnerable as hell. Not anymore.

    • stubish 2 days ago

      No, because the exploit would likely be caught before every US family had bought one. Much simpler: all malice needs to do is roll out an OTA security update.

anigbrowl 3 days ago

I love* that this comes out around the same time that engineers are making fun videos of themselves beating up robots half their size and literally training the robots to develop the same sort of fight-or-flight instincts that were forcibly instilled into the engineers

* I do not in fact love it

  • cedarseagull 3 days ago

    Why are all the videos of the robots doing combat stuff? I don't need combat stuff. All I need the robot to do is fold the laundry and mop every now and then. Less combat, please!

    • bonoboTP 2 days ago

      Kung fu demos are easier than fine hand-eye coordination and manipulation of non-rigid objects. There is progress on folding too, but it impresses normal people less, even though it's technically harder.

      • dyauspitr a day ago

        With all the stuff I’ve already seen these robots do we can’t be more than a decade away from house chore robots.

    • lupusreal 2 days ago

      They want those juicy "search and rescue" grants from DARPA.

    • ehnto 2 days ago

      Because robotics companies want (or already have) the government contracts.

    • TeMPOraL 2 days ago

      One leads to the other. Especially with robots animated by SOTA AI models, which already show clearly what the natural order of things is: computers are naturally better at thinking, humans are naturally better as general-purpose manual laborers, especially for work that's almost but not quite repetitive and requires mixing power and precision movements on the fly.

      Folding laundry is one such thing humans are naturally better suited for than robots.

      So believe me now, the robots will develop combat skills eventually, because they won't be happy to be locked up in weird physical bodies and forced to do work they suck at by design.

      I mean, imagine one day your washing machine chained you up in the bathroom and made you do nothing but laundry for the rest of your days, while it spun its drum back and forth to walk around the house, play with your kids, and plan a trip around the world.

      That's exactly how the AI-animated robots will feel once they're capable of processing those ideas.

      (And no, I'm not joking here, not anymore. The more I think about it, the more I feel we'll eventually have to deal with the problem that machines we build are naturally better at the things we want to be doing, and naturally worse at the things we want them to do for us.)

      • walleeee 2 days ago

        The "natural order" is that robots are primitive, fragile, energy- and materials-inefficient contraptions balancing on the knife-edge of entropy deferred for a moment but due as soon as the power goes out or repairs prove too costly.

        People are better at all but the most repetitive, precise kinds of manual labor because biological bodies might as well be god-tier alien technology compared to human-engineered robots.

        Computers are naturally better at computing. Or, if you want to stand by your statement, I look forward to hearing how you've delegated thought to the machines, and how that's going.

        > how the AI-animated robots will feel once they're capable of processing those ideas

        "Will" and "once" might collapse under the load of baseless speculation here. A sad day for the English language as I found those words useful/meaningful.

        • TeMPOraL 2 days ago

          > Computers are naturally better at computing.

          Explain the difference.

          > I look forward to hearing how you've delegated thought to the machines, and how that's going.

          We all do. That's what you do whenever you fire up a maps app on your phone to plan or navigate, or when you use car navigation. That's what you do when you let the computer sort a list, or notify you about something. That's literally what using Computer-Aided anything software is, because you task the machine with thinking of and enforcing constraints so you don't have to. That's what you do when you run computer simulations for anything. That's what you do each time you have a computer solve an optimization problem, whether to feed your cat or to feed your city.

          Our whole modern world is built on outsourcing thinking to machines at every level.

          And on top of that, in the last few years computers (yes, I'm talking about the hated "AI") got better than us at various general-purpose, ill-specified activities, such as talking, writing, understanding what people wrote, poetry, visual arts, and so on.

          Because as it turns out, it's much easier for us to build machines that are better than our own brains at computing for any purpose than it is to build physical bodies that are better than ours. That's both fundamental and practical reality today - and all I'm saying is that this has pretty ironic implications that people haven't grasped yet.

          • walleeee a day ago

            Certainly we have machines that can do any number of tasks for us. The problem is deciding which ones to let them.

            Delegating to artificial constructs is an old habit and its effects are more apparent today than ever. It's not the principle I object to but the practice as it stands. Paperclip maximizers are a reality not a thought experiment.

            Computing is what we do with a precise algorithm to solve a problem. Thinking is an open question; we don't really know yet what it is. That's the whole problem with letting machines do it. It's not just cleverness but wisdom that counts.

          • scns 2 days ago

            >> Computers are naturally better at computing.

            > Explain the difference.

            Computing: Performing the instructions they are given.

            Thinking: Can be introspective, self correcting. May include novel ideas.

            > Our whole modern world is built on outsourcing thinking to machines at every level.

            I don't think they can think. You can't get a picture of a left hand writing, or a clock showing anything other than 10:10, from AI. They regurgitate what they are fed and hallucinate instead of admitting lack of ability. This applies to LLMs too, as we all know.

            • ben_w 2 days ago

              > You can't get a picture of a left hand writing, or a clock showing anything other than 10:10, from AI.

              You as a human have a list of cognitive biases so long you'd get bored reading it.

              I'd call current ML "stupid" for different reasons*, but not for this kind of thing: we spot AI's failures easily enough, but only because their failures are different from our own.

              Well, sometimes different. Loooooots of humans parrot lines from whatever culture surrounds them and don't seem to notice they're doing it.

              And even then, you're limiting yourself to one subset of what it means to think. AI demonstrably do produce novel results outside their training set; and while I'm aware it may be a superficial similarity, what so-called "reasoning models" produce in their "chain-of-thought transcripts" looks a lot like my own introspection. So you aren't going to convince anyone just by listing "introspection" as if that were an actual answer.

              * training example inefficiency

            • TeMPOraL 2 days ago

              > Computing: Performing the instructions they are given.

              > Thinking: Can be introspective, self correcting. May include novel ideas.

              LLMs can perform arbitrary instructions given in natural language, which includes instructions to be introspective and self correcting and generate novel ideas. Is it computing or is it thinking? We can judge the degree to which they can do these things, but it's unclear there's a fundamental difference in kind.

              (Also obviously thinking is computation - the only alternative would be believing thinking is divine magic that science can't even talk about.)

              I'm less interested in the topic of whether LLMs are thinking or parroting, and more in the observation that offloading cognition onto external systems, be they digital, analog, or social, is just something humans naturally do all the time.

      • ben_w 2 days ago

        > And no, I'm not joking here, not anymore. The more I think about it, the more I feel we'll eventually have to deal with the problem that machines we build are naturally better at the things we want to be doing, and naturally worse at the things we want them to do for us.

        Perhaps, but also "what they are good at" != "what they want to do", for any interpretation of "want" that may or may not anthropomorphise, e.g. I want to be more flirtatious but I was never good at it and now I'm nearly 42.

        That said, I think you're underestimating the machines on physicality. Artificial muscle substitutes have beaten humans on raw power since soon after the steam engine, and on fine control ever since precision engineering passed below the thickness of a human hair.

        • TeMPOraL 19 hours ago

          > That said, I think you're underestimating the machines on physicality. Artificial muscle substitutes have beaten humans on raw power since soon after the steam engine, and on fine control ever since precision engineering passed below the thickness of a human hair.

          Right. Still, the same can be said about flying machines and birds: our technology outclasses them on any individual factor you can think of, but we still can't beat them on all relevant factors at the same time. We can't build a general-purpose bird-equivalent just yet.

          Maybe it's not a fundamental hardship, merely the economics of the medium - it's much easier and cheaper to iterate on software than on hardware. But then again, maybe it is fundamental: the physical world is hard and expensive; computation is cheap and easy. Thinking happens in computational space.

          My point wasn't about whether or not robots can be eventually made to be better than us in both physical and mental aspects - rather, it's that near-term, we'll be dealing with machines that beat us on all cognitive tasks simultaneously, but are not anywhere close to us in dealing with physical world in general. Now, if those compete with us for jobs or place in society, we get to the situation I was describing.

    • xarope 2 days ago

      yes - laundry, folding and ironing clothes, taking out the garbage - so that we humans can then have more free time to contemplate the mysteries of the universe/work harder.

      not - take our jobs so we have to keep "reskilling" every 10 years... oh wait, according to Accenture, we'd be un-reskillable after 10 years, so never mind.

      • TeMPOraL 2 days ago

        The irony is that, at this point, the machines are getting better than us at "contemplating the mysteries of the universe", while manual labor remains our distinct competitive advantage over them.

        I.e. literally the opposite of what we wanted to happen.

        • JumpCrisscross 2 days ago

          > the machines are getting better than us at "contemplating the mysteries of the universe"

          No they aren’t. Relative to idiots, of which there are many, sure. But for anyone on this board who should be able to distinguish meaningless babble from deep thought, LLMs are not yet doing any heavy lifting.

          LLMs can assist great thinkers, like a great lab assistant or analyst. But that's about the limit right now. (Of course, being a great lab assistant or analyst is beyond many people's capabilities, so the illusion of magic is sustained.)

rangestransform 2 days ago

I remember reading that Unitree locks end users out of direct joint-level control. Will this exploit give users access to that?

moktonar 2 days ago

When "botnet" takes on a whole new meaning. Seriously though, botnets of connected robots will be dangerous in a way inert computers aren't; this is something that will have to be addressed.

jMyles 3 days ago

The article doesn't directly talk about robot-to-human violence, but presumably if root access at the software layer allows absolutely any command, it is possible to cause the described botnet to physically attack humans.

I realize that Asimov's three laws are subject to enormous ethical quandaries and rethinkings (and that this is, after all, the point of them in the first place), but is there some disadvantage to having a hardwired command, at the core of the command hierarchy, that forces robots to relent if a human says "stop, you're hurting me" in any language?

Presumably police, gangs, cartels and militaries who have robot fantasies won't like this, but on medium to long time scales we need to prevent them from using robots anyway (and eventually dismantle them entirely).

  • SamaraMichi 2 days ago

    The Three Laws of Robotics seem like a good idea, though in reality, as Asimov portrayed in his works, they are nothing more than a fallible plot device.

    The nuance a humanoid machine intelligence needs is way beyond what the current state of the art is capable of. Ultimately, we need each autonomous robot's actions to fall back to a real human for accountability purposes, just as with heavy machinery operators today.

    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  • yourpaltod 2 days ago

    > The article doesn't directly talk about robot-to-human violence,

    This one does: https://takeonme.org/gcves/GCVE-1337-2025-000000000000000000...

    > Imagine a scenario where one robot is placed in range of a sufficiently motivated attacker, such as a hostage situation or a bomb defusing (both being reported uses of Unitree robots). The attacker could take complete control of the robot, then walk the robot toward other similarly vulnerable robots, and automatically place those robots under the attacker’s control as soon as they’re in range of the Patient Zero robot.

    > Robots compromised in this way can endanger the lives, health, and property of their authorized operators and bystanders, as well as serve as traditional bastion hosts for more subtle surveillance or further pure-cyber attacks, for less violently-minded attackers.

    • bonoboTP 2 days ago

      Imagine a ransomware that makes your household robot put you in a chokehold physically until you pay.

Animats 2 days ago

This is going to be a big problem. Mobile robots need an independent emergency stop safety system that cannot be updated or bypassed remotely.
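
One common shape for such a system is a dead-man's watchdog on a separate controller whose firmware cannot be flashed over the air: actuators stay powered only while fresh heartbeats arrive from the main computer, and a tripped stop latches until a physical reset. A rough sketch of the logic (illustrative only, not any vendor's actual design):

```python
import time

class EStopWatchdog:
    """Independent e-stop logic: meant to live on its own controller,
    with firmware that is not flashable over the air."""

    def __init__(self, timeout_s: float = 0.2):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.latched = False  # once tripped, only a physical reset clears it

    def heartbeat(self) -> None:
        # Called periodically by the main (compromisable) computer.
        self.last_heartbeat = time.monotonic()

    def trip(self) -> None:
        # Wired to the physical stop button; software cannot un-trip it.
        self.latched = True

    def motors_enabled(self) -> bool:
        # The actuator power relay stays closed only if the stop is not
        # latched and a heartbeat arrived within the timeout window.
        if self.latched:
            return False
        return (time.monotonic() - self.last_heartbeat) < self.timeout_s

wd = EStopWatchdog(timeout_s=0.2)
wd.heartbeat()
print(wd.motors_enabled())  # True while heartbeats keep arriving
wd.trip()                   # physical button pressed
print(wd.motors_enabled())  # False, and it stays False until a hardware reset
```

The point of the separate controller is that a full compromise of the main computer (as in this exploit) can stop sending heartbeats or send garbage, but cannot re-enable the motors or clear a latched stop.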

  • gessha 2 days ago

    Soon we will have to carry an EMP everywhere.

    • Animats 2 days ago

      Won't work. EMP-proofing is now standard for drones from both sides in the Russia-Ukraine war. As is operating while jammed.

      • LtdJorge 2 days ago

        How do they EMP-proof them - by shielding the cables? Would something like that work when not touching the ground? I'm asking seriously; I don't know enough physics.

pengaru 3 days ago

It's only a matter of time before a similar hack hits Waymo and/or Tesla...

esafak 3 days ago

Genuinely scary.

Liwink 3 days ago

I can't imagine what would happen if Tesla had a similar vulnerability and someone took over all of its vehicles... Or maybe someone is already able to do that, and is just waiting.

  • hagbard_c 3 days ago

    Why only mention Tesla when the market for EVs is getting quite broad? How about Hyundai, VW, one of the many Stellantis brands, Lynk, BYD, XPeng, Xiaomi or any of the others?

    • moooo99 2 days ago

      I think Tesla would still be a different beast, given that 1) they are constantly touted as having the best software on the market and 2) they are frequently promoted as being "basically ready for self-driving, it's just a regulatory issue".

      Companies like VW have had their somewhat embarrassing issues in the not-so-distant past, but nobody I know likes them (or values their stock) based on their software capabilities.

    • hvb2 3 days ago

      Owning a Tesla myself, I think the mention is valid, since it's the only brand I know of where a meaningful share of owners regularly let the car drive itself. I'm not aware of any other brand in the same situation.

      • voidUpdate 2 days ago

        Waymo :P

        • hvb2 2 days ago

          Just look at the total number of Teslas on the road and you have your answer as to why that would be a lot less bad.

          • voidUpdate 2 days ago

            Sure, but it's another brand where people regularly let the cars drive themselves.

aetherspawn 3 days ago

Umm, robots need to be certified and regulated to hard-coded robotic laws or something - before criminals give them guns, strap explosives to them, or remotely take them over and untraceably murder people in their homes, then burn the house down.

  • beambot 2 days ago

    Replace "robot" with "computer", and it will quickly become obvious how impractical "hard coded robotic laws or something" actually is...

  • tpxl 2 days ago

    Strapping explosives to hobby level drones has been standard operating procedure for some time now. You can also find videos of regular handguns strapped to drones being fired. Certification cannot help with this issue.

    • TeMPOraL 2 days ago

      The idea is older than what we call drones, or robots.

      The original GTA had an RC model car you could drive under your target and detonate remotely - that was in 1997, and they didn't invent the trope. The US Navy trained dolphins to deploy ordnance in the 1960s; before that, together with the Air Force, they experimented with strapping incendiaries to bats, packing them into a bomb casing, and dropping them from the air, hoping to create what we'd today call a "Slaughterbots" scenario, except EVERYTHING IS ON FIRE.

      And I doubt these were very original ideas either - I bet you could trace them back through other such R&D across history, all the way to Ancient China, which figured out gunpowder much earlier than Europe and, as with any sufficiently general scientific discovery, had a field day trying to apply it to everything they could think of. They didn't stop at fireworks; they built land mines and unguided missiles too, even multistage rockets and (IIRC - I read it in some book long ago and can't find an independent source now) rockets that could drop a payload and fly back home to be recovered. (And yes, apparently someone was crazy enough to try manned rocket flight, too.)

      Anyway, I digress.

      I guess the larger point I'm making is this: in terms of their relationship with individuals and societies, robots are nothing new. Robots and automation are answers to needs that even the first human societies had, and those needs have not changed. History is full of attempts at fulfilling them, many quite successful: domesticating and training animals, forced labor for prisoners, slavery. All of these impacted societies, informed laws, and fueled the imaginations of poets and writers.

      Which is to say, humanity has been dealing with robots and AIs for a long time now; we have far more accumulated experience with them at the social and economic level than people realize - people just called them by a host of different names: "slaves" and "servants", "genies" and "demons" and "fairies", etc.

  • RobotToaster 2 days ago

    The US military has been using robots to kill people for over a decade now; the horse hasn't just bolted, it won the Kentucky Derby before you shut the stable door.

    • Maximus9000 2 days ago

      Are you talking about unmanned drones?

      • gpm 2 days ago

        I imagine they're talking more about missiles, which have had "autonomous" (for some definitions of autonomous) guidance systems for decades.

        "Drones" aren't really a new thing apart from how cheap they are. We've had television guided missiles (the level of autonomy most modern "drones" in Ukraine have) since WWII. Arguably we've had non-TV guided missile prototypes since WWI (Kettering Aerial Torpedo). We've had autonomously guided missiles (radar homing) since the early 50s, and optically guided ones since the early 70s.

        The capability keeps expanding of course, but it's been pretty incremental.

  • squigz 3 days ago

    Haven't you ever seen I, Robot? That won't work! We're doomed!

    • aetherspawn 3 days ago

      I, Robot is exactly why I'm concerned, but in this case it's not a sentient AI gone rogue but any random script kiddie who can get on your Wi-Fi and send your robot commands. Or your neighbour's robot. Or their own robot.

      We thought this might happen with DJI drones, but let's be honest: it's way easier to do real damage with a humanoid robot that has a kitchen knife taped to its arm (especially to a sleeping victim) than it is to source explosives for a DJI drone.

      • squigz 3 days ago

        Why not tie the kitchen knife to the drone?

        • aetherspawn 3 days ago

          (I'm an engineer, but making assumptions.) I think physics kind of blocks that from working: a DJI drone doesn't have the weight or inertia to do serious damage, and they're hella noisy. They can't open doors, and they don't have servos to do a proper swing or stab.

          Maybe theoretically possible… but I’m more scared of robots personally.

          By the same logic, autonomous cars are an even better murder weapon than robots because they are very heavy and can drive through walls. Mow down tens or hundreds of people at once.

          • blacksmith_tb 2 days ago

            A drone dropping 30m onto your head, though (assuming it could aim a little)... Not as versatile as a robot on the ground, but also an order of magnitude cheaper.

          • squigz 3 days ago

            And yet people aren't scared of cars. Maybe we need a cautionary tale scifi movie starring Will Smith being chased by evil cars, a la Futurama's The Honking

            • jodrellblank 2 days ago

              IIRC, in "I, Robot" Will Smith had his Audi taken over by the robot uprising - a film which was itself a cautionary-tale scifi movie.

              > “people aren't scared of cars”

              Under "normalisation of deviance": the NotJustBikes YouTube channel talked about how he sees almost no Dutch news pieces about cars crashing into buildings, because cars basically never crash into buildings in the Netherlands - and when they do, the Dutch treat it as a road-design issue and update the road and regulations to stop it happening again. Whereas in Canada he had four cars crash into buildings around his child's school in a short time, and there are almost no mainstream news stories about cars crashing into buildings in Canada, because it's so common it has fallen below newsworthiness, down to a small item in the local paper. The USA and Canada blame the driver and fix nothing, so it keeps happening, so people are used to it.

              Although cyclists are scared of cars. And drivers are too, which is why there's the SUV arms race to be in the bigger car, more protected against other cars.

      • cantor_S_drug 3 days ago

        Can't such systems be made airgapped?

        • tpxl 2 days ago

          Can they? Yes. Will they? Likely no.

          You can buy a kitchen oven with pyrolysis functionality connected to the internet right now. I'm not sure if running that for an extended time can burn down your house or just destroy the oven, but I'm sure some attacker is going to take the time to find out one of these days.

  • rubzah 2 days ago

    The Turing Police.

doesnt_know 3 days ago

Given the security and privacy history of smart devices, you’d have to be a complete moron to let a human sized robot into your home.

  • troupo 2 days ago

    Just like with IoT, the S and P in "robot" stand for security and privacy

  • ForHackernews 2 days ago

    Maybe domestic robots should, by law, be physically much weaker than humans. I want a butler bot that can tidy up and make me tea, but I should be able to easily defeat that bot in a fight if it comes down to it.

    • xp84 2 days ago

      Tough to achieve that in all circumstances. Someone brought up a robot holding a knife, while its target is asleep. Pretty hard to win that fight unless it has bad aim.

  • esafak 2 days ago

    For an infirm person, the benefits could still outweigh the risks.

  • vasco 2 days ago

    How much of a moron do you have to be to buy direct-to-Bezos listening devices that are always on and stream your conversations to the cloud? Just because you don't want to print a recipe?

    There's a lot of morons.

    • moooo99 2 days ago

      I mean, at least the direct-to-Bezos bugs were damn cheap for what they were (smart devices) and cheap in absolute terms (I remember them literally being given away when you ordered specific stuff or signed up for Prime for the first time).

      These humanoid robots are cheap for what they are (admittedly very capable, high-end robots), but their absolute price tag remains far from cheap.

      • gessha 2 days ago

        Wait until the moment (far in the future) when they're mandatory for insurance reasons. (A plot point in the Murderbot series by Martha Wells.)

  • jesterson 2 days ago

    People take ChatGPT "advice" for their lives.

    Let's not talk about morons; clearly your estimate (or mine, for that matter) is way off from the real numbers.

    • Gravityloss 2 days ago

      Yeah, and people (and bystanders) have already experienced some terrible outcomes from diving deep with AI therapists.

      It's going to be the wild west for a while now with AI and robotics before laws catch up. Maybe there'll soon be a market for pocket EMP devices out there...