keeda 3 days ago

I strongly believe LIDAR is the way to go and that Elon's vision-only move was extremely "short-sighted" (heheh). There are many reasons, but the one that drives it home for me multiple times a week is that my Tesla's wipers will randomly sweep the windshield for absolutely no reason.

This is because the vision system thinks there is something obstructing its view when in reality it is usually bright sunlight -- and sometimes, absolutely nothing that I can see.

The wipers are, of course, the most harmless way this goes wrong. The more dangerous type is when it phantom-brakes at highway speeds with no warning on a clear road and a clear day. I've had multiple other scary incidents of different types (swerving back and forth at exits is a fun one), but phantom braking is the one that happens quasi-regularly. Twice when another car was right behind me.

As an engineer, this speaks volumes to me about what's going on in the computer vision system, and it's pretty scary. Basically, the system detects patterns it infers as its vision being obstructed, and so it is programmed to brush away some (non-existent) debris. Like, it thinks there could be a physical object where there is none. If this were an LLM you would call it a hallucination.

But if it's hallucinating crud on a windshield, it can also hallucinate objects on the road. And it could be doing it every so often! So maybe there are filters to disregard unlikely objects as irrelevant, which act as guardrails against random braking. And those filters are pretty damn good -- I mean, the technology is impressive -- but they can probabilistically fail, resulting in things that we've already seen, such as phantom-braking, or worse, driving through actual things.

This raises so many questions: What other things is it hallucinating? And how many hardcoded guardrails are in place against these edge cases? And what else can it hallucinate against which there are no guardrails yet?

And why not just use LIDAR that can literally see around corners in 3D?

  • jqpabc123 3 days ago

    Engineering reliability is primarily achieved through redundancy.

    There is none with Musk's "vision only" approach. Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan -- the car is effectively driving blind.

    Driving is a dynamic activity that involves a lot more than just vision. Safe automated driving can use all the help it can get.

    • Someone1234 3 days ago

      I agree with everything you're saying; but even outside of Tesla, I'd just like to remind people that LIDAR as a complement to vision isn't at all straightforward. Sensor fusion adds real complexity in calibration, time sync, and modeling.

      Both LIDAR and vision have edge cases where they fail. So you ideally want both, but then the challenge is reconciling disagreements with calibrated, probabilistic fusion. People seem to be under the mistaken impression that vision is dirty input and LIDAR is somehow clean, when in reality both are noisy inputs with different strengths and weaknesses.

      I guess my point is: Yes, 100% bring in LIDAR, I believe the future is LIDAR + vision. But when you do that, early iterations can regress significantly from vision-only until the fusion is tuned and calibration is tight, because you have to resolve contradictory data. Ultimately the payoff is higher robustness in exchange for more R&D and development workload (i.e. more cost).

      The reason Tesla needed vision-only to work (cost & timeline) is the same reason vision+LIDAR is so challenging.

      • ethbr1 3 days ago

        The primary benefit of multiple sensor fusion from a safety standpoint isn't an absolute decrease in errors.

        It's the ability to detect sensor disagreements at all.

        With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios).

        If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.

        But you'd think that the budget config of the Boeing 737 MAX would have taught us that tying safety-critical systems to single sources of truth is a bad idea... (in that case, a critical modality fed by a single physical sensor)

        • AnIrishDuck 2 days ago

          > With single modality sensors, you have no way of truly detecting failures in that modality, other than hacks like time-series normalizing (aka expected scenarios).

          "A man with a watch always knows what time it is. If he gains another, he is never sure"

          Most safety critical systems actually need at least three redundant sensors. Two is kinda useless: if they disagree, which is right?

          EDIT:

          > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.

          This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.

          What do you do?

          • esafak 2 days ago

            > Two is kinda useless: if they disagree, which is right?

            They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.
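
            A minimal sketch of that two-sensor case, assuming independent Gaussian noise (toy numbers, not any production system):

```python
def fuse(mu1, var1, mu2, var2):
    # Inverse-variance weighting: the optimal combination of two
    # independent Gaussian measurements of the same quantity.
    w1, w2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)
    return mu, var

# Camera says the object is 10.0 m away (variance 1.0, noisy);
# lidar says 9.5 m (variance 0.04, precise).
mu, var = fuse(10.0, 1.0, 9.5, 0.04)
# The fused estimate lands near the lidar value, with lower variance
# than either sensor alone -- no third sensor needed to benefit.
```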

            > You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.

            Any realistic system would see them long before your eyes do. If you are so worried, override the AI in the moment.

            • AnIrishDuck 2 days ago

              > They don't work by merely taking a straw poll. They effectively build the joint probability distribution, which improves accuracy with any number of sensors, including two.

              Lots of safety critical systems actually do operate by "voting". The space shuttle control computers are one famous example [1], but there are plenty of others in aerospace. I have personally worked on a few such systems.

              It's the simplest thing that can obviously work. Simplicity is a virtue when safety is involved.
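
              For concreteness, a toy 2-of-3 voter over identical channels (illustrative only; real avionics voters handle far more failure modes):

```python
def vote(readings, tolerance):
    # 2-of-3 voting: trust the median, and flag any channel that
    # disagrees with it by more than the agreement tolerance.
    assert len(readings) == 3
    median = sorted(readings)[1]
    outliers = [i for i, r in enumerate(readings)
                if abs(r - median) > tolerance]
    return median, outliers

# Three identical airspeed channels; channel 2 has failed high.
value, bad = vote([252.1, 251.8, 310.0], tolerance=5.0)
# With only two channels, a disagreement tells you something is
# wrong but not which channel to believe.
```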

              You can of course do sensor fusion and other more complicated things, but the core problem I outlined remains.

              > If you are so worried, override the AI in the moment.

              This is sneakily inserting a third set of sensors (your own). It can be a valid solution to the problem, but Waymo famously does not have a steering wheel you can just hop behind.

              This might seem like an edge case, but edge cases matter when failure might kill somebody.

              1. https://space.stackexchange.com/questions/9827/if-the-space-...

              • mafuy 2 days ago

                Voting is used when the systems are equivalent, e.g. 3 identical computers, where one might have a bit flip.

                This is completely different from systems that cover different domains, like vision and lidar.

              • sfifs 2 days ago

                Isn't the historical voting pattern more of a legacy approach, dictated by the limited edge compute of the past, than necessarily a best practice?

                In many domains I see a tendency to oversimplify decision-making algorithms for the sake of human comprehension (e.g. voting rather than building a joint probability distribution in this case; supply chain and manufacturing in particular seem to love rules of thumb), rather than using the better algorithms that modern compute enables for higher performance, safety, etc.

                • AnIrishDuck 2 days ago

                  This is an interesting question where I do not know the answer.

                  I will not pretend to be an expert. I would suggest that "human understanding convenience" is pretty important in safety domains. The famous Brian Kernighan quote comes to mind:

                  > Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?

                  When it comes to obscure corner cases, it seems to me that simpler is better. But Waymo does seem to have chosen a different path! They employ a lot of smart folk, and appear to be the state of the art for autonomous driving. I wouldn't bet against them.

            • qingcharles 2 days ago

              We're trying to build vehicles that are totally autonomous, though. How do you grab the wheel of the new Waymos without steering wheels? Especially if you're in the back seat staring at Candy Crush.

              • esafak 2 days ago

                Waymos are safer, and drive more defensively than humans. There is no way a Waymo is going to drive aggressively enough to get itself into the trolley problem.

          • terribleperson 2 days ago

            This situation isn't going to happen unless the vehicle was traveling at unsafe speeds to begin with.

            Cars can stop in quite a short distance. The only way this could happen is if the pedestrian was obscured behind an object until the car was dangerously close. A safe system will recognize potential hiding spots and slow down preemptively - good human drivers do this.

            • AnIrishDuck 2 days ago

              > Cars can stop in quite a short distance.

              "Quite a short distance" is doing a lot of lifting. It's been a while since I've been to driver's school, but I remember them making a point of how long it could take to stop, and how your senses could trick you to the contrary. Especially at highway speeds.

              I can personally recall a couple (fortunately low stakes) situations where I had to change lanes to avoid an obstacle that I was pretty certain I would hit if I had to stop.

              • terribleperson a day ago

                At the driving school I attended, they had us accelerate to 50 mph and then slam on the brakes so we'd have a feel for the distance (and the feel).

                While it's true they don't stop instantaneously at highway speeds, cars shouldn't be driving highway speeds when a pedestrian suddenly being in front of you is a realistic risk.

                • AnIrishDuck a day ago

                  What if the obstacle is not a person? What if something falls off a truck in front of the vehicle? What if wildlife spontaneously decides to cross the road (a common occurrence where I live)?

                  I don't think these problems can just be assumed away.

          • cameldrv a day ago

            You don't really ever have "two sensors" in the sense that it's two measurements. You have multiple measurements from each sensor every second. Then you accumulate that information over time to get a reliable picture. If the probability of failure on each frame were independent, it would be a relatively simple problem, but of course you're generally going to get a fairly high correlation from one frame to the next about whether or not there's a pedestrian in a certain location. The nice thing about having multiple sensing modalities is that the failure correlation between them is a lot lower.

            For example, say you have a pedestrian that's partially obscured by a car or another object, and maybe they're wearing a hat or a mask or wearing a backpack or carrying a kid or something, it may look unusual enough that either the camera or the lidar isn't going to recognize it as a person reliably. However, since the camera is generally looking at color, texture, etc in 2D, and the Lidar is looking at 3D shapes, they'll tend to fail in different situations. If the car thinks there's a substantial probability of a human in the driving path, it's going to swerve or hit the brakes.
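
            A rough illustration of why that lower failure correlation matters, with made-up per-frame miss rates:

```python
import math

# Hypothetical per-frame probabilities that each sensor misses a
# pedestrian; the numbers are invented for illustration.
p_a, p_b = 0.01, 0.01

# Independent failures -- roughly what different modalities buy you:
p_both_indep = p_a * p_b

# Strongly correlated failures -- e.g. two cameras hit by the same
# sun glare. For Bernoulli events with correlation rho:
rho = 0.9
p_both_corr = p_a * p_b + rho * math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))
# The correlated pair misses together almost two orders of magnitude
# more often, so the second sensor adds far less safety.
```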

          • consumer451 2 days ago

            > > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.

            > This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.

            > What do you do?

            Go into your failure mode. At least you have a check to indicate a possible issue with 2 signals.

            • Mentlo a day ago

              I came here to write the same comment you did. What I'd suspect (I don't work in self-driving, but I do work in AI) is that this mode of operation would trigger far more often than not, because the sensors disagree in critical ways more often than you'd think. So going "safety first" every time likely critically degrades UX.

              The issue is not recognising that optimising for UX at the expense of safety here is the wrong call, motivated more by optimism and a desire for autonomous cars than by reasonable system design. I.e. if the sensors disagree so often that it makes the system unusable, maybe the answer is "we're not ready for this kind of technology and we should slow down" rather than "let's figure out non-UX-breaking edge-case heuristics to maintain the illusion that autonomous driving is around the corner".

              Part of this problem is not even technological - human drivers trade off safety for UX all the time - so the expectation for self-driving is unrealistic, and your system has to adopt an ethically unacceptable configuration to have any chance of competing.

              Which is why - in my mind - it's a fool's errand in the personal car space, but not in the public transport space. So go Waymo, boo Tesla.

            • ethbr1 2 days ago

              Exactly my point. That you know the systems disagree is a benefit, compared to a single system.

              People are underweighting the alternative single system hypothetical -- what does a Tesla do when its vision-only system erroneously thinks a pedestrian is one lane over?

          • ranger_danger 2 days ago

            > This is not always possible. You're on a two lane road. Your vision system tells you there's a pedestrian in your lane. Your LIDAR says the pedestrian is actually in the other lane. There's enough time for a lane change, but not to stop.

            This is why good redundant systems have at least 3... in your scenario, without a tie-breaker, all you can do is guess at random which one to trust.

            • Someone1234 2 days ago

              That's a good point, but people need to keep in mind that many engineered systems with three points of reference use three identical points of reference. That's why it works so well: a common frame of reference means you can compare via simple voting.

              For example, jet aircraft commonly have three pitot-static tubes, and you can just compare/contrast the data to look for the outlier. It works, and it works well.

              If you tried to do that with e.g. LIDAR, vision, and radar, with no common point of reference, solving for trust and resolving disagreements is an incredibly difficult technical challenge. Other variations (e.g. two vision + one LIDAR) do not really make it much easier either.

              Tie-breaking during sensor fusion is a billion+ dollar problem, and will always be.

          • abraae 2 days ago

            > Never go to sea with two chronometers; take one or three.

        • leoc 3 days ago

          > If multiple sensor modalities disagree, even without sensor fusion, you can at least assume something might be awry and drop into a maximum safety operation mode.

          Also, this is probably when Waymo calls up a human assistant in a developing-country callcentre.

          • ethbr1 2 days ago

            Saw that happen a week ago, actually. Non-sensor problem, but a Waymo made a slow right turn too wide, approached the left turning lane of cars, then safed itself by stopping, then remote assistance came online and extricated it.

      • jqpabc123 3 days ago

        The same reason why Tesla needed vision-only to work (cost & timeline)

        But vision only hasn't worked --- not as promised, not after a decade's worth of timeline. And it probably won't any time soon either --- for valid engineering reasons.

        Engineering 101 --- *needing* something to work doesn't make it possible or practical.

      • ra7 3 days ago

        The complexity argument rings hollow to me. It’s a bit like saying distributed databases are complex because you have to deal with CAP guarantees. Yes, but people still develop them because it has real benefits.

        It was maybe a valid argument 10 years ago, but in 2025 many companies have shown sensor fusion works just fine. I mean, Waymo has clocked 100M+ miles, so it works. The AV industry has moved on to more interesting problems, while Tesla and Musk are still stuck in the past arguing about sensor choices.

        • leoc 2 days ago

          Well, it's more like sensor fusion plus extensive human remote intervention, it seems: https://www.nytimes.com/interactive/2024/09/03/technology/zo... . Mind you, if it takes both LiDAR and call-centre workers to make self-driving work in 2025 and for the foreseeable future, that makes Tesla's old ambition to achieve it with neither look all the more hopeless.

          • Narciss 2 days ago

            V interesting, thanks for sharing, I didn’t know this

      • microtherion 2 days ago

        > but then the challenge is reconciling disagreements with calibrated, probabilistic fusion

        I keep reading arguments like this, but I really don't understand what the problem here is supposed to be. Yes, in a rule based system, this is a challenge, but in an end-to-end neural network, another sensor is just another input, regardless of whether it's another camera, LIDAR, or a sensor measuring the adrenaline level of the driver.

        If you have enough training data, the model training will converge to a reasonable set of weights for various scenarios. In fact, training data with a richer set of sensors would also allow you to determine whether some of the sensors do not in fact contribute meaningfully to overall performance.
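
        A toy sketch of that "just another input" point -- invented feature sizes, nothing like a real driving network:

```python
import math
import random

random.seed(0)

# Pretend per-modality encoders already produced feature vectors.
camera_feat = [random.gauss(0, 1) for _ in range(64)]
lidar_feat = [random.gauss(0, 1) for _ in range(16)]  # "just another input"

# "Fusion" here is nothing but concatenation before a learned layer.
x = camera_feat + lidar_feat
W = [[random.gauss(0, 0.1) for _ in x] for _ in range(8)]
control = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

# To check whether a sensor earns its keep, retrain with its slice
# removed and compare held-out performance; the architecture barely changes.
```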

      • overfeed 2 days ago

        > cost & timeline

        It's really hard to accept cost as the reason when Tesla is preparing a trillion-dollar pay package. I suppose that can be reconciled if one considers the venture to be a vehicle (ha!) to shovel as much money as possible from investors and buyers into Elon's pockets; I imagine the prospect of being the world's first trillionaire is appealing.

      • Earw0rm 2 days ago

        There's no particular reason to use RGB for this kind of machine-vision-and-cognition problem either.

        Infrared at a few different wavelengths, as well as the optical range, seems like it'd give a superior result?

        • jqpabc123 2 days ago

          You've just described some of the rationale for using LIDAR.

      • atcon 2 days ago

        Your comments on sensor fusion seem to describe the weird results of 2 informal ADAS (lidar, vision, lidar + vision, lidar + vision + 4d imaging radar, etc.) “tournaments” conducted earlier this year. There was an earlier HN post about it <https://news.ycombinator.com/item?id=44694891> with a comment noting “there was a wide range of crash avoidance behavior even between the same car likely due to the machine learning, and that also makes explaining the differences hard. Hopefully someone with more background on ADAS systems can watch and post what they think.”

        Notably, sensor confusion is also an “unsolved” problem in humans, eg vision and vestibular (inner ear) conflicts possibly explaining motion sickness/vertigo <https://www.nature.com/articles/s44172-025-00417-2>

        The results of both tournaments: <https://carnewschina.com/2025/07/24/chinas-massive-adas-test...> Counterintuitively, vision scored best (Tesla Model X)

        The videos are fascinating to watch (subtitles are available): Tournament 1 (36 cars, 6 Highway Scenarios): <https://www.youtube.com/watch?v=0xumyEf-WRI> Tournament 2 (26 cars, 9 Urban Scenarios): <https://www.youtube.com/watch?v=GcJnNbm-jUI>

        Highway Scenarios: “tests...included other active vehicles nearby to increase complexity and realism”: <https://electrek.co/2025/07/26/a-chinese-real-world-self-dri...>

        Urban Scenarios: “a massive, complex roundabout and another segment of road with a few unsignaled intersections and a long straight...The first four tests incorporated portions of this huge roundabout, which would be complex for human drivers, but in situations for which there is quite an obvious solution: don’t hit that car/pedestrian in front of you” <https://electrek.co/2025/07/29/another-huge-chinese-self-dri...>

      • maxlin 2 days ago

        I think you hit the nail on the head - obviously, once Tesla has saturated the potential of vision, they should bring in LiDAR if it can reasonably be added from a hardware point of view. Their current arguments make this clear - it would be surface-level thinking to add LiDAR and the kitchen sink now, complicating the system's evolution and axing scalability.

        But we're far from plateauing on what can be done with vision - humans can drive quite well with essentially just sight, so we're far from exhausting its potential.

      • baby 2 days ago

        Sure, but if vision says there's something in front of you and LIDAR says "nope, I can see 500m away", then you know LIDAR is right

      • anthem2025 12 hours ago

        Are people under that impression or are you just repeating the sort of nonsense musk pushes about how sensor fusion is bad?

    • jmpman 17 hours ago

      Tesla has redundant front-facing cameras on their cars. In my 2019 Model 3, there are three front-facing cameras, each with a different angle of view, all behind the rear-view mirror, all encased in a small area lined with anti-reflective material. Living in an extremely hot climate, that area's anti-reflective fuzz has degraded, depositing a film on the window only in front of the cameras, obscuring all three cameras at the same time.

      Now, my Tesla just recently started complaining when the sensors were obscured by this deposit, but that wasn't always the case. I used to be driving down the freeway with Autopilot on, and it could barely track. Eventually I looked at the saved video footage and discovered my Tesla was virtually blind while driving me down the freeway at 85 mph. At least now, with recent updates, it warns me that it can't see very well.

      However, I question the resolution of the sensors. To drive legally in my state, you must have 20/40 vision. When I move my head around, I effectively have 20/40 vision all around my car. If I close one eye, I still have 20/40 vision. Does Tesla effectively have 20/40 vision in all 360 degrees? Maybe one of the front-facing cameras has optical resolution equal to 20/40, but do the rest of them? I'm skeptical, and I expect I'm being driven by the equivalent of a human who couldn't pass the vision test, or at best a human with just one eye who can pass it.

      This isn't even getting into redundancy in the electronics boards, connectivity from the electronics to the CPU, and redundancy in the processing. We are being asked to put our faith/lives in these non-redundant systems, but they're not designed like Class A flight-critical systems on airplanes.
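
      A back-of-envelope version of that 20/40 comparison, using assumed camera specs (actual Tesla sensors differ and vary between the three front units):

```python
# Assumed numbers, for illustration only.
fov_deg = 120.0      # horizontal field of view of a wide front camera
h_pixels = 1280.0    # horizontal pixel count

arcmin_per_pixel = fov_deg * 60.0 / h_pixels  # angular size of one pixel

# 20/20 acuity resolves roughly 1 arcminute, so 20/40 resolves ~2.
meets_20_40 = arcmin_per_pixel <= 2.0
# With these numbers one pixel spans ~5.6 arcmin: well short of 20/40,
# though a narrow-FOV telephoto camera would fare much better.
```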

    • SoftTalker 2 days ago

      > Vision can fail for a multitude of reasons --- sunlight, rain, darkness, bad road markers, even glare from a dirty windshield. And when it fails, there is no backup plan

      So like a human driver. Problem is, automatic drivers need to be substantially better than humans to be accepted.

      • tarsinge a day ago

        Humans have a brain, though. Current AI is nowhere near that, as every engineer knows, but common people seem to forget it amid all the PR.

    • Ocha 3 days ago

      Yep. Same mistake Boeing made in making redundancy an optional upgrade on the MAX 8.

      • jqpabc123 3 days ago

        Another example of what happens when management starts making engineering decisions.

    • brandonagr2 2 days ago

      Lidar is not a backup to vision. In a Waymo, both lidar and vision must be working, so you actually have less reliability: you now have two single points of failure.

  • CjHuber 3 days ago

    Just imagine if Tesla had subsidized a LIDAR unit on every car they ship, passively collecting data. Wow, that dataset would be crazy, and it would even improve their vision models by providing ground truth to train on. He's such a moron

    • nolist_policy 3 days ago

      This. It's also the reason Waymo is ahead, they have tons of high quality training data being constantly fed into their pipeline.

    • wombat-man 3 days ago

      I think LIDAR was, and maybe still is, way more expensive. Initially units ran $75k. Now they're more around $10k, which is better.

      • hoytschermerhrn 3 days ago

        The new electric Volvos have LIDAR, proving that the technology has (at least now) approached mass-market feasibility.

        • hnburnsy 2 days ago

          Ummm, is it actually active with ADAS anywhere? Certainly not in the US.

          >The EX90's LiDAR enhances ADAS features like collision mitigation and lane-keeping, which are active and assisting drivers. However, full autonomy (Level 3) is not yet available, as the Ride Pilot feature is still under development and not activated.

        • dzhiurgis a day ago

          A single car in the US whose lidar is not operational yet, and which burns through cameras? I wouldn't call it a success just yet.

      • kibwen 3 days ago

        This is off by orders of magnitude. BYD is buying LIDAR units for their cars for $140.

        • onlyrealcuzzo 3 days ago

          That's likely closer to reality now, but that's not counting the cost for R&D to add it to the car, any additional costs that come with it besides the LIDAR hardware, plus the added cost to install it.

          All of that combined is probably closer to $1k than to $140.

          And, again, that's - what - 10 years after Tesla originally made the decision to go vision only.

          It wasn't a terrible idea at the time, but they should've pivoted at some point.

          They could've had a massive lead in data if they pivoted as late as 3 years ago, when the total cost would probably be under $2.5k, and that could've led to a positive feedback loop, cause they'd probably have a system better than Waymo by now.

          Instead, they've got a pile of garbage, and no path to improve it substantially.

          • terribleperson 2 days ago

            I can't be sure, but I doubt Tesla is spending less than $140 on their cameras. High fidelity, high frame rate color cameras aren't actually cheap...

            • onlyrealcuzzo 2 days ago

              Not all LIDARs are equal. Just because BYD is spending $140 on a LIDAR system does not mean it's the same quality as the Waymo system reported to cost $75k almost a decade ago, or, especially, the same quality as the ones in use today.

              They might be!

              But I doubt it.

              I don't know enough about Tesla's cameras, but it's not implausible to think there are LIDARs of low enough quality that you'd be better off with a good quality camera for your sensor.

              Again, I doubt this is the case with BYD's LIDARs.

              But it's still worth pointing out, I think.

              My point is, BYD's LIDAR system costing $x is only one small part of the conversation.

              • lobsterthief 2 days ago

                I would say a $140 LIDAR system that’s currently being used in production cars [somewhere] is better than a $0 non-existent LIDAR system. Pair a cheap LIDAR system with some nice cameras and perhaps you can make up much of the difference in software.

        • wombat-man 2 days ago

          Well maybe Tesla should adopt it then.

      • realo 3 days ago

        My floor-cleaning robot has a lidar, and I am pretty certain that part did not cost $10k.

        • peterfirefly 2 days ago

          It moves very slowly, and it doesn't need high resolution or long range. It has plenty of time to average out noise.

          Solid-state LIDAR is still a fairly new thing. LIDAR sensors were big, clunky, and expensive back when Tesla started their Autopilot/FSD program.

          I googled a bit and found a DFR1030 solid-state LIDAR unit for 267 DKK (for one). It has a field of view of 108 degrees and an angular resolution of 0.6 degrees. It has an angle error of 3 degrees and a max distance of 300mm. It can run at 7.5-28 Hz.

          Clearly fine for a floor-cleaning robot or a toy. Clearly not good enough for a car (which would need several of them).
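
          A quick sanity check of why that 0.6-degree figure rules it out at driving distances:

```python
import math

def beam_spacing(range_m, angular_res_deg):
    # Lateral gap between adjacent lidar returns at a given range.
    return range_m * math.tan(math.radians(angular_res_deg))

gap_hobby = beam_spacing(100.0, 0.6)  # the unit above: ~1 m gaps at 100 m,
                                      # wide enough to straddle a pedestrian
gap_auto = beam_spacing(100.0, 0.1)   # assumed automotive-grade ~0.1 deg: ~0.17 m
```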

        • wombat-man 2 days ago

          well, probably different grades of lidar for different use cases.

    • CMay 2 days ago

      Even if Tesla wasn't using LIDAR, I think they did still use radar and ultrasonic detection for a while, which I'm sure contributed to their models some.

  • amelius 3 days ago

    Your comparison to hallucination is spot on.

    LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything. Maybe this influences how they, and regulators, will think about self driving cars.

    • bbarnett 3 days ago

      Well I wish this was true. But loads of DEVs on here will claim LLMs are infallible.

      And the general public?! No way. Most are completely unaware of the foibles of LLMs.

      • Cornbilly 3 days ago

        HN posters know better but a lot of them won’t be honest because they want to protect their investments and/or their employer.

      • jama211 2 days ago

        No they don’t. Don’t lie.

      • BoiledCabbage 3 days ago

        > Well I wish this was true. But loads of DEVs on here will claim LLMs are infallible.

        No they don't. You're making a straw man rather than putting forth an actual argument in support of your view.

        If you feel you can't support your point, then don't try to make it.

        • greenchair 3 days ago

          It's done in a roundabout way. Usually with a variation of "you had a bad experience because you are using the tool incorrectly, get good at prompting".

          • Eisenstein 3 days ago

            That's a response to 'I don't get good results with LLMs and therefore conclude that getting good results with them is not possible'. I have never seen anyone claim that they make no mistakes if you prompt them correctly.

        • bbarnett 3 days ago

          A straw man? An actual argument?

          I responded to this parent comment:

          "LLMs have shown the general public how AI can be plain wrong and shouldn't be trusted for everything."

          You take issue with my response of:

          "loads of DEVs on here will claim LLMs are infallible"

          You're not really making sense. I'm not straw-manning anything, as I'm directly discussing the statement made. What exactly are you presuming I'm throwing a straw man over?

          It's entirely valid to say "there are loads of supposed experts that don't see this point, and you're expecting the general public to?". That's clearly my statement.

          You may disagree, but that doesn't make it a strawman. Nor does it make it a poorly phrased argument on my part.

          Do pay better attention please. And your entire last sentence is way over the line. We're not on reddit.

          • gellybeans 3 days ago

            This is just subjective spew.

            The irony of telling someone not to be rude while being absolutely insufferable. Peak redditor behavior.

            Please provide examples. Thank you!

  • danans 3 days ago

    > And why not just use LIDAR that can literally see around corners in 3D?

    Based on what I've read over the years: it costs too much for a consumer vehicle, it creates unwanted "bumps" in the vehicle visual design, and the great man said it wasn't needed.

    Yes, those reasons are not for technology or safety. They are based on cost, marketing, and personality (of the CEO and fans of the brand).

    • fooblaster 3 days ago

      Lidar is being manufactured in China at a volume of millions a year by RoboSense, Huawei, and Hesai. BOM cost is on the order of a few hundred dollars - slightly more than automotive radar. The situation is a lot different in 2025 than it was in 2017.

  • beng-nl 3 days ago

    I’ve always wondered about LiDAR - how can multiple units sweep a scene at the same time (as would be the case for multiple cars driving close together, all using lidar)? One unit can’t distinguish return signals between itself and other units, can it?

  • fossuser 3 days ago

    I use FSD in my Model S daily to commute from SF to Palo Alto, along with most of my other Bay Area driving. It currently does a better job than most people, it drives me 95% of the time now, and I haven't had the phantom braking.

    I'm in a 2025 with HW4, but its dramatic improvement over the last couple of years (I previously had a 2018 Model 3) increased my confidence that Elon was right to focus on vision. It wasn't until late last year that I found myself using it more than not; now I use it on almost every drive, point to point (Cupertino to SF), and it does it.

    I think people are generally sleeping on how good it is, and the politicization means people are undervaluing it for stupid reasons. I wouldn't consider a non-Tesla because of this (unless it was a stick-shift sports car, but that's for different reasons).

    Their lead is so crazy far ahead it's weird to see this reality and then see the comments on hn that are so wrong. Though I guess it's been that way for years.

    The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case. I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem.

    If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off.

    I'd really encourage people here with a bias to dismiss to ignore the comments and just go try it out for yourself in real life.

    • cpuguy83 2 days ago

      This is extremely narrow minded. As another commenter pointed out, you are driving on easy mode in terms of environment and where a majority of the training was done.

      This is not a general solution, it is an SF one... at best.

      Most humans also don't get into accidents or have problems with phantom braking within the timeframe that you mentioned.

      • fossuser 2 days ago

        Oh please - people excuse and dismiss major accomplishments, you can send a skyscraper to mars and people on HN will still be calling you a fraud.

        The Bay Area has massive traffic and complex interchanges; SF has tight, difficult roads with heavy fog. Sometimes there's heavy rain on 280. Highway 17 is also non-trivial.

        What Tesla has done is not trivial and roads outside the bay are often easier.

        People can ignore this to serve their own petty cognitive bias, but others reading their comments should go look at it for themselves.

        • Idesmi a day ago

          > you can send a skyscraper to mars and people on HN will still be calling you a fraud

          To date, SpaceX has sent nothing to Mars. Not to understate the company's accomplishment, but "people on HN" are fed up exactly with statements like yours.

          • fossuser 12 hours ago

            People here just whine and complain - yes they’ve “only” just sent a skyscraper to space for now and caught the booster on reentry, it’s a work in progress (along with their reusable rockets, earth scale telecom side project etc.)

            My point is people will still be calling him a fraud when they do get it to mars, no evidence is sufficient for the HN cynic that thinks their “above the fray” ethos makes them smart.

            Tesla has had massive success despite the haters, the Model Y literally becoming the best selling car on earth, and you wouldn't know it from HN. FSD has gotten really good, good enough to use more than not as they continue to improve it.

            The best thing about capitalism is the losers here don’t matter - the winners get rich and keep going.

            • cpuguy83 10 hours ago

              I said nothing about SpaceX here nor did I condemn Tesla ... or even mention Tesla.

              • fossuser 7 hours ago

                You downplayed what Tesla FSD can do and said I was being narrow minded and the Bay Area driving is "easy mode" and said vision isn't a general solution. I think none of this is true.

        • lightedman 2 days ago

          I have ridden in many Tesla-based Ubers with human drivers using autopilot.

          Here outside of Los Angeles, about an hour east, they do not do well at all on their 'auto-pilot.'

          Your area has the benefit of being one of the primary training areas, and thus the dataset for your area is good.

          Try that here. I'll be more than happy to watch you piss yourself as the Tesla tries to take you into the HOV lane THROUGH THE BARRIERS.

          • drak0n1c 2 days ago

            Auto-Pilot is not FSD. It's akin to a regular carmaker's Automatic-Braking-System and Lane-Keep-Assist. If you're seeing it used dangerously that's user error.

      • brandonagr2 2 days ago

        > Most humans also don't get in accidents

        Have you met any humans? Or seen people driving?

        • cpuguy83 16 hours ago

          Way to cut my sentence in half. That's not what I said and you know it.

    • nova22033 17 hours ago

      > politicization

      How is it politicization when TESLA THE COMPANY is saying Full Self Driving doesn't mean "Full" "Self" Driving?

      If it is as good as you claim, why doesn't Tesla claim it's Full Self Driving?

    • oblio 3 days ago

      You're basically driving on easy mode, in the Bay Area. Dry climate, sunshine all year round, pretty solid developed country infrastructure.

    • a123b456c 3 days ago

      OK, so you believe "Elon was right" and people should "ignore the comments". Hmm, very interesting.

    • ponector 2 days ago

      >> it drives me 95% of the time now

      But what is the point of using it everywhere if you still need to pay attention to the road and keep your hands on the steering wheel?

      • fossuser 2 days ago

        You don’t need hands on the wheel anymore, just looking out the window. It’s way more relaxed.

        It’ll be nice when that’s not required anymore, but even today it’s way more comfortable.

    • anthem2025 11 hours ago

      What lead? They are way behind Waymo.

      Why would anyone listen to the opinion of someone who bought a Tesla in 2025?

      The only people still buying them are musk fanboys.

    • kolanos 3 days ago

      HW4 is really a game changer. I was absolutely floored by HW4 FSD during a recent test drive. Tesla is accomplishing some truly groundbreaking technical achievements here. But you wouldn't know it through all the Elon Musk noise (pro and con). I'd encourage anyone to take a test drive and put FSD through its paces. I went in with a super critical mindset and walked away stunned.

      • anthem2025 11 hours ago

        I’m gonna go ahead and guess that by “super critical” you actually mean that you went in an Elon worshipper and left an Elon worshipper.

      • fossuser 3 days ago

        Yeah it’s amazing

    • pbhjpbhj 3 days ago

      [flagged]

      • fossuser 3 days ago

        Thank you for exemplifying what I’m talking about. I should really buy more TSLA.

      • sixQuarks 3 days ago

        So he does nazi salutes and is totally buddy buddy with Netanyahu. Ok

        • ksenzee 3 days ago

          Those two things are now compatible. Unthinkable for those of us who were around in the 20th century, but now true.

          • tialaramex 2 days ago

            It also makes this horrible kind of sense that Elon would see them both as admirable, this idea that you're the only person who matters. Ordinary people exist only for you to exploit them, and have no intrinsic worth.

  • teleforce 2 days ago

    >why not just use LIDAR that can literally see around corners in 3D?

    LIDAR requires line-of-sight (LoS), hence cannot see around corners, but RADAR probably can.

    It's interesting to note that the all-time 2nd most popular HN post about Tesla, from 9 years ago, is about its full self-driving hardware (2nd only to the controversial Cybertruck) [1].

    >Elon's vision-only move was extremely "short-sighted"

    Elon's vision was misguided because some of the technologists at the time, seemingly including him, truly believed that AGI was just around the corner (pun intended). Now most tech people have given up on the AGI claim, blaming the blurry definition of AGI, but for me the true killer AGI application has always been fully autonomous level 5 driving with only human-level sensor perception, minus the LIDAR and RADAR. But that goal is so complex that I truly believe it will not be achieved in the foreseeable future.

    [1] All Tesla Cars Being Produced Now Have Full Self-Driving Hardware (2016 - 1090 comments):

    https://news.ycombinator.com/item?id=12748863

  • UltraSane 3 days ago

    Camera-only might work better if you used regular digital cameras along with more advanced cameras like event-based cameras, which send pixels as soon as they change brightness and have microsecond latency, and/or Single Photon Avalanche Diode (SPAD) sensors, which can detect single photons. Having the same footage from all 3 of these would enable some fascinating training options.

    But Tesla didn't do this.

  • cuttothechase 2 days ago

    Couldn't agree more about the phantom braking.

    I rented a Tesla a while back and drove from the Bay to Death Valley. On clear roads with no hazards whatsoever, the car hit the brakes at highway speeds. It scared the bejeesus out of me! It completely put me off the auto drive and derailed my plans to buy a Tesla.

  • duxup 2 days ago

    The around corners thing, when I saw demos of it seeing the vehicles the driver can't even see ... I wanted it for my non self driving car ... it's just too big of an advantage to skimp out on.

  • JumpCrisscross 2 days ago

    > maybe there are filters to disregard unlikely objects as irrelevant, which act as guardrails against random braking

    The filters introduce the problem of incorrectly deleting something that really is there.

  • paradox460 2 days ago

    What's oddest about the wiper tech is that we've had automated wipers since at least the 70s. As a kid, my neighbor's Cadillac had them.

    tl;dr: you can use optics to determine if there's rain on a surface, from below, without having to use any fancy cameras or anything, just a light source and light sensor.

    If you're into this sort of thing, you can buy these sensors and use them as a rain sensor, either as binary "yes its rained" or as a tipping bucket replacement: https://rainsensors.com
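    A minimal sketch of the light-source-plus-sensor scheme described above. All thresholds and readings here are invented for illustration; real sensors are factory-calibrated. The principle: on dry glass an IR beam totally internally reflects back to a photodiode, while water droplets couple light out of the glass, so a falling reading means rain.

```python
DRY_BASELINE = 1.0      # normalized photodiode reading on dry glass (assumed)
RAIN_THRESHOLD = 0.85   # assumed trip point; real units are calibrated

def wiper_speed(reading: float) -> str:
    """Map a normalized reflectance reading to a wiper command."""
    if reading >= RAIN_THRESHOLD:
        return "off"            # nearly all light returned: dry glass
    elif reading >= 0.6:
        return "intermittent"   # some light escaping: light rain
    else:
        return "continuous"     # heavy loss of reflection: heavy rain

print(wiper_speed(0.95))  # dry
print(wiper_speed(0.70))  # light rain
print(wiper_speed(0.40))  # heavy rain
```

    Note there's no camera and no ML anywhere in the loop, which is the point being made above.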

  • mellosouls 2 days ago

    Elon's vision-only move was extremely "short-sighted" (heheh)

    Careful. HN takes a dim view of puns.

  • fred_is_fred 2 days ago

    "There are many reasons but that drives it home for me multiple times a week is that my Tesla's wipers will randomly sweep the windshield for absolutely no reason."

    "Self-starting wipers use some kind of current/voltage measurement on the windshield, right - unrelated to self-driving? It's been around longer than Tesla - or are you just saying it's another random failure?

  • debo_ 2 days ago

    I upvoted this just for "short-sighted."

  • diebeforei485 2 days ago

    How does lidar see around corners?

    • xnx 18 hours ago

      Waymo sensor pods are mounted at the corners of the vehicle allowing it to see things that a driver can't. (E.g. when pulling out of an alley)

  • maxlin 2 days ago

    This and that, FUD this, FUD that. Tesla has communicated clearly why "adding" LiDAR isn't an improvement for a system with goals as high as theirs. Remember, no vision system yet is as good as humans are with vision, so obviously there's still a lot to do with vision.

    Check this for a reference of how well Tesla's vision-only fares against the competition, where many have LiDAR. Keep it simple wins the game. https://www.youtube.com/watch?v=0xumyEf-WRI

    • FireBeyond 2 days ago

      Is this like the same BS as Elon on an investor call recently?

      One analyst asked about the reliability of Tesla’s cameras when confronting sun glare, fog, or dust. Musk claimed that the company’s vision system bypasses image processing and instead uses direct photon counting to account for “noise” like glare or dust.

      This... is horseshit. Photon counting is not something you can do with a regular camera or any camera installed on a Tesla. A photon-counting camera doesn't produce imagery that is useful for vision. Even beyond that, it requires a closed environment so that you can, you know, count them in a controlled manner, not an open outside atmosphere.

      It's bullshit. And Elon knows it. He just thinks that you are too stupid to know it and instead think "Oh, yeah, that makes sense, what an awesome idea, why is only Tesla doing this?" and are wowed by Elon's brilliance.

      • maxlin 17 hours ago

        With that terminology they were referring to using 10-bit / unprocessed sensor data, bypassing the normal image processing.

        But go ahead, fight some weird strawman you built.

        Did you even look at the video? Don't think you did.

  • moralestapia 3 days ago

    >Elon's vision-only move was extremely "short-sighted"

    It wasn't Elon's but Karpathy's.

    • Fricken 3 days ago

      Sterling Anderson was the first autopilot director, and he was fired for insisting on Lidar. Elon sued Sterling Anderson, then hired the bootlick Karpathy to help him grease chumps.

      • mcv 3 days ago

        But why is Elon so opposed to Lidar? I don't get it.

        • fossuser 3 days ago

          He argued the case in 2016 iirc.

          The position against lidar was that it traps you in a local max, that humans use vision, that roads and signs are designed for vision so you're going to ultimately have to solve that problem and when you do lidar becomes a redundant waste. The investment in lidar wastes time from training vision and may make it harder to do so. That's still the case.

          I love Waymo, but it's doomed to be localized to populated areas with high-res mapping - that's a great business, but it doesn't solve the general problem.

          If Tesla keeps jumping on the vision lever and solves it they'll win it all. There's nothing in physics that makes that impossible so I think they'll pull it off. His model is all this sort of first principles thinking, it's why his companies pull off things like starship. I wouldn't bet against it.

          • Applejinx 2 days ago

            If humans had radar they would reverse into obstacles less often, and not be blindsided or T-boned as readily so long as their radar could still reach the object moving rapidly in their direction.

            Elon is being foolish and weirdly anthropomorphic.

            • fossuser 2 days ago

              If humans had ten eyes always looking simultaneously and never got tired they would also not hit stuff.

          • moralestapia 2 days ago

            And yet ... Tesla is backtracking (read TFA) while Waymo is steadily getting there.

          • anthem2025 11 hours ago

            It’s amazing people heard that argument and didn’t immediately write him off as having zero clue what he’s talking about.

        • Fricken 3 days ago

          At that time Lidar was too expensive and ugly to be putting in every car. Robust Lidar for SAE level 4 autonomous vehicles is still not cheap and still pretty ugly.

          • polishdude20 2 days ago

            But that's what's needed. Tesla could have developed an effective and cheap lidar if they decided millions of their cars needed it.

            • Fricken 2 days ago

              Not too long ago there were over 5 dozen startups in the automotive grade lidar space. Lidar is now much cheaper and smaller, but still very conspicuous and still too expensive to be putting in every vehicle.

    • pinkmuffinere 3 days ago

      For decisions of this scale (i.e., decades of development time, committing multiple products to a single unproven technology), the CEO really should be involved. Maybe they'd just take the recommendation of the SMEs, but it's hard for me to imagine Elon had no say in it.

    • amelius 3 days ago

      I suspect so too, but is it factual?

  • qoez 3 days ago

    Not sure it was actually Elon's move though, I heard it was mainly a decision taken by Andrej Karpathy

  • weinzierl 3 days ago

    I think Elon's prediction was that LIDAR was too expensive and will stay too expensive. In a sense he was right, LIDAR prices did not drop and I wonder why that is?

    • exhilaration 3 days ago

      There's multiple comments in this thread pointing to Chinese car manufacturers paying under $200 for their LIDAR hardware.

      • weinzierl 3 days ago

        $200 is still a lot when a bunch of cameras cost maybe $20.

        • ra7 3 days ago

          $200 to enable better FSD vs a decade of struggle to get FSD only partially working with $20 cameras. Which one do you think is more expensive overall?

          • weinzierl 2 days ago

            The fact that we still do not have a significant number of cars with LIDAR on our streets somewhat proves which approach the auto industry considers viable for business.

            I am much more curious about the next ten years. If we can bring down the cost of a LIDAR unit into parity with camera systems[1], I think I know the answer. But I thought that 10 years ago and it did not happen so I wonder what is the real roadblock to make LIDAR cheap.

            [1] Which it won't replace, of course. What it will change is that it makes the LIDAR a regular component, not an exceptionally expensive component.

            • xnx 18 hours ago

              The fact that the only working self driving system uses LIDAR might say even more.

          • xnx 18 hours ago

            1) make it work

            2) make it right

            3) make it fast (or cheap in this case)

            Elon thinks his genius intellect allows him to skip to #3.

        • D-Coder 2 days ago

          > $200 is still a lot when a bunch of cameras cost maybe $20.

          Anything except the lowest end car will cost $20K or more, so $200 is one percent of that price.

        • suddenexample 3 days ago

          I mean, I'd rather be building a $30,200 robotaxi that works than a $30,020 robotaxi that doesn't.

          • weinzierl 2 days ago

            You won't get rich with a $30,200 robotaxi; you won't even have a viable business. The game is the mass market, and there the usual unit of currency is not cents, it's tenths of cents.

            • anthem2025 11 hours ago

              Even if your robotaxi only manages 2000 rides that’s still down to just 10 cents a ride to cover the cost of hardware.

              It’s nothing.

      • mensetmanusman 2 days ago

        Price without specs per radian is meaningless.

    • yndoendo 3 days ago

      Investments in re-engineering production to bring down cost are made when there is a market large enough for said product.

      True self-driving is still a baby that needs to grow and cannot yet compete with an adult human with 30+ years of experience. As self-driving matures to that level, the market will grow.

      • fooblaster 3 days ago

        Why do all of you think prices haven't come down? I can buy an AT128 from Hesai for a few hundred dollars in volume. It's higher performance than any spinning lidar I could buy in 2017.

        • yndoendo 2 days ago

          You may have misread that; it is not what I said.

          Once a product starts to sell after its initial design, time is taken to reduce the production cost: reuse parts, or replace part A with B. A machine from early 2018 can be a little different from the ones going out the door in late 2018. _Kaizen_ was coined for this.

          My view is that the mass reduction in cost will have arrived when self-driving is a cost-effective secondary feature on all Toyota vehicles. I see that as the litmus test for knowing that self-driving has reached true utility.

          Also, well-designed vehicles need a multi-sensor system to operate in self-driving mode. A human operating a car uses multi-sensor intake, and lacking it prevents them from operating a vehicle: blind people need a secondary sensory input like a walking stick. Vehicles need a multi-sensor system to prevent harming, mutilating, and killing passengers and pedestrians.

        • weinzierl 3 days ago

          Elon's bet was one LIDAR against a bunch of cameras. A few hundred dollars is still way too much when you can get the cameras for a few tens.

          • Applejinx 2 days ago

            In what universe is 'a few hundred dollars is way too much' for implementing full self-driving on an autonomous vehicle that moves like, and at the speeds of and in the spaces of, an automobile?

            A two to four ton vehicle that can accelerate like a Ferrari and go over 100 mph, fully self-driving, and 'a few hundred dollars is way too much'.

            Disagree. Even as they are dialing back the claims, which may or may not affect how people use the vehicles. These things respond too quickly for flaky senses based on human sensoriums.

  • enslavedrobot 3 days ago

    Are you referring to Autopilot or FSD? Phantom braking has been a solved problem since the release of V12 FSD. As soon as a vision-based car is safer than a human, its flaws don't matter, because it will save lives.

    Supervised FSD is already safer than a human.

    • canadaduane 3 days ago

      "Just git pull, and latest fixes it" is not reassuring in this context. Engineers evaluating your claims need real data, not marketing copy.

      • enslavedrobot a day ago

        FSD is rigorously tested before release.

        The revisions and updates are safety tested on roads for months before they are released. Tesla also has models that are too big to run on existing production hardware that perform better than the release versions in test cars.

        Updates are not git pulls and no engineer would ever think that they were.

  • jillesvangurp 3 days ago

    Lidar is great for object detection. But it's not great for interpreting the objects. It will stop you crashing into a traffic light. But it won't be able to tell the color of the light. It won't see the stripes on the road. It won't be able to tell signs apart. It won't enable AIs to make sense of the complex traffic situations.

    And those complex traffic situations are the main challenge for autonomous driving. Getting the AIs to do the right things before they get themselves into trouble is key.

    Lidar is not a silver bullet. It helps a little bit, but not a whole lot. It's great when the car has to respond quickly to get it out of a situation that it shouldn't have been in to begin with. Avoiding that requires seeing and understanding and planning accordingly.

    • amelius 3 days ago

      Meanwhile, the competition that is using LiDAR has FSD cars. You're understating the importance of this sensor.

      You can train a DL model to act like a LiDAR based on only camera inputs (the data collection is easy if you already have LiDAR cars driving around). If they could get this to work reliably, I'm sure the competition would do it and ditch the LiDAR, but they don't, so that tells us something.

      • SOLAR_FIELDS 3 days ago

        It is very true and worthwhile to point out that the only company deploying L4 at scale is using LIDAR. And that company is not Tesla

        • UltraSane 3 days ago

          The mental gymnastics Tesla fanboys use to explain this away are incredible.

          • happyPersonR 3 days ago

            The Tesla social media team actively used to post positive spin on comment threads. It wouldn’t surprise me if they have LLM doing this now.

      • ModernMech 3 days ago

      Researchers had this knowledge in 2007, when the only cars to finish the DARPA Urban Challenge were equipped with Velodyne 3D LIDAR. Elon Musk set us back a decade by using his platform to ignorantly convince everyone it was possible with cameras alone.

        For anyone who understands sensor fusion and the Kalman filter, read this and ask yourself if you trust Elon Musk to direct the sensor strategy on your autonomous vehicle: https://www.threads.com/@mdsnprks/post/DN_FhFikyUE

      For anyone wondering, to a sensors engineer the above post is like saying 1 + 1 = 0 -- the truth (and science) is the exact opposite of what he's saying.
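      To spell out the sensor-fusion point: the scalar core of a Kalman measurement update is just inverse-variance weighting, and fusing two independent measurements always reduces uncertainty. A toy sketch (the ranges and sigmas below are invented, not real sensor specs):

```python
def fuse(z1, var1, z2, var2):
    """Optimally combine two independent Gaussian estimates of one range."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # inverse-variance weights
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)  # weighted mean
    fused_var = 1.0 / (w1 + w2)              # always below min(var1, var2)
    return fused, fused_var

# Camera depth guess: 50 m with sigma ~2 m. Lidar return: 48.5 m, sigma ~0.1 m.
est, var = fuse(50.0, 2.0 ** 2, 48.5, 0.1 ** 2)
print(round(est, 2), round(var, 4))
```

      The fused variance comes out below either input's, which is the mathematical sense in which adding an independent sensor never makes the optimal estimate worse.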

      • cmiles74 2 days ago

        Isn’t this article about Tesla admitting their system is as good as it’s going to get? They’re changing their definition of FSD to pretty much “current state”.

    • michaelt 3 days ago

      I think you might be under-estimating the importance of not hitting things.

      If you look at the statistics on fatal car accidents, 85%+ involve collisions with stationary objects or other road users.

      Nobody's suggesting getting rid of machine vision or ML - just that if you've got an ML+vision system that gets in 1 serious accident per 200,000 miles, adding LIDAR could improve that to 1 serious accident per 2,000,000 miles.
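      Back-of-envelope on that tenfold figure, assuming the lidar check fails independently of vision (both rates here are hypothetical, the 10% miss rate purely illustrative):

```python
# Hypothetical: vision alone causes 1 serious accident per 200,000 miles,
# and an independent lidar check misses 10% of those failure cases.
vision_failure_per_mile = 1 / 200_000
lidar_miss_rate = 0.1

# Under independence, both must fail together for an accident.
combined = vision_failure_per_mile * lidar_miss_rate
print(f"1 serious accident per {1 / combined:,.0f} miles")
```

      which reproduces the 1-per-2,000,000-miles improvement, under that independence assumption.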

      • ModernMech 3 days ago

        Because LIDAR can detect the object at the beginning of the perception pipeline, whereas a camera can only detect the object after an expensive and time-consuming ML inference process. By the time the camera even knows there's an object (if it does at all), the LIDAR would have had the car hitting its brakes. When you're traveling at 60 MPH, milliseconds matter.

        • losvedir 3 days ago

          Just to put numbers on it, 10 ms at 60 mph is just under a foot. I don't think that matters too much, but if we're talking 200 ms that's nearly 18 ft, which is substantial. I have no idea how long the ML pipeline is, though.
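          The arithmetic, for anyone who wants to plug in their own latency figure:

```python
MPH_TO_FTPS = 5280 / 3600  # feet per second in one mph (60 mph = 88 ft/s)

def distance_ft(speed_mph: float, latency_s: float) -> float:
    """Distance covered while the perception stack is still thinking."""
    return speed_mph * MPH_TO_FTPS * latency_s

print(round(distance_ft(60, 0.010), 2))  # 10 ms at 60 mph
print(round(distance_ft(60, 0.200), 1))  # 200 ms at 60 mph
```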

    • HPsquared 3 days ago

      It's an extra sensor you'd add into the mix, you'd still have cameras. Like the radar sensors. I think the reason Teslas don't have it, is because the sensor hardware was expensive a few years back. I assume they are much cheaper now.

      • ndsipa_pomu 3 days ago

        Tesla have also backed themselves into a corner by declaring that older models are hardware capable of FSD, so they can't easily add LIDAR to new models without being liable for upgrading/refunding previously sold models.

        • gizajob 3 days ago

          New for 2027 - ABSOLUTE Self Driving Pro Max!

        • bbarnett 3 days ago

          I thought they had it on some models already, then removed it on models after?

          edit: no, it was ultrasonic sensors. But this was likely object detection, and now it's gone.

    • mirsadm 3 days ago

      I don't know about you, but one of my primary goals when driving is not hitting things.

  • alex1138 3 days ago

    I've defended some of Musk because I think what he did for Twitter was completely necessary (showing Jay Bhattacharya that the old regime had put him on a trends blacklist, along with all the other people who got banned for no reason). But things like this are alarming (vision only, as opposed to multiple telemetry), and Tesla has already been accused of killing people through crashes. It's kind of amazing he's in charge of something like SpaceX (are we about to witness a fatal incident in space?)

    • oblio 3 days ago

      > showing Jay Bhattacharya that the old regime had put him on a trends blacklist, and all the other people who got banned for no reason

      He's doing the exact same thing and worse to people he doesn't like.

  • gcanyon 3 days ago

    The wiper system has nothing to do with self-driving -- it's based on total internal reflection in the glass: https://www.youtube.com/watch?v=TLm7Q92xMjQ

    • sean_bright 3 days ago

      Teslas do not use the rain sensors discussed in this video, they use cameras to detect rain.

      • gcanyon 3 days ago

        Oh good lord, why? This is a solved problem; why would they waste their time on it? Wait, I think I know the answer: Elon's famous (at SpaceX) for saying the most reliable part is no part. So maybe this is a consequence of that.

        In any case, thanks, TIL!

    • vel0city 3 days ago

      That's how every non-Tesla works. Teslas don't use this method, which is why their auto wipers have always been so bad compared to everyone else's.

  • torginus 3 days ago

    The mistakes you describe are issues of the AI system controlling the car, not of the cameras themselves. If you were watching the camera feed and teleoperating the vehicle, there's no way you'd phantom-brake at a sudden bit of glare.

    • petee 3 days ago

      Going from cameras to the human model: every morning on my way to work, humans suddenly slam their brakes because of sun in their eyes. If you can't see, you can't see. I think it's another good example of why cameras alone are not enough.

    • nosianu 3 days ago

      OP says nothing else???

      > this tells me volumes about what's going on in the computer vision system

      Emphasis:

      > computer vision system

  • chippiewill 3 days ago

    As someone who worked in this space, you are absolutely right, but also kind of wrong - at least in my opinion.

    The cold hard truth is that LIDARs are a crutch; they're not strictly necessary. We know this because humans can drive without a LIDAR. However, they are a super useful crutch: they give you super high positional accuracy (something that's not always easy to estimate in a vision-only system). Radars are also a super useful crutch because they give really good radial velocity. (Little anecdote: when we finally got the radars working properly at work, it made a massive difference to our car's ability to follow other cars comfortably, i.e. ACC.)

    Yes, machine learning vision systems hallucinate, but so do humans. The trick for Tesla would be to get it good enough that it hallucinates less than humans do (they're nowhere near yet; humans don't hallucinate very often).

    It's also worth adding that last I checked the state of the art for object detection is early fusion where you chuck the LIDAR and Radar point clouds into a neural net with the camera input so it's not like you'd necessarily have the classical methods guardrails with the Lidar anyway.
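
As a concrete illustration of the early-fusion idea above, here's a minimal sketch (every shape, value, and the toy "projection" is made up for illustration; real stacks use large images, calibrated projections, and learned networks):

```python
# Early fusion in miniature: project LIDAR returns into the camera frame
# as a sparse depth channel, then stack that channel with the RGB channels
# so a single network sees both modalities from its first layer.
H, W = 4, 6                                  # tiny "image" for illustration

# Camera input: 3 channels of H x W (constant 0.5 stands in for pixels)
rgb = [[[0.5] * W for _ in range(H)] for _ in range(3)]

# Fake LIDAR returns after projection: (row, col, depth in meters)
points = [(0, 1, 12.5), (2, 4, 7.1), (3, 3, 30.0)]
depth = [[[0.0] * W for _ in range(H)]]      # 1 sparse channel, 0 = no return
for r, c, d in points:
    depth[0][r][c] = d

fused = rgb + depth                          # 4 channels total: RGB + depth
print(len(fused), len(fused[0]), len(fused[0][0]))  # 4 4 6
```

The point of this layout is that the network can learn cross-modal features (e.g. "camera texture plus a hard depth return") rather than merging separate per-sensor detections afterwards, which is what distinguishes early fusion from the classical late-fusion guardrails.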

    Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018. Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible.

    • raincole 3 days ago

      > The cold hard truth is that LIDARs are a crutch

      The hard truth is there is no reason to limit machines to only the tools humans are biologically born with. Cars always have crutches that humans don't possess. For example, wheels.

      • dcchambers 3 days ago

        Exactly.

        In a true self-driving utopia, all of the cars are using multiple methods to observe the road and drive (vision, lidar, GPS, etc) AND they are all communicating with each other silently, constantly, about their intentions and status.

        Why limit cars to what humans can do?

      • mensetmanusman 2 days ago

        The hard truth is you are balancing cost benefit curves.

      • daveguy 3 days ago

        The "lidar is a crutch" excuse is such a fraud. Musk is doing it so he can make more money, because it's cheaper. That's it. Just another sociopath billionaire cutting corners at the expense of safety.

        The reason this is clear is because, except for a brief period in late 2022, Teslas have included some combination of radar and ultrasonic sensors. [0]

        [0] https://en.m.wikipedia.org/wiki/Tesla_Autopilot_hardware

      • profunctor 3 days ago

        The reason is cost, LIDAR is expensive.

        • kibwen 3 days ago

          This information is out of date. LIDAR costs are 10x less than they were a decade ago, and still falling.

          Turns out, when there's demand for LIDAR in this form factor, people invest in R&D to drive costs down and set up manufacturing facilities to achieve economies of scale. Wow, who could have predicted this‽

        • throwaway31131 3 days ago

          Cost is relative. LIDAR may be expensive relative to a camera or two, but it's very inexpensive compared to hiring a full-time driver. Crashes aren't particularly cheap either. Neither are insurance premiums.

        • DennisP 3 days ago

          Huawei has a self-driving system that uses three lidars, which cost $250 each (plus vision, radar, and ultrasound). It appears to work about as well as FSD. Here's the Out of Spec guys riding around on it in China for an hour:

          https://www.youtube.com/watch?v=VuDSz06BT2g

          • mensetmanusman 2 days ago

            Huawei received over $1 billion in grants from the Chinese government in 2023.

            Western countries might not be smart enough to keep R&D because Wall Street sees it as a cost center.

        • ModernMech 3 days ago

          You know what used to be expensive? Cameras. Then people started manufacturing them for the mass market and costs went down.

          You know what else used to be expensive? Structured light sensors. They cost $$$$ in 2009. Then Microsoft started manufacturing the Kinect for a mass market, and in 2010 price went down to $150.

          You know what's happened to LIDAR in the past decade? You guessed it, costs have come massively down because car manufacturers started buying more, and costs will continue to come down as they reach mass market adoption.

          The prohibitive cost for LIDAR coming down was always just a matter of time. A "visionary" like Musk should have been able to see that. Instead he thought he could outsmart everyone by using a technology that was not suited for the job, but he made the wrong bet.

          • jqpabc123 3 days ago

            but he made the wrong bet.

            This should be expected when someone who is *not* an experienced engineer starts making engineering decisions.

        • zbrozek 3 days ago

          It's not 2010 anymore. They will asymptotically reach approximately twice the price of a camera, since they need both a transmit and a receive optical path. Right now the cheapest of the good LiDARs are around 3-4x that. So we're getting close, and we're already within the realm of large-scale commercial viability.

        • uoaei 3 days ago

          That's ok, they're supposed to be. That's no excuse to rush a bad job.

          • revnode 3 days ago

            The point of engineering is to make something that’s economically viable, not to slap together something that works. Making something that works is easy, making something that works and can be sold at scale is hard.

            • uoaei 3 days ago

              That's not engineering, that's industry. It's important to distinguish the two.

              • revnode 3 days ago

                Engineering only exists within industry. Everything else is a hobby.

                • uoaei 3 days ago

                  That's simply not true. Engineering can exist outside industry. "Stuff costs money" is not a governing aspect of these kinds of things.

                  FOSS is the obvious counterexample to your absurdly firm stance, but so are many artistic pursuits that use engineering techniques and principles, etc.

                  • revnode a day ago

                    Industry includes FOSS and artistic endeavors, anything that’s done professionally.

                    My intent was to exclude research efforts, which are fundamentally different from engineering; engineering is a practical concern, not a "get it to just work" concern.

                    • uoaei a day ago

                      That's an interesting question, the question of whether engineering per se is strictly pragmatic. I personally think drawing a hard line between research and engineering is a misstep and relies too heavily on a bureaucratic kind of metaphysics.

            • waldarbeiter 3 days ago

              If it were easy, there would already be a car costing a few million that few can afford but that has solved AD. But there isn't.

              • revnode 3 days ago

                There is no market for such a thing. At that price point, you get a personal chauffeur. That's what rich people do, and a chauffeur can do things a self-driving system never can.

                • tialaramex 3 days ago

                  And the rich people who don't want a chauffeur like driving the car. They will buy a $10M car no problem, but they want driving that car to be fun because that's what they were paying for. They don't want you to make the driving more automatic and less interesting.

    • hudon 3 days ago

      > they're not strictly necessary. We know this because humans can drive without a LIDAR

      and propellers on a plane are not strictly necessary because birds can fly without them? The history of machines show that while nature can sometimes inspire the _what_ of the machine, it is a very bad source of inspiration for the _how_.

      • ethbr1 3 days ago

        Turns out intelligent design is quicker than evolutionary algorithms. ;)

    • goalieca 3 days ago

      > The cold hard truth is that LIDARs are a crutch, they're not strictly necessary. We know this because humans can drive without a LIDAR, however they are a super useful crutch.

      Crutch for what? AI does not have human intelligence yet, and let's stop pretending it does. There is no shame in that, though the word "crutch" implies there is.

      • spot5010 3 days ago

        I've never understood the argument against lidars (except cost, but even that you can argue can come down).

        If a sensor provides additional data, why not use it? Sure, humans can drive without lidars, but why limit the AI to human-like sensors?

        Why even call it a crutch? IMO It's an advantage over human sensors.

        • bayindirh 3 days ago

          > Sure, humans can drive without LIDARs...

          That's because our stereoscopic vision has vastly more dynamic range, focusing speed, and processing power than a computer vision system. Peripheral vision is very good at detecting movement, and central vision can process a tremendous amount of visual data without even trying.

          Even a state of the art professional action camera system can't rival our eyes in any of these categories. LIDARs and RADARs are useful and shall be present in any car.

          This is the top reason I'm not considering a Tesla. Brain dead insistence on cameras with small sensors only.

          • iknowstuff 3 days ago

            their cams have better dynamic range than your eyes, given they can just run multiexposure and u gotta squint for sunlight. focal point is infinite for driving.

            You’re not considering them even though they have the best adas on the market lmao suit yourself

            https://m.youtube.com/watch?v=2V5Oqg15VpQ

        • IgorPartola 3 days ago

          I don’t work in this field, so take this with a grain of salt.

          Quality of additional data matters. How often does a particular sensor give you false positives and false negatives? What do you do when sensor A contradicts sensor B?

          “3.6 roentgen, not great, not terrible.”
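
One textbook answer to "sensor A contradicts sensor B" (a hedged sketch of the general statistics, not anything any particular carmaker is known to do) is to weight each reading by its estimated reliability; under Gaussian noise assumptions, the inverse-variance weighted average is the minimum-variance combined estimate:

```python
# Fuse two noisy measurements of the same quantity by weighting each
# with the inverse of its variance (less noise => more weight).
def fuse(z_a, var_a, z_b, var_b):
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    est = (w_a * z_a + w_b * z_b) / (w_a + w_b)
    var = 1.0 / (w_a + w_b)   # fused estimate is more certain than either input
    return est, var

# Toy numbers: a camera range estimate of 20 m with variance 4.0, and a
# LIDAR estimate of 24 m with variance 1.0. The fused value lands much
# nearer the more trusted LIDAR reading.
est, var = fuse(20.0, 4.0, 24.0, 1.0)
print(est, var)  # 23.2 0.8
```

The hard part the comment gestures at is exactly the inputs this sketch takes for granted: knowing the variances in the first place, and knowing when a sensor isn't just noisy but outright wrong.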

          • giveita 3 days ago

            You can say that about human hearing and balance. What if they conflict with visual? We are good at figuring it out.

            • IgorPartola 2 days ago

              Humans can be confused in a number of ways. So can AI. The difference is that we know pretty well how humans get confused. AI gets confused in novel and interesting ways.

              • giveita 2 days ago

                Does removing a sense help in that regard (for car driving)?

                Probably comes down to lidar (and AI) failure modes.

                • IgorPartola 2 days ago

                  I suspect it helps in engineering the system. If you have 30 different sensors, how do you design a system that accounts for seemingly random combinations of them disagreeing with an observation in real time, if a priori you don't know the weight of each observation in that particular situation? For humans, for example, you know that in most cases seeing something in a car matters more than smelling something. But what if one of your eyes sees a pedestrian and the other sees the shadow of a bird?

                  Also don’t forget that as a human you can move your head any which way, and also draw on your past experiences driving in that area. “There is always an old man crossing the road at this intersection. There is a school nearby so there might be kids here at 3pm.” That stuff is not as accessible to a LIDAR.

            • ben_w 3 days ago

              We throw up: an evolved response, because that conflict is a symptom of poisonous plants messing with us.

      • lazide 3 days ago

        I think they meant crutch for the AI so they could pretend for investors that AGI is right around the corner haha

    • jfim 3 days ago

      LIDARs have the advantage that they allow detecting solid objects that have not been detected by a vision-only system. For example, some time ago, a Tesla crashed into an overturned truck, likely because it didn't detect it as an obstacle.

      A system based only on cameras is only as good as its ability to recognize all road hazards, with no fallback if that fails. With LIDAR, the vehicle might not know what the solid object in front of it is, but it knows that it's there and should avoid running into it.

      • sandworm101 3 days ago

        Solid objects that aren't too dark or too shiny. Lidar is very bad at detecting mirrored surfaces or non-reflecting structures that absorb the particular frequency in use. The back ends of trucks hauling liquid are particularly bad. Block out the bumper/wheels, say by a slight hill, and that polished cone is invisible to lidar.

        • bayindirh 3 days ago

          Add one or a couple of RADAR(s), too. European cars use this one weird trick to enable tons of features without harming people or cars.

        • UltraSane 3 days ago

          LIDAR works by measuring the time it takes for light to return, so I don't understand how an object can be too reflective. Objects that absorb the specific wavelength the LIDAR uses are an obvious problem.

          • sandworm101 3 days ago

            Too reflective, like a flat mirror, will send the light off in a random direction rather than back at the detector. Worse yet, things like double reflections can result in timing errors as some of the signal follows a longer path. You want a target that is nicely reflective but not so shiny that you get double reflections. The ideal is a matte surface painted the same color as the laser.

            • UltraSane 3 days ago

              Ah, it relies on diffuse reflections to guarantee some light returns to the sensor, while specular reflections mean none may be returned.

              This is a good example of why sensor fusion is good.
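
For reference, the time-of-flight arithmetic described above boils down to this (a toy sketch; actual sensor timing and units vary):

```python
# LIDAR ranging: time a pulse's round trip, halve it, multiply by c.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s):
    """Distance to target given the measured round-trip time in seconds."""
    return C * round_trip_s / 2.0

# An echo ~200 nanoseconds after the pulse puts the target ~30 m away.
print(round(range_from_echo(200e-9), 2))  # 29.98
# A specular (mirror-like) surface can deflect the pulse so no echo ever
# returns: there is no time to measure, hence no detection at all.
```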

    • lazide 3 days ago

      The big promise of autonomous self-driving was that it would be done safer than humans.

      The assumption was that with similar sensors (or practically worse - digital cameras score worse than eyeballs in many concrete metrics), ‘AI’ could be dramatically better than humans.

      At least with Tesla’s experience (and with some fudging based on things like actual fatal accident data) it isn’t clear that is actually what is possible. In fact, the systems seem to be prone to similar types of issues that human drivers are in many situations - and are incredibly, repeatedly, dumb in some situations many humans aren’t.

      Waymo has gone full LiDAR/RADAR/Visual, and has had a much better track record. But their systems cost so much (or at least used to), that it isn’t clear the ‘replace every driver’ vision would ever make sense.

      And that is before the downward pressure on the labor market started to happen post-COVID, which hurts the economics even more.

      The current niche of Taxis kinda makes sense - centrally maintained and capitalized Taxis with outsourced labor has been a viable model for a long time, it lets them control/restrict the operating environment (important to avoid those bad edge cases!), and lets them continue to gather more and more data to identify and address the statistical outliers.

      They are still targeting areas with good climates and relatively sane driving environments because even with all their models and sensors, heavy snow/rain, icy roads, etc. are still a real problem.

      • tialaramex 3 days ago

        This whole "But Waymo can't work in bad climates" thing is very dubious. At some point it is too dangerous to be driving an automobile. "But Waymo should also be dangerous" is the wrong lesson.

        When the argument was Phoenix is too pleasant I could buy that. Most places aren't Phoenix. But SF and LA are both much more like a reasonable place other humans live. It rains, but not always, it's misty, but not always. Snow I do accept as a thing, lots of places humans live have some snow, these cities don't really have snow.

        However for ice when I watch one of those "ha, most drivers can make this turn in the ice" videos I'm not thinking "I bet Waymo wouldn't be able to do this" I'm thinking "That's a terrible idea, nobody should be attempting it". There's a big difference between "Can it drive on a road with some laying snow?" and "Can it drive on ice?".

        • lazide 3 days ago

          You know how I can tell you haven’t actually lived in a bad climate?

          Both SF and LA climates are super cushy compared to say, Northern Michigan. Or most of the eastern seaboard. Or even Kansas, Wyoming, etc. in the winter.

          In those climates, if you don't drive in what you're calling "nobody should be attempting it" weather, you starve to death in your house over the winter. Because many months are just like that.

          Self driving has a very similar issue with the vast majority of, say, Asia. Because similarly “this is crazy, no one should be driving like this conditions” is the norm. So if it can’t keep up, it’s useless.

          Eastern and far Northern Europe has a lot of kinda similar stuff going on.

          Self driving cars are easy if you ignore the hard parts.

          In India, I’ve had to deal with Random Camel, missing (entire) road section that was there yesterday, 5 different cars in 3 lanes (plus 3 motorcycles) all at once, many cattle (and people) wandering in the road at day and night, and the so common it’s boring ‘people randomly going the wrong way on the road’. If you aren’t comfortable bullying other drivers sometimes to make progress or avoid a dangerous situation, you’re not getting anywhere anytime soon.

          All in a random mix of flooding, monsoon rain, super hot temperatures, construction zones, fog, super heavy fireworks smoke, etc. etc.

          Hell, even in the US I’ve had to drive through wildfires and people setting off fireworks on the road (long story, safety reasons). The last thing I would have wanted was the car freezing or refusing.

          Is that super safe? Not really. But life is not super safe. And a car that won’t help me live my life is useless to me.

          Such an AI would of course be a dangerous asshole on, say, LA roads, of course. Even more than the existing locals.

          • tialaramex 2 days ago

            This idea that they're somehow ignoring the hard parts is very silly. The existing human drivers in San Francisco manage to kill maybe 20 or so people per year so apparently it's not so "easy" that the human drivers can do it without killing anybody.

            I live in the middle of a city, so, no, in terrible weather just like great weather I walk to the store, no need to "starve to death" even if conditions are too treacherous for people to sensibly drive cars. Because I'm an old man, and I used to live somewhere far from a city, I have had situations where you can't use a car to go fetch groceries because even if you don't care about safety the car can't go up an icy hill, it loses traction, gravity takes over, you slide back down (and maybe wreck the car).

            • lazide 2 days ago

              So why do you think they're only in those cities? Because I'm hearing nothing from you that goes beyond "nuh uh" so far.

              Because as an old man who has actually lived in all these places, and who has ridden in Waymos and had friends on the Waymo team in the past, I find your comments pretty ridiculous.

              • tialaramex 2 days ago

                Unlike Phoenix the choice of SF and LA seems to me like a PR choice. SF is where lots of tech nerds live and work, LA is one half of the country's media. I'd imagine that today if you're at all interested in this stuff and live in LA or SF you have ridden Waymo whereas when it was in a Phoenix suburb that's a very niche thing to go do unless you happened to live there.

                A lot of the large population centres in the US are in these what you're calling "super cushy" zones where there's not much snow let alone ice. More launches in cities in Florida, Texas, California will address millions more people but won't mean more ice AFAIK. So I guess for you the most interesting announcement is probably New York, since New York certainly does have real snow. 2026 isn't that long, although I can imagine that maybe a President who thinks he's entitled to choose the Mayor of New York could mess that up.

                As to the "But people in some places are crazy drivers" I saw that objection from San Francisco before it was announced. "Oh they'll never try here, nobody here drives properly. Can you imagine a Waymo trying to move anywhere in the Mission?". So I don't have much time for that.

    • davidhs 3 days ago

      > Yes machine learning vision systems hallucinate, but so do humans.

      When was the last time you had full attention on the road and a reflection of light made you super confused and suddenly drive crazy? When was the last time you experienced objects behaving erratically around you, jumping in and out of place, and perhaps morphing?

      • hodgesrm 3 days ago

        Well there is strong anecdotal evidence of exactly this happening.

   We were somewhere around Barstow on the edge of the desert when the drugs began to take hold. I remember saying something like, “I feel a bit lightheaded; maybe you should drive . . .” And suddenly there was a terrible roar all around us and the sky was full of what looked like huge bats, all swooping and screeching and diving around the car, which was going about 100 miles an hour with the top down to Las Vegas. And a voice was screaming: “Holy Jesus! What are these goddamn animals?” [0]

        [0] Thompson, Hunter S., "Fear and Loathing in Las Vegas"

        • fipar 3 days ago

          Hopefully we can expect FSD systems not to act like humans on hallucinogens though, right? :)

          • hodgesrm 2 days ago

            One hopes so. Many of the comments assume an ideal human driver, whereas real human drivers are frequently tired, distracted, intoxicated, or just crazy.

      • ben_w 3 days ago

        Many accidents are caused by low-angle light dazzle. It's part of why high beams aren't meant to be used off a dual carriageway.

        When was the last time you saw a paper bag blown across the street and mistook it for a cat or a fox? (Did you even notice your mistake, or do you still think it was an animal?)

        Do you naturally drive faster on wide streets and slower on narrow streets, because the distance to the side of the road changes your subconscious feeling of how fast you're going? Do you even know, or are you limited to your memories, rather than a dashcam whose footage can be reviewed later?

        etc.

        Now don't get me wrong, AI today is, I think, worse than humans at safe driving; but I'm not sure how much of that is that AI is more hallucinate-y than us vs. how much of it is that human vision system failures are a thing we compensate for (or even actively make use of) in the design of our roads, and the AI just makes different mistakes.

        • davidhs 3 days ago

          If the internal representation of Tesla Autopilot is similar to what the UI displays, i.e. the location of the car w.r.t. everything else, and we had a human whose internal representation were similar, with everything jumping around in consciousness, we'd be insane to allow him to drive.

          Self-driving is probably "AI-hard", as you'd need extensive "world knowledge" and the ability to reason about your environment and tolerate faulty sensors (the human eye is super crappy, with all kinds of things that obscure it, such as veins and floaters).

          Also, if the Waymo UI accurately represents what it thinks is going on “out there” it is surprisingly crappy. If your conscious experience was like that when you were driving you’d think you had been drugged.

          • ben_w 2 days ago

            I agree that if Tesla's representation of what their system is seeing is accurate, it's a bad system.

            The human brain's vision system makes pretty much the exact opposite mistake, which is a fun trick that is often exploited by stage magicians: https://www.youtube.com/watch?v=v3iPrBrGSJM&pp

            And is also emphasised by driving safety awareness videos: https://www.youtube.com/watch?v=LRFMuGBP15U

            I wonder what we'd seem like to each other, if we could look at each other's perception as directly as we can look at an AI's perception?

            Most of us don't realise how much we misperceive, because it doesn't feel different in the moment to perceive incorrectly; it can't feel different in the moment, because if it did, we'd notice we were misperceiving.

    • ethbr1 3 days ago

      > Anyway, I don't think Tesla were wrong to not use LIDAR - they had good reasons to not go down that route. They were excessively expensive and the old style spinning LIDARs were not robust. You could not have sold them on a production car in 2018.

      The correct move for Tesla would have been to split the difference and add LIDAR to some subset of their fleet, ideally targeted in the most difficult to debug environments.

      Somewhat like Google/Waymo are doing with their Jaguars.

      Don't LIDAR 100% of Teslas, but add it to >0%.

      • ACCount37 2 days ago

        Tesla did, in fact, use "ground truth vehicles" - vehicles that were owned and operated by Tesla itself, and had high performance LIDARs installed. They were used to collect the data to train the "vision-only" system and verify its performance.

        Reportedly, they no longer use this widely - but they still have some LIDAR-equipped "scout vehicles" they send into certain environments to collect extra data.

        • ethbr1 2 days ago

          It seems like an own goal not to sell these to some interested and targeted customers then.

          • ACCount37 a day ago

            Who would buy those and why? They don't use LIDARs for better self-driving somehow. They're just data harvesting units with wheels. And I don't think there's a large and underserved market for LIDARs on wheels.

            • ethbr1 a day ago

              > Who would buy those and why? [...] They're just data harvesting units with wheels.

              Tesla would subsidize them and offer them at the same price as non-LIDAR models, to select customers in target areas.

              And yes, you answered the second part of your own question.

    • marcos100 3 days ago

      I want my self-driving car to be a better driver than any human. Sure we can drive without LIDAR, but just look up the amount of accidents caused by humans.

      • paulryanrogers 3 days ago

        Humans cause one fatal accident per million miles. (They have no backup driver they can disengage to.) Now just look up how many disengagements per million miles Tesla has.

        • Eisenstein 3 days ago

          Can you make your point without the stat, or provide the stat for us please?

    • lukeschlather 3 days ago

      I had taken for granted that the cameras in the Tesla might be equivalent to human vision, but now I'm realizing that's probably laughable. I'm reading it's 8 cameras at 30fps, and it sounds like the car's bus can only process about 36fps total (not the 8x30 = 240fps theoretically available from the cameras if they had a better memory bus). It also seems plausible you would need at least 10,000 FPS to fully match human vision, especially taking into account that humans turn their heads, which in a CV situation could be analogous to the algorithm having 32x30 = 960 FPS available but typically only processing 140 frames per second from cameras pointing in a specific direction.

      So maybe LIDAR isn't necessary but also if Tesla were actually investing in cameras with a memory bus that could approximate the speed of human vision I doubt it would be cheaper than LIDAR to get the same result.
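
The gap the comment above describes can be made concrete with its own numbers (the figures are the commenter's estimates, not published Tesla specs):

```python
# Frames the cameras could supply vs. frames the claimed bus budget
# can actually process, using the comment's own estimates.
cams, fps = 8, 30
frames_available = cams * fps            # 240 frames/s offered by the cameras
frames_processed = 36                    # claimed processing budget, frames/s
fraction = frames_processed / frames_available
print(frames_available, fraction)        # 240 0.15
```

On those assumptions, the system would be looking at roughly 15% of the frames its cameras produce.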

      • tialaramex 3 days ago

        Mostly human vision is just violently different from a camera, but you could interpret that as a mix of better and worse.

        One of the ways it's better is that humans can sense individual photons. Not 100% reliably, but pretty well, which is why humans can see faint stars on a dark night without any special tools even though the star is thousands of light years away. On the other hand, our resolution for most of our field of vision is pretty bad. This is compensated for by changing what we're looking at: when we care about details, we can just look directly at them, and resolution is best right in the centre of the picture.

      • asats 2 days ago

        Also, human vision is backed by general intelligence, which those cameras very much are not.

    • DennisP 3 days ago

      You might not have the classical guardrails, but you are providing the neural net with a lot more information. Even humans are starting to find it useful to get inputs from other sensor types in their cars.

      I agree that Tesla may have made the right hardware decision when they started with this. It was probably a bad idea to lock themselves into that path by over-promising.

    • phinnaeus 3 days ago

      Humans have the most sophisticated processing unit in the known universe to handle the data from the eyes. Is the brain a crutch?

      • bayindirh 3 days ago

        At least for one marine creature, whose name I forget, the answer is yes. Said creature dissolves its brain the moment it can find a place to attach and call home.

        • chronogamous 3 days ago

          Can't think of the name at the moment either, but I'm pretty sure it only does so because it would be pointless to make any further decisions after attaching itself. It simply has no means to act on anything after that; the attaching is the only thing it 'does' in its life. After that, its only job, and only ability, is to be. Chose the wrong spot to attach and call home? A brain wouldn't make a bit of difference (unless regretting its one life choice is somehow useful during this stage of just being, stuck on the spot).

    • uoaei 3 days ago

      This impulse to limit robots to the capacities, and especially the form factors, of humans has severely limited our path to progress and a more convenient life.

      Robots are supposed to make up for our limitations by doing things we can't do, not do the things we can already do, but differently. The latter only serves to replace humans, not augment them.

    • DonHopkins 3 days ago

      I'd rather cars have crutches than the people they run over.

    • fluidcruft 3 days ago

      Musk's argument, "Humans don't have LIDAR, therefore LIDAR is useless," has always seemed pretty dumb to me. It ignores the possibility that LIDAR might deliver superhuman performance. And we also know you can get superhuman performance on certain tasks with insect-scale brains. Musk is just spewing marketing crap that stoners think is deep, not actual engineering savvy.

      (and that's not even addressing that human vision is fundamentally a weird sensory mess full of strange evolutionary baggage that doesn't even make sense except for genetic legacy)

      • mixedbit 3 days ago

        Musk's argument also ignores intelligence of humans. The worst case upper bound for reaching human level driving performance without LIDAR is for AI to reach human level intelligence. Perhaps it is not required, but until we see self-driving Teslas performing as well as humans, we won't know this. Worst case scenario is that Tesla unsupervised self-driving is as far away as AGI.

    • maxerickson 3 days ago

      You could write a rant like this about 4 vs 3 wheels.

    • inciampati 3 days ago

      I wish I had radar eyes

      • UltraSane 3 days ago

        I want to see gamma rays, I want to hear X-rays, and I want to smell dark matter.

        • RaftPeople 3 days ago

          "I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate".

    • ModernMech 3 days ago

      > Vision systems were improving a lot back then so the idea you could have a FSD on vision alone was plausible.

      This was only plausible to people who had no experience in robotics, autonomy, and vision systems.

      Everyone knew LIDAR was the enabling technology thanks to the 2007 DARPA Urban challenge.

      But the ignoramus Elon Musk decided he knew better and spent the last decade-plus trashing the robotics industry. He set us back as far as safety protocols in research and development, caused the first death due to robotic cars, deployed them on public roads without the consent of the public by throwing around his massive wealth, lied consistently for a DECADE about the capabilities of these machines, defrauded customers and shareholders while becoming richer and richer, all to finally admit defeat while he still maintains that Tesla's future growth story lies in robotics. The nerve of this fucking guy.

  • zpeti 3 days ago

    If a human brain can tell the difference between sun glare and an object, machine learning certainly can.

    It’s already better at X-rays and radiology in many cases.

    Everything you are talking about is just a matter of sufficient learning data and training.

    • audunw 3 days ago

      1. A human has a lot more options to deal with things like sun glare. We can move our head, use shade, etc. And when it comes to certain aspects of dynamic range, human eyes are still better than cameras. Most of all, if we lose nearly all vision, we are intelligent enough to simulate the behaviour of most objects around us and react safely for the next few seconds.

      2. Human intelligence is much deeper than machine vision. We can predict a lot of things that machine vision has no hope of achieving without some kind of incredibly advanced multi-modal model, which is probably many years out.

      The most important thing is that Tesla/Elon absolutely had no way to know, and no reason to believe (other than as a way to rationalise a dangerously risky bet) that machine vision would be able to solve all these issues in time to make good on their promise.

      • mcv 3 days ago

        Not only do we have options to deal with it, we understand that it's a vision artefact, and not something real. We understand objects don't vanish or appear out of nowhere. We understand the glare isn't reality but is obstructing our view of reality. We immediately understand we're therefore dealing with incomplete information and compensate for that, including looking for other ways to see around the obstruction or fill in the gaps. Often without even thinking about it.

    • tsimionescu 3 days ago

      The human brain is the result of literal billions of years of evolution, across trillions of organisms. The "just" in your "just a matter of sufficient learning data and training" is doing a lot of work.

      • RaftPeople 3 days ago

        And the techniques our brain uses to generalize during learning appear to be orders of magnitude better than current ML methods.

    • jihadjihad 3 days ago

      This comment is a perfect illustration of the hubris of this technology in general.

    • threatofrain 3 days ago

      If you have cheat codes, then why not just use them instead of insisting on principle that our eyes are good enough? We see Waymo use the cheat codes, oh no. We also only have binocular vision, yet Tesla already uses more than two cameras, so I guess Tesla is already okay with superhuman cheat codes.

    • tomasphan 3 days ago

      We not only use our vision when driving but also our other senses. We can tell the sun is shining at us because it warms our skin. This all happens subconsciously. Humans are vastly superior drivers in general; it's just that 50% of humans are bad drivers.

    • stevage 3 days ago

      It's a big if, no? Humans do struggle with sun glare. It'd be great if cars were much better.

dlcarrier 3 days ago

This looks to me like they are acknowledging that their claims were premature, possibly due to claims of false advertising, but are otherwise carrying forward as they were.

Maybe they'll reach level 4 or higher automation, and will be able to claim full self driving, but like fusion power and post-singularity AI, it seems to be one of those things where the closer we get to it, the further away it is.

  • sschueller 3 days ago

    Premature? Is that what we call this now? It's straight up fraud!

    Others are in prison for far less.

    • tombert 3 days ago

      I was about to say this. Elon would go on stage and say something like “and this is something we can do today”, or “coming next year” in 2018. The crowd goes wild, the stock price shoots up.

      The first time could be an honest mistake, but after a certain point we have to assume that it’s just a lie to boost the stock price.

      • mlindner 3 days ago

        The stock price hasn't dropped though, the opposite rather.

        • tombert 3 days ago

          I know. That’s my point; he just goes on stage and lies, the stock price goes up and it doesn’t appear to correct itself despite the boost being based on a lie.

        • matthewdgreen a day ago

          I don’t care about the stock price, but I guess I wouldn’t mind a refund for the FSD I purchased in 2018 that wasn’t actually delivered. The high stock price just means the company should have more capital available to issue those refunds?

    • tejohnso 2 days ago

      I'm not sure it's fraud because there was always the fine print. But a company selling a car with a feature called Full Self Driving that does not in fact fully self drive, well, that's a company I don't buy from. Unfortunately others don't seem as offended and happily pay for the product, encouraging further b.s. marketing hype culture.

      Just like politicians, it seems there's no repercussions for CEO's lying as long as it's fleecing the peons and not the elite.

    • dlcarrier 3 days ago

      Not in the US. There's a whole bureaucracy of advertising boards where a false advertising case can be heard and appealed before anyone with legal authority would even look at it, which pretty much never happens. Even then, it's a tort, so punishment beyond fines is pretty much nonexistent.

    • solardev a day ago

      It's only illegal when the insufficiently rich do it.

      • DrillShopper 6 hours ago

        Always has been

        It's a big club, and we ain't in it.

  • dreamcompiler 3 days ago

    Not gonna happen as long as Musk is CEO. He's hard over on a vision-only approach without lidar or radar, and it won't work. Companies like Waymo that use these sensors and understand sensor fusion are already eating Tesla's lunch. Tesla will never catch up with vision alone.
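      For what it's worth, the core idea behind sensor fusion fits in a few lines. Here's a minimal, illustrative sketch (all numbers invented, not from any real vehicle) of inverse-variance weighted fusion of a noisy camera range estimate with a precise lidar return:

      ```python
      # Inverse-variance fusion of two independent range estimates.
      # The lower-variance sensor dominates the fused result.
      def fuse(z_cam, var_cam, z_lidar, var_lidar):
          w_cam = 1.0 / var_cam
          w_lidar = 1.0 / var_lidar
          z = (w_cam * z_cam + w_lidar * z_lidar) / (w_cam + w_lidar)
          var = 1.0 / (w_cam + w_lidar)
          return z, var

      # Camera: noisy 48 m estimate; lidar: precise 50.2 m return.
      z, var = fuse(48.0, 4.0, 50.2, 0.01)
      # z ≈ 50.19 m: the estimate is dominated by the low-variance lidar,
      # and the fused variance is lower than either sensor alone.
      ```

      The same principle, generalized over time, is what a Kalman filter does; the point is that a second independent sensor never makes the estimate worse.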

    • Rohansi 3 days ago

      While I don't think vision-only is hopeless (it works for human drivers) the cameras on Teslas are not at all reliable enough for FSD. They have little to no redundancy and only the forward facing camera can (barely) clean itself. Even if they got their vision-only FSD to work nicely it'll need another hardware revision to resolve this.

      • vbezhenar 3 days ago

        I feel like our AI research in the physical world falls so far behind language-level AI that our reasoning might be clouded.

        Compare a Boston Dynamics robot and a cat. They are on absolutely different levels in their bodies and their ability to manipulate those bodies.

        I have no doubt that using cameras only would eventually work for AI cars, but at the same time I feel that this kind of AI is not there yet. And if we want autonomous cars, it might be possible, but we need to equip them with as many sensors as necessary, not set artificial boundaries.

        • threatofrain 3 days ago

          But lidar is basically a cheat code, whether or not optical is sufficient. Why wait for end stage driving AI? Why not use cheat codes and wait for cheaper technology later?

          • Rohansi 2 days ago

            I honestly think Tesla is past the point where lidar would provide significant benefits. I've tried FSD for a month or two and it can see everything but just drives like an idiot. Lidar isn't going to help it merge properly, change lanes smoothly, take left turns at lights without blocking traffic, etc.

            Check out what the Tesla park assist visualization shows now. It's vision based and shows a 3D recreation of the world around the car. You can pan around to see what's there and how far away it is. It's fun to play around with in drive thrus, garages, etc. just to see what it sees.

            • threatofrain 2 days ago

              It should help for disambiguating scenarios that lead to phantom stops or not stopping on time, which has killed Tesla drivers before, such as by driving full speed into the back of a truck with some glare.

              • Rohansi 2 days ago

                Maybe? I don't remember the cases but there is some confusion with autopilot (cruise control) vs. FSD sometimes. Autopilot is a completely different system and nobody should be surprised if it leads to crashes when misused.

      • moogly 3 days ago

        > While I don't think vision-only is hopeless (it works for human drivers)

        I guess you don't drive? You use more senses than just vision when driving a car.

        • figassis 3 days ago

          Behavioral and pattern analysis is always in full overdrive when I drive. I drive in Africa, where people never follow rules. Red lights at crossings mean "go" for bikers. When there are no lights you can't just give right of way, or you'll never move. When nearing intersections, people accelerate so they can cross before you, and it's a long line, and they know you have right of way, so they accelerate to scare you into stopping. Amateurs freeze and hold up the line for a very long time, usually until a traffic officer shows up to unblock it (multiply this by every intersection). To get anywhere, you have to play the same game and get close enough that they aren't sure you'll stop, in which case they'd hit you and have to pay. So at crossings you're constantly in near misses until they realize you're not going to stop, and then they do. Everyone is skilled enough to do this daily.

          Your senses, your risk analysis, your spider sense are fully overclocked most of the time. And then there are all the other crazy things that happen: the insane lane zig-zagging people do, bikers out of nowhere at night with no lights, memorizing all the potholes on every road in the city because they aren't illuminated at night so you can still drive at 80-120 km/h, etc.

          So no, it's not just your eyes. Lots of sensors, memory, processing, and mapping are required.

        • bhaney 3 days ago

          Personally, I can smell a left turn signal from nearly three blocks away

          • okr 3 days ago

            The spider crawling out of the back of the car mirror has seen things that are far beyond what I will ever experience visually!

        • Rohansi 3 days ago

          And which ones can't be replicated with hardware?

          • scrollaway 3 days ago

            Even without leaving the vision sense, there are features of vision Tesla doesn't properly try to replicate. Depth perception, for example (it does depth perception very differently from humans).

            You also do use your ears when driving.

            • rogerrogerr 3 days ago

              Binocular depth perception stops being useful somewhere around 10 meters. Your brain is mostly driving using the “computed” depth perception based on the flat image it’s getting. Same way Tesla is getting a depth map.

              Provable by one-eyed people being able to drive just fine, as could you with one eye covered.
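              The ~10 m figure is consistent with back-of-envelope stereo geometry: depth from disparity is Z = f·B/d, so for a fixed disparity error the depth error grows roughly with Z². A rough sketch (the focal length and one-pixel disparity resolution below are assumed values, not measurements):

              ```python
              # Pinhole stereo model: Z = f * B / d.
              # Differentiating: dZ ≈ (Z**2 / (f * B)) * d_err,
              # i.e. depth error grows quadratically with distance.
              f_px = 1400.0       # focal length in pixels (assumed)
              baseline_m = 0.065  # human inter-pupillary distance, ~6.5 cm
              d_err_px = 1.0      # ~1 px disparity resolution (assumed)

              def depth_error(z_m):
                  return z_m ** 2 / (f_px * baseline_m) * d_err_px

              for z in (2, 10, 50):
                  print(z, round(depth_error(z), 2))
              # Roughly 0.04 m of error at 2 m, over 1 m at 10 m,
              # and tens of meters at 50 m: stereo is useless at road distances.
              ```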

            • vbezhenar 3 days ago

              One-eyed people are allowed to drive.

              • scrollaway 3 days ago

                What’s your point? I was answering a question, not making a statement about any disabilities.

                • vbezhenar 3 days ago

                  My point is that "hardware" depth perception is not necessary for successful driving. Just one camera should be enough, rest is algorithms.

                  • ModernMech 2 days ago

                    Eyes are not cameras they are extensions of the brain. That people can drive with one eye is not a "proof of concept" that cars should be able to drive with one camera. You'll need a human brain to go along with it. Unfortunately for Tesla, they seem to be short on supply of those at the moment.

                    • rogerrogerr 2 days ago

                      So your assertion is that a human with access to arbitrarily good camera feeds could not drive a car at level 5? That something magical is happening because the eyes are close topographically to the brain? Sounds implausible.

                      • ModernMech 2 days ago

                        How does the human consume the arbitrarily good camera feeds?

                        > That something magical is happening because the eyes are close topographically to the brain?

                        It sounds to me like you need to study what eyes actually are. It's not about proximity or magic; they are a part of your brain, and we're only beginning to understand their complexities. Eyes are not just sensory organs, so the analogy to cameras is way off. They are able to discern edges, motion, color, and shapes, as well as correct errors, before your brain is even aware.

                        In robotics, we only get this kind of information after the camera image has been sent through a perception pipeline, often incurring a round trip through some sort of AI and a GPU at this point.

                        > Sounds implausible.

                        Musk just spent billions of dollars and the better part of a decade trying to prove the conjecture that "cameras are sufficient", and now he's waving the white flag. So however implausible it sounds, it's now more implausible than ever that cameras alone are sufficient.

                      • JumpCrisscross 2 days ago

                        > your assertion is that a human with access to arbitrarily good camera feeds could not drive a car at level 5?

                        No. I live in snow country. Folks with vestibular issues are advised to pull over in snowstorms because sometimes the only indication that you have perpendicular velocity and are approaching a slide off the road or spin is that sense. My Subaru has on more than one occasion noticed a car before I did based on radar.

                        Vision only was a neat bet. But it will cost Tesla first to market status generally and especially in cities, where regulators should have fair scepticism about a company openly trying to do self driving on the cheap.

                        • rogerrogerr a day ago

                          Teslas definitely have accelerometers/gyros, and have access to the torque and RPM on every wheel. It has a much better picture of the 3D motion of the car relative to the road than any human driver.
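                          As a rough sketch of the kind of check those sensors enable (the function and threshold here are hypothetical, not Tesla's actual logic): a car tracking its heading with no sideslip should see a lateral acceleration of about v·ω, so a large residual between that prediction and the measured IMU value flags a slide:

                          ```python
                          # Hypothetical slip check: compare measured lateral acceleration
                          # with the value predicted from speed and yaw rate.
                          # For a vehicle with no sideslip: a_lat ≈ v * omega.
                          def slipping(v_mps, yaw_rate_rps, a_lat_meas, threshold=0.5):
                              a_lat_pred = v_mps * yaw_rate_rps
                              return abs(a_lat_meas - a_lat_pred) > threshold

                          # Gentle highway curve, tires gripping: predicted ≈ measured.
                          print(slipping(30.0, 0.05, 1.45))  # residual 0.05 -> False
                          # Same curve, but the car is sliding sideways:
                          print(slipping(30.0, 0.05, 3.2))   # residual 1.7 -> True
                          ```

                          This is essentially what stability-control systems have done for decades; it detects the slide, though deciding how to recover from it is a separate problem.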

                          • ModernMech a day ago

                            Dynamics don't help when you are blinded by the sun or can't discern the broadside of a firetruck.

                            • rogerrogerr a day ago

                              Cameras can clearly discern the broadside of a firetruck. Whether some earlier build didn't detect one doesn't change that firetrucks reflect plenty of photons to be detectable.

                              I'm consistently surprised by how immune to sun-blindness my car is. It regularly reads traffic lights that have the sun right next to them; I've never seen any discernible degradation due to too much light, too little light, or bad contrast of any kind.

                              You're just bringing up a never-ending stream of but-what-abouts, so I'm done refuting them after this. It's not a good use of my time.

                              • ModernMech 16 hours ago

                                Your personal experience with your car doesn't change that Tesla is waving the white flag due to the fact the sensor system Musk insisted on has caused deaths and is too unreliable to deliver full autonomy. The sun has confounded Tesla autonomy since its inception, and its shortcomings caused multiple decapitations: https://www.latimes.com/business/la-fi-tesla-florida-acciden...

                                > You're just bringing up a never-ending stream of but-what-abouts

                                By "what abouts" you of course mean "shortcomings of camera-only systems that make them unsuitable for full autonomy."

                                > It's not a good use of my time.

                                No it's not, it's a losing battle, and Musk has admitted it. Camera-only systems will not enable full self driving. Y'all got scammed.

            • mlindner 3 days ago

              Teslas have and use microphones.

              • scrollaway 3 days ago

                GP asked specifically about a vision-only approach. Vision-only means no microphones, regardless of whether Tesla has any…

                What is up with hn today? Was there a mass stroke?

                • rogerrogerr 3 days ago

                  “Vision-only” colloquially means no LIDAR and other expensive sensors, not the exclusion of microphones (which are hilariously cheap).

            • dtj1123 3 days ago

              One eyed, deaf people can drive

            • gizajob 3 days ago

              Deaf people can drive fine.

          • moogly 3 days ago

            Ask Musk; he's the one who claims that sensor fusion does not work.

        • terminalshort 3 days ago

          Yeah, but you can drive on vision alone. Deaf people are allowed to drive just the same as anyone else.

          • asadotzler 2 days ago

            It's not just hearing. I can "feel" in the seat of my pants, the pull of the steering wheel, etc. I have a vestibular system that knows about relative velocities and changes, which coordinates with my other senses, and more. This all allows me to take in far more than what my eyes see or my ears hear, and to build the correct intuitions and muscle memories to get good at driving and adapt to new driving environments.

        • ndsipa_pomu 3 days ago

          > You use more senses than just vision when driving a car

          Deaf drivers (may include drivers playing loud music too) don't, unless they're somehow tasting the other vehicles.

          • ChrisMarshallNY 3 days ago

            We have these things called "inner ears." I'm pretty sure deaf people have them, too.

            Nature's accelerometers.

            I've had mine go bad, and it wasn't fun.

            Just sayin'...

            • ndsipa_pomu 3 days ago

              Were you unable to drive when your inner ears weren't functioning?

              • ChrisMarshallNY 3 days ago

                I guess so.

                I was unable to stand up.

                • ndsipa_pomu 3 days ago

                  Sounds horrible. I can understand that stopping you from cycling, but if you could have managed to sit in a car, would you have been able to drive it? I can imagine that inner ear issues can sometimes affect vision too as my wife suffered from positional vertigo for a while and I could see her eyes flicking rapidly when she was getting dizzy. (I did find a helpful YouTube video about a sequence of positions to put the sufferer through which basically helps to remove the otoliths from the ear canal).

                  • robocat 2 days ago

                    When the vertigo is bad, you can't even go as a passenger in the car because the movement is literally sickening.

                    Even driving with mild vertigo could be difficult because you want to restrict your head movement.

                    Source: my dad gets Benign paroxysmal positional vertigo (BPPV)

                    • ndsipa_pomu 2 days ago

                      I'd recommend him trying the Epley Manoeuvre as it's quick and easy to do (needs someone to help though) and is unlikely to make anything worse.

                      • robocat 18 hours ago

                        Thanks. I've tried to encourage him to learn it. He's stubborn and isn't interested. He's had physio do it when he was hospitalized...

                        He's mentally sharp, and has a science background, but nope!

          • vel0city 3 days ago

            There are more than three senses.

            • ndsipa_pomu 3 days ago

              Yes and they're not really of much use in driving safely unless you're referring to some spidey-sense of danger.

              • vel0city 3 days ago

                I'm using inertial senses from my inner ear. I feel the suspension through the seat. I feel feedback through the steering wheel. I can feel the g forces pulling on my body.

                • ndsipa_pomu 3 days ago

                  Yes, but in what specific circumstances do they change your driving behaviour? If you weren't able to feel the suspension through your seat, how would your driving become less safe?

                  • vel0city 3 days ago

                    One quick obvious example, they put tactile features on the road specifically so you can feel them. Little bumps on lane markers. Rumble strips on the boundaries. Obvious features like that.

                    While it doesn't often snow or ice up here (it does sometimes), it does rain a good bit from time to time. You can usually feel your car start to hydroplane and lose traction well before anything else goes wrong. It's an important thing to feel but you wouldn't know it's happening if you're going purely on vision.

                    You can often feel when there's something wrong with your car. Vibrations due to alignment or balance issues. Things like that.

                    Those are quick examples off the top of my head. I'm sure there are more.

                    Of course, all these things can be tracked with extra sensors, I'm not arguing humans are entirely unique in being able to sense these things. But they are important bits of feedback to operate your car safely in a wide range of conditions that you probably will encounter, and should be accounted for in the model.

                    As for auditory feedback, while some drivers don't have sound input available to them (whether they're deaf or their music is too loud or whatever) sound is absolutely a useful input to have. You may hear emergency vehicles you cannot see. You may hear honking alerting you to something weird going on in a particular direction. You may hear issues with your car. Those rumble strips are also tuned to be loud when cars run over them as well. You can hear the big wind gusts and understand those are the source of weird forces pushing the car around as opposed to other things making your car behave strangely. So sure, one can drive a car without sound, but its not better without it.

                  • MangoToupe 3 days ago

                    Pretty much all of them. The difference between driving a car and playing a video game is remarkable.

                    But that's sort of beside the point: why would you not use additional data when the price of the sensors is baked into the feature you're selling?

              • tombert 3 days ago

                I am not 100% sure which “sense” this would be, but when I drive I can “feel” the texture of the road and intuit roughly how much traction I have. I’m not special, every driver does this, consciously or not.

                I am not saying that you couldn’t do this with hardware, I am quite confident you could actually, but I am just saying that there are senses other than sight and sound at play here.

                • ndsipa_pomu 3 days ago

                  Whilst that might feel reassuring, that you're getting tactile feedback, I doubt there are many situations apart from driving on snow and ice where it's of much use. Fair enough if you're aiming for a lap record round a track, but otherwise you shouldn't be anywhere near the limit of traction of your tyres.

                  • tombert 3 days ago

                    Snow, ice, and rain are cases that still need to be accounted for so that really doesn’t dispel anything I said.

        • renewiltord 3 days ago

          We allow deaf people to drive, but not people who are entirely blind. This means vision is necessary and sufficient.

          The problem is clearly a question of the fidelity of the vision and our ability to slave a decision maker and mapper to it.

      • bkettle 3 days ago

        > it works for human drivers

        Sure, for some definition of "works"...

        https://www.iihs.org/research-areas/fatality-statistics/deta...

        • Rohansi 3 days ago

          Vision is almost certainly not the main issue with humans as drivers.

          • NaomiLehman 3 days ago

            it's one of the reasons.

            • Rohansi 3 days ago

              For sure, but my phone camera sees better than I do. Cars can make use of better camera sensors and have more than two of them. You can't just extrapolate the conclusion that human vision bad = vision sensors bad.

              • NaomiLehman 3 days ago

                we can't conclude that LIDAR is better than a camera? Is it worth cutting the costs? LIDAR has everything that a camera has plus more.

              • shpx 2 days ago

                Cameras are nowhere near the fidelity and responsiveness of human eyes.

      • SalmoShalazar 3 days ago

        Such utter drivel. A camera is not the equivalent of human eyes and sensory processing, let alone an entire human being engaging with the physical world.

        • terminalshort 3 days ago

          Cameras are better than human eyes. Much better. There are areas in which they are worse, but that's completely outweighed by the fact that you are not limited to two of them and they can have a 360 degree field of vision.

          • FireBeyond 2 days ago

            What garbage. The human eye has about 20 stops of dynamic range. Cameras of the size that are in a Tesla are at about 12 stops. That's a lot of data they don't get. For just one thing. Human eyes can also adjust focal distance multiple times a second, which camera (lenses) have a harder time doing.
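            To put those stops in concrete terms (the 20- and 12-stop figures are the rough numbers cited above, not measurements): each stop is a factor of two in contrast, so an 8-stop gap is a 256× difference in the brightest-to-darkest ratio a single exposure can capture:

            ```python
            # Dynamic range in "stops" is log2 of the contrast ratio.
            eye_stops = 20  # commonly cited figure for the adapted human eye
            cam_stops = 12  # rough figure for a small automotive sensor

            eye_ratio = 2 ** eye_stops    # 1,048,576 : 1
            cam_ratio = 2 ** cam_stops    # 4,096 : 1
            gap = eye_ratio // cam_ratio  # 2**8 = 256x less usable contrast
            ```

            That factor of 256 is the difference between resolving detail in deep shadow next to direct glare, and having one of the two clip.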

            • terminalshort 2 days ago

              For one tiny portion of the 360 field of vision of cameras, yes. For the rest they have 0 stops.

        • Rohansi 3 days ago

          The best cameras are surely better than most peoples' eyes these days.

          Sensory processing is not matched, sure, but IMO how a human drives is more involved than it needs to be. We only have two eyes and they both look in the same direction. We need to continuously look around to track what's around us. It demands a lot of attention from us that we may not always have to spare, especially if we're distracted.

          • rcxdude 3 days ago

            >The best cameras are surely better than most peoples' eyes these days.

            Not on all metrics, especially not simultaneously. The dynamic range of human eyes, for example, is extremely high.

            • Rohansi 3 days ago

              The front camera Tesla is using is very good at this. You can drive with the sun shining directly into it and it will still detect everything 99% of the time, at least on my older Model 3. Way better than me, stuck looking at the pavement directly in front of the car.

              AFAIK there is also more than one front camera. Why would anyone try to do it all with one or two camera sensors like humans do it?

              It's important to remember that the cameras Tesla are using are optimized for everything but picture quality. They are not just taking flagship phone camera sensors and sticking them into cars. That's why their dashcam recordings look so bad (to us) if you've ever seen them.

          • kivle 3 days ago

            Well, Teslas use low cost consumer cameras. Not DSLRs. Bad framerate, bad resolution and bad dynamic range. Very far from human vision and easily blinded and completely washed out by sudden shifts in light.

            • matthewdgreen a day ago

              You can compare the size of the cameras used in Tesla with the size (of the lenses at least) on the Waymo rig, and they do not look like they’re in the same league, optically.

            • rogerrogerr 3 days ago

              I’m consistently surprised by how my Tesla can see a traffic light with the sun directly behind it. They seem to have solved the washout problem in practice.

      • mbrochh 2 days ago

        Uh... why don't they put the cameras... into the car (it works for human drivers)???

    • formercoder 3 days ago

      Humans drive without LIDAR. Why can’t robots?

      • cannonpr 3 days ago

        Because human vision has very little in common with camera vision and is a far more advanced sensor, on a far more advanced platform (ability to scan and pivot etc), with a lot more compute available to it.

        • torginus 3 days ago

          I don't think it's a sensors issue - if I gave you a panoramic feed of what a Tesla sees on a series of screens, I'm pretty sure you'd be able to learn to drive it (well).

        • lstodd 3 days ago

          yeah, try matching a human eye on dynamic range and then on angular speed and then on refocus. okay forget that.

            try matching a cat's eye on those metrics. and it is much simpler than the human one.

          • terminalshort 3 days ago

            Who cares? They don't need that. The cameras can have continuous attention on a 360 degree field of vision. That's like saying a car can never match a human at bipedal running speed.

          • dmos62 3 days ago

            I'm curious, in what ways is a cat's vision simpler?

            • lstodd a day ago

              less far sight, dichromatic color vision, over-optimized for low light.

              a cursory glance did not find studies on cat peripheral vision, but would assume it's worse than human if only because they rely more on audio

        • insane_dreamer 2 days ago

          The human sensor (eye) isn't more advanced in its ability to capture data -- and in fact cameras can have a wider range of frequencies.

          But the human brain can process the semantics of what the eye sees much better than current computers can process the semantics of the camera data. The camera may be able to see more than the eye, but unless it understands what it sees, it'll be inferior.

          Thus Tesla spontaneously activating its windshield wipers to "remove something obstructing the view" (happens to my Tesla 3 as well), whereas the human brain knows that there's no need to do that.

          Same for Tesla braking hard when it encountered an island in the road between lanes without clear road markings, whereas the human driver (me) could easily determine what it was and navigate around it.

      • phire 3 days ago

        Why tie your hands behind your back?

        LIDAR based self-driving cars will always massively exceed the safety and performance of vision-only self driving cars.

        Current Tesla cameras+computer vision is nowhere near as good as humans. But LIDAR based self-driving cars already have way better situational awareness in many scenarios. They are way closer to actually delivering.

        • kimixa 3 days ago

          And what driver wouldn't want extra senses, if they could actually meaningfully be used? The goal is to drive well on public roads, not some "Hands Tied Behind My Back" competition.

        • tliltocatl 3 days ago

          Because any active sensor is going to jam other such sensors once there are too many of them on the road. This is sad but true.

      • Sharlin 3 days ago

        And birds fly without radar. Still we equip planes with it.

      • apparent 3 days ago

        The human processing unit understands semantics much better than the Tesla's processing unit. This helps avoid what humans would consider stupid mistakes, but which might be very tricky for Teslas to reliably avoid.

      • randerson 3 days ago

        Even if they could: Why settle for a car that is only as good as a human when the competitors are making cars that are better than a human?

        • dotancohen 3 days ago

          Cost, weight, and reliability. The best part is no part.

          No part costs less, it also doesn't break, it also doesn't need to be installed, nor stocked on every dealership's shelf, nor can a supplier hold up production. It doesn't add wires (complexity and size) to the wiring harness, or clog up the CAN bus message queue (LIDAR is a lot of data). It also does not need another dedicated place engineered for it, further constraining other systems and crash safety. Not to mention the electricity used, a premium resource in an electric vehicle of limited range.

          That's all off the top of my head. I'm sure there are even better reasons out there.
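          The "LIDAR is a lot of data" point can be sanity-checked with rough numbers. All figures below are illustrative assumptions (a mid-range 64-beam spinning lidar and standard automotive bus rates), not specifics of any vehicle:

```python
# Rough bandwidth check: raw lidar stream vs. common automotive buses.
# All figures are illustrative assumptions, not any specific vehicle.

POINTS_PER_SEC = 1_300_000   # e.g. a 64-beam spinning unit (~1.3M pts/s)
BYTES_PER_POINT = 16         # x, y, z, intensity as 4-byte floats

lidar_mbps = POINTS_PER_SEC * BYTES_PER_POINT * 8 / 1e6  # megabits per second

CLASSIC_CAN_MBPS = 1            # classic CAN tops out at 1 Mbit/s
CAN_FD_MBPS = 5                 # typical CAN FD data-phase rate
ETHERNET_100BASE_T1_MBPS = 100  # single-pair automotive Ethernet

print(f"raw lidar stream: ~{lidar_mbps:.0f} Mbit/s")
print(f"fits classic CAN:  {lidar_mbps < CLASSIC_CAN_MBPS}")
print(f"fits CAN FD:       {lidar_mbps < CAN_FD_MBPS}")
print(f"fits 100BASE-T1:   {lidar_mbps < ETHERNET_100BASE_T1_MBPS}")
```

          Under these assumptions the raw stream is orders of magnitude beyond anything CAN can carry, and even exceeds 100 Mbit/s automotive Ethernet without compression.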

          • randerson 3 days ago

            These are all good points. But that just seems like it adds cost to the car. A manufacturer could have an entry-level offering with just a camera and a high-end offering with LIDAR that costs extra for those who want the safest car they can afford. High-end cars already have so many more components and sensors than entry-level ones. There is a price point at which the manufacturer can make them reliable, supply spare parts & training, and increase the battery/engine size to compensate for the weight and power draw.

            • terminalshort 3 days ago

              We already have that. Tesla FSD is the cheap camera-only option and Waymo is the expensive LIDAR option that costs ~$150K (last time I heard). You can't buy a Waymo, though, because the price is not practical for an individually owned vehicle. But eventually I'm sure you will be able to.

              • asadotzler 2 days ago

                LIDAR does not add $150K to the cost. Dramatically customizing a production car, and adding everything it needs costs $150K. Lidar can be added for hundreds of dollars per car.

                • dotancohen 2 days ago

                    > Lidar can be added for hundreds of dollars per car.
                  
                  Surprisingly, many production vehicles have a manufacturer profit of under one thousand dollars, so that LIDAR would eat a significant portion of the margin on the vehicle.

                  • matthewdgreen a day ago

                    But that’s sort of the point of the business model. Getting safe fully-self driving vehicles appears to require a better platform, given today’s limitations. You can achieve that better platform financially in a fleet vehicle where the cost of the sensors can be amortized over many rides, and the “FSD” capability translates directly into revenue. You can’t put an adequate sensor platform into a consumer vehicle today, which is what Tesla tried to promise and failed to deliver. Maybe someday it will be possible, but the appropriate strategy is to wait until that’s possible before selling products to the consumer market.

            • dotancohen 3 days ago

              Not with Teslas. There are almost no options on a Tesla - it's mostly just colours and wheels once you've selected a drivetrain.

          • dygd 3 days ago

            Teslas use automotive Ethernet for sensor data, which has much more bandwidth than CAN bus.

            • dotancohen 3 days ago

              But also higher latency. Teslas also use a CAN bus.

              But LIDAR would probably be wired more directly to the computer rather than going over a packet protocol.

      • systemswizard 3 days ago

        Because our eyes work better than the cheap cameras Tesla uses?

        • lstodd 3 days ago

          problem is, expensive cameras that Tesla doesn't use don't work either.

          • systemswizard 3 days ago

            They cost $20-60 to make per camera, depending on the vehicle year and model. They also charge $3000 per camera to replace them…

            • MegaButts 3 days ago

              I think his point was even if you bought insanely expensive cameras for tens of thousands of dollars, they would still be worse than the human eye.

            • terminalshort 3 days ago

              They charge $3000 for the hours of labor to take apart the car, pull the old camera out, put the new camera in, and put the car back together, not for the camera. You can argue that $3000 is excessive, but to compare it to the cost of the camera itself is dishonest.

            • dzhiurgis 3 days ago

              Fender camera is like $50 and requires 0 skill to replace. Next.

      • matthewdgreen a day ago

        I drove into the setting sun the other day and needed to shift the window shade and move my head carefully to avoid having the sun directly in my field of vision. I also had to run the wipers to clean off a thin film of dust that made my windshield difficult to see through. And then I still drove slowly and moved my head a bit to make sure I could see every obstacle. My Tesla doesn’t necessarily have the means to do all of these things for each of its cameras. Maybe they’ll figure that out.

      • dreamcompiler 3 days ago

        Chimpanzees have binocular color vision with similar acuity to humans. Yet we don't let them drive taxis. Why?

        • ikekkdcjkfke 3 days ago

          Chimpanzees are better than humans given a reward structure they understand. The next battlefield evolution: chimpanzees hooked up with intravenous cocaine modules running around with .50 cals.

        • ndsipa_pomu 3 days ago

          There are laws about mistreating animals. Driving a taxi would surely count as inhumane torture.

        • insane_dreamer 2 days ago

          they can't understand how to react to what they see the way humans do

          it has to do with the processing of information and decision-making, not data capture

      • zeknife 3 days ago

        I wouldn't trust a human to drive a car if they had perfect vision but were otherwise deaf, had no proprioception and were unable to walk out of their car to observe and interact with the world.

        • dotancohen 3 days ago

          And yet deaf people regularly drive cars, as do blind-in-one-eye people, and I've never seen somebody leave their vehicle during active driving.

          • zeknife 3 days ago

            I didn't mean that a human driver needs to leave their vehicle to drive safely, I mean that we understand the world because we live in it. No amount of machine learning can give autonomous vehicles a complete enough world model to deal with novel situations, because you need to actually leave the road and interact with the world directly in order to understand it at that level.

      • Waterluvian 3 days ago

        They can. One day. But nobody can just will it to be today.

      • rcpt 3 days ago

        We crash a lot.

        • insane_dreamer 2 days ago

          that's (usually) because our reflexes are slow (compared to a computer), or we are distracted by other things (talking, phone, tiredness, sights, etc. etc.), not because we misinterpret what we see

      • nkrisc 3 days ago

        Well these robots can’t.

    • dzhiurgis 3 days ago

      So the robotaxi trial that's happening already is some sort of rendering, AI slop, and the rides we see aren't real?

  • epolanski 2 days ago

    This is fraud: he went in front of investors and said multiple times it was around the corner.

    He told consumers: just buy the car and it will come with an update. It didn't.

    This is a scam, end of story.

    7 years of it.

    • insane_dreamer 2 days ago

      Surprising that there hasn't been a class-action suit yet.

  • crooked-v 3 days ago

    So does anyone who previously bought it on claims that actual full self-driving would be "coming soon" get refunds?

    • garbagewoman 3 days ago

      Hopefully not. They might learn a lesson from the experience.

      • blackoil 3 days ago

        Hmm, you want to penalize the company and teach a lesson to customers, so give the money to Ford shareholders.

  • jojobas 3 days ago

    >false advertising

    I think you mean "securities fraud", at gargantuan scale at that. Theranos and Nikola were nowhere near that scale.

    • paulryanrogers 3 days ago

      It is strange how Elon and Tesla get a pass on this. Tesla has contributed to the death of more people than Theranos. I guess he didn't rip off rich investors, except maybe the ones who died in their Teslas.

      Perhaps it's that cars are more sacred than healthcare.

  • gitaarik 2 days ago

    But they're changing the meaning of FSD to FSD (Supervised). So that means they don't make any promises for unsupervised FSD in the future anymore. They'll of course say that they keep working on it and that stuff is progressing. But they don't have to deliver anymore. Just like they tell people getting into accidents that they should have kept their hands on the wheel, or else it's their own responsibility.

  • jeffbee 3 days ago

    > Maybe they'll reach level 4 or higher automation

    There is little to suggest that Tesla is any closer to level 4 automation than Nabisco is. The Dojo supercomputer that was going to get them there? Never existed.

    • ascorbic 3 days ago

      And their H100s were diverted to build MechaHitler instead

  • standardUser 3 days ago

    What does Waymo lack in your opinion to not be considered "full self driving"?

    The persistent problem seems to be severe weather, but the gap between the weather a human shouldn't drive in and weather a robot can't drive in will only get smaller. In the end, the reason to own a self-driven vehicle may come down to how many severe weather days you have to endure in your locale.

    • mkl 3 days ago

      Waymo is very restricted in the locations it drives (limited parts of limited cities, I think no freeways still), and uses remote operators to make decisions in unusual situations and when it gets stuck. This article from last year has quite a bit of information: https://arstechnica.com/cars/2024/05/on-self-driving-waymo-i...

      • panarky 3 days ago

        Waymo never allows a remote human to drive the car. If it gets stuck, a remote operator can assess the situation and tell the car where it should go, but all driving is always handled locally by the onboard system in the vehicle.

        Interesting that Waymo now operates just fine in SF fog, and is expanding to Seattle (rain) and Denver (snow and ice).

        • epcoa 3 days ago

          The person you're replying to never claimed otherwise. However, while decision support is not directly steering and accelerating/braking the car, I am just going to assert it is still driving the car, at least for how it actually matters in this discussion. And the best estimate is that these interventions are "uncommon", on the order of one per tens of thousands of miles, but that isn't rare.

          A system that requires a "higher level" handler is not full self driving.

          • ascorbic 3 days ago

            I think the important part is that the remote person doesn't need to be alert, and make real time decisions within seconds. As I understand it, the remote driver is usually making decisions with the car stationary. I'd imagine that any future FSD car with no steering wheel would probably have a screen for the driver to make those kind of decisions.

          • AlotOfReading 3 days ago

            There's a simple test I find useful to determine who's driving:

            If the vehicle has a collision, who's ultimately responsible? That person (or computer) is the driver.

            If a Waymo hits a pole for example, the software has a bug. It wasn't the responsibility of a remote assistant to monitor the environment in real time and prevent the accident, so we call the computer the driver.

            If we put a safety driver in the seat and run the same software that hits the same pole, it was the human who didn't meet their responsibility to prevent the accident. Therefore, they're the driver.
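            The test above can be sketched as a tiny decision rule (a paraphrase for illustration, not anyone's actual policy):

```python
# Who's driving? Whoever holds real-time responsibility for preventing
# a collision. A remote assistant only advises a stopped or stuck car,
# so under this test they are never "the driver".

def who_is_driving(safety_driver_present: bool) -> str:
    """Return which party is 'the driver' under the responsibility test."""
    return "human" if safety_driver_present else "computer"

# Same software, same pole, different driver:
assert who_is_driving(safety_driver_present=False) == "computer"  # a software bug
assert who_is_driving(safety_driver_present=True) == "human"      # a failed intervention
```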

          • panarky 3 days ago

            Agreed!

            Which is why an autonomous car company that is responsible and prioritizes safety would never call their SAE Level 4 vehicle "full self-driving".

            And that's why it's so irresponsible and dangerous for Tesla to continue using that marketing hype term for their SAE Level 2 system.

          • standardUser 3 days ago

            In that case, it sounds like "full self driving" is more of an academic concept that is probably past its due date. Waymo and Apollo Go are determining what the actual requirements are for an ultra-low-labor automated taxi service by running them successfully.

      • phire 3 days ago

        Geofencing and occasional human override meets the definition of "Level 4 self driving". Especially when it's a remote human override.

        But is Level 4 enough to count as "Full Self Driving"? I'd argue it really depends on how big the geofence area is, and how rare interventions are. A car that can drive on 95% of public roads might as well be FSD from the perspective of the average drive, even if it falls short of being Level 5 (which requires zero geofencing and zero human intervention).

      • zer00eyz 3 days ago

        Waymo has been testing freeway driving for a bit:

        https://www.reddit.com/r/waymo/comments/1gsv4d7/waymo_spotte...

        > and uses remote operators to make decisions in unusual situations and when it gets stuck.

        This is why it's limited in markets and areas of service: connectivity for this sort of thing matters. Your robotaxi crashing because the human backup lost 5G connectivity is gonna be a real, real bad look. No one is talking about their intervention stats. If they were good, I would assume that someone would publish them for marketing reasons.

        • decimalenough 3 days ago

          > Your robotaxi crashing cause the human backup lost 5g connectivity is gonna be a real real bad look.

          Waymo navigates autonomously 100% of the time. The human backup's role is limited to selecting the best option if the car has stopped due to an obstacle it's not sure how to navigate.

        • refulgentis 3 days ago

          > NO one is talking about their intervention stats.

          Interventions are a term of art, i.e. it has a specific technical meaning in self-driving. A human taking timely action to prevent a bad outcome the system was creating, not taking action to get unstuck.

          > IF they were good I would assume that someone would publish them for marketing reasons.

          I think there's an interesting lens to look at it in: remote interventions are massively disruptive, the car goes into a specific mode and support calls in to check in with the passenger.

          It's baked into UX judgement, it's not really something a specific number would shed more light on.

          If there was a significant problem with this, it would be well-known given the scale they operate at now.

      • FireBeyond 2 days ago

        > I think no freeways still

        California granted Waymo the right to operate on highways and freeways in March 2024.

      • standardUser 3 days ago

        All cars were once restricted in the locations they could drive. EVs are restricted today. I don't see why universal access is a requirement for a commercially viable autonomous taxi service, which is what Waymo is currently. And the need for human operators seems obvious for any business, no matter how autonomous, let alone a business operating in a cutting edge and frankly dangerous space.

        • shadowgovt 3 days ago

          It's a matter of definition, in terms of how these things are counted.

          L4 is "full autonomy, but in a constrained environment." L5 is the holy grail: as good as or better than human in every environment a human could take a car (or, depending on who's doing the defining: every road a human could take a car on. Most people don't say L5 and mean "full Canyonero").
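          The SAE J3016 ladder the thread keeps citing, for reference; the one-line summaries below are paraphrases, not SAE's exact wording:

```python
# SAE J3016 driving-automation levels, paraphrased.

SAE_LEVELS = {
    0: "No automation (warnings only)",
    1: "Driver assistance (steering OR speed)",
    2: "Partial automation (steering AND speed, driver must supervise)",
    3: "Conditional automation (driver must take over on request)",
    4: "High automation (no driver needed, but only within a constrained "
       "operational design domain, e.g. a geofence)",
    5: "Full automation (anywhere a human could drive)",
}

def needs_supervision(level: int) -> bool:
    """Levels 0-2 require constant human supervision."""
    return level <= 2

assert needs_supervision(2)      # e.g. Tesla "FSD (Supervised)"
assert not needs_supervision(4)  # e.g. Waymo's geofenced service
```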

          • yencabulator 2 days ago

            > or, depending on who's doing the defining: every road a human could take a car on.

            That's a distinction without a difference. Forest service and BLM roads are "roads" but can be completely impassable or 100% erased by nature (and I say this as a former Jeep Wrangler owner), they aren't always located where a map thinks they are, and sometimes absolutely nothing differentiates them from the surrounding nature -- for example, a left turn into a desert dry wash can be a "road" and a right turn not.

            Actual "full" autonomous driving is crazy hard. Like, by definition you get into territory where some vehicles and some drivers just can't make it through, but it's still a road(/"environment"). And some people will live at the end of those roads.

          • standardUser 2 days ago

            These definitions appear to be largely academic and now outdated.

        • pavel_lishin 3 days ago

          > EVs are restricted today.

          Are they? Did you mean Autonomous Vehicles?

          • standardUser 3 days ago

            No, you can't go driving off into an area with no charging options, which would be much of the world.

            • yencabulator 2 days ago

              Did you know that a gas car can also run out of gas?

              • standardUser 2 days ago

                Yes, and before gas stations were widespread you couldn't drive gas cars anywhere you wanted either, dummy.

    • gerdesj 3 days ago

      No one does FSD yet - properly.

      It initially seems mad that a human, inside the box, can outperform the "finest" efforts of a multi-zillion-dollar company. The human has all their sensors inside the box and most of them stymied by the non-transparent parts. Bad weather makes it worse.

      However, look at the sensors and compute being deployed on cars. It's all minimums and cost-focused -- basically MVP, with deaths as a costed variable in an equation.

      A car could have cameras with views everywhere for optical, LIDAR, RADAR, even a form of SONAR if it can be useful, microwave and way more. Accelerometers and all sorts too, all feeding into a model.

      As a driver, I've come up with strategies such as "look left, listen right". I'm British so drive on the left and sit on the right side of my car. When turning right and I have the window wound down, I can watch the left for a gap and listen for cars to the right. I use it as a negative and never a positive - so if I see a gap on the left and I hear a car to my right, I stay put. If I see a gap to the left but hear no sound on my right, I turn my head to confirm that there is a space and do a final quick go/no go (which involves another check left and right). This strategy saves quite a lot of head swings and if done properly is safe.
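      The "look left, listen right" strategy boils down to using sound only as a veto, never as confirmation. A sketch of the go/no-go logic as described (names are made up for illustration):

```python
# "Look left, listen right" as a go/no-go check. Key property: audio
# is only ever negative evidence (a veto), never proof the road is clear.

def can_turn(gap_seen_left: bool, car_heard_right: bool,
             gap_confirmed_right_by_looking: bool) -> bool:
    if not gap_seen_left:
        return False   # no gap on the left: stay put
    if car_heard_right:
        return False   # any sound to the right vetoes the turn
    # Silence is not proof: a final visual check right is still required.
    return gap_confirmed_right_by_looking

assert not can_turn(True, True, True)    # heard a car: stay put
assert not can_turn(True, False, False)  # quiet, but the final look failed
assert can_turn(True, False, True)       # gap, silence, and visual confirmation
```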

      I now drive an EV, one year so far: an SAIC MG4, with cameras on all four sides that I can't record from but can use. It has lane assist (so lateral control, which craps out on many A-road sections but is fine on motorway-class roads) and cruise control that will keep a safe distance from other vehicles (that works well on most roads and very well on motorways; there are restrictions).

      Recently I was driving and a really heavy rain shower hit as I was overtaking a lorry. I immediately dived back into lane one, behind the lorry, and put cruise on. I could just see the edge white line, so I dealt with left/right and the car sorted out forward/backward. I can easily deal with both, but it's quite nice to be able to carefully abrogate responsibilities.

    • panick21_ 3 days ago

      Put a Waymo on random road in the world, can it drive it?

      • standardUser 3 days ago

        For a couple decades you couldn't even bring your cell phone anywhere in the world and use it. Transformational technologies don't have to be available universally and simultaneously to be viable. Even when the gas car was created you couldn't use it anywhere that didn't have gasoline and paved roads, plus a mechanic and access to parts.

        • panick21_ 2 days ago

          Did I argue that the technology was not viable?

          I answered the question 'What does Waymo lack in your opinion to not be considered "full self driving"?'. And clearly it's not, if it can't drive on literally 99.99% of roads in the world. Any argument to the contrary is just ridiculous.

          • standardUser 2 days ago

            As I said above, "full self driving" is clearly an outdated concept.

        • jazzyjackson 3 days ago

          A significant portion of US highways and backroads are uncovered by cell signal. I suppose a self driving car would have starlink these days.

          • standardUser 3 days ago

            We once had no gas stations, now we have 150,000 (in the US). If the commercial need is there, building out connectivity is an unlikely impediment. Starlink et al. can solve this everywhere except when there's severe weather, a problem Waymo shares, which is starting to make me think the Upper Midwest might be waiting a very long time for self-driving cars.

            • panick21_ a day ago

              I think the bigger problem is mapping every road to the detail they need and keeping that up to date.

      • Kye 3 days ago

        That's the real issue. If "can navigate roads" is enough then we've had full self-driving for a while. There needs to be some base level of general purpose capability or it's just a neat regional curiosity.

      • cryptoz 3 days ago

        Many humans couldn't.

        • jacquesm 3 days ago

          Most humans that claim they could, could. Anyway, this seems like a pretty low-quality comment; you understood perfectly well what the OP meant.

          • cryptoz 3 days ago

            Oh gosh sorry, I do try to contribute positively to HN and write quality comments. I'll expand: I've been in circumstances where I've been rented a company car in a foreign country, felt that I was a good driver, but struggled. The road signs are different and can be confusing, the local patterns and habits of drivers can be totally different from what you're accustomed to. I don't doubt that lots of humans could drive most roads - but I think the average driver would struggle, and have a much higher rate of accidents than a local.

            Germany, Italy, India all stand out as examples to me. The roads and driving culture are very different, and can be dangerous to someone who is used to driving on American suburban streets.

            I really do stand by my comment, and apologize for the 'low quality' nature of it. I meant to suggest that we set the bar far higher for AI than we do for people, which is in general a good thing. But still - I would say that by this definition of 'full self driving', it wouldn't be met very well by many or most human drivers.

            • jacquesm 3 days ago

              I've driven all over the planet except for Asia and Africa. So far, no real problem and I think most drivers would adapt within a day or two. Greece, Panama and Colombia stand out as somewhat more exciting. Switching to left hand driving in the UK also wasn't a big problem but you do have to pay more attention.

              Of course I may have simply been lucky, but given that my driving license is valid in many countries it seems as though humanity has determined this is mostly a solved problem. When someone says "Put a Waymo on random road in the world, can it drive it?" they mean: I would expect a human to be able to drive on a random road in the world. And they likely could. Can a Waymo do the same?

              I don't know the answer to that one. But if there is one thing that humans are pretty good at it is adaptation to circumstances previously unseen. I am not sure if a Waymo could do the same but it would be a very interesting experiment to find out.

              American suburban streets are not representative of driving in most parts of the world. I don't think the bar of 'should be able to drive most places where humans can drive' is all that high and even your average American would adapt pretty quickly to driving in different places. Source: I know plenty of Americans and have seen them drive in lots of countries. Usually it works quite well, though, admittedly, seeing them in Germany was kind of funny.

              "Am I hallucinating or did we just get passed by an old lady? And we're doing 85 Mph?"

            • gerdesj 3 days ago

              "Germany, Italy, India"

              That's experience, and you learned and survived to tell the tale. It's almost as though you are capable of learning how to deal with an unfamiliar environment, and of failing safe!

              I'm a Brit and have driven across most of Europe, US/CA and a few other places.

              Southern Italy, e.g. around Napoli, is pretty fraught -- around there I find that you need to treat your entire car as an indicator: if you can wedge your car into a traffic stream, you will be let in, mostly without horns blaring. If you sit and wait, you will go grey-haired eventually.

              In Germania, speed is king. I lived there in the 70s-90s as well as being a visitor recently. The autobahns are insane if you stray out of lane one, the rest of the road system is civilised.

              France -- mostly like driving around the UK apart from their weird right-hand-side-of-the-road thing! Le Périphérique is just as funky as the M25 and the Place de la Concorde is a right old laugh. The rest of the country that I have driven is very civilised.

              Europe to the right of Italy is pretty safe too. I have to say that across the entirety of Europe, that road signage is very good. The one sign that might confuse any non-European is the white and yellow diamond (we don't have them in the UK). It means that you have priority over an implied "priority to the right". See https://driveeurope.co.uk/2013/02/27/priority-to-the-right/ for a decent explanation.

              Roundabouts were invented in the US. In the UK, when you are actually on a roundabout you have right of way. However, everyone will behave as though "priorité à droite" applies and there will often be a stand-off -- it's hilarious!

              In the UK, when someone flashes their headlights at you it generally means "I have seen you and will let you in". That generally surprises foreigners (I once gave a lift to a prospective employee candidate from Poland and he was absolutely aghast at how polite our roads seemed to be). Don't always assume that you will be given space but we are pretty good at "after you".

              • jacquesm 3 days ago

                That reminds me. I was in the UK on some trip and watched two very polite English people crash into each other when after multiple such 'after you' exchanges they both simultaneously thought screw it and accelerated into each other. Fortunately only some bent metal.

          • bsder 3 days ago

            > Most humans that claim they could could.

            I don't agree.

            My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.

            My last dozen ridesharing experiences only had a single driver that wasn't actively hazardous on the road. One of them was so bad that I actually flagged him on the service.

            My Waymo experiences, by contrast, have all been uniformly excellent.

            I suspect that Waymo is already better than the median human driver (anecdata suggests that's a really low bar)--and it just keeps getting better.

            • jacquesm 3 days ago

              > Most humans that claim they could could.

              > My anecdata suggests that Waymo is significantly better than random ridesharing drivers in the US, nowadays.

              Those two aren't really related are they? That's one locality and a specific kind of driver. If you picked a random road there is a pretty small chance that road would be one like the one where Waymo is currently rolled out, and where your ridesharing drivers are representative of the general public, they likely are not.

  • matthewdgreen a day ago

    Do Waymos (without safety driver in the car) count as FSD?

  • bradhe 2 days ago

    > This looks to me like they are acknowledging that their claims were premature, possibly due to claims of false advertising, but are otherwise carrying forward as they were.

    Delusionally generous take. Perhaps even zealotry.

an0malous 3 days ago

How have they gotten away with such obvious misadvertising for this long? It’s undeniably misled customers and inflated their stock value

  • dreamcompiler 3 days ago

    Normally the Board of Directors would fire any CEO that destroyed as much of the company's value as Musk has. But Tesla's board is full of Musk sycophants and family members who refuse to stand up to him.

  • Eddy_Viscosity2 3 days ago

    Who was going to stop them from lying?

    • vlovich123 3 days ago

      SEC and FTC would be obvious candidates who historically would do this. States also have the ability to prosecute this via UDAP (unfair and deceptive practices) laws.

      Tesla being the only major domestic EV manufacturer + historically Musk not wading into politics + Musk/Tesla being widely popular for a time is probably why no one has gone after him. Not sure how this changes going forward with Musk being a very polarizing figure now.

      • 1over137 3 days ago

        >SEC and FTC would be obvious candidates who historically would do this.

        Yeah, historically, as in: before many people here were born. It's been a long time since the SEC and FTC did such things.

        • rsynnott 3 days ago

          FTC, sure, yeah, mostly, kinda neutered these days. SEC, despite Trump’s efforts to neuter it, is still fairly scary tho.

      • MangoToupe 3 days ago

        Not to mention there's got to have been insane pressure from the hill not to kill the golden goose.

      • randallsquared 3 days ago

        The previous two administrations (Trump I and Biden) being somewhat anti-Tesla or anti-Musk was some part of what prompted Musk to get into politics in the first place. Given the Biden admin's hostility, I would have expected the SEC and FTC to have been directed to do all they could against him within bounds, and so my first guess would be that they did, in fact, do everything justifiable.

        • MangoToupe 3 days ago

          > anti-Tesla

          I'm curious why you think this. I would be pretty shocked if, despite Musk's disgusting personality, they weren't also bought in.

          • randallsquared 3 days ago

            From 2022, a contemporaneous account of the Biden antipathy: https://www.detroitnews.com/story/business/autos/2022/02/03/...

            While I didn't look long for a more neutral source, Teslarati has a good list of the prompts of the shift from Musk being anti-Trump and pro-Biden, to giving up on Biden, to supporting Trump: https://www.teslarati.com/former-tesla-exec-confirms-wsj-rep...

            There were apparently also other considerations not associated with Tesla for his turn (transgender child, etc), but my read on all this is that Musk saw staying out of politics didn't mean politics would stay away from him. Given that Trump II is also now somewhat anti-Musk, it's not clear to me that he succeeded in avoiding a longer-term axe for Tesla (Neuralink/Solarcity/SpaceX/Boring...) from politicians. We'll see.

      • barbazoo 3 days ago

        Maybe that’s what happens in late stage capitalism. The billionaires get so powerful that they become untouchable. He’s already shown that he uses his fortune to steer political outcomes.

AbrahamParangi 3 days ago

I use self-driving every single day in Boston and I haven’t needed to intervene in about 8 months. Most interventions are due to me wanting to go a different route.

Based on the rate of progress alone I would expect functional vision-only self-driving to be very close. I expect people will continue to say LIDAR is required right up until the moment that Tesla is shipping level 4/5 self-driving.

  • rogerrogerr 3 days ago

    Same experience in a mix of city/suburban/rural driving, on a HW3 car. Seeing my car drive itself through complex scenarios without intervention, and then reading smart people saying it can’t without hardware it doesn’t have, gives major mental whiplash.

  • rootusrootus 3 days ago

    I would like to get my experience more in line with yours. I can go a few miles without intervention, but that's about it, before it does something that will result in damage if I don't take over. I'm envious that some people can go months when I can't go a full day.

    • mauvehaus 2 days ago

      Where are you driving?! If the person you're replying to has gone 8 months in Boston without having to intervene, I'm impressed. Boston is the craziest place to drive that I've ever driven.

      Pro tip if you get stuck in a warren of tiny little back streets in the area. Latch on to the back of a cab; they're generally on their way to a major road to get their fare where they're going and they usually know a good way to get to one. I've pulled this trick multiple times around city hall, Government Center, the old state house, etc.

      • meroes 2 days ago

Or when. Driving during peak commute hours really makes you a sardine in a box, and it's harder for intervention-worthy events to arise just by the nature of dense traffic.

    • diebeforei485 2 days ago

      I am curious what vehicle you're driving and whereabouts you're driving it.

  • FollowingTheDao 3 days ago

    Self driving is not the same as "autonomy". Musk lied to everyone with the Tesla self driving, the Boring Company, DOGE...wake up people...

    • globular-toast 2 days ago

      Every company that does marketing lies to you.

      • FollowingTheDao 2 days ago

I agree, but no one is more obvious about it than Musk, yet people still keep falling for it. Especially his “investors”.

        • globular-toast 2 days ago

          He's just very good at it, like Steve Jobs was.

  • herbturbo 3 days ago

    > Based on the rate of progress alone I would expect functional vision-only self-driving to be very close.

So close yet so far, which is ironically the very problem vision-based self-driving has: no concrete information, just a guess based on the simplest surface data.

  • potato3732842 2 days ago

On a scale from "student driver" to "Safelite guy (or any other professional who drives around as part of their job) running late", how does it handle Storrow and similar?

    Like does it get naively caught in stopped traffic for turns it could lane change out or does it fucking send it?

    • rogerrogerr a day ago

      I don't drive in Boston, but there is some impatience factor and it will make human-like moves out of correct-but-stopped lanes into moving ones. It'll merge into gaps that feel very small when it doesn't have other options.

Nitsua007 2 days ago

Small correction: LiDAR can’t literally see around corners — it’s still a line-of-sight sensor. What it can do is build an extremely precise 3D point cloud of what it can see, in all lighting conditions, and with far less susceptibility to “hallucinations” from things like glare, shadows, or visual artifacts that trip up purely vision-based systems.

The problem you’re describing — phantom braking, random wiper sweeps — is exactly what happens when the perception system’s “eyes” (cameras) feed imperfect data into a “brain” (compute + AI) that has no independent cross-check from another modality. Cameras are amazing at recognizing texture and color but they’re passive sensors, easily fooled by lighting, contrast, weather, or optical illusions. LiDAR adds active depth sensing, which directly measures distance and object geometry rather than inferring it.

But LiDAR alone isn’t the endgame either. The real magic happens in sensor fusion — combining LiDAR, radar, cameras, GNSS, and ultrasonic so each sensor covers the others’ blind spots, and then fusing data at the perception level. This reduces false positives, filters out improbable hazards before they trigger braking, and keeps the system robust in edge cases.
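The cross-check idea can be made concrete with a toy sketch. This is purely illustrative (the `Detection` type, thresholds, and sensor names are all made up, not any real stack's API): an obstacle only counts as confirmed when two independent modalities agree on roughly the same range, which is exactly how fusion suppresses single-sensor false positives like camera glare.

```python
# Toy sketch of fusion-level false-positive filtering. All names and
# thresholds are hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "lidar", or "radar"
    distance_m: float  # estimated range to the object
    confidence: float  # per-sensor detection confidence in [0, 1]

def confirmed_obstacle(detections, max_gap_m=2.0, min_conf=0.5):
    """An obstacle is confirmed only if two *different* sensors report
    it at roughly the same range with sufficient confidence."""
    strong = [d for d in detections if d.confidence >= min_conf]
    for i, a in enumerate(strong):
        for b in strong[i + 1:]:
            if a.sensor != b.sensor and abs(a.distance_m - b.distance_m) <= max_gap_m:
                return True
    return False

# Camera alone "sees" something (e.g. glare); lidar and radar see clear road.
print(confirmed_obstacle([Detection("camera", 40.0, 0.9)]))   # False
# Camera and lidar agree on an object ~40 m ahead.
print(confirmed_obstacle([Detection("camera", 40.0, 0.9),
                          Detection("lidar", 40.8, 0.95)]))   # True
```

Real fusion stacks do this probabilistically over tracked objects rather than with a hard two-sensor vote, but the failure mode it prevents is the same one described above.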

And there’s another piece that rarely gets mentioned in these debates: connected infrastructure. If the vehicle can also receive data from roadside units, traffic signals, and other connected objects (V2X), it doesn’t have to rely solely on its onboard sensors. You’re effectively extending the vehicle’s situational awareness beyond its physical line of sight.

Vision-only autonomy is like trying to navigate with one sense while ignoring the others. LiDAR + fusion + connectivity is like having multiple senses and a heads-up from the world around you.

  • moomoo11 a day ago

    Honestly at that point of complexity I hope automakers just quit chasing FSD and go back to making actually good cars again.

    Let the automated trucks figure it out if it’s an actual problem worth solving or we can just use trains or let truck driving be a decent middle class job.

jesenpaul 3 days ago

They made tons of money on the Scam of the Decade™ from Oct 2016 (see their "Driver is just there for legal reasons" video) to Apr 2024 (when they officially changed it to Supervised FSD), and now it's not even that.

smoovb 10 hours ago

Here's what Claude has to say about electrek.co:

Tesla Headlines Sentiment Analysis - Electrek.co

Bottom Line: Strongly Negative Sentiment. Based on analysis of Tesla headlines and articles from Electrek over the past few months, the sentiment is overwhelmingly negative (approximately 85% negative, 10% neutral, 5% positive). The coverage reveals a company in decline across multiple fronts.

mettamage 3 days ago

I’m not surprised. As a former Elon fan, it never struck me that he thought about this from first principles, whereas for SpaceX he did.

For as long as we can’t understand AI systems as well as we understand normal code, first principles thinking is out of reach.

It may be possible to get FSD another way but Elon’s edge is gone here.

  • fsmv 3 days ago

    SpaceX is a success despite Elon. Maybe setting an extremely lofty goal helped somewhat but Gwynne Shotwell and all the actual engineers at SpaceX deserve the credit for their success.

    • tim333 2 days ago

      Wikipedia says Gwynne Shotwell joined after the first successful launch. Elon founding it and getting a rocket up must count for something.

    • mettamage 3 days ago

      How is it despite Elon? I don't know the history too well.

      I agree that the team deserves most of the success. I think that's the case in general. At best, a CEO puts down good framing/structure, that's it. ICs do the actual innovative work.

      • johnthewise 2 days ago

Napoleon didn't fight much at all in battles either.

    • rlt a day ago

      I used to think engineers should get all the credit for the successes of big engineering projects, and they should probably get more credit than they do, but over time I’ve realized how important good leadership is. SpaceX and Tesla absolutely would not be where they are today without Elon. Anyone who claims otherwise either hasn’t paid attention or is being disingenuous because they don’t like him.

goloroden 3 days ago

I think I’d call what Tesla did fraud. Or scam. Or both.

ciconia 3 days ago

War Is Peace. Freedom Is Slavery. Ignorance Is Strength. FSD is... whatever Elon says it is.

IgorPartola 3 days ago

I don’t need self-driving cars that can navigate alleys in Florence, Italy and also parkways in New England. Here is what we really need: put transponders into the roadway on freeways and use those for navigation and lane positioning. Then you would be responsible for getting onto the freeway and getting off at the exit, but could take a nap in between. This would be something done by the DOT, supported by all car makers, and would benefit everyone. LIDAR could be used for obstacle detection but not for navigation. And whoever figures out how to do the transponders, lands a government contract, and gets at least one major car manufacturer on board would make bank.

  • hedora 3 days ago

    We live in an area with sort of challenging roads, and I strongly disagree.

    There’s an increasing number of drivers that can barely drive on the freeways. When they hit our area they cannot even stay on their side of the road, slow down for blind curves (when they’re on the wrong side of the road!), maintain 50% the normal speed of other drivers, etc. I won’t order uber or lyft anymore because I inevitably get one of these people as my driver (and then watch them struggle on straight stretches of freeway).

    Imagine how much worse this will get when they start exclusively using lane keeping on easy roads. It’ll go from “oh my god I have to work the round wheel thingy and the foot levers at the same time!” to “I’ve never steered this car at speeds above 11”.

    I’d much rather self driving focused on driving safely on challenging roads so that these people don’t immediately flip their cars (not an exaggeration; this is a regular occurrence!) when the driver assistance disables itself on our residential street.

    I don’t think addressing this use case is particularly hard (basically no pedestrians, there’s a double yellow line, the computer should be able to compute stopping distance and visibility distance around blind curves, typical speeds are 25mph, suicidal deer aren’t going to be the computer’s fault anyway), but there’s not much money in it. However, if you can’t drive our road, you certainly cannot handle unexpected stuff in the city.

    • IgorPartola 2 days ago

You describe it as challenging but it sounds like realistically it is just badly designed roads. But fixing that aside, nothing really stops you from outfitting secondary roads with transponders in principle. In practice, it is easier to start with freeways because (a) they are much more uniform, (b) the impact of an accident at freeway speeds is much more deadly, (c) there are no pedestrians, bicycles, etc., and (d) the federal government has control over the freeways (it is a complex relationship but ultimately the feds have a say), which means they can mandate installing the transponders and pay for it. Once the system functions there it can be expanded until every driveway and parking spot is outlined.

      • hedora a day ago

        They’re badly designed (outskirts of Silicon Valley). So is the electricity and internet, so the transponders placed in shady spots would need something like a 30 day battery backup and network other than wired, cell, wisp or starlink.

        Vision based systems would be more than adequate. Lidar or (god forbid) ultrasonic chirps would easily lead to superhuman safety and speeds.

        I’m skeptical of transponder or network based systems. What happens during a natural disaster? Do the 10% of cars that lack drivers or steering wheels just stop and block the evacuation routes? That’d kill a lot of people in very graphic / high profile ways.

  • heeton 2 days ago

    We already have transponders on freeways. They’re technically passive reflectors, but they reflect a high proportion of incident EM waves, in the visible spectrum, and exist between lanes on every major road in the US. Also known as white paint.

  • gilbetron 2 days ago

    Following roads and lane markers and signs and signals is the "easy" part of autonomous driving. You could do everything you say and it wouldn't result in something that is any better than the current state of the art. Dealing with others on the road is the main problem (weather comes in close second). Your solution solves nothing, I'm afraid.

  • randunel 3 days ago

    How would you know which signals to trust and which to ignore?

    • uoaei 3 days ago

      Physics prevents detected objects from jumping unrealistically. Current systems seem not to account for that at all, reacting to objects which appear and disappear spontaneously. Sensor fusion is exactly the solution to this: use a variety of sensors as input to reliably identify actual obstacles. To fake all the sensors at once you'd need to fake vision, lidar, and transponder locations simultaneously.
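The "physics prevents objects from jumping" check can be sketched in a few lines. This is a hypothetical gate, not any vendor's actual tracker (the speed limit and frame timing are illustrative assumptions): a new detection is rejected if accepting it would imply motion no real object could perform between frames.

```python
# Sketch of a physics-based gate for object tracks. The 60 m/s ceiling
# and 50 ms frame interval below are illustrative assumptions only.
def physically_plausible(prev_pos_m, new_pos_m, dt_s, max_speed_mps=60.0):
    """Return True if moving from prev_pos_m to new_pos_m within dt_s
    stays under a maximum plausible object speed."""
    dx = new_pos_m[0] - prev_pos_m[0]
    dy = new_pos_m[1] - prev_pos_m[1]
    implied_speed = (dx**2 + dy**2) ** 0.5 / dt_s
    return implied_speed <= max_speed_mps

# A track 30 m ahead that "teleports" 25 m sideways in one 50 ms frame
# implies 500 m/s -- discard it as a hallucinated detection.
print(physically_plausible((30.0, 0.0), (30.0, 25.0), 0.05))  # False
# A car closing 0.8 m in the same interval (~16 m/s) is fine.
print(physically_plausible((30.0, 0.0), (29.2, 0.1), 0.05))   # True
```

Production trackers do this with Kalman-style innovation gating rather than a hard speed cap, but the principle is the same: consistency across time (and, with fusion, across sensors) is what separates real obstacles from artifacts.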

    • jijijijij 3 days ago

      Blockchain.

      Just kidding.

      Wait, no! Please. No!

      How do I delete this???

shadowgovt 3 days ago

"Full Self Driving (Supervised)." In other words: you can take your mind off the road as long as you keep your mind on the road. Classic.

Tesla is kind of a joke in the FSD community these days. People working on this problem a lot longer than Musk's folk have been saying for years that their approach is fundamentally ignoring decades of research on the topic. Sounds like Tesla finally got the memo. I mostly feel sorry for their engineers (both the ones who bought the hype and thought they'd discover the secret sauce that a quarter-century-plus of full-time academic research couldn't find and the old salts who knew this was doomed but soldiered on anyway... but only so sorry, since I'm sure the checks kept clearing).

  • arijun 3 days ago

Until very recently I worked in the FSD community, and I wouldn’t say I viewed it as a joke. I don’t know if I believed they would get to level 5 without any lidar, but it’s pretty good for what’s available in the consumer market.

    • shadowgovt 3 days ago

      That's what I mean. Nobody I know thought there'd be a chance of getting to L4 (much less L5) without LIDAR. They doomed the goal from the gate and basically lied to people for years about the technological possibilities to pad their bottom line.

      It's two steps from selling snake-oil, basically. Not that L4 or L5 are impossible, but people who knew the problem domain looked at how they were approaching it hardware-wise and went "... uhuh."

  • FireBeyond 2 days ago

    > In other words: you can take your mind off the road as long as you keep your mind on the road.

    They literally did this with Summon. "Have your car come to you while dealing with a fussy child" - buried far further down the page in light grey, "pay full attention to the vehicle at all times" (you know, other than your "fussy child").

d_sem 3 days ago

My experience working at an automotive supplier suggests that Tesla engineers must have always known this, and the real strategy was to provide the best ADAS experience with the cheapest sensor architecture. They certainly achieved that goal.

There were aspirations that the bottom up approach would work with enough data, but as I learned about the kind of long tail cases that we solved with radar/camera fusion, camera-only seemed categorically less safe.

easy edge case: A self driving system cannot be inoperable due to sunlight or fog.

a more Hacker News-worthy consideration: calculate the angular pixel resolution required to accurately range and classify an object 100 meters away (roughly the distance needed to safely stop if you're traveling 80 mph). Now add a second camera for stereo and calculate the camera-to-camera extrinsic sensitivity you'd need to stay within to keep error sufficiently low in all temperature/road condition scenarios.
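For anyone who wants the back-of-envelope version of that exercise, here is one with assumed (not Tesla-specific) parameters: a 1.8 m wide car, 20 pixels needed to classify it, a 0.3 m stereo baseline, and a focal length of 1000 px. The stereo part uses the standard relation Z = f·B/d, so depth error per pixel of disparity error grows as Z²/(f·B).

```python
import math

# Illustrative parameters -- none of these are from a real vehicle.
car_width_m = 1.8
range_m = 100.0
pixels_for_classification = 20  # rough minimum to classify a car

# Angular size of the car at 100 m, and the per-pixel resolution needed.
angular_size_deg = math.degrees(math.atan(car_width_m / range_m))
res_deg_per_px = angular_size_deg / pixels_for_classification
print(f"car subtends {angular_size_deg:.2f} deg -> need {res_deg_per_px:.3f} deg/px")

# Stereo ranging: Z = f*B/d, so dZ per pixel of disparity error ~ Z^2/(f*B).
f_px, baseline_m = 1000.0, 0.3
depth_err_per_px_m = range_m**2 / (f_px * baseline_m)
print(f"~{depth_err_per_px_m:.0f} m depth error per pixel of disparity error")
```

With these numbers, one pixel of disparity error at 100 m costs on the order of 33 m of range error, which is why tiny extrinsic shifts (thermal expansion, vibration) matter so much, and why "just add a long range radar" is the pragmatic answer.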

The answer is: screw that, I should just add a long range radar.

there are just so many considerations that show you need a multi-modality solution, and using human biology as a what-about-ism doesn't translate to currently available technology.

  • brandonagr2 2 days ago

Tesla does not use stereo/binocular vision, and that's not how humans perceive relative motion at that distance either; we depend on perspective and parallax.

briandw 3 days ago

Lidar is the first thing brought up in these discussions, but lidar isn’t that great a sensor. It does one thing well, and that’s measure distance. A visual sensor can be measured along the axes of spatial resolution (x, y, z), temporal resolution (fps), and dynamic range (bit depth); you could add things like light frequency and phase, etc. Lidar is quite poor in all of these except the spatial z dimension, measuring distance as mentioned before. Compared to a cheap camera the fps is very low, and the spatial resolution is a pathetic 128 lines in the vertical (higher in the horizontal, but it's not megapixels). Finally, the dynamic range is 1 bit (something is there or not). Lidars use near infrared and are just as susceptible to problems with natural fog (not the theatrical fog like in that Rober video) and rain. Multiple cameras can do good-enough depth estimation with modern neural networks, but cameras are vastly better at making sense of the world. You can’t read a sign with lidar.

  • smilekzs 2 days ago

    Lidars have been reporting per-point intensity values for quite a while. The dynamic range is definitely not 1 bit.

Many lidar visualization tools will happily pseudocolor the intensity channel for you. Even with a mechanically scanning 64-line lidar you can often read a typical US speed limit sign at ~50 meters in this view.

AndrewKemendo 3 days ago

Karpathy should be held liable for this (maybe less than Musk) but he should at least be considered persona non grata for pushing it.

It was his idea, his decision to build the architecture and he led the entire vision team during this.

Yet, he remains free from any of this fallout and still widely considered an ML god

https://youtu.be/3SypMvnQT_s?si=FDmyA6amWnDpMPEj

  • bmitc 3 days ago

    Silicon Valley tech workers and companies are not known for their morals.

starchild3001 3 days ago

Feels like Musk should step down from the CEO role. The company hasn’t really delivered on its big promises: no real self-driving, Cybertruck turned into a flop, the affordable Tesla never materialized. Model S was revolutionary, but Model 3 is basically a cheaper version of that design, and in the last decade there hasn’t been a comparable breakthrough. Innovation seems stalled.

At this point, Tesla looks less like a disruptive startup and more like a large-cap company struggling to find its next act. Musk still runs it like a scrappy startup, but you can’t operate a trillion-dollar business with the same playbook. He’d probably be better off going back to building something new from scratch and letting someone else run Tesla like the large company it already is.

  • xpe 3 days ago

This is not a heavily researched comment, but it seems to me that the Model 3 is relatively affordable, at least compared to the options available at the time. It depends on your point of comparison: there is a lot of competition for sure. The Model 3 was successful to a good degree, don’t you think? I mean, we should put a number on it so we’re not just comparing feels. The Model Y sold well too, at least until the DOGE insanity.

    • starchild3001 3 days ago

      Here's some heavy research for you -- Model 3 is competing with the likes of BMW, Audi etc. That's not considered the "affordable" tier. It's called luxury. Here's a comparison:

      https://www.truecar.com/compare/bmw-3-series-vs-tesla-model-...

      • xpe 3 days ago

        I will charitably interpret “heavy research” as a joke.

        It is hard to interpret the smugness above in a positive light. It is unhelpful to you and to everyone here.

        If you want to compare an electric car against combustion-engine vehicle, go ahead, but that isn’t a key decision point for what we’re talking about.

        The TrueCar web page table does not account for a $7,500 federal tax credit for EVs. I recognize it ends soon — September 30 — if only to head off a potential zinger comment (which would be irrelevant to the overall point).

All in all, it is notable that ~2 minutes asking a modern large language model for various comparisons is more helpful than this conversation with another human (presumably). If we’re going to advocate for the importance of humanity, it seems to me we should start demonstrating that we at least deserve it. I view HN primarily as a place to learn and help others, not a place for snarky comments.

        A better modern comparison showing less expensive EVs would mention the Nissan Leaf or Chevy Equinox or others. The history is interesting and worth digging into. To mention one aspect: the Leaf had a ~7 year head start but the Tesla caught up in sales by ~2018 and became the best-selling EV — even at a higher price point. So this undermines any claim that Tesla wasn’t doing something right from the POV of customer perception.

        I don’t need to “be right” in this particular comment — I welcome corrections — I’m more interested in error correction and learning.

        • hedora 3 days ago

          Here’s another list:

          https://www.edmunds.com/electric-car/articles/cheapest-elect...

          The model 3 is 1.5x more expensive than the cheapest car on the list, and it’s not obviously better than other things in its price range.

          Here are some brands that have delivered more affordable EVs than Tesla: Kia, Hyundai, Chevy, Cooper, Nissan.

          Note that all of these cost about 2x more than international competitors.

          On top of that, Ford’s upcoming platform is targeting $30K midsize pickup trucks. Presumably, most other manufacturers have similar things in their pipelines.

Tesla is already behind most of its competitors, and does not seem to have anything new in the pipeline, so the gap is likely to expand.

          They’ve clearly failed to provide affordable EVs. They’ve been beaten to market by a half dozen companies in the North American market, and that’s with trade barriers blocking foreign companies that are providing cars for less than half these prices.

          • Kirby64 11 hours ago

            Are we reading the same list?

The Mini Cooper EV is a joke of a car; 114 miles of range is ridiculous for $30k. Likewise, the 'base' Nissan Leaf at 150 mi for $30k isn't much better.

            The crossover/small-SUV segment is a little more competitive, but still you're comparing vehicles with quite dissimilar (n.b., worse) specs for lower prices.

            If all you care about is a car that is electric that can be driven, then sure, there's cheaper cars. That doesn't mean they're better or reasonable for most consumers.

  • DoesntMatter22 3 days ago

    They went from no revenue to the 9th most valuable company in the world under him. No vehicle sales to having the best selling vehicles in the world.

    They are still profitable, have very little debt and a ton of money into the bank.

    Every company has hits and misses. Bezos started before Musk and still hasn't gotten his rockets into orbit.

    • tzs 3 days ago

      > No vehicle sales to having the best selling vehicles in the world

      They have the best selling model in the world (their Model Y). But their total sales of all models are way behind many other car companies.

      These car companies sell more cars each year than Tesla (ordered by total sales): Toyota, Volkswagen, Hyundai-Kia, GM, Stellantis, Ford, BYD, Honda, Nissan, Suzuki, BMW, Mercedes-Benz, Renault, and Geely.

      Toyota and Volkswagen each sell more cars in a year than Tesla has sold over its lifetime, and Hyundai-Kia's annual sales are about the same as Tesla's lifetime sales.

By revenue rather than units, these companies sell more per year: Volkswagen, Toyota, Stellantis, GM, Ford, Mercedes-Benz, BMW, Honda, BYD, and SAIC Motor. (Edit: I accidentally left out Hyundai-Kia.)

    • starchild3001 3 days ago

      If I had to guess, I’d say the original Tesla founders had a greater influence than Musk. His track record, frankly, is unimpressive. He’s been promising full self-driving “next year” since 2016, yet it’s still nowhere close. Aside from the Model S and X, there hasn’t been a major innovation under his watch. The real groundbreaking work likely came before him. His reign? Far from remarkable. Each year has been a cycle of overpromising (often outright lying) and underdelivering. As for Tesla’s stock? Well, markets can stay irrational far longer than most people can remain solvent.

      • DoesntMatter22 3 days ago

        Tarpenning and Eberhard left Tesla in 2008 and 2007 but somehow they had a greater influence? They contributed no money, nearly tanked the company but somehow were more important.

        "His track record is unimpressive"... I can see why you say that, I mean, took Tesla from almost nothing to a trillion dollar company. Started the most prolific rocket and satellite company in history (but hey, it's only rocket science right?), provides internet to places that it never even had the possibility of getting to, and providing untold millions the chance to get on the internet.

        Started a company that is giving the paralyzed the ability to use a computer controlling their brain, and is working to restore sight to the blind.

        Totally unimpressive. There are so many people who have done these things /s

  • sidcool 3 days ago

    Tesla haters tend to just move the goal posts.

    • starchild3001 2 days ago

      I don't hate Tesla. I've owned two of them, and still have one. I just think severe underperformance and hallucinations are going on with the company.

FSD has been a complete lie since the beginning. Any reasonable person who followed the saga (and the name "FSD") can tell you that. It was Mobileye in 2015-2016, which worked quite well for what it was, followed by the unfulfilled "FSD next year" promise every year since.

      Fool me once, shame on you; fool me twice, shame on me.

      • brandonagr2 2 days ago

        A complete lie that drives me to work every morning? I'm not seeing what the lie is

  • derefr 3 days ago

Daily reminder that Tesla is not — nor was ever intended to be — a car company. Tesla is fundamentally an "energy generation and storage" (battery/supercapacitor) company. Given Tesla's fundamentals (the types of assets they own, the logistics they've built out), the Powerwall and Megapack are closer to Tesla's core product than the cars are. (And they also make a bunch of other battery-ish things that have no consumer names, just MIL-SPEC procurement codes.)

    Yes, right now car sales make up 78% of Tesla's revenue. But cars have 17% margins. The energy-storage division, currently at 10% of revenue, has more like 30% margins. And the car sales are falling as the battery sales ramp up.

    The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries (and things like military UAV batteries) under large enterprise B2B contracts. Which is why Tesla is pushing the "car narrative" less and less over time, seeming to fade into B2C irrelevancy — all their marketing and sales is gradually pivoting to B2B outreach.

    • JimDabell 3 days ago

> Tesla is not — nor was ever intended to be — a car company. Tesla is fundamentally an "energy generation and storage" (battery/supercapacitor) company.

      > The cars were always a B2C bootstrap play for Tesla, to build out the factories it needed to sell grid-scale batteries

      This seems like revisionist history. They called their company Tesla Motors, not Tesla Energy, after all.

      This is a blog post from the founder and CEO about their first energy play. It seems clear that their first energy product was an unintended byproduct of the Roadster, they worried about it being a distraction from their core car business, but they decided to go ahead with it because they saw it as a way to strengthen their car business.

      https://web.archive.org/web/20090814225814/http://www.teslam...

      • ZeroGravitas 2 days ago

        I think that blog talks about selling their batteries to other car manufacturers.

        But, to support your wider point, there's some reporting that the initial grid BESS Megapack batteries had a test setup in the car park at Tesla and Elon was unaware they existed until they got mentioned to him in a meeting and someone pointed out the window to explain.

        He immediately wanted to shut that project down to focus on cars.

    • CPLX 3 days ago

> Tesla is not — nor was ever intended to be — a car company. Tesla is fundamentally an "energy generation and storage" (battery/supercapacitor) company

      Are we still doing this in 2025?

      Uber is not a taxi company it’s a transportation company! Just wait until they roll out buses!

      Juicero is not a fruit squeezing company it’s an end to end technology powered nourishment platform!

      And so on. Save it for the VC PowerPoints.

      Tesla is a car company. Maybe some day it’ll be defined by some other lines of business too. Maybe one day they’ll even surpass Yamaha.

      • enaaem 2 days ago

        Tesla fans don't seem to be aware of the existence of conglomerates. Imagine Tesla did half the things Hyundai did.

    • utyop22 3 days ago

      Are you an investor of Tesla by any chance?

      • derefr 3 days ago

        Nope. Don't even own a car. Military-industrial-complexes are just my special interest. And apparently Musk's, too. (What do grid-scale batteries, rockets, data-satellite constellations, and tunnel boring machines have in common? They're all products/services that can be — and already are being — sold to multiple allied nations' militaries. AFAICT, this is 90% of the reason Trump can't fully cut ties with the guy.)

    • rsynnott 3 days ago

      I mean if that’s true they’re _really_ overvalued; that sort of commodity utility stuff is very low margin.

yieldcrv 3 days ago

The lesson here is to wait for a chill SEC and friendly DOJ before you recant your fraudulent claims, because then they won’t be found to be fraudulent

  • comice 3 days ago

    Wait for them? or buy them?

    • yieldcrv 3 days ago

      You’re right, still an exercise of patience

maxlin 3 days ago

It needs to be known that Fred Lambert pushes out so much negative Tesla press that it's reasonable to say he's on a crusade. And not a very fact-based one.

Like with this. No, Tesla hasn't communicated any such thing. Everyone knows FSD is late. But Robotaxi shows it is very meaningfully progressing towards true autonomy. For example, it crushed the competition (not literally) in a recent, very high-effort test of avoiding crashes on a highway with obstacles that were hard to read for almost all the other systems: https://www.youtube.com/watch?v=0xumyEf-WRI

  • FireBeyond 2 days ago

    > But Robotaxi shows it is very meaningfully progressing towards true autonomy.

    What? They literally just moved the in car supervisor from the passenger seat to the driver seat. That's not a vote of confidence.

    And I don't think you can glean anything. There are less than 20 Robotaxis in Austin, that spend their time giving rides to influencers so they can make YT videos where even they have scary moments.

    • maxlin 17 hours ago

      >They literally just moved the in car supervisor from the passenger seat to the driver seat

They didn't "move" anything. They decided to have that for a new expansion: driving on highways. The system is progressing at a good rate, considering it compares well to Waymo even though Waymo has been there for years.

      >where even they have scary moments.

      More FUD. There are hours-long streams. And as I said, it compares well to Waymo. That alone should make their boardroom uneasy.

RyanShook 3 days ago

Looking forward to the class action on this one…

  • greyface- 3 days ago

    Tesla has binding arbitration that prohibits class actions.

    • t0mas88 3 days ago

      Won't help them in most of Europe. Consumer protection laws here are stricter.

      • jillesvangurp 3 days ago

        But class action suits are not a thing here. And FSD is not deployed in Europe.

        • pavlov 3 days ago

          They’ve been selling FSD in Europe since 2018.

          I know because I bought it in March 2019 on a Model 3. (I got it because I thought it would help my elderly parents who mostly used the car.)

          7500 euros completely down the drain. It still can’t even read highway speed signs. A five-year-old would be a safer driver than Tesla’s joke FSD.

          They do have the audacity to send me NPS surveys on the car’s “Teslaversary.” Maybe they could guess by now that it’s a big fat zero.

        • ranguna 3 days ago

          But it's promised.

    • hedora 3 days ago

      If that’s the case, I’m guessing a smart-assed lawyer will use Grok to open one arbitration case per Tesla sold.

      However, I’m not sure that’s necessary. They lost the Tesla Roof class action suit, so it’s clearly possible to sue them.

paradox460 2 days ago

One of the shower thoughts I've had: why don't we start equipping cars with UWB tech? A UWB node can identify itself, and two UWB nodes can measure the short-range distance between each other (around 30m) with fairly decent accuracy and directionality.

Sure, it wouldn't replace any other sensing tech, but if my car has UWB and another car has UWB, they can telegraph where they are and what their intentions are a lot faster, and in a "cleaner" manner, than using a camera to watch the rear indicator for illumination.
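For what it's worth, the distance part of this is just time-of-flight math. Here's a minimal sketch of single-sided two-way ranging, which is roughly how UWB radios measure distance (all names and timing values below are illustrative assumptions, not any real UWB chipset's API):

```python
# Minimal sketch of single-sided two-way ranging (TWR), the scheme UWB
# radios typically use to measure distance. Names and values here are
# illustrative assumptions, not a real chipset API.

C = 299_792_458.0  # speed of light, m/s

def twr_distance(t_round: float, t_reply: float) -> float:
    """Estimate distance from one poll/response exchange.

    t_round: initiator's elapsed time from sending the poll to
             receiving the response (seconds).
    t_reply: responder's turnaround delay between receiving the poll
             and sending the response (seconds).
    """
    time_of_flight = (t_round - t_reply) / 2.0
    return C * time_of_flight

# Example: a 30 m separation implies ~100 ns one-way flight time.
t_flight = 30.0 / C                 # one-way time of flight
t_reply = 250e-6                    # assumed responder turnaround
t_round = 2 * t_flight + t_reply
print(round(twr_distance(t_round, t_reply), 3))  # → 30.0
```

Real chipsets use double-sided ranging and clock-drift correction, since a few nanoseconds of clock error translates to meters of distance error, but the principle is the same.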

iammjm 3 days ago

This nazi-saluting manchild has been purposefully lying about self-driving for close to 10 years now, with self-driving coming "next year" every year. How is this legal and not false advertising?

AdmiralAsshat 3 days ago

Kinda wish we as consumers had some way to fight back against this obvious bullshit, since lord knows the government won't do anything.

Like if a company comes out with a new transportation technology and calls it "teleportation", but in fact is just a glorified trebuchet, they shouldn't be allowed to use a generic term with a well-understood meaning fraudulently. But no, they'll just call it "Teleportation™" with a patented definition of their glorified trebuchet, and apparently that's fine and dandy.

I am still bitter about the hoverboard.

dvh 3 days ago

And stock is up $15

  • hedora 3 days ago

    Gotta keep juicing it to get that $1T payout.

    Am I the only one that noticed most of the targets are in nominal dollars, not inflation adjusted? Trump’s already prosecuting Fed leadership because they’re refusing to print money for him. Elon’s worked with him enough to understand where our monetary policy is headed.

amanaplanacanal 3 days ago

I wonder if this change came from the legal department after their loss in the lawsuit over that poor woman that was killed.

asdff 3 days ago

What I don't understand about this is that, in my experience being driven around in friends' Teslas, it's already there. It really seems like legalese vs. technical capability. The damn thing can drive with no input and even find a parking spot and park itself. I mean, where are we even moving the goalposts at this point? Because there have been some accidents, it's not valid? The question is how that compares to the accident rate of human drivers, not whether there should be an expectation of zero accidents ever.

  • AlotOfReading 3 days ago

    The word "driving" has multiple, partially overlapping meanings. You're using it in a very informal sense to mean "I don't have to touch the controls much". Power to you for using whatever definitions you feel like.

    Other people, most importantly your local driving laws, use driving as a technical term to refer to tasks done by the entity that's ultimately responsible for the safety of the entire system. The human remains the driver in this definition, even if they've engaged FSD. They are not in a Waymo. If you're interested in specific technical verbiage, you should look at SAE J3016 (the infamous "levels" standard), which many vehicle codes incorporate.

    One of the critical differences with your informal definition is whether you can stop paying attention to the road and remain safe. With your definition, it's possible to have a system where you're not "driving", but you still have a responsibility to react instantaneously to dangerous road events after hours of inaction. Very few humans can reliably do that. It's not a great way to communicate the responsibilities people have in a safety-critical task they do every day.

    • asdff 2 days ago

      >The human remains the driver in this definition

      I don't understand why that is. They literally do nothing. The car drives itself. Parks itself. Does everything itself. The fact that you have to engage with the wheel every now and then is because of regulation, not because the tech isn't there, imo. Really, to me there is zero difference between the Waymo and Tesla experience save for regulatory decisions that prevent the Tesla from being truly hands-free, eyes-shut.

      • diebeforei485 2 days ago

        The difference is liability. If you're riding in a Waymo, you are not at all liable for what the vehicle does. If there is a collision, you don't need to exchange your insurance info or name or anything else (regardless of who is at fault). You are not allowed to be in the driver's seat.

        Tesla has chosen not (yet) to assume that liability, leaving it with the driver, and requires a driver in the driver's seat. But someone in the driver's seat can accidentally override the steering wheel and cause a collision, so they will likely require the driver's seat to be empty before assuming liability (or disable all controls, which is only possible on a steer-by-wire vehicle, and the only such vehicle in the world is the Cybertruck).

        Tesla has not asked for regulatory approval for level 4 or 5. When they do, it'll be interesting to see how governments react.

        • asdff 2 days ago

          It makes sense why they wouldn't from a game-theory standpoint. Why not shift liability? Waymo would too if they could set up such a structure in a way that makes sense. It is a little different for a cab, which a 13-year-old could call on mom's cellphone, vs. a car you buy outright that is registered to a licensed driver who pays for the insurance on it.

          Still, my point is all this has nothing to do with the tech. It is all regulatory/legal checkers.

          • diebeforei485 2 days ago

            > Why not shift liability?

            Because being a passenger in a driverless vehicle is a much better user experience than being a driver. You can be on a zoom call, sleep, watch a movie or TV show or scroll TikTok, get some work done on your computer, wear a VR headset and be in a different world, etc etc. Tesla would make a lot more money, and could charge a lot more for FSD.

            They aren't doing that yet because they aren't ready yet. It's why they still have humans in the robotaxi service.

            There are no doubts in my mind that they will do it probably next year. The latest version of FSD on the new cars is very, very impressive.

      • AlotOfReading 2 days ago

        As I explained in the previous post, the crucial difference is

            you still have a responsibility to react instantaneously to dangerous road events after hours of inaction.
        
        There are no regulatory barriers impeding Tesla outside a small handful of states (e.g. California). The fact that you still have to supervise it is an intentional aspect of the system design to shift responsibility away from Tesla.

        • asdff 2 days ago

          Like I said, legal issue not a technical issue.

          • AlotOfReading 2 days ago

            I don't know how to communicate this any more clearly to you, but I'm only talking about the safety design of the system. No legal or regulatory issues are involved.

  • breve 3 days ago

    Tesla set their own goal posts.

    In 2016 Tesla claimed every Tesla car being produced had "the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver": https://web.archive.org/web/20161020091022/https://tesla.com...

    It was a lie then and remains a lie now.

    • asdff 2 days ago

      That recent Bloomberg report showed Tesla FSD was an order of magnitude safer than human drivers in the U.S. People on Reddit tried to discount it because "FSD is not actually FSD because you have to tap the wheel every now and then, so those miles don't count," but really that is just a regulatory constraint, not some real technical issue. The car drives itself, its drivers don't pay attention at all, and it's getting these numbers.

      • breve 2 days ago

        > The car drives itself and its drivers don't pay attention at all

        Sure. Does Tesla take responsibility and accept liability for the vehicle's driving?

GUNHED_158 2 days ago

The link contains malicious scripts.

diebeforei485 2 days ago

This article makes no sense to me. They aren't changing the meaning of anything for consumers, it's only defining it for the purpose of the compensation milestone.

gcanyon 3 days ago

Honest question: did Tesla in the past promise that FSD would be unsupervised? My based-on-nothing memory is that they weren't promising that you wouldn't have to sit in the driver's seat, or that your steering wheel would collect dust. Arguing against myself: they did talk about Teslas going off to park themselves and returning, but that's a fairly limited use case. Maybe in the robotaxi descriptions?

My memory was more that you'd be able to get into (the driver's seat of) your Tesla in downtown Los Angeles, tell it you want to go to the Paris hotel in Vegas, and expect generally not to have to do anything to get there. But not guaranteed nothing.

  • toast0 3 days ago

    Is Full just a catch word for actually not full now?

    Full Speed USB is 12Mbps, nobody wants a Full Speed USB data transfer.

    Full Self Driving requires supervision. Clearly, even Tesla understands the implication of their name, or they wouldn't have renamed it Full Self Driving Supervised... They should probably have been calling it Supervised Self Driving since the beginning.

    • gcanyon 2 days ago

      Autopilot on a plane requires a pilot in the cockpit. Does that make it "not auto"?

      I get that there are many who rush to defend Musk/Tesla. I'm not one of them.

      I was just caught off guard by the headline. To me, changing “Full Self-Driving” to “Full Self-Driving (Supervised)” doesn't merit the headline "Tesla changes meaning of ‘Full Self-Driving’, gives up on promise of autonomy".

      Again, to me "Full Self-Driving" never meant you would retro-fit your Tesla to remove the steering wheel, nor even set it for someplace and go to sleep. To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.

      As others have pointed out, Tesla/Musk sometimes claimed more than that, but the vast majority of their statements re: FSD hew closer to what I said above. At least I think so -- no one yet has posted something where claims of more than the above are explicit and in the majority.

      • toast0 2 days ago

        > Autopilot on a plane requires a pilot in the cockpit. Does that make it "not auto"?

        Autopilot in a plane generally maintains heading and altitude. It can do that with or without a pilot in the cockpit, and you hear about incidents from time to time where the pilot is incapacitated and the autopilot keeps the heading and altitude until the fuel runs out. Keeping heading and altitude is insufficient to operate a plane, of course. Tesla's choice of the word Autopilot was also problematic, because the larger market of drivers doesn't necessarily understand the limitations of aviation autopilot, and many people thought the system was more capable than it actually is. An aviation-style autopilot wouldn't be much help on the road: maintaining heading that way isn't actually helpful when roads are not completely straight, and maintaining speed is sometimes useful, but that's been called cruise control for decades. (Some flight automation systems can do waypoints, and autoland is a thing, but AFAIK it's not all put together so you can punch in the whole trip at once and chill, nor would that be a good idea.)

        > To me, it meant not needing to have your hands on the steering wheel and being able to have a conversation while maintaining some sort of situational awareness, although not necessarily keeping your eyes fully on the road for the more monotonous parts of a journey.

        I mean, that's sort of what the product is, although there's real safety concerns about ability for humans to context switch and intervene properly. I see how that's supervised self-driving, but not how it's full self-driving.

        If I paid 90% of your invoice and said paid in full, that doesn't make it paid in full.

  • abduhl 3 days ago

    The [2016 Tesla promotional] video carries a tagline saying: “The person in the driver’s seat is only there for legal reasons. He is not doing anything. The car is driving itself.”

    https://www.reuters.com/technology/tesla-video-promoting-sel...

    • gcanyon 3 days ago

      Well there you go. It seems clear that most of what Tesla has said is compatible with the application of the word "supervised" without really changing the meaning much. But a few statements, and the overall implication, very much contradict that.

  • scoopertrooper 3 days ago

    Musk promised unsupervised driving being right around the corner so many times it became a joke.

    https://youtu.be/B4rdISpXigM

    • gcanyon 3 days ago

      Thx for this! The one that stands out is the guy who says "my grandmother who doesn't speak the language and doesn't drive" -- that speaks to unsupervised. That said, most of the rest don't seem incompatible with "supervised".

      To be clear, this is obviously a reframing from the implications Musk has made. But I still don't see adding "supervised" to the description as that big a shift for most of the use cases that have been presented in the past.

  • herbturbo 3 days ago

    In 2016 Musk said you’d be able to drive from LA to NYC without touching the steering wheel once “within 2 years”. He’s been making untrue statements about Tesla FSD for a decade.

    • gcanyon 3 days ago

      Yeah, I know he said this. Setting aside the two years aspect, which obviously didn't pan out/was a lie if we're being harsh, I don't see this language as incompatible with the current change -- they're adding "supervised" to the description. He didn't say you'd be able to go to sleep in the back seat. "able to" is not the same as "guaranteed to." Believe me, I'm not a fan, but I just don't see this language as that big of a shift.

    • bmitc 3 days ago

      And people keep buying the stock and buying the cars.

      • gcanyon 2 days ago

        Yeah, that's a bad bet unless Tesla becomes much more than a car company.

ratelimitsteve 14 hours ago

Bad angle shot: This thing where advertisers exploit the need to clarify ambiguity in order to smuggle in custom, private definitions of words that mean the opposite of the agreed-upon definitions of those same words is a problem. Calling something "full self-driving" when it doesn't drive by itself fully is lying, even if you put in the fine print that "full" means "not full" and "self-driving" means "not driving by itself".

rickdg 2 days ago

I guess you can either go full waymo or full comma. The rest is just hype.

jaggs 3 days ago

One problem might be that American driving is not exactly... well great, is it? Roads are generally too straight and driving tests too soft. And for some weird reason, many US drivers seem to have a poor sense of situational awareness.

The result is it looks like many drivers are unaware of the benefits of defensive driving. Take that all into account and safe 'full self driving' may be tricky to achieve?

mlindner 3 days ago

The title is rather misleading. They haven't given up on promise of autonomy...

Ancalagon 2 days ago

Guess living and working in the factory ain’t working out so well

mensetmanusman 2 days ago

Our current infrastructure isn’t compatible with lidar. We were consulted to fix it, but governments have no idea how to approach this problem so it won’t happen for years.

Animats 2 days ago

So what does it mean for Tesla's "Robotaxi"? Is that being shut down?

It's pathetic. The Austin Robotaxi demo had a "safety monitor" in the front passenger seat, with an emergency stop button. But there were failures where the safety monitor had to stop the vehicle, get out, walk around the car, get into the driver's seat, and drive manually. So now the "safety monitor" sits in the driver's seat.[1] It's just Uber now.

Do you have to tip the "safety monitor"?

And for this, Musk wants the biggest pay package in history?

[1] https://electrek.co/2025/09/03/tesla-moves-robotaxi-safety-m...

  • diebeforei485 2 days ago

    The article is wrong. What happened is that they added some highway-capable ridehail vehicles, and only those vehicles have the safety person in the driver's seat. Frederic (the author) lives in Canada; he doesn't have access to any recent version of FSD.

    • Animats 2 days ago

      See this article in Gizmodo.[1]

      "In a visible sign of its shifting posture from daredevil innovation to cautious compliance, Tesla this week relocated its robotaxi safety monitors, employees who supervise the autonomous software’s performance and can take over the vehicle’s operation at any moment, from the passenger seat to the driver’s seat."

      And this one in Electrek.[2]

      The state of Texas recently enacted regulations for self-driving cars that require more reporting. Tesla's Robotaxi with a driver is not a self-driving car under Texas law.

      Musk claims Tesla will remove the safety driver by the end of the year. Is there a prediction market on that?

      [1] https://gizmodo.com/tesla-robotaxi-2000653821

      [2] https://electrek.co/2025/09/03/tesla-moves-robotaxi-safety-m...

      • diebeforei485 2 days ago

        Both Gizmodo and Electrek are wrong.

        • Animats 2 days ago

          Seeking Alpha: [1] "In sharp contrast, Tesla FSD largely remains a beta product that still requires drivers' attention. Even their Robotaxi service, which likely launched more out of necessity to stay in the headlines as opposed to actual readiness, requires a full-time human to monitor the vehicle. Independent data consistently shows that Tesla requires far more human intervention on a per mile basis than Waymo. This is not surprising as it seems FSD errors are becoming commonplace, with numerous examples of Tesla's FSD mode swerving into oncoming traffic, misinterpreting construction zones, and even failing to recognize pedestrians continuing to pile up."

          [1] https://seekingalpha.com/article/4818639-tesla-robotaxi-ambi...

          • diebeforei485 2 days ago

            Seeking Alpha (this author anyway) is also wrong. They don't have data, and there are tons of examples of Waymo making mistakes, hitting buses, etc.

            None of this is independent reporting. They're reading the same social media posts and repackaging them into articles. None of these authors are based in Austin where robotaxi is.

            • Animats a day ago

              Coverage in the Austin American-Statesman newspaper. "After its June 22 rollout in Austin, Tesla launched a ride-hailing service in the Bay Area under the same name, but the cars there all come with a driver behind the wheel. Beyond the fact it’s still not driverless in either city, Tesla’s ride-hailing service has so far been available only to a limited list of influencers and brand fans — many of whom monetize their content by promoting Tesla online."[1]

              [1] https://www.statesman.com/business/technology/article/tesla-...

jgalt212 3 days ago

Given this move, like the rest of TSLA's inane investor base, I wholeheartedly support the potential $1 trillion pay package for Musk

sidcool a day ago

Electrek has been anti Tesla for a long time now.

MagicMoonlight 3 days ago

There needs to be a class-action against Tesla. It’s blatant fraud.

luis_cho 3 days ago

For a long time, I've thought full self-driving doesn't make economic sense. Would this hurt car sales in the long term?

ares623 3 days ago

Most Honest Company (Sarcasm)

  • jacquesm 3 days ago

    Fish rots from the head.

aurizon 3 days ago

It was a fool's game from the start, with only negative aspects = what could possibly go wrong?

  • utyop22 3 days ago

    Tesla's share price is all based on the Greater Fool Theory in the short run.

    In the long run some of those promises might materialise. But who cares! Portfolio managers and retail investors want some juicy returns - share price volatility is welcomed.

aamargulies 3 days ago

I knew that FSD was nonsense when I tried to use Tesla's autopark feature under optimal conditions and it failed to park the car satisfactorily.

tempodox 2 days ago

If you can’t reach the goal, move the goal posts!

pm90 3 days ago

[flagged]

  • lotsofpulp 3 days ago

    It’s a winning strategy. See who won the presidential election recently.

  • rsynnott 3 days ago

    Well, I mean, clearly you did at some point; you bought one of his cars.

    • jbm 3 days ago

      I bought one too and he did not factor into it.

      Electric car + active battery management were what I cared about at the time of purchase. Also, I am biased against GM and Ford due to experiences with their cars in the 80s and 90s.

      I doubt I'm the only one.

      (In retrospect, the glass roof was not practical in Canada and I will look elsewhere in the future)

      • rogerrogerr 3 days ago

        What’s the problem with the glass roof in Canada? Hail?

        • jbm 2 days ago

          Ah, I would have guessed that too, but in my case a crack opened up overnight with no impact at all.

          Besides, it's hot in summer and cold in winter. I just see no benefit; it is just another made-for-California feature.

jqpabc123 3 days ago

[flagged]

  • bmitc 3 days ago

    > Musk is not an engineer.

    You have been voted down, but this is proven. He has lied about his education. He never even enrolled at Stanford, and his undergraduate degree was basically a general-studies business degree.

    • asadotzler 2 days ago

      The National Academy of Engineering disagrees with both of you. https://www.nae.edu/?id=270224

      • bmitc 2 days ago

        It disagrees, yes, but it does not correct what was stated. And that appointment couldn't possibly be political could it?

        Musk has lied, time and time again, about his education. He has never worked as an engineer. People have commented that he barely understands how to run simple Python scripts.

moomin 3 days ago

My 1993 Nissan has FSD. I can fully drive myself anywhere.

freerobby 3 days ago

This is clickbait from a publication that's had it out for Tesla for nearly a decade.

Tesla is pivoting messaging toward what the car can do today. You can believe that FSD will deliver L4 autonomy to owners or not -- I'm not wading into that -- but this updated web site copy does not change the promises they've made prior owners, and Tesla has not walked back those promises.

The most obvious tell of this is the unsupervised program in operation right now in Austin.

  • qwerpy 3 days ago

    Marketing choice of words aside, it's already really good now to the point that it probably does 95% of my driving. Once in a while it chooses the wrong lane and very rarely I will have to intervene, but it's always getting better. If they just called it "Advanced Driver Assist" or something, and politics weren't such an emotional trigger, it would be hailed as a huge achievement.

    • freerobby 3 days ago

      Yeah, Tesla did themselves no favors with how they initially marketed FSD, and all the missed timelines amplified the brand cost of that. I'm glad to see them focus on what it can do today. Better to underpromise and overdeliver etc.

      As an aside, it's wild how different the perspective is between the masses and the people who experience the bleeding edge here. "The future is here, it's just not evenly distributed," indeed.

      • utyop22 3 days ago

        Surely you're joking? You really believe those timelines were set in good faith?

        Lol it has been strategic manipulation right the way through. Right out of an Industrial Organisation textbook.

        • freerobby 3 days ago

          Yeah I think their early success with Tesla Vision was faster than expected, it went to their heads, and they underestimated the iteration and fine tuning needed to solve the edge cases. It's difficult to predict how many reps it will take to solve an intricate problem. That's not to excuse their public timeline -- their guidance was naive and IMO irresponsible -- but I don't think it was in bad faith.

  • an0malous 3 days ago

    Great spin job. They didn’t lie, they’re just “pivoting their messaging”

  • panarky 3 days ago

    Can you find any statement in the article that is false?

    • freerobby 3 days ago

      The first one.

      > Tesla has changed the meaning of “Full Self-Driving”, also known as “FSD”, to give up on its original promise of delivering unsupervised autonomy.

      They have not given up on unsupervised autonomy. They are operating unsupervised autonomy in Austin TX as I type this!

      • addaon 3 days ago

        > They have not given up on unsupervised autonomy. They are operating unsupervised autonomy in Austin TX as I type this!

        Setting aside calling a driver in the driver's seat "unsupervised"... that's exactly the point. People paid for this, and Tesla is revoking its promise to deliver it, instead refocusing on (attempting) operating it themselves.

        I'd have no objection to this if they offered buy-backs on the vehicles in the field, but that seems unlikely.

        • electriclove 3 days ago

          I would like to understand what population feels they were fleeced. The FSD available on their cars with HW3 (some as old as 2017?) is quite impressive when you consider what the capabilities were back then. Sure, it won’t be as good as a 2025 Juniper Model Y. But who are the people that bought FSD in the early days and are unhappy and how big of a population is that? Is this the main thing people are upset about?

          Or are people upset about the current state of autonomous vehicles like Waymo (which has been working for Years!) and the limited launch of Robotaxi?

        • freerobby 3 days ago

          I haven't closely followed which rides have drivers where, and what is driven by Tesla vs what is regulatory -- but I thought some "drivers" were still in the passenger seat in Austin?

          At any rate, I don't think they are revoking their prior promises. I expect them to deliver L4 autonomy to owners as previously promised. With that said, I'm glad they are ceasing that promise to new customers and focusing on what the car does today, given how wrong their timelines have been. I agree it's shitty if they don't deliver that, and that they should offer buybacks if they find themselves in that position.

          • addaon 3 days ago

            > but I thought some "drivers" were still in the passenger seat in Austin?

            Nope, they gave up on that and moved them to the driver's seat.

      • panarky 3 days ago

        > The first one

        That's not a fact, it's a conclusion drawn from all the other facts in the article.

        Did you find the facts that support this conclusion to be false?

      • narrator 3 days ago

        Yeah, they never said this. This article smells like anti-Elon FUD. "Elon is a dummy, everything he tries will fail, replace him with someone who isn't so controversial and supports the proper politics for a powerful global figure" and repeat in 100 minor internet blogs until the money to write these articles runs out.

resfirestar 3 days ago

I don't read the article (besides the clickbait headline and the author's "take") as Tesla "giving up". No marketing is changing, no plans for taxi services are changing. This is about the company's famously captured board giving their beloved CEO flexibility on how to meet their ambitious-sounding targets, by using vague language in the definitions. This way if Tesla fails to hit 10 million $100/month FSD subscriptions, they could conceivably come up with a cheaper more limited subscription and get Elon his pay.