haswell 4 days ago

Not my TIFU, but a concrete example of what AI coding can lead to when the user doesn't actually understand what the LLM is generating.

At first, I was tempted to dismiss this as "the same thing can happen when you're copying from Stack Overflow", but the conversational, personalized nature of the results these tools provide creates a false sense of security, I think.

I've posted positive comments here about my experiences with AI-assisted prototyping and I stand by them, but some very expensive lessons will be learned.

rvz 4 days ago

So there was no warning on how destructive that command was?

This is exactly the problem: LLMs easily generate absolutely plausible-looking output, and without checking exactly what it does before running it, you can end up with costly problems.

Imagine the mess the "vibe-coding" movement is about to create when people start deploying their code into production without caring about robustness or software quality, because an LLM spat out unchecked rubbish code.

This won't be the last time this happens.

  • squircle 4 days ago

    I was "vibe coding" a rust project the other day and, while working on tests, the LLM removed the safety function I had added to keep the thing from accidentally nuking my system. Then the test case was confidently resolved.
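
    The kind of guard squircle describes might look something like this minimal sketch (the names, the sandbox layout, and the `safe_remove` wrapper are my assumptions, not details from the original project). The point is that it's a single small function, easy for an LLM to delete when it's in the way of a passing test:

    ```rust
    use std::io;
    use std::path::Path;

    // Hypothetical guard: only allow destructive operations on paths
    // inside a designated sandbox directory. Note this is a sketch:
    // it compares raw path components and does not canonicalize, so
    // "../" traversal is not handled here.
    fn is_inside_sandbox(sandbox: &Path, target: &Path) -> bool {
        target.starts_with(sandbox)
    }

    // Wrapper every deletion goes through. Remove the check (as the
    // LLM reportedly did) and deletions anywhere are silently allowed.
    fn safe_remove(sandbox: &Path, target: &Path) -> io::Result<()> {
        if !is_inside_sandbox(sandbox, target) {
            return Err(io::Error::new(
                io::ErrorKind::PermissionDenied,
                "refusing to delete a path outside the sandbox",
            ));
        }
        std::fs::remove_file(target)
    }
    ```

    With the guard in place, `safe_remove(Path::new("/tmp/sandbox"), Path::new("/etc/passwd"))` returns a `PermissionDenied` error instead of touching the file; a test that "fails" because of that error is doing its job.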