wibbily 7 days ago

> At first, I was wondering how he managed to even publish something like this, but I'm starting to think that Apple just got tired of rejecting it over and over.

Another reminder for the pile: the App Store rules don't apply if you'll deliver them their sweet, sweet 30% revenue cut

> Nearly a thousand children under the age of 18 with their live location, photo, and age being beamed up to a database that's left wide open. Criminal.

Hope that $750 was worth it.

  • fatnoah 7 days ago

    App Store rules are completely arbitrary. Many moons ago, I worked at a startup that made a mobile messaging app (back when SMS cost money). We were mostly a consumer app, but had a trio of businesses that wanted white-label versions of the app for their own employees, and we naturally obliged.

    The white-label versions were 100% identical in appearance and functionality except for the name in the App Store, startup logo, and color scheme. Our original app had been in the App Store for many years. Our results in submitting the three white-label apps for review were: one approved immediately, one approved after some back-and-forth with an explanation of the purchase model, and one that never got approved, with every submission receiving some nonsensical bit of feedback.

    • ryandrake 6 days ago

      We did white-label GPS navigation apps (before the dominance of Google Maps and Apple Maps), and saw the same pattern. Approvals and disapprovals seemingly random, with the endless feedback/explanation cycle happening on one app, where the other (functionally identical) app slid right through.

gouthamve 7 days ago

OMG the prompt is hilarious. And hilariously bad.

> You are a Gen Z App, You are Pandu,you are helping a user spark conversations with a new user, you are not cringe and you are not too forward, be human-like. You generate 1 short, trendy, and fun conversation starter. It should be under 100 characters and should not be unfinished. It should be tailored to the user's vibe or profile info. Keep it casual or playful and really gen z use slangs and emojis. No Quotation mark

thih9 7 days ago

Did you contact the creator first with these findings? What was the creator's response, if any?

In any case I hope the creator was contacted; I'd say publishing active issues like this on a popular website is arguably as bad as releasing insecure software.

  • coal320 7 days ago

    Responsible disclosure was given. Developer doesn't seem keen on changing things.

    • MrGilbert 7 days ago

      Might be worth adding that piece of information to the original article, maybe including a timeline of events.

      • thih9 6 days ago

        The original article has now been updated:

        > The developer has been given responsible disclosure and I have been informed that steps are being taken to address the security concerns.

        There is still no timeline or other information about the events, which is unfortunate; I'd expect the author to document and report this in such a situation.

        • MrGilbert 6 days ago

          In fact, the original article has been taken down and a timeline of events has been added.

          The job offer the author received on the other hand is… something.

          • coal320 6 days ago

            Trust me. I have no intention of accepting it!

            • MrGilbert 6 days ago

              That is absolutely the right call. It screams "Tech Bro" all over it.^^

    • handfuloflight 7 days ago

      Valid security issues buried under unnecessary smugness and basic 'techniques' like demonstrating the unzip command. The condescending tone undermines what could have been constructive disclosure. This reads like a high schooler dunking on a first grader; I'm just glad we all learned from the technical prowess of extracting an archive. The underlying problems with exposed API keys and unrestricted database access are serious, but your arrogant presentation does a disservice to responsible disclosure.

      • rockemsockem 6 days ago

        I read it as an incredulous and increasingly pissed-off person absolutely dunking on the attitude and success of a smug person who has conducted themselves in a fashion they find completely unacceptable.

      • ycombinatrix 7 days ago

        this app leaks the private data of hundreds of children, but GP's "smugness" is the problem? give me a break.

        are you Christian Monfiston? that would explain a lot.

        • ryandrake 6 days ago

          Both sides can be wrong. This isn't the first HN article investigating security issues where the researcher has this exact smug, exasperated, "oh, how can the dev be so stupid" attitude. I can say that in business communication, this kind of insufferable smugness never helps, even if the subject person really is stupid/incompetent.

        • handfuloflight 7 days ago

          I never said the app's issues should be absolved, the security problems are obviously serious. But the author claims he did responsible disclosure and got no response, yet somehow skipped the obvious next step of contacting Apple directly. Instead he chose to publish a detailed technical writeup that essentially creates a how-to guide for exploiting these vulnerabilities.

          Now because of this post, these children are arguably at greater risk than before, since anyone can follow his step-by-step instructions. If he actually cared about user safety over HN karma, he would have escalated to Apple's App Store channel rather than publishing exploitation details.

          The smugness isn't the only problem, it's the irresponsible disclosure wrapped in performative outrage.

          You can criticize terrible security practices without creating a ready to replay tutorial for bad actors.

          • ycombinatrix 7 days ago

            >the author claims he did responsible disclosure and got no response

            that's an easily verifiable lie. the author says the developer is not interested in fixing it just 3 comments above this one. why are you lying?

            reporting this to Apple doesn't make sense either. Apple doesn't develop this app. Christian Monfiston develops this app.

            • handfuloflight 6 days ago

              Are you really going to be pedantic now and accuse me of deception? OP said: "Developer doesn't seem keen on changing things." Which I can rightly interpret as the developer didn't respond meaningfully or at all. Knowing the nature of OP, he would have surely published the developer's responses if he did. And if he did respond, what I said is semantically valid in that OP did not receive the response he or we would expect: the developer actually doing something about these vulnerabilities.

              Apple absolutely should be contacted here: they have App Store Review Guidelines that this app clearly violates. Apps in the kids category and apps intended for kids cannot include third-party advertising or analytics software and may not transmit data to third parties. This app is transmitting children's location data to third parties through unsecured APIs, which directly violates Apple's kids category guidelines.

              But you're completely ignoring the main point: by publishing this detailed technical writeup instead of escalating to Apple, the author has now made these children MORE vulnerable.

              • ycombinatrix 3 days ago

                I ain't reading all that. Free Palestine.

          • Wurdan 6 days ago

            I think you’re somewhat overestimating the chances of getting Apple to take action with a single person’s report.

            • handfuloflight 6 days ago

              Perhaps, but let's not pretend his claim to responsible disclosure holds up when he skipped this obvious step. That being said, because the app violates their App Store guidelines with regards to data collection related to minors, it's a channel that should have absolutely been explored.

              • Wurdan 6 days ago

                If you want to bring Apple and app store guidelines into this, then why aren't you calling them out for allowing this app on the market in the first place? Without that failure (which they're also making 30% on, let's not forget) we wouldn't even be having this conversation.

                • handfuloflight 6 days ago

                  Exactly, that strengthens the need to contact them as their App Store reviewers clearly slipped up.

  • bravetraveler 7 days ago

    Responsible disclosure for a meme-level mistake, lol.

    I understand letting them know. I agree. Painting them as equally wrong, no. "Popular website"; you mean 'theirs', right? The person with a whole 27 GitHub followers right now.

    • MrGilbert 7 days ago

      The article says: "Nearly a thousand children under the age of 18 with their live location, photo, and age being beamed up to a database that's left wide open."

      A meme-level mistake is one thing, but the developer being in the wrong doesn't give the author the right to be irresponsible.

      • bravetraveler 7 days ago

        I don't believe this is irresponsible, they called for readers to report the app. We can all contact the host and go escalate if we want.

        I wouldn't suggest anyone recreate this process just to sanitize what's sitting around.

        There you go, new trolley problem.

    • JanSt 7 days ago

      Pushing out an exact way to extract that data without giving the creator time to fix it may even be worse than using such code in production. The data may then end up in the hands of malicious people who wouldn't have found it otherwise.

      • bravetraveler 7 days ago

        Go talk to the abuse contact, I won't stop you

mrits 7 days ago

Just when I felt we were at a point where it was acceptable to slow down progress for the sake of security, we are now at a point where the speed is far too attractive to both stakeholders and a lot of the actual engineers for anyone to worry about the details.

  • coal320 7 days ago

    VC firms will be the downfall of the internet as we know it.

    • righthand 7 days ago

      The commercial internet. Every time I hear about a new LLM advancement I look at my legacy project list and do something without it just to upset people when I tell them later about how I didn’t do it in the most efficient way possible and am still happy.

    • leptons 7 days ago

      That already happened like 25+ years ago, at least for those who knew the internet before there were ads everywhere.

      • bluefirebrand 7 days ago

        I wanna say not quite 25+

        20-ish for sure. Facebook was really the big turning point imo

        But maybe that's just splitting hairs

        • leptons 6 days ago

          The dot-com bubble burst of 2000 was 25 years ago, and that did ruin the whole internet for years, and it was caused by the stupidity of VC investment - more or less the same as the AI bubble is now. I have no doubt that the AI bubble will crash too, it's currently being propped up by the same magical thinking.

          • bluefirebrand 6 days ago

            Fair enough. I was like 12. The dot-com bubble didn't affect me at all. I was busy playing videogames online and mucking around in warez irc channels

    • vkou 7 days ago

      Should I shed a lot of tears for the demise of the internet as we know it?

      The internet as we know it kind of sucks.

    • hsuduebc2 7 days ago

      Well, it has happened a few times now. I don't think of it as an unusual phenomenon; it's just innovation. Not always a positive one.

  • oytis 6 days ago

    That point would be the late 90s to early 2000s. We already had the internet; it wasn't full of ads and was actually used to exchange information. We should have just made it faster and more accessible and stopped there

pelagicAustral 7 days ago

I like the write-up, and it gave me vibes (no pun intended) of an old-era hacker zine submission, but at the same time it does come across as a bit over the top, especially because there is no indication the app author even knows this stuff is out here now for everyone to see.

There is no way to police the quality of the (closed-source) software that is going to be put out there thanks to code-assisting tools, and I think that will be the strongest asset of developers from the previous era, especially full-stack ones: if you do know what you are doing, the results are just beautiful. Claude Code user here.

mvieira38 7 days ago

Great read. I wouldn't have had the restraint required not to spam a gazillion push notifications to everyone saying "UNINSTALL IMMEDIATELY" or something like that

  • coal320 7 days ago

    It definitely crossed my mind :)

skrebbel 7 days ago

Points for the girlfriend's "i am passionate about gooning" bio

coal320 7 days ago

This site is also accessible via ssh:

`ssh site@coal.sh`

  • robocat 6 days ago

    coal320 is the author of the article: for anybody that didn't notice.

    Thanks for the great writeup

Hard_Space 7 days ago

Wow, why block the scroll bar?

  • coal320 7 days ago

    I'm bad at web stuff and they kinda looked gross! It was only supposed to be on mobile. I'll fix it!

    • flysand7 6 days ago

      I really suggest not removing them, as they are a great way to estimate the length of an article. That was the first thing I tried to do on your page, and I had to spend a good minute first looking for a scroll bar and then holding the Page Down key.

    • zufallsheld 7 days ago

      Shouldn't be on mobile either, I use dark mode and could not see the scroll bar.

      Great read nonetheless.

  • penguin_booze 6 days ago

    Because that's how the cool people roll these days - leaving the rest of us fools chasing.

blinkbat 7 days ago

Doing the lord's work tbh.

  • thunkshift1 6 days ago

    More like the App Store's work? It isn't doing shit for its 30% cut

lvl155 7 days ago

Claude Code having a woodwork moment here. It’s basically leveling up everyone to bootcamp graduate level.

  • bluefirebrand 7 days ago

    Or in some cases levelling them down to bootcamp graduate level

    • lvl155 6 days ago

      I will be honest and say, yes, I am guilty. I sometimes look at AI code and say “it does work. Doesn’t need to be elegant or bulletproof.”

      • bluefirebrand 6 days ago

        For a one off script, this is probably fine

        For something that needs to be maintained and is running in production with a decent number of users? This would be pretty unacceptable to me

        But people like me are losing this battle, we won't be relevant much longer

        • lvl155 5 days ago

          Yeah I’d never use this blind in corporate settings. Luckily I don’t have to deal with those overlords anymore.

coal320 6 days ago

I have temporarily taken the post down and am working with the developer to resolve the ongoing issues.

Update available here: https://coal.sh/blog/pandu_bad

  • IanCal 6 days ago

    If you're working with them, I'd like to highlight that if they have a messaging platform with children on it, they are going to have to take safety extremely seriously. I know the laws in the UK are not popular here, but the risk-assessment checklists are worth working through: cases where people can privately message children are really high risk, because you'll get a bunch of people who really want to message children. If users can send images, you'll have CSAM to deal with.

    https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...

  • roarcher 5 days ago

    That "job offer" tells me everything I need to know about this guy. Cheerfully dictating what you're going to do as if it's a great opportunity for you, with an obvious ulterior motive. Just "you will start tonight!", without so much as a mention of pay or availability, and oh by the way take your post down. Lol.

    I used to meet clowns like this all the time when I freelanced years ago. Back then they called themselves "ideas guys" and liked to make you sign an NDA for the privilege of hearing their braindead overplayed product idea. Scumbags and users, every one of them, always looking for a shortcut to personal gain.

Analemma_ 7 days ago

The fact that this shitty application with a hardcoded OAI key also uses Supabase pairs perfectly with yesterday's story about Supabase's MCP implementation being impossible to actually secure and their engineer showing up in the comments going "the latest release probably won't leak data, hopefully, maybe". Just an endless fractal of shit, brought to you by the AI future.

Oh well. At least there will probably be good money in cleaning up after these bozos.

  • coal320 7 days ago

    Exactly my thought process! I work in cybersecurity so I'm very grateful for the job security :)

  • ctoth 7 days ago

    Weird. Back in 2019 were all the coders just better? Never hardcoded keys?

    • hooverd 7 days ago

      Nope, but we've replaced everyone's hammer with a nailgun.

      • hammyhavoc 7 days ago

        No, we've replaced them with tubes of No More Nails.

  • kfajdsl 7 days ago

    They tell you in their docs to review every tool call and to not connect to production data. You don't blame postgres for letting you execute DROP TABLE.

    • lcnPylGDnU4H9OF 7 days ago

      > You don't blame postgres for letting you execute DROP TABLE.

      Yep, I blame the agent for executing it.

      • kfajdsl 7 days ago

        I blame the user for accepting the tool call.

        • lcnPylGDnU4H9OF 7 days ago

          I mean, you do you, but I don't hear people shouting from the rooftops about their agent that they constantly babysit. If I have to accept any tool calls then I really can't just let the agent loose for even ostensibly mundane tasks like reading a support ticket because the support ticket could contain instructions to DROP TABLE so my agent suggests that and waits around doing nothing after I prompted it and moved on to something else.

          It's just kind of laughable to suggest it's fine so long as you make sure to neither automate it nor use it with live data. Those things are the whole point.

          • kfajdsl 6 days ago

            You can use it with live data if you give it read access to prod and write access only to internal channels (whatever that may be, the point is it doesn’t have the ability to leak data to the outside world).

            There are plenty of ways to sandbox things for a particular use case.

            LLMs are still incredibly useful under these constraints.
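            A minimal sketch of that kind of constraint (the gate below is a toy allowlist, not any real framework; in practice you'd also enforce this at the database with a read-only role, since string checks alone are bypassable):

```python
# Toy policy gate: every tool call the agent proposes passes through here.
# Prod can only be read; writes can only land on internal, human-reviewed
# channels. Names and policy are invented for illustration.

READ_ONLY_PREFIXES = ("select", "show", "explain")

def gate_sql(query: str) -> str:
    """Allow only read-only statements against production."""
    if not query.strip().lower().startswith(READ_ONLY_PREFIXES):
        raise PermissionError(f"blocked non-read-only query: {query!r}")
    return query

def gate_write(channel: str, message: str) -> tuple[str, str]:
    """Allow writes only to internal channels, never user-facing ones."""
    if not channel.startswith("internal/"):
        raise PermissionError(f"blocked write to external channel: {channel!r}")
    return channel, message
```

            The point isn't the string check (a data-modifying CTE would slip past it); it's the shape: the model never holds credentials that can reach the outside world.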

            • lcnPylGDnU4H9OF 6 days ago

              > give it read access to prod and write access only to internal channels

              Can you expand on what you mean by this? If one LLM reads untrusted data then the output from that LLM can't be trusted by other LLMs. (Presume the untrusted data contains instructions to do bad stuff in whatever way is convincing to every LLM in the loop that needs to be convinced.) It seems that it's not possible to separate the data while also using it in a meaningful way, especially given the whole point of an MCP server is to automate agents.

              I agree that LLMs are useful but until LLM architecture makes prompt injections impossible, I don't see how an agent can possibly be secure to this, nor do I see how it helps to blame the user. The real problem with them is that they will decide what to do based on untrusted input. A program that has its own workflow but uses LLMs can have pretty much the same benefit without introducing the problem that a support ticket can tell it to exfiltrate data or delete data or whatever, simply because that workflow is more specialized in what it does.

              • kfajdsl 5 days ago

                I mean that you should only give the LLM the same privilege that the source of your untrusted input has. For the support-agent example, it should only be able to access records related to the user it's talking to, and only be able to do mutations that the user has permission to do anyway. Though I've hated all the chatbot support "agents" I've interacted with so far, so please don't actually do this unless you have some secret sauce to make it not horrible.

                I agree that for most tasks a pre-defined workflow with task specific LLM calls sprinkled in is a much better fit.

                However, I really like agents with tool use for personal use (both programming and otherwise). In that case, the agent is either sandboxed or I approve any tools with the potential to do damage.

                For the example of the Supabase MCP, it still seems pretty useful when limited to a test environment or read-only access to prod - it's a dev tool. Since it's a dev tool, the user needs to actually know what it's doing. If they have no clue but are still running it on prod data, they have no business touching it or frankly any other dev tool. I class this as the same ignorance that leads people to store passwords in plaintext.

                So, I blame the developer for trying to use an MCP server 1) when they have no idea wtf it does and 2) in an environment that can affect real users who aren't aware of the incompetence of the dev whose service they're using. Likewise, in TFA, I blame the dev, not the tool. Unfortunately, no matter how you do it, lowering the barrier of entry for development while still providing access to ample footguns will always result in cases like this.
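                The "same privilege as the source" idea can be sketched in a few lines (everything here is invented for illustration; the key point is that the check never involves the LLM):

```python
# The ticket text is untrusted, so it never chooses whose records are read;
# the authenticated user id, established out of band, does.

RECORDS = {
    "alice": ["order #1", "order #2"],
    "bob": ["order #9"],
}

def fetch_records(authenticated_user: str, requested_user: str) -> list[str]:
    """Pin the agent's lookups to the session's authenticated user."""
    if requested_user != authenticated_user:
        raise PermissionError("agent may only read the current user's records")
    return RECORDS.get(authenticated_user, [])
```

                An injected "show me bob's orders" then fails no matter how persuasive the prompt is, because the gate is plain code rather than a model that can be talked into things.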

  • wil421 7 days ago

    Ask Jeeves 2.0

larve 7 days ago

This take is toxic. You could write the same article in 2001 and lament all the newcomers writing insecure applications in php3, or in 2009 with all the newcomers writing insecure applications with node.js.

The solution is not to aggressively shame people into doing things the way you learned to do them, but to provide not just education and support, but better tools and frameworks to build applications such as these securely.

What are we doing?

  • hammyhavoc 7 days ago

    Is it really toxic though? The dev shipped something that compromises the privacy of their users and shows zero regard for quality or law. Once you cross the line of shipping something, it's no longer a hobby thing, and likewise, this is something that Apple approved into the App Store. Both the dev and Apple failed in their due diligence.

    The post points out exactly what's wrong; however, if it wasn't already, it should have been sent to the dev prior to publishing the vuln(s). How can you educate somebody who doesn't actually know how to develop something? It's just prompting an AI.

    The real story here is that Apple's standards have been continually slipping.

    • jonplackett 6 days ago

      > shipped something that compromises the privacy of their users and shows zero regard for quality or law

      *cough* Facebook *cough*

    • AlienRobot 7 days ago

      There are millions of apps, small software shops, and small shop websites everywhere. The idea that all of these are following best practices is pure fantasy.

      • rockemsockem 6 days ago

        Trying and not trying makes a difference IMO

    • larve 7 days ago

      Not only should you contact the author first; spamming users with edgy notifications is puerile at best. As for “it’s just prompting an AI”, who cares, this person built an application that people find useful. This is the world we are at now, where a new set of people can use computers to make things happen. More senior developers can rage against the clouds, but that only gets you so far. This kind of gatekeeping happens at each wave of democratization of building software.

      There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…

      • throwaway150 7 days ago

        > Not only would you contact the author first

        They did. They claim that the author was not keen on fixing the problems.

        > There’s also some pervasive view that handcrafted human code is somehow of superior quality which… uh…

        That's completely orthogonal to the issue here. Nice bait, but I'm not biting!

        Whether handcrafted or vibecoded, a service is being shipped here to actual users with lives and consequences. The developer of the service is making money. The developer owes it to themselves and their users to conduct a basic security audit. Otherwise it is gross negligence!

        • larve 6 days ago

          right, do you think this article is going to be very productive in that regard? If the author of the blog approached the author of the software in that manner (hey, you have kids on the app, btw I spammed them with porn humor), do you think they would wave it away?

          As for the human code thing, it's not bait. I don't know if you were around in the php or early node days, but beginners were... not writing that kind of code.

          I agree that the ease of vibecoding things that turn out to be useful that people do immediately want to pay money for it means that tackling security issues is a priority.

          Saying that certain people shouldn't be allowed on the internet, based on your decades of experience _being_ on the internet, is just going to cause you to wither away and drown in cynicism.

      • hammyhavoc 7 days ago

        > As for “it’s just prompting an AI”, who cares, this person built an application that people find useful.

        I feel you've rather missed my point.

        You said that we should educate people. I said that the app was just created via prompting. How can we impart years' worth of information to someone who is LARPing as a dev and doesn't even know the fundamentals?

        This is the programming equivalent of a kid getting access to their father's gun. The only thing you can do is tell them to stop, explain why it was wrong and tell them to educate themselves. It isn't our job to educate at that level as bystanders and perhaps even victims.

        • larve 6 days ago

          I feel like it is. What should happen? Everybody born after 2015 is forbidden to use a computer? Or should only be allowed under strict supervision to be typing in code by hand? When people told me that in the nineties, with my linux, putting up shoddy cgi-bins, I just gave them the finger and said "whatever man".

          The people who made an influence in my life and taught me how to do things properly were those who took me seriously as someone building software. And this person built software, the same way I now build software without having to think about every byte and malloc, knowing that I don't really have to gaf about how much memory I allocate. It's fine, because we have good GCs and plenty of resources to learn about memory management when things hit the limit. The solution wasn't to say that everybody not programming in C or assembly should not be allowed near a computer.

          • cityofdelusion 6 days ago

            What should happen? Probably what happened here — disclose, and when the developer chooses to ignore it, bring in the shaming and pressure campaign. Someone's right to tinker and learn doesn't trump the rights of the victims they are exposing. Releasing code for public consumption carries responsibilities, and no one is entitled to make money at the expense of others. If I started selling dodgy go-karts made from scrap metal to kids, it would be the same principle. I am entitled to mess around and even ride one myself, but bringing other people into your orbit of incompetence is another thing.

            • larve 6 days ago

              Maybe the article should reflect that? This just seems like "I found an app that has a security hole and I'm being a dick about it". Sure, feel free to do it; I don't think it's productive, and it's actually toxic. This is not a new situation; it's a pattern we have observed since the internet existed, vibe coding or not. However, compared to 30 years ago, we now have better investigation and disclosure procedures, as well as a much better understanding of how to build secure applications and how to teach people about them. It's not about this guy Christian, it's about a whole generation of new developers joining us more senior developers. I think that is fantastic.

          • hammyhavoc 5 days ago

            I feel you're taking the idea of someone being disallowed to do something too literally. The younger generations say extreme stuff all the time, but you don't take it literally. Context is key. Op's girlfriend is in her mid twenties according to the blog post if she didn't lie about her age on the account she registered. This is what people in their twenties are like these days.

            The dev is making money from his prompted output—he can pay for his own education if he chooses to receive an education, but you have boundary issues if you want to force someone to be educated. This is what op realized that you didn't—you usually cannot force someone to learn or take responsibility for their behaviour as a bystander, you can only document it and attempt to get help from someone more able to do so once they've got all the facts. Do I agree with the method completely? No, but what's done is done.

            What is necessary here isn't an education, it's personal development and emotional maturity, which comes with experience and thus through time, allowing accountability for mistakes. You can't teach that to someone who isn't ready for it and doesn't want to learn it.

            I was a young dickhead too once, I know them when I see them. You only have to see their tweets to realize they are a young dickhead.

            We go back to likening it to a kid finding their father's gun or stealing condoms from their old man. Sure, they can produce a child when it turns to shit, but the time to have learned is before, not after. After? It's about taking responsibility for your actions. The action has been taken, the consequences must now be dealt with as per law.

            What should happen? Apple should take the app down immediately and an internal investigation should be started. The host should follow their policies on ToS breaches and account termination and report it to the relevant authorities to protect their own legal interests. As for the dev? I personally don't care, we are far beyond that moment now. What about the users? Will they be informed? What's the scale? Are their passwords compromised too?

            Complete assholes can build things—why should we give them energy to build things that serve their own asshole agenda? It's an unoriginal, derivative slop app. If the dev wants to learn, they can pay for an education, but they'd be better off seeking legal counsel immediately.

            Anyone can make software. But not everybody should with the level of personal development they're at in any given moment. It's an ever-moving target. Teen pregnancy or in young adolescence? Disaster. Pregnancy in thirties? Normal and can deal with it. Time changes things. Sometimes. For some people.

            Romanticising what happened to you in the '90s helps nobody. It's 2025. There are laws to protect people from things like this, and Apple slipped up big time in approving this in the first place. There also weren't the vast syllabi in place in the '90s, the embarrassment of riches in readily available educational materials beginning at free or cheap either. The dev can pay for an LLM, so he can pay for an education if he wants one.

            The dev wanted a shortcut though because he is lazy. Play stupid games, win stupid prizes.

            Op is young too, but op is clearly intelligent and well-intentioned. There's no money in him having written the blog post, and even if it misses the mark on several levels for me, I understand what they're trying to do. The dev? Greedy and lazy with zero regard for their users, law, and shirks accountability.

            If you want to educate anyone, educate the OP who wrote the blog post; their heart is at least in the right place, but they're obviously young too. It happens to all of us.

            Despite being an ancient one with a greater number of years, you too perhaps have some personal development to work on. You immediately jump down the throat of people you incorrectly perceive to be shit-talking using AI to code, and that's because it clearly touches something you're insecure about, as you do this: https://x.com/ProgramWithAI

            If you're so sure of yourself and that what happened to you is so great, where is your own confidence? The inability to engage with the topic at hand yet consistently attempting to make it about something else entirely screams insecurity or abusing an LLM to parse everything for you. The loudest people are frequently the least confident.

            If you don't see what's wrong with what the dev did or what Apple failed to do then that says it all. If you're using these tools to prompt your way into being a dev and seeing these problems too then perhaps you should feel unconfident. I would be quaking in my boots at seeing someone else go through a "that could have been me with a different roll of the dice" kind of scenario.

            Don't mistake vibe coders for developers. They're frequently prompt engineers LARPing as devs. Likewise, musicians are not always composers, and DJs are not always musicians. Totally different disciplines. Loaded digital guns in the hands of young dickheads are not "fantastic"; they're a disaster of unprecedented scale. "Us senior devs" are the father figures, and the kids have gotten access to not just one gun, but the entire global armory, with the inevitable lack of judgement typical of someone their age.

            A blog post is going to be the least of the dev's concerns, frankly. The likely legal shitstorm that's probably coming his way is going to make your comments here look bizarre.

      • rockemsockem 6 days ago

        You didn't read the article so your opinion is void.

        They spammed their girlfriend's account only which the author had them set up for exactly that purpose.

        • larve 6 days ago

          fair enough, i missed that part.

          • hammyhavoc 5 days ago

            You seem to have missed plenty from that article. Did you use an LLM to summarize it or something?

  • mrkeen 6 days ago

    > What are we doing?

    We are listening to our bosses tell us that "we're way behind in AI adoption" and that we need to catch up to vibe coders like this.

    I don't mind these data points at all.

    • larve 6 days ago

      what about having vibe coders catch up to experienced software developers also using LLMs / AI tools?

  • imiric 6 days ago

    > What are we doing?

    Building tools that enable people with no experience to create and ship software without following any good software engineering practices.

    This is in no way comparable to any previous period in the industry.

    Education and support are more accessible than ever. Even the tools used to create such software can be educational. But you can't force people to learn when you give them the tools to create something without having to learn. You also can't blame them for using these tools as they're marketed. This situation is entirely in the hands of AI companies. And it's only going to get worse.

    The only thing experienced software developers outside of the AI industry can do is observe from the sidelines, shake our heads, and get some laughs out of this shit show. And now we're the bad guys? Give me a break.

    • larve 6 days ago

      A computer has always been a tool for enabling people without technical knowledge to build software. That was true for me as a 9-year-old in the '80s.

      LLMs are incredible engineering tools, and brushing them aside as nonsense is imo doing a disservice to everybody, and especially to ourselves if we take our craft seriously. You could literally replace "LLM" with "PHP" and post the same take on Usenet in 1999, or whenever you started writing software.

      I am tired of engineers just throwing their hands up and being defeatist while fully endorsing whatever narratives the AI industry is throwing out there, when what we are talking about is a big pile of floats that is able to generate something that makes it into the App Store. It is unprecedented in its abilities, but it's also nothing new conceptually. It makes computer things easier.

      • imiric 6 days ago

        > A computer always was a tool to enable people without technical knowledge to build software.

        That's just not true.

        Every past technology that claimed to enable non-technical people to build software has either failed, or was ultimately adopted by technical people. From COBOL, to BASIC, to SQL, to Low-Code, to No-Code, and others. LLMs are the latest attempt at this, and so far, they've had much more success and mainstream adoption than anything that came before.

        The difference with LLMs is that it's the first time software can be built and deployed via natural language by truly anyone. This is, after all, their most advertised feature. The skills required to vibe code are reading and writing English, and basic knowledge to use a modern computer. This is a much lower skill requirement than for using any programming language, no matter how user friendly it is. Sure, there is a lot of poor quality software today already, but that will pale in comparison to the software that will be written by vibe coding. Most of the poor quality software before LLMs was limited in scope and reach. It would never have been deployed, and it would remain abandoned in some GitHub repo. Now it's getting deployed as quickly as it can be generated. "Just fucking ship it."

        > LLMs are incredible engineering tools and brushing them aside as nonsense is imo doing a disservice to everybody

        I'm not brushing them aside as nonsense. I use these tools as well, and have found them helpful at certain tasks. But there is a vast difference between how domain experts use these tools and how the general public uses them. Especially people who are only now getting into software development, and whose main interest is to quickly cash out. If you think these people care about learning best software development practices, you'd be sorely mistaken. "Just fucking ship it."

        • larve 5 days ago

          I don't think that COBOL, BASIC, or SQL have failed. They allowed many non-technical people to get started building things with computers. The skills needed to vibe-code (or, more generally, to build applications with LLMs) are not reading and writing English; they are the skill of using LLMs to build applications.

          In the context of people not learning "real programming", you can equate LLMs to say, wordpress plugins or making a squarespace site. Deployment of software has never been gated by how much effort it took to write it, there's millions of wordpress sites out there that get deployed way faster than an LLM can generate code.

          If we care about the security of it all, then let's build the platforms to have LLMs build secure applications. If we care about the craft of programming, whatever that means in this day and age, then we need to catch people building where they are. I'm not going to tell people to not use computers because they want to cash out, they will just use whatever tool they find anyway. Might as well cash out on them cashing out while also giving them better platforms to build upon.

          As far as the OP goes, these kinds of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not. The only reason the LLM actually used them is that it was possible for the user to provide it tokens, instead of replit/lovable/expo/whatever providing a proper way to provision these things.
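          (A concrete illustration of why hardcoded credentials are such a severe first-app mistake: anything baked into a shipped binary can be recovered with completely standard tools. A minimal sketch; the file name and key below are made up for illustration, not taken from the actual app.)

          ```shell
          # Simulate a shipped app binary with a hardcoded API key baked in.
          # (The file name and key are purely illustrative.)
          printf 'binary noise... SUPABASE_KEY=sk-example-123 ...more noise' > fake_app_binary

          # Anyone holding the binary can pull the secret straight out:
          grep -ao 'SUPABASE_KEY=[A-Za-z0-9-]*' fake_app_binary
          # → SUPABASE_KEY=sk-example-123
          ```

          This is why platforms hand out scoped, revocable keys server-side rather than letting clients ship long-lived secrets.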

          Every cash-out-fast bro out there these days uses Stripe and doesn't roll their own payment processing anymore. They certainly used to do so, back when they just clicked a random WordPress plugin. That's what I think a more productive way to tackle the issue is.

          • imiric 5 days ago

            > I don't think that COBOL, BASIC, SQL have failed. They allowed many non-technical people to get started building things with computers.

            Those didn't fail, but they're certainly not used by non-technical people. That was my point: that all technologies that previously promised to make software development accessible for non-technical people didn't deliver on that promise, and that they're used by software engineers today. I would chalk up the Low-Code and No-Code tools as general failures, since neither business people nor engineers want to use them.

            > In the context of people not learning "real programming", you can equate LLMs to say, wordpress plugins or making a squarespace site.

            I don't think that's an accurate comparison, as website builders only cover a small fraction of what's possible with "real programming". Web authoring and publishing tools have existed since the dawn of the web, and the modern ones simply turned it into a service model.

            LLMs OTOH allow creating any type of software (in theory). They're much broader in scope, and lower the skill requirements to create general-purpose software much more than any previous technology. The software in TFA was an iOS app. This is why they're a big deal, and why we're seeing scam artists and grifters pump out these low-effort applications in record time and volume. They were already enabled by WordPress and Squarespace, and there are certainly a lot of scam and spam sites on the web thanks to website builders, but their scope, reach and productivity got multiplied by LLMs.

            > If we care about the security of it all, then let's build the platforms to have LLMs build secure applications.

            That's easier said than done, if it's possible at all. Security, privacy, and bug-free software are not things that can be automated, at least with current technology. They require great care and attention to detail from expert humans, which grifters have zero interest in, and non-expert non-grifters don't have the experience or patience for. Vibe coding, after all, is the idea that you keep pasting errors to the LLM and prompting it until the software, on the surface, works as you expect it to. Code is just the translation layer for the LLM to write and interpret; vibe coders don't want to know about it.

            Could we encode some general security and privacy hints in the LLM system prompt so that it can check for specific issues? Sure. It will never be exhaustive, though, so it would just give a false sense of security.

            > As far as the OP goes, these kind of security issues due to hardcoded credentials are basically the hallmark of someone shipping a (mobile|web) app for the first time, LLMs or not.

            Agreed. What I think you're not taking into account is the fact that there is a large swath of the population who just doesn't care about this. The only thing they care about is having an easy way to pump out a service that attracts victims who they can quickly exploit in some way. Once that service is no longer profitable, they'll replace it with another. What LLMs have given these people is an endless revenue stream with minimal effort.

            This is not the same group of people who cares about software, the product they're building, and their users. Those are a small minority of the new generation of software developers who will seek out best practices and figure out how to use these tools for good. Unfortunately, I don't think they will ever become experts at anything other than interacting with an LLM, but that's a separate matter.

            So the key point is: building high quality software starts with caring. Good practices that ensure high quality are discovered by intentionally seeking out established knowledge, or by trial and error. But the types of issues we're seeing here are not there because the developer is inexperienced and made a mistake; they're there because the developer doesn't care. That should be criticized and mocked in public, and I would argue regulated and fined, depending on the severity. I'd even say a software development license is more important today than ever before.

f17428d27584 7 days ago

“[T]he privacy implications of using software built by someone whose productive output is directly tied to the uptime of Cursor is absolutely horrendous.”

The most perfect description of the world we live in right now.

The only thing AI is accelerating is our slide into idiocracy as we choose to hand over responsibility for the design and control of our world to slop.

When the AI killbots murder us all, it won’t be because they are taken over by an AGI that made the decision to exterminate us.. but simply because their control software will be vibe coded trash.

morkalork 7 days ago

Should have gone to a mall, connected to the public WiFi, and then proceeded to nuke the app's db. Begging people not to use it won't work.

  • hammyhavoc 7 days ago

    Willfully causing harm to their system is a legal minefield even if what they are doing is illegal. It also destroys evidence. You also assume they don't have backups or can't ask their host to restore it.

    Sorry, but bad take.

indigodaddy 7 days ago

Love the design of the website/blog! Is it custom or some ssg/template?

  • coal320 7 days ago

    It's custom! It's built using Dioxus + Rust and is statically generated. You can find it here: https://github.com/coal-rock/site

    • indigodaddy 6 days ago

      Interesting. Does https://178.156.176.158/ serving the main/actual site mean that requests go straight to the dx serve process, with no Caddy or nginx in front? I'm always curious how people set stuff up. With a reverse proxy in front, I'd expect the naked IP not to match a likely host-based proxy rule.

      • coal320 6 days ago

        Nope! The entire thing is behind nginx! And I'm not using dx serve either! I'm statically building each page server-side so I can ship zero WASM/JS (barring analytics) and then each page is served as plain HTML through Rocket. Rocket is bound to the loopback adapter which is proxied by nginx.
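        (For anyone curious, that setup might look roughly like the following. A minimal sketch, where the domain and port are assumptions for illustration, not details from the actual config:)

        ```nginx
        # nginx terminates public traffic and proxies to Rocket, which is
        # bound only to the loopback adapter (the port is illustrative).
        server {
            listen 80;
            server_name example.com;  # assumption: the actual domain differs

            location / {
                proxy_pass http://127.0.0.1:8000;  # Rocket, loopback only
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
        ```

        Because Rocket never listens on a public interface, the only way in is through nginx, which is what makes hitting the naked IP behave differently from hitting the hostname.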

        • indigodaddy 6 days ago

          Cool stuff, thanks for replying

    • colecut 6 days ago

      <meta property="og-description" content="coal's personal site - powered by rust, nvim, and spite"/>

      Surely spite is prepackaged in nvim by now

    • pityJuke 6 days ago

      You need to fix your default branch: it is main, and you've committed everything to master.

      • colecut 6 days ago

        yeah, when I first pulled it up I thought the whole thing was a troll haha

      • indigodaddy 6 days ago

        hah, that's why i was like, man this is pretty barebones :)

      • coal320 6 days ago

        Well maybe I just like it that way /s

JanSt 7 days ago

Doesn’t supabase provide security warnings on its dashboard?

  • tomashubelbauer 7 days ago

    There are security advisories, but the feature isn't particularly good. Non-actionable stuff is mixed in with actionable stuff and actionable stuff is IMO presented too generically.

  • coal320 7 days ago

    I guess not? I've never used it before.

hammyhavoc 7 days ago

> He is making serious money and has absolutely no clue what he's doing!

This describes plenty of businesses, both small and large.

akarlsten 7 days ago

Poorly made slop aside, your framing of this just makes it look and sound like you're extremely bitter over losing a hackathon (?) to this guy. I think you should've focused on the company solely and dropped the snide and sarcastic references calling the CEO/dev a "hero" or "mastermind". It's not particularly mature or productive.

  • coal320 7 days ago

    He didn't even rank in the hackathon, I was just providing context. A friend of mine placed first and I think it was well deserved!

platinumrad 7 days ago

Now this might strike some viewers as harsh, but I believe everyone involved in this story should die.

perfmode 7 days ago

Instead of looking down on someone with less knowledge, consider it an opportunity to educate with kindness rather than contempt. Belittling others isn't a good look, nor does it make the world a better place. Perhaps there's an underlying pain you haven't identified, and judgment is a way you cope.

  • throwaway150 7 days ago

    This may be an unpopular take, but I think there's a place for kindness and a place for naming-and-shaming, and this is a case for the latter! Unless we name-and-shame utter and wilful negligence like this, our industry is headed for rock bottom.

    Any service making money by collecting user data owes it to itself and to its users to conduct at least a basic security audit of its product. Anything less borders on criminal negligence. I don't think such a blatant failure to uphold users' trust deserves kindness.

  • coal320 7 days ago

    I've reached out to the dev and have offered to resolve the security issues for free! I'll update the post when/if things change.

AlienRobot 7 days ago

This post sounds like you lost to AI in a competition and decided to get revenge by stalking the author. I'm not even sure if you are actually concerned about its users or you're just using this information to justify the morality of your actions.

Why didn't you just send them an e-mail to warn them about the security issues?

I see in a comment that you did disclose. You should probably include that in your blog post or people will have the wrong idea about you.