prodigycorp a day ago

The best part about this is that you know the people and companies using langchain are likely the type that are not going to patch this in a timely manner.

  • peab 18 hours ago

    Langchain is great because it provides you an easy method to filter people out when hiring. Candidates who talk about langchain are, more often than not, low quality candidates.

    • babyshake 17 hours ago

      Would you say the same for Mastra? If so, what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?

      • avaer 13 hours ago

        I somewhat take issue as a LangChain hater + Mastra lover with 20+ years of coding experience and coding awards to my name (which I don't care about, I only mention it for context).

        Langchain is `left-pad` -- a big waste of your time, and Mastra is Next.js -- mostly saving you infrastructure boilerplate if you use it right.

        But I think the primary difference is that Python is a very bad language for agent/LLM stuff (you want a static type system, streaming, isomorphic code, and a strong package management ecosystem, all of which Python is bad at). And if for some ungodly reason you had to do it in Python, you'd avoid LangChain anyway so you could bolt on strong shim layers to fix Python's shortcomings in a way that won't break when you upgrade packages.

        Yes, I know there's LangChain.js. But at that point you might as well use something that isn't a port from Python.

        > what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?

        Anything that shows they understand exactly how data flows through the system (because at some point you're gonna be debugging it). You can even do that with LangChain, but then all you'd be doing is complaining about LangChain.

        • runeblaze 9 hours ago

          > And if for some ungodly reason you had to do it in Python

          I literally invoke sglang and vllm in Python. You are supposed to use the two fastest inference engines there are via Python (if not using them over the network).

        • stingraycharles 10 hours ago

          Python being a very bad language for LLM stuff is a hot take I haven’t heard before. Your arguments sound mostly like personal preferences that apply to any problem, not just agentic / LLM.

          If we’re going to throw experience around, after 30+ years of coding experience, I really don’t care too much anymore as long as it gets the job done and it doesn’t get in the way.

          LangChain is ok, LangGraph et al I try to avoid like the plague as it’s too “framework”-ish and doesn’t compose well with other things.

          • avaer 10 hours ago

            I used to write web apps in C++, so I totally understand not caring if it gets the job done.

            I guess the difference where I draw the line is that LLMs are inherently random I/O so you have to treat them like UI, or the network, where you really have no idea what garbage is gonna come in and you have to be defensive if you're going to build something complex -- otherwise, you as a programmer will not be able to understand or trust it and you will get hit by Murphy's law when you take off your blinders. (if it's simple or a prototype nobody is counting on, obviously none of this matters)

            To me insisting that stochastic inputs be handled in a framework that provides strong typing guarantees is not too different from insisting your untrusted sandbox be written in a memory safe language.

            • stingraycharles 9 hours ago

              What do static type systems provide you with that, say, structured input/output using pydantic doesn't?

              I just don’t follow your logic of “LLMs are inherently random IO” (ok, I can somehow get behind that, but structured output is a thing) -> “you have to treat them like UI / network” (ok, yes, it’s untrusted) -> static typing solves everything (how exactly?)

              This just seems like another “static typing is better than dynamic typing” debate which really doesn’t have a lot to do with LLMs.

          • saidnooneever 7 hours ago

            he says it's bad for agents, not 'LLM stuff'. python is fine for throwing tasks to the GPU. it is absolutely dreadful at any real programming. so if you want to write an agent that _uses_ LLMs, there are much better languages, for performance, safety and your sanity.

            • stingraycharles 6 hours ago

              so the argument boils down to “untyped languages are dreadful for real programming” ?

    • deepsquirrelnet 18 hours ago

      It helps with jobseeking as well. Easy to know which places to avoid.

    • notnullorvoid 17 hours ago

      Curious what your critique of LangChain is?

      I found the general premise of the tools (in particular LangGraph) to be solid. I was never in the position to use it (not my current area of work), but had I been I may have suggested building some prototypes with it.

  • wilkystyle a day ago

    Can you elaborate? Fairly new to langchain, but didn't realize it had any sort of stereotypical type of user.

    • int_19h 20 hours ago

      I'll admit that I haven't looked at it in a while, but as originally released, it was a textbook example of how to complicate a fundamentally simple and well-understood task (text templates, basically) with lots of useless abstractions that made it all sound more "enterprise". People would write complicated langchains, but when you looked under the hood all they were doing was some string concatenation, and the result was actually less readable than a simple template with substitutions in it.
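
      For example, most of what those chains did amounts to this (a sketch using only the stdlib):

        from string import Template

        # the "chain", minus the abstractions: a template plus one substitution
        prompt = Template("You are a helpful assistant. Answer: $question")
        text = prompt.substitute(question="What is 2 + 2?")
        # ...followed by a single HTTP call to the model with `text`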

      • phyzome 13 hours ago

        Huh, kind of sounds like they used LLMs to design it. :-)

      • gcr 19 hours ago

        What do you suggest instead? Handrolled code with “import openai”? BAML?

        • hhh 10 hours ago

          yes, in an industry with rapidly changing features and 7000 of these products that splinter and lose their user base so quickly, you should write your own orchestration for this stuff. It's not hard and gives you a very easy path to implementing new features or optimizations.

        • llmslave2 15 hours ago

          Oh gosh, not that legacy "hand rolled code"

        • never_inline 5 hours ago

          Have you heard of `def`?

          • gcr 3 hours ago

            It was an earnest question. I didn’t intend to be sarcastic.

            • alzoid 2 hours ago

              I went through evaluating a bunch of frameworks. There was Langchain, AG2, Firebase Gen AI / Vertex / whatever Google eventually lands on, Crew AI, Microsoft's stuff etc.

              It was so early in the game that none of those frameworks were ready. When I looked under the hood, they weren't doing a lot. I just wanted some sort of abstraction over the model APIs and the ability to use the native API if the abstraction wasn't good enough. I ended up using Spring AI. It's working well for me at the moment. I dipped into the native APIs when I needed a new feature (web search).

              Out of all the others, Crew AI was my second choice. All of those frameworks seem parasitic. Once you're on the platform you are locked in. Some were open source, but if you wanted to do anything useful you needed an API key, and you could see that features were going to be locked behind some sort of payment.

              Honestly I think you could get a lot done with one of the CLIs, like Claude Code, running in a VM.

            • never_inline 2 hours ago

              Which abstractions in langchain do you find so useful that they take significant code to replicate yourself in functions with the OpenAI SDK / LiteLLM?

    • XCSme 21 hours ago

      I am not sure what the stereotype is, but I tried using langchain and realised most of the functionality actually takes more code to use than simply writing my own direct LLM API calls.

      Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues.

      • teruakohatu 21 hours ago

        JSON Structured Output from OpenAI was released a year after the first LangChain release.

        I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time because they do have good prompts baked into the framework.
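
        For instance, a minimal sketch using the openai SDK's parse helper with a Pydantic model (model name is just an example):

          from pydantic import BaseModel
          from openai import OpenAI

          class Invoice(BaseModel):
              vendor: str
              total_cents: int

          client = OpenAI()  # reads OPENAI_API_KEY from the environment
          resp = client.beta.chat.completions.parse(
              model="gpt-4o-mini",  # example model
              messages=[{"role": "user", "content": "Extract: ACME charged $12.50"}],
              response_format=Invoice,  # schema is enforced and validated
          )
          print(resp.choices[0].message.parsed)  # e.g. Invoice(vendor='ACME', total_cents=1250)

        No prompt framework required; the schema does the heavy lifting.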

        • avaer 13 hours ago

          To this day many good models don't support structured outputs (say Opus 4.5) so it's not a panacea you can count on in production.

          The bigger problem is that LangChain/Python is the least set up to take advantage of strong schemas even when you do have it.

          Agree about pillaging for prompts though.

          • teruakohatu 9 hours ago

            > so it's not a panacea you can count on in production.

            OpenAI and Gemini models can handle ridiculously complicated and convoluted schemas; if I needed complicated JSON output I wouldn't use anything that didn't guarantee it.

            I have pushed Gemini 2.5 Pro further than I thought possible when it comes to ridiculously over-complicated (by necessity) structured output.

        • majormajor 20 hours ago

            IME you could get reliable JSON or other easily-parsable output formats out of OpenAI's models going back at least to GPT-3.5 or 4 in early 2023. I think that was a bit after LangChain's release, but I don't recall hitting problems that I needed to add a layer around in order to do "agent"-y things ("dispatch this to this specialized other prompt-plus-chatgpt-api-call, get back structured data, dispatch it to a different specialized prompt-plus-chatgpt-api-call") before it was a buzzword.
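
            That dispatch pattern is a few lines without any framework (a rough sketch; the model name is an example, and real code would validate the JSON):

              import json
              from openai import OpenAI

              client = OpenAI()

              def ask(system: str, user: str) -> str:
                  resp = client.chat.completions.create(
                      model="gpt-4o-mini",  # example model
                      messages=[{"role": "system", "content": system},
                                {"role": "user", "content": user}],
                  )
                  return resp.choices[0].message.content

              report = "ACME's Q3 revenue grew 12% on strong gadget sales."
              # specialized call #1: extract structured data
              facts = json.loads(ask('Reply with JSON only, like {"topic": "..."}', report))
              # specialized call #2: dispatch the structured result to another prompt
              summary = ask("You summarize topics tersely.", facts["topic"])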

          • nostrebored 16 hours ago

              Can guarantee this was not true for any complicated extraction. You could reliably get it to output JSON, but not the JSON you wanted.

              Even on smallish ~50k datasets, error was still very high and interpretation of the schema was not particularly good.

            • avaer 13 hours ago

              It's still not true for any complicated extraction. I don't think I've ever shipped a successful solution to anything serious that relied on freeform schema say-and-pray with retries.

      • Insanity 19 hours ago

        When my company organized an LLM hackathon last year, they pushed for LangChain... but instead of building on top of it I ended up creating a more lightweight abstraction for our use-cases.

        That was more fun than actually using it.

    • prodigycorp 21 hours ago

      No dig at you, but I take the average langchain user as one who is either a) using it because their C-suite heard about it at some AI conference and had it foisted upon them, or b) does not care about software quality in general.

      I've talked to many people who regret building on top of it but they're in too deep.

      I think you may come to the same conclusions over time.

      • inlustra 20 hours ago

        Great insight that you wouldn’t get without HN, thank you! What would you and your peers recommend?

        • baobabKoodaa 20 hours ago

          LangChain does not solve any actual problem, so there is no need to replace it with anything. Just build without it.
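
          For a single completion, "without it" is roughly this much code (a sketch; the model name is an example):

            from openai import OpenAI

            client = OpenAI()  # reads OPENAI_API_KEY from the environment
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # example model
                messages=[{"role": "user", "content": "Say hi."}],
            )
            print(resp.choices[0].message.content)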

        • peab 18 hours ago

          There's a great talk called "Pydantic is all you need" that I highly recommend.

        • sumitkumar 20 hours ago

          pydantic/pydanticAI in builder mode or llamaindex in solution architect mode.

      • wilkystyle 15 hours ago

        Thanks for the reply, and no offense taken. I've inherited some code that uses LangChain, and this is my first experience with it.

  • anonzzzies 20 hours ago

    Yep, not sure why anyone is using it still.

    • blcknight 16 hours ago

      What would you use instead?

      I built an internal CI chat bot with it like 6 months ago when I was learning. It’s deployed and doing what everyone needs it to do.

      Claude Code can do most of what it does without needing anything special. I think that's the future, but I hate the vendor lock-in Anthropic is pushing with CC.

      All my python tools could be skills, and some folks are doing that now but I don’t need to chase after every shiny thing — otherwise I’d never stop rewriting the damn thing.

      Especially since there's no standardization on plugins/skills/commands/hooks yet.

      • esperent 9 hours ago

        > I hate the vendor lock-in Anthropic is pushing with CC.

        Accepting any kind of vendor lock in within this space at the moment is an incredibly bad idea. Who knows what will get released next week, let alone the next year. Anthropic might be dead in the water in six months. It's unlikely but not impossible. Expand that to a couple of years and it's not even that unlikely.

    • echelon 19 hours ago

      These guys raised $125M at $1.3B post on $12M ARR? What.

      > Today, langchain and langgraph have a combined 90M monthly downloads, and 35 percent of the Fortune 500 use our services

      What? This seems crazy. Maybe I'm blind, but I don't see the long term billion dollar business op here.

      Leftpad also had those stats, iirc.

      • anonzzzies 18 hours ago

        They were the first, very, very early at the start of the GPT hype.

fn-mote 19 hours ago

> The blast radius is scale

Ugh. I’m a native English speaker and this sounds wrong, massaged by LLM or not.

“Large blast radius” would be a good substitute.

I am happy this whole issue doesn’t affect me, so I can stop reading when I don’t like the writing.

  • morkalork 19 hours ago

    Makes you think a human wrote it, which for an article on a .ai domain is kind of shocking!

  • croemer 19 hours ago

    It's definitely LLM generated. I came here to post that, then saw you had already pointed it out. Giveaway for me: 'The most common real-world path here is not "attacker sends you a serialized blob and you call load()." It's subtler:'

    The "it's not X, it's Y" construction; bolded items in a list.

    Also, no programmer would use this apostrophe instead of a single quote.

    • minitech 14 hours ago

      > Also, no programmer would use this apostrophe instead of a single quote.

      I’m a programmer who likes punctuation, and all of my pointless internet comments are lovingly crafted with Option+]. It’s also the default for some word processors. Probably not wrong about the article, though.

shahartal a day ago

CVE-2025-68664 (langchain-core): object confusion during (de)serialization can leak secrets (and in some cases escalate further). Details and mitigations in the post.

threecheese a day ago

Cheers to all the teams on sev1 calls on their holidays, we can only hope their adversaries are also trying to spend time with family. LangGrinch, indeed! (I get it, timely disclosure is responsible disclosure)

crtasm 19 hours ago

What're the odds they've licensed the Grinch character for use on their company blog?

  • rainonmoon 17 hours ago

    This would make them notable as the first AI company to proactively respect a trademark.

nubg a day ago

WHY on earth did the author of the CVE feel the need to feed the description text through an LLM? I get dizzy when I see this AI slop style.

I would rather just read the original prompt that went in instead of verbosified "it's not X, it's **Y**!" slop.

  • iamacyborg a day ago

    > WHY on earth did the author of the CVE feel the need to feed the description text through an LLM?

    Not everyone speaks English natively.

    Not everyone has taste when it comes to written English.

    • nkrisc 17 hours ago

      I personally find that text written by a human, even someone without a strong grasp of the language, is always preferable to read simply because each word (for better or worse) was chosen by a human to represent their ideas.

      If you use an LLM because you think you can't write and communicate well, then if that's true it means you're feeding content that you already believe isn't worthy of expressing your ideas to a machine that will drag your words even further from what you intended.

      • smj-edison 17 hours ago

        Yeah. It feels like the same amount of signal for a larger amount of noise, and I strongly prefer high SNR. Terse and accurate are what I strive for in my writing, so it's painful to read a lot of text only to realize that two sentences would've sufficed.

    • nubg a day ago

      If I want to cleanup, summarize, translate, make more formal, make more funny, whatever, some incoming text by sending it through an LLM, I can do it myself.

    • crote 21 hours ago

      I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!

      If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?

      • eviks 5 hours ago

        > Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?

        No reason, of course, there was no Christmas involved:

        > Report submitted via Huntr – December 4th, 2025
        > Acknowledged by LangChain maintainers – December 5th, 2025

      • llmslave2 14 hours ago

        It's actually far preferable to read broken English written by a human, because each language imposes its own unique "flavour" on English, which beats AI slop.

      • iinnPP 21 hours ago

        I prefer reading the LLM output for accessibility reasons.

        More importantly though, the sheer amount of this complaint on HN has become a great reason not to show up.

        • crote 20 hours ago

          > I prefer reading the LLM output for accessibility reasons.

          And that's completely fine! If you prefer to read CVEs that way, nobody is going to stop you from piping all CVE descriptions you're interested in through a LLM.

          However, having it processed by a LLM is essentially a one-way operation. If some people prefer the original and some others prefer the LLM output, the obvious move is to share the original with the world and have LLM-preferring readers do the processing on their end. That way everyone is happy with the format they get to read. Sounds like a win-win, no?

          • iinnPP 20 hours ago

            Yes, framed as you stated it is indeed a win-win.

            However, there will be cases where lacking the LLM output, there isn't any output at all.

            Creating a stigma around technology which is easily observed as being, in some form, accessible is expected in the world we live in. As it is on HN.

            Not to say you are being any type of anything; I just don't believe anyone has given it all that much thought. I read the complaints and can't distinguish them from someone complaining that they need to make some space for a blind person using their accessibility tools.

            • crote 19 hours ago

              > However, there will be cases where lacking the LLM output, there isn't any output at all.

              Why would there be? You're using something to prompt the LLM, aren't you - what's stopping you from sharing the input?

              The same logic can be applied to an even larger extent to foreign-language content. I'd 1000x rather have a "My english not good, this describe big LangChain bug, click <link> if want Google Translate" followed by a decent article written in someone's native Chinese, than a poorly-done machine translation output. At least that way I have the option of putting the source text in different translation engines, or perhaps asking a bilingual friend to clarify certain sections. If all you have is the English machine translation output, then you're stuck with that. Something was mistranslated? Good luck reverse engineering the wrong translation back to its original Chinese and then into its proper English equivalent! Anyone who has had the joy of dealing with "English" datasheets for Chinese-made chips knows how well this works in practice.

              You are definitely bringing up a good point concerning accessibility - but I fear using LLMs for this provides fake accessibility. Just because it results in well-formed sentences doesn't mean you are actually getting something comprehensible out of it! LLMs simply aren't good enough yet to rely on them not losing critical information and not introducing additional nonsense. Until they have reached that point, their user should always verify its output for accuracy - which on the author side means they were - by definition - also able to write it on their own, modulo some irrelevant formatting fluff. If you still want to use it for accessibility, do so on the reader side and make it fully optional: that way the reader is knowingly and willingly accepting its flaws.

              The stigma on LLM-generated content exists for a reason: people are getting tired of starting to invest time into reading some article, only for it to become clear halfway through that it is completely meaningless drivel. If >99% of LLM-generated content I come across is an utter waste of my time, why should I give this one the benefit of the doubt? Content written in horribly-broken English at least shows that there is an actual human writer investing time and effort into trying to communicate, instead of it being yet another instance of fully-automated LLM-generated slop trying to DDoS our eyeballs.

              • MobiusHorizons 13 hours ago

                I completely agree. I prefer the original language as it offers more choice in how to try to consume it. I believe search engines segment content by source language though, so you would probably not ever see such content in search results for English-language queries. It would be cool if you could somehow signal to search engines that you are interested in non-native-language results. I don't even tend to see results in the second language in my Accept-Language header unless the query is in that language.

            • llmslave2 14 hours ago

              I'm sorry but I don't buy the argument that we should be accepting of AI slop because it's more accessible. That type of framing is devious because you frame dissenters as not caring about accessibility. It has nothing to do with accessibility and everything to do with simply not wanting to consume utterly worthless slop.

              • iinnPP 4 hours ago

                People generally don't actually care about accessibility, and it shows, everywhere. There are obvious and glaring accessibility gains from LLMs that are entirely lost to the stigma.

          • zmgsabst 5 hours ago

            Well, no.

            Because authors do two things typically when they use an LLM for editing:

            - iterate multiple rounds

            - approve the final edit as their message

            I can’t do either of those things myself — and your post implicitly assumes there’s underlying content prior to the LLM process; but it’s likely to be iterated interactions with an LLM that produces content at all — ie, there never exists a human-written rough draft or single prompt for you to read, either.

            So your example is a lose-lose-lose: there never was a non-LLM text for you to read; I have no way to recreate the author’s ideas; and the author has been shamed into not publishing because it doesn’t match your aesthetics.

            Your post is a classic example of demanding everyone lose out because something isn’t to your taste.

            • iinnPP 4 hours ago

              Thank you for your post, it's more elegant than my explanation and makes good arguments.

              Sometimes I question my sanity these days when my (internally) valid thoughts seem to swoosh by externally.

        • roywiggins 21 hours ago

          Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.

      • colechristensen 20 hours ago

        You wouldn't complain as much if it were merely poorly written by a human. It gets the information across. The novelty of complaining about a new style of bad writing is being overdone by a lot of people, particularly on HN.

        • crote 19 hours ago

          > You wouldn't complain as much if it were merely poorly written by a human.

          Obviously.

          > It gets the information across.

          If it is poorly written by a human? Sure!

          > The novelty of complaining about a new style of bad writing

          But it's not a "new style of bad writing", is it?

          The problem is that LLM-generated content is more often than not wrong. It is only worth reading if a human has invested time into post-processing it. However, LLMs make badly-written low-quality content look the same as badly-written high-quality content or decently-written high-quality content. It is impossible for the reader to quickly distinguish properly post-processed LLM output from time-wasting slop.

          On the other hand, if its written by a human it is often quite easy to distinguish badly-written low-quality content from badly-written high-quality content. And the writing was never the important part: it has always been about the content. There are plenty of non-native English tech enthusiasts writing absolute gems in the most broken English you can imagine! Nobody has ever had trouble distinguishing those from low-quality garbage.

          But the vast majority of LLM-generated content I come across on the internet is slop and a waste of my time. My eyeballs are being DDoSed. The only logical action upon noticing that something is LLM-generated content is to abort reading it and assume it is slop as well. Like it or not, LLMs have become a sign of poor quality.

          By extension, the issue with using LLMs for important content is that you are making it look indistinguishable from slop. You are loudly signaling to the reader that it isn't worth their time. So yes, if you want people to read it, stick to bad human writing!

          • zmgsabst 5 hours ago

            > There are plenty of non-native English tech enthusiasts writing absolute gems in the most broken English you can imagine! Nobody has ever had trouble distinguishing those from low-quality garbage.

            Your entire theory about LLMs seems to rely on that… but it’s just not true, eg, plenty of quality writing with low technical merit is making a fortune while genuinely insightful broken English languishes in obscurity.

            You’re giving a very passionate speech about how no dignified noble would be dressed in these machine-made fabrics, which while some are surely as finely woven as those by any artisan, bear the unmistakable stain of association with plebs dressed in machine-made fabrics.

            I admire the commitment to aesthetics, but I think you’re fighting a losing war against the commoditization and industrialization of certain intellectual work.

  • dorianmariecom a day ago

    you can use chatgpt to reverse the prompt

    • XCSme 21 hours ago

      Not sure if it's a joke, but I don't think an LLM is a bijective function.

      • croemer 19 hours ago

        If you had all the token probabilities it would be bijective. There was a post about this here some time back.

        • XCSme 17 hours ago

          Kind of: LLMs still use randomness when selecting tokens, so the same input can lead to multiple different outputs.

    • small_scombrus 21 hours ago

      ChatGPT can generate you a sentence that plausibly looks like the prompt

      • llmslave2 14 hours ago

        Rather, it estimates a potential prompt. I could do the same and it would be no more or less accurate.

croemer 19 hours ago

LLM slop. At least one clear error (hallucination): "’Twas the night before Christmas, and I was doing the least festive kind of work: staring at serialization"

Per the disclosure timeline, the report was made on December 4; it was definitely not the night before Christmas when you were doing the work.

  • eviks 5 hours ago

    Security research often looks dramatic from the outside. In reality, it is usually the mundane work of asking AI to make up dramatic stories.

bad_haircut72 19 hours ago

The vulnerability was that dumps() and dumpd() did not properly escape user-controlled dictionaries that happened to include the reserved ‘lc’ key.
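
If I'm reading the advisory right, the confusion looks roughly like this (a hypothetical sketch of the payload shape, based on LangChain's documented serialization format; not a working exploit):

    from langchain_core.load import dumps, loads

    # attacker-controlled dict that mimics LangChain's serialized-secret marker
    user_data = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}

    blob = dumps({"metadata": user_data})  # vulnerable versions don't escape "lc"
    obj = loads(blob, secrets_map={"OPENAI_API_KEY": "sk-..."})
    # A patched version round-trips user_data as a plain dict; a vulnerable one
    # treats it as a secret reference and substitutes the real value into `obj`.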

I wonder if this code was written by an LLM hahaha

nextworddev 20 hours ago

Meanwhile Harrison Chase is laughing his way to the bank