pnathan 3 days ago

I'm glad to see this. I'm happy to plan to pay for Zed - it's not there yet, but it's well on its way - but I don't want essentially _any_ of the AI and telemetry features.

The fact of the matter is, I'm not even using AI features much in my editor anymore. I've tried Copilot and friends over and over and it's just not _there_. It needs to be in a different location in the software development pipeline (probably code reviews and RAG'ing up for documentation).

- I can kick out some money for a settings sync service.
- I can kick out some money to essentially "subscribe" for maintenance.

I don't personally think that an editor is going to return the kinds of ROI VCs look for. So.... yeah. I might be back to Emacs in a year with IntelliJ for powerful IDE needs....

  • dilDDoS 3 days ago

    I'm happy to finally see this take. I've been feeling pretty left out with everyone singing the praises of AI-assisted editors while I struggle to understand the hype. I've tried a few and it's never felt like an improvement to my workflow. At least for my team, the actual writing of code has never been the problem or bottleneck. Getting code reviewed by someone else in a timely manner has been a problem though, so we're considering AI code reviews to at least take some burden out of the process.

    • Aurornis 3 days ago

      AI code reviews are the worst place to introduce AI, in my experience. They can find a few things quickly, but they can also send people down unnecessary paths or be easily persuaded by comments or even the slightest pushback from someone. They're fast to cave in and agree with any input.

      It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.

      For anything AI related, having manual human review as the final step is key.

      • aozgaa 3 days ago

        Agreed.

        LLMs are fundamentally text generators, not verifiers.

        They might spot some typos and stylistic discrepancies based on their corpus, but they do not reason. It’s just not what the basic building blocks of the architecture do.

        In my experience you need to do a lot of coaxing and setting up guardrails to keep them even roughly on track. (And maybe the LLM companies will build this into the products they sell, but it’s demonstrably not there today)

        • CharlesW 3 days ago

          > LLMs are fundamentally text generators, not verifiers.

          In reality they work quite well for text and numeric (via tools) analysis, too. I've found them to be powerful tools for "linting" a codebase against adequately documented standards and architectural guidance, especially when given the use of type checkers, static analysis tools, etc.

          • skydhash 3 days ago

            The value of an analysis is the decision that will be taken after getting the result. So will you actually fix the codebase, or is it just a nice report to frame and put on the wall?

            • CharlesW 3 days ago

              > So will you actually fix the codebase…

              Code quality improvement is the reason to do it, so *yes*. Of course, anyone using AI for analysis is probably leveraging AI for the "fix" part too (or at least I am).

      • pnathan 3 days ago

        That's a fantastic counterpoint. I've found AI reviewers to be useful on a first pass, at a small-pieces level. But I hear your opinion!

      • chuckadams 3 days ago

        I find the summary that Copilot generates is more useful than the review comments most of the time. That said, I have seen it make some good catches. It's a matter of expectations: the AI is not going to have hurt feelings if you reject all its suggestions, so I feel even more free to reject its feedback with the briefest of dismissals.

      • kmacdough 2 days ago

        I agree and disagree. I think it's important to make it very visually clear that it is not really a PR review, but rather an advanced style checker. I think they can be very useful for assessing more rote/repetitive standards that are a bit beyond what standard linters/analysis can provide - things like institutional standards, lessons learned, etc. But if it uses the normal PR pipeline rather than the checker pipeline, it gives the false impression that it is a PR review, which it is not.

      • moomoo11 2 days ago

        What about something like this?

        Link to the ticket. Hopefully your team cares enough to write good tickets.

        So if the problem is defined well in the ticket, do the code changes actually address it?

        For example, for a bug fix: it can check the tests and see if the PR is testing the conditions that caused the bug. It can check the code changes to see if they fit the requirements.

        I think the goal with AI for creative stuff should be to make things more efficient, not necessarily to replace people. Whoever code reviews can get up to speed fast. I've been on teams where people would code review a section of the code they weren't too familiar with.

        In this case if it saves them 30 minutes then great!

    • kstrauser 3 days ago

      IMO, the AI bits are the least interesting parts of Zed. I hardly use them. For me, Zed is a blazing fast, lightweight editor with a large community supporting plugins and themes and all that. It's not exactly Sublime Text, but to me it's the nearest spiritual successor while being fully GPL'ed Free Software.

      I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.

      • dns_snek 3 days ago

        > while being fully GPL'ed Free Software

        I wouldn't sing their praises for being FOSS. All contributions are signed away under their CLA, which will allow them to pull the plug when their VCs come knocking and the FOSS angle is no longer convenient.

        • bigfudge 3 days ago

          How is this true if it’s actually GPL as gp claimed?

          • pie_flavor 3 days ago

            The CLA assigns ownership of your contributions to the Zed team[^0]. When you own software, you can release it under whatever license you want. If I hold a GPL license to a copy, I have that license to that copy forever, and it permits me to do all the GPL things with it, but new copies and new versions you distribute are whatever you want them to be. For example Redis relicensed, prompting the community to fork the last open-source version as Valkey.

            The way it otherwise works without a CLA is that you own the code you contributed to your repo, and I own the code I contributed to your repo, and since your code is open-source licensed to me, that gives me the ability to modify it and send you my changes, and since my code is open-source licensed to you, that gives you the ability to incorporate it into your repo. The list of copyright owners of an open source repo without a CLA is the list of committers. You couldn't relicense that because it includes my code and I didn't give you permission to. But a CLA makes my contribution your code, not my code.

            [^0]: In this case, not literally. You instead grant them a proprietary free license, satisfying the 'because I didn't give you permission' part more directly.

          • therealpygon 3 days ago

            Because when you sign away copyright, the software can be relicensed and taken closed source for all future improvements. Sure, people can still use the last open version, maybe fork it to try to keep going, but that simply doesn’t work out most times. I refuse to contribute to any project that requires me to give them copyright instead of contributing under copyleft; it’s just free contractors until the VCs come along and want to get their returns.

            • setopt 3 days ago

              > I refuse to contribute to any project that requires me to give them copyright instead of contributing under copyleft

              Please note that even GNU themselves require you to do this, see e.g. GNU Emacs which requires copyright assignment to the FSF when you submit patches. So there are legitimate reasons to do this other than being able to close the source later.

              • wolvesechoes 2 days ago

                I will start being worried about GNU approach the day they accept VC money.

              • therealpygon 2 days ago

                FSF and GNU are stewards of copyleft, and FSF is structured under 501(c)(3). Assigning copyright to FSF whose significant purpose is to defend and encourage copyleft…is contributing under copyleft in my mind. They would face massive backlash (and GNU would likely face lawsuits from FSF) were they to attempt such a thing. Could they? Possibly. Would they? Exceptionally unlikely.

                So yes, I trust a non-profit, and a collective with nearly 50 years of history supporting copyleft, implicitly more than I will ever trust a company or project offering a software while requiring THEY be assigned the copyright rather than a license. Even your statement holds a difference; they require assignment to FSF, not the project or its maintainers.

                That’s just listening to history, not really a gotcha to me.

              • teddyh 2 days ago

                > even GNU themselves require you to do this

                Some GNU projects require this; it’s up to the individual maintainers of each specific GNU project whether to require this or not. Many don’t.

          • carey 3 days ago

            The FSF also typically requires a copyright assignment for their GPL code. Nobody thinks that they’ll ever relicense Emacs, though.

            • ekidd 3 days ago

              It has been decades since I've seen an FSF CLA packet, but if I recall correctly, the FSF also made legally-binding promises back to the original copyright holder, promising to distribute the code under some kind of "free" (libre, not gratis) license in the future. This would have allowed them to switch from GPL 2 to GPL 3, or even to an MIT license. But it wouldn't have allowed them to make the software proprietary.

              But like I said, it has been decades since I've seen any of their paperwork, and memory is fallible.

            • kergonath 3 days ago

              They’re also not exactly a VC-backed startup.

            • johnny22 3 days ago

              Yeah, I don't mind signing a CLA for copyleft software with a non-profit org, but I do mind with a for-profit one.

          • kstrauser 3 days ago

            In my opinion, it's not. They could start licensing all new code under a non-FOSS license tomorrow and we'd still have the GPL'ed Zed as it is today. The same is true for any project, CLA or not.

      • tkz1312 3 days ago

        why not just use sublime text?

        • kstrauser 3 days ago

          That GPL/Free Software part is a pretty huge part of the reason.

          • tkz1312 3 days ago

            until the inevitable VC rug pull…

    • sli 3 days ago

      I found the OP comment amusing because Emacs, with a JetBrains IDE when I need it, is exactly my setup. The only thing I've found AI to be consistently good for is spitting out boring boilerplate so I can do the fun parts myself.

    • TheCapeGreek 3 days ago

      I always hear this "writing code isn't the bottleneck" line used when talking about AI, as if there are a chosen few engineers who only work on completely new and abstract domains that require a PhD and 20 years of experience that an LLM cannot fathom.

      Yes, you're right, AI cannot be a senior engineer with you. It can take a lot of the grunt work away though, which is still part of the job for many devs at all skill levels. Or it's useful for technologies you're not as well versed in. Or simply an inertia breaker if you're not feeling very motivated for getting to work.

      Find what it's good for in your workflows and try it for that.

      • 3836293648 3 days ago

        I feel like everyone praising AI is a webdev with extremely predictable problems that are almost entirely boilerplate.

        I've tried throwing LLMs at every part of the work I do, and it's been useless for everything beyond explaining new libraries or acting as a search engine. Any time it tries to write any code at all, the result has been entirely useless.

        But then I see so many praising all it can do and how much work they get done with their agents and I'm just left confused.

        • typpilol 2 days ago

          Can I ask what kind of work area you're in?

        • creshal 2 days ago

          Yeah, the more boilerplate your code needs, the better AI works, and the more time it saves you by wasting less of it on boilerplate.

          AI tooling my experience:

          - React/similar webdev where I "need" 1000 lines of boilerplate to do what jquery did in half a line 10 years ago: Perfect

          - AbstractEnterpriseJavaFactorySingletonFactoryClassBuilder: Very helpful

          - Powershell monstrosities where I "need" 1000 lines of Verb-Nouning to do what bash does in three lines: If you feed it a template that makes it stop hallucinating nonexisting Verb-Nouners, perfect

          - Abstract algorithmic problems in any language: Eh, okay

          - All the `foo, err := …; if err != nil { … }` boilerplate in Golang: Decent

          - Actually writing well-optimized business logic in any of those contexts: Forget about it

          Since I spend 95% of my time writing tight business logic, it's mostly useless.

    • jama211 3 days ago

      Highlighting code and having cursor show the recommended changes and make them for me with one click is just a time saver over me copying and pasting back and forth to an external chat window. I don’t find the autocomplete particularly useful, but the inbuilt chat is a useful feature honestly.

    • stouset 3 days ago

      I'm the opposite. I held out this view for a long, long time. About two months ago, I gave Zed's agentic sidebar a try.

      I'm blown away.

      I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.

      There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing, writing tests that exercise the behavior and all the verbose and annoying edge cases, and even finding bugs in your implementation. It's goddamn near magic. That's not to say they're perfect; sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.

      Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.

      Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.

      Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.

      Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.

      • skydhash 3 days ago

        > Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases,

        That's a red flag for me. Having a lot of tests usually means that your domain is fully known, so you can specify it fully with tests. But in a lot of settings, the domain is a bunch of business rules that product decides on the fly. So you need to be pragmatic and only write tests against valuable workflows - or find yourself changing a line and having 100+ tests break.

        • asgraham 3 days ago

          If you can write tests fast enough, you can specify those business rules on the fly. The ideal case is that tests always reflect current business rules. Usually that may be infeasible because of the speed at which those rules change, but I’ve had a similar experience of AI just getting tests right, and even better, getting tests verifiably right because the tests are so easy to read through myself. That makes it way easier to change tests rapidly.

          This also is ignoring that ideally business logic is implemented as a combination of smaller, stabler components that can be independently unit tested.
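
          A sketch of what "tests that specify business rules on the fly" can look like when they stay readable - a Go case table around a hypothetical shipping rule (the rule and thresholds are invented for illustration):

```go
package main

import "fmt"

// FreeShipping encodes a hypothetical business rule: orders
// strictly over $100 ship free, except to PO boxes.
func FreeShipping(total float64, poBox bool) bool {
	return total > 100 && !poBox
}

func main() {
	// The table *is* the spec: each row names a decision the
	// product team made, so a rule change is a one-line diff.
	cases := []struct {
		name  string
		total float64
		poBox bool
		want  bool
	}{
		{"over threshold ships free", 150, false, true},
		{"exactly at threshold does not", 100, false, false},
		{"PO box never ships free", 150, true, false},
	}
	for _, c := range cases {
		got := FreeShipping(c.total, c.poBox)
		status := "ok"
		if got != c.want {
			status = "FAIL"
		}
		fmt.Printf("%-30s got=%v want=%v %s\n", c.name, got, c.want, status)
	}
}
```

          When the rows read like product decisions, reviewing an AI-written test suite reduces to reviewing the table.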

          • skydhash 3 days ago

            Unit tests' value shows mostly when integration and more general tests are failing, so you can filter out some sections from the culprit list (you don't want to spend days specifying the headlights if the electrical design is wrong or the car can't start).

            Having a lot of tests is great until you need to refactor them. I would rather have a few e2e tests for smoke testing and valuable workflows, integration tests for business rules, and unit tests when it actually matters - as long as I can change implementation details without touching the tests that much.

            Code is a liability. Unless it's something you don't have to deal with directly (assembly and compilers), reducing the amount of code is a good strategy.

        • stouset 2 days ago

          This is a red flag for me. Any given user-facing software project with changing requirements is still built on top of relatively stable, consistent lower layers. You might change the business rules on top of those layers, but you need generally reasonable and stable internal APIs.

          Not having this is very indicative of a spaghetti soup architecture. Hard pass.

        • TheCapeGreek 3 days ago

          Tests breaking when you change things is... kind of the point?

          • skydhash 2 days ago

            You can over-specify. When the rules are stringent, it's best to have extensive test suites (like Formula 1). But when it's just a general app, you need to be pragmatic. It's like having a too-sensitive sensor in some systems.

      • mkl 3 days ago

        What languages and contexts are you getting these good results for?

    • skrtskrt 3 days ago

      AI is solid for kicking off learning a language or framework you've never touched before.

      But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.

      • skydhash 3 days ago

        But so is a good book. And it costs way less. Even though searching may be quicker, having a good digest of a feature is worth the half hour I can spend browsing a chapter. It's directly picking an expert's brain. Then you take notes, compare with what you found online and the updated documentation, and soon you develop a real understanding of the language/tool abstraction.

        • skrtskrt 3 days ago

          In an ideal world, yeah. But most software instructional docs and books are hot garbage, out of date, incorrect, incomplete, and far too shallow.

          • skydhash 3 days ago

            Are you reading all the books on the market? You can find some good recommendation lists. No need to get every new release from Packt.

            • mirkodrummer 3 days ago

              I knew you were going to jab at Packt XD. I have yet to find a good book from Packt, though maybe one exists. My favorite publishers are Manning and No Starch Press.

      • sarchertech 3 days ago

        I’m using Go to build a high performance data migration pipeline for a big migration we’re about to do. I haven’t touched Go in about 10 years, so AI was helpful getting started.

        But now that I’ve been using it for a while it’s absolutely terrible with anything that deals with concurrency. It’s so bad that I’ve stopped using it for any code generation and going to completely disable autocomplete.

      • mirkodrummer 3 days ago

        AI has stale knowledge, so I won't use it for learning, especially because it's biased toward the low-quality JS repos it was trained on.

        • skrtskrt 3 days ago

          A good example would be Prometheus, particularly PromQL, for which the docs are ridiculously bare, but there is a ton of material and Stack Overflow answers scattered all over the internet.

    • aDyslecticCrow 3 days ago

      Zed was just a fast and simple replacement for Atom (R.I.P.) or VS Code. Then they put AI on top when that showed up. I don't care for it, and I appreciate a project like this to return the program to its core.

  • mootoday 3 days ago

    You can opt out of AI features in Zed [0].

    [0] https://zed.dev/blog/disable-ai-features

    • inetknght 3 days ago

      Opt-out instead of opt-in is an anti-feature.

      • echelon 3 days ago

        You can leave LLM Q&A on the table if you like, but tab autocomplete is a godlike power.

        I'm auto-completing crazy complex Rust match branches for record transformation. 30 lines of code, hitting dozens of fields and mutations, all with a single keystroke. And then it knows where my next edit will be.

        I've been programming for decades and I love this. It's easily a 30-50% efficiency gain when plumbing fields or refactoring.
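
        The comment above describes Rust match branches; the same kind of mechanical field plumbing is easy to sketch in Go (the legacy-to-new record types here are hypothetical, invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// LegacyUser and User are hypothetical records; the point is the
// shape of the code, not the schema.
type LegacyUser struct {
	FirstName, LastName string
	EmailAddr           string
	Active              int // 0 or 1 in the legacy schema
}

type User struct {
	FullName string
	Email    string
	Enabled  bool
}

// migrate is almost entirely mechanical field plumbing: after the
// first field or two, completion can usually fill in the rest,
// including the small per-field normalizations.
func migrate(l LegacyUser) User {
	return User{
		FullName: strings.TrimSpace(l.FirstName + " " + l.LastName),
		Email:    strings.ToLower(l.EmailAddr),
		Enabled:  l.Active == 1,
	}
}

func main() {
	u := migrate(LegacyUser{FirstName: "Ada", LastName: "Lovelace", EmailAddr: "Ada@Example.com", Active: 1})
	fmt.Printf("%+v\n", u)
}
```

        With dozens of fields instead of three, writing this by hand is pure transcription, which is exactly where a single completion keystroke pays off.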

        • typpilol 2 days ago

          Honestly, I find it useful for simple things like having to change something in a ton of columns that you can't do with an easy find-and-replace.

          Really is game changing

      • gleenn 3 days ago

        IIRC it was opt-in.

    • oneshtein 2 days ago

      How do I opt out of unrequested pop-ups and various helpers, or the download and installation of binary files without permission?

  • senko 3 days ago

    Can't you just not use / disable AI and telemetry? It's not shoved in your face.

    I would prefer off-by-default telemetry, but if there's a simple opt-out, that's fine?

  • coneonthefloor 3 days ago

    Well said. Zed could be great if they just stopped with the AI stuff and focused on text editing.

  • DerArzt 2 days ago

    Just to echo the sentiment, I've had struggles trying to figure out how to use LLMs in my daily work.

    I've landed on using it as part of my code review process before asking someone to review my PR. I get a lot of the nice things that LLMs can give me (a second set of eyes, a somewhat consistent reviewer) but without the downsides (no waiting on the agent to finish writing code that may not work, costs me personally nothing in time and effort as my Org pays for the LLM, when it hallucinates I can easily ignore it).

  • nsm 3 days ago

    Have you considered Sublime Text as the lightweight editor?

  • asadm 3 days ago

    I think you and I are having very different experiences with these copilots/agents. So I have questions for you. How do you:

    - generate new modules/classes in your projects
    - integrate module A into module B, or entire codebase A into codebase B?

    - get someone's GitHub project up and running on your machine - do you manually fiddle with cmakes and npms?

    - convert an idea or plan.md or a paper into working code?

    - Fix flakes, fix test<->code discrepancies or increase coverage etc

    If you do all this manually, why?

    • skydhash 3 days ago

      > generate new modules/classes in your projects

      If it's formulaic enough, I will use the editor templates/snippets generator. Or write a code generator (if it involves a bunch of files). If it's not, I probably have another class I can copy and strip out (especially in UI and CRUD).

      > integrate module A into module B

      If it cannot be done easily, that's a sign of a less-than-optimal API.

      > entire codebase A into codebase B

      Is that a real need?

      > get someones github project up and running on your machine, do you manually fiddle with cmakes and npms

      If the person can't be bothered to provide proper documentation, why should I run the project? But actually, I will look into the AUR (Arch Linux) and Homebrew formulas to see if someone has already done the first job of figuring out dependency versions. If there's a Dockerfile, I will use that instead.

      > convert an idea or plan.md or a paper into working code?

      Iteratively. First have a hello world or something working, then mow down the task list.

      > Fix flakes, fix test<->code discrepancies or increase coverage etc

      Either the test is wrong or the code is wrong. Figure out which and rework it. The figuring part always takes longer, as you will need to ask around.

      > If you do all this manually, why?

      Because when something happens in prod, you really don't want that feeling of being the last one that interacted with that part, but with no idea of what has changed.

    • frakt0x90 3 days ago

      To me, using AI to convert an idea or paper into working code is outsourcing the only enjoyable part of programming to a machine. Do we not appreciate problem solving anymore? Wild times.

      • mackeye 3 days ago

        i'm an undergrad, so when i need to implement a paper, the idea is that i'm supposed to learn something from implementing it. i feel fortunate in that ai is not yet effective enough to let me be lazy and skip that process, lol

        • craftkiller 3 days ago

          When I was younger, we all had to memorize phone numbers. I still remember those numbers (even the defunct ones) but I haven't learned a single new number since getting a cellphone.

          When I was younger, I had to memorize how to drive to work/the grocery store/new jersey. I still remember those routes but I haven't learned a single new route since getting a smartphone.

          Are we ready to stop learning as programmers? I certainly am not and it sounds like you aren't either. I'll let myself plateau when I retire or move into management. Until then, every night debugging and experimenting has been building upon every previous night debugging and experimenting, ceaselessly progressing towards mastery.

          • tracker1 3 days ago

            I can largely relate... that said, I rarely rely on my phone for remembering routes to places I've been before. It does help that I've lived in different areas of my city and suburbs (Phoenix) so I'm generally familiar with most of the main streets, even if I haven't lived on a given side of town in decades.

            The worst is when I get inclined to go to a specific restaurant I haven't been to in years and it's completely gone. I've started to look online to confirm before driving half an hour or more.

          • fapjacks 3 days ago

            I noticed this also, and ever since, I've made it a point to always have memorized my SO's number and my best friend's number.

      • mirkodrummer 3 days ago

        *Outsourcing to a parrot on steroids, which will make mistakes, produce stale, ugly UI with 100px border radius, 50px padding and rainbow hipster shadows, write code biased toward low-quality training data, and so on. It's the perfect recipe for disaster.

        • xpe 3 days ago

          Over the top humor duly acknowledged.

          Disastrous? Quite possibly, but my worries stem from different concerns.

          Almost everything changes, so isn’t it better to rephrase these statements as metrics to avoid fixating on one snapshot in an evolving world?

          As the metrics get better, what happens? Do you still have objections? What objections remain as AI capabilities get better and better without limit? The growth might be slow or irregular, but there are many scenarios where AIs reach the bar where they are better at almost all knowledge work.

          Stepping back, do you really think of AI systems as stochastic parrots? What does this metaphor buy you? Is it mostly a card you automatically deal out when you pattern match on something? Or does it serve as a reusable engine for better understanding the world?

          We’ve been down this road; there is already much HN commentary on the SP metaphor. (Not that I recommend HN for this kind of thing. This is where I come to see how a subset of tech people are making sense of it, often imperfectly with correspondingly inappropriate overconfidence.)

          TLDR: smart AI folks don’t anchor on the stochastic parrots metaphor. It is a catchy phrase and helped people’s papers get some attention, but it doesn’t mean what a lot of people think it means. Easily misunderstood, it serves as a convenient semantic stop sign so people don’t have to dig in to the more interesting aspects of modern AI systems. For example: (1) transformers build conceptual models of language that transcend any particular language. (2) They also build world models with spatial reasoning. (3) Many models are quite resilient to low quality training data. And more.

          To make this very concrete: under the assumption of universal laws of physics, people are just following the laws of physics, and to a first approximation, our brains are just statistical pattern matchers. By this definition, humans would also be "stochastic parrots". I go to all this trouble to show that this metaphor doesn't cut to the heart of the matter. There are clearer questions to ask; they require getting a lot more specific about various forms and applications of intelligent behavior. For example:

          - under what circumstances does self play lead to superhuman capability in a particular domain?

          - what limits exist (if any) in the self-supervised training paradigm used for sequential data? If a transformer trained this way can write valid programs, then it can create almost any Turing machine, limited only by time and space and energy. What more could you want? (Lots, but I'm genuinely curious as to people's responses after reflecting on these.)

          • jeremyjh 3 days ago

            Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work. It doesn't learn our APIs, it doesn't learn our business domain, it doesn't learn from the countless mistakes I correct it on. What we have now is interesting, it is helping sometimes and wasteful others. It is not intelligent.

            • xpe 2 days ago

              > It is not intelligent.

              Which of the following would you agree to... ?

              1. There is no single bar for intelligence.

              2. Intelligence is better measured on a scale than with 1 bit (yes/no).

              3. Intelligence is better considered as having many components instead of just one. When people talk about intelligence, they often mean different things across domains, such as emotional, social, conceptual, spatial, kinetic, sensory, etc.

              4. Many researchers have looked for -- and found -- in humans, at least, some notions of generalized intellectual capability that tends to help across a wide variety of cognitive tasks.

              If some of these make sense, I suggest it would be wise to conclude:

              5. Reasonable people accentuate different aspects and even definitions of intelligence.

              6. Expecting a yes/no answer for "is X intelligent?" without considerable explanation is approximately useless. (Unless it is a genuinely curious opener for an in-depth conversation.)

              7. Asking "is X intelligent?" tends to be a poorly framed question.

            • xpe 2 days ago

              > Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work.

              This confuses intelligence with memory (or state), which is what tends to enable continuous learning.

              • jeremyjh 2 days ago

                No confusion here.

                This is just semantics, but you brought it up. The very first definition of intelligence provided by Webster:

                1.a. the ability to learn or understand or to deal with new or trying situations : reason also : the skilled use of reason

                https://www.merriam-webster.com/dictionary/intelligence

            • xpe 2 days ago

              Another thing that jumps out to me is just how fluidly people redefine "intelligence" to mean "just beyond what machines today can do". I can't help wondering how much your definition has changed. What would happen if we reviewed your previous opinions, commentary, thoughts, etc... would your time-varying definitions of "intelligence" be durable and consistent? Would this sequence show movement towards a clearer and more testable definition over time?

              My guess? The tail is wagging the dog here -- you are redefining the term in service of other goals. Many people naturally want humanity to remain at the top of the intellectual ladder and will distort reality as needed to stay there.

              My point is not to drag anyone through the mud for doing the above. We all do it to various degrees.

              Now, for my sermon. More people need to wake up and realize machine intelligence has no physics-based constraints to surpassing us.

              A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.

              B. The mammalian brain is an existence proof that intelligence can be grown / evolved. Homo sapiens could have bigger brains if not for birth-canal size constraints and energy limitations.

              C. There are good reasons to suggest that designing an intelligent machine will be more promising than evolving one.

              D. There are good reasons to suggest silicon-based intelligence will go much further than carbon-based brains.

              E. We need to stop deluding ourselves by moving the goalposts. We need to acknowledge reality, for this is reality we are living in, and this is reality we can manipulate.

              Let me know if you disagree with any of the sentences above. I'm not here to preach to the void.

              • xpe 2 days ago

                > A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.

                Corrected to:

                A. Businesses will boom and bust. Hype will come and go. Nevertheless, humanity seems to have an intrinsic drive to innovate, which means pushing the limits of technology. People will seek more intelligent machines, because we perceive them as useful tools. So AI is pressurized by long-running, powerful incentives, no matter how many missteps economic or otherwise. It would take a massive and sustained counter-force to prevent a generally upwards AI progression.

          • ITjournalist 2 days ago

            Regarding the phrase statistical parrot, I would claim that statistical parrotism is an ideology. As with any ideology, what we see is a speciation event. The overpopulation of SEO parrots has driven out a minority of parrots who now respecialize in information dissemination rather than information pollution, leaving their former search-engine ecological niche and settling in a new one that allows them to operate at a higher level of density, compression and complexity. Thus it's a major step in evolution, but it would be a misunderstanding to claim that evolution is the emergence of intelligence.

            • mirkodrummer 2 days ago

              The degree to which this site has been overrun with AI BS, prophetic predictions, pseudo-philosophers/anthropologists, and so on is astonishing.

      • vehemenz 3 days ago

        Drawing blueprints is more enjoyable than putting up drywall.

        • jeremyjh 3 days ago

          The code is the blueprint.

          “The final goal of any engineering activity is some type of documentation. When a design effort is complete, the design documentation is turned over to the manufacturing team. This is a completely different group with completely different skills from the design team. If the design documents truly represent a complete design, the manufacturing team can proceed to build the product. In fact, they can proceed to build lots of the product, all without any further intervention of the designers. After reviewing the software development life cycle as I understood it, I concluded that the only software documentation that actually seems to satisfy the criteria of an engineering design is the source code listings.” - Jack Reeves

      • asadm 3 days ago

        depends. if i am converting it to use in my project, i don't care who writes it, as long as it works.

    • pnathan 3 days ago

      I'm pretty fast coding and know what I'm doing. My ideas are too complex for claude to just crap out. If I'm really tired I'll use claude to write tests. Mostly they aren't really good though.

      AI doesn't really help me code vs me doing it myself.

      AI is better doing other things...

      • asadm 3 days ago

        > AI is better doing other things...

        I agree. For me the other things are non-business logic, build details, duplicate/bootstrap code that isn't exciting.

    • mackeye 3 days ago

      > how do you convert a paper into working code?

      this is something i've found LLMs almost useless at. consider https://arxiv.org/abs/2506.11908 --- the paper explains its proposed methodology pretty well, so i figured this would be a good LLM use case. i tried to get a prototype to run with gemini 2.5 pro, but got nowhere even after a couple of hours, so i wrote it by hand; and i write a fair bit of code with LLMs, but it's primarily questions about best practices or simple errors, and i copy/paste from the web interface, which i guess is no longer in vogue. that being said, would cursor excel here at a one-shot (or even a few hours of back-and-forth), elegant prototype?

      • asadm 3 days ago

        I have found that whenever it fails for me, it's likely because I was trying to one-shot the solution; I retry by breaking the problem into smaller chunks or doing planning work with gemini cli first.

        • mackeye 3 days ago

          smaller chunks work better, but ime, it takes as long as writing it manually that way, unless the chunk is very simple, e.g. essentially api examples. i tend not to use LLMs for planning because that's the most fun part for me :)

    • chamomeal 3 days ago

      For stuff like generating and integrating new modules: the helpfulness of AI varies wildly.

      If you’re using nest.js, which is great but also comically bloated with boilerplate, AI is fantastic. When my code is like 1 line of business logic per 6 lines of boilerplate, yes please AI do it all for me.

      Projects with less cruft benefit less. I’m working on a form generator mini library, and I struggle to think of any piece I would actually let AI write for me.

      Similar situation with tests. If your tests are mostly “mock x y and z, and make sure that this spied function is called with this mocked payload result”, AI is great. It’ll write all that garbage out in no time.

      If your tests are doing larger chunks of biz logic like running against a database, or if you’re doing some kinda generative property based testing, LLMs are probably more trouble than they’re worth
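      To make the "mock-heavy" distinction concrete, here is a minimal sketch in Python using the stdlib's `unittest.mock` (the `process_order`/`notifier` names are made up for illustration; this is the kind of spy-and-assert boilerplate being described, not any particular project's code):

      ```python
      from unittest.mock import MagicMock

      # Hypothetical unit under test: builds a payload and hands it
      # to a collaborator.
      def process_order(order, notifier):
          payload = {"id": order["id"], "status": "processed"}
          notifier.send(payload)
          return payload

      # The rote part an LLM churns out easily: mock the collaborator,
      # run the code, assert the spied call saw the expected payload.
      notifier = MagicMock()
      result = process_order({"id": 42}, notifier)
      notifier.send.assert_called_once_with({"id": 42, "status": "processed"})
      ```

      Tests like this are almost mechanical to write, which is why they delegate well; a test that spins up a real database or generates property-based inputs has far more judgment baked into it.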

    • stevenbedrick 3 days ago

      To do those things, I do the same thing I've been doing for the thirty years that I've been programming professionally: I spend the (typically modest) time it takes to learn to understand the code that I am integrating into my project well enough to know how to use it, and I use my brain to convert my ideas into code. Sometimes this requires me to learn new things (a new tool, a new library, etc.). There is usually typing involved, and sometimes a whiteboard or notebook.

      Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).

      As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:

      1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"

      2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.

      3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.

      4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.

      Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.

      So that's my $0.02!

    • craftkiller 3 days ago

      > generate new modules/classes in your projects

      I type:

        class Foo:
      
      or:

        pub(crate) struct Foo {}
      
      > integrate module A into module B

      What do you mean by this? If you just mean moving things around, then code refactoring tools to move functions/classes/modules existed in IDEs for decades before LLMs came around.

      > get someones github project up and running on your machine

      docker

      > convert an idea or plan.md or a paper into working code

      I sit in front of a keyboard and start typing.

      > Fix flakes, fix test<->code discrepancies or increase coverage etc

      I sit in front of a keyboard, read, think, and then start typing.

      > If you do all this manually, why?

      Because I care about the quality of my code. If these activities don't interest you, why are you in this field?

      • asadm 3 days ago

        > If these activities don't interest you, why are you in this field?

        I am in this field to deliver shareholder value. Writing individual lines of code, unless absolutely required, is below me?

        • craftkiller 3 days ago

          Ah well then, this is the cultural divide that has been forming since long before LLMs happened. Once software engineering became lucrative, people started entering the field not because they're passionate about computers or because they love the logic/problem solving but because it is a high paying, comfortable job.

          There was once a time when only passionate people became programmers, before y'all ruined it.

          • asadm 3 days ago

            i think you are mis-categorizing me. i have been programming for fun since i was a kid. but that doesn't mean i have to solve the mundane boring stuff myself when i know i can get someone else or ai to figure those parts out so i can do the fun stuff.

            • craftkiller 3 days ago

              Ah perhaps. Then I think we had different understandings of my "why are you in this field?" question. I would say that my day job is to "deliver shareholder value"[0] but I'd never say that is why I am in this field, and it sounds like it isn't why you're in this field either since I doubt you were thinking about shareholders when you were programming as a kid.

              [0] Actually, I'd say it is "to make my immediate manager's job easier", but if you follow that up the org chart eventually it ends up with shareholders and their money.

              • asadm 3 days ago

                well sure i may have oversimplified it. the shareholder is usually me :)

        • barnabee 3 days ago

          Every human who defines the purpose of their life's work as "to deliver shareholder value" is a failure of society.

          How sad.

          • asadm 3 days ago

            as opposed to fluff like "make world a better place"?

            • barnabee 2 days ago

              Defining one's worth by shareholder value is pretty dystopian, so yeah, even "make the world a better place" is preferable, at least if whoever said it really means it…

  • insane_dreamer 3 days ago

    didn't Zed recently add a config option to disable all AI features?

  • AceJohnny2 3 days ago

    > I can kick out some money to essentially "subscribe" for maintenance.

    People on HN and other geeky forums keep saying this, but the fact of the matter is that you're a minority and not enough people would do it to actually sustain a product/company like Zed.

    • ethanwillis 3 days ago

      It's a code editor so I think the geeky forums are relevant here.

      Also, this post is higher on HN than the post about raising capital from Sequoia where many of the comments are about how negatively they view the raising of capital from VC.

      The fact of the matter is that people want this and the inability of companies to monetize on that desire says nothing about whether the desire is large enough to "actually sustain" a product/company like Zed.

  • agosta 3 days ago

    "Happy to see this". The folks over at Zed did all the hard work of making the thing and are trying to make some money, and then someone just forks it to strip out all the things they need to put in to make it worth their time developing. I understand if you don't want to pay for Zed, but to celebrate someone making it harder for Zed to make money when you weren't paying them to begin with ("Happy to PLAN to pay for Zed") is beyond the pale.

    • pnathan 2 days ago

      I pay for intellij. I pay for Obsidian.

      I would pay for zed.

      The only path forward I see for a classic VC investment is the AI drive.

      But I don't think the AI bit is valuable. A powerful plugin system would be sufficient to achieve LLM integration.

      So I don't think this is a worthwhile investment unless the product gets a LOT worse and becomes actively awful for users who aren't paying beaucoup bucks for AI tooling; the ROI will have to center on the AI drive.

      It's not a move that will generate a good outcome for the average user.

    • eviks 3 days ago

      > I understand if you don't want to pay for Zed

      But he does say he does want to pay!

jemiluv8 3 days ago

I always have mixed feelings about forks. Especially the hard ones. Zed recently rolled out a feature that lets you disable all AI features. I also know telemetry can be opted out. So I don’t see the need for this fork. Especially given the list of features stated. Feels like something that can be upstreamed. Hope that happens

I remember the Redis fork and how it fragmented that ecosystem to a large extent.

  • barnabee 3 days ago

    I'd see less need for this fork if Zed's creators weren't already doing nefarious things like refusing to allow the Zed account / sign-in features to be disabled.

    I don't see a reason to be afraid of "fragmented ecosystems", rather, let's embrace a long tail of tools and the freedom from lock-in and groupthink they bring.

    • jemiluv8 3 days ago

      For what they provide, for free, I'd say refusing to disable login is not "nefarious". They need to grow a business here.

      • jeremyjh 3 days ago

        They need to make money for their investors. Once you start down the enshittification path, forever will it dominate your destiny.

    • giancarlostoro 3 days ago

      Well, there are features within Zed that are part of the account/sign-in process, so it might take a bit more effort than to "simply comment out login" for an editor as fast and smooth as Zed. I don't care that it's there as long as they don't force it on me, which they don't.

    • canadaduane 3 days ago

      I have this take, too. I tried to show how valuable this is to me via a github issue, but the lack of an answer is pretty clearly a "don't care."

  • max-privatevoid 3 days ago

    Even opt-in telemetry makes me feel uncomfortable. I am always aware that the software is capable of reporting the size of my underwear and what I had for breakfast this morning at any moment, held back only by a single checkbox. As for the other features, opt-out stuff just feels like a nuisance, having to say "No, I don't want this" over and over again. In some cases it's a matter of balance, but generally I want to lean towards minimalism.

    • m463 3 days ago

      What makes me uncomfortable is that people with your opinion have to defend their position.

      I think your thinking is common sense.

      • jemiluv8 3 days ago

        I'm not particularly attached to this position. I just don't believe in a world where interests don't collide and often the person doing more should probably have a better say in things. If we built the product, we get to dictate some of these privacy features by default.

        But giving users an escape hatch is something that people take for granted. I'd understand all this furor if there were no such thing.

        Besides, I reckon Zed took a lot of resources to build and maintain. Help them recoup their investment

        • gnud 2 days ago

          Pretty sure Zed won't let me pay for an editor without any "sign in" or LLM features?

    • fastball 3 days ago

      Automatic crash reporting is very useful if you want stable software.

  • hsn915 3 days ago

    I'm one of the people interested in Zed for the editor tech but disheartened with all the AI by default stuff.

    Opt-out is not enough, especially in a program where opting out happens via text-only config files.

    I can never know if I've correctly opted out of all the things I don't want.
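    For context, a minimal sketch of what that opt-out looks like in Zed's JSONC `settings.json` (the exact key names, such as `disable_ai`, vary by Zed version; verify against the current docs rather than trusting this fragment):

    ```json
    {
      // Sketch only -- key names may differ in your Zed version.
      "disable_ai": true,
      "telemetry": {
        "diagnostics": false,
        "metrics": false
      }
    }
    ```

    The complaint above still stands: with a config-file-only opt-out, there is no single switch you can inspect to be sure every feature is covered.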

    • fastball 3 days ago

      What interests you about Zed that is not already covered by Sublime?

      • biztos 3 days ago

        For me, it's always interesting to try out new editors, and I've been a little frustrated with Sublime lately.

        Upsides of Zed (for me, I think):

        * Built-in AI vibecodery, which I think is going to be an unavoidable part of the job very soon.

        * More IDE features while still being primarily an Editor.

        * Extensions in Rust (if I'm gonna suffer, might as well learn some Rust).

        * Open source.

        Downsides vs Sublime:

        * Missing some languages I use.

        * Business model, arguably, because $42M in VC "is what it is."

    • echelon 3 days ago

      This is why we shouldn't open source things.

      All of that hard work, intended to build a business, and nobody is happy.

      Now there's a hard fork.

      This is shitty.

      • _benj 2 days ago

        I particularly agree with you.

        Sublime is not open source and it has a very devout paying client base.

        To me the dirty thing is to make something “open source” because developers absolutely love that, and then take an arguably “not open source” path of $42 mil in VC funding.

        There’s something dissonant there.

        • aurareturn 2 days ago

          I think it makes sense business wise.

          Open source allows it to gain adoption in the dev community. Devs are notoriously hard to convince to adopt a new tool. Open source is one way to do it.

          The path is usually to have an open community edition and then a cloud/enterprise edition. Over time, there will be greater and greater separation between the open source one and the paid ones. Eventually, the company will forget that the open source part even exists and slowly phase it out.

      • hsn915 2 days ago

        Open Source does not work for business. It just doesn't.

        I intend to make my products source-available but not open source.

        I do open source libraries/frameworks that I produce as part of producing the product, but not the product itself.

  • mixmastamyk 3 days ago

    It's nice to have additional assurance that the software won't upload behind your back on first startup. Though I also run opensnitch, belt and suspenders style.

  • giancarlostoro 3 days ago

    Not to mention Zed is already open source. I guess the best thing Zed can do is make it all opt-in by default, then this fork is rendered useless.

    • mcosta 2 days ago

      This fork is useful to Zed as an automatic filter for zero-value users.

RestartKernel 3 days ago

Bit premature to post this, especially without some manifesto explaining the particular reason for this fork. The "no rugpulls" implies something happened with Zed, but you can't really expect every HN reader to be in the loop with the open source controversy of the week.

  • eikenberry 3 days ago

    Contributor Agreements exist specifically to enable license rug-pulls: since the company owns all the copyrights, it can change the license in the future. So the fact that they have a CA means they are prepping for a rug-pull, hence this bullet point.

    • latexr 3 days ago

      I can’t speak for Zed’s specific case, but several years ago I was part of a project which used a permissive license. I wanted to make it even more permissive, by changing it to one of those essentially-public-domain licenses. The person with the ultimate decision power had no objections and was fine with it, but said we couldn’t do that because we never had Contributor License Agreements. So it cuts both ways.

      • ItsHarper 3 days ago

        It's reasonable for a contributor to reject making their code available more permissively

        • latexr 3 days ago

          Of course. Just like it is reasonable for them to reject the reverse. It is reasonable for them to reject any change, which is the point.

      • eikenberry 3 days ago

        You seem to be assuming that a more permissive license is good. I don't believe this is true. Linux kernel is a great example of a project where going more permissive would be a terrible idea.

        • latexr 2 days ago

          Saying I believe one specific project—of which I was a major contributor and knew intimately—would benefit from a more permissive license in no way means I think every other project should do the same. Every case is different, and my projects have different licenses according to what makes sense. Please don’t extrapolate and assume someone’s general position from one example.

    • Conlectus 3 days ago

      I’m not sure where this belief came from, or why the people who believe it feel so strongly about it, but this is not generally true.

      With the exception of GPL derivatives, most popular licenses such as MIT already include provisions allowing you to relicense or create derivative works as desired. So even if you follow the supposed norm that without an explicit license agreement all open source contributions should be understood to be licensed by contributors under the same terms as the license of the project, this would still allow the project owners to “rug pull” (create a fork under another license) using those contributions.

      But given that Zed appears to make their source available under the Apache 2.0 license, the GPL exception wouldn’t apply.

      • max-privatevoid 3 days ago

        Indeed, if you discount all the instances where it is true, it is not true.

        From my understanding, Zed is GPL-3.0-or-later. Most projects that involve a CLA and have rugpull potential are licensed as some GPL or AGPLv3, as those are the licenses that protect everyone's rights the strongest, and thanks to the CLA trap, the definition of "everyone" can be limited to just the company who created the project.

        https://github.com/zed-industries/zed/blob/main/crates/zed/C...

        • Conlectus 3 days ago

          Good catch on the license in that file. I went by separate documents in the repo that said the source is available “under the licenses documented in the repository”, and took that to mean at-choice use of the license files that were included.

          I think the caveat to the claim that CLAs are only useful for rug pulls is still important, but this is a case where it is indeed a relevant thing to consider.

    • hsn915 3 days ago

      CA means: this is not just a hobby project, it's a business, and we want to retain the power to make business decisions as we see fit.

      I don't like the term "rug-pull". It's misleading.

      If you have an open source version of Zed today, you can keep it forever, even if future versions switch to closed source or some source-available only model.

      • jeremyjh 3 days ago

        If you build a product and a community around a certain set of values, and then you completely swap value systems its a rug pull. They build a user base by offering something they don't intend to continue offering. What the fuck else do you want to call it?

        • hsn915 a day ago

          If someone offers you free stuff for a while, then stops offering it, you should show gratitude for having the privilege of receiving the fruit of their work for free.

          You should show gratitude, not hostility.

          • jeremyjh a day ago

            I agree with that, but it’s also fine for us to be skeptical of products that are clearly headed down that path, and recommend people not use them. That is what we are discussing here.

    • zahlman 3 days ago

      CLAs represent an important legal protection, and I would never accept a PR from a stranger, for something being developed in public, without one. They're the simplest way to prove that the contributor consented to licensing the code under the terms of the project license, and a CYA in case the contributed code is e.g. plagiarized from another party.

      (I see that I have received two downvotes for this in mere minutes, but no replies. I genuinely don't understand the basis for objecting to what I have to say here, and could not possibly understand it without a counterargument. What I'm saying seems straightforward and obvious to me; I wouldn't say it otherwise.)

      • Eliah_Lakhin 3 days ago

        I upvoted your comment. I share your view and just wanted to say you're not the only one who thinks this way.

        • Conlectus 3 days ago

          There are dozens of us. Dozens!

    • jen20 3 days ago

      Are you suggesting the FSF has a copyright assignment for the purposes of “rug pulls”?

      • eikenberry 3 days ago

        It was; some see the GPLv2 -> GPLv3 transition as a rug-pull. But it doesn't matter today, as the FSF stopped requiring CAs back in 2021.

        • mirashii 3 days ago

          That's a harder argument to make given the "or later" clause was the default in the GPLv2, and also optional.

      • ilc 3 days ago

        Yes.

        The FSF requires assignment so they can re-license the code to whatever new license THEY deem best.

        Not the contributors.

        A CLA should always be a warning.

        • craftkiller 3 days ago

          IANAL but their official reason for the CLA seems pretty reasonable to me: https://www.gnu.org/licenses/why-assign.en.html

          tl;dr: If someone violates the GPL, the FSF can't sue them on your behalf unless they are a copyright holder.

          (personally I don't release anything under viral licenses like the GPL, but I don't think there's a nefarious purpose behind their CLA)

          • dragonwriter 3 days ago

            > If someone violates the GPL, the FSF can't sue them on your behalf unless they are a copyright holder.

            This seems to be factually untrue; you can assign specific rights under copyright (such as your right to sue and receive compensation for violations by third parties) without assigning the underlying copyright. Transfer of the power to relicense is not necessary for transfer of the power to sue.

            • teddyh 2 days ago

              Whether or not it is actually true, this is what their lawyer has told them, and so the FSF is acting accordingly. You can’t reasonably blame them for that.

              • dragonwriter 2 days ago

                I can’t reasonably excuse the FSF for it, and if you think about what the FSF’s mission and prime means of pursuing it is, I think you’ll see why your blame-outsourcing excuse doesn't really work in this case.

  • NoboruWataya 3 days ago

    Zed is quite well known to be heavily cloud- and AI-focused, it seems clear that's what's motivating this fork. It's not some new controversy, it's just the clearly signposted direction of the project that many don't like.

    • aurareturn 2 days ago

      I remember it started out as a native app editor that is all about speed. I think it only started focusing on AI after LLMs blew up.

      • setopt 2 days ago

        It focused on cloud / collab from the beginning though.

  • decentrality 3 days ago

    Seems like it might be reacting to or fanned to flame by: https://github.com/zed-industries/zed/discussions/36604

    • 201984 3 days ago

      No, this fork is at least 6 months old. The first PR is dated February 13th.

      • decentrality 3 days ago

        This is correct. The fork and the pitchforks are not causally related

    • FergusArgyll 3 days ago

      That's not a rug pull, that's a few overly sensitive young 'uns complaining

      • MeetingsBrowser 3 days ago

        overly sensitive to what?

        • bigstrat2003 3 days ago

          "You're doing business with someone whose views I dislike" is not harassment, nor do I believe that the person who opened the issue is arguing in good faith. The world is full of people with whom I disagree (often strongly) on matters of core values, and I work with them civilly because that is what a mature person does. Unless the VC firm starts pushing Zed to insert anti-Muslim propaganda into their product, or harassing the community, there are no reasonable grounds to complain about the CoC.

          • MeetingsBrowser 3 days ago

            I don't agree that it is immature or overly sensitive. The issue basically says:

            > Hey, you look to be doing business with someone who publicly advocates for harming others. Could you explain why and to what extent they are involved?

            "doing business with someone whose views I dislike" is slightly downplaying the specific view here.

            • bigstrat2003 3 days ago

              I think that the formulation you gave is precisely "doing business with someone whose views I dislike". It assumes much that simply should not be assumed, to wit:

              * That this man actually advocates for harming others, versus advocating for things that the github contributor considers tantamount to harming others

              * That his personal opinions constitute a reason to not do business with a company he is involved with

              * That Zed is morally at fault if they do not agree that this man's personal opinions constitute a reason to not do business with said company

              I find this kind of guilt by association to be detestable. If Zed wishes to do business with someone whom I personally would not do business with for moral reasons, that does not confer some kind of moral stain on them. Forgiveness is a virtue, not a vice. Not only that, but this github contributor is going for the nuclear option by invoking a public shaming ritual upon Zed. It's extremely toxic behavior, in my opinion.

            • samdoesnothing 3 days ago

              Yet they post this on Github, which apparently isn't a problem for themselves or the code of conduct despite Microsoft having ties with the Israeli military.

            • zahlman 3 days ago

              >The issue basically says:

              I don't think any of the evidence shown there demonstrates "advocacy for harming others". The narrative on the surely-unbiased-and-objective "genocide.vc" site used as a source there simply isn't supported by the Twitter screencaps it offers.

              This also isn't at all politely asking "Could you explain why and to what extent they are involved?" It is explicitly stating that the evidenced level of involvement (i.e.: being a business partner of a company funding the project) is already (in the OP's opinion) beyond the pale. Furthermore, a rhetorical question is used to imply that this somehow deprives the Code of Conduct of meaning. Which is absurd, because the project Code of Conduct doesn't even apply to Sequoia Capital, never mind to Shaun Maguire.

              • runarberg 3 days ago

                The issue also cites the New York Times. Here is an archive: https://archive.is/6VoyD You can read the quote for yourself here: https://x.com/shaunmmaguire/status/1941135110922969168 There is no question that this is racist speech that builds on a racist stereotype. Many of Zed’s contributors are no doubt Muslims, the very group Shaun Maguire is being racist against here.

                Zed’s leadership does have to answer for why they invited people like that to become a part of Zed’s team.

                • zahlman 2 days ago

                  (And since I missed it the first time around: accepting funding from Sequoia Capital doesn't make Maguire "part of the team".)

                • zahlman 2 days ago

                  I had already read the NYT article when I commented.

                  Making a racist claim in a tweet is not advocacy for harming others.

        • GuB-42 3 days ago

          Boycotting a text editor because the company that makes it accepted funding from another company that has a partner who holds controversial views on a conflict in Gaza where children are killed is going a bit far, I think.

          In a perfect world, children don't get killed, but with that many levels of indirection, I don't think there is anything in this world that is not linked to some kind of genocide or other terrible things.

          • runarberg 3 days ago

            It should be relatively easy to simply not accept money from companies such as these. Accepting this money is a pretty damning moral failure.

            • GuB-42 3 days ago

              I don't have a startup, but not accepting $32M doesn't seem particularly easy to me.

              I am sure plenty of people here know these things, this is Y Combinator after all, but to me, the general idea in life is that getting money is hard, and stories that make it look easy are scams or extreme outliers.

              • foldr 2 days ago

                Exactly. Any moral compromise can be justified if it’s necessary to fund your startup.

              • runarberg 3 days ago

                We clearly disagree here, but be that as it may, Zed’s contributors are obviously outraged at this, and I argue that this outrage is justifiable. The amount of money you accept from reprehensible people is usually pretty strongly correlated with the number of people who’ll look down on you for doing so.

                • its-summertime 3 days ago

                  > Zed’s contributors are obviously outraged at this

                  Do you have an example of that? I can't find any contributors that are upset about this aspect of the funding

                  • runarberg 3 days ago
                    • its-summertime 3 days ago

                      Which of those are contributors?

                      • runarberg 2 days ago

                        I would say all of them. By taking part in a discussion about the editor, they are contributing. But if you are talking about code contributions in particular: Zed has thousands of code contributors, and this discussion has hundreds of interactions, overwhelmingly supportive. There is no way for me to cross-check that (but honestly I would be very surprised if there were no code contributors among the 170 upvotes this discussion got).

                        But this is all an aside, I was talking about contributors in a more general sense.

            • samdoesnothing 3 days ago

              Microsoft has ties to the Israeli military. Every commentator in that post should be ashamed of using and supporting Github, a product of Microsoft, as they are indirectly supporting the Israeli cause. This is far worse than simply accepting funding from a company who hires an employee with disagreeable views.

              • runarberg 3 days ago

                “disagreeable views” is doing some heavy lifting:

                > Mr. Maguire’s post was immediately condemned across social media as Islamophobic. More than 1,000 technologists signed an open letter calling for him to be disciplined. Investors, founders and technologists have sent messages to the firm’s partners about Mr. Maguire’s behavior. His critics have continued pressuring Sequoia to deal with what they see as hate speech and other invective, while his supporters have said Mr. Maguire has the right to free speech.

                https://archive.is/6VoyD#selection-725.0-729.327

                Shaun Maguire is a partner, not just a simple hire, and Sequoia Capital had a chance to distance themselves from him and his views, but opted not to.

                This is very different from your average developer using GitHub, most of them have no choice in the matter and were using GitHub long before Microsoft’s involvement in the Gaza Genocide became apparent. Zed’s team should have been fully aware of what kind of people they are partnering with. Like I said, it should have been very easy for them not to do so.

                EDIT: Here is a summary of the “disagreeable views” in question: https://genocide.vc/meet-shaun-maguire/

                At the end there is a simple request for Sequoia Capital, which Sequoia opted against:

                > We call on Sequoia to condemn Shaun’s rhetoric and to immediately terminate his employment.

                • zahlman 3 days ago

                  In my moral calculus, it is literally not possible for a person to say something that is so bad that it becomes morally worse than actual physical violence. I know from experience that I am not at all alone in this, and I suspect that GP thinks similarly.

                  Emphasizing the nature of Mr. Maguire's opinion is not really doing anything to change the argument. Emphasizing what other people think about that opinion, even less so.

                  > Zed’s team should have been fully aware of what kind of people they are partnering with.

                  In my moral calculus, accepting money from someone who did something wrong, when that money was honestly obtained and has nothing to do with the act, does not make you culpable for anything. And as GP suggests, Microsoft's money appears to have a stronger tie to violence than Maguire's.

                  • runarberg 3 days ago

                    Just to be clear, we are talking about genocidal and racist hate speech here (you can see for yourself). It is not some one-off thing he has said (which, to be clear, would be bad enough) but something Shaun Maguire has defined his whole online persona around. Speech such as these are an integral part of every genocide, as they seek to dehumanize the victims and justify (or deny) the atrocities against them.

                    As an aside—despite the popularity of the trolley problem—people don’t have a rational moral calculus, and moral behavior does not follow a sequential order from best to worst. Whatever your moral calculus may be, it has no effect on whether or not the Zed team’s actions were a moral blunder... they were.

                    • samdoesnothing 3 days ago

                      It's only a moral blunder if you either decide everyone is guilty of indirect association with "bad" people, or if you selectively chose who is guilty or not based on some third factor (generally ingroup/outgroup). The former doesn't result in making Github threads, and the latter is a kind of behaviour that ironically leads to the sins underpinning this whole issue.

                    • dlubarov 3 days ago

                      Genocidal speech? Where?

                      The site you linked to just seems to brazenly misrepresent each of Shaun's tweets - e.g. the tweet that "demonized Palestinians" never mentions Palestinians, but does explicitly refer to Hamas twice. Not sure how Shaun could have been any clearer that he was criticizing a specific terrorist group and not an entire racial/ethnic group.

                      • runarberg 2 days ago

                        The post on genocide.vc is almost two years old; Shaun Maguire’s speech has only gotten worse since. The NYT took up the story when he started targeting a particular American politician with his racist Islamophobia. Go to Shaun Maguire’s Twitter profile and scroll down, e.g. to his May tweets before he became so obsessed with being racist against Mamdani; along the way you will find plenty of tweets, e.g. the Pallywood conspiracy theory, and plenty of other genocide denial/justification, intermixed with his regular Islamophobia. Just see for yourself.

                        • zahlman 2 days ago

                          I read the NYT story. It doesn't portray anyone who comes anywhere close to being genocidal.

                          > plenty of other genocide denial/justification

                          So he disagrees with you about this word being appropriate to describe what's actually going on. This is not a fringe viewpoint.

                          • runarberg 2 days ago

                            It very much is a fringe and very hateful viewpoint. There is a difference between disagreeing with how a technical and a legal term is used to describe atrocities, and flat out denying and justifying said atrocities. Most people who don’t describe the Gaza Genocide as a genocide are doing the former. Shaun Maguire is doing the latter. When he publicly shares the Pallywood conspiracy theory he is engaging in and spreading hateful genocidal rhetoric. This is hate speech and is illegal in many countries (though enforcement is very lax).

                            • zahlman 2 days ago

                              > There is a difference between disagreeing with how a technical and a legal term is used to describe atrocities, and flat out denying and justifying said atrocities. Most people who don‘t describe the Gaza Genocide as a genocide are doing the former. Shaun Maguire is doing the latter.

                              Nothing you have quoted evidences this.

                              > When he publicly shares the Pallywood conspiracy theory he is engaging in and spreading a hateful genocidal rhetoric.

                              Claiming that your political outgroup is engaging in political propaganda is not the same thing as calling for their deaths. Suggesting otherwise is simply not good faith argumentation.

                              Nothing you have done here constitutes a logical argument. It is only repeating the word "genocide" as many times as you can manage and hoping that people will sympathize.

                              > This is hatespeech and is illegal in many countries

                              This is not remotely a valid argument (consider for example that many countries also outlaw things that you would consider morally obligatory to allow), and is also irrelevant as Mr. Maguire doesn't live in one of those countries.

                              • runarberg 2 days ago

                                > Claiming that your political outgroup is engaging in political propaganda is not the same thing as calling for their deaths.

                                I don’t think you grasp the seriousness of hate speech. Even if you don’t explicitly call for their deaths, by partaking in hate speech (including by sharing conspiracy theories about the group) you are playing an integral part of the violence against the group. And during an ongoing genocide, this speech is genocidal, and is an integral part of the genocide. There is a reason hate speech is outlawed in almost every country (including the USA, although the USA is pretty lax about what it considers hate speech).

                                The Pallywood conspiracy theory is exactly the kind of genocidal hate speech I am talking about. This conspiracy theory has been thoroughly debunked, but it persists among racists like Shaun Maguire, and serves as an integral part to justify or deny the violence done against Palestinians in an ongoing genocide.

                                If you disagree, I invite you to do a thought experiment. Swap out Palestinians with Jews, and swap out the Pallywood conspiracy theory with e.g. Cultural Marxism, and see how Shaun Maguire’s speech holds up.

                                • zahlman 2 days ago

                                  > I don’t think you grasp the seriousness of hate speech.

                                  No; I think you are wrong about that seriousness.

                                  > by partaking in hate speech (including by sharing conspiracy theories about the group) you are playing an integral part of the violence against the group.

                                  No, I disagree very strongly with this, as a core principle.

                                  > and serves as an integral part to justify or deny the violence done against Palestinians in an ongoing genocide.

                                  And with this as well.

                                  > If you disagree, I invite you to do a thought experiment. Swap out Palestinians with Jews, and swap out the Pallywood conspiracy theory with e.g. Cultural Marxism, and see how Shaun Maguire’s speech holds up.

                                  First off, the "cultural Marxism" theory is not about Jews, any more than actual Marxists blaming things on "greedy bankers" is about Jews. (A UK Labour party leader once got in trouble for this, as I recall, and I thought it was unjustified even though I disagreed with his position.)

                                  Second, your comments here are the first I've heard of this conspiracy theory, which I don't see being described by name in Maguire's tweets.

                                  Third, no. This thought experiment doesn't slow me down for a moment and doesn't lead me to your conclusions. If Maguire were saying hateful things about Jewish people (the term "anti-Semitic" for this is illogical and confusing), that would not be as bad as enacting violence against Jewish people, and it would not constitute "playing an integral part of the violence" enacted against them by, e.g., Hamas.

                                  The only way to make statements that "serve as an integral part to justify or deny violence" is to actually make statements that either explicitly justify that violence or explicitly deny it. But even actually denying or justifying violence does not cause further violence, and is not morally on the same level as that violence.

                                  > There is a reason hate speech is outlawed in almost every country (including the USA; although USA is pretty lax what it considers hate speech).

                                  There is not such a reason, because the laws you imagine do not actually exist.

                                  https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat...

                                  American law does not attempt to define "hate speech", nor does it outlaw such. What it does do is fail to extend constitutional protection to speech that would incite "imminent lawless action" — which in turn allows state-level law to be passed, but generally that law doesn't reference hatred either.

                                  https://en.wikipedia.org/wiki/Brandenburg_v._Ohio

                                  Even in Canada, the Criminal Code doesn't attempt to define "hatred", and such laws are subject to balancing tests.

                                  https://en.wikipedia.org/wiki/Hate_speech_laws_in_Canada

                                  > The Pallywood conspiracy theory is exactly the kind of genocidal hate speech I am talking about. This conspiracy theory has been thoroughly debunked

                                  Even after looking this up, I don't see anything that looks like a single unified claim that could be objectively falsified. I agree that "conspiracy theory" is a fair term to describe the general sorts of claims made, but expecting the label "conspiracy theory" to function as an argument by itself is not logically valid — since actual conspiracies have been proven before.

                                  • runarberg 2 days ago

                                    > First off, the "cultural Marxism" theory is not about Jews, any more than actual Marxists blaming things on "greedy bankers" is about Jews.

                                     I don’t follow. Cultural Marxism is an anti-Semitic conspiracy theory which has inspired terrorist attacks; see e.g. Anders Behring Breivik, or the Charlottesville riots. Greedy bankers is not a conspiracy theory, but a simple observation of the accumulation of wealth under capitalism. Terrorists targeting minorities very frequently use Cultural Marxism to justify their atrocities. “Greedy bankers” is invoked during protests, or at worst during political violence against individuals or institutions. There is a fundamental difference here; if you fail to spot it, I don’t know what to tell you, and honestly I fear you might be operating under some serious misinformation about the spread of anti-Semitism among the far-right.

                                     As for Pallywood, it is a conspiracy theory which claims that many of the atrocities committed by the IDF in Gaza are staged by the Palestinian victims of the Gaza Genocide. There have been numerous allegations of widespread staging operations, but so far there is zero proof of any of them. It is safe to say that the people who believe in this conspiracy theory do so because of racist beliefs about Palestinians, not because they have been convinced by evidence. And just like Cultural Marxism, the Pallywood conspiracy theory has been used to justify serious attacks and the deaths of many people, but unlike Cultural Marxism, the perpetrators of these attacks are almost exclusively confined to the IDF.

                                     By the way, Shaun Maguire has 5 tweets where he uses the term directly (all from 2023), but he uses it indirectly a lot. And just like with Cultural Marxism, citing the conspiracy theory—even if you don’t name it directly—is still hate speech. E.g. when the white nationalists at the Charlottesville riots were chanting “Jews will not replace us!” they were citing the White Replacement conspiracy theory (as well as Cultural Marxism), and they were engaging in hate speech, which directly led to the murder of Heather Heyer.

                                    And to hammer the point home (and to bring the conversation back to the topic at hand), I seriously doubt the Zed team would have accepted VC funding from an investor affiliated with an open supporter of Anders Behring Breivik or the Charlottesville rioters.

                                    • zahlman 2 days ago

                                      > Cultural Marxism is an anti-Semitic conspiracy theory

                                      No, it isn't. I've observed people to espouse it without any reference to Judaism whatsoever. (I don't care how Wikipedia tries to portray it, because I know from personal experience that this is not remotely a topic that Wikipedia can be trusted to cover impartially.)

                                      > Greedy bankers is not a conspiracy theory

                                      I didn't say it was. It is, however, commonly a dogwhistle, and even more commonly accused of being a dogwhistle. And people who claim that Jews are overrepresented in XYZ places of power very commonly do get called conspiracy theorists as a result, regardless of any other political positions they may hold.

                                      > Terrorists targeting minorities very frequently use Cultural Marxism to justify their atrocities.

                                      This is literally the first time in 10+ years of discussion of these sorts of "culture war" topics, and my awareness of the term "cultural Marxism", that I have seen this assertion. (But then, I suspect that I would also disagree with you in many ways about who merits the label of "terrorist", and about how that is determined.)

                                      > honestly I fear you might be operating under some serious misinformation about the spread of anti-Semitism among the far-right.

                                      There certainly exist far-rightists who say hateful things about Jews. But they're certainly not the same right-wingers who refuse to describe the actions of Israeli forces as "genocide". There is clearly and obviously not any such "spread"; right-wing sentiment on the conflict is more clearly on Israel's side than ever.

                                      The rest of this is not worth engaging with. You are trying to sell me on an accounting of events that disagrees with my own observations and research, as well as a moral framework that I fundamentally reject.

                                      I should elaborate there. It doesn't actually matter to me what you're trying to establish about the depth of these atrocities (even though I have many more disagreements with you on matters of fact). We have a situation where A accepts money from B, who has a business relationship with C, who demonstrably has said some things about X people that many would consider beyond the pale. Now let's make this hypothetical as bad as possible: let's suppose that every X person in existence has been brutally tortured and murdered under the direct oversight of D, following D's premeditated plans; let's further suppose that C has openly voiced support of D's actions. (Note here that in the actual case, D doesn't even exist.) In such a case, the value of X is completely irrelevant to how I feel about this. C is quite simply not responsible for D's actions, unless it can be established that D would not have acted but for C's encouragement. Meanwhile, A has done absolutely nothing wrong.

                                      • dttze a day ago

                                        > No, it isn't. I've observed people to espouse it without any reference to Judaism whatsoever.

                                        That’s the point of a dog whistle. Are people who use the (((this))) idiom also not antisemites because they don’t explicitly mention Jews? Also look up Cultural Bolshevism and who used that term.

                                      • runarberg 2 days ago

                                        In my circles there is a saying: If you are at a party, and somebody brings a Nazi to the party, and nobody kicks the Nazi out of the party, then you are at a Nazi party.

                                        Sequoia Capital was made aware that one of their partners was a racist Islamophobe; they opted not to do anything about it and allowed him to remain a partner. One can only assume that Sequoia is an Islamophobic investor. I personally see knowingly accepting money from racist Islamophobes as a problem, and I would rather nobody did that.

                                        • zahlman a day ago

                                          > In my circles there is a saying: If you are at a party, and somebody brings a Nazi to the party, and nobody kicks the Nazi out of the party, then you are at a Nazi party.

                                          Yes, you are from exactly the circles that you appear to be from based on your other words here.

                                          In my circles, that reasoning is bluntly rejected. The reductio ad absurdum is starkly apparent: your principle, applied transitively (as it logically must), identifies so many people as irredeemably evil (including within your circles!) that it cannot possibly be reconciled with the observed good in the real world.

                                          And frankly, the way that the term "Nazi" gets thrown around nowadays seems rather offensive to the people who actually had to deal with the real thing.

                    • zahlman 2 days ago

                      > Speech such as these are an integral part of every genocide, as they seek to dehumanize the victims and justify (or deny) the atrocities against them.

                      That does not make such speech genocidal.

                      It also does not make such speech worse than physical violence.

                      It also does not make the speech of someone you associate with relevant to your own morality.

                • samdoesnothing 3 days ago

                  Now that Microsoft's role has become apparent, one with a significantly larger impact than Sequoia's inaction, why do developers continue to use Github? There are several alternatives which provide equivalent features. Why is this type of inaction not condemned?

                  Furthermore, if accepting funding in this manner is considered a violation of their CoC, then surely the use of Github is even more of a violation. Why wasn't that brought up earlier instead of not at all?

                  And finally, ycombinator itself has members of its board who have publicly supported Israel. Why are you still using this site?

                  Turns out when you try to tar by association, everybody is guilty.

    • Squarex 3 days ago

      [flagged]

      • barbazoo 3 days ago

        > Are they really boycotting jews now?

        Just because they're boycotting someone who happens to be Jewish doesn't necessarily mean they're boycotting them because of it.

        > Zed just announced that they are taking money from Sequoia Capital, which has a partner, Shaun Maguire, who has recently been publicly and unapologetically Islamophobic. It seems hard to believe that the team didn't know about this, as it was covered in the New York Times. In addition, Maguire has been actively pro-occupation and genocide in Palestine for nearly 2 years.

        > How can anyone feel like the Code of Conduct means anything at all, when Sequoia is an investor? I'm shocked and surprised at the Zed team for this - I expected much better.

        Reads like it has more to do with what they have said and done in the past, which seems reasonable.

        • nicce 3 days ago

          Sounds like the timer is on. Right when Zed started to be really good.

  • marcosdumay 3 days ago

    They got a VC investment.

    But a fork with focus on privacy and local-first only needs lack of those to justify itself. It will have to cut some features that zed is really proud of, so it's hard to even say this is a rugpull.

    • yencabulator a day ago

      > It will have to cut some features that zed is really proud of

      What, they're proud of the telemetry?

      The fork claims to make everything opt-in and to not default to any specific vendor, and only to remove things that cannot be self-hosted. What proprietary features have to be cut that Zed people are really proud of?

      https://github.com/zedless-editor/zedless?tab=readme-ov-file...

      As far as I know, the Zed people have open sourced their collab server components (as AGPLv3), at least well enough to self-host. For example, https://github.com/zed-industries/zed/blob/main/docs/src/dev... -- AFAIK it's just https://github.com/livekit/livekit

      The AI stuff will happily talk to self-hosted models, or OpenAI API lookalikes.
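
      For what it's worth, a rough sketch of what that looks like in Zed's settings.json (exact setting names may vary between Zed releases, and the URL and model name here are placeholders for whatever local server you run):

      ```json
      {
        "language_models": {
          "openai": {
            // Any OpenAI-compatible endpoint works: llama.cpp's
            // llama-server, vLLM, Ollama, etc. No cloud account involved.
            "api_url": "http://localhost:8080/v1",
            "available_models": [
              { "name": "qwen2.5-coder-7b", "max_tokens": 32768 }
            ]
          }
        }
      }
      ```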

    • m463 3 days ago

      Today we're announcing our $32M Series B led by Sequoia Capital with participation from our existing investors, bringing our total funding to over $42M. - zed.dev

_benj 3 days ago

I’m curious how this will turn out. Reminds me of the node.js fork io.js and how that shifted the way node was being developed.

If there’s a group of people painfully aware of telemetry and AI being pushed everywhere, it’s devs…

dkersten 3 days ago

What I really want from Zed is multi window support. Currently, I can’t pop out the agent panel or any other panels to use them on another monitor.

Local-first is nice, but I do use the AI tools, so I’m unlikely to use this fork in the near term. I do like the idea behind this, especially no telemetry and no contributor agreements. I wish them the best of luck.

I did happily use Zed for about a year before using any of its AI features, so who knows, maybe I’ll get fed up with AI and switch to this eventually.

  • bn-l 3 days ago

    Yes same here. I tried it out because of all the discussion about it then saw I couldn’t pop the panel out (or change some really basic settings cursor has had for over a year) then closed and uninstalled it.

201984 3 days ago

Comment from the author: https://lobste.rs/c/wmqvug

> Since someone mentioned forking, I suppose I’ll use this opportunity to advertise my fork of Zed: https://github.com/zedless-editor/zed

> I’m gradually removing all the features I deem undesirable: telemetry, auto-updates, proprietary cloud-only AI integrations, reliance on node.js, auto-downloading of language servers, upsells, the sign-in button, etc. I’m also aiming to make some of the cloud-only features self-hostable where it makes sense, e.g. running Zeta edit predictions off of your own llama.cpp or vLLM instance. It’s currently good enough to be my main editor, though I tend to be a bit behind on updates since there is a lot of code churn and my way of modifying the codebase isn’t exactly ideal for avoiding merge conflicts. To that end I’m experimenting with using tree-sitter to automatically apply AST-level edits, which might end up becoming a tool that can build customizable “unshittified” versions of Zed.

  • haneefmubarak 3 days ago

    > reliance on node.js

    When did people start hating node and what do they have against it?

    • bigstrat2003 3 days ago

      For Zed specifically? It cuts directly against their stated goal of being fast and resource-light. Moreover, it is not acceptable for software I use to automatically download and run third-party software without asking me.

      For node.js in general? The language isn't even considered good in the browser, for which it was invented. It is absolutely insane to then try to turn it into a standalone programming language. There are so many better options available, use one of them! Reusing a crappy tool just because it's what you know is a mark of very poor craftsmanship.

    • leblancfg 3 days ago

      > When did people start hating node

      You're kidding, right?

      • WestCoader 3 days ago

        Maybe they've just never seen a dependency they didn't like.

    • max-privatevoid 3 days ago

      It shouldn't be as tightly integrated into the editor as it is. Zed uses it for a lot of things, including to install various language servers and other things via NPM, which is just nasty.

    • muppetman 3 days ago

      You might not be old enough to remember how much everyone hated JavaScript initially - just as an in-browser language. Then suddenly it's a standalone programming language too? WTH??

      I assume that's where a lot of the hate comes from. Note that's not my opinion, just wondering if that might be why.

      • skydhash 3 days ago

        JavaScript is actually fine, as the warts have been documented. The main issue these days is the billions of tiny packages: so many people/orgs to trust for every project that uses npm.

        • zahlman 3 days ago

          Nobody is forcing you to use the tiny packages.

          The fact that the tiny packages are so popular despite their triviality is, to me, solid evidence that simply documenting the warts does not in fact make everything fine.

          And I say this as someone who is generally pro having more small-but-not-tiny packages (say, on the order of a few hundred to a few thousand lines) in the Python ecosystem.

          • hollerith 3 days ago

            The point is that Zed's developers have chosen to include prettier, which probably transitively includes many other NPM packages.

            Node and these NPM packages represent a large increase in attack surface for a relatively small benefit (namely, prettier is included in Zed so that Zed's settings.json is easier to read and edit) which makes me wonder whether Zed's devs care about security at all.

    • woodson 3 days ago

      I guess some node.js based tools that are included in Zed (or its language extensions) such as ‘prettier’ don’t behave well in some environments (e.g., they constantly try to write files to /home/$USER even if that’s not your home directory). Things like that create some backlash.

    • aDyslecticCrow 3 days ago

      Slow and RAM-heavy. Zed feels refreshingly snappy compared to VS Code even before adding plugins. And why does a desktop application need to use an interpreted programming language?

    • Sephr 3 days ago

      For me, upon its inception. We desperately needed unity in API design and node.js hasn't been adequate for many of us.

      WinterTC has only recently been chartered in order to make strides towards specifying a unified standard library for the JS ecosystem.

adastra22 3 days ago

Thank you.

That's all I have to say right now, but I feel it needs to be said. Thank you for doing this.

withinrafael 3 days ago

The CLA does not change the copyright owner of the contributed content (https://zed.dev/cla), so I'm confused by the project's comments on copyright reassignment.

  • Huppie 3 days ago

    Maybe not technically correct but it's still the gist of this line, no?

    > Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”).

    They are allowed to use your contribution in a derivative work under another license and/or sublicense your contribution.

    It's technically not copyright reassignment though.

    • withinrafael 3 days ago

      Yes, you grant the entity you've submitted a contribution to, to use (not own) your contribution in whatever it ends up in. That was the whole point of the developer's contribution right?

      • pie_flavor 3 days ago

        The CLA has you granting them a non-open-source license. It permits them to change the Zed license to a proprietary one while still incorporating your contributions. It doesn't assign copyright ownership, but your retaining the ability to release your contribution under a different license later has little practical value.

        • withinrafael 2 days ago

          Isn't that a good thing? As a dev submitting something to them, I want my feature/bugfix to stay with the product.

          Are you suggesting that devs should be able to burden the original contribution with conditions, like "they can't use my code without permission 5 years later if you relicense"? That's untenable, isn't it?

          I don't know how else you would accept external contributions for software without the grant in the CLA. Perhaps I'm not creative enough!

          • pie_flavor 6 hours ago

            I submit my code contributions, for free, because I am participating in a collaborative community effort called an Open Source Project. I do not typically contribute to the proprietary codebases of for-profit companies for free; I have a contractor rate for that.

            If you say 'that makes it untenable for me to accept your contributions for free, then relicense to proprietary keeping those contributions', well, that's your problem. I don't particularly care about arranging tenable circumstances for you to sell my work under a proprietary license without paying me.

            The way you accept external contributions for software without a CLA grant is by not attempting to take the project proprietary, and keeping the open source arrangement forever. I do not see how you could be confused about an open source project staying open source forever while taking open-source-only contributions. That is what almost all open source projects do.

      • max-privatevoid 3 days ago

        I'm concerned about relicensing. See HashiCorp.

  • ItsHarper 3 days ago

    It may not technically reassign copyright, but it grants them permission to do whatever they want with your contributions, which seems pretty equivalent in terms of outcome.

    • withinrafael 3 days ago

      Yes, you grant the entity you've submitted a contribution to, to use (not own) your contribution in whatever it ends up in. That was the whole point of the developer's contribution right?

      • nicce 3 days ago

        Without the CLA, they can’t, for example, sell the code under a different license, or exempt themselves from the current GPL license requirements. But yeah, there might be some confusion with terms.

        Relevant part:

        > 2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”). Further, to the extent that You participate in any livestream or other collaborative feedback generating session offered by Company, you hereby consent to use of any content shared by you in connection therewith in accordance with the foregoing Contributor License Grant.

popalchemist 3 days ago

Would be wise to not invoke their name, which is trademarked.

cultofmetatron 3 days ago

I've been using AI extensively the last few weeks, but not as a coding agent. I really don't trust it for that. It's really helpful for generating example code for a library I might not be familiar with. A month ago, I was interested in using RabbitMQ but the docs were limited. ChatGPT was able to give me a fairly good amount of starter code to see how these things are wired together. I used some of it and added to it by hand to finally come up with what is running in production. It certainly has value in that regard. Letting it write and modify code directly? I'm not ready for that.

The other thing it's useful for is finding the source of an error when the error message isn't so great. I'll usually copy-paste the code that I know is causing the error along with the error message, and it'll point out the issues in a way that I can immediately address. My method is cheaper too; I can get by just fine on the $20/month ChatGPT sub doing that.

leshenka 3 days ago

Shouldn’t this just be a pull request to Zed itself that hides AI features behind compile flags? That way the ‘fork’ would just be a build command with a different set of flags, with no changes to the actual code.

conradev 3 days ago

  Chrome : Chromium :: Zed : ????

I don’t view Chrome and Chromium as different projects, but primarily as different builds of the same project. I feel like this will (eventually) go the same way.

  • max-privatevoid 3 days ago

    I like to think of the relationship between Zed and Zedless more like Chromium and ungoogled-chromium.

faangguyindia 3 days ago

I loved Zed Editor. In fact, I was using it all the time, but being a "programmer", I wanted to extend it with extensions, and it was hard for me to roll out my Rust extension, with APIs and stuff missing.

I went ahead with VS Code. I had to spend 2 hours to make it look like Zed with configs, but I was able to roll out an extension in JavaScript much faster, and VS Code has a lot of APIs available for extensions to consume.

jazzyjackson 3 days ago

I'm confused how the "contributors" feature works on GitHub, is this showing that this fork has 986 contributors and 29,961 commits? Surely that's the Zed project overall. I feel like this gives undue reputation to an offshoot project.

https://github.com/zedless-editor/zed/graphs/contributors

  • Aurornis 3 days ago

    It's contributors to the codebase you're viewing.

    It's fair because those people contributed to the codebase you're seeing. Someone can't fork a repo, make a couple commits, and then have GitHub show them as the sole contributor.

  • brailsafe 3 days ago

    It's the zed project overall from the point where the fork was created, plus any downstream merges and unique contributions to zedless

  • rubbietheone 3 days ago

      Yeah, I get it; it looks like zedless itself has been going on for a while. However, I'm not sure what the best way to approach this is, since the fork still carries Zed's original commit history.

nsonha 3 days ago

Software engineers: add otel to help debug their own products, while relentlessly protesting any telemetry in someone else's.

johnfn 3 days ago

This fork has around 20 net-new commits on it. The Zed repository has around 30,000 commits. This is a wee bit premature, no?

ahmetcadirci25 3 days ago

Was it necessary?

  • zahlman 3 days ago

    I think we would all be clearly worse off if OSS developers collectively decided to limit themselves to what is "necessary".

yogorenapan 3 days ago

This just reminded me that I have Zed installed but haven't used it at all yet. Neovim is a bit too sticky with all my custom shortcuts. Will uninstall it and try this version out when I eventually decide to migrate

bitbasher 3 days ago

I knew it was a matter of time before this happened. I even considered starting it myself, but didn't want the burden of actually maintaining it.

I even thought of calling it zim (zed-improved.. like vim). Glad to see the project!

Quitschquat 3 days ago

I think this guy has to be trolling on the testimonials page:

    “Yes! Now I can have shortcuts to run and debug tests. Ever since snippets were added, Zed has all of the features I could ask for in an editor.”

toastal 3 days ago

A “privacy focus” that claims “no reliance on proprietary cloud services” should not hypocritically lock its code & collaboration behind Microsoft’s GitHub.

ElijahLynn 3 days ago

I on the other hand would probably only switch to Zed with the AI integration. Want to learn a new language? Using AI speeds it up by a factor of months.

lordofgibbons 3 days ago

Zed makes it incredibly easy to both turn off telemetry and use your own LLM inference endpoints. So why is this needed?
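For what it's worth, both are plain settings.json toggles; a minimal sketch assuming the key names Zed currently documents (`telemetry` and `disable_ai`), worth verifying against your version:

```json
{
  "telemetry": {
    "diagnostics": false,
    "metrics": false
  },
  "disable_ai": true
}
```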

djabatt 3 days ago

Right On! I use Zed and appreciate what the team is building.

Tepix 3 days ago

So, what‘s Zed?

  • yobert 3 days ago

    Zed is a really really nice editor. I consider the AI features secondary but they have been useful here and there. (I usually have them off.) You can use it like cursor if you want to.

    Where I think it gets really interesting is that they are adding features to compete with Slack. Imagine a tight integration between Slack huddles and VS Code's collaborative editing. Since it's built from scratch, it's much nicer than both. I'm really excited about it.

  • spagoop 3 days ago

    Zed's dead, baby. Zed’s dead.

    • jeffreygoesto 3 days ago

      Padadadap - Sound of fingers on a leather hood...

  • jks 3 days ago

    An AI editor, a competitor to Cursor but written from scratch and not a VS Code fork. They recently announced a funding round from Sequoia. https://news.ycombinator.com/item?id=44961172

    • andrewmcwatters 3 days ago

      I don't understand why people say X is a competitor to Cursor, which is built on Visual Studio Code, when GitHub Copilot came out first, and is... built on Visual Studio Code.

      It also didn't start out as a competitor to either.

    • efilife 3 days ago

      It wasn't an AI editor for a long time

      • TheCraiggers 3 days ago

        Yup. Their big design goal seemed to just be "speed" for a majority of development. That's it.

    • athenot 3 days ago

      Even without any AI stuff, it's a fantastic editor for its speed.

      • azemetre 3 days ago

        Someone posted this in the other zed thread but it looks on par with VS Code in speed according to these results:

        https://mastodon.online/@nikitonsky/112146684329230663

        • nicce 3 days ago

          Depends how you measure it. At least my battery lasts an hour longer when using Zed compared to VSCode. Also, the link is almost 1.5 years old.

  • dmit 3 days ago

    Code editor. Imagine VSCode, but with a native GUI for each platform it supports and fewer plugins. And a single `disable_ai` setting that you can use to toggle those kinds of features off or on.

  • barbazoo 3 days ago

    Watch the video on https://zed.dev/, apparently it's really good at quickly cycling through open documents at 120Hz while still seeing every individual tab. Probably something people asked for at some point.

  • ricardobeat 3 days ago

    Spiritual successor to Sublime Text. They’ve been doing a lot of AI stuff but originally just focused on speed.

    https://zed.dev/

    • Jtsummers 3 days ago

      https://en.wikipedia.org/wiki/Atom_(text_editor)

      More like a spiritual successor to Atom, at least per the people that started it who came from that project.

      • ricardobeat 3 days ago

        Atom was based on web tech, like VSCode, while Zed is a native app with a custom GUI framework, just like Sublime Text. And just like ST, it's now the standard option for a fast, barebones text editor. That's what I mean by 'spiritual successor'.

        • eviks 2 days ago

          Isn't an extensible plugin API part of the ST spirit? (So Zed can't be a successor until it incorporates something similar.)

      • lexoj 3 days ago

        It's funny how the same guy who wrote (borderline) the slowest editor went ahead and built the fastest. Practice makes perfect, I guess :)

  • Scarbutt 3 days ago

    A code editor with a lot of rough edges. If they don't start polishing the turd, I doubt they'll make it.

  • skrtskrt 3 days ago

    [flagged]

    • jen20 3 days ago

      The reason I’ve been using Zed is _because_ there is no screwing about with any of that stuff. For Erlang and Elixir it’s been less problematic than IntelliJ, faster and less gross than VS code, and hasn’t required me to edit configuration files other than to turn the font size up.

    • zwnow 3 days ago

      Sorry, I couldn't hear you through the nvim startup time and keyboard noises while you were waiting for your IDE to start

      • pjmlp 3 days ago

        Who restarts their IDE all the time?

        It takes me longer than that to fetch a coffee from the kitchen area.

        • fidotron 3 days ago

          > Who restarts their IDE all the time?

          Android developers reindexing.

        • jen20 3 days ago

          Depends which IDE. IntelliJ stays open permanently. When I used full-fat visual studio it would crash so often that I’d have developed an even worse caffeine problem had I fetched coffee every time it needed restarting.

        • timeon 2 days ago

          Is this the reason why people say that 8GB is not enough for writing some code?

          • pjmlp 2 days ago

            Nah, Electron is the reason. My first real IDEs were the whole suite of Borland IDEs for MS-DOS and Windows 3.x.

        • mosburger 3 days ago

          > Who restarts their IDE all the time?

          Xcode users laugh nervously.

      • Ygg2 3 days ago

        Neovim just gets in the way. I observe the machine code directly through my sacred bond with the machine spirit. And the holy mechanical tentacles connected to my visual cortex.

      • skrtskrt 3 days ago

        Famous indicator of software quality: how fast an editor opened to write it.

        • 0x457 3 days ago

          Sometimes my ADHD kicks in while Intellij launches and I forget what I was working on.

          • skrtskrt 3 days ago

            This is completely fair lol

syntaxing 3 days ago

This is awesome. Honestly, with the release of Qwen3-Coder-30B-A3B, we have a model that's pretty close to the perfect local model. Obviously the larger 32B dense one does better, but the 30B MoE model handles agentic work pretty well and is great at FIM/autocomplete.

lttlrck 2 days ago

I would like to try Zed, but it doesn't run on my system due to impenetrable MESA/Vulkan errors with Intel UHD 700, even though vkcube runs fine.

Running a text editor should not be this hard, it's pretty ridiculous. Sublime Text is plenty fast without this nonsense.

colesantiago 3 days ago

I welcome this, now we get Zed for free with privacy on top without all the AI features that nobody asked for.

As soon as any dev tool gets VC backing, there should be an open source alternative to alleviate the inevitable platform decay (or enshittification, for lack of a better word).

This is a better outcome for everyone.

Some of us just want a good editor for free.

  • jen20 3 days ago

    > Some of us just want a good editor for free.

    Sums up the problem neatly. Everyone wants everything for free, but someone has to pay the developers. Sometimes things align (there is indeed a discussion on LinkedIn about Apple hiring the OPA devs today); mostly it doesn't.

    • TheCraiggers 3 days ago

      > Someone has to pay the developers.

      Agreed. Although nobody ever mentions the 1,100+ developers that submitted PRs to Zed.

      And yeah. I know what you mean. But this is the other side of the OSS coin. You accept free work from outside developers, and it will inevitably get forked because of an issue. But from my perspective, it's a great thing for the community. We're all standing on the shoulders of giants here.

johanneskanybal 3 days ago

[flagged]

  • trostaft 3 days ago

    ???

    The first line of the README

    > Welcome to Zed, a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.

    The second line of the README (with links to download & package manager instructions omitted)

    > Installation

    > On macOS and Linux you can download Zed directly or install Zed via your local package manager.

    I do not dispute that HN is an echo chamber. But how did you come to your conclusions?

st3fan 3 days ago

I like this but can we stop calling product telemetry “spyware” please.

  • jart 3 days ago

    It kind of is. I don't want Richard Stallman knowing every time I open a file in emacs or run the ls command. Keep that crap out of local software. There should be better ways to get adoption metrics for your investors, like creating a package manager for your software, or partnering with security companies like Wiz. If you have telemetry, make it opt-in, and help users understand that it benefits them by being a vote in what bugs get fixed and what features get focused on. Then publish public reports that aggregate the telemetry data for transparency like Mozilla and Debian.

    • hiccuphippo 3 days ago

      It is a tool for developers. Give them a link to your bug tracker and let them tell you themselves.

      • jart 2 days ago

        People file issues when they're unhappy. If that's your only vantage point, you're gonna be crying yourself to sleep each night.

  • barnabee 3 days ago

    No. It's spyware. Software authors/vendors have no right to collect telemetry and it ought to be illegal to have any such data collection and/or exfiltration running on a user's device by default or without explicit, opt-in consent.

    • rendx 3 days ago

      It already is in Europe thanks to GDPR. Just not enough formal complaints or lawsuits (yet); e.g. IP addresses are explicitly Personally Identifiable Information.

  • JoshTriplett 3 days ago

    Why? Any non-opt-in product telemetry is spyware, and you have no idea what they'll do with the data. And if it's an AI company, there's an obvious thing for them to do with it.

    (Opt-in telemetry is much more reasonable, if it's clear what they're doing with it.)

    • mgsloan2 3 days ago

      Collection of data from code completions is off by default and opt-in. It also only collects data when one of several allowlisted open-source licenses is present in the worktree root.

      Options to disable crash reports and anonymous usage info are presented prominently when Zed is first opened, and can of course be configured in settings too.

  • max-privatevoid 3 days ago

    We can stop calling it spyware once it is not spyware (will never happen).

  • foresto 2 days ago

    If it collects information from someone, and they don't want it to, then it is spying.

    I am deeply disappointed in how often I encounter social pressure, condescending comments, license terms, dark patterns, confidentiality assurances, anonymization claims, and linguistic gymnastics trying to either convince me otherwise or publicly discredit me for pointing it out. No amount of those things will change the fact that it is spyware, but they do make the world an even worse place than the spyware itself does, and they do make clear that the people behind them are hostile actors.

    No, we will not stop calling it what it is.