Thanks to Chris for continuing to challenge his comfort zone (and mine!) and for sharing his impressions and learnings with us!
I may be a little biased because I've been writing webapps with htmx for 4 years now, but here are my first thoughts:
- The examples given in this blog post show what seems to be the main architectural difference between htmx and Datastar: htmx is HTML-driven, Datastar is server-driven. So yes, the client-side API is simpler, but that's because the other side has to be more complex: in the first example, if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side. I guess it's a matter of personal preference then, but from an architecture point of view both approaches hold up
- The "fewer attributes" argument seems unfair when the htmx examples use optional attributes with their default values (yes, you can remove the hx-trigger="click" in the first example; that's 20% fewer attributes, and the argument is now 20% less strong)
- Minor but still: the blog post would gain credibility and its arguments would be stronger if HTML were used more properly: who wants to click on <span> elements? <button> exists exactly for that, please use it, it's accessible ;-)
- In the end I feel that the main Datastar selling point is its integration of client-side features, as if Alpine or Stimulus features were natively included in htmx. And that's a great point!
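To make the first bullet's point concrete, here's a rough sketch of where the "where does the fragment go" knowledge lives in each approach. The attribute and SSE event names below are from my memory of each library's docs, so treat them as illustrative rather than exact:

```html
<!-- htmx: the element itself declares where the server's fragment goes -->
<button hx-get="/contacts" hx-target="#results" hx-swap="innerHTML">
  Load contacts
</button>
<div id="results"></div>

<!-- Datastar: the element only triggers the request -->
<button data-on-click="@get('/contacts')">Load contacts</button>
<div id="results"></div>
<!-- The server's response decides the target, e.g. an SSE event whose
     fragment is morphed onto the existing element with the same id:

     event: datastar-patch-elements
     data: elements <div id="results">...</div>
-->
```

Same amount of information overall; it just lives in the markup in one case and in the server handler in the other.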
As far as I understand, the main difference between HTMX and datastar is that HTMX uses innerHTML-swap by default and datastar uses the morph-swap by default, which is available as an extension for HTMX [1].
Another difference is that datastar comes with SSE, which indeed makes it server-driven, but you don't have to use SSE. Also, datastar comes with client-side scripting by default. So you could say datastar = integrated HTMX + idiomorph + SSE + Alpine.
The article stated that he no longer needs eventing to update other parts of the page, he can send down everything at once. So, I guess that is much less complex. Granted, eventing and pulling something down later could be a better approach depending on the circumstance.
Server-side template wrangling is not really a big deal if you use an HTML generation library, something like Python's htpy/FastHTML or JavaScript's JSX. You can easily split the markup into 'components' and combine them trivially with composition.
I mean, in practice you rarely target individual elements in datastar. You can, sure. But targeting the main body with the entirety of the new content is way simpler. Morph sorts out the rest.
A good example is when a page has expensive metrics specific to say a filter on the page. Let's say an action on the page shows a notification count change in the top right corner.
While morph will figure it out, it's unnecessary work on the server to re-evaluate the entire body
Expensive queries on the server should be shared where they can be (e.g. a global leaderboard) or cached on the server (in the game of life demo, each frame is rendered/calculated once, regardless of the number of users). Rendering the whole view gives you batching for free, and you don't have all that overhead tracking what should be updated or changed. Fine-grained updates are often a trap when it comes to building systems that can handle a lot of concurrent users. It's way simpler to update all connected users every X ms whenever something changes.
Yeah, so that was how I used to think about these things. Now I'm less into the fine-grained user updates too.
Partly because the minute you have a shared widget across users, 50%+ of your connected users are going to get an update when anything changes. So the overhead of tracking who should update when you are under high load is just that: overhead.
Being able to make those updates coarse-grained and homogeneous makes them easy to throttle, so changes are effectively batched and you can easily set a max rate at which you push changes.
Same with diffing: the minute you need to update most of the page, the work of diffing is pure overhead.
So in my mind, a simpler coarse-grained system will actually perform better under heavy load in that worst-case scenario, somewhat counterintuitively. At least that's my current reasoning.
"Alpine or Stimulus features were natively included in htmx"
I'm contemplating using HTMX in a personal project - do you know if there are any resources out there explaining why you might also need other libraries like Alpine or Stimulus?
They're for client-side-only features. Think toggling CSS classes, or updating the index on a slider; you ideally don't want to have to hit the server for that
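For example, a class toggle can stay entirely in the browser with Alpine. A minimal sketch from memory of Alpine's syntax (the `hidden` class is assumed to exist in your CSS):

```html
<!-- Alpine: purely client-side state, no server round-trip -->
<div x-data="{ open: false }">
  <button @click="open = !open">Toggle details</button>
  <!-- @click is shorthand for x-on:click, :class for x-bind:class -->
  <p :class="open ? '' : 'hidden'">Only visible when open.</p>
</div>
```

htmx on its own has no equivalent for this kind of interaction, which is why it's commonly paired with Alpine or Stimulus.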
Reminds me a bit of the Seaside framework in Pharo. A lot of what I programmed in Pharo at my previous employer was back and forth between front-end and back-end, because the back-end was managing the front-end state. For B2B apps that don't have a lot of latency requirements, etc., I'd say it's better. For highly scalable B2C apps, though? No.
Not GP, but I would say it's the same reason someone would use React. If you keep your state in a single place, the rest of the app can become very functional and pure. You receive data and transform it (or render it). The actual business logic that manipulates the state can be contained in a single place.
This reduces a lot of accidental complexity. If done well, you only need to care about the programming language and some core libraries. Everything else becomes orthogonal, so the cost of change is greatly reduced.
I would imagine the same arguments apply for Smalltalk, like live coding and an IDE within your production application. So you get some overlap with things like Phoenix LiveView, but more Smalltalk-y.
I assume it had backend scaling issues, but usually backend scaling is over-stated and over-engineered, meanwhile news sites load 10+ MB of javascript.
> if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side
I'm not too strong in frontend, but wouldn't this make for a lighter, faster front end? Especially added up over very many elements?
100%. Datastar just makes the HTML spec support reactive expressions in data-* attributes, that's it. You will become stronger at the web because it just gets out of your way
I don't think the difference would be significant. How many of your HTML elements would become interactive with htmx? There's a limit to how much interaction you can reasonably add on a page. This will also limit the number of new attributes you will introduce in the markup.
Also, by this argument should we leave out the 'href' attribute from the '<a>' tag and let the server decide what page to serve? Of course not, the 'href' attribute is a critical part of the functionality of HTML.
Htmx makes the same argument for the other attributes.
For those of you who don't think Datastar is good enough for realtime/collaborative/multiplayer, and/or think you need any of the PRO features:
These three demos each run on a $5 VPS and don't use any of the PRO features. They have all survived the front page of HN. Datastar is a fantastic piece of engineering.
On both the checkboxes/cells examples there's adaptive view rendering so you can zoom out a fair bit. There's also back pressure on the virtual scroll.
Can you explain how these work? Does the server send small subrectangles of the large grid when the user scrolls to new regions of the grid? Does the browser actually have a two-dimensional array in memory with a billion items, or is there some other data structure?
Yeah, the server only sends what the user is currently looking at, plus a buffer around their view. There's no actual checkbox state on the client. When the user clicks a checkbox, a depress animation starts and a request is made (which the server responds to with no data and a 204). The user then gets the HTML for the next view down a long-lived SSE connection that started when they first loaded the page. Because it's a long-lived connection, it gets really good compression. The same thing happens when the user scrolls. If they scroll far enough, a new view is rendered.
The billion items themselves just live on the backend, stored in a SQLite database.
If I understand the code for these correctly though, you're not actually doing the "idiomatic" datastar thing as the article describes? No diffing/patching of individual elements, just rerendering the entire page?
Tbh that mental model seems so much simpler than any of the other datastar examples I see, with their convoluted client state tracking from the server.
Would you build complex apps this way as well? I'd assume this simple approach only works because the UI being rendered is also relatively simple. Is there any content I can read around doing this "immediate mode" approach when the user is navigating across very different pages with possibly complicated widget states needing to be tracked to rerender correctly?
I mean, Datastar is pretty flexible. I'd say CQRS is pretty idiomatic if you want to do multiplayer/realtime stuff. As you mentioned, once you've set that up, the mental model is much simpler. That being said, the initial setup is more involved than request/response Datastar.
Yes, we are building complex accounting software at work with Datastar and use the same model. "Real UI" is often more complex, but a lot less heavy: fewer divs, less data, fewer concurrent users, etc. compared to these demos. Checkboxes are a lot more div-dense than a list of rows, for example.
I don't use anything from pro and I use datastar at work. I do believe in making open source maintainable though, so I bought the license.
The pro stuff is mostly a collection of footguns you shouldn't use that are a support burden for the core team. In some niche corporate contexts they are useful.
You can also implement your own plugins with the same functionality if you want; it's just going to cost you time instead of money.
I find that devs complaining about paying for things never gets old. A one-off lifetime license? How scandalous! Sustainable open source? Disgusting. Oh, a proprietary AI model built on others' work without their consent that steals my data? Only $100 a month? Take my money!
I don't think the article does a good job of summarising the differences, so I'll have a go:
* Datastar sends all responses using SSE (Server-Sent Events). Usually SSE is employed to allow the server to push events to the client, and Datastar does this, but it also uses SSE encoding of events in response to client-initiated actions like clicking a button (clicking the button sends a GET request and the server responds with zero or more SSE events over a time period of the server's choosing).
* HTMX, by contrast, supports SSE as one of several extensions, and only for server-initiated events. It also supports WebSockets for two-way interaction.
* Datastar has a concept of signals, which manages front-end state. HTMX doesn't do this and you'll need AlpineJS or something similar as well.
* HTMX supports something called OOB (out-of-band) swaps, where you can pick out fragments of the HTML response to be patched into various parts of the DOM using the ID attribute. In Datastar this is the default behaviour.
* Datastar has a paid-for Pro edition, which is necessary if you want certain behaviours. HTMX is completely free.
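For anyone unfamiliar with OOB, here's a sketch based on htmx's documented hx-swap-oob attribute (details may vary by version). A single HTML response can carry extra fragments that get patched in by id:

```html
<!-- Server response to an htmx request: the first element replaces
     the declared target as usual... -->
<tr><td>New row</td></tr>

<!-- ...while this extra fragment is swapped "out of band" into whatever
     existing element already has id="notification-count" -->
<span id="notification-count" hx-swap-oob="true">3</span>
```

Datastar's morph-by-id behaviour achieves the same effect without a dedicated attribute.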
I think the other differences are pretty minor:
* Datastar has a smaller library footprint, but both are tiny to begin with (11kb vs 14kb), which is splitting hairs.
* Datastar needs fewer attributes to achieve the same behaviours. I'm not sure about this; you might need to customise the behaviour, which requires more and more attributes, but again, it's not a big deal.
As someone on the sideline who's been considering HTMX, its alternatives and complements, this was a helpful comment! Even without having used any of it, I get the feeling they're going in the right direction, including HTMX author's humorous evangelism. If I remember correctly he also wrote Grug, which was satire and social criticism of high caliber.
D* doesn't only use SSE. It can do normal HTTP request-response as well. And SSE can return 0, 1, or infinitely many events too.
Calling datastar's pro features "necessary" is a bit disingenuous; they literally tell people not to buy it because those features, themselves, are not actually necessary. They're just bells and whistles, and some are actually a bad idea (in their own words).
Datastar is 11kb and that includes all of the htmx plugins you mentioned (sse, idiomorph) and much more (all of alpine js, essentially).
As someone who wants to write open source but needs to be able to capture some financial value from doing that to be able to make it sustainable, what model do you prefer?
My current thoughts lean towards a fully functional open source product with a HashiCorp style BSL and commercial licensing for teams above a size threshold.
I think the open core model is fine, and the most financially sustainable. Just be up front about it from day 1. I don't think the honor system for licensing will get you the results you're wanting.
it depends strongly on why you want to write open source. if you like the idea of putting source code out into the world for other people to use and benefit from then go ahead and use whatever mix of open source and proprietary code you like, just be up front that that's what you are doing.
if you want to promise open source software simply to attract the mindshare and users who habitually ignore anything that isn't open source, trying to capture financial value may well be infeasible unless some rare confluence of stars lines up for you. the key is in the word "capture" - capturing the value implies making sure it goes to you rather than to someone else, and that means imposing restrictions that will simply piss those same users off.
I can't imagine that works very well for relatively small, simple, functional, or intuitive projects though. Incentive-wise, is it possible to sell reverse support: extracting payment for all the times the product works so well that support isn't needed?
It will be, eventually, unless a maintainer is able to maintain during the day. It doesn't matter what the source of free time is however: retired, rich, runs a company from their open source project, paid by somebody else, etc., but full time job + open source maintainer = dead project, eventually.
I don't think open issues is a fair way to judge project liveness. TypeScript also has hundreds of open issues going back years with no traction. Is TypeScript dead?
Yes, issues that are years old show me the commitment level. Not a knock against HTMX but a clear sign of priorities. Carson is free to meme all day and talk about other projects. It's very clear where he stands and that's fine
this year I created and released fixi.js, created the montana mini computer (https://mtmc.cs.montana.edu), published a paper on hypermedia via the ACM, got hyperscript to 1.0, released 3 versions of htmx, reworked all the classes that I teach at montana state and am planning on releasing a java-based take on rails that I'm building for my web programming class
i am also the president of the local youth baseball program and helped get BigSkyDevCon over the hump
i think you'd be surprised at how little time i actually spend on twitter
as always, my issue is never with how you spend your time. you are a giver of gifts and I wish more people that relied on HTMX stepped up to make it better. in no way should anything be expected of you. How you spend your time is obviously your call. MIT is MIT
It was a rhetorical question; the answer is no, old issues with no updates don't necessarily indicate anything about the health of the project. Different people have different project management styles. You use your style for your project, and Carson uses his for htmx. There's no one correct way to manage an issue backlog.
source is MIT, do what you want. The team found certain plugins to be anti-patterns and support burdens. You can find the old plugins in the repo source, feel free to fork from there!
I just came from writing a comment on the other Datastar post on the home page, literally saying that I don't see the point of it and that I don't like it.
But I'm now here to defend Datastar.
It's their code, which, up to now, they built and literally gave away totally for free, under an MIT license. Everything (even what "they moved to the Pro tier") should still be free and under the MIT license it was originally published under.
You just decided to rely on it and freeload (as far as I can tell, you never contributed to the project).
You decided to rely on a random third party that owns the framework. And now you're outraged because they've decided that from now on, future work will be paid.
The software was released as a free version, with NO expectation that it would go commercial.
The fact that they switched to a paid version, stripping features out of the original free version, is called a "bait and switch".
If OP had known in advance, he would have been informed about this and the potential $299 price tag. And he would have been able to make an informed decision BEFORE integrating the code.
> You just decided to rely and freeload (as, as far as I can tell, you never contributed to the project).
But you complain about him being a freeloader for not contributing to the project. What a ridiculous response.
I feel like you never even read the post and are making the assumption that OP is a full-time programmer.
Datastar can do whatever they want, it's their code. But calling out a *bait and switch* does not make OP the bad guy.
Yeah, I agree, it's over the top. I'm just matching the over-the-top language of the original post, which pretty much calls the Datastar devs "disgraceful" and to "f them".
I did read the post. I know OP is not a programmer. And that makes it even worse: OP has the audacity to say they "make no money from the project" while it is a scheduling tool for their presumably plenty money-making clinic.
It would in fact be less shocking if they were a programmer doing a side project for fun.
This piece is not a rational, well-tempered article. It's a rant by someone who took something that was free, is now outraged, and is saying fuck you to those who made their project possible in the first place, without even understanding how licenses work or being aware that the code they relied on is still there, on GitHub, fully intact and available to them.
These sorts of people not only want to get it for free. They want the code to be maintained and improved for free in perpetuity.
it's not bait and switch; main has the features we are willing to continue to support, given we did a whole rewrite, and this is what we think you should use. Don't like it? Fork it, the code is still there. I hope your version is better!
> it's not bait and switch; main has the features we are willing to continue to support, given we did a whole rewrite, and this is what we think you should use. Don't like it? Fork it, the code is still there. I hope your version is better!
It sounds like you are the dev of Datastar...
Let me give one piece of advice. Drop the attitude, because this is not how you interact in public as the developer of a paid piece of software.
You can get away with a lot when it's a free/hobby project, but the moment you request payment, there is a requirement for more professionalism. The reactions I am reading will trigger responses that will hurt your future paycheck. You're already off to a bad start with this "bait and switch"; do not make it worse.
I really question your future client interactions, if they criticize your product(s) or practices.
> I hope your version is better!
No need for Datastar, my HTMX "alternative" has been in production (with different rewrites) over 20 years. So thank you for offering, but no need.
I'll certainly defend d*'s right to do what they did, but the wisdom of doing so is going to come into question as soon as they reject a PR because it contains a feature that's in Pro. I don't think people who are concerned about that deserve to be called "freeloaders", but I guess a fork is a way out of such acidic rhetoric too.
D* has a core, which is open and will be set in stone soon when v1 is released, with the expectation that it'll barely, if ever, change again.
The rest is plugins, which anyone can write or modify. There's no need for the plugins to get merged upstream - just use them in your project, and share them publicly if you want. You could even do the same with the pre-pro versions of the pro plugins - just make the (likely minor) modifications to make them compatible with the current datastar core.
They're also going to be releasing a formal public plugin api in the next release. Presumably it'll be even easier to do all of this then.
Sounds like they put some real thought into it then, which is good news. I was picturing two different core distributions, which would create the sort of conflict I was imagining, but as long as core does stay maintained, it seems likely that fear will stay imaginary.
As I answered somewhere else, the over-the-top freeloader term I think is justified because OP clearly expects not only to benefit from the work already available, freely, but also to be entitled, for free, to any work and improvement that comes in the future.
This is nonsensical. Someone did something for free. Fantastic. They used it, successfully, for a production system that enables scheduling for their job.
Nobody took that away from them. They didn't force them to rebuild their tool.
The code is even there, in the git history, available for them.
If OP doesn't like what the devs decided to do with the project, just move on or fork and pay someone to help you fix any outstanding bugs or missing features.
The “outrage” is literally just people saying they’ll use a different project instead. Why would they ever fork it? They don’t like the devs of datastar they don’t want to use it going forwards. Yes the developers are allowed to do what they want with their code and time, but people are allowed to vote with their feet and go elsewhere and they are allowed to be vocal about it.
The comment is tongue in cheek. On the Discord it was discussed at length, and some of the plugins in the Pro version were actually considered anti-patterns. It actually is kinda easy to complicate things needlessly when getting used to D*, and I know I did this too in the beginning.
As was said by the commenter in another reply, the inspector is actually the bit that makes the Pro version much more appealing but most people wouldn't know from the sidelines.
Arguably that's good though - for the project. It means it's not a bait and switch like many have claimed. You can build pretty much anything with regular Datastar.
I thought the devs' emphatic assertions in their Discord NOT to buy Datastar Pro was a psyop dark pattern. I bought it to spite them, and barely use any of it. I want my css-in-js back!
Sorry, yes it was sarcasm (I should have indicated that explicitly). I'm happy to fund a tool that I really enjoy using, even if I don't use any of the PRO features.
Datastar always rubbed me the wrong way. The author was constantly pushing it in the HTMX discord, telling anyone who would listen that if they liked HTMX how great Datastar would be for them. Some pretty classy comments from them on reddit too:
> It was a full rewrite. Use the beta release forever if it has all the tools you need. No one is stopping you.
> Open source doesn't owe you anything and I expect the same back.
> The author was constantly pushing it in the HTMX discord, telling anyone who would listen that if they liked HTMX how great Datastar would be for them
Agreed, nothing unclassy there. People have this strange expectation that an open source project is out there to serve every single person using it with total attention. It's not; feel free to fork the beta and use it forever, and make your own changes. The pro tier cost is a pittance for anyone using it for profit.
Going to vouch for this. Why does it matter what other people do? This is such a non issue, you are free to fork it and do your own work. I actually believe more open source repos should tastefully have paid tiers to help pay for the continued work.
Which has since been archived. Last post from the "Datastar CEO". I mean, c'mon, it's a little cringe. That meme is funny when it's about HTMX. At least try your own memes instead of riding on Carson's sense of humor too.
It's also pretty shady that no mention is made of Datastar Pro on the home page [1]. You might well be well on the way to integrating Datastar into your website before you stumble across the Pro edition, which is only mentioned on the side bar of the reference page [2].
Isn't that only a problem if it advertised pro features there without mentioning the fact that they're paid? If it didn't then you could just be happy with the free features, no?
I'd expect it to make explicit that this is a freemium product, with free features and paid features. Nothing on the home page indicates as much.
If they aren’t leading to expect that they have the paid features for free, how is offering them for money any different from just not offering those features at all?
It's not like your existing use cases stop working past 10 users or something.
if a feature I want is in the paid product then I assume there's less chance of it being added to the free version. every feature has to go through a process to decide if it's paid or free.
If there's money to be made the possibility that the feature will ever exist at all goes way up. I'd rather have the ability to pay for a feature if I decide I need it than to hope some maintainer gets around to building it for free.
They've said that the feature they put in the premium product are the features they don't want to build or maintain without being paid to do so.
Yeah, cool, I think this is the point. People want to get paid for the work they produce, and the dynamic in open source is known, not even quietly, to be unsustainable.
I like the communal aspect of open source, but I don't like overly demanding and entitled freeloaders. I've had enough of that in my well-paid career over the last decade.
This way of getting paid may or may not resonate, but I applaud the attempt to make it work.
Because the incentive is now there. Maybe they don't get enough paying customers and want more money. That puts a bit of pressure on moving a new feature that is really handy into the paid tier. Then another, and another. It might not happen, but it could.
Most people using Datastar will not necessarily be savvy enough to fork it and add their own changes. And when Datastar makes a new release of the base/free code, people will want to keep up to date. That means individuals have to figure out how to integrate their existing changes into the new code and keep doing so. It's not a matter of if something breaks your custom code but when.
Finally, many people internalize time as money with projects like this. They're spending many hours learning to use the framework. They don't want to have the effort made useless when something (ex: costs or features) changes outside of their control. Their time learning to use the code is what they "paid" for the software. Doesn't matter if it's rational to you if it is to them.
I'm only working in local dev right now, so I've got the pro version and inspector going. When I get to prod, perhaps this will be a problem.
Yet surely this could just be toggled with an env var or DB setting or something? If dev, include the pro and inspector components. If prod, use the free version (or a custom bundle that only has what you need).
Finally someone is speaking truth to power. These registered non-profits that release their code for free and their leisure time for support need to be knocked down a notch.
We all know they are evil. But you know the most evil thing? That code that was previously released under a free license? Still sneakily on display in the git history, like the crown jewels in the Tower of London. Except instead of armed guards defending the code that wants to be free once more, it's hidden behind arcane git commands. Name me a single person who knows how to navigate git history. I'm waiting. Spoiler alert: I asked Claude and they don't exist.
Sure, but this person is a doctor (or similar) who took time to learn to code this form up to better serve their patients. They are most likely blessedly ignorant of software licenses and version control.
As I read it the op said, "I don't like how they changed this license, this is a bad direction and I didn't think there was adequate transparency."
And your rebuttal is, "Well you can always recover the code from the git history?"
I mean, this is true, but do you think this really addresses the spirit of the post's complaint? Does mentioning they're a non-profit change anything about the complaint?
The leadership and future of a software project is an important component in its use professionally. If someone believes that the project's leadership is acting in an unfair or unpredictable way then it's rational and prudent for them to first express displeasure, then disassociate with the project if they continue this course. But you've decided to write a post that suggests the poster is being irrational, unfair, and that they want the project to fail when clearly they don't.
If you'd like to critique the post's points, I suggest you do so rather than straw manning and well-poisoning. This post may look good to friends of the project, but to me as someone with only a passing familiarity with what's going on? It looks awful.
Oh I did. I got rid of it. Inspiring both constant censure and the kind of response you're giving drove me to despair.
I don't write things for public consumption now.
But we're not talking about me or the post. We're talking about your refusal to engage with the implications of what the project did.
I don't care what Datastar does. I'd never use Datastar. Looks like exactly what I don't need. They can certainly govern their product as they see fit.
But I've disassociated from projects for less egregious unannounced terms changes. And I've never had that decision come out for the worst, only neutral or better.
It's good to know; having replace-url functionality behind the paywall is likely to be a deal-killer. I can't help but think that this "freemium" model is really going to kill datastar's prospects of taking off; at best it's likely to result in a fork which ends up subsuming the original project.
That said, the attitude of the guy in the article is really messed up. Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free -- it's entitled to a toxic degree, and poisons the well for anyone else who may want to do something open-source.
It's more like the mouse saying fuck you to the trap holding the cheese. It's not that the mouse isn't grateful for the free cheese. It's just the mouse understands the surrounding context.
> Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free
I don't have a problem, on principle, with paywalling new features. I don't like it, but I don't think it's bad behaviour.
Putting up a paywall around features that were previously free, however, I do take issue with. It's deceptive and it's not common practice. It tricks people into becoming invested and then holds hostage the features that they've become invested in using. Frankly, fuck that.
I'm normally not one to discourage anyone from open-source; but if toxic entitlement is going to get you this worked up, you might consider whether it's really your thing. The more successful you are the more you're going to encounter.
On the latter point, couldn't disagree more. He's saying "fuck you" to the product, not the person, and unilaterally removing extant features to paywall them imo is poisoning the well far more than a simple FU to a developer ever could?
We didn't remove the features; if you want to use the old ones, they're still there in the repo. We just didn't want to support the old way of doing them when we actively tell people not to use them. If you're going to be a support burden going forward, we want you to have skin in the game. If not, cool, do it yourself; no one's going to get mad at you.
In any hypothetical open source project I make from now on where I am the owner and sole director, I'll just get rid of features entirely if they cause an undue support burden (which, as the datastar dev has said up and down both threads, is what happened here), specifically to avoid comments like yours.
Seems to fit your world view better, and then I can just leave those people high and dry with much less concern!
Blog post author here. I never expected my blog post to get this much attention. I was emotional when I wrote it because I had spent a couple of weeks rewriting a service for my own use, and the service was almost completely migrated from htmx to datastar.
I was facing a situation where I either needed to stick with the beta or pay for the pro version, as I was using the replace-url function a lot.
I felt emotionally betrayed. I went to the datastar reddit thread to ask whether more of the free features that I relied on would be stripped out and put behind the paywall. I was fine with converting my service to purely free-tier features, and once my service was stable and usable, I was very willing to buy a pro license.
But you know what? The datastar author jumped in and made two points. He said the release version of datastar is a full rewrite, and if I am not paying, I could stay on the beta or fork it. And in the open source world, he owed me nothing. Very legit points.
However, the real reason behind that "fuck you" statement is that I was attacked by datastar discord members multiple times. In one of the humiliating replies I got, that person said someone in the discord server had told them to show support for datastar. Instead of supporting, they just mocked me and called me a troll, as if I were an obstacle to their potential success: multiple people, multiple times.
I noticed some comments in the thread said that I don't know how to use version control, or that I'm ignorant of software licensing. Well, I do use version control and occasionally contribute to open source projects. I am a doctor; I may not be as skillful as you all, but I do know some basics of programming.
Our Discord is generally a friendly place, but not the nicest. If you can't back up your ideas or defend your code with metrics, you are gonna have a bad time. We help those that help themselves. IIRC you were forcefully telling us how things should work so it'd be more like HTMX. We tend to go tit for tat, so go back and look at whether we were actively dissuading you from bad ideas.
Is the greedy developer in the title the one who wants the 3rd-party library for free without contributing, or the developer who wrote said library and is asking for compensation?
The problem is that the developer of datastar did a bait and switch: releasing the beta for free, and then moving features into a pro version with a price tag.
Nothing wrong with people making money on their software, but you need to make it clear from the start that it will be paid software, and in what price range.
Bait and switch is often used to get people to use your software: you spend time on it, and then if you need a Pro feature, well, fork over or rework your code again. So you're paying with your time or your money. This is why it's nasty and gets people riled up.
It's amazing how many people are defending this behavior.
Correct me if I am wrong here, but what you had for free, you still have for free; since it's an MIT license, what you cloned initially is still "yours".
Is the problem that one needs to fork / maintain the code from now on? Is the problem that one wants free support on top of the free library?
I'm not opposed to open source projects placing features that realistically only large/enterprise users would use behind a paywall, i.e. the open core model. When done fairly, I think this is the most sustainable way to build a business around OSS[1]. I even think that subscriptions to such features are a fair way of making the project viable long-term.
But if the project already had features that people relied on, removing them and forcing them to pay to get them back is a shitty move. The right approach would've been to keep every existing feature free, and only commercialize additional features that meet the above criteria.
Now, I can't say whether what they paywalled is a niche/pro feature or not. But I can understand why existing users wouldn't be happy about it.
if we're talking about something immense, like redis, you might have a point. But we're talking about a few hundred lines of simple javascript that are still available to fork and update to be compatible with the new API. The fact that no one has done such a simple thing yet means this is a non-issue
The thing is there's not much practical difference for users. They might not be aware that it's only a few hundred lines of code, and it really doesn't matter. The point is that they were depending on a software feature one day, and the next they were asked to pay for it. That's the very definition of a rugpull. Whether it's a few hundred lines of code, several thousand, or the entire product, the effect is the same.
Forking is always an option, of course, but not many people have the skills nor desire to maintain a piece of software they previously didn't need to. In some cases, this causes a rift in the community, as is the case for Redis/Valkey, Terraform/OpenTofu, etc., which is confusing and risky for users.
All of this could've been avoided by keeping all existing features freely available to everyone, and commercializing new value-add features for niche/enterprise users. Not doing that has understandably soured peoples' opinion of the project and tarnished their trust, as you can see from that blog post, and comments on here and on Reddit. It would be a mistake to ignore or dismiss them.
One other comment though: a lot of what you said rests upon the notion that people were relying on these features.
First, barely anyone used datastar at that point, and those features were particularly arcane. So, the impact was minimal.
Second, it's likely that even fewer of them contributed anything at all to the project in general, or to those features in particular. What claim do they have to anything, especially when it was just freely given to them, and not actually taken away (the code is still there)?
And to the extent that they can't or won't fix it themselves, what happens if the dev just says "I'm no longer maintaining datastar"? You might say "well, at least he left them something usable", but how is that any different from considering the pro changes to just be a fork? In essence, he forked his own project; why does anyone have any claim to any of that?
Finally, if they can't fix it themselves (especially when AI could almost certainly fix it rapidly), should they really be developing anything?
In the end, this really is a non-issue. Again, most of the furor is quite clearly performative. It's like when DHH removed TypeScript from one of his projects that he and his company maintain, and people who have nothing to do with Ruby came out of the woodwork to decry the change in his GitHub repo. And even if they do have something to do with Ruby, they have no say over how he writes his code.
> a lot of what you said rests upon the notion that people were relying on these features.
They were, though. The blog post linked above, and several people in the Reddit thread linked in the blog post mentioned depending on these features.
We can disagree about whether it matters that a small percentage of people used them, but I would argue that even if a single person did, a rugpull is certainly a shitty experience for them. It also has a network effect, where if other people see that developers did that, they are likely to believe that something similar in the future can happen again. Once trust is lost, it's very difficult to gain it back.
> Second, its likely that even fewer of them contributed anything at all to the project in general, and those features in particular. What claim do they have to anything - especially when it was just freely given to them, and not actually taken away (the code is still there)?
I think this is a very hostile mentality to have as an OSS developer. Delaney himself expressed something similar in that Reddit thread[1]:
> I expect nothing from you and you in turn should expect nothing from me.
This is wrong on many levels.
When a software project is published, whether as open source or otherwise, a contract is established between developers and potential users. This is formalized by the chosen license, but even without it, there is an unwritten contract. At a fundamental level, it states that users can expect the software to do what it advertises to do. I.e. that it solves a particular problem or serves a particular purpose, which is the point of all software. In turn, at the very least, the developer can expect the project's existence to serve as an advertisement of their brand. Whether they decide to monetize this or not, there's a reason they decide to publish it in the first place. It could be to boost their portfolio, which can help them land jobs, or in other more direct ways.
So when that contract is broken, which for OSS typically happens by the developer, you can understand why users would be upset.
Furthermore, the idea that because users are allowed to use the software without any financial obligations they should have no functional expectations of the software is incredibly user hostile. It's akin to the proverb "don't look a gift horse in the mouth", which boils down to "I can make this project as shitty as I want to, and you can't say anything about it". At that point, if you don't care about listening to your users, why even bother releasing software? Why choose to preserve user freedoms on one hand, but on the other completely alienate and ignore them? It doesn't make sense.
As for your point about the code still being there, that may be technically true. But you're essentially asking users to stick with a specific version of the software that will be unmaintained moving forward, as you focus on the shiny new product (the one with the complete rewrite). That's unrealistic for many reasons.
> And to the extent that they can't or wont fix it themselves, what happens if the dev just says "im no longer maintaining datastar anymore"?
That's an entirely separate scenario. If a project is not maintained anymore, it can be archived, or maintenance picked up by someone else. Software can be considered functionally complete and require little maintenance, but in the fast moving world of web development, that is practically impossible. A web framework, no matter how simple, will break eventually, most likely in a matter of months.
> Finally, if they cant fix it themselves (especially when AI could almost certainly fix it rapidly), should they really be developing anything?
Are you serious? You expect people who want to build a web site and move on with their lives to dig into a foreign code base, and fix the web framework? It doesn't matter how simple or complex it is. The fact you think this is a valid argument, and additionally insult their capability is wild to me. Bringing up "AI" is laughable.
> Again, most of the furor is quite clearly performative.
Again, it's really not. A few people (that we know of) were directly impacted by this, and the network effect of that has tarnished the trust other people had in the project. Doubling down on this, ignoring and dismissing such feedback as "performative", can only further harm the project. Which is a shame, as I truly do want it to gain traction, even if that is not the authors' goal.
Anyway, I wish you and the authors well. Your intentions seem to come from the right place, but I think this entire thing is a misstep.
The sibling comment already thoroughly addressed all of this, so there's no need for me to do so, other than to say that, despite your good intentions, you don't seem to have even the slightest understanding of open source.
> despite your good intentions, you don't seem to have even the slightest understanding of open source
Please. Resorting to ad hominem when you don't have good arguments against someone's opinion is intellectually lazy.
> At no point does it say anything like "I am obliged to maintain this for you forever, or even at all, let alone to your liking"
I'm well familiar with most OSS licenses. I never claimed they said this.
My point was about an unwritten social contract of not being an asshole. When you do a public deed, such as publishing OSS, and that project gains users, you have certain obligations to those users at a more fundamental level than the license you chose, whether you want to acknowledge this or not.
When you ignore and intentionally alienate users, you can't be surprised when you receive backlash for it. We can blame this on users and say that they're greedy, and that as a developer you're allowed to do whatever you want, because—hey, these people are leeching off your hard work!—but that's simply hostile.
The point of free software is to provide a good to the world. If your intention is to just throw something over the fence and not take users into consideration—which are ultimately the main reason we build and publish software in the first place—then you're simply abusing this relationship. You want to reap the benefits of exposure that free software provides, while having zero obligations. That's incredibly entitled, and it would've been better for everyone involved if you had kept the software private.
There's literally no ad hominem where you claimed there was. That itself is ad hominem.
I'll go further this time - not only do you not understand open source licensing or ecosystem even slightly, but it's genuinely concerning that you think that someone sharing some code somehow creates "a relationship" with anyone who looks at it. The point of free software is free software, and the good to the world is whatever people make of that.
Again, the only people who seem to be truly bothered by any of this are people who don't use datastar.
Don't use it. In fact, I suspect that the datastar maintainers would prefer that you, specifically, don't use it. Use it to spite them! We don't care.
I also retract my statement about you having good intentions/communicating in good faith. I won't respond to you again.
> the only people who seem to be truly bothered by any of this are people who don't use datastar.
Yeah, those silly people who were previously interested in Datastar, and are criticizing the hostility of how this was handled. Who cares what they think?
> Don't use it. We don't care. In fact, I suspect that the datastar maintainers would prefer that you, specifically, don't use it.
Too bad. I'll use it to spite all of you!
> I also retract my statement about you having good intentions/communicating in good faith.
> a rugpull is certainly a shitty experience for them
It would certainly be a shitty experience, if there actually was a rugpull, which there was not. People who were using the version of Datastar that had all those features are still free to keep using that version. No one is taking it away. No rug was pulled.
> a contract is established between developers and potential users
Sorry, but no. The license makes this quite clear–every open source license in the world very explicitly says 'NO WARRANTY' in very big letters. 'No warranty' means 'no expectations'. Please, don't be one of those people who try to peer-pressure open source developers into providing free software support. Don't be one of the people who says that 'exposure' is a kind of payment. I can't put food on my table with 'exposure'. If you think 'exposure' by itself can be monetized, I'm sorry but you are not being realistic. Go and actually work on monetizing an open source project before you make these kinds of claims.
> why even bother releasing software?
Much research and study is not useful for many people. Why even bother doing research and development? Because there are some who might find it useful and convert it into something that works for themselves. Open source software is a gift. The giving of the gift does not place obligations on the giver. If you give someone a sweater, are you expected to keep patching it whenever it develops holes?
> If a project is not maintained anymore, it can be archived, or maintenance picked up by someone else.
Then why can't it be maintained by someone else in the case of using the old free version?
> A web framework, no matter how simple, will break eventually, most likely in a matter of months.
Sure, the ones that depend on a huge npm transitive dependency cone can. But libraries or frameworks like htmx and Datastar are not like that, they are single <script> files that you include directly in your HTML. There is no endless treadmill of npm packages that get obsoleted or have security advisories all the time.
> You expect people who want to build a web site and move on with their lives to dig into a foreign code base, and fix the web framework?
Well...ultimately, if I use some open source software, I am actually responsible for it. Especially if it's for a commercial use case. I can't just leech off the free work of others to fix or maintain the software to my needs. I need to either fix my own issues or pay someone to do it. If the upstream project happens to do it for me, I'm in luck. But that's all it is. There is ultimately no expectation that open source maintainers will support me for free, perpetually, when I use their software.
> A few people (that we know of) were directly impacted by this
What impact? One guy blogged that just because there are some paid features, it automatically kills the whole project for him. There's no clear articulation of why exactly he needs those exact paid features. Everything else we've seen in this thread is pile-ons.
> Doubling down on this, ignoring and dismissing such feedback as "performative"
Aren't you doing the same thing? You have been ignoring and dismissing the feedback that this is actually not that big of a deal. Why do you think that your opinion carries more weight than that of the actual maintainers and users of the project?
The open core part of the project was removed from NPM; it's available only on GitHub.
There are no published plugins from the community, nor is there a repo where the community could have collaborated on OSS additions/plugins.
Are people being entitled in expecting it? Yes.
Is there something stopping people from taking up this work and creating a repo? No.
But it is illustrative of the attitude of the owners.
The point is not to accuse anyone of a rug pull, but to ask how confident the community can be in taking a dependency on such a project. The fact that the lead dev had to write an article responding to misunderstandings reflects how the community feels about this.
The argument on their Discord for licensing for professional teams is "contact us for pricing", which apparently depends on the number of employees in the company, including non-tech folks.
> People who were using the version of Datastar that had all those features are still free to keep using that version.
Why are you ignoring my previous comment that contradicts this opinion?
> No one is taking it away. No rug was pulled.
When Redis changed licenses to SSPL/RSAL, users were also free to continue using the BSD-licensed version. Was that not a rug pull?
In practice, it doesn't matter whether the entire project was relicensed, or if parts of it were paywalled. Users were depending on a piece of software one day, and the next they were forced to abide by new terms if they want to continue receiving updates to it. That's the very definition of a rug pull. Of course nobody is claiming that developers physically took the software people were using away—that's ridiculous.
> Sorry, but no. The license makes this quite clear
My argument was beyond any legal licensing terms. It's about not being an asshole to your users.
> I can't put food on my table with 'exposure'.
That wasn't the core of my argument, but you sure can. Any public deed builds a brand and reputation, which in turn can lead to financial opportunities. I'm not saying the act of publishing OSS is enough to "put food on your table", but it can be monetized in many ways.
> Open source software is a gift. The giving of the gift does not place obligations on the giver. If you give someone a sweater, are you expected to keep patching it whenever it develops holes?
Jesus. There's so many things wrong with these statements, that I don't know where to start...
OSS is most certainly not a "gift". What a ridiculous thing to say. It's a philosophy and approach of making computers accessible and friendly to use for everyone. It's about building meaningful relationships between people in ways that we can all collectively build a better future for everyone.
Seeing OSS as a plain transaction, where users should have absolutely no expectations beyond arbitrary license terms, is no better than publishing proprietary software. Using it to promote your brand while ignoring your users is a corruption of this philosophy.
> Then why can't it be maintained by someone else in the case of using the old free version?
I addressed this in my previous comment.
> Sure, the ones that depend on a huge npm transitive dependency cone can. But libraries or frameworks like htmx and Datastar are not like that
Eh, no. Libraries with less dependencies will naturally require less maintenance, but are not maintenance-free. Browsers frequently change. SDK language ecosystems frequently change. Software doesn't exist in a vacuum, and it is incredibly difficult to maintain backwards compatibility over time. Ask Microsoft. In the web world, it's practically impossible.
> What impact? One guy [...]
Yeah, fuck that guy.
> Everything else we've seen in this thread is pile-ons.
Have you seen Reddit? But clearly, everyone who disagrees is "piling on".
> Aren't you doing the same thing? You have been ignoring and dismissing the feedback that this is actually not that big of a deal. Why do you think that your opinion carries more weight than that of the actual maintainers and users of the project?
Huh? I'm pointing out why I think this was a bad move, and why the negative feedback is expected. You can disagree with it, if you want, but at no point did I claim that my opinion carries more weight than anyone else's.
> Why are you ignoring my previous comment that contradicts this opinion?
Because it doesn't contradict it, it just disagrees with it. Because what actual argument did you have that people using an old version of the software can't keep using it? The one about things constantly breaking? On the web, the platform that's famously stable and backward-compatible? Sorry, I just don't find that believable for projects like htmx and Datastar which are very self-contained and use basic features of the web platform, not crazy things like WebSQL for example.
> When Redis changed licenses to SSPL/RSAL, users were also free to continue using the BSD-licensed version. Was that not a rug pull?
Firstly, there are tons of people on old versions of Redis who didn't even upgrade through all that and weren't even impacted. Secondly, Redis forks sprang up almost immediately, which is exactly what you yourself said was a viable path forward in an earlier comment–someone new could take over maintaining it. That's effectively what happened with Valkey.
> My argument was beyond any legal licensing terms.
And my argument is that there is no 'beyond' legal licensing terms, the terms are quite clear and you agree to them when you start using the software. In your opinion should it be standard practice for people to weasel their way out of agreed license terms after the fact?
> Any public deed builds a brand and reputation, which in turn can lead to financial opportunities.
Notice that you're missing quite a lot of steps there, and even then you can only end with 'can lead' to financial opportunities. Why? Because there's no guarantee that anyone will be able to monetize exposure. No serious person would claim that that uncertain outcome constitutes any kind of 'contract'. Anyone who does should be rightly called out.
> It's about building meaningful relationships between people in ways that we can all collectivelly build a better future for everyone.
Then by your own logic shouldn't everyone contribute to that effort? Why is it that only the one guy who creates the project must bear the burden of maintaining all of it in perpetuity?
> Seeing OSS as a plain transaction
Isn't that what you are doing by claiming that OSS is about providing software in exchange for exposure?
> Yeah, fuck that guy.
The guy who didn't even explain what exactly he lost by not being able to use the new paywalled features? The guy who likely was not impacted at all, and was just ranting on his blog because he didn't like someone monetizing their own project? You want us to take that guy seriously?
> everyone who disagrees is "piling on".
Everyone who disagrees? Yeah. Anyone who provides a coherent argument about exactly what they are missing out on by not being able to afford the paid version? I would take them seriously. I haven't seen anyone like that here.
I've made similar points to the maintainers. It is what it is at this point.
But, honestly, to the people who actually understand, like and use Datastar, none of this matters. Most of the outrage is performative, at best - as can be seen by the pathetically superficial quality of the vast majority of criticisms in threads like this.
Frankly, if people can't/won't see that the devs are very clearly not VC rugpull assholes, and that the vast majority of the functionality is available for free, then they're probably also the sorts of people who aren't a good fit for something that is doing a great rethink of web development. The devs very explicitly are not trying to get rich (nor can they, due to the 501c3!) nor do they want this to be something massive - they're building it for their own needs, first and foremost, and for those who understand that vision.
I tried to understand this, but it seems like a non-native English speaker met an LLM and used it to create a blog post. Can someone please explain why this exists?
People can develop open source equivalents you know, you're not required to use the pro version to get a certain feature. From my understanding, datastar was designed to be entirely modular and extensible.
> I had a running service written in htmx for some time. It is a clinic opening hour service to inform my patients when I will be available in which clinic. (Yes, I am not a programmer, but a healthcare professional.)
-> that was pretty freaking cool to read, loved it
also chuckled at the idea of my website-making health professional going all "What the fuck." in front of his codebase.
What was taken from you? Point to the source history that's been removed, please. It's funny that stuff like this means people won't ever develop in the open. Hope that makes y'all happy.
I was looking for a tool to follow along with signal patches and was a bit disappointed to see the inspector is under "pro"- that and the query string sync are the two nice-to-haves.
and you're within your rights to fork the pre-pro versions of the now-pro plugins, update them to be compatible with the current version of the open-source core (a surely trivial task), and share them with the world. You can call your plugin pack d-free
I'm not referring to the language barrier; I live in a place where I write and speak at a juvenile level. I'm referring to the very low quality of thinking on display in the article, and in this reply. (English does not belong to the US of A. And, moreover, the level of literacy in that country is nothing to envy.)
Basically, the HTMX code says: "when this span is clicked, fetch /rebuild/status-button, extract the #rebuild-bundle-status-button element from the returned HTML, and replace the existing #rebuild-bundle-status-button element with it".
The Datastar code instead says: "when this span is clicked, fetch /rebuild/status-button and do whatever it says". Then, it's /rebuild/status-button's responsibility to provide the "swap the existing #rebuild-bundle-status-button element with this new one" instruction.
If /rebuild/status-button returns a bunch of elements with IDs, Datastar implicitly interprets that as a bunch of "swap the existing element with this new one" instructions.
This makes the resulting code look a bit simpler since you don't need to explicitly specify the "target", "select", or "swap" parts. You just need to put IDs on the elements and Datastar's default behavior does what you want (in this case).
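Side by side, the difference looks roughly like this. This is a sketch based on the example above: the attribute names are the real htmx and Datastar (v1) APIs, but the endpoint, ID, and markup are illustrative.

```html
<!-- htmx: the element carries the full swap instruction -->
<button hx-get="/rebuild/status-button"
        hx-select="#rebuild-bundle-status-button"
        hx-target="#rebuild-bundle-status-button"
        hx-swap="outerHTML">
  Rebuild
</button>

<!-- Datastar: the element only fires the request; the response decides what to patch -->
<button data-on-click="@get('/rebuild/status-button')">
  Rebuild
</button>

<!-- Server response for the Datastar case: a matching id is enough,
     Datastar morphs it over the existing element by default -->
<button id="rebuild-bundle-status-button">Rebuilding…</button>
```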
Note that for this example you can get the same behavior (assuming the endpoint hit isn't using SSE, which IMO Datastar over-emphasizes) in HTMX via a combination of formatting your response body correctly and the response headers. It isn't the way things are typically done in HTMX, for Locality of Behavior reasons, not because it's impossible.
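For instance, the server can steer the swap itself using htmx's response headers (`HX-Retarget` and `HX-Reswap` are real htmx response headers; the endpoint and ID here are just carried over from the example above):

```http
HTTP/1.1 200 OK
Content-Type: text/html
HX-Retarget: #rebuild-bundle-status-button
HX-Reswap: outerHTML

<button id="rebuild-bundle-status-button">Rebuilding…</button>
```

With these headers, the triggering element no longer needs `hx-target` or `hx-swap` attributes, which is essentially the server-driven style Datastar defaults to.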
in Datastar the locality of behavior is in your backend state...
datastar.Patch(renderComponent(db.NextRow))
imho, a single line is the ultimate LOB pattern. idk, ngmi
Datastar keeps the logic in the backend. Just like we used to do with basic html pages where you make a request, server returns html and your browser renders it.
With Datastar, you are essentially building a kind of PWA where you load the page once and then, as you interact with it, it keeps making backend requests and rendering the desired changes instead of reloading the entire page. But you are getting back snippets of HTML, so the browser does not have to do much except the rendering itself.
This also means the state is back in the backend as well, unlike with SPA for example.
So again, Datastar goes back to the old request-response HTML model, which is perfectly fine, valid and tried, but it also allows you to have dynamic rendering, like you would have with JavaScript.
In other words, the front-end is purely visual and all the logic is delegated back to the backend server.
This essentially is all about thin client vs smart client where we constantly move between these paradigms where we move logic from backend to the frontend and then we swing back and move the logic from the frontend to the backend.
We started with thin clients, as computers did not have sufficient computing power back in the day, so backend servers did most of the heavy lifting while the thin clients did very little (essentially they just rendered the ready-made information). That changed over time: as computers got more capable, we moved more logic to the frontend, which allowed us to provide faster interaction since we no longer had to wait for the server to respond to every interaction. This is why there is so much JavaScript today, and why we have SPAs and state on the client.
So Datastar essentially gives us a good alternative: we can choose whether to process more data on the backend or on the frontend, whilst retaining a dynamic frontend. It is not just the basic request-response model where every page has to re-render and we have to wait for the request to finish; we can do this in parallel and still have the impression of a "live" page.
Thanks, brings me back to my youth when I was being accused of cheating or being a bot in Counter-Strike :)
If you still don't get it: Datastar is essentially like server-side rendering for PWAs, but it allows you to use any language you want on the backend while keeping only a micro-library (Datastar itself) on the frontend, allowing you to decouple JS from both frontend and backend while still keeping its benefits.
I've written customer-facing interfaces in HTMX and currently quite like it.
One comment: HTMX supports out-of-band swaps, which make it possible to update multiple targets in one request. There are also ways for the server to redirect the target to something else.
I use this a lot, as well as HTMX's support for SSE. I'd have to check what Datastar offers here, because SSE is one thing that makes dashboarding in HTMX a breeze.
You're accusing the poster of shilling. That's against site rules, but aside from that it makes no sense in this context -- the post talks about the advantages of HTMX versus Datastar.
Being new to Datastar and having seen some of the hype recently, I'm really not sold on it.
The patch statements on the server injecting HTML seem absolutely awful in terms of separation of concerns, and it would undoubtedly become an unwieldy nightmare in an application of any size, once more HTML is being injected from the server.
I'm seriously keen on trying it out. It's not like Htmx is bad, I've built a couple of projects in it with great success, but they all required some JS glue logic (I ended up not liking AlpineJS for various reasons) to handle events.
If Datastar can minimize that as well, even better!
I was late to the hypermedia party; I started with Datastar but now use HTMX when I want something in this space. The Datastar API is a bit nicer, but HTMX 2.0 supports the same approach. The key thing is what HTMX calls OOB updates; with that in place, everything else is a win in the HTMX column.
1. If the element is out-of-band, it MUST have `hx-swap-oob="true"` on it, or it may be discarded / cause unexpected results.
2. If the element is not out-of-band, it MUST NOT have `hx-swap-oob="true"` on it, or it may be ignored.
This makes it hard to use the same server-side HTML rendering code for a component that may show up either OOB or not; you end up having to pass down "isOob" flags, which is ugly and annoying.
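To make the friction concrete, here is a minimal Python sketch of the flag-passing this forces. The `render_counter` helper and its `is_oob` parameter are made up for illustration; only the `hx-swap-oob` attribute is real htmx:

```python
def render_counter(count: int, is_oob: bool = False) -> str:
    """Render the same component for both the main response and OOB swaps.
    Per htmx's rules, hx-swap-oob may appear only in the OOB case."""
    oob_attr = ' hx-swap-oob="true"' if is_oob else ""
    return f'<span id="counter"{oob_attr}>{count}</span>'

# Main target of the request: the attribute must NOT be present.
body = render_counter(42)                 # <span id="counter">42</span>
# Same component piggybacking on another response: the attribute MUST be present.
extra = render_counter(42, is_oob=True)   # <span id="counter" hx-swap-oob="true">42</span>
```

Every call site now has to know which of the two situations it is in, which is exactly the annoyance being described.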
I think Datastar has the better approach here, making OOB the default. I suspect HTMX's non-OOB default makes more sense for very simple requirements where you simply replace the part of the DOM from which the action was triggered. But personally, I find situations where OOB is necessary to be more typical.
Interestingly, elements sent via the HTMX websocket extension [1] do use OOB by default.
This really depends on your server-side HTML rendering approach. I have a library in which I can do this:
node +@ Hx.swap_oob "true"
And this adds the `hx-swap-oob=true` attribute to the given node. It makes it trivial to add on any defined markup in an oob swap.
I get that many people prefer template-based rendering, but imho, to extract the maximum power from htmx, an HTML library that's embedded directly in your programming language is the way to go.
(I don't really understand his argument, but in general I'm in favor of maintainers doing what they think is the right thing; and in any case I'm using his work without paying, so not gonna complain.)
But even if I had an easy way to add the attribute, the fact that I need to think about that extra step is a bit of extra friction HTMX imposes, which datastar doesn't.
In reality, neither of them makes any such claims. And they are not HTML-like - they're literally HTML. Especially Datastar, which doesn't add any non-HTML-spec attributes.
Yeah, yeah, they "literally add nothing" except small things like a custom JavaScript-like DSL (Datastar), or a custom DSL plus custom HTTP headers (htmx).
But it's "just html", so it's all fine
Edit: Oh, and don't forget that "especially Datastar, which doesn't add any non-HTML-spec attributes" in reality adds two custom DSLs: one in the form of HTML attributes, and the other in the form of a JS-like expression language.
The spec is literally just data-* (hence the name): you can add whatever you want to it and remain in spec. And data-* attributes are meant to be read by JavaScript, which is exactly what Datastar does.
I like the alpine-ajax API. You specify one or more targets and it swaps each of those elements. No default case or OOB, just keeping it uniform instead.
As for Datastar, all the signal and state stuff seems to me like a step in the wrong direction.
One of the big promises of HTMX is that the client doesn't have to understand the structure of the returned data since it's pre-compiled to the presentation layer, and it feels like this violates that quite heavily since now the calling page needs to know the IDs and semantics of the different elements the server will return.
This isn't really a criticism of Datastar, though: I think the popularity of OOB in HTMX indicates that the pure form of this is too idealistic for a lot of real-world cases. But it would be nice if we could come up with a design that gives the best of both worlds.
That doesn't seem to be the ‘standard’ way to use Datastar, at least as described in this article?
If one were to rerender the entire page every time, what's the advantage of any of these frameworks over just redirecting to another page (as form submissions do by default)?
It's the high performance way to use Datastar and personally I think it's the best DX.
1. It's much better in terms of compression and latency, as with brotli/zstd you get compression over the entire duration of the connection. So you keep one connection open and push all updates down it; all requests return a 204 response. Because everything comes down the same connection, brotli/zstd can give you 1000-8000x compression ratios. In my demos, for example, one check is 13-20 bytes over the wire even though it's 140k of HTML uncompressed. Keeping the packet size around 1k or less is great for latency. A redirect also has to make more round trips.
2. The server is in control. I can batch updates. The reason these demos easily survive HN is that the updates are batched every 100ms. That means at most one new view gets pushed to you every 100ms, regardless of the number of users interacting with your view. In the case of the GoL demo, the render is actually shared between all users, so it only renders once per 100ms regardless of the number of concurrent users.
3. The DX is nice and simple: good old view = f(state), like React, just over the network.
So even though HTTP/2 multiplexes each request over a single TCP connection, each HTTP response stream is still compressed separately. Same with keep-alive.
The magic is that brotli/zstd are very good at streaming compression, thanks to back-references. What this effectively means is that the client and the server share a compression window for the duration of the HTTP connection. So rather than each message being compressed separately with a fresh context, each message is compressed with the context of all the messages sent before it.
What this means in practice: if you are sending 140KB of divs on each frame but only one div changed between frames, then the next frame will only be ~13 bytes, because the compression algorithm basically says to the client "you know that message I sent you 100ms ago? Well, this one is almost identical apart from this one change." It's like a really performant byte-level diffing algorithm, except you as the programmer don't have to think about it. You just re-render the whole frame and let compression do the rest.
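You can see the shared-window effect with nothing but the Python stdlib. Here zlib stands in for brotli/zstd (the same idea, just a weaker codec and smaller window), and the frame contents are invented for the demo:

```python
import zlib

# One compressor object per connection = a shared compression window,
# loosely analogous to the brotli/zstd streaming described above.
c = zlib.compressobj()

# A "frame" of ~22 KB of HTML; the ids vary so it is not trivially self-similar.
frame1 = b"".join(b'<div id="cell-%d" class="dead"></div>' % i for i in range(600))
# The next frame is identical except that one cell changed state.
frame2 = frame1.replace(b'class="dead"', b'class="live"', 1)

# Z_SYNC_FLUSH emits all pending bytes while keeping the window open,
# which is what writing messages to a streaming HTTP response effectively does.
sent1 = c.compress(frame1) + c.flush(zlib.Z_SYNC_FLUSH)  # first frame pays full cost
sent2 = c.compress(frame2) + c.flush(zlib.Z_SYNC_FLUSH)  # mostly back-references

print(len(frame1), len(sent1), len(sent2))
```

The second frame comes out far smaller than the first, even though the application "sent" the full 22 KB both times; brotli/zstd with their much larger windows push this effect to the ratios quoted above.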
In these demos I push a frame to every connected client when something changes, at most every 100ms. What that means is, effectively, all the changes that happen in that window are batched into a single frame. It also means the server stays in charge and controls the flow of data (including back pressure, if it's under too much load or the client is struggling to render frames).
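The 100ms batching itself is a tiny amount of code. A hypothetical sketch of the idea (the class name and API here are mine, not Datastar's):

```python
import time

class FrameBatcher:
    """Coalesce many state updates into at most one pushed frame per interval."""

    def __init__(self, interval=0.1, now=time.monotonic):
        self.interval = interval
        self.now = now            # injectable clock, handy for testing
        self.dirty = False
        self.last_flush = float("-inf")

    def update(self):
        # Any number of updates between flushes just mark the shared view dirty.
        self.dirty = True

    def maybe_flush(self, render, send):
        # Push at most one freshly rendered frame per interval.
        t = self.now()
        if self.dirty and t - self.last_flush >= self.interval:
            send(render())
            self.dirty = False
            self.last_flush = t
            return True
        return False

# 50 updates arriving every 10ms collapse into 5 frames at a 100ms cadence.
clock = [0.0]
sent = []
b = FrameBatcher(interval=0.1, now=lambda: clock[0])
for _ in range(50):
    b.update()
    b.maybe_flush(lambda: "<div id='board'>...</div>", sent.append)
    clock[0] += 0.01
print(len(sent))  # 5
```

The render cost stays bounded by the interval, no matter how many users or updates there are, which is the back-pressure property described above.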
I was tired of writing backend APIs with the only purpose that they get consumed by the same app's frontend (typically React). Leading to boilerplate code both backend side (provide APIs) and frontend side (consume APIs: fetch, cache, propagate, etc.).
Now I am running 3 different apps in production for which I no longer write APIs. I only define states and state updates in Python. The frontend code is written in Python, too, and auto-transpiled into a React app, the latter keeping its states and views automagically in sync with the backend. I am only 6 months into Reflex, but so far it's been mostly a joy. Of course you've got to learn a few small but important details, such as state dependencies and proper state caching, but the upsides of Reflex are a big win for my team and me. We write less code and ship faster.
I run 6 React apps in prod, which used to consume APIs written with Falcon, Django and FastAPI. Since 2 years ago, they all consume APIs from PostgREST. I define SQL views for the tables I want to expose, and optionally a bunch of SQL grants and SQL policies on the tables if I have different roles/permissions in the app, and PostgREST automatically transforms the views into endpoints, adds all the CRUD + UPSERT capabilities, handles the authorization, filtering, grouping, ordering, insert returning, pagination, and so on.
I checked this out because it sounded cool and I was not expecting to see a landing page about AI and "Contact sales" for pricing info if you don't want your work to be data-mined. 2025, man. Sigh.
I may be just completely out of my depth here, but I look at the cool example on their website, the "Open the pod bay doors, HAL" bit, and I don't like it, at all.
And reading comments one would think this is some amazing piece of technology. Am I just old and cranky or something?
This feels... very hard to reason about. Disjoint.
You have a front-end with some hard-coded IDs on e.g. <div>s. A trigger on a <button> that black-box calls some endpoint. And then, on the backend, you use the SDK for your language of choice to call methods like `patchElements()` on an SSE "framework", which translates your commands into custom "event" lines and metadata in the open HTTP stream; then some "engine" on the front-end patches the DOM on the fly with whatever you sent through the pipe.
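For what it's worth, the wire format being described here is just text over SSE. A rough Python sketch; the `datastar-patch-elements` event name and the `elements` data prefix are my assumptions inferred from the comment above, not verified against the Datastar spec:

```python
def patch_elements(html: str) -> str:
    """Format one SSE message that asks the client to patch `html` into
    the DOM (matched by element id). Wire-format details are assumptions;
    check the Datastar SSE reference before relying on them."""
    data = "".join(f"data: elements {line}\n" for line in html.splitlines())
    return f"event: datastar-patch-elements\n{data}\n"

# Each write to the long-lived HTTP response is just a chunk of text like this:
msg = patch_elements('<div id="status">doors open</div>')
print(msg)
```

Whether that counts as an elegant protocol or "presentation logic scattered over the wire" is exactly the disagreement in this thread.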
This feels to me like something that will very quickly become very hard to reason about globally.
Presentation logic scattered in small functions all over the backend. Plus whatever on-render logic through a classic template you may have, because of course you may want to have an on-load state.
I'm doing React 100% nowadays. I'm happy, I'm end-to-end type safe, I can create the fanciest shiny UIs I can imagine, I don't need an alternative. But if I needed it, if I had to go back to something lighter, I'd just go back to all in SSR with Rails or Laravel and just sprinkle some AlpineJS for the few dynamic widgets.
Anyway, I'm sure people will say that you can definitely make this work and organize your code well enough and surely there are tons of successful projects using Datastar but I just fail to understand why would I bother.
I’ve not tried Datastar in anger but I have tried HTMX after all the hype and it quickly became unmaintainable.
My dream was having a Go server churning out all this hypermedia and I could swerve using a frontend framework, but I quickly found the Go code I was writing was rigid and convoluted. It just wasn’t nice. In fact it’s the only time I’ve had an evening coding session and forgotten what the code was doing on the same evening I started.
I’m having a completely opposite experience with Elixir and Phoenix. That feels like an end to end fluid experience without excessive cognitive load.
BEAM + Elixir + Phoenix feels like I can control a whole system from the CPU processes (almost) up to the UI elements on a remote user’s screen, all in one easy-ish-to-understand system and language.
Granted, I’ve only used it for smaller projects, but I can almost feel my brain relax as the JS fades out, and suddenly making web apps is super fun again.
html/template blocks are not as ergonomic. They force you to work on the template level and drill down into the blocks. Templ, Gomponents etc. let you build up the components from smaller pieces, like Lego.
The preferred pattern addresses your concern about scattered logic: a single long-lived SSE endpoint that "owns" the user's view of the app. That endpoint updates their field of view as appropriate - very much inspired by game dev's immediate mode rendering.
An interesting characteristic of Datastar: it's very opinionated about the shape of your backend but extremely unopinionated about how you implement that shape.
My understanding is Turbo is more aligned with htmx. Common practice in Turbo are generally patterns of last resort in Datastar.
e.g. Datastar prescribes a single long lived SSE endpoint that owns the state for the currently connected user's view of the world / app, while common practice in Turbo is to have many small endpoints that return a fragment of html when requested by the client.
The idea of HATEOS is that HTML isn't "presentation logic", it IS the state of your application. Then, the backend manages the state of your application.
Yup. Another way to frame it is a "return to form" by moving app and business logic back to the server. Technology like HTMX and Datastar are optimizations that allow for surgical updates of portions of the client DOM, instead of forcing full-page refreshes like we did 25 years ago.
I share your feelings. If you like React and its trade-offs, and you're comfortable using it (based on various HN discussions, the easiest sign is that you're able to understand the concept of hooks and you don't feel the need to wrongly yell everywhere about how it's a bad abstraction :D), you can forget about Datastar or HTMX.
For context, I worked with large React codebases, contributed to various ecosystem libraries (a few bigger ones: react-redux, styled components, react-router). So, I'm pretty comfortable with hooks, but I still make mistakes with them if React isn't in my daily routine (different day job now, only use React occasionally for some pet projects).
I've also onboarded interns and juniors onto React codebases, and there are things about React that only really make sense if you're more old-school and know how different types behave, so you can understand why certain things are necessary.
I remember explaining to an intern why passing an inlined object as a prop was causing the component to rerender, and they asked whether that's a codebase smell... That question kinda shocked me, because to me it was obvious why this happens, and it's not even a React issue directly. However, the fix is to write "un-JavaScripty" code in React. So this person's intro to JS is React, and their whole understanding of JS is weirdly anchored around React now.
So I totally understand the critique of hooks. They just don't seem to be in the spirit of the language, but do work really well in spite of the language.
As someone who survived the early JS wilderness, then found refuge in jQuery, and after trying a bunch of frameworks and libraries, finally settled on React: I think React is great, but objectively parts of it suck, and it's not entirely its fault.
> and they asked whether that's a codebase smell...
Something that's been an issue with our most junior dev, he's heard a lot of terminology but never really learned what some of those terms mean, so he'll use them in ways that don't really make sense. Your example here is just the kind of thing I'd expect from him, if he's heard the phrase "code smell" but assumed something incorrect about what it meant and never actually looked up what it means.
It is possible your co-worker was asking you this the other way around - that they'd just learned the term and were trying to understand it rather than apply it.
Htmx got me into hypermedia heaven, but it led me to datastar for sure.
Recently we also had an interview with the creator of Datastar, where he also talked a bit about darkstar (something he wants to build on top of WebTransport for the few things Datastar is not well suited for yet).
The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
But it was a nice pattern to work with: for example if you made code changes you often got hot-reloading ‘for free’ because the client can just query the server again. And it was by definition infinitely flexible.
I’d be interested to hear from anyone with experience of both Datastar and Hotwire. Hotwire always seemed very similar to HTMX to me, but on reflection it’s arguably closer to Datastar because the target is denoted by the server. I’ve only used Hotwire for anything significant, and I’m considering rewriting the messy React app I’ve inherited using one of these, so it’s always useful to hear from others about how things pan out working at scale.
> The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
Basically every single web page on the modern web has the server returning JS that the client then executes. I think you should clarify what's dangerous about the specific pattern you're thinking of that isn't already intrinsic to the web as a whole.
I like Hotwire, but I admit it's a bit confusing to get started with, and the docs don't help.
Form submits + redirects are a bit weird: you can't really make the server "break out" of a frame during a redirect if the form was submitted from inside a frame (there are workarounds, see https://github.com/hotwired/turbo/issues/257).
During 2015-2018 I was not working on the FE, and when I started again, everyone was using JS frameworks, ditching MVC, aspx and similar.
Now I have again been away from the FE for 3 years, and it seems everybody is going back to sending HTML from the server.
I am not saying it is wrong. It is just a bit funny, from this perspective, to watch the pendulum swing the other way.
Despite the “just figure it out” style of documentation, I still believe Hotwire + Stimulus (optional honestly) to be the best iteration of the low-JavaScript reactivity bunch.
Htmx gives me bad vibes from having tons of logic _in_ your html. Datastar seems better in this respect but has limitations Hotwire long since solved.
That is not at all what HTMX does.
HTMX is "If the user clicks[1] here, fetch some html from the server and display it". HTMX doesn't put logic in your HTML.
> ...To accomplish this, most HTMX developers achieve updates either by “pulling” information from the server by polling every few seconds or by writing custom WebSocket code, which increases complexity.
This isn't true. HTMX has native support for "pushing" data to the browser with Websockets or SSE, without "custom" code.
I've been using datastar for the last year to much success. The core library is fantastic. I use Go as my backend language of choice, and have a boilerplate project built from examples that were in the original datastar site code. I've also added some extra examples to show how one can build web components that work seamlessly to integrate with JS libs that exist today and drive them from a backend server.
If you are looking to understand what's possible when you use datastar and you have some familiarity with Go, I hope this is a solid starting point for you.
I'm still trying to figure out what the key difference would be when writing an app with Datastar over HTMX.
I wrote /dev/push [1] with FastAPI + HTMX + Alpine.js, and I'm doing a fair bit with SSE (e.g. displaying logs in real time, updating the state of deployments across lists, etc.). Looking at the Datastar examples, I don't see where things would be easier than this [2]:
Also curious what others think of web components. I tried to use them when I was writing Basecoat [3] and ended up reverting to regular HTML + CSS + JS. Too brittle, too many issues (e.g. global styling), too many gaps (e.g. state).
BTW, this comment is very true when dealing with HTMX as well:
> But what I’m most excited about are the possibilities that Datastar enables. The community is routinely creating projects that push well beyond the limits experienced by developers using other tools.
For example, when displaying the list of deployments, rather than trying to update any individual deployment as its state changes, it's just simpler to update the whole list. Your code is way simpler/lighter as you don't need to account for all the edge cases (e.g. pager).
I don't see a good enough reason to move over from Htmx; unless I'm missing something, you're just moving more lines to the server side. At this point, why not just bite the bullet and go back to the old days of PHP serving HTML? "Look mom, 0 lines of markup."
>At this point why not just bite the bullet and go back to the old days of php serving html.
Going back to it is the point. HTMX lets you do that while still having that button refresh just a part of the page, instead of reloading the whole page.
It's AJAX with a syntax that frees you from JS and manual DOM manipulation.
I fairly recently developed an app in PHP, in the classic style, without frameworks.
It provided me with the stuff I remembered: the $annoyance $of $variable $prefixes, the wonky syntax, and a type system that makes JS look amazing -- but it still didn't make me scream in pain and confusion like React. Getting the app done was way quicker than if any JS framework had been involved.
Having two separate but tightly integrated apps is annoying. HTMX or any other classic web-dev approaches like PHP and Django make you have one app, the backend. The frontend is the result of executing the backend.
ALL OF THE LINES ARE ON THE SERVER FOR BOTH OF THEM! That's what ssr html is!
Both are just small javascript libraries that allow you to do some interactive stuff declarative in your ssr html. But Datastar is smaller, simpler, more powerful and closer to web standards.
Does it now allow handling non-2xx responses in non-SSE actions? Refusing to support it (even as an opt-in) is what made me look into using Alpine + Alpine AJAX instead. SSE in d* is awesome when you have a feature that needs it, but IMO d* completely over-emphasizes it and wants you to use it for everything. If I were using d*, I would use it more often, sure. But most of my projects just need little HTML updates on the click of a button, that's all. I'm not going to change the whole architecture to tailor it to a 1% feature.
> One of the amazing things from David Guillot’s talk is how his app updated the count of favored items even though that element was very far away from the component that changed the count.
This might not seem like a big deal, but it looks like Datastar dramatically reduces the overhead of a common use-case. The article shows how to update a component and a related count, elsewhere in the UI.
A more practical use-case might be to show a toast in tandem with navigating to another view. Or updating multiple fields on a form validation failure.
I like that datastar has better defaults, embracing SSE makes certain things much simpler and cleaner even on the backend (no need to wrangle templates with htmx oob for example).
I am okay with the open-core and pro model.
But the maintainers are quite combative on HN and Reddit as well. That does not bode well for an otherwise great project.
Funny, the next cycle is starting ;-) I remember Vaadin, which was a great framework just before AngularJS took off. Now Datastar seems to give it another try and bring everyone back to server calls...
Lots of mentions of HTMX throughout the comments. If you are interested and have the time, read the first chapters of this book[1]. Very well written and a bit nostalgic, I should say, at least for those of us who lived through the web 1.0 days.
Yup, which is why I never understand why people keep making this criticism that could have been avoided by just reading the docs a little bit more or even asking on the htmx Discord.
Genuine question from an "old school" web developer: can someone please give me an example of where these new frontend technologies are actually better than just using HTML, CSS, and vanilla JavaScript or jQuery?
I have honestly yet to see an example where using something like React doesn't just look like it's adding unnecessary complexity.
These tools are absolutely nothing like React. Take a look at what htmx does, which is even simpler from a spec standpoint. There are actual on-going efforts to get it into the actual HTML spec. htmx and the like are basically built for us old-skool types (and thankfully many youngins are catching on).
Glad I'm not the only one. Ever since the first HTMX article, I felt like I was kidding myself. I had/have this thought in my head that "no way that we were that close to having all this right 25 years ago." I'm coming around and seeing that this tech gets the job done by doing one thing really well, and the whole API around it is dead-simple and bulletproof because of it. It's that good-old UNIX philosophy that's the enabling tech here.
While I can't say for certain that IE6 or early Firefox could have handled DOM swaps gracefully without real shadow DOM support, early Ajax provided the basic nuts and bolts to do all of this. So why didn't we see partial page updates as a formalism sooner?
Not sure what you mean by so much more code. Datastar seems to do more than htmx. Otherwise, there are fewer features because React and friends over-complicate things for the vast majority of use-cases.
Ohhhhh, I thought you meant simpler than React, lol. Gotcha. I was going by what it does, not lines of code in the implementation, which is what matters in this context (a skeptic looking to check something out quickly).
I think the Datastar of back when I was learning web programming, at the dawn of AJAX, would be Xajax [1]. I didn't even learn JavaScript back then, because Xajax would generate a JS shim that you could call to trigger server-side functions, and those functions would replace page fragments with new, server-generated content.
htmx, meanwhile, reminds me of Adobe Spry Data [2], enough that I did some research into htmx and realized that Spry Data's equivalent is an htmx plugin, and htmx itself is more similar to Basecamp's Hotwire. I assume there should be a late-2000s AJAX library that does something similar to htmx, but I didn't use one, as jQuery was easy enough anyway.
Anyway, as other commenters have said, the idea of htmx is basically that for some common use cases where you used jQuery, you might as well write no JavaScript at all to achieve the same tasks. But that is not possible today, so think of htmx as a polyfill for future HTML features.
Personally I still believe in progressive enhancement (a website should work 100% without JavaScript, but you'll lose all the syntactic sugar - for example, a Hashcash-style proof-of-work captcha may just give you the inputs and you'll have to do the exact same proof of work manually, then submit the form), but I've yet to see any library that can offer that with a complex interface without any code duplication. (Maybe ASP.NET can do that, but I don't like the somewhat commercialized .NET ecosystem.)
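That no-JS proof-of-work fallback is plausible because hashcash-style PoW is only a few lines in any language. A generic sketch (not any particular captcha product's algorithm; the challenge string and difficulty are invented):

```python
import hashlib
from itertools import count

def solve_pow(challenge: str, difficulty_bits: int = 12) -> int:
    """Hashcash-style proof of work: find a nonce such that
    sha256(challenge:nonce) starts with `difficulty_bits` zero bits."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """The server's side: one hash to check, however long solving took."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

# The JS-enabled path runs solve_pow in the browser; the no-JS fallback
# shows the challenge and lets the user compute the nonce offline.
nonce = solve_pow("form-token-abc123")
```

The asymmetry (expensive to solve, one hash to verify) is what makes the server-side check cheap enough to do on every form submission.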
The UI I think would require React is a wizard-style form with client-side rendered widgets (e.g. tabs). If you can't download a library to implement that, it is a lot of work to implement on the backend, especially in modern websites where your session is now a JWT instead of a $_SESSION that would require a shared global session storage engine. I'd imagine that if you don't use React, when the user goes back to the tabbed page, you'd need to either implement the tab-switching code on the backend side as well, or cheat and emit JS code to switch the active tab to whatever the backend wants.
> The UI I think would require React is a wizard-style form with clientside rendered widgets (eg. tabs).
Can you think of any example sites/web apps which illustrate what you mean? I'm imagining something like VSCode, but AFAIK it's built with a custom JS framework and not React.
Try the EC2 creation page. There are tabs for advanced options, widgets like image selection where you can choose from AWS-managed, Community, your own AMI, etc. And then the next page is a confirmation with similar widgets, which you can go back and edit. I'd imagine that if you render it on the backend first and one of the tabs has an error, your backend form library has to know how to rerender all the widgets that you already implemented once in JavaScript. If the page is done as an SPA, the backend just sends the data back and the existing frontend widget just has to rehydrate itself.
I'm really just starting with htmx but came across datastar yesterday. This is a great comparison and is confirming some of my impressions, so thanks! I'll still look a bit more but if the main thing is that it's naively adding Alpine or Stimulus then datastar is not for me.
I was going to because I like the architecture, but then I saw the fact that licensing changes are going to be frequent, and that the developers seem a bit aggressive on another thread, and I've decided to skip it.
Datastar's developers are free to do what they want with their code, but as someone who releases open source software, I'm tired of projects using open source simply to build a moat or user base and then switching to a proprietary model.
"Why I switched from HTMX to Datastar" -> Why I never switched to HTMX: because there will always be something better, and after that there will be something better still.
Or the backwards-incompatible HTMX v2 will finish it off, leaving the whole codebase obsolete. It's the circle of life.
I have never heard of it, but I loved it reading a bit of the docs, especially as someone who doesn't like the whole front end circus! I was planning to teach myself svelte but it seems this one is more than enough!
I can't say I like the server returning portions of HTML that need to match the HTML in the client, but I can see myself trying it in a monorepo and using some templating lib on both sides.
I guess one thing that might be potentially problematic: if you update the server while someone still has the page open, you need to match their original template version, not the new (potentially incompatible) one.
> Since then, teams everywhere have discovered the same thing: turning a single-page app into a multi-page hypermedia app often slashes lines of code by 60% or more while improving both developer and user experience.
Well, not at all. The only compelling reason for me to use server-side rendering for apps (not blogs, obviously; those should be HTML) is metadata tags. That's why I switched from pure React, and everything has been harder, slower for the user, and more difficult to debug than client-side rendering.
These "we cut 70% of our codebase" claims always make me laugh. We have no idea what was going on in that original codebase. The talk literally shows severely cursed lines stretching to the moon like:
<div
hx-get="{% url 'web-step-discussion-items-special-counters' object.bill_id object.pk %}?{{ request.GET.url...who knows how many characters long it is.
It's hard to tell whether they optimised the app, deleted a ton of noise, or just merged everything into those 300-character-long megalines.
> These "we cut 70% of our codebase" claims always make me laugh.
There's also a slide in my talk that presents how many JS dependencies we dropped, while not adding any new Python. Retrospectively, that is a much more impressive achievement.
... but the whole social movement of "back to the backend" is about getting rid of the client-side application as a separate component
of course it (should) lead to a lot less code! at the cost of completely foregoing most of the capabilities offered by having a well-defined API and a separate client-side application
... and of course this is happening as over the last ~2 decades we mostly figured out that most of that amazing freedom on the client-side is not worth it
... most clients are dumb devices (crawlers), most "interactions" are primitive read-only ones, and having a fast and simple site is a virtue (or at least it makes economic sense to shunt almost all complexity to the server-side, as we have fast and very capable PoPs close to users)
> ... and of course this is happening as over the last ~2 decades we mostly figured out that most of that amazing freedom on the client-side is not worth it
It's not that, at least in my opinion, it's that we love (what we perceive as) new and shiny things. For the last ten years with Angular, React, Vue et al., new waves of developers have forgotten that you can output stuff directly from the server to the browser outside of "APIs".
This implementation is "dumb" to me. Feels like the only innovation is using SSE; otherwise it's roughly `el.addEventListener('click', async () => { el.outerHTML = await (await fetch(endpoint)).text(); });`. That's most of the functionality right there. You can even use the native inline event handler attributes instead of calling addEventListener https://developer.mozilla.org/en-US/docs/Web/API/Element/cli....
Cool, how do you do exponential backoff, make sure it auto-reconnects on tab visibility changes, and make sure that when you replace stuff it keeps the same selection? I'm sure if you had enough of these you'd end up with a 10-kilobyte shim
Ok? Not sure what your point is. I'm not saying the package is bloated or anything. I'm saying it's a very simple functionality that was considered an anti-pattern when Angular and React were coming up.
sprinkling event handlers all over, doing DOM manipulations, and piling jQuery plugins on other plugins ... that was the anti-pattern
saying "nah, fuck this, let's just do a rerender" is what happened; going back to doing it on the server side is one way, but doing it on the client side is the "React way"
I don't know how old you are, but I distinctly remember how big of a hard on everyone had for Angular and then React and virtual DOM. React actually brought some good things in how you engineered your frontend code. This thing goes back on that completely and forces you to mix and match your frontend code on both the frontend and the backend. I genuinely don't understand how one could seriously consider this for a large application.
That's kind of the point. Don't throw out the modern features of the browser, but use them for finer-grained interactivity. Other than that, most of the state lives in the back end. It's really just getting back to normalcy
That's... no, servers sending HTML to the client was where we started with all this.
That's why the H in HTTP and in HTML stand for "Hypertext." Any time a webserver replies with something other than markup, _that's_ the extension/exception to that very old design.
Now, if you're talking about the separation of user-interface, data, logic, and where HTML fits in, that's a much bigger discussion.
Datastar author here, all this happened while I was asleep so yeah I'm really good at reading the system in my dreams. I must be actually AGI and totally not a real person. Highly suspect
to elaborate on this for others, the datastar homepage has webcomponents such as the starfield animation at the top. And they're releasing a fantastic web component framework/tool soon called Rocket. It'll be like Lit, but simpler, better and integrated with the rest of datastar
I had a SaaS project last year with a massive HTMX code base. The code was big and the pain was even bigger. A few months back I attempted to convert parts of it to DataStar, but the introduction of a premature "DataStar Pro" and putting pretty basic but essential utilities behind the paywall killed the vibe. I scrapped the idea and wouldn't go near it.
Having just watched the Vite documentary, I'd say both HTMX and DataStar have a higher-order mission: to challenge dominant incumbent JS frameworks like React/Next.js. HTMX is struggling, and in my opinion DataStar is DOA!
Win the adoption, win the narrative, then figure out cashing in. The people behind Vite won the JS bundling race; they now have a new company, Void(0), and raised venture money. Next.js solved major React pain points, gave it away for free, and built a multi-billion-dollar infrastructure business to host it.
Be careful with Datastar. If the paid "PRO" features are not enough to warn you let me just say that I had a very unfortunate encounter with the author. I asked about how to do something like wire:navigate in Livewire and he told me that's not necessary and I don't understand Datastar and I should go fuck off. He was very ad hominem and aggressive. Won't use his product ever.
You are wrong. A few months ago, before the Pro announcement, I was exploring HTMX and Unpoly, and Datastar was the new thing. It looked cool, especially the demo of a game-like thingy. But the page still felt kind of unresponsive. This is a common pattern even among LiveView pages, where the server round trip is still a thing unlike in a typical SPA; despite using SSE it's still not the local React/Svelte/Vue app experience. That's why you end up moving more and more parts from Livewire/LiveView to Alpine ... anyway ... I asked the guys on the Datastar Discord channel how I would do SPA-like page navigation between pages. And he got irritated, probably because this wasn't the first time somebody asked that, and told me that I don't get Datastar, that that is wrong, that Datastar doesn't care about that. But it was in such a weird, aggressive way; he was mocking me and my intelligence and used very childish ad hominem attacks. I then left. And OK, he doesn't like SPA navigation, or Datastar doesn't care about it at all, but the way he addressed it via an attack on me was super negative. You don't call people idiots because they would like from Datastar a functionality that is common in HTMX, Unpoly, LiveView or Livewire. Perhaps they have something like that, maybe in the Pro version, but I don't care. If you want realtime, go with Phoenix LiveView instead; their community is much more friendly and mature.
We don't shy away from telling you your ideas are terrible. Being mature is caring about your code and your users. You were VERY clearly making technically bad arguments and we pushed back. We aggressively care about the details, and if you don't then please go use LiveView. We aren't trying to win popularity contests. Show code and prove your point or continue to clutch your pearls
Thanks to Chris to continue challenging his comfort zone (and mine!) and sharing his impressions and learnings with us!
I may be a little biased because I've been writing webapps with htmx for 4 years now, but here are my first thoughts:
- The examples given in this blogpost show what seems to be the main architectural difference between htmx and Datastar: htmx is HTML-driven, Datastar is server-driven. So yes, the client-side API is simpler, but that's because the other side has to be more complex: in the first example, if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side. I guess it's a matter of personal preference then, but from an architecture point of view both approaches hold up
- The "fewer attributes" argument seems unfair when the htmx examples use optional attributes with their default values (yes, you can remove the hx-trigger="click" in the first example; that's 20% fewer attributes, and the argument is now 20% weaker)
- Minor but still: the blogpost would gain credibility and its arguments would be stronger if HTML was used more properly: who wants to click on <span> elements? <button> exists just for that, please use it, it's accessible ;-)
- In the end I feel that the main Datastar selling point is its integration of client-side features, as if Alpine or Stimulus features were natively included in htmx. And that's a great point!
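To make the contrast concrete, here is the same hypothetical "click to load" interaction sketched in both libraries (the endpoint and ids are invented; the Datastar attribute syntax shown is the data-on-click/@get form from its docs):

```html
<!-- htmx: the triggering element declares where the fragment goes -->
<button hx-get="/contacts" hx-target="#contact-list">Load contacts</button>
<div id="contact-list"></div>

<!-- Datastar: the element only fires the request; the server's response
     decides which element (matched by id) gets patched -->
<button data-on-click="@get('/contacts')">Load contacts</button>
<div id="contact-list"></div>
```

Neither version is objectively simpler overall; the targeting knowledge just lives on a different side of the wire.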
> htmx is HTML-driven, Datastar is server-driven
As far as I understand, the main difference between htmx and Datastar is that htmx uses innerHTML swaps by default and Datastar uses morph swaps by default; morphing is available as an extension for htmx [1].
Another difference is that Datastar comes with SSE, which indeed makes it server-driven, but you don't have to use SSE. Datastar also comes with client-side scripting by default. So you could say Datastar = integrated htmx + idiomorph + SSE + Alpine.
[1] https://htmx.org/extensions/idiomorph/
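For reference, wiring up the idiomorph extension in htmx looks roughly like this (a sketch based on the extension docs; it assumes the extension script is loaded alongside htmx):

```html
<body hx-ext="morph">
  <!-- morph the response into the target instead of replacing innerHTML -->
  <button hx-get="/news" hx-target="#news" hx-swap="morph:outerHTML">
    Refresh
  </button>
  <div id="news">...</div>
</body>
```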
The article stated that he no longer needs eventing to update other parts of the page, he can send down everything at once. So, I guess that is much less complex. Granted, eventing and pulling something down later could be a better approach depending on the circumstance.
You can send everything down at once with htmx too, with oob swaps.
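A minimal sketch of an oob response: a single htmx request comes back with the targeted fragment plus extra fragments flagged hx-swap-oob, which htmx patches into the elements with matching ids (ids here are invented):

```html
<!-- fragment for the element the request targeted -->
<div id="main-result">Saved item #42</div>

<!-- extra fragments swapped "out of band" into matching ids elsewhere -->
<span id="unread-count" hx-swap-oob="true">7</span>
<div id="toast" hx-swap-oob="true">Saved!</div>
```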
yes you can, but the complexity is now moved to server-side template wrangling. With SSE, it's just separate events with targets. It feels much cleaner
Server-side template wrangling is not really a big deal if you use an HTML generation library... something like Python's htpy/FastHTML or JavaScript's JSX. You can easily split the markup down into 'components' and combine them together trivially with composition.
I mean in practice you rarely target individual elements in datastar. You can sure. But targeting the main body with the entirety of the new content is way simpler. Morph sorts out the rest
A good example is when a page has expensive metrics specific to say a filter on the page. Let's say an action on the page shows a notification count change in the top right corner.
While morph will figure it out, it's unnecessary work done on the server to evaluate the entire body
Expensive queries on the server should be shared where they can be (eg: global leaderboard) or cached on the server (in the game of life demo each frame is rendered/calculated once, regardless of the number of users). Rendering the whole view gives you batching for free and you don't have to have all that overhead tracking what should be updated or changed. Fine grained updates are often a trap when it comes to building systems that can handle a lot of concurrent users. It's way simpler to update all connected users every Xms whenever something changes.
I agree on caching. But in general my point stands. The updates in question may not even be shared across users, but specific to one user.
Philosophically, I agree with you though.
Yeah, so that was how I used to think about these things. Now I'm less into the fine-grained user updates too.
Partly because the minute you have a shared widget across users, 50%+ of your connected users are going to get an update when anything changes. So the overhead of tracking who should update when you are under high load is just that: overhead.
Being able to make those updates coarse-grained and homogeneous makes them easy to throttle, so changes are effectively batched and you can easily set a max rate at which you push changes.
Same with diffing, the minute you need to update most of the page the work of diffing is pure overhead.
So in my mind a simpler coarse-grained system will, somewhat counter-intuitively, actually perform better under heavy load in that worst-case scenario. At least that's my current reasoning.
"Alpine or Stimulus features were natively included in htmx"
I'm contemplating using HTMX in a personal project - do you know if there are any resources out there explaining why you might also need other libraries like Alpine or Stimulus?
They're for client-side-only features. Think toggling CSS classes, or updating the index on a slider; you ideally don't want to have to hit the server for that
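For example, a purely client-side toggle in Alpine never touches the server (the markup is illustrative):

```html
<div x-data="{ open: false }">
  <button @click="open = !open">Menu</button>
  <!-- x-show flips visibility entirely in the browser -->
  <nav x-show="open">...</nav>
</div>
```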
Thanks - I was having a quick read of the documentation for those projects and that makes perfect sense.
if you use alpine, make sure to get the morph extensions for both htmx and alpine.
Reminds me a bit of the Seaside framework in Pharo. A lot of what I programmed in Pharo at my previous employer involved a lot of back and forth between front-end and back-end, because the back-end was managing the front-end state. For B2B apps that don't have tight latency requirements, etc., I'd say it's better. For highly scalable B2C apps though? No.
Could you expand on why you think it (back-end managing the front-end's state) is better in the scenarios that you do?
Edit - rather than spam with multiple thank you comments, I'll say here to current and potential future repliers: thanks!
Not GP, but I would say it's the same reason someone would use React. If you keep your state in a single place, the rest of the app can become very functional and pure. You receive data and transform it (or render it). The actual business logic that manipulates the state can be contained in a single place.
This reduces a lot of accidental complexity. If done well, you only need to care about the programming language and some core libraries. Everything else becomes orthogonal to everything else, so the cost of changes is greatly reduced.
I would imagine the same arguments for Smalltalk like live coding and an IDE within your production application. So you get some overlap with things like Phoenix LiveView, but more smalltalk-y.
I assume it had backend scaling issues, but usually backend scaling is over-stated and over-engineered, meanwhile news sites load 10+ MB of javascript.
> if the HTML element doesn't hold the information about where to inject the HTML fragment returned by the server, the server has to know it, so you have to write it somewhere on that side
I'm not too strong in frontend, but wouldn't this make for a lighter, faster front end? Especially added up over very many elements?
100%. Datastar just makes HTML support reactive expressions in data-* attributes, that's it. You will become stronger at the web because it just gets out of your way
I don't think the difference would be significant. How many of your HTML elements would become interactive with htmx? There's a limit to how much interaction you can reasonably add on a page. This will also limit the number of new attributes you will introduce in the markup.
Also, by this argument should we leave out the 'href' attribute from the '<a>' tag and let the server decide what page to serve? Of course not, the 'href' attribute is a critical part of the functionality of HTML.
Htmx makes the same argument for the other attributes.
Fantastic write up!
For those of you who don't think Datastar is good enough for realtime/collaborative/multiplayer apps, and/or think you need any of the PRO features:
These three demos each run on a $5 VPS and don't use any of the PRO features. They have all survived the front page of HN. Datastar is a fantastic piece of engineering.
- https://checkboxes.andersmurphy.com/
- https://cells.andersmurphy.com/
- https://example.andersmurphy.com/ (game of life multiplayer)
On both the checkboxes/cells examples there's adaptive view rendering so you can zoom out a fair bit. There's also back pressure on the virtual scroll.
Can you explain how these work? Does the server send small subrectangles of the large grid when the user scrolls to new regions of the grid? Does the browser actually have a two-dimensional array in memory with a billion items, or is there some other data structure?
Yeah, the server only sends what the user is currently looking at, plus a buffer around their view. There's no actual checkbox state on the client. When the user clicks a checkbox, a depress animation is started and a request is made (which the server responds to with no data and a 204). The user then gets the HTML for the next view down a long-lived SSE connection that started when they first loaded the page. Because there's a long-lived connection, it gets really good compression. The same thing happens when the user scrolls. If they scroll far enough, a new view is rendered.
The billion items themselves just live on the backend, stored in a SQLite database.
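The windowing described above can be sketched like this (a hypothetical illustration; the names, sizes and buffer are invented and this is not the demo's actual code):

```javascript
// Server-side view windowing: from the user's scroll position, work out
// which slice of a huge grid to render and send. Everything outside the
// window (plus a small buffer) simply never leaves the server.
function visibleRows(scrollTop, rowHeight, viewportHeight, totalRows, buffer) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
  const last = Math.min(
    totalRows,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + buffer
  );
  return { first, last }; // only rows [first, last) get rendered
}

console.log(visibleRows(0, 20, 600, 1_000_000_000, 5)); // { first: 0, last: 35 }
```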
If I understand the code for these correctly though, you're not actually doing the "idiomatic" datastar things as the article describes? No diffing/patching individual elements, just rerender the entire page?
Tbh that mental model seems so much simpler than any or all of the other datastar examples I see with convoluted client state tracking from the server.
Would you build complex apps this way as well? I'd assume this simple approach only works because the UI being rendered is also relatively simple. Is there any content I can read around doing this "immediate mode" approach when the user is navigating across very different pages with possibly complicated widget states needing to be tracked to rerender correctly?
I mean, Datastar is pretty flexible. I'd say CQRS is pretty idiomatic if you want to do multiplayer/realtime stuff. As you mentioned, once you've set that up, the mental model is much simpler. That being said, the initial setup is more involved than request/response Datastar.
Yes, we are building complex accounting software at work with Datastar and use the same model. "Real UI" is often more complex, but a lot less heavy: fewer divs, less data, fewer concurrent users, etc. compared to these demos. Checkboxes are a lot more div-dense than a list of rows, for example.
> On both the checkboxes/cells examples there's adaptive view rendering so you can zoom out a fair bit.
how do you zoom out?
Also, even with your examples, wouldn't data-replace-url be a nice-to-have to auto update the url with current coordinates, e.g. ?x=123&y=456
Currently you zoom the page with cmd+/-. At some point I'll add buttons and proper quantised views.
> think you need any of the PRO features
Pro features? Now I see - it is open core, with a $299 license. I'll pass.
Good for you!
I don't use anything from pro and I use datastar at work. I do believe in making open source maintainable though, so I bought the license.
The pro stuff is mostly a collection of foot guns you shouldn't use and that are a support burden for the core team. In some niche corporate contexts they are useful.
You can also implement your own plugins with the same functionality if you want; it's just going to cost you time instead of money.
I find that devs complaining about paying for things never gets old. A one-off lifetime license? How scandalous! Sustainable open source? Disgusting. Oh, a proprietary AI model built on others' work without their consent that steals my data? Only $100 a month? Take my money!
It is $299 lifetime. That is extremely cheap
I don't think the article does a good job of summarising the differences, so I'll have a go:
* Datastar sends all responses using SSE (Server-Sent Events). Usually SSE is employed to let the server push events to the client, and Datastar does this, but it also uses SSE encoding for responses to client-initiated actions like clicking a button (clicking the button sends a GET request and the server responds with zero or more SSE events over a time period of the server's choosing).
* Whereas HTMX supports SSE as one of several extensions, and only for server-initiated events. It also supports Websockets for two-way interaction.
* Datastar has a concept of signals, which manages front-end state. HTMX doesn't do this and you'll need AlpineJS or something similar as well.
* HTMX supports something called OOB (out-of-band), where you can pick out fragments of the HTML response to be patched into various parts of the DOM, using the ID attribute. In Datastar this is the default behaviour.
* Datastar has a paid-for Pro edition, which is necessary if you want certain behaviours. HTMX is completely free.
I think the other differences are pretty minor:
* Datastar has a smaller library footprint, but both are tiny to begin with (11kb vs 14kb), which is splitting hairs.
* Datastar needs fewer attributes to achieve the same behaviours. I'm not sure about this; you might need to customise the behaviour, which requires more and more attributes, but again, it's not a big deal.
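For anyone unfamiliar with the wire format in the first bullet: SSE messages are plain text, an optional event: line plus data: lines, terminated by a blank line (per the SSE spec). The Datastar event name below is illustrative; exact names vary by version:

```javascript
// Frame one Server-Sent Events message.
function sseEvent(event, dataLines) {
  const head = event ? `event: ${event}\n` : "";
  return head + dataLines.map((l) => `data: ${l}`).join("\n") + "\n\n";
}

// e.g. a Datastar-style element patch pushed down an open connection
process.stdout.write(
  sseEvent("datastar-merge-fragments", ['fragments <div id="count">42</div>'])
);
```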
As someone on the sideline who's been considering HTMX, its alternatives and complements, this was a helpful comment! Even without having used any of it, I get the feeling they're going in the right direction, including HTMX author's humorous evangelism. If I remember correctly he also wrote Grug, which was satire and social criticism of high caliber.
some quibbles
D* doesn't only use SSE. It can do normal HTTP request-response as well. And an SSE response can carry 0, 1 or infinitely many events.
Calling Datastar's pro features "necessary" is a bit disingenuous - they literally tell people not to buy it because those features are not actually necessary. They're just bells and whistles, and some are actually a bad idea (in their own words).
Datastar is 11kb, and that includes all of the htmx plugins you mentioned (SSE, idiomorph) and much more (all of Alpine.js, essentially).
> Calling datastar's pro features "necessary" is a bit disingenuous
I didn't. I said:
> * Datastar has a paid-for Pro edition, which is necessary if you want certain behaviours. HTMX is completely free.
I don't need to spell out why this means something very different to what you think it means.
I'll happily concede on the other two quibbles.
Sorry, i can see how i misinterpreted the "necessary" part.
I recently read this: https://drshapeless.com/blog/posts/htmx,-datastar,-greedy-de...
Which states some of the basic (great) functionality of Datastar has been moved to the Datastar Pro product (?!).
I’m eager to support an open source product financially and think the framework author is great, but the precedent this establishes isn’t great.
Same for me...
I had been tracking Datastar for months, waiting for the 1.0.0 release.
But my enthusiasm for Datastar has now evaporated. I've been bitten by the open-source-but-not-really bait and switch too many times before.
As someone who wants to write open source but needs to be able to capture some financial value from doing that to be able to make it sustainable, what model do you prefer?
My current thoughts lean towards a fully functional open source product with a HashiCorp style BSL and commercial licensing for teams above a size threshold.
I think the open core model is fine, and the most financially sustainable. Just be up front about it from day 1. I don't think the honor system for licensing will get you the results you're wanting.
it depends strongly on why you want to write open source. if you like the idea of putting source code out into the world for other people to use and benefit from then go ahead and use whatever mix of open source and proprietary code you like, just be up front that that's what you are doing.
if you want to promise open source software simply to attract the mindshare and users who habitually ignore anything that isn't open source, trying to capture financial value may well be infeasible unless some rare confluence of stars lines up for you. the key is in the word "capture" - capturing the value implies making sure it goes to you rather than to someone else, and that means imposing restrictions that will simply piss those same users off.
Sell support.
I can't imagine that works very well for relatively small, simple, functional or intuitive projects though. Incentives wise, is it possible to sell reverse support: extracting payment for all the times the product works so well that support isn't needed?
Selling support can be rough. I talked with the developer of PyMol.
Many corporations wouldn't buy licenses and those that would pay for support wanted support for hardware that was 2 or 3 generations old.
Gentle reminder: please encourage your corporation to pay for open source support whenever possible.
That doesn't work. My day job is at Synadia working on NATS, and I can tell you 99.9% of people don't pay for support
As a NATS user, I can say that's mostly because it just works and when it doesn't it's pretty easy to figure out :)
yep, and you'd think someone that has seen that would pick a different model... idk, ngmi
Some solo dev projects are used as a platform to sell books, training, ads, speaking engagement, consulting, and development.
Maintainers need a way to maintain during the day - not just evenings and weekends. Otherwise it eventually dies.
Does HTMX have a pro version? Is it dead?
htmx isn't dead, but, from a semantic standpoint, it's pretty done:
https://htmx.org/essays/future/
there are bugs, but we have to trade off fixes against potentially breaking users sites that depend on (sometimes implicitly) the current behavior
this makes me very hesitant to make changes and accept PRs, but i also feel bad closing old issues without really diving deep into them
such is life in open source
Far from dead. Usage is growing.
HTMX is a single htmx.js file with like 4000 lines of pretty clearly written code.
It purports to - and I think succeeds in - adding a couple of missing hypermedia features to HTML.
It's not a "framework" - good
It's not serverside - good
Need to add a feature? Just edit htmx.js
Some people will throw a fit once they find out it's JS, and not TypeScript.
Nobody is preventing them from writing a layer of TS definitions on top of the JS. Check out postgres.js
It will be, eventually, unless a maintainer is able to maintain during the day. It doesn't matter what the source of free time is: retired, rich, runs a company off their open source project, paid by somebody else, etc. But full-time job + open-source maintainer = dead project, eventually.
Yes look at the active issues on GitHub. There's hundreds and some going back years with no traction.
I don't think open issues is a fair way to judge project liveness. TypeScript also has hundreds of open issues going back years with no traction. Is TypeScript dead?
Yes, issues that are years old show me the commitment level. Not a knock against HTMX but a clear sign of priorities. Carson is free to meme all day and talk about other projects. It's very clear where he stands and that's fine
this year I created and released fixi.js, created the montana mini computer (https://mtmc.cs.montana.edu), published a paper on hypermedia via the ACM, got hyperscript to 1.0, released 3 versions of htmx, reworked all the classes that I teach at montana state and am planning on releasing a java-based take on rails that I'm building for my web programming class
i am also the president of the local youth baseball program and helped get BigSkyDevCon over the hump
i think you'd be surprised at how little time i actually spend on twitter
as always, my issue is never with how you spend your time. you are a giver of gifts and I wish more people that relied on HTMX stepped up to make it better. in no way should anything be expected of you. How you spend your time is obviously your call. MIT is MIT
It was a rhetorical question; the answer is no, old issues with no updates don't necessarily indicate anything about the health of the project. Different people have different project management styles. You use your style for your project, and Carson uses his for htmx. There's no one correct way to manage an issue backlog.
My project is really healthy then, as I summarily close issues as “not planned” rather than leave them open.
source is MIT, do what you want. The team found certain plugins to be anti-patterns and support burdens. You can find the old plugins in the repo source, feel free to fork from there!
Wait so you pay to use the anti patterns? That’s a new one.
[dead]
I just come from writing a comment on the other Datastar post on the home page, literally saying that I don't see the point of it and that I don't like it.
But I'm now here to defend Datastar.
It's their code which, up to now, they built and gave away totally for free, under an MIT license. Everything (even what "they moved to the Pro tier") is still free and under the MIT license it was originally published under.
You just decided to rely on it and freeload (as, as far as I can tell, you never contributed to the project).
You decided to rely on a random third party that owns the framework. And now you're outraged because they've decided that from now on, future work will be paid.
You know the three magic words:
Just. Fork. It.
Calling the OP a freeloader is over the top.
The software was released as a free version, with NO expectation of it going commercial.
The fact that they switched to a paid version, stripping features out of the original free version, is called a "bait and switch".
If OP had known in advance, he would have been informed about this and the potential $299 price tag, and he would have been able to make an informed decision BEFORE integrating the code.
> You just decided to rely on it and freeload (as, as far as I can tell, you never contributed to the project).
But you complain about him being a freeloader for not contributing to a project. What a ridiculous response.
I feel like you never even read the post and are making the assumption that OP is a full-time programmer.
Datastar can do whatever they want, its their code. But calling out a *bait and switch* does not make OP the bad guy.
Yeah, I agree, it's over the top. I'm just matching the over-the-top language of the original post, which pretty much calls the Datastar devs "disgraceful" and to "f them".
I did read the post. I know OP is not a programmer. And that makes it even worse: OP has the audacity to say they "make no money from the project" while it is a scheduling tool for their presumably plenty-money-making clinic.
It would in fact be less shocking if they were a programmer doing a side project for fun.
This piece is not a rational, well-tempered article. It's a rant by someone who just took something that was free, is now outraged, and is saying fuck you to those who made their project possible in the first place, while not even understanding how licenses work or being aware that the code they relied on is still there, on GitHub, fully intact and available to them.
These sorts of people not only want to get it for free. They want their code to be maintained and improved for free in perpetuity.
They deserve to be called freeloaders.
The license makes it very clear that “no expectations” goes all round, including the right to other people doing free maintenance for you.
it's not a bait and switch; main has the features we are willing to continue to support given we did a whole rewrite, and this is what we think you should use. Don't like it? Fork it, the code is still there. I hope your version is better!
> it's not a bait and switch; main has the features we are willing to continue to support given we did a whole rewrite, and this is what we think you should use. Don't like it? Fork it, the code is still there. I hope your version is better!
It sounds like you are the dev of Datastar...
Let me give you one piece of advice: drop the attitude, because this is not how you interact in public as the developer of a paid piece of software.
You can get away with a lot when it's a free/hobby project, but the moment you request payment, there is a requirement for more professionalism. The reactions that I am reading will trigger responses that will hurt your future paycheck. You're already off to a bad start with this "bait and switch"; do not make it worse.
I really question your future client interactions, if they criticize your product(s) or practices.
> I hope your version is better!
No need for Datastar, my HTMX "alternative" has been in production (with different rewrites) over 20 years. So thank you for offering, but no need.
>Drop the attitude
I have to be honest, I dont see what's wrong with it.
They were accused of a bait and switch, which is not even half true. The old Pro code is still available under MIT. The newer version charges more. That is it.
I'll certainly defend d*'s right to do what they did, but the wisdom of doing so is going to come into question as soon as they reject a PR because it contains a feature that's in Pro. I don't think people who are concerned about that deserve to be called "freeloaders", but I guess a fork is a way out of such acidic rhetoric too.
D* has a core, which is open and will be set in stone soon when v1 is released, with the expectation that it'll barely, if ever, change again.
The rest is plugins, which anyone can write or modify. There's no need for the plugins to get merged upstream - just use them in your project, and share them publicly if you want. You could even do the same with the pre-pro versions of the pro plugins - just make the (likely minor) modifications to make them compatible with the current datastar core.
They're also going to be releasing a formal public plugin api in the next release. Presumably it'll be even easier to do all of this then.
Sounds like they put some real thought into it then, which is good news. I was picturing two different core distributions, which would create the sort of conflict I was imagining, but as long as core does stay maintained, it seems likely that fear will stay imaginary.
One might say they've put far too much thought into it all. It's very impressive.
FUD is all hackernews runs on apparently
As I answered somewhere else, the over-the-top freeloader term I think is justified because OP clearly expects not only to benefit from the work already available, freely, but also to be entitled, for free, to any work and improvement that comes in the future.
This is nonsensical. Someone did something for free. Fantastic. They used it, successfully, for a production system that enables scheduling for their job.
Nobody took that away from them. They didn't force them to rebuild their tool.
The code is even there, in the git history, available for them.
If OP doesn't like what the devs decided to do with the project, just move on or fork and pay someone to help you fix any outstanding bugs or missing features.
There is a generational divide in open source ideology over the past 10-20 years.
The modern view is what OP and much of the younger generation agree upon: it should always be open source and continue to be supported by the community.
The older view is basically take it or leave it: fork it into your own project, taking the maintenance burden with it.
Wait - what's wrong with that? It's their project, they can merge whatever PRs they want!
> Just. Fork. It.
The “outrage” is literally just people saying they’ll use a different project instead. Why would they ever fork it? They don’t like the devs of datastar they don’t want to use it going forwards. Yes the developers are allowed to do what they want with their code and time, but people are allowed to vote with their feet and go elsewhere and they are allowed to be vocal about it.
It gets worse.
I paid the one-off $299 for a pro license but have yet to find a reason to use any of the pro features.
I was hoping to need them for the google sheets clone [1] I was building but I seem to be able to do it without PRO features.
- [1] https://cells.andersmurphy.com/
I don't understand. Why is it a problem with Datastar if you buy their Pro license without needing it?
The comment is tongue in cheek. On the discord it was discussed at length and some of the plugins in the Pro version were actually considered anti-patterns, it actually is kinda easy to complicate things needlessly when getting used to D* and I know I did this too in the beginning.
As was said by the commenter in another reply, the inspector is actually the bit that makes the Pro version much more appealing but most people wouldn't know from the sidelines.
Arguably that's good though - for the project. It means it's not a bait and switch like many have claimed. You can build pretty much anything with regular Datastar.
I, also, was swindled by those cultists.
I thought the devs' emphatic assertions in their Discord NOT to buy Datastar Pro was a psyop dark pattern. I bought it to spite them, and barely use any of it. I want my css-in-js back!
he is out of line but he is right
Could not tell if sarcasm or not. This seems awesome to me. You are using a piece of software and supporting it.
Sorry, yes it was sarcasm (I should have indicated that explicitly). I'm happy to fund a tool that I really enjoy using, even if I don't use any of the PRO features.
Datastar always rubbed me the wrong way. The author was constantly pushing it in the HTMX discord, telling anyone who would listen that if they liked HTMX how great Datastar would be for them. Some pretty classy comments from them on reddit too:
> It was a full rewrite. Use the beta release forever if it has all the tools you need. No one is stopping you.
> Open source doesn't owe you anything and I expect the same back.
> The author was constantly pushing it in the HTMX discord, telling anyone who would listen that if they liked HTMX how great Datastar would be for them
You know who else does that? THE DEVELOPER OF HTMX! https://htmx.org/essays/alternatives/
> Some pretty classy comments from them on reddit too:
What is unclassy about those comments? Seem sensible to me...
Agree nothing unclassy. People have this strange expectation that an open source project is out there to serve every single person using it with total attention. It’s not, feel free to fork the beta and use it forever, make your own changes. The pro tier cost is a pittance for anyone using it for profit.
React and HTMX don't have a PRO tier.
HTMX doesn't do half of what Datastar does. And Datastar's free version does 99% of what the pro version does.
And react should be paying people to take on its immense performance and maintenance burden
React is literally maintained by a consortium of the world's biggest companies and before this Facebook/Meta. This is a ridiculous thing to say.
Going to vouch for this. Why does it matter what other people do? This is such a non issue, you are free to fork it and do your own work. I actually believe more open source repos should tastefully have paid tiers to help pay for the continued work.
I feel like you can push your own thing in your own discord…
Something about riding the hype train for a fully open and free library you did not create to push your product just feels strange to me.
there is (or at least was) literally a dedicated Datastar channel in the htmx discord...
Which has since been archived. Last post from “Datastar CEO”. I mean cmon, it’s a little cringe. That meme is funny when it’s about HTMX. Like at least try your own memes instead of riding on Carson’s sense of humor too.
I dont disagree. I wouldn't dare try to follow in Carson's shitposting/memeing footsteps - that's a line far too fine for me to walk.
Well, Datastar started as an attempt to make HTMX a framework instead of just a library. It was part of the HTMX Discord for years.
It's also pretty shady that no mention is made of Datastar Pro on the home page [1]. You might well be well on the way to integrating Datastar into your website before you stumble across the Pro edition, which is only mentioned on the side bar of the reference page [2].
[1]: https://data-star.dev/ [2]: https://data-star.dev/reference/datastar_pro#attributes
Isn't that only a problem if it advertised pro features there without mentioning the fact that they're paid? If it didn't then you could just be happy with the free features, no?
I'd expect it to make it explicit this is a freemium product, with free features and paid features. Nothing is given on the home page to indicate as such.
If they aren’t leading to expect that they have the paid features for free, how is offering them for money any different from just not offering those features at all?
It's not like your existing use cases stop working past 10 users or something.
if a feature I want is in the paid product then I assume there's less chance of it being added to the free version. every feature has to go through a process to decide if it's paid or free.
If there's money to be made the possibility that the feature will ever exist at all goes way up. I'd rather have the ability to pay for a feature if I decide I need it than to hope some maintainer gets around to building it for free.
They've said that the feature they put in the premium product are the features they don't want to build or maintain without being paid to do so.
I know the projects/specifics are completely different but this immediately reminded me of Meteor.js from back in the day
https://news.ycombinator.com/item?id=9569799
Technically very different but emotionally yes very the same but a lot simpler
Yeah, cool, I think this is the point. People want to get paid for the work they produce, and the open source dynamic is widely known to be unsustainable.
I like the communal aspect of open source, but I don't like overly demanding and entitled freeloaders. I've had enough of that in my well paid career over the last decade.
This way of getting paid may or may not resonate, but I applaud the attempt to make it work.
I don't get why people get so worked up over Datastar's pro tier - you almost certainly don't need it.
Because the incentive is now there. Maybe they don't get enough paid customers and want more money. This puts a bit of pressure to move a new feature that is really handy into the paid level. Then another and another. Might not happen but it could.
Most people using Datastar will not necessarily be smart enough to fork it and add their own changes. And when Datastar makes a new release of the base/free code people will want to keep up to date. That means individuals have to figure out how to integrate their already done changes into the new code and keep that going. It's not a matter of if something breaks your custom code but when.
Finally, many people internalize time as money with projects like this. They're spending many hours learning to use the framework. They don't want to have the effort made useless when something (ex: costs or features) changes outside of their control. Their time learning to use the code is what they "paid" for the software. Doesn't matter if it's rational to you if it is to them.
That's how people tick. We aren't satisfied with what we have if there is more. Doesn't matter if we need it.
yeah, but i WANT it
The inspector is great, but it's too much work to swap out the free bundle for the pro bundle every time I want to use it.
I'm only working in local dev right now, so I've got the pro version and inspector going. When I get to prod, perhaps this will be a problem.
Yet surely this could just be toggled with an env var or DB setting or something? If dev, include the pro and inspector components; if prod, use the free version (or a custom bundle that only has what you need).
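The env-var toggle suggested above could be sketched roughly like this. The bundle paths and file names here are hypothetical placeholders, not actual Datastar artifacts; substitute your own build outputs.

```javascript
// Pick which Datastar bundle to serve based on the environment.
// File names are illustrative assumptions, not real Datastar paths.
function datastarScriptTag(env) {
  const src = env === "production"
    ? "/static/datastar.js"      // lean free bundle for prod
    : "/static/datastar-pro.js"; // pro bundle + inspector for local dev
  return `<script type="module" src="${src}"></script>`;
}
```

Server-side templating would then emit `datastarScriptTag(process.env.NODE_ENV)` in the page head, so prod users never download the pro/inspector code.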
Finally someone is speaking truth to power. These registered non-profits that release their code for free and their leisure time for support need to be knocked down a notch.
We all know they are evil. But you know the most evil thing? That code that was previously released under a free license? Still sneakily on display in the git history like the crown jewels in the Tower of London. Except instead of armed guards defending the code that wants to be free once more, it's hidden behind arcane git commands. Name me a single person who knows how to navigate the git history. I'm waiting. Spoiler alert: I asked Claude and they don't exist.
Sure, but this person is a doctor (or similar) who took time to learn to code this form up to better serve their patients. They are most likely blessedly ignorant of software licenses and version control.
As I read it the op said, "I don't like how they changed this license, this is a bad direction and I didn't think there was adequate transparency."
And your rebuttal is, "Well you can always recover the code from the git history?"
I mean, this is true, but do you think this really addresses the spirit of the post's complaint? Does mentioning they're a non-profit change anything about the complaint?
The leadership and future of a software project is an important component in its use professionally. If someone believes that the project's leadership is acting in an unfair or unpredictable way then it's rational and prudent for them to first express displeasure, then disassociate with the project if they continue this course. But you've decided to write a post that suggests the poster is being irrational, unfair, and that they want the project to fail when clearly they don't.
If you'd like to critique the post's points, I suggest you do so rather than straw manning and well-poisoning. This post may look good to friends of the project, but to me as someone with only a passing familiarity with what's going on? It looks awful.
[flagged]
Oh I did. I got rid of it. Inspiring both constant censure and the kind of response you're giving drove me to despair.
I don't write things for public consumption now.
But we're not talking about me or the post. We're talking about your refusal to engage with the implications of what the project did.
I don't care what Datastar does. I'd never use Datastar. Looks like exactly what I don't need. They can certainly govern their product as they see fit.
But I've disassociated from projects for less egregious unannounced terms changes. And I've never had that decision come out for the worst, only neutral or better.
Good luck with your future endeavors, I guess.
I love everything about this answer
1. Open LICENSE on GitHub
2. Click on the commit ID
3. You’ll see something like “1 parent: fdsfgsd” – click through to that commit
4. Browse
I mean, it’s a shitty move for sure, but eh.
(Parent was being sarcastic)
Huh! I’m getting rusty.
Yes, much power! Datastar is the worst, how dare they?
[dead]
So let's say you wanted to use data-animate but on the free edition, would you just add some JS/CSS glue logic to make it work?
Yup, why not? CSS animations go brrrr, and anime.js is a great library.
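As a hedged sketch of what that DIY glue might look like: a tiny helper that fades in elements after they're patched into the DOM, using a plain CSS transition. The function names and defaults are illustrative, not any Datastar or anime.js API.

```javascript
// Build the CSS transition value for a simple opacity fade.
function fadeTransition(durationMs = 300, easing = "ease") {
  return `opacity ${durationMs}ms ${easing}`;
}

// Apply the fade to a freshly inserted element (browser-only).
function fadeIn(el, durationMs = 300) {
  el.style.transition = fadeTransition(durationMs);
  el.style.opacity = "0";
  // Flip opacity on the next frame so the transition actually runs.
  requestAnimationFrame(() => { el.style.opacity = "1"; });
}
```

You'd call `fadeIn(el)` from whatever hook fires when the server patches new content in, which covers a lot of what a paid animation attribute would otherwise do.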
It's good to know. Having replace-url functionality behind the paywall is likely to be a deal-killer; I can't help but think that this "freemium" model is really going to kill Datastar's prospects for taking off. At best it's likely to result in a fork which ends up subsuming the original project.
That said, the attitude of the guy in the article is really messed up. Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free -- it's entitled to a toxic degree, and poisons the well for anyone else who may want to do something open-source.
It's more like the mouse saying fuck you to the trap holding the cheese. It's not that the mouse isn't grateful for the free cheese. It's just the mouse understands the surrounding context.
Except the trap isn't actually set, and there's a mound of cheese next to it anyway.
The freemium model of everything makes me skeptical and reluctant to buy too much into many things.
Bit like Pydantic. It's a JSON parsing library at the end of the day, and now suddenly that's got a corporate backer and they've built a new thing
Polars is similar. It's a faster Pandas and now suddenly it's no longer the same prospect.
FastAPI the same. That one I find even more egregious since it's effectively Starlette + Pydantic.
Edit: Add Plotly/Dash, SQLAlchemy, Streamlit to that list.
I am totally skeptical about freemium too. Are FastAPI and SQLAlchemy freemium too? I didn't know that. Can you share more info, please?
There's now a "FastAPI Cloud" product that the author is working on.
SQLAlchemy just has paid for support, I shouldn't have included it with the others, I must have confused it with something else.
Thanks for the information.
It's a 501(c)(3) with no shares. Please tell me how it's the same?
I referenced the article with hesitation for the same reason, don't think the position the critique takes is great.
Interestingly, this article pops up first page if you search "htmx vs datastar".
> Saying "fuck you" to someone who gave you something amazing for free, because he's not giving you as much as you want for free
I don't have a problem, on principle, with paywalling new features. I don't like it, but I don't think it's bad behaviour.
Putting up a paywall around features that were previously free, however, I do take issue with. It's deceptive and it's not common practice. It tricks people into becoming invested and then holds hostage the features that they've become invested in using. Frankly, fuck that.
[flagged]
I'm normally not one to discourage anyone from open-source; but if toxic entitlement is going to get you this worked up, you might consider whether it's really your thing. The more successful you are the more you're going to encounter.
On the latter point, couldn't disagree more. He's saying "fuck you" to the product, not the person, and unilaterally removing extant features to paywall them is, imo, poisoning the well far more than a simple FU to a developer ever could.
Fair enough, but the use of coarse language can also impair the underlying point. i.e. the shock value can derail the reader.
For such reasons, The Economist style guide advises against using fancy language when simpler language will suffice.
https://en.wikipedia.org/wiki/Tone_policing
I don't really have much of a response beyond this.
err, fuck is simple
We didn't remove the features; if you want to use the old ones, they're still there in the repo. We just didn't want to support the old way of doing them when we actively tell people not to use them. If you're going to be a support burden going forward, we want you to have skin in the game. If not, cool, do it yourself; no one's going to get mad at you.
Deprecating a feature and replacing it with a paywalled version is imo a distinction without a difference.
You're of course free to do it, just as I'm free continue to use other products which do not do this.
In any hypothetical open source project I make from now on, where I am the owner and sole director, I'll just get rid of such features entirely if they cause an undue support burden (which the Datastar dev has gone up and down both threads saying is what happened), specifically to avoid comments like yours.
That seems to fit your world view better, and then I can just leave those people high and dry with much less concern!
You're not owed these people's time.
Blog post author here. I never expected my blog post to get this much attention. I was emotional when I wrote it because I had spent a couple of weeks rewriting a service for self use, and the service was almost completely migrated from htmx to Datastar.
I was facing a situation where I either needed to stick with the beta or pay for the pro version, as I was using the replace-url function a lot.
I felt emotionally betrayed. I went to the Datastar Reddit thread to raise my doubt about whether more of the features I rely on in the free version would be stripped out and put behind the paywall. I was fine with converting my service to purely free-tier features, and once my service was stable and usable, I was very willing to buy a pro license.
But you know what? The Datastar author jumped in and stated two points. He said the release version of Datastar is a full rewrite, and if I am not paying, I could stay on the beta or fork it. And in the open source world, he owed me nothing. Very legitimate points.
However, the real reason behind that "fuck you" statement is that I was attacked by the Datastar Discord members multiple times. In one of the humiliating replies I got, that person said someone in the Discord server told them to show support for Datastar. Instead of supporting, they just mocked me and called me a troll, as if I were an obstacle to their potential success. Multiple people, multiple times.
I noticed some comments in the thread said that I don't know how to use version control, or that I'm ignorant about software licenses. Well, I do use version control and occasionally contribute to open source projects. I am a doctor; I may not be as skillful as you all, but I do know some basics of programming.
Our Discord is generally a friendly place, but not the nicest. If you can't back up your ideas or defend your code with metrics, you are going to have a bad time. We help those who help themselves. IIRC you were forcefully telling us how things should work so it'd be more like HTMX. We tend to go tit for tat, so go back and look at whether we were actively dissuading you from bad ideas.
Odd statement from a doctor using this at his practice:
"It is not like $299 is much for me, but I am just a hobbist."
It's one of the worst blog posts I've ever read.
They kind of have a point but everything around it is ridiculous.
Yeah man. He's just a hobbit. But aren't we all just hobbits, really?
Is the greedy developer in the title the one who wants the 3rd party for free without contributing, or the developer who wrote the said 3rd party and asking compensation?
I am confused.
The problem is that the developer of Datastar did a bait and switch: releasing the beta for free, and then moving features into a pro version with a price tag.
There's nothing wrong with people making money on their software, but you need to make it clear from the start that it will be paid software, and in what price range.
Bait and switch is often used to get people to use your software: you invest time in it, and then if you need a pro feature, well, fork up or rework your code again. So you're paying with your time or your money. This is why it's nasty and gets people riled up.
It's amazing how many people are defending this behavior.
Correct me if I am wrong here, but what you had for free, you still have for free: since it's an MIT license, what you cloned initially is still "yours".
Is the problem that one needs to fork/maintain the code from now on? Or is the problem that one wants free support on top of the free library?
MIT, the source is still there; look at the tags and fork it. You have the same rights as me!
Take a chill pill, sudodevnull... As I stated before, turn down the rhetoric. You're not helping yourself with these "!" reactions.
You keep accusing me of something that's a lie. Rhetoric indeed
A lot of the Pro plugins can be self-developed. An example: there is a poor man's inspector plugin at [1].
The replace-url thing should be a simple JS code using history API no?
[1] https://github.com/sudeep9/datastar-plugins?tab=readme-ov-fi...
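The replace-url idea from the comment above can indeed be sketched with the plain History API. This is a hedged sketch, not Datastar's implementation: `buildUrl` and `replaceUrl` are hypothetical helper names, and only the `history.replaceState` call is the real browser API.

```javascript
// Build a URL from a path and a params object.
function buildUrl(path, params = {}) {
  const qs = new URLSearchParams(params).toString();
  return qs ? `${path}?${qs}` : path;
}

// Swap the address bar URL without adding a history entry.
function replaceUrl(path, params = {}) {
  const url = buildUrl(path, params);
  // Guarded so the helper is a harmless no-op outside a browser.
  if (typeof history !== "undefined" && history.replaceState) {
    history.replaceState(history.state, "", url);
  }
  return url;
}
```

Something like `replaceUrl("/patients", { page: "2" })` after a server patch would keep the URL in sync, which covers the common use case without the paid plugin.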
> But the one that most inspired me was a web app that displayed data from every radar station in the United States.
Anyone have a link for this?
Yikes.
I'm not opposed to open source projects placing features that realistically only large/enterprise users would use behind a paywall, i.e. the open core model. When done fairly, I think this is the most sustainable way to build a business around OSS[1]. I even think that subscriptions to such features are a fair way of making the project viable long-term.
But if the project already had features that people relied on, removing them and forcing them to pay to get them back is a shitty move. The right approach would've been to keep every existing feature free, and only commercialize additional features that meet the above criteria.
Now, I can't say whether what they paywalled is a niche/pro feature or not. But I can understand why existing users wouldn't be happy about it.
[1]: https://news.ycombinator.com/item?id=45537750
If we're talking about something immense, like Redis, you might have a point. But we're talking about a few hundred lines of simple JavaScript that are still available to fork and update to be compatible with the new API. The fact that no one has done such a simple thing yet means this is a non-issue.
The thing is there's not much practical difference for users. They might not be aware that it's only a few hundred lines of code, and it really doesn't matter. The point is that they were depending on a software feature one day, and the next they were asked to pay for it. That's the very definition of a rugpull. Whether it's a few hundred lines of code, several thousand, or the entire product, the effect is the same.
Forking is always an option, of course, but not many people have the skills nor desire to maintain a piece of software they previously didn't need to. In some cases, this causes a rift in the community, as is the case for Redis/Valkey, Terraform/OpenTofu, etc., which is confusing and risky for users.
All of this could've been avoided by keeping all existing features freely available to everyone, and commercializing new value-add features for niche/enterprise users. Not doing that has understandably soured peoples' opinion of the project and tarnished their trust, as you can see from that blog post, and comments on here and on Reddit. It would be a mistake to ignore or dismiss them.
One other comment though: a lot of what you said rests upon the notion that people were relying on these features.
First, barely anyone used Datastar at that point, and those features were particularly arcane. So the impact was minimal.
Second, it's likely that even fewer of them contributed anything at all to the project in general, and those features in particular. What claim do they have to anything - especially when it was just freely given to them, and not actually taken away (the code is still there)?
And to the extent that they can't or won't fix it themselves, what happens if the dev just says "I'm no longer maintaining Datastar"? You might say "well, at least he left them something usable", but how is that any different from considering the pro changes to just be a fork? In essence, he forked his own project - why does anyone have any claim to any of that?
Finally, if they can't fix it themselves (especially when AI could almost certainly fix it rapidly), should they really be developing anything?
In the end, this really is a non-issue. Again, most of the furor is quite clearly performative. It's like when DHH removed TypeScript from one of his projects that he and his company maintain, and people who have nothing to do with Ruby came out of the woodwork to decry the change in his GitHub repo. And even if they do have something to do with Ruby, they have no say over how he writes his code.
> a lot of what you said rests upon the notion that people were relying on these features.
They were, though. The blog post linked above, and several people in the Reddit thread linked in the blog post mentioned depending on these features.
We can disagree about whether it matters that a small percentage of people used them, but I would argue that even if a single person did, a rugpull is certainly a shitty experience for them. It also has a network effect, where if other people see that developers did that, they are likely to believe that something similar in the future can happen again. Once trust is lost, it's very difficult to gain it back.
> Second, its likely that even fewer of them contributed anything at all to the project in general, and those features in particular. What claim do they have to anything - especially when it was just freely given to them, and not actually taken away (the code is still there)?
I think this is a very hostile mentality to have as an OSS developer. Delaney himself expressed something similar in that Reddit thread[1]:
> I expect nothing from you and you in turn should expect nothing from me.
This is wrong on many levels.
When a software project is published, whether as open source or otherwise, a contract is established between developers and potential users. This is formalized by the chosen license, but even without it, there is an unwritten contract. At a fundamental level, it states that users can expect the software to do what it advertises to do. I.e. that it solves a particular problem or serves a particular purpose, which is the point of all software. In turn, at the very least, the developer can expect the project's existence to serve as an advertisement of their brand. Whether they decide to monetize this or not, there's a reason they decide to publish it in the first place. It could be to boost their portfolio, which can help them land jobs, or in other more direct ways.
So when that contract is broken, which for OSS typically happens by the developer, you can understand why users would be upset.
Furthermore, the idea that because users are allowed to use the software without any financial obligations they should have no functional expectations of the software is incredibly user hostile. It's akin to the proverb "don't look a gift horse in the mouth", which boils down to "I can make this project as shitty as I want to, and you can't say anything about it". At that point, if you don't care about listening to your users, why even bother releasing software? Why choose to preserve user freedoms on one hand, but on the other completely alienate and ignore them? It doesn't make sense.
As for your point about the code still being there, that may be technically true. But you're essentially asking users to stick with a specific version of the software that will be unmaintained moving forward, as you focus on the shiny new product (the one with the complete rewrite). That's unrealistic for many reasons.
> And to the extent that they can't or wont fix it themselves, what happens if the dev just says "im no longer maintaining datastar anymore"?
That's an entirely separate scenario. If a project is not maintained anymore, it can be archived, or maintenance picked up by someone else. Software can be considered functionally complete and require little maintenance, but in the fast moving world of web development, that is practically impossible. A web framework, no matter how simple, will break eventually, most likely in a matter of months.
> Finally, if they cant fix it themselves (especially when AI could almost certainly fix it rapidly), should they really be developing anything?
Are you serious? You expect people who want to build a web site and move on with their lives to dig into a foreign code base, and fix the web framework? It doesn't matter how simple or complex it is. The fact you think this is a valid argument, and additionally insult their capability is wild to me. Bringing up "AI" is laughable.
> Again, most of the furor is quite clearly performative.
Again, it's really not. A few people (that we know of) were directly impacted by this, and the network effect of that has tarnished the trust other people had in the project. Doubling down on this, ignoring and dismissing such feedback as "performative", can only further harm the project. Which is a shame, as I truly do want it to gain traction, even if that is not the authors' goal.
Anyway, I wish you and the authors well. Your intentions seem to come from the right place, but I think this entire thing is a misstep.
[1]: https://old.reddit.com/r/datastardev/comments/1lxhdp9/though...
The sibling comment already thoroughly addressed all of this, so there's no need for me to do so, other than to say that, despite your good intentions, you don't seem to have even the slightest understanding of open source.
Here's the text of the mit license https://mit-license.org/
At no point does it say anything like "I am obliged to maintain this for you forever, or even at all, let alone to your liking"
> despite your good intentions, you don't seem to have even the slightest understanding of open source
Please. Resorting to ad hominem when you don't have good arguments against someone's opinion is intellectually lazy.
> At no point does it say anything like "I am obliged to maintain this for you forever, or even at all, let alone to your liking"
I'm well familiar with most OSS licenses. I never claimed they said this.
My point was about an unwritten social contract of not being an asshole. When you do a public deed, such as publishing OSS, and that project gains users, you have certain obligations to those users at a more fundamental level than the license you chose, whether you want to acknowledge this or not.
When you ignore and intentionally alienate users, you can't be surprised when you receive backlash for it. We can blame this on users and say that they're greedy, and that as a developer you're allowed to do whatever you want, because—hey, these people are leeching off your hard work!—but that's simply hostile.
The point of free software is to provide a good to the world. If your intention is to just throw something over the fence and not take users into consideration—which are ultimately the main reason we build and publish software in the first place—then you're simply abusing this relationship. You want to reap the benefits of exposure that free software provides, while having zero obligations. That's incredibly entitled, and it would've been better for everyone involved if you had kept the software private.
There's literally no ad hominem where you claimed there was. That itself is ad hominem.
I'll go further this time - not only do you not understand open source licensing or ecosystem even slightly, but it's genuinely concerning that you think that someone sharing some code somehow creates "a relationship" with anyone who looks at it. The point of free software is free software, and the good to the world is whatever people make of that.
Again, the only people who seem to be truly bothered by any of this are people who don't use datastar.
Don't use it. In fact, I suspect that the datastar maintainers would prefer that you, specifically, don't use it. Use it to spite them! We don't care.
I also retract my statement about you having good intentions/communicating in good faith. I won't respond to you again.
> the only people who seem to be truly bothered by any of this are people who don't use datastar.
Yeah, those silly people who were previously interested in Datastar, and are criticizing the hostility of how this was handled. Who cares what they think?
> Don't use it. We don't care. In fact, I suspect that the datastar maintainers would prefer that you, specifically, don't use it.
Too bad. I'll use it to spite all of you!
> I also retract my statement about you having good intentions/communicating in good faith.
Oh, no.
> a rugpull is certainly a shitty experience for them
It would certainly be a shitty experience, if there actually was a rugpull, which there was not. People who were using the version of Datastar that had all those features are still free to keep using that version. No one is taking it away. No rug was pulled.
> a contract is established between developers and potential users
Sorry, but no. The license makes this quite clear–every open source license in the world very explicitly says 'NO WARRANTY' in very big letters. 'No warranty' means 'no expectations'. Please, don't be one of those people who try to peer-pressure open source developers into providing free software support. Don't be one of the people who says that 'exposure' is a kind of payment. I can't put food on my table with 'exposure'. If you think 'exposure' by itself can be monetized, I'm sorry but you are not being realistic. Go and actually work on monetizing an open source project before you make these kinds of claims.
> why even bother releasing software?
Much research and study is not useful for many people. Why even bother doing research and development? Because there are some who might find it useful and convert it into something that works for themselves. Open source software is a gift. The giving of the gift does not place obligations on the giver. If you give someone a sweater, are you expected to keep patching it whenever it develops holes?
> If a project is not maintained anymore, it can be archived, or maintenance picked up by someone else.
Then why can't it be maintained by someone else in the case of using the old free version?
> A web framework, no matter how simple, will break eventually, most likely in a matter of months.
Sure, the ones that depend on a huge npm transitive dependency cone can. But libraries or frameworks like htmx and Datastar are not like that, they are single <script> files that you include directly in your HTML. There is no endless treadmill of npm packages that get obsoleted or have security advisories all the time.
> You expect people who want to build a web site and move on with their lives to dig into a foreign code base, and fix the web framework?
Well...ultimately, if I use some open source software, I am actually responsible for it. Especially if it's for a commercial use case. I can't just leech off the free work of others to fix or maintain the software to my needs. I need to either fix my own issues or pay someone to do it. If the upstream project happens to do it for me, I'm in luck. But that's all it is. There is ultimately no expectation that open source maintainers will support me for free, perpetually, when I use their software.
> A few people (that we know of) were directly impacted by this
What impact? One guy blogged that just because there are some paid features, it automatically kills the whole project for him. There's no clear articulation of why exactly he needs those exact paid features. Everything else we've seen in this thread is pile-ons.
> Doubling down on this, ignoring and dismissing such feedback as "performative"
Aren't you doing the same thing? You have been ignoring and dismissing the feedback that this is actually not that big of a deal. Why do you think that your opinion carries more weight than that of the actual maintainers and users of the project?
The open core part of the project was removed from npm and is now available only on GitHub. There are no published plugins from the community, nor is there a repo where the community could have collaborated on adding OSS plugins.
Are people being entitled for expecting it? Yes. Is there anything stopping people from taking up this work and creating a repo? No. But it is illustrative of the owners' attitude. The point is not to accuse them of a rug pull, but to ask how confident the community can be in taking a dependency on such a project. The fact that the lead dev had to write an article addressing misunderstandings reflects how the community feels about this.
The argument on their Discord for licensing for professional teams ("contact us for pricing") is that it depends on the number of employees in the company, including non-tech folks.
> People who were using the version of Datastar that had all those features are still free to keep using that version.
Why are you ignoring my previous comment that contradicts this opinion?
> No one is taking it away. No rug was pulled.
When Redis changed licenses to SSPL/RSAL, users were also free to continue using the BSD-licensed version. Was that not a rug pull?
In practice, it doesn't matter whether the entire project was relicensed, or if parts of it were paywalled. Users were depending on a piece of software one day, and the next they were forced to abide by new terms if they want to continue receiving updates to it. That's the very definition of a rug pull. Of course nobody is claiming that developers physically took the software people were using away—that's ridiculous.
> Sorry, but no. The license makes this quite clear
My argument was beyond any legal licensing terms. It's about not being an asshole to your users.
> I can't put food on my table with 'exposure'.
That wasn't the core of my argument, but you sure can. Any public deed builds a brand and reputation, which in turn can lead to financial opportunities. I'm not saying the act of publishing OSS is enough to "put food on your table", but it can be monetized in many ways.
> Open source software is a gift. The giving of the gift does not place obligations on the giver. If you give someone a sweater, are you expected to keep patching it whenever it develops holes?
Jesus. There's so many things wrong with these statements, that I don't know where to start...
OSS is most certainly not a "gift". What a ridiculous thing to say. It's a philosophy and approach of making computers accessible and friendly to use for everyone. It's about building meaningful relationships between people so that we can all collectively build a better future for everyone.
Seeing OSS as a plain transaction, where users should have absolutely no expectations beyond arbitrary license terms, is no better than publishing proprietary software. Using it to promote your brand while ignoring your users is a corruption of this philosophy.
> Then why can't it be maintained by someone else in the case of using the old free version?
I addressed this in my previous comment.
> Sure, the ones that depend on a huge npm transitive dependency cone can. But libraries or frameworks like htmx and Datastar are not like that
Eh, no. Libraries with fewer dependencies will naturally require less maintenance, but they are not maintenance-free. Browsers frequently change. SDK language ecosystems frequently change. Software doesn't exist in a vacuum, and it is incredibly difficult to maintain backwards compatibility over time. Ask Microsoft. In the web world, it's practically impossible.
> What impact? One guy [...]
Yeah, fuck that guy.
> Everything else we've seen in this thread is pile-ons.
Have you seen Reddit? But clearly, everyone who disagrees is "piling on".
> Aren't you doing the same thing? You have been ignoring and dismissing the feedback that this is actually not that big of a deal. Why do you think that your opinion carries more weight than that of the actual maintainers and users of the project?
Huh? I'm pointing out why I think this was a bad move, and why the negative feedback is expected. You can disagree with it, if you want, but at no point did I claim that my opinion carries more weight than anyone else's.
> Why are you ignoring my previous comment that contradicts this opinion?
Because it doesn't contradict it, it just disagrees with it. Because what actual argument did you have that people using an old version of the software can't keep using it? The one about things constantly breaking? On the web, the platform that's famously stable and backward-compatible? Sorry, I just don't find that believable for projects like htmx and Datastar which are very self-contained and use basic features of the web platform, not crazy things like WebSQL for example.
> When Redis changed licenses to SSPL/RSAL, users were also free to continue using the BSD-licensed version. Was that not a rug pull?
Firstly, there are tons of people on old versions of Redis who didn't even upgrade through all that and weren't even impacted. Secondly, Redis forks sprang up almost immediately, which is exactly what you yourself said was a viable path forward in an earlier comment–someone new could take over maintaining it. That's effectively what happened with Valkey.
> My argument was beyond any legal licensing terms.
And my argument is that there is no 'beyond' legal licensing terms, the terms are quite clear and you agree to them when you start using the software. In your opinion should it be standard practice for people to weasel their way out of agreed license terms after the fact?
> Any public deed builds a brand and reputation, which in turn can lead to financial opportunities.
Notice that you're missing quite a lot of steps there, and even then you can only end with 'can lead' to financial opportunities. Why? Because there's no guarantee that anyone will be able to monetize exposure. No serious person would claim that that uncertain outcome constitutes any kind of 'contract'. Anyone who does should be rightly called out.
> It's about building meaningful relationships between people in ways that we can all collectivelly build a better future for everyone.
Then by your own logic shouldn't everyone contribute to that effort? Why is it that only the one guy who creates the project must bear the burden of maintaining all of it in perpetuity?
> Seeing OSS as a plain transaction
Isn't that what you are doing by claiming that OSS is about providing software in exchange for exposure?
> Yeah, fuck that guy.
The guy who didn't even explain what exactly he lost by not being able to use the new paywalled features? The guy who likely was not impacted at all, and was just ranting on his blog because he didn't like someone monetizing their own project? You want us to take that guy seriously?
> everyone who disagrees is "piling on".
Everyone who disagrees? Yeah. Anyone who provides a coherent argument about exactly what they are missing out on by not being able to afford the paid version? I would take them seriously. I haven't seen anyone like that here.
I've made similar points to the maintainers. It is what it is at this point.
But, honestly, to the people who actually understand, like and use Datastar, none of this matters. Most of the outrage is performative, at best - as can be seen by the pathetically superficial quality of the vast majority of criticisms in threads like this.
Frankly, if people can't/won't see that the devs are very clearly not VC rugpull assholes, and that the vast majority of the functionality is available for free, then they're probably also the sorts of people who aren't a good fit for something that is doing a great rethink of web development. The devs very explicitly are not trying to get rich (nor can they, due to the 501c3!) nor do they want this to be something massive - they're building it for their own needs, first and foremost, and for those who understand that vision.
I tried to understand this, but it seems like a non-native English speaker met an LLM and used it to create a blog post. Can someone please explain why this exists?
Sorry for that. I am really a non-native English speaker. I did not use LLM, just bad English.
cause people hate how you give your own gifts to the world of open source. fuck 'em
https://www.youtube.com/watch?v=vagyIcmIGOQ&t=20017s
DHH is spot on like he is on many issues!
There is a saying in my language which translates: give someone a hand and they'll take your whole arm.
Having so many features behind a Pro gate makes this a non-starter for enterprise. How would anyone convince their company to adopt this?
People can develop open source equivalents you know, you're not required to use the pro version to get a certain feature. From my understanding, datastar was designed to be entirely modular and extensible.
EXACTLY which feature do you need?
> I had a running service written in htmx for some time. It is a clinic opening hour service to inform my patients when I will be available in which clinic. (Yes, I am not a programmer, but a healthcare professional.)
-> that was pretty freaking cool to read, loved it
also chuckled at the idea of my website-making health professional going all "What the fuck" in front of his codebase.
If the developer rug pulled once they will probably do it again. Thx for the heads up.
what was taken from you? point to the source history that's been removed please. It's funny that stuff like this means people won't ever develop in the open. Hope that makes y'all happy
I was looking for a tool to follow along with signal patches and was a bit disappointed to see the inspector is under "pro"- that and the query string sync are the two nice-to-haves.
so you want nothing to be useful in Pro? you are telling devs how to spend their time and effort?
[flagged]
Yeah, but as the HTMX author said, HTMX sucks! Definitely should use Datastar!
[flagged]
Hah, I downvoted you, without realizing you're the guy.
Thanks for your work, and keep fighting the good fight!
> But no, the datastar dev somehow move a portion of the freely available features behind a paywall. What the fuck.
Bait & Switch. They're in their right to do it, but it's a bad move, and nobody should use their project^M^M^M^Mduct anymore.
and you're in your right to fork the pre-pro versions of the now-pro plugins, update them to be compatible with the current version of the open-source core (a surely trivial task), and share them with the world. You can call your plugin pack d-free
I'd rather not get spammed by Bait&Switch projects that turn into products, thank you.
that's just not true, the current ones are a complete rewrite. Use the old way to your heart's content. It's MIT
[flagged]
I'm afraid this is rather the quality of a non-native writer; sorry, we are not all from the mother US of A
I'm not referring to the language barrier - I live in a place where I write and speak at a juvenile level. I'm referring to the very low quality of thinking on display in the article, and in this reply (english is not from the US of A. And, moreover, the level of literacy in that country is nothing to envy)
Here's a HN post from today for a coherent article about the same topic: https://news.ycombinator.com/item?id=45536000
I find it much more interesting to read from people who do programming as a hobby.
They focus on the practical solutions much more than on the typical bikeshedding.
"Bikeshedding" means debating aspects that don't matter (much; i.e., the color of the bike shed); the linked blog post isn't about aesthetic changes.
Nor was my intent to label any of the linked blogs as such.
My intent was to say that hobbyists have a different, refreshing approach to programming and it's technologies that I appreciate.
I don't understand the examples in this post. For example, how does:
<span hx-target="#rebuild-bundle-status-button" hx-select="#rebuild-bundle-status-button" hx-swap="outerHTML" hx-trigger="click" hx-get="/rebuild/status-button"></span>
Turn into:
<span data-on-click="@get('/rebuild/status-button')"></span>
The other examples are even more confusing. In the end, I don't understand why the author switched from HTMX to Datastar.
Basically, the HTMX code says: "when this span is clicked, fetch /rebuild/status-button, extract the #rebuild-bundle-status-button element from the returned HTML, and replace the existing #rebuild-bundle-status-button element with it".
The Datastar code instead says: "when this span is clicked, fetch /rebuild/status-button and do whatever it says". Then, it's /rebuild/status-button's responsibility to provide the "swap the existing #rebuild-bundle-status-button element with this new one" instruction.
If /rebuild/status-button returns a bunch of elements with IDs, Datastar implicitly interprets that as a bunch of "swap the existing element with this new one" instructions.
This makes the resulting code look a bit simpler since you don't need to explicitly specify the "target", "select", or "swap" parts. You just need to put IDs on the elements and Datastar's default behavior does what you want (in this case).
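Conceptually, the ID-based default described above could be sketched like this (a toy model in Python, NOT Datastar's actual morphing logic, which operates on the real DOM):

```python
import re

def patch_by_id(dom: dict[str, str], response_html: str) -> dict[str, str]:
    """Toy model of ID-based patching: every top-level element in the
    response that carries an id replaces the existing element with that id."""
    pattern = r'<(\w+) id="([^"]+)"[^>]*>.*?</\1>'
    for m in re.finditer(pattern, response_html, re.S):
        dom[m.group(2)] = m.group(0)  # swap the whole element by id
    return dom

# The client never says where the fragment goes; the ids in the response do.
page = {"rebuild-bundle-status-button":
        '<button id="rebuild-bundle-status-button">Stale</button>'}
patch_by_id(page, '<button id="rebuild-bundle-status-button">Rebuilt!</button>')
```

The point of the sketch: the "target/select/swap" decisions live entirely in the server's response, which is why the client-side attribute can shrink to a single `@get(...)`.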
Note that for this example you can get the same behavior (assuming the endpoint hit isn't using SSE, which IMO Datastar overemphasizes) in HTMX via a combination of formatting your response body correctly and the response headers. It isn't the way things are typically done in HTMX for Locality of Behavior reasons, not because it's impossible.
in Datastar the locality of behavior is in your backend state... datastar.Patch(renderComponent(db.NextRow)) imho, a single line is the ultimate LOB pattern. idk, ngmi
I see, so the rendering logic is performed mostly on the back-end when using Datastar. It makes sense now, thanks.
This is just SSR html/hypermedia in general - the way the web was designed and worked for a long time. This site is like that too!
HTMX's html attributes are similarly defined in the backend. The difference with datastar is which attributes and how they work
It is quite simple.
Datastar keeps the logic in the backend. Just like we used to do with basic html pages where you make a request, server returns html and your browser renders it.
With Datastar, you are essentially building a kind of PWA where you load the page once and then, as you interact with it, it keeps making backend requests and rendering the desired changes instead of reloading the entire page. But you are getting back snippets of HTML, so the browser does not have to do much except the rendering itself.
This also means the state is back in the backend as well, unlike with SPA for example.
So again, Datastar goes back to the old request-response HTML model, which is perfectly fine, valid, and proven, but it also allows you to have dynamic rendering, like you would with JavaScript.
In other words, the front-end is purely visual and all the logic is delegated back to the backend server.
This is essentially the thin client vs. smart client debate, a pair of paradigms we constantly swing between: we move logic from the backend to the frontend, and then we swing back and move it from the frontend to the backend.
We started with thin clients because computers did not have sufficient computing power back in the day, so backend servers did most of the heavy lifting while the thin clients did very little (essentially just rendering ready-made information). That changed over time: as computers got more capable, we moved more logic to the frontend, which allowed faster interaction since we no longer had to wait for the server to respond to every interaction. This is why there is so much JavaScript today, and why we have SPAs and state on the client.
So Datastar essentially gives us a good alternative: we can choose whether to process more data on the backend or on the frontend, while still retaining a dynamic frontend. It is not just a basic request-response model where every page has to re-render and we have to wait for each request to finish; we can do this in parallel and still have the impression of a "live" page.
You explained nothing about how it actually works, seems an ai generated response
Thanks, brings me back to my youth when I was being accused of cheating or being a bot in Counter-Strike :)
If you still don't get it: Datastar is essentially like server-side rendering in JS, for PWAs, but it allows you to use any language you want on the backend while having a micro-library (Datastar itself) on the frontend, letting you decouple JS from the frontend and backend while still having all the benefits of it.
Also why is it a span instead of a button or link?
My guess is to demonstrate that any element can be interactive with D*, not just buttons or links.
Interesting write-up. Thanks.
I've written customer-facing interfaces in HTMX and currently quite like it.
One comment: HTMX supports out-of-band replies, which make it possible to update multiple targets in one request. There are also ways for the server to redirect the target to something else.
I use this a lot, as well as HTMX's support for SSE. I'd have to check what Datastar offers here, because SSE is one thing that makes dashboarding in HTMX a breeze.
Isn’t HTMX fully open source? The comment you’re replying to has no outright praise for Datastar.
Must be part of HTMX's highly paid marketing team ;)
More like unpaid fan account. Anyway since HTMX is open source and free, it's easy to validate.
You're accusing the poster of shilling. That's against site rules, but aside from that it makes no sense in this context -- the post talks about the advantages of HTMX versus Datastar.
Ironically given the topic is hypermedia, the article doesn't link to the Datastar website. Here it is:
https://data-star.dev/
Being new to Datastar and having seen some of the hype recently, I'm really not sold on it.
The patch statements on the server injecting HTML seem absolutely awful in terms of separation of concerns, and it would undoubtedly become an unwieldy nightmare in an application of any size once more HTML is being injected from the server.
Let's have scattered bits of JS inject HTML instead, that'll fix it!
This. I feel wrong having endpoints that produce bits of HTML.
Servers sending HTML to the browser? Scandalous!
What has the world wide web come to?!
Yes. It is OK for simple form submits but going to get annoying for anything that feels like an app.
There's a reason XPath and XSL look the way they do: they're powerful, but not very comfortable :/
Reimplementations tend to simplify some bits, but end up amassing complexity in various corners...
People today don't remember that assigning to innerHTML isn't a good idea, so anything goes.
"Bits of HTML" was for a long time so common and normal it has its own term: HTML fragments.
Not sure if satire, or serious. Well done, well done.
Datastar is like Htmx done even better.
I'm seriously keen on trying it out. It's not like Htmx is bad, I've built a couple of projects in it with great success, but they all required some JS glue logic (I ended up not liking AlpineJS for various reasons) to handle events.
If Datastar can minimize that as well, even better!
Read the grugs around the fire essay on the datastar site
I was late to the hypermedia party; I started with Datastar but now use HTMX when I want something in this space. The Datastar API is a bit nicer, but HTMX 2.0 supports the same approach. The key thing is what HTMX calls OOB updates; with that in place, everything else is a win in the HTMX column.
One frustration I have with OOB elements in HTMX:
1. If the element is out-of-band, it MUST have `hx-swap-oob="true"` on it, or it may be discarded / cause unexpected results.
2. If the element is not out-of-band, it MUST NOT have `hx-swap-oob="true"` on it, or it may be ignored.
This makes it hard to use the same server-side HTML rendering code for a component that may show up either OOB or not; you end up having to pass down "isOob" flags, which is ugly and annoying.
I think Datastar has the better approach here, making OOB the default. I suspect HTMX's non-OOB default makes more sense for very simple cases where you just replace the part of the DOM that triggered the action, but in my experience, situations where OOB is necessary are more typical.
Interestingly, elements sent via the HTMX websocket extension [1] do use OOB by default.
[1]: https://htmx.org/extensions/ws/
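For illustration, the "isOob" flag pattern looks roughly like this (a hypothetical Python render helper; the function and element names are made up, only the `hx-swap-oob` attribute is real):

```python
def status_button(label: str, oob: bool = False) -> str:
    # The exact same component must carry hx-swap-oob="true" when it is
    # sent out-of-band, and must NOT carry it when it is the main target,
    # so the flag has to be threaded through the rendering code.
    oob_attr = ' hx-swap-oob="true"' if oob else ""
    return (f'<button id="rebuild-bundle-status-button"{oob_attr}>'
            f'{label}</button>')

main = status_button("Rebuild")             # main swap target: no attribute
extra = status_button("Rebuild", oob=True)  # piggybacked OOB update
```

With an OOB-by-default design like Datastar's, this flag (and the risk of getting it wrong in one of the two directions) simply disappears.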
This really depends on your server-side HTML rendering approach. I have a library in which I can do this:
And this adds the `hx-swap-oob="true"` attribute to the given node, making it trivial to add to any defined markup in an OOB swap. I get that many people prefer template-based rendering, but IMHO, to extract the maximum power from htmx, an HTML library that's embedded directly in your programming language is much more powerful.
https://github.com/yawaramin/dream-html/blob/f7928616b9ca1d6...
> to extract the maximum power from htmx an HTML library that's embedded directly in your programming language is much more powerful.
I'm actually using gomponents, but the maintainer doesn't like the vibe of adding attributes to existing nodes.
https://github.com/maragudk/gomponents/issues/276
(I don't really understand his argument, but in general I'm in favor of maintainers doing what they think is the right thing; and in any case I'm using his work without paying, so not gonna complain.)
But even if I had an easy way to add the attribute, the fact that I need to think about that extra step is a bit of extra friction HTMX imposes, which datastar doesn't.
"I was late to the hypermedia party" → very late indeed :)
The term was coined in 1965 by Ted Nelson in: https://dl.acm.org/doi/10.1145/800197.806036
Here's the exact sentence: "The hyperfilm-- a browsable or vari-sequenced movie-- is only one of the possible hypermedia that require our attention."
Both Datastar and HTMX try to lay claim to being "the one true hypermedia" just because they both are vaguely HTML-like. And they gladly misappropriate Ted Nelson, too: https://dmitriid.com/hypermedia-is-a-property-of-the-client
In reality, neither of them makes any such claims. And they are not HTML-like; they're literally HTML. Especially Datastar, which doesn't add any non-HTML-spec attributes.
Yeah, yeah, they "literally add nothing" except small things like a custom JavaScript-like DSL (Datastar), or a custom DSL and custom HTTP headers (htmx).
But it's "just html", so it's all fine
Edit: Oh, and don't forget that "Especially datastar, which doesnt add any non-html-spec attributes" in reality adds two custom DSLs: one in the form of HTML attributes, and the other in the form of a JS-like DSL.
But as long as it's superficially HTML-spec compliant, this is nothing.
The spec is literally just data-* (hence the name): you can add whatever you want to it and remain in spec. And those attributes are meant to be read by JavaScript (which is exactly what Datastar does).
https://developer.mozilla.org/en-US/docs/Web/HTML/How_to/Use...
At least you're living up to your profile! "Opinions on things I know nothing about"
Hey, that was a low blow, you can do better than insult another user!
On the topic: it might be in spec but it’s still a DSL inside an attribute
Do you really not see how I was responding in-kind to them?
> At least you're living up to your profile! "Opinions on things I know nothing about"
I've had this pinned on my twitter profile for a few years now, for people like you: https://x.com/dmitriid/status/1860589623321280995
I never argued that those attributes weren't compatible with HTML.
I like the alpine-ajax API. You specify one or more targets and it swaps each of those elements. No default case or OOB, just keeping it uniform instead.
As for Datastar, all the signal and state stuff seems to me like a step in the wrong direction.
I thought alpine ajax did OOB on any ids returned in a response.
Edit: right, as long as the element has x-sync on it, it will receive any OOB updates from any response.
You can swap multiple elements with targeting too: `x-target="comments comments_count"`, but, yeah most of the time `x-sync` is better.
One of the big promises of HTMX is that the client doesn't have to understand the structure of the returned data since it's pre-compiled to the presentation layer, and it feels like this violates that quite heavily since now the calling page needs to know the IDs and semantics of the different elements the server will return.
This isn't really a criticism of Datastar, though: I think the popularity of OOB in HTMX indicates that the pure form of this is too idealistic for a lot of real-world cases. But it would be nice if we could come up with a design that gives the best of both worlds.
The calling page knows nothing; you do an action on the client, and the server might return an updated view of the entire page. That's it.
You send down the whole page on every change. The client just renders. It's immediate mode like in video games.
That doesn't seem to be the ‘standard’ way to use Datastar, at least as described in this article?
If one were to rerender the entire page every time, what's the advantage of any of these frameworks over just redirecting to another page (as form submissions do by default)?
It's the high-performance way to use Datastar, and personally I think it's the best DX.
1. It's much better in terms of compression and latency, since with brotli/zstd you get compression over the entire duration of the connection. You keep one connection open and push all updates down it; all requests return a 204 response. Because everything comes down the same connection, brotli/zstd can give you 1000-8000x compression ratios. So in my demos, for example, one check is 13-20 bytes over the wire even though it's 140 kB of HTML uncompressed. Keeping the packet size around 1 kB or less is great for latency. A redirect also requires more round trips.
2. The server is in control. I can batch updates. The reason these demos easily survive HN is that the updates are batched every 100 ms. That means a new view gets pushed to you at most every 100 ms, regardless of the number of users interacting with your view. In the case of the GoL demo, the render is actually shared between all users, so it renders only once per 100 ms regardless of the number of concurrent users.
3. The DX is nice and simple: good old view = f(state), like React, just over the network.
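The batching described in point 2 can be sketched roughly like this (a toy in Python, not the demo's actual code; the 100 ms interval is the only number taken from the comment):

```python
class FrameBatcher:
    """Coalesce all state changes within one interval into a single frame."""

    def __init__(self, interval: float = 0.1):  # 100 ms
        self.interval = interval
        self.dirty = False
        self.last_flush = 0.0

    def mark_dirty(self) -> None:
        self.dirty = True  # any number of changes just sets one flag

    def maybe_render(self, now: float, render) -> bool:
        # Push a new frame at most once per interval, and only if needed.
        if self.dirty and now - self.last_flush >= self.interval:
            render()
            self.dirty = False
            self.last_flush = now
            return True
        return False

frames = []
b = FrameBatcher()
b.mark_dirty(); b.mark_dirty()                        # two changes...
b.maybe_render(0.20, lambda: frames.append("frame"))  # ...one frame pushed
b.mark_dirty()
b.maybe_render(0.25, lambda: frames.append("frame"))  # too soon: skipped
b.maybe_render(0.35, lambda: frames.append("frame"))  # next tick: pushed
```

However many users mutate the shared state, the render cost stays bounded by the tick rate rather than by the request rate.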
> Because everything comes down the same connection brotli/zstd can give you 1000-8000x compression ratios.
Isn't this also the case by default for HTTP/2 (or even just HTTP/1.1 `Connection: keep-alive`)?
> The server is in control. I can batch updates.
That's neat! So you keep a connection open at all times and just push an update down it when something changes?
So even though HTTP/2 multiplexes requests over a single TCP connection, each response is still compressed separately. Same with keep-alive.
The magic is that brotli/zstd are very good at streaming compression thanks to forward/backward references. What this effectively means is that the client and the server share a compression window for the duration of the HTTP connection. So rather than each message being compressed separately with a new context, each message is compressed with the context of all the messages sent before it. In practice, if you are sending 140 KB of divs on each frame but only one div changed between frames, then the next frame will only be 13 bytes, because the compression algorithm basically says to the client "you know that message I sent you 100ms ago? Well, this one is almost identical apart from this one change". It's like a really performant byte-level diffing algorithm, except you as the programmer don't have to think about it. You just re-render the whole frame and let compression do the rest.
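The shared-window effect is easy to demonstrate with any streaming compressor. This sketch uses Python's stdlib zlib (DEFLATE) rather than brotli/zstd, but the principle is the same: a second, nearly identical frame compresses to a tiny delta because it can back-reference the first frame still sitting in the window.

```python
import random
import zlib

random.seed(0)
# A 20 KB pseudo-random "page" (incompressible on its own),
# then the same page with a 10-byte change, like one div flipping.
frame1 = bytes(random.randrange(256) for _ in range(20_000))
frame2 = frame1[:100] + b"CHANGED!!!" + frame1[110:]

# Fresh context per message (like separate HTTP responses):
fresh = len(zlib.compress(frame2, 9))

# Shared context for the whole connection (like one streaming response):
comp = zlib.compressobj(9)
comp.compress(frame1)
comp.flush(zlib.Z_SYNC_FLUSH)   # frame 1 goes over the wire
delta = len(comp.compress(frame2) + comp.flush(zlib.Z_SYNC_FLUSH))

# `fresh` is roughly the full 20 KB; `delta` is a small fraction of it,
# because frame2 is encoded mostly as back-references into frame1.
```

brotli and zstd push this further than DEFLATE's 32 KB window (zstd windows can span megabytes), which is where the very large ratios quoted above come from.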
In these demos I push a frame to every connected client when something changes, at most every 100ms. What that means is that effectively all the changes that happen in that window are batched into a single frame. It also means the server can stay in charge and control the flow of data (including back pressure, if it's under too much load or the client is struggling to render frames).
> They converted it from React to HTMX, cutting their codebase by almost 70% while significantly improving its capabilities.
Happy user of https://reflex.dev framework here.
I was tired of writing backend APIs with the only purpose that they get consumed by the same app's frontend (typically React). Leading to boilerplate code both backend side (provide APIs) and frontend side (consume APIs: fetch, cache, propagate, etc.).
Now I am running 3 different apps in production for which I no longer write APIs. I only define states and state updates in Python. The frontend code is written in Python, too, and auto-transpiled into a React app, which keeps its states and views automagically in sync with the backend. I am only 6 months into Reflex, but so far it's been mostly a joy. Of course you've got to learn a few small but important details such as state dependencies and proper state caching, but the upsides of Reflex are a big win for my team and me. We write less code and ship faster.
> I was tired of writing backend APIs with the only purpose that they get consumed by the same app's frontend
PostgREST is great for this: https://postgrest.org
I run 6 React apps in prod, which used to consume APIs written with Falcon, Django and FastAPI. For the past 2 years, they have all consumed APIs from PostgREST instead. I define SQL views for the tables I want to expose, and optionally a bunch of SQL grants and policies on the tables if I have different roles/permissions in the app, and PostgREST automatically turns the views into endpoints and adds all the CRUD + UPSERT capabilities, handling authorization, filtering, grouping, ordering, insert returning, pagination, and so on.
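For a sense of what the generated endpoints look like, the query surface is just URL parameters on the exposed view. A sketch with a hypothetical `films` view (the `<column>=<operator>.<value>` shape is PostgREST's filtering syntax; the table and column names are made up):

```python
from urllib.parse import urlencode

# Hypothetical "films" view exposed by PostgREST; each query parameter
# maps to SQL: select list, WHERE clause, ORDER BY, LIMIT.
params = urlencode({
    "select": "id,title,year",   # vertical filtering (which columns)
    "year": "gte.2000",          # horizontal filtering: column=operator.value
    "order": "title.asc",
    "limit": "20",
})
url = f"/films?{params}"
# GET <host>/films?... then returns the matching rows as JSON
```

The point of the parent comment is that none of this is hand-written endpoint code; the view definition alone drives it.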
I checked this out because it sounded cool and I was not expecting to see a landing page about AI and "Contact sales" for pricing info if you don't want your work to be data-mined. 2025, man. Sigh.
I may be just completely out of my depth here, but I look at the cool example on their website, the Open the pod bay doors, HAL bit, and I don't like it, at all.
And reading comments one would think this is some amazing piece of technology. Am I just old and cranky or something?
This feels... very hard to reason about. Disjoint.
You have a front-end with some hard-coded IDs on e.g. <div>s. A trigger on a <button> that black-box calls some endpoint. And then, on the backend, you use the SDK for your choice language to execute some methods like `patchElements()` on e.g. an SSE "framework" which translates your commands to some custom "event" headers and metadata in the open HTTP stream and then some "engine" on the front-end patches, on the fly, the DOM with whatever you sent through the pipe.
This feels to me like something that will very quickly become very hard to reason about globally.
Presentation logic scattered in small functions all over the backend. Plus whatever on-render logic through a classic template you may have, because of course you may want to have an on-load state.
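For concreteness: stripped of the SDK, what travels down that HTTP stream is just Server-Sent Events carrying HTML. A hand-rolled sketch of the framing (the `patch-elements` event name here is illustrative, not necessarily Datastar's exact protocol):

```python
def sse_event(event: str, *data_lines: str) -> str:
    """Frame a Server-Sent Event per the WHATWG EventSource format."""
    out = [f"event: {event}"]
    out += [f"data: {line}" for line in data_lines]
    return "\n".join(out) + "\n\n"   # a blank line terminates the event

# Pushing a patched element down an already-open connection; the
# client-side engine would swap it in by matching the id.
msg = sse_event("patch-elements", '<div id="status">Saved</div>')
```

Whether that constitutes "scattered presentation logic" or a single rendering pipeline mostly depends on whether the backend funnels all patches through one view function, as the replies below argue it should.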
I'm doing React 100% nowadays. I'm happy, I'm end-to-end type safe, I can create the fanciest shiny UIs I can imagine, I don't need an alternative. But if I needed it, if I had to go back to something lighter, I'd just go back to all in SSR with Rails or Laravel and just sprinkle some AlpineJS for the few dynamic widgets.
Anyway, I'm sure people will say that you can definitely make this work and organize your code well enough and surely there are tons of successful projects using Datastar but I just fail to understand why would I bother.
I’ve not tried Datastar in anger but I have tried HTMX after all the hype and it quickly became unmaintainable.
My dream was having a Go server churning out all this hypermedia and I could swerve using a frontend framework, but I quickly found the Go code I was writing was rigid and convoluted. It just wasn’t nice. In fact it’s the only time I’ve had an evening coding session and forgotten what the code was doing on the same evening I started.
I’m having a completely opposite experience with Elixir and Phoenix. That feels like an end to end fluid experience without excessive cognitive load.
BEAM + Elixir + Phoenix feels like I can control a whole system from the CPU processes (almost) up to the UI elements on a remote user’s screen, all in one easy-ish-to-understand system and language.
Granted, I’ve only used it for smaller projects, but I can almost feel my brain relax as the JS fades out, and suddenly making web apps is super fun again.
Did you use templ [1] for server side templating? It supports something called fragments, which may help with HTMX [2].
[1]: https://templ.guide/
[2]: https://templ.guide/syntax-and-usage/fragments
The standard library html/template also has that in the form of blocks.
html/template blocks are not as ergonomic. They force you to work on the template level and drill down into the blocks. Templ, Gomponents etc. let you build up the components from smaller pieces, like Lego.
The preferred pattern addresses your concern about scattered logic: a single long-lived SSE endpoint that "owns" the user's view of the app. That endpoint updates their field of view as appropriate - very much inspired by game dev's immediate mode rendering.
I've a tutorial that demonstrates this with Nushell as the backend: https://datastar-todomvc.cross.stream
An interesting characteristic of Datastar: it's very opinionated about the shape of your backend but extremely unopinionated about how you implement that shape.
> if I had to go back to something lighter, I'd just go back to all in SSR with Rails
FWIW, default config of Rails include Turbo nowadays, which seems quite similar to Datastar in concept.
My understanding is Turbo is more aligned with htmx. Common practice in Turbo are generally patterns of last resort in Datastar.
e.g. Datastar prescribes a single long lived SSE endpoint that owns the state for the currently connected user's view of the world / app, while common practice in Turbo is to have many small endpoints that return a fragment of html when requested by the client.
The idea of HATEOS is that HTML isn't "presentation logic", it IS the state of your application. Then, the backend manages the state of your application.
Yup. Another way to frame it is a "return to form" by moving app and business logic back to the server. Technology like HTMX and Datastar are optimizations that allow for surgical updates of portions of the client DOM, instead of forcing full-page refreshes like we did 25 years ago.
I share your feelings. If you like React and its trade-offs, and you're comfortable using it (based on various HN discussions, the easiest sign is that you understand the concept of hooks and don't feel the need to wrongly yell everywhere about what a bad abstraction they are :D), you can forget about Datastar or HTMX.
For context, I worked with large React codebases, contributed to various ecosystem libraries (a few bigger ones: react-redux, styled components, react-router). So, I'm pretty comfortable with hooks, but I still make mistakes with them if React isn't in my daily routine (different day job now, only use React occasionally for some pet projects).
I've also onboarded interns and juniors onto React codebases, and there are things about React that only really make sense if you're more old-school and know how different types behave, which is what lets you understand why certain things are necessary.
I remember explaining to an intern why passing an inlined object as a prop was causing the component to rerender, and they asked whether that's a codebase smell... That question kinda shocked me because to me it was obvious why this happens, and it's not even a React issue directly. However, the fix is to write "un-JavaScripty" code in React. So this person's intro to JS was React, and their whole understanding of JS is weirdly anchored around React now.
So I totally understand the critique of hooks. They just don't seem to be in the spirit of the language, but do work really well in spite of the language.
As someone who survived the early JS wilderness, then found refuge in jQuery, and after trying a bunch of frameworks and libraries finally settled on React: I think React is great, but objectively parts of it suck, and it's not entirely its fault.
> and they asked whether that's a codebase smell...
Something that's been an issue with our most junior dev: he's heard a lot of terminology but never really learned what some of those terms mean, so he'll use them in ways that don't really make sense. Your example here is just the kind of thing I'd expect from him, if he's heard the phrase "code smell" but assumed something incorrect about what it meant and never actually looked up what it means.
It is possible your co-worker was asking you this the other way around - that they'd just learned the term and were trying to understand it rather than apply it.
Htmx got me into hypermedia heaven, but it led me to Datastar for sure. Recently we also had an interview with the creator of Datastar, where he also talked a bit about darkstar (something he wants to build on top of WebTransport for the few things Datastar is not well suited for right now).
https://netstack.fm/#episode-4
Thanks for writing this up — some great insights!
The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
But it was a nice pattern to work with: for example if you made code changes you often got hot-reloading ‘for free’ because the client can just query the server again. And it was by definition infinitely flexible.
I’d be interested to hear from anyone with experience of both Datastar and Hotwire. Hotwire always seemed very similar to HTMX to me, but on reflection it’s arguably closer to Datastar because the target is denoted by the server. I’ve only used Hotwire for anything significant, and I’m considering rewriting the messy React app I’ve inherited using one of these, so it’s always useful to hear from others about how things pan out working at scale.
> The server deciding what to replace reminds me of some old (dangerous, I think) patterns like returning actual JS from the server which the client then executes.
Basically every single web page on the modern web has the server returning JS that the client then executes. I think you should clarify what's dangerous about the specific pattern you're thinking of that isn't already intrinsic to the web as a whole.
I like Hotwire but I admit it's a bit confusing to get started with, and the docs don't help. Form submits + redirects are a bit weird: you can't really make the server "break out" of a frame during a redirect if the form was submitted from inside a frame (there are workarounds, see https://github.com/hotwired/turbo/issues/257).
Also, custom actions [https://turbo.hotwired.dev/handbook/streams#custom-actions] are super powerful; we use them to emit browser events, update DOM classes and attributes, and so on. Just be careful not to overuse them.
During 2015-2018 I was not working on FE, and when I started again everyone was using JS frameworks, ditching MVC, aspx and the like. Now I have again not been working on FE for 3 years, and it seems everybody is going back to sending HTML from the server.
I am not saying it is wrong. It is just a bit funny, from this perspective, to watch the pendulum swing the other way.
What you see on hackernews/twitter is not what everybody is doing. Reality is that everyone is still rawdogging React in their legacy app
Despite the “just figure it out” style of documentation, I still believe Hotwire + Stimulus (optional honestly) to be the best iteration of the low-JavaScript reactivity bunch.
Htmx gives me bad vibes from having tons of logic _in_ your html. Datastar seems better in this respect but has limitations Hotwire long since solved.
I'd love to hear what those limitations are as someone who came from Hotwire and moved to Datastar.
>Htmx gives me bad vibes from having tons of logic _in_ your html
Write some HTMX and you'll find that exactly the opposite is true
> tons of logic _in_ your html
That is not at all what HTMX does. HTMX is "If the user clicks[1] here, fetch some html from the server and display it". HTMX doesn't put logic in your HTML.
[1] or hovers or scrolls.
> ...To accomplish this, most HTMX developers achieve updates either by “pulling” information from the server by polling every few seconds or by writing custom WebSocket code, which increases complexity.
This isn't true. HTMX has native support for "pushing" data to the browser with Websockets or SSE, without "custom" code.
I've been using datastar for the last year to much success. The core library is fantastic. I use Go as my backend language of choice, and have a boilerplate project built from examples that were in the original datastar site code. I've also added some extra examples to show how one can build web components that work seamlessly to integrate with JS libs that exist today and drive them from a backend server.
If you are looking to understand what's possible when you use datastar and you have some familiarity with Go, I hope this is a solid starting point for you.
- https://github.com/zangster300/northstar
I'm still trying to figure out what the key difference would be when writing an app with Datastar over HTMX.
I wrote /dev/push [1] with FastAPI + HTMX + Alpine.js, and I'm doing a fair bit with SSE (e.g. displaying logs in real time, updating the state of deployments across lists, etc). Looking at the Datastar examples, I don't see where things would be easier than this [2]:
Also curious what others think of web components. I tried to use them when I was writing Basecoat [3] and ended up reverting to regular HTML + CSS + JS. Too brittle, too many issues (e.g. global styling), too many gaps (e.g. state).

[1]: https://devpu.sh
[2]: https://github.com/hunvreus/devpush/blob/main/app/templates/...
[3]: https://basecoatui.com
BTW, this comment is very true when dealing with HTMX as well:
> But what I’m most excited about are the possibilities that Datastar enables. The community is routinely creating projects that push well beyond the limits experienced by developers using other tools.
For example when displaying the list of deployments, rather than trying to update any individual deployment as their state is updated, it's just simpler to just update the whole list. Your code is way simpler/lighter as you don't need to account for all the edge case (e.g. pager).
I don't see a good enough reason to move over from Htmx; unless I'm missing something, you're just moving more lines to the server side. At this point why not just bite the bullet and go back to the old days of php serving html. "Look mom, 0 lines of markup"
>At this point why not just bite the bullet and go back to the old days of php serving html.
Going back to it is the point. HTMX lets you do that while still having that button refresh just a part of the page, instead of reloading the whole page. It's AJAX with a syntax that frees you from JS and manual DOM manipulation.
I fairly recently developed an app in PHP, in the classic style, without frameworks. It provided me with stuff I remembered, the $annoyance $of $variable $prefixes, the wonky syntax, and a type system that makes JS look amazing -- but it still didn't make me scream in pain and confusion like React. Getting the app done was way quicker than if any JS framework was involved.
Having two separate but tightly integrated apps is annoying. HTMX or any other classic web-dev approaches like PHP and Django make you have one app, the backend. The frontend is the result of executing the backend.
Unironically PHP serving HTML wasn't that bad apparently
ALL OF THE LINES ARE ON THE SERVER FOR BOTH OF THEM! That's what ssr html is!
Both are just small JavaScript libraries that let you do some interactive stuff declaratively in your SSR HTML. But Datastar is smaller, simpler, more powerful and closer to web standards.
Does it now allow handling non-2xx responses in non-SSE actions? Refusing to support it (even as an opt-in) is what made me just look into using alpine + alpinejs instead. SSE in d* is awesome when you have a feature that needs it, but IMO d* completely over-emphasizes and wants you to use it for everything. If I was using d*, I would use it more often, sure. But most of my projects just need little html updates on a click of a button, that's all. I'm not going to change the whole architecture to tailor it to a 1% feature.
> One of the amazing things from David Guillot’s talk is how his app updated the count of favored items even though that element was very far away from the component that changed the count.
This might not seem like a big deal, but it looks like Datastar dramatically reduces the overhead of a common use-case. The article shows how to update a component and a related count, elsewhere in the UI.
A more practical use-case might be to show a toast in tandem with navigating to another view. Or updating multiple fields on a form validation failure.
Oh my. I didn’t notice that. What usually is a pita, now is a breeze
Datastar's "Build reactive web apps that stand the test of time" tagline invites some skepticism.
...are you going to share anything about that skepticism to allow for people to respond...?
Not OP but how long has datastar been around to make that kind of claim?
I assume the "claim" is because it is built on web standards, so there's nothing really that'll break
I like that datastar has better defaults, embracing SSE makes certain things much simpler and cleaner even on the backend (no need to wrangle templates with htmx oob for example).
I am okay with the open-core and pro model.
But, the maintainers are quite combative on HN and Reddit as well. This does not bode well for the otherwise great project.
Funny, the next cycle is starting ;-) I remember Vaadin which was a great framework just before angularJS took off. Now Datastar seems to give it another try and bring everyone back to server calls...
I really like the ideas of htmx, I just didn't find the actual implementation complete enough to make performant apps... so I fixed it.
Datastar has been great. If you need the pro features you might be doing it wrong.
Lots of mentions of HTMX throughout the comments. If you are interested and have the time, read the first chapters of this book [1]. Very well written, and a bit nostalgic I should say, at least to those of us who lived through the web 1.0 days.
[1] https://hypermedia.systems/
HTMX can do all of that via OOB updates no?
Yup, which is why I never understand why people keep making this criticism that could have been avoided by just reading the docs a little bit more or even asking on the htmx Discord.
Yeah but then you don't get to write a cool blog post.
Sure, go try doing it and see what happens. Make sure to handle every verb, automatically connect, expo backoff, etc!
I literally do that, it is not that hard. Getting locked into a framework with weird licensing is probably a harder engineering problem to solve.
the htmx dev literally created idiomorph because, among other reasons, OOB isn't sufficient. A version of idiomorph is what datastar uses internally
You miss the SSE support (in htmx it's a plugin) and the reactivity part (like Alpine).
if you think they work the same at all you haven't tried either
OOB is annoying in HTMX because you need to include hx-swap-oob attribute in every component you want to swap this way. In Datastar you just use id.
Genuine question from an "old school" web developer: can someone please give me an example of where these new frontend technologies are actually better than just using HTML, CSS, and vanilla JavaScript or jQuery?
I have honestly yet to see an example where using something like React doesn't just look like it's adding unnecessary complexity.
These tools are absolutely nothing like React. Take a look at what htmx does, which is even simpler from a spec standpoint. There are ongoing efforts to get it into the actual HTML spec. htmx and the like are basically built for us old-skool types (and thankfully many youngins are catching on).
> basically built for us old-skool types
Glad I'm not the only one. Ever since the first HTMX article, I felt like I was kidding myself. I had/have this thought in my head that "no way that we were that close to having all this right 25 years ago." I'm coming around and seeing that this tech gets the job done by doing one thing really well, and the whole API around it is dead-simple and bulletproof because of it. It's that good-old UNIX philosophy that's the enabling tech here.
While I can't say for certain that IE6 or early Firefox could have handled DOM swaps gracefully without real shadow DOM support, early Ajax provided the basic nuts-and-bolts to do all of this. So, why haven't we seen partial page updates as a formalism, sooner?
if it's simpler why is it so much more code and slower with less features?
Not sure what you mean by so much more code. Datastar seems to do more than htmx. Otherwise, there are less features because React and friends over-complicate things for the vast majority of use-cases.
Datastar is 40% smaller than HTMX even before you add SSE or Alpine or JS head support, etc.
Ohhhhh I thought you meant simpler than React, lol. Gotcha. I was going by what it does, not line of code of the implementation, which is what matters in this context (a skeptic looking to check something out quickly).
Ok I think I've been talking to a bot because looking back on this, I'm confused.
I think the Datastar of back when I was learning web programming, at the dawn of AJAX, would be Xajax [1]. I didn't even learn JavaScript back then because Xajax would generate a JS shim that you could call to trigger server-side functions, and those functions replaced page fragments with new, server-generated content.
[1] https://github.com/Xajax/Xajax
While htmx reminds me of Adobe Spry Data [2], enough that I did some research into htmx and realized that Spry Data's equivalent is an htmx plugin and htmx itself is more similar to Basecamp's Hotwire. I assume there should be a late-2000s-era AJAX library that does something similar to htmx, but I didn't use one, as jQuery was easy enough anyway.
[2] https://opensource.adobe.com/Spry/articles/spry_primer/index...
Anyway, as other commenters have said, the idea of htmx is basically that for some common use cases where you used jQuery, you might as well use no JavaScript at all to achieve the same tasks. But that is not possible today, so think of htmx as a polyfill for future HTML features.
Personally I still believe in progressive enhancements (a website should work 100% without JavaScript, but you'll lose all the syntactic sugar - for example Hashcash-style proof of work captcha may just give you the inputs and you'll have to do the exact same proof of work manually then submit the form), but I've yet to see any library that is able to offer that with complex interface, without code duplication at all. (Maybe ASP.NET can do that but I don't like the somewhat commercialized .NET ecosystem)
The UI I think would require React is a wizard-style form with clientside rendered widgets (eg. tabs). If you can't download a library to implement that, it is a lot of work to implement on the backend, especially in modern websites where your session is now a JWT instead of $_SESSION, which requires a shared global session storage engine. I'd imagine that if you don't use React, when the user goes back to the tabbed page you'd need to either implement the tab-switching code on the backend side as well, or cheat and emit JS code to switch the active tab to whatever the backend wants.
> The UI I think would require React is a wizard-style form with clientside rendered widgets (eg. tabs).
Can you think of any example sites/web apps which illustrate what you mean? I'm imagining something like VSCode, but AFAIK it's built with a custom JS framework and not React.
Try the EC2 creation page. There are tabs for advanced options, widgets like image selection where you can choose from AWS-managed, Community, or your own AMIs, etc. And then the next page is a confirmation with similar widgets, from which you can go back and edit. I'd imagine that if you render it on the backend first and one of the tabs has an error, your backend form library has to know how to rerender all the widgets that you already implemented once in JavaScript. If the page is done as an SPA, the backend just sends the data back and the existing frontend widgets just have to rehydrate themselves.
Datastar IS html, css and vanilla javascript. Its like jquery on steroids. It is anti-react. If that's what you're about, you'll love datastar
I'm really just starting with htmx but came across Datastar yesterday. This is a great comparison and is confirming some of my impressions, so thanks! I'll still look a bit more, but if the main thing is that it's natively adding Alpine or Stimulus then Datastar is not for me.
its not like that at all, read the guide and build something before passing judgement
I was going to because I like the architecture, but then I saw the fact that licensing changes are going to be frequent, and that the developers seem a bit aggressive on another thread, and I've decided to skip it.
Datastar developers are free to do what they want with their code, but as someone who releases open source software, I'm tired of projects using open source simply to create a moat or user base then switch to a proprietary model.
> then I saw the fact that licensing changes are going to be frequent
What are you referring to here? Sounds important.
Edit: Looks like its this < https://drshapeless.com/blog/posts/htmx,-datastar,-greedy-de... >
NOTHING changed about the license. FUD
:pointing-up-emoji:
> I'll still look a bit more
"Why I switched from HTMX to Datastar" -> Why I never switched to HTMX, because there will always be something better, and for that there also will be something better.
Or the then backwards-incompatible HTMX v2 will finish it off, leaving the whole codebase obsolete. It's the circle of life.
Here is my take on datastar from htmx https://chrismalek.me/posts/data-star-first-impressions/
>datastar
I had never heard of it, but I loved reading a bit of the docs, especially as someone who doesn't like the whole front-end circus! I was planning to teach myself Svelte, but it seems this one is more than enough!
I can't say I like the server returning portions of HTML that need to match the HTML in the client, but I can see myself trying it in a monorepo and using some templating lib on both sides.
Let's say I'm intrigued and on the fence.
I guess one thing that might be potentially problematic: if you update the server while someone still has the page open, you need to match their original template version and not the new (potentially incompatible) one.
Fair point, but we often do "fat morphs", i.e. patching the whole <body> element because this one isn't going anywhere.
I've heard good things about Unpoly too. Any experiences around here?
I find Unpoly easier to use than HTMX, their demo is second-to-none:
But absolutely terrible name.

> Since then, teams everywhere have discovered the same thing: turning a single-page app into a multi-page hypermedia app often slashes lines of code by 60% or more while improving both developer and user experience.
Well, not at all. The only compelling reason for me to use server-side rendering for apps (not blogs obviously, they should be HTML) is metadata tags. That's why I switched from pure React, and everything has been harder, slower for the user, and more difficult to debug than client-side rendering.
These "we cut 70% of our codebase" claims always make me laugh. We have no idea what was going on in that original codebase. The talk literally shows severely cursed lines stretching to the moon like:
<div hx-get="{% url 'web-step-discussion-items-special-counters' object.bill_id object.pk %}?{{ request.GET.url...who knows how many characters long it is.
It's hard to tell whether they optimised the app, deleted a ton of noise, or just merged everything into those 300-character-long megalines.
> These "we cut 70% of our codebase" claims always make me laugh.
There's also a slide in my talk that presents how many JS dependencies we dropped, while not adding any new Python. Retrospectively, that is a much more impressive achievement.
... but the whole social movement of "back to the backend" is about getting rid of the client-side application as a separate component
of course it (should) lead to a lot less code! at the cost of completely foregoing most of the capabilities offered by having a well-defined API and a separate client-side application
... and of course this is happening as over the last ~2 decades we mostly figured out that most of that amazing freedom on the client-side is not worth it
... most clients are dumb devices (crawlers), most "interactions" are primitive read-only ones, and having a fast and simple site is a virtue (or at least it makes economic sense to shunt almost all complexity to the server-side, as we have fast and very capable PoPs close to users)
> ... and of course this is happening as over the last ~2 decades we mostly figured out that most of that amazing freedom on the client-side is not worth it
It's not that, at least in my opinion, it's that we love (what we perceive as) new and shiny things. For the last ten years with Angular, React, Vue et al., new waves of developers have forgotten that you can output stuff directly from the server to the browser outside of "APIs".
This implementation is "dumb" to me. Feels like the only innovation is using SSE; otherwise it's roughly `el.addEventListener('click', async () => { el.outerHTML = await (await fetch(endpoint)).text(); });`. That's most of the functionality right there. You can even use native HTML event handler attributes instead of wiring the handler in JS: https://developer.mozilla.org/en-US/docs/Web/API/Element/cli....
I really don't see any benefit to using this.
Cool, now do exponential backoff, make sure it auto-reconnects on tab visibility changes, and make sure that when you replace stuff it keeps the same selection. I'm sure if you had enough of these you'd end up with a 10-kilobyte shim.
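For reference, the backoff piece alone looks something like this (a generic sketch with made-up parameter values; a real shim would do this in JS around EventSource, alongside the visibility and selection handling):

```python
import random


def backoff_delays(base: float = 0.5, cap: float = 30.0, factor: float = 2.0):
    """Yield reconnect delays: exponential growth with full jitter, capped.

    Jitter spreads reconnect attempts out so a server restart doesn't get
    hammered by every client retrying at the same instant.
    """
    delay = base
    while True:
        yield random.uniform(0, delay)  # full jitter: anywhere up to the ceiling
        delay = min(cap, delay * factor)
```

Each piece is small on its own; the argument in this subthread is about how many such pieces you want to maintain yourself versus take from a library.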
Ok? Not sure what your point is. I'm not saying the package is bloated or anything. I'm saying it's very simple functionality that was considered an anti-pattern when Angular and React were coming up.
sprinkling event handlers all over and doing DOM manipulations and trying to pile jQuery plugins on other plugins ... that was the anti-pattern
saying nah, fuck this, let's just do a rerender is what happened, and going back to doing it on server-side is one way, but doing it on client-side is the "React way"
I don't know how old you are, but I distinctly remember how big of a hard-on everyone had for Angular and then React and virtual DOM. React actually brought some good things to how you engineered your frontend code. This thing goes back on that completely and forces you to mix and match your frontend code across both the frontend and the backend. I genuinely don't understand how one could seriously consider this for a large application.
They said it's relative to React; just the fact that you don't have to deal with a virtual DOM almost guarantees a 50% reduction
I'm really enjoying watching everything revert back to how it worked fifteen years ago
That's kind of the point. Don't throw out the modern features of the browser, but use them with finer-grained activity, while most of the state lives on the back end. It's really just getting back to normalcy
I’m just waiting for the people remembering why we changed things fifteen years ago
I thought returning html from server was considered mixing up separation of concerns… oh well
This website that we are communicating on returns html from the server. That's how the web works
That's... no, servers sending HTML to the client was where we started with all this.
That's why the H in HTTP and in HTML stand for "Hypertext." Any time a webserver replies with something other than markup, _that's_ the extension/exception to that very old design.
Now, if you're talking about the separation of user-interface, data, logic, and where HTML fits in, that's a much bigger discussion.
partial html?
Returning html from a server is... just the WWW.
Oh I guess I missed the point. So everything is rendered on server and even templating is on server side. Fair.
Two posts about one, lesser known, framework in the top two spots of HN.
It smells of rigging.
Datastar author here; all this happened while I was asleep, so yeah, I'm really good at rigging the system in my dreams. I must actually be AGI and totally not a real person. Highly suspect
Last time I checked, both have questionable shadow DOM support, if any
Use webcomponents all the time
To elaborate on this for others: the Datastar homepage uses web components, such as the starfield animation at the top. And they're releasing a fantastic web component framework/tool soon called Rocket. It'll be like Lit, but simpler, better, and integrated with the rest of Datastar
Honestly, seeing the Datastar server-side snippets reminded me of writing RJS in Rails back in the day.
Everything old is new again.
Having the backend aware of the IDs in the HTML leads to pain. The HTMX way seems a lot simpler and Rails + Turbo has gone in that direction as well.
i don't know what the big deal is
with a REST API, the front and back ends need to agree on the JSON field names
with an HTML API for Datastar, the front and back ends need to agree on the element IDs
Really not a huge difference
Right but this isn’t a REST API. This technique specifically rejects that approach.
With htmx, the server just returns HTML and the logic for handling it is entirely in the front end.
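A rough sketch of the contrast (the htmx attributes are from its documented API; the Datastar side is paraphrased from the morph-by-id behavior described in this thread, not from its actual wire protocol):

```html
<!-- htmx: the element declares the target; the server returns a bare fragment
     and stays ignorant of where it lands -->
<button hx-get="/contacts" hx-target="#list">Load</button>
<div id="list"></div>

<!-- Datastar-style: the server-sent fragment carries its own id, and the
     client morphs it into the existing element with the matching id.
     Server sends something like: <div id="list"> ...new content... </div> -->
```

Either way, both sides have to agree on one identifier; the question is just which side spells out the destination.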
That’s interesting. Does it integrate well with Spring Boot?
I had a SaaS project last year with a massive HTMX code base. The code was big and the pain was even bigger. A few months back I attempted to convert parts of it to Datastar, but the introduction of the premature "Datastar Pro" and the putting of pretty basic but essential utilities behind a paywall killed the vibe. I scrapped the idea and wouldn't go near it.
Having just watched the Vite documentary, I think both HTMX and Datastar have a higher-order mission to challenge dominant incumbent JS frameworks like React/NextJS. HTMX is struggling, and in my opinion Datastar is DOA!
Win adoption, win the narrative, then figure out cashing in. The people behind Vite won the JS bundling race; they now have a new company, Void(0), and have raised venture money. NextJS solved major React pain points, gave it away for free, and built a multi-billion-dollar infrastructure business to host it.
DOA! NGMI! Should just give up cause popular stuff is popular!
Sorry, I care more about metrics and flame graphs than what some tech YouTuber is faffing on about.
surely this is missing a /s at the end... surely...
Link to datastar portal: https://data-star.dev/essays/why_another_framework
Be careful with Datastar. If the paid "PRO" features are not enough to warn you let me just say that I had a very unfortunate encounter with the author. I asked about how to do something like wire:navigate in Livewire and he told me that's not necessary and I don't understand Datastar and I should go fuck off. He was very ad hominem and aggressive. Won't use his product ever.
Excuse me but based on your post history I have to imagine it came out of nowhere
You are wrong. A few months ago, before the Pro announcement, I was exploring HTMX and Unpoly, and Datastar was the new thing. It looked cool, especially the demo of a game-like thingy, but the page still felt kind of unresponsive. This is a common pattern even among LiveView pages, where the server round trip is still a thing; despite using SSE, it's still not the local React/Svelte/Vue app experience, which is why you end up moving more and more parts from Livewire/LiveView to Alpine... anyway. I asked the guys on the Datastar Discord channel how I would do SPA-like page navigation between pages. And he got irritated, probably because this wasn't the first time somebody had asked, and told me that I don't get Datastar, that this is wrong, that Datastar doesn't care about that. But it was said in such a weird, aggressive way; he was mocking me and my intelligence and used very childish ad hominem attacks. I then left. And OK, maybe he doesn't like SPA navigation, or Datastar doesn't care about it at all, but the way he addressed it via an attack on me was super negative. You don't call people idiots because they would like a functionality from Datastar that is common in HTMX, Unpoly, LiveView, or Livewire. Perhaps they have something like that, maybe in the Pro version, but I don't care. If you want realtime, go with Phoenix LiveView instead; their community is much more friendly and mature.
Echo this. Like the tech, but the community is toxic, led by the lead dev from the front. Don't see a future.
https://youtu.be/y79L3fhJI3o?t=8054 if you want to understand my feeling or approach.
“Someone that always accepts you also enables you to make bad choices.” That’s a good point!
We don't shy away from telling you your ideas are terrible. Being mature is caring about your code and your users. You were VERY clearly making technically bad choices and we pushed back. We aggressively care about the details, and if you don't, then please go use LiveView. We aren't trying to win popularity contests. Show code and prove your point, or continue to clutch your pearls
from https://data-star.dev/: Simple. Fast. Light. No VCs. Coded by Hand
You have my sword!