I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle. When I see "REST API" I can safely assume the following:
- The API returns JSON
- CRUD actions are mapped to POST/GET/PUT/DELETE
- The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
- There's a decent chance listing endpoints were changed to POST to support complex filters
Like Agile, CI, or DevOps, you can insist on the original definition or submit to semantic diffusion and use the terms as they are commonly understood.
Fielding won the war precisely because he was intellectually incoherent and mostly wrong. It's the "worse is better" of the 21st century.
RPC systems were notoriously unergonomic and at best marginally successful. See Sun RPC, RMI, DCOM, CORBA, XML-RPC, SOAP, Protocol Buffers, etc.
People say it is not RPC, but all the time we write some function in JavaScript like
const getItem = async (itemId) => { ... }
which does a
GET /item/{item_id}
and on the backend we have a function that looks like
Item getItem(String itemId) { ... }
with some annotation that explains how to map the URL to an item call. So it is RPC, but instead of a highly complex system that is intellectually coherent but awkward and makes developers puke, we have a system that's more manual than it could be but has a lot of slack and leaves developers feeling like they're in control. 80% of what's wrong with it is that people won't just use ISO 8601 dates.
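A minimal sketch of that pattern, assuming an Express-style backend (the `Item` type, the route, and `itemService` are illustrative, not anyone's actual API):

```ts
import express from "express";

// Illustrative type and stub service; in a real app these live elsewhere.
interface Item { id: string; name: string; }
const itemService = {
  async getItem(id: string): Promise<Item> { return { id, name: "example" }; },
};

// Server side: the route string is the only thing mapping the URL onto a plain function call.
const app = express();
app.get("/item/:itemId", async (req, res) => {
  res.json(await itemService.getItem(req.params.itemId));
});

// Client side: a thin async wrapper that is, in effect, a remote procedure call.
const getItem = async (itemId: string): Promise<Item> => {
  const res = await fetch(`/item/${encodeURIComponent(itemId)}`);
  if (!res.ok) throw new Error(`getItem failed: ${res.status}`);
  return res.json();
};
```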
When I realized that I was calling openapi-generator to create client-side call stubs on a non-small service-oriented project, I started missing J2EE EJB. And it takes a lot to miss EJB.
I'd like to ask seasoned devs and engineers here. Is it the normal industry-wide blind spot where people still crave, and are happy creating, 12 different descriptions of the same thing across remote, client, unit tests, e2e tests, ORM, API schemas, all the while feeling much more productive than <insert monolith here>?
I've seen some systems with a lot of pieces where teams have attempted to avoid repetition and arranged to use a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain because different parts of the pipeline were owned by different teams and operated on different schedules. Furthermore, it became hard to onboard to these environments and figure out how to make changes and deploy them safely. Sometimes the repetition is really the lesser evil.
I see; it's also reminiscent of the saying that "microservices" are an organisational solution. It's just that I also see a lot of churn and friction due to incoherent versions and specs not being managed in sync (some solutions exist or are coming, though).
> I've seen some systems with a lot of pieces where teams have attempted to avoid repetition and arranged to use a single source of schema truth to generate various other parts automatically, and it was generally more brittle and harder to maintain due to different parts of the pipeline owned by different teams, and operated on different schedules.
I'm not sure what would lead to this setup. For years there have been frameworks that can generate their own OpenAPI spec, and even API gateways that not only take that OpenAPI spec as input for their routing configuration but also support exporting their own.
> it was generally more brittle and harder to maintain
It depends on the system in question; sometimes it's really worth it. Such setups are brittle by design, otherwise you get teams that ship fast but produce bugs that surface randomly at runtime.
Absolutely, it can work well when there is a team devoted to the schema registry and helping with adoption. But it needs to be worth it to be able to amortize the resources, so probably best for bigger organizations.
I keep pining for a stripped-down gRPC. I like the *.proto file format, and at least in principle I like the idea of using code-generation that follows a well-defined spec to build the client library. And I like making the API responsible for defining its own error codes instead of trying to reuse and overload the transport protocol's error codes and semantics. And I like eliminating the guesswork and analysis paralysis around whether parameters belong in the URL, in query parameters, or in some sort of blob payload. And I like having a well-defined spec for querying an API for its endpoints and message formats. And I like the well-defined forward and backward compatibility rules. And I like the explicit support for reusing common, standardized message formats across different specs.
But I don't like the micromanagement of field encoding formats, and I don't like the HTTP/2 streaming stuff that makes it impossible to directly consume gRPC APIs from JavaScript running in the browser, and I don't like the code generators that produce unidiomatic client libraries that follow Google's awkward and idiosyncratic coding standards. It's not that I don't see their value, per se. It's more that these kinds of features create major barriers to entry for both users and implementers. And they are there to solve problems that, as the continuing predominance of ad-hoc JSON slinging demonstrates, the vast majority of people just don't have.
Brb, I'm off to invent another language independent IDL for API definitions that is only implemented by 2 of the 5 languages you need to work with.
I'm joking, but I did actually implement essentially that internally. We start with TypeScript files as its type system is good at describing JSON. We go from there to JSON Schema for validation, and from there to the other languages we need.
The pattern I observe is that in old industries, people who paid the cost try to come up with a big heavy solution (xml, xsd, xpath), but newcomers will not understand the need and bail out to simpler ideas (json), until they hit the wall and start to invent their own (jsonschema, jquery).
I haven't written anything up - maybe one day - but our stack is `ts-morph` to get some basic metadata out of our "service definition" typescript files, `ts-json-schema-generator` to go from there to JSON Schema, `quicktype-core` to go to other languages.
Schema validation and type generation vary by language. When we need to validate schemas in JS/TS land, we're using `ajv`. Our generation step exports the JSON Schema to a valid JS file, and we load that up with AJV and grab schemas for specific types using `getSchema`.
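A rough sketch of that last step, assuming the generation step emitted a module with the combined JSON Schema (the file and type names here are made up):

```ts
import Ajv from "ajv";
// Hypothetical output of the generation step (ts-json-schema-generator result exported as JS).
import { serviceSchema } from "./generated/schemas";

const ajv = new Ajv();
ajv.addSchema(serviceSchema, "service");

// Grab the compiled validator for one specific type by its $ref.
const validateCreateOrder = ajv.getSchema("service#/definitions/CreateOrderRequest");

export function assertCreateOrder(payload: unknown): void {
  if (!validateCreateOrder || !validateCreateOrder(payload)) {
    throw new Error("Invalid CreateOrderRequest: " + ajv.errorsText(validateCreateOrder?.errors));
  }
}
```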
I evaluated (shallowly) for our use case (TS/JS services, PHP monolith, several deployment platforms):
- typespec.io (didn't like having a new IDL, mixes transport concerns with service definition)
- trpc (focused on TS-only codebases, not multi language)
- OpenAPI (too verbose to write by hand, too focused on HTTP)
- protobuf/thrift/etc (too heavy, we just want JSON)
I feel like I came across some others, but I didn't see anyone just using TypeScript as the IDL. I think it's quite good for that purpose, but of course it is a bit too powerful. I have yet to put in guardrails that will error out when you get a bit too type happy, or use generics, etc.
It's not that we like it, it's just that most other solutions are so complex and difficult to maintain that repetition is really not that bad a thing.
I was however impressed with FastAPI, a Python framework which brought together API implementation, data types and swagger spec generation in a very nice package. I still had to take care of integration tests by myself, but with pytest that's easy.
So there are some solutions that help avoid schema duplication.
My experience is that all of these layers have identical data models when a project begins, and it seems like you have a lot of boilerplate to repeat every time to describe "the same thing" in each layer.
But then, as the project evolves, you actually discover that these models have specific differences in different layers, even though they are mostly the same, and it becomes much harder to maintain them as {common model} + {differences}, than it is to just admit that they are just different related models.
For some examples of very common differences:
- different base types required for different languages (particularly SQL vs MDW vs JavaScript)
- different framework or language-specific annotations needed at different layers (public/UNIQUE/needs to start with a capital letter/@Property)
- extra attached data required at various layers (computed properties, display styles)
- object-relational mismatches
The reality is that your MDW data model is different from your Database schema and different from your UI data model (and there may be multiple layers as well in any of these). Any attempt to force them to conform to be kept automatically in sync will fail, unless you add to it all of the logic of those differences.
I remember getting my hands on a CORBA specification back as a wide-eyed teen thinking there is this magical world of programming purity somewhere: all 1200 pages of it, IIRC (not sure what version).
And then you don't really need most of it, and one thing you need is so utterly complicated, that it is stupid (no RoI) to even bother being compliant.
What RPC mechanisms, in your opinion, are the most ergonomic and why?
(I have been offering REST’ish and gRPC in software I write for many years now. With the REST’ish api generated from the gRPC APIs. I’m leaning towards dropping REST and only offering gRPC. Mostly because the generated clients are so ugly)
Just use gRPC or ConnectRPC (which is basically gRPC but over regular HTTP). It's simple and rigid.
REST is just too "floppy", there are too many ways to do things. You can transfer data as a part of the path, as query parameters, as POST fields (in multiple encodings!), as multipart forms, as streaming data, etc.
Just not in C++ code. gRPC has a bajillion dependencies, and upgrades are a major pain. If you have a dedicated build team and they are willing to support this - sure, go ahead and use it.
But if you have multiple targets, or unusual compilers, or don't enjoy working with build systems, stay away from complex stuff. Sure, REST may need some manual scaffolding, but no matter what your target is, there is a very good chance it has JSON and HTTP libs.
> REST is just too "floppy", there are too many ways to do things.
I think there is some degree of confusion in your reply. You're trying to compare a framework with an architecture style. It's like comparing, say, OData with rpc-over-HTTP.
What about errors in REST? It's HTTP status codes, and implementations are free to pick whatever approach they want for response documents. Some frameworks default to using Problem Details responses, but no one forces that.
You can't rely on them because they can come from middleboxes (load balancers, proxies, captive portals in hotels, etc.).
So you can't rely on having structured errors for common codes such as 401/403/404, it's very typical to get unstructured text in payloads for such errors. Not a few REST bindings just fail with unhelpful serialization exceptions in such cases.
I'd agree with your great-grandparent post... people get stuff done because of that.
There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards that sloppyREST has casually dispatched (pun fully intended) in the real world. After some 30+ years of highly prescriptive RPC mechanisms, at some point it becomes time to stop waiting for those things to unseat "sloppy" mechanisms and it's time to simply take it as a brute fact and start examining why that's the case.
Fortunately, in 2025, if you have a use case for such a system, and there are many many such valid use cases, you have a number of solid options to choose from. Fortunately sloppyREST hasn't truly killed them. But the fact that it empirically dominates it in the wild even so is now a fact older than many people reading this, and bears examination in that light rather than casual dismissals. It's easy to list the negatives, but there must be some positives that make it so popular with so many.
> There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards
Care to list them? REST mania started around the early 2000s, and at that time there was only CORBA available as a cross-language portable RPC. Microsoft had DCOM.
I don't just mean the ones that existed at the time of the start of REST. I mean all the ones that have come up since then as well and failed to displace it.
Arguably the closest thing to a prescriptive winner is laying OpenAPI on top of REST APIs.
Also, REST defined as "a vaguely HTTP-ish API that carries JSON" would have to be put later than that. Bear in mind that even after JSON was officially "defined" it's not like it instantly spread everywhere. I am among the many people that reconstructed something like it because we didn't know about it yet, even though it was nominally years old by that point. It took years to propagate out. I'd put "REST as we are talking about it" as late 2000s at the earliest for when it was really popular, and only into the 2010s as to when you started expecting people to mean that when they said "Web API".
I mean... I used to get stuff done with CORBA and DCOM.
It's the question of long-term consequences for supportability and product evolution. Will the next person supporting the API know all the hidden gotchas?
> Well the competition is REST which doesn’t have a schema or required fields, so not much of a problem.
A vague architecture style is not competition to a concrete framework. At best, you're claiming that the competition to gRPC is rolling your own ad-hoc RPC implementation.
What I expect to happen now is an epiphany. Why do most developers look at tools like gRPC and still decide it's a far better option to roll their own HTTP-based RPC interface? I mean, it's a rational choice for most. Think about that for a moment.
That's exactly how these systems fail in the marketplace. You make one decision that's good for, say, 50% of cases but disqualifying for 50% of cases and you lose 50% of the market.
Make 5 decisions like that and you lost 31/32 of the market.
Sometimes you need your timestamps to be in a named timezone. If I have a meeting at 9am local time next month, I probably want it to still be at 9am even if the government suddenly decided to cancel daylight time.
Exchange/GMail/etc. already has this problem/feature. Their response is simple: Follow the organiser's timezone. If it's 9am on the organiser's calendar, it will stay at 9am on the organiser's calendar. Everyone else's appointment will float to match the organiser.
It's a delimited string. There are many fields within that string already.
"2025-07-10T09:48:27+01:00"
That contains, by my quick glance, at least 8 fields of information. I would argue the one field it does not carry but probably should is the _name_ of the timezone it is for.
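A sketch of what carrying that extra field could look like; field names are illustrative, and the second shape is for the "keep it at 9am local" case discussed above:

```ts
// Exact instant plus the IANA zone name (not just the offset) it should be shown in.
interface EventTime {
  instant: string;   // RFC 3339, e.g. "2025-07-10T09:48:27+01:00"
  timeZone: string;  // IANA name, e.g. "Europe/London"
}

// For "9am local even if the offset rules change", store wall-clock time instead
// and resolve it to an instant only when needed.
interface FloatingEventTime {
  local: string;     // e.g. "2025-08-14T09:00:00" (deliberately no offset)
  timeZone: string;  // e.g. "Europe/London"
}
```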
ISO8601 is really broad with loads of edge cases and differing versions. RFC 3339 is closer, but still with a few quirks.
Not sure why we can't have one of these that actually has just one way of representing each instant.
> He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.
I'm not sure whether the "name was stolen" or whether the zealot concept just never got any traction in production environments due to all the problems it creates.
> I'm not sure whether the "name was stolen" or whether the zealot concept just never got any traction in production environments due to all the problems it creates.
That is a false dichotomy. Fielding gave a name to a specific concept / architectural style, the concept got ignored (rightly or wrongly, doesn’t matter) while the name he coined got recycled for something essentially entirely unrelated.
I'm not super familiar with SOAP and CORBA, but how is SOAP any more coherent than a "RESTful" API? It's basically just a bag of messages. I guess it involves a schema, but that's not more coherent imo, since you just end up with specifics for every endpoint anyways.
CORBA is less "incoherent", but I'm not sure that's actually helpful, since it's still a huge mess. You can most likely become a lot more proficient with RESTful APIs and be more productive with them, much faster than you could with CORBA. Even if CORBA is extremely well specified, and "RESTful" is based more on vibes than anything specific.
Though to be clear I'm talking about the current definition of REST APIs, not the original, which I think wasn't super useful.
SOAP, CORBA and such have a theory for everything (say, authentication). It's hard to learn that theory, you have to learn a lot of it to be able to accomplish anything at all, you have to deal with build and tooling issues, but if you look closely there will be all sorts of WTFs. Developers of standards like that are always implementing things like distributed garbage collection and distributed transactions, which are invariably problematic.
Circa 2006 I was working on a site that needed to calculate sales tax and we were looking for an API that could help with that. One vendor uses SOAP which would have worked if we were running ASP.NET but we were running PHP. In two days I figured out enough to reverse engineer the authentication system (docs weren't quite enough to make something that worked) but then I had more problems to debug. A competitive vendor used a much simpler system and we had it working in 45 min -- auth is always a chokepoint because if you can't get it working 100% you get 0% of the functionality.
HTTP never had an official authentication story that made sense. According to the docs there are Basic, Digest, etc. Have you ever seen a site that uses them? The world quietly adopted cookie-based auth that was an ad-hoc version of JSON Web Tokens; once we got an intellectually coherent spec, snake-oil vendors could spam HN with posts about how bad JWT is because... it had a name and numerous specifics to complain about.
Look at various modern HTTP APIs and you see auth is all across the board. There was the time I did a "shootout" of roughly 10 visual recognition APIs: I got all of them working in 20-30 mins except for Google, where I had to install a lot of software on my machine, trashed my Python setup, and struggled mightily because... they had a complex theory of authentication which was a barrier to doing anything at all.
Agree with most of what you said, except about HTTP Basic auth. That is used everywhere - take a look at any random API and there is roughly 90% chance that this is the authentication mechanism used. For backends which serve a single frontend maybe not so much, but still in places.
> That is used everywhere - take a look at any random API and there is roughly 90% chance that this is the authentication mechanism used.
I have no idea where you got that idea from. I've yet to work on a project where services don't employ a mix of bearer-token authentication schemes and API keys.
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value in citing theses as if they were Monty Python skits?
For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantages, and then having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.
It was buried towards the bottom of the article, but the reason, to me:
Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
Of course, OpenAPI (and perhaps to some extent now AI) also means that clients don't need to be written; they are just generated.
However it is important perhaps to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.
Of course, you don't need any of this. So people built APIs they did need that were not RESTful but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.
Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.
Defining media types seems right to me, but what ends up happening is that you use swagger instead to define APIs and out the window goes HATEOAS, and part of the reason for this is just that defining media types is not something people do (though they should).
Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
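A small sketch of that shape (the field names and the 409 example are just one common convention, not a standard):

```ts
// Resource body with URI local-parts embedded, so clients follow links instead of
// hard-coding every route.
const order = {
  id: "ord_123",
  status: "pending",
  customer: "/customers/cus_42",
  items: ["/orders/ord_123/items/1"],
};

// Standard status code on the wire (say, 409 Conflict), richer detail in the JSON body.
const conflictBody = {
  error: {
    code: "order_already_submitted",
    detail: "Order ord_123 was already submitted and can no longer be edited.",
  },
};
```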
> (...) and part of the reason for this is just that defining media types is not something people do (...)
People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.
In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and this brings absolutely no improvement in the way clients are developed.
>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to the client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to the server, where it belongs. I have built plenty of them; it does require a certain mindset, but it does make many things easier. What problems are you talking about?
> Why do people feel compelled to even consider it to be a battle?
Because words have specific meanings. There's a specific expectation when using them. It's like if someone said "I can't install this app on my iPhone" but they actually have an Android phone. They are similar in that they're both smartphones and overall behave and look similar, but they're still different.
If you are told an api is restful there’s an expectation of how it will behave.
Words derive their meaning from the context in which they are (not) used, which is not fixed and often changes over time.
Few people actually use the word RESTful anymore, they talk about REST APIs, and what they mean is almost certainly very far from what Roy had in mind decades ago.
People generally do not refer to all smartphones as iPhones, but if they did, that would literally change the meaning of the word. Examples: Zipper, cellophane, escalator… all specific brands that became ordinary words.
We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.
> We should probably stop calling the thing that we call REST (...)
That solves no problem at all. We have Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
I'm with you. HATEOAS is great when you have two (or more) independent enterprise teams with PMs fighting for budget.
When it's just you and your two-pizza team, contract-first design is totally fine. Just make sure you can version your endpoints or feature-flag new APIs so it doesn't break your older clients.
GraphQL was "promising" something because it was a thing by a single company.
HATEOAS didn't need to "promise" anything since it was just describing already existing protocols and capabilities that you can see in the links I posted.
And that's how you got POST-only GraphQL which for years has been busily reinventing half of HTTP
Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
You have got it wrong.
Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and their logical names, so when the UI gets the object from the server it can simply check if certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.
> You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows about the semantics of the operations and their logical names, so when the UI gets the object from the server it can simply check if certain operations are available, instead of encoding the permission checking on the client side.
Maybe you should reconsider the way you ask questions on this forum. Your tone is not appropriate and the question itself just demonstrates that you don't understand this topic.
Yes, I'm aware of this header and know the web standards well enough.
In a hypermedia API you communicate to the client the list of all operations in the context of the resource (note: not ON the resource), which includes not only basic CRUD but also operations on adjacent resources (e.g. on a user account you may have an operation of sending a message to this user). Yes, in theory one could use OPTIONS with a non-standard response body to communicate such operations that cannot be expressed in plain HTTP verbs in the Allow header.
However such a solution is not practical, because it requires an extra round trip for every resource. There's a better alternative, which is to provide the list of operations with the resource using one of the common standards - HAL, JSON-LD, Siren etc. The example in my other comment in this thread is based on HAL. If you wonder what that is, look no further than Spring - it has supported HAL APIs out of the box for quite a long time. And of course there's an RFC draft and a Wikipedia article (https://en.wikipedia.org/wiki/Hypertext_Application_Language).
This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.
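A minimal HAL-style sketch of the kind of response being described (relation names like `send-message` are hypothetical):

```ts
// What a user resource might look like with its permitted operations embedded as links.
const userResource = {
  id: "42",
  name: "Alice",
  _links: {
    self: { href: "/users/42" },
    // Present only if the caller may delete; href "." targets the resource's self link.
    delete: { href: "." },
    // An operation on an adjacent resource rather than on the user itself.
    "send-message": { href: "/users/42/messages" },
  },
};
```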
In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform the operation on the resource's self link.
The promise of REST and HATEOAS was best realized not by building RESTful apps like, say, "my airline reservation app" but by building a programming system, spiritually like HTTP + HTML, in which you'd be able to declaratively specify applications, of which "my airline reservation app" could be one and "my sports gambling service" could be another. So some smart person would invent a new application protocol with rich semantics as you did above, and a new type of user agent installed on desktops understands how to present them to the user, and the app on the server just assembles the resources in this rich format, directing users to their choices through the states of the program.
So that never got done (because it's complex) and people started building apps like "my airline reservation app", but then realized that to build that domain app you don't need all the abstraction of a full REST system.
Oh, interesting. So rather than the UI computing what operations should currently be allowed by, say, knowing the user's current role and having rules baked into it about the relationship between roles and UI widgets, the UI can compute what ought to be shown simply off of explicit statements of capability from the server.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... But a full analysis of bandwidth to the client would have to factor in that you otherwise have to ship a whole framework to implement those rules and keep them synchronized between the client and server implementations.
I'd suggest that bandwidth optimization should happen when it becomes critical, controlling the presence of hypermedia controls via a feature flag or header. This way the frontend becomes simpler, so FE dev speed and quality improve, but the backend becomes more complex. The main problem here is that most backend frameworks support RMM level 2, and hypermedia controls require a different architecture to make server code less verbose. Unfortunately REST wasn't understood well, so full support for it wasn't a focus of the open-source community.
Or probably just an Allow header on a response to another query (e.g. when fetching an object, the server could respond with Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it's read-only).
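A quick sketch of how a client might use that (assuming the server really does set Allow per user on ordinary GET responses):

```ts
// Fetch the object and derive UI capabilities from the Allow header.
async function loadObjectWithPermissions(url: string) {
  const res = await fetch(url);
  const allow = (res.headers.get("allow") ?? "GET")
    .split(",")
    .map((verb) => verb.trim().toUpperCase());
  return {
    data: await res.json(),
    canEdit: allow.includes("PUT"),
    canDelete: allow.includes("DELETE"),
  };
}
```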
That’s a neat idea actually, I think I’ll need to read up on the semantics of Allow again…. There is no reason you couldn’t just include it with arbitrary responses, no?
It’s something else. List of available actions may include other resources, so you cannot express it with pure HTTP, you need a data model for that (HAL is one of possible solutions, but there are others)
That API doesn't look like a REST level 3 API. For example, there's an endpoint to create a node. It is not referenced by the root or anywhere else. The GetNode endpoint does include some traversal links in the response, but those links are part of the domain model, not part of the protocol. HAL does offer a protocol by which you enhance your domain model with links with semantics and additional resources.
I always thought soooo many REST implementations and explainers were missing a trick by ignoring the OPTIONS verb, it seems completely natural to me, but people love to stuff things inside of JSON.
> If it's for robots, then _maybe_ there's some value...
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.
The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
It is true, if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard / familiar OpenAPI style json-over-http style API but will reserve a 5% probability that it is actually HATEOAS and have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
People in the real world aren't referring to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. As it stands, in the world outside of this thread nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
To me, the most important nuance really is that just like "hypermedia links" (encoded as different link types, either with Link HTTP header or within the returned results) are "generic" (think that "activate" link), so is REST as done today: if you messed up and the proper action should not be "activate" but "enable", you are in no better position than having to change from /api/v1/account/ID/activate to /api/v2/account/ID/enable.
You still have to "hard code" somewhere what action anything needs to do over an API (and there is more missing metadata, like icons, translations for action description...).
Mostly to say that any thought of this approach being more general is only marginal, and really an illusion!
While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.
>> The important difference being that I need to figure out the peculiarities of each such web API
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
..that was written before swagger/openAPI was a thing. now there's a real spec with real adoption and real tools and folks can let the whole rest-epic-semantic-fail be an early chapter of web devs doing what they do (like pointing at remotely relevant academic paper to justify what they're doing at work)
So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.
I don't find this method of discovery very productive; often, regardless of the API meeting some standard, the real peculiarities are in the logic of the endpoints and not the surface.
I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
What some people call pedantic, others may call precision. I normally just call the not-quite-REST API styles as simply "HTTP APIs" or even "RPC-style" APIs if they use POST to retrieve data or name their routes in terms of actions (like some AWS APIs).
Like all things in life, it's about balance. If you say things like the person I replied to says he does, you are ultimately creating friction for absolutely no gain. Hence why I said being pedantic for the sake of being pedantic, or in other words, being difficult for no good reason. There is a time and place for everything, but over a decade-plus of working and building many different APIs I see no benefit.
I cannot even recall a time where it caused me enough issues to even think about it later on; the real peculiarities are in the business logic. I have had moments where I thought something was strange in an Elasticsearch API, but again it was of no consequence.
100%. The needs of the client rule, and REST rarely meets the challenge. When I read the title, I was like "pfff", REST is crap to start with, why do I care?
It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
Not even just clients, but servers too would block anything not GET/POST/HEAD. And I believe PHP still to this day only has $_GET and $_POST as out-of-the-box superglobals to conveniently get data params. I recall some "REST" APIs would let you use POST for PUT/DELETE requests if you added a special var or header specifying the intended method.
> It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id`.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.
HTTP/JSON API works too, but you can assume it's what they mean by REST.
It makes me wish we stuck with XML-based stuff: it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
> HTTP/JSON API works too, but you can assume it's what they mean by REST.
This is the kind of slippery slope where pedantic nitpickers thrive. They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
> They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending representations other than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST.
> Nowhere is JSON in the name of REpresentational State Transfer.
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where developer reached satisfaction with API design and is no longer bothered with hypermedia controls, schemas etc.
Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)
I recall having to maintain an integration to some obscure SOAP API that ate and spit out XML with strict schemas and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common outcome was that one software stack could not use a service built with another stack.
> - The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I really wish people just used the 200 status code and put encoded errors in the payloads themselves instead of trying to fuse the transport layer's (which HTTP serves as, in this case) concerns with the application's concerns. Seriously, HTTP does not mandate that e.g. "HTTP/1.1 503 Ooops\r\n\r\n" should be stuffed into the TCP's RST packet, or into whatever TLS uses to signal severe errors, for bloody obvious reasons: it doesn't belong there.
Like, when you get a 403/404 error, it's very bloody difficult to tell apart the "the reverse proxy before the server is misconfigured and somebody forgot to expose the endpoint" and "the server executed your request to look up an item perfectly fine: the DB is functional, and the item you asked for is not in there" scenarios. And yeah, of course I could (and should) look at and try to parse the response's body but why? This "let's split off the 'error code' part of the message from the message and stuff it somewhere into the metadata, that'll be fine, those never get messed up or used for anything else, so no chance of confusion" approach just complicates things for everyone for no benefit whatsoever.
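A sketch of the envelope that argument leads to, where the transport stays 200 and the application outcome lives entirely in the body (field names are illustrative):

```ts
// The HTTP layer only says "the server answered"; the application result is in the body.
type ApiResult<T> =
  | { s: "ok"; data: T }
  | { s: "error"; code: string; message: string };

// A lookup that found nothing is still a successful HTTP exchange:
const missingItem: ApiResult<never> = {
  s: "error",
  code: "item_not_found",
  message: "No item with id 42 exists in the database.",
};
```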
The point of status codes is to have a standard that any client can understand. If you have a load balancer, the load balancer can take unhealthy backends out of rotation based on the status code. Similarly, if you have some job scheduler or workflow engine that's calling your API, they can execute an appropriate retry strategy based on the status code. The client in most cases does not care about why something failed, only whether it has failed. Being able to tell apart whether the failure was due to the reverse proxy or the database or whatever is the server's concern, and the server can always do that with its own custom error codes.
> The client in most cases does not care about why something failed, only whether it has failed.
"...and therefore using different status codes in the responses is mostly pointless. Therefore, use 200 and put "s":"error" in the response".
> Being able to tell apart if the failure was due to reverse proxy or database or whatever is the server's concern.
One of the very common failures is for the request to simply never reach "the server". In my experience, one of the very first steps in improving error-handling quality (on the client's side) is to start distinguishing between the low-level errors of "the user has literally no Internet connection" and "the user has connected somewhere, but that thing didn't really speak the server protocol", and the high-level errors of "the client has talked with the application server (using the custom application protocol and everything), and there was an error on the application server's side". Using HTTP status codes for both low- and high-level errors makes such distinctions harder to figure out.
I did say most cases, not all cases. There are some concerns that are considered cross-cutting enough to have made it into the standard. For instance, many clients will handle a 401 by redirecting to an auth flow, handle a 429 rate limit by backing off before retrying, handle a 426 by upgrading the protocol, etc. Not all statuses may be relevant for a given system; you can club several scenarios under a 400 or a 500 and that's perfectly fine for many use cases. But when you have cross-cutting concerns, it's beneficial to follow fine-grained status codes. It gives you a lot of flexibility in how you can connect different parts of your architecture and reduces integration headaches.
I think a more precise term for what you're describing is transport errors vs business errors. You're right that you don't want to model all your business errors as HTTP status codes. Your business scenarios are most certainly numerous and need to be much more fine grained than what the standard offers. But the important thing is all errors business or transport eventually need to map to a HTTP status code because that's the protocol you're ultimately speaking.
What is an unhealthy request? Is searching for a user which was _not found_ by the server unhealthy? Was the request successful? That's where different opinions exist.
Sure, there's some nuance to it that depends on your application, but it's the server's responsibility to do so, not the client's. The status code exists for this reason and the standard also classifies status codes under client error and server error so that clients can determine whether a server is unhealthy simply by looking at the status code.
Eh, if you're doing RPC where the whole request/response are already in another layer on top of HTTP, then sure, 200 everything.
But to me, "REST" means "use the HTTP verbs to talk about resources". The whole point is that for resource-oriented APIs, you don't need another layer. In which case serving 404s for things that don't exist, or 409s when you try to put things into a weird state makes perfect sense.
I use the term "HTTP API"; more general. Context, in light of your definition: In many cases labeled "REST", there will only be POST, or POST and GET, and HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it it still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.
On the other hand, a functional app returning HTTP errors clouds your observability and can hide real errors. It's not always ideal for the client either. 404 specifically is bad. Do I have a wrong id, a wrong address, is it actually a 401/403, or is it just returned by something along the way? The code alone tells you nothing; might as well return 200 for a valid request that was correctly processed.
> HTTP 200 status with an error in JSON is used instead of HTTP status codes
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
It's always better to use GET/POST exclusively. The verb mapping was theoretical, from someone who didn't have to implement it. I've long ago caved to the reality of the web's limited support for most of the other verbs.
Agreed... in most large (non-trivial) systems, REST ends up looking/devolving closer to RPC more and more, and you end up just using GET and POST for most things and end up with a REST-ish-RPC system in practice.
REST purists will not be happy, but that's reality.
I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we'd better start with standard scaffolding for the replies so we can encode the errors and forget about status codes. The only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are at it, also accept queries in both the URL and body parameters.
> - CRUD actions are mapped to POST/GET/PUT/DELETE
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
Do you care? From my point of view, POST, PUT, DELETE, UPDATE, and PATCH all do the same thing. I would argue that if there is a difference, making the distinction in the URL instead of the request method makes it easier to search code and logs. And what's the correct verb anyway?
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
I don't. I could deliver a diatribe on how even the common arguments for differentiating GET & POST don't hold water. HEAD is the only verb with any mild use in the base spec.
On the other hand:
> correct status codes and at least a few are used contrary to the HTTP spec
This is a bigger problem than verb choice & something I very much care about.
I agree.
From what I have seen in corporate settings, using anything more than GET/POST takes the time to deploy the API to a different level. Using UPDATE, PATCH etc. typically involves firewall changes that may take weeks or months to get approved and deployed, followed by a never-ending audit/re-justification process.
Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.
There's no point in idempotency for operations that change the state. DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id. Should you do something like delete by email or product, you have to use another operation, which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
It's just absurd to mention idempotency when the state gets altered.
The defined behaviors are not so well defined for more complex APIs.
You may have an API for example that updates one object and inserts another one, or even deletes an old resource and inserts a new one
The verbs are only very clear for very simple CRUD operations. There is a lot of nuance otherwise that you need documentation for and having to deal with these verbs both as the developer or user of an API is a nuisance with no real benefit
Exactly. What you describe is how I see REST being used today and I wish people accepted the semantic shift and stopped with their well-ackshually. It serves nothing.
> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood
This is an insightful observation. It happens with pretty much everything
As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]
RESTful has gone far beyond the http world. It's the new RPC with JSON payload for whatever. I use it on embedded systems that has no database at all, POST/GET/PUT/DELETE etc are perfectly simple to map into WRITE|READ|Modify|Remove commands. As long as the API is documented, I don't really care about its http origins.
Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
How can you idiomatically do a read only request with complex filters? For me both PUT and POST are "writable" operations, while "GET" are assumed to be read only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
One uses POST and recognizes that REST doesn't have to be so prescriptive.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate-limiting and access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
There's no requirement in HTTP (or REST) to either create a resource or return a Location header.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
The response to POST can return everything you need. The Location header that you receive with it will contain a permanent link for making the same search request again via GET.
Pros: no practical limit on query size.
Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
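A rough sketch of that flow in client code (the /searches endpoint and the response shape here are assumptions for illustration, not anything prescribed above):

    // Sketch: POST the complex filter once; the server stores it and returns a
    // Location permalink that can be re-fetched with a plain GET.
    async function createSearch(filters: unknown): Promise<string> {
      const res = await fetch("/searches", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(filters),
      });
      const location = res.headers.get("Location");
      if (!location) throw new Error("expected a Location header");
      return location; // e.g. /searches/abc123 - shareable and GET-able
    }

    async function runSearch(permalink: string) {
      const res = await fetch(permalink); // plain GET on the stored query
      return res.json();
    }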
If you really want this idiomatically correct, put the data in JSON or another suitable format, zip it, and encode it in Base64 to pass via GET as a single parameter (rough sketch below). To hit browser URL limits you would need such a big query that in many cases you will hit UX constraints first (2048 bytes is 50+ UUIDs or 100+ polygon points, etc.).
Pros: the search query is a link that can be shared, the result can be cached.
Cons: harder to debug, may not work in some cases due to URI length limits.
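For instance, a small sketch of that packing (the parameter name q and the choice of deflate are arbitrary illustrations, not taken from the comment above):

    // Sketch: pack a JSON filter object into one GET parameter.
    import { deflateSync, inflateSync } from "node:zlib";
    import { Buffer } from "node:buffer";

    function encodeFilters(filters: unknown): string {
      // deflate + base64url keeps the URL compact and copy-paste safe
      return deflateSync(JSON.stringify(filters)).toString("base64url");
    }

    function decodeFilters(q: string): unknown {
      return JSON.parse(inflateSync(Buffer.from(q, "base64url")).toString("utf8"));
    }

    // GET /items?q=<encodeFilters(...)> stays a shareable, cacheable link,
    // as long as the result fits under the usual URL length limits.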
HTML FORMs are limited to www-form-encoded or multipart. The length of the query on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.
Do a POST of a query document/media type; the response returns a "Location" pointing at the query resource the server created, along with the data (or some of it) and appropriate link elements to drive the client to retrieve the remainder of the query results.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
I describe mine as a JSON-Based Representational State SOAP API to other internal teams. When their eyes cross I get to work sifting through the contents of their pockets for linting errors and JIRA tickets.
I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.
When I think about some of the RESTy things we do, like returning parts of the response as different HTTP codes, they don't really add that much value vs. keeping things on the same layer. So maybe the biggest value-add so far is JSON, which thanks to its limited nature prevents complication, and the OpenAPI ecosystem, which grew kinda organically to provide pretty nice codegen and clients.
More complexity lessons here: look at oneOf support in OpenAPI implementations, and you will find half of them flat out don't have it, and the other half are buggy even in YOTL 2025.
> I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.
While I generally agree that REST isn’t really useful outside of academic thought experiments: I’ve been in this about as long as you have, and it really isn’t hard. Try reading Fielding's paper once; the ideas are sound and easy to understand, it’s just written with a different vision of the internet than the one we ended up creating.
This is very true. Over my 15 years of engineering, I have never suffered _that_ much when integrating with an API (assuming it exists). So the lack of "HATEOAS" hasn't even been noticeable for me. As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating and don't even notice that they don't have some "discoverable api". As long as I can get the data I need or can make the update I need I am fine.
I think good rest api design is more a service for the engineer than the client.
> As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429)
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "HTTP 200 - 500". They just wrote 500 in the message body; the return code remained 200.
I just consumed an API where errors were marked with a "success": false field.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
This is the real world. You just deal with it (at least I do) because fighting it is more work and at the end of the day the boss wants the project done.
I've seen this a few times in the past but for a different reason. What would happen in these cases was that internally there’d be some cascade of calls to microservices that all get collected. In the most egregious examples it’s just some proxy call wrapping the “real” response.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
I've had frontend devs ask for this, because it was "easier" to handle everything in the same "then" callback. They wanted me to put ANY error stuff as a payload in the response.
> So the lack of "HATEOaS" hasn't even been noticable for me.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
>> If you move user posts to another server, the href changes, nothing else does
It isn't clear what insurance you are really buying here. You can't possibly mean another physical server. Obviously that happens all the time with any site but no one is changing links to point to the actual hardware - just use a normal load balancer. Is it domain name change insurance? That doesn't add up either.
>> If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
Normally you would just fix the problem instead of doing weird application level encryption stuff.
>> The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes
If those "frontend" developers are paying customers as in the case of AWS, OpenAI, Anthropic then you probably want to make your API as simple as possible for them to understand.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I've done this enough times that now I don't really bother engaging.
I don't believe anyone gets it 100% correct ever.
As long as there is nothing egregiously incorrect, I'll accept whatever.
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
I have seen monstrosities claiming to be rest that use HTTP but actually have a separate set of action verbs, nestled inside of HTTP's.
In a server holding a "deck of cards," there might be a "HTTP GET <blah-de-blah>/shuffle.html" call with the side-effect of performing a server-side randomization operation.
I just made that up because I don't want to impugn anyone. But I've seen API sets full of nonsense just like that.
This article also tries to make the distinction of not focusing on the verbs themselves. That the RESTful dissertation doesn’t focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation, and libraries across platforms may do PUT, PATCH, and DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you’ll hit a wall.
Importantly for the discussion, this also doesn't mean the push for REST api's was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
>- There's a decent chance listing endpoints were changed to POST to support complex filters
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.
So your view is that the person who coins a term forever has full rights to dictate the meaning of that term, regardless of what meaning turns out to be useful in practice and gets broadly accepted by the community? And you think that anyone who disagrees with such an ultra-prescriptivist view of linguistics is somehow a "mediocre programmer"? Do I have that right?
I have no dog in this fight, but 90% of technical people around me keep calling authentication authorization no matter how many times I explain the difference to those who even care to listen. It's misused in almost every application developed in this country.
Sometimes it really is bad and "everybody" can be very wrong, yes. None of us are native English speakers (most don't speak English at all), so these foreign sounding words all look the same, it's a forgivable "offence".
No. For all people who use "REST": if reading Fielding is the exception that gets you on HN, then not reading Fielding is what the average person does. Mediocre.
Using Fielding's term to refer to something else is an extra source of confusion which kinda makes the term useless. Nobody knows what the speaker exactly refers to.
The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.
Why should I (or you, for that matter) go and tell them their restful has nothing to do with rest? Why does it matter? They're making perfectly fine HTTP APIs, and they use the industry standard term to describe what kind of HTTP API it is.
It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.
> 1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
This doesn't seem like a useful line of conversation, so I will ignore it.
> 2. So just "HTTP API".
No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.
Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.
Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.
> On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it (...)
Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.
Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.
It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
Most of us are not writing proper RESTful APIs because we’re dealing with legacy software, weird requirements, and the egos of other developers. We’re not able to build whatever we want.
> It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.
I met a DevOps guy who didn't know what "dotfiles" are.
However I'd argue that the people who use the term the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.
If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given these days, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper; good enough to get the job done is going to have to be good enough.
> This is more like people arguing over "proper" English, the point of language is to communicate ideas.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
>misusing it just decreases clarity and hinders communication
There is no such thing as "misusing language". Language changes. It always does.
Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.
Because language always changes.
Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.
You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thoughts in that area only serve to make yourself miserable, since humanity and language at large just aren't consistent.
And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.
Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adopted by the wider population, the more its meaning gets widened and/or changed.
One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.
Pretty much everyone speaks English too, it's the official language of the company. Though we all try to be respectful; if I can't understand them then they tell me again in English. I try to respond as much as possible in German and switch to English if needed - there's also heavy use of deepl on my side which seems to be a lot more idiomatic than Google, MS, or Apple translate.
When I was working on my first HTTP-based API 13 years ago, based on many comments about true REST, I decided to first study what REST should really be. I read Fielding's paper cover to cover, read the RESTful Web Services Cookbook from O'Reilly, and then proceeded to work around Django idioms to provide a REST API. This was a bit of cargo-cult thinking on my end; I didn't truly understand how REST would benefit my service. It took me several more years and several more HTTP APIs to understand that, in the case of these services, there were no benefits.
The vision of an API that is self-discoverable and that works with a generic client is not practical in most cases. I think that perhaps the AWS dashboard, with its multitude of services, has some generic UI code that allows it to handle these services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages, etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command-line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
> The vision of API that is self discoverable and that works with a generic client is not practical in most cases. [..] Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing is left out of the paper. To make truly discoverable API you need to specify protocol for endpoints discovery, operations descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client
You said what I've thought about REST better than I could have put it.
A true implementation of a REST client is simply not possible. Any client needs to know what all those URLs are going to do. If you suddenly add a new action (like /cansofspam/123/frobnicate), a client won't know what to do with it. The client will need to be updated to add frobnication functionality, or else it just ignores it. At best, it could present a "Frobnicate" button.
This really is why nobody has implemented a REST server or client that actually conforms to Fielding's paper. It's just not realistic to have a client that can truly self-discover an API without being written to know what APIs to expect.
> A true implementation of a REST client is simply not possible
Sure it is, it's just not very interesting to a programmer. It's the browser. That's why there was no need to talk about client implementations. And why it's hypermedia driven. It's implicit in the description that it's meant to be discoverable by humans.
Airbnb rediscovered REST when they implemented their Server-Driven UI Platform. Once you strip away all the minutiae about resources and URIs, the fundamental idea of HATEOAS is: ship the whole UI from the server and have the client be generic (the browser). Now you can't have the problem where the frontend gets desynced from the backend.
I'm watching with some interest to see if the LLM/MCP crowd gradually reinvents REST principles. LLMs are the only software we have invented yet which is smart enough to use a REST interface.
I think you're right. APIs have a lot of aspects to them, so describing them is hard. API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
> What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
What was wrong with mapping all nouns and verbs to POST (maybe sometimes GET), and having HTTP response codes other than 200 mean your request failed somewhere between the client code and the application server code? HTTP 200 means the application server processed the request and you can check the payload for an application-level indicator of success, failure, and/or partial success. If you work with enough systems, you end up going back to this, because the least common denominator works everywhere.
Either way, anything that isn't ***** SOAP is a good start.
>API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
Those things aren't always necessary. However, API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work) or this can be done with HATEOAS (just check if the server returned the endpoint; rough sketch below).
HTTP 500 retriable sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporary".
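On the "just check if the server returned the endpoint" point, a minimal sketch assuming a HAL-style _links object (the link names are invented for illustration):

    // Without hypermedia, the client re-implements the server's business rule:
    //   if (order.status === "shipped" && !order.archived) { showCancel(); }
    // With hypermedia, the client only checks whether the action was offered:
    interface HalResource {
      _links?: Record<string, { href: string }>;
    }

    function canCancel(order: HalResource): boolean {
      return order._links?.["cancel"] !== undefined; // "cancel" is a made-up relation
    }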
I think this hits the nail on the head. Complaining that the current understanding of REST isn't exactly the same as the original usage is missing the point that now REST gives people a good idea of what to expect and how to use the exposed interface.
It's actually a very analogous complaint to how object-oriented programming isn't how it was supposed to be and that only Smalltalk got it right. People now understand what is meant when people say OOP even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
The browser is "generic code" that provides the UX we use all day, every day.
REST includes allowing code to be part of the response from a server, there are the obvious security issues, but the browsers (and the standards) have dealt with a lot of that.
It's not just the original REST that usually has no benefits. The industry's reinterpreted version of weak REST also usually has little to no benefits. Who really cares that deleting a resource must necessarily be done with the DELETE HTTP verb rather than simply a POST?
The POST verb exists, there's no reason not to use it to ask a server to delete data.
In fact, there are plenty of reasons not to use DELETE and PUT. Middleboxes managed by incompetent security people block them, they require that developers have a minimum of expertise and don't break the idempotency rule, lots of software stacks simply don't support them (yeah, those stacks are bad, which still doesn't change anything), and most of the internet just doesn't use the benefit they provide (because they don't trust the developers behind the server not to break the rules).
And you just added more work to yourself to interpret the HTTP verb. You already need work to interpret the body of a POST request, so why not put the information of "the operation is trying to delete" inside the body?
You have to represent the action somehow. And letting proxies understand a wee bit of what's going on is useful. That's how you can have a proxy that lets your users browse the web but not login to external sites, and so on.
> To make truly discoverable API you need to specify protocol for endpoints discovery, operations descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client.
Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your languages reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. For clients that are familiar with your API, they are using hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
> Let me ask you this: if I gave you an object X in your favourite OO language, could you use your languages reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
Sure this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering the methods and calling them with discovered parameters. For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware what the available methods do and need to be intentionally designed to provide intuitive ways of calling the methods.
> For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server-end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, as clients no longer depend on specific URL formats (URLs are meaningful only to the server), but depend only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API which adds all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI because these servers don't even have to be physically located behind a load balancer.
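As a small illustration (the hostnames and link names here are made up), the entry resource might hand out absolute URLs that the client simply follows:

    // Sketch: the client never constructs this URL; it only follows the link,
    // so the server can move "posts" to another host at any time.
    const entry = {
      _links: {
        posts:  { href: "https://posts-shard-7.example.com/users/42/posts" },
        search: { href: "https://search.example.com/q" },
      },
    };

    const postsUrl = entry._links.posts.href; // the only thing the client relies on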
There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
Was the client of the service that you worked on fully generic and application independent? It is one thing to be able to change URLs only on the server, without requiring a client code change, and such flexibility is indeed practical benefit that the REST architecture gives us. It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This goal is something that REST architecture tried to address, but IMO it was not realized in practice.
> There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.
Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar application and messaging application entry points provide application-specific code for implementing calendar or messaging app functions. I don't think this is what was proposed in the REST paper, otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
> but the client code (JavaScript/HTML/CSS) is not generic
The HTML/hypermedia returned is never generic, that's why HATEOAS works at all and is so flexible.
The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.
You can start programming any app with a plain hypermedia version and then add JS to make the user experience better, which is the approach that htmx is reviving.
What I don't get from this and some other comments in this thread is that the argument seems to be that REST is practical, every web page is actually a REST app, it has one entry point, all the actions are discoverable by the user from this entry point, and application-specific JavaScript code is allowed by the REST architecture. But then, why are there so many articles and posts (also by Fielding) that complain that people claim to be doing REST, but are actually not doing it?
In all these discussion, I didn't see an article that would actually show an example of a successful application that does REST properly, all elements of it.
While I haven't looked too deeply, I think HN might be an example that follows REST. At least I don't see anything in the functionality that wouldn't be easily fulfilled by following REST with no change in the outward behaviour. A light sprinkle of JS to avoid some page reloads and that's it.
I agree that not many frameworks encourage "true" REST design, but I don't think it's too hard to get the hang of it. Try out htmx on a toy project and restrict yourself to using literally no JS and no session state, and every UI-focused endpoint of your favoured server-side framework returns HTML.
> Generic clients just need to understand hypermedia
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
> Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant.
You don't need a full web browser. Fielding published his thesis in 2000, browsers were almost trivial then, and the needs for programming are even more trivial: you can basically skip any HTML that isn't a link tag or form data for most purposes.
> baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON formatters. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.
And to be clear, if JSON had links as a first class type rather than just strings, then that would qualify as a hypermedia format too.
If I'm going to do HTML that isn't HTML then I might as well not do HTML, there's a lot of sharp edges in that particular markup that I'd prefer to avoid.
> Why aren't you worried about badly formatted JSON?
Because the json spec is much smaller than the HTML spec so it is much easier for the parser to prevalidate and reject invalid JSON.
Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, see if it makes more sense to me.
Fielding's thesis barely mentions HTML (20 times), and usually in the context of discussing standards or why JS beat Java applets, but he discusses hypermedia quite a bit (over 70 times).
If you extended JSON so that URLs (or URIs) were first-class, something like:
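    { "name": "Bob", "orders": @"https://api.example.com/users/1/orders" }

(purely illustrative - the @ prefix is just one invented way to mark a value as a link rather than a string)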
it would form a viable hypermedia format because then you can reliably distinguish references from other forms of data. I think the only reason something like this wasn't done is that Crockford wanted JSON to be easily parsable by existing JS interpreters.
You can work around this with JSON Schema to some extent, where the schema identifies which strings are URLs, but that's just way more cumbersome than the distinction being made right in the format.
> Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs.
But it does though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered for the user, with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
You describe how web pages work; web pages are intended for human interaction, APIs are intended for machine interaction. How can a generic Python or JavaScript client discover these APIs? Such clients will request the JSON representation of a resource, because JSON is intended for machine consumption and HTML is intended for humans. Representations are equivalent: if you request the JSON representation of a /users resource, you get a JSON list; if you request the HTML representation of a /users resource, you get an HTML list, but the content should be the same. Should you return UI controls for modifying the list as part of the HTML representation? If you do so, your JSON and HTML representations are different, and your Python and JavaScript clients still cannot discover what list modification operations are possible; only a human can, by looking at the HTML representation. This is not REST, if I understand the paper correctly.
> You describe how web pages work, web pages are intended for human interactions
Exactly, yes! The first few sentences from Wikipedia...
"REST (Representational State Transfer) is a software architectural style that was created to describe the design and guide the development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave." -- [1]
If you are desiging a system for the Web, use REST. If you are designing a system where a native app (that you create) talks to a set of services on a back end (that you also create), then why conform to REST principles?
Most web apps today use APIs that return JSON and are called by JavaScript. Can you use REST for such services, or does REST require a switch to HTML representations rendered by the server, where each interaction returns a new HTML page? How can such an HTML representation even use PUT and DELETE verbs, when these are available only to JavaScript code? What if I design a system where API calls can be made both from the web and from a command-line client or a library? Should I use two different architectures to cover both use cases?
> Most web apps today use APIs that return JSON and are called by JavaScript. Can you use REST for such services
You kind of could, but it's a bad idea. A core tenet of the REST architecture is that it supports a network of independent servers that provide different services (i.e. webpages) and users can connect to any of them with a generic client (i.e. a web browser). If your mission is to build a specialized API for a specialized client app (a JS web app in your example), then using REST just adds complexity for no reason.
For example, you could define a new content-type application/restaurantmenu+json and build a JS client that renders the content-type like a restaurant's homepage. Then you could use your restaurant browser JS client to view any restaurant's menu in a pretty UI... except your own restaurant's server is the only one that delivers application/restaurantmenu+json, so your client is only usable on your own page and you did a whole lot of additional work for no reason.
> does REST require a switch to HTML representation ... How such HTML representation can even use PUT and DELETE verbs
Fielding's REST is really just an abstract idea about how to build networks of services. It doesn't require using HTTP(S) or HTML, but it so happens that the most significant example of REST (the WWW) is built on HTTPS and HTML.
As in the previous example, you could build a REST app that uses HTTP and application/restaurantmenu+json instead of HTML. This representation could direct the client to use PUT and DELETE verbs if you like, even though these aren't a thing in HTML.
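A purely made-up example of what such an application/restaurantmenu+json document could look like, with the representation itself telling the client which methods to use:

    {
      "name": "Margherita",
      "price": "9.50",
      "_links": {
        "self":   { "href": "/menu/items/1" },
        "update": { "href": "/menu/items/1", "method": "PUT" },
        "remove": { "href": "/menu/items/1", "method": "DELETE" }
      }
    }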
Thanks for the insight. This very well matches my experience from the top comment of this thread. I added discovery-related functionality to a JSON-based API in an attempt to follow REST and didn't see any benefits from the extra work and complexity. Understanding that REST is inherently for HTML (or a similar hypertext-based generic client) and that it doesn't make sense to try to match it with a JSON+JS-based API is very refreshing. Even the article that sparked this discussion gives an example of a JSON-based API with discovery-related functionality added to it.
Keep in mind that Fielding used his "REST" principles to drive work on the release of HTTP 1.1 in 1999. He subsequently codified these RESTful principles in his dissertation in 2000. The first JSON message was sent in 2001. The reason RESTful is perfectly suited to the WWW is because REST drove HTTP 1.1.
Nowadays there are just so many use cases where an architecture is more suited to RPC (and POST). And trying to bend the architecture to be "more RESTful" just serves to complicate.
Personally I never saw "self-discoverable" as a goal, let alone an achievable one, so I think you're overestimating the ambitions of simple client-design.
Notably, the term "discoverable" doesn't even appear in TFA.
From the article: 'The phrase “not being driven by hypertext” in Roy Fielding’s criticism refers to the absence of Hypermedia as the Engine of Application State (HATEOAS) in many APIs that claim to be RESTful. HATEOAS is a fundamental principle of REST, requiring that the client dynamically discover actions and interactions through hypermedia links embedded in server responses, rather than relying on out-of-band knowledge (e.g., API documentation).'
Fielding's idea of REST does seem pretty pointless. "Did you know that human-facing websites are made out of hyperlinked pages? This is so crazy that it needs its own name for everyone to parrot!" But a web application isn't going to be doing much beyond basic CRUD when every individual change in state is supposed to be human-driven. And if it's not human-driven, then it's protocol-driven, and therefore not REST.
REST is a structured description of how HTML/HTTP/the web work, sort of. An example of a non-REST aspect of how a webpage works is how the favicon is by default fetched from a well-known URL, or how cookies use a magic list of domains to decide if two origins are similar enough or not.
Other than things like this, the browser makes very few assumptions about how a website works; it just loads what the HTML tells it to load and shows the content to the user. Imagine the alternative where the browser by default assumed that special pages example.com/login and example.com/logout existed and would sometimes navigate you there by itself (like with a prompt "do you want to log in?")
If you wanted to design a new improved html alternative from scratch you likely would want the same properties.
The issue with REST APIs is that most of what we call APIs are not websites and most of their clients are not browsers but servers, or the JavaScript in the browser, where IDs are generally more useful than links.
REST is incredibly successful: HTML is REST, CSS is REST, even JavaScript itself is REST. But we do not call endpoints that return HTML/CSS/JS/media "APIs"; we call them websites.
Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
- Government portals for publicly accessible information, like legal codes, weather reports, or property records
- Government portals for filing forms and other interactions
- Open data initiatives like Wikipedia and OpenStreetMap
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
The funny thing is, that perfectly describes HTML. Here’s a document with links to other documents, which the user can navigate based on what the links are called. Because if it’s designed for users, it’s called a User Interface. If it’s designed for application programming, it’s called an Application Programming Interface. This is why HATEOAS is kinda silly to me. It pretends APIs should be used by Users directly. But we already have that, it’s called a UI.
The point is that your Web UI can easily be made to be a REST HATEOAS conforming API at the same time. No separate codepaths, no duplicate efforts, just maybe some JSON templates in addition to HTML templates.
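A minimal sketch of that idea with Express-style content negotiation (the route, the "order" template, and the loadOrder helper are all made up for illustration):

    // Sketch: one endpoint, two representations of the same resource.
    import express from "express";

    const app = express();

    // Hypothetical data access; stands in for whatever the app already has.
    async function loadOrder(id: string) {
      return { id, status: "open" };
    }

    app.get("/orders/:id", async (req, res) => {
      const order = await loadOrder(req.params.id);
      res.format({
        "text/html": () => res.render("order", { order }), // browsers get hypermedia
        "application/json": () => res.json(order),         // script clients get JSON
      });
    });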
You're right, pure REST is very academic. I've worked with open/big data, and there's always a struggle to get realistic performance and app architecture design; for anything non-obvious, I'd say there are shades of REST rather than a simple boolean yes/no. Even academics have to produce a working solution or "application", i.e. that which can be actually applied, at some point.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
It's also useful when you're programming a client that is not a web page!
You GET a thing, you dereference fields/paths in the returned representation, you construct a new URI, you perform an operation on it, and so on.
Consider a directory / database application. You can define a RESTful, HATEOAS API for it, write a single-page web application for it -or a non-SPA if you prefer-, and also write libraries and command-line interfaces to the same thing, all using roughly similar code that does what I described above. That's pretty neat. In the case of a non-SPA you can use pure HTML and not think that you're "dereferencing fields of the returned representation", but the user and the user-agent are still doing just that.
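Roughly what that looks like in a non-browser client, sketched here with invented link-relation names ("current-user", "orders"):

    // Sketch: start at the entry point and follow named links; the client never
    // builds URLs by hand, it only dereferences what the server handed out.
    type Resource = { _links?: Record<string, { href: string }> } & Record<string, unknown>;

    async function getResource(url: string): Promise<Resource> {
      const res = await fetch(url, { headers: { Accept: "application/hal+json" } });
      return res.json();
    }

    async function listOrders(entryUrl: string): Promise<Resource> {
      const root = await getResource(entryUrl);
      const user = await getResource(root._links!["current-user"].href);
      return getResource(user._links!["orders"].href);
    }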
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
> Most web APIs are not designed with this use-case in mind.
I wonder if this will change as APIs start to support AI consumption.
Discoverability is very important to an AI, much more so than to a web app developer.
MCP shows us how powerful tool discoverability can be. HATEOAS could bring similar benefits to bare API consumption.
> Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier. It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
It "was perceived as" a barrier because it is a barrier. It "felt easier" because it is easier. The by-the-book REST principles aren't a good cost-benefit tradeoff for common cases.
It is like saying that your microwave should just have one button that you press to display a menu of "set timer", "cook", "defrost", etc., and then one other button you use to select from the menu, and then when you choose one it shows another menu of what power level and then another for what time, etc. It's more cumbersome than just having some built-in buttons and learning what they do.
I actually own a device that works in that one-button way. It's an OBD engine code reader. It only has two buttons, basically "next" and "select" and everything is menus. Even for a use case that basically only has two operations ("read the codes" and "clear a code"), it is noticeably cumbersome.
Also, the fact that people still suggest it's indispensable to read Fielding's dissertation is the kind of thing that should give everyone pause. If the ideas are good there should be many alternative statements for general audiences or different perspectives. No one says that you don't truly understand physics unless you read Newton's Principia.
This is a very good and detailed review of the concepts of REST, kudos to the author.
One additional point I would add is that making use of the REST-ful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:
What's often missed when this topic comes up is the question of who the back end API is intended for.
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page, the end user of that API is the person using a browser. MCP is a more recent example, that protocol is only needed because they want agents talking to APIs they don't own and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping, for much the same reason: I'm building both ends of the app and it's faster to ignore separation of concerns.
UI designers want control over the look of the page in detail. E.g. some actions that can be taken on a resource are a large button and some are hidden in a menu or not rendered in the UI at all.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
My experience with "RESTful APIs" rarely has much to do with the UI. Why even have any API if all you care about is the UI? Why not go back to server driven crap like DWR then?
My experience is that SPAs have been the way to make frontends, for the last eight years or so. May be coming to an end now. Anyway, contact with the backend all went through an API.
During that same time, the business also wanted to use the fact that our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends.
Backenders read about API design, they get the idea they should be REST like (as in, JSON, with different HTTP methods for CRUD operations).
And of course we weren't going to have two separate APIs, that we ran our frontends on our API was another selling point (eat your own dog food, proof that the API can do everything our frontend can, etc).
So: the UI runs on a REST API.
I'm hoping that we'll go back to Django templates with a sprinkle of HTMX here and there in the future, but who knows. That will probably be a separate backend that runs in front of this API then...
> our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends
It is a selling point. A massive one if you're writing enterprise software. It's not merely about "being technical", but mandatory for recurring automated jobs and integration with their other software.
Because UI toolkit independent APIs are more flexible than just returning HTML, and considering only HTML means that you offer subpar experiences on most platforms. Not just mobile software, where web apps are awful, but also desktop, where your software doesn't integrate well with the platform if it's just a webpage.
Returning purely data means being able to transform it in any way you want, no matter where you use it. And depending on your usecase, it also means being able to sell access to it.
1. UX designers operate at every stage of the software development lifecycle, from product discovery to post-launch support (validation of UX hypotheses); they do not exercise control - they work within constraints as part of the team. The location of a specific action in the UI, and the interaction triggering it, are orthogonal to the availability of that action. Availability is defined by the state. If the state restricts certain actions, the UX must reflect that.
2. From architectural point of view, once you encapsulate the checking state behavior, the following will work the same way: "if (state === something)" and "if (resource.links["action"] !== null)". The latter approach will be much better, because in most cases any state-changing actions will require validation on server and you can implement the logic only once (on server).
I have been developing HATEOAS applications for quite a while and maintain HAL4J library: there are some complexities in this approach, but UI design is certainly not THE problem.
I'll never understand why the HATEOAS meme hasn't died.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
> I'll never understand why the HATEOAS meme hasn't died.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed ‘next’ enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.
I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
If you allow the notion of client to include "web browser driven by humans", then what is it about Fielding's dissertation that is considered so important and original in the first place? Sure it's formal and creates some new and precise terminology, but the concept of browsing was already well established when he wrote it.
It formalized the network architecture of distributed hypermedia systems and described interesting characteristics and tradeoffs of that approach. Whether or not it did a GOOD job of that for the layman I will leave to you, only noting the confusion around the topic found, ironically, across the internet.
At that level, it would be infinitely clearer to say, "There is no such thing as a RESTful API, since the purpose of REST is to connect a system to a human user. There is only such a thing as a RESTful UI based on an underlying protocol (HTML/HTTP). But the implementation of this protocol (the web browser) is secondary to the actual purpose of the system, which is always a UI."
There is such a thing as a RESTful API, and that API must use hypertext, as is clearly laid out in Fielding's dissertation. I don't know what a RESTful UI is, but I do know what a hypertext is, how a server can return a hypertext, how a client can receive that hypertext and present it to a user to select actions from.
Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it, although it does change how useful the aspects of REST (in particular, the uniform interface) will be to that client.
I'd say that my web browser is not using hypertext. It is merely transforming it so that I can use the resulting hypermedia, and thereby interface with the remote host. That is, my browser isn't the one that decides how to interface with the remote host; I am. The browser implements the hypertext protocol and presents me a user interface to the remote host.
Fielding might have a peculiar idea of what an "API" is, so that a "human + browser" is a programmatic application, but if that's what he says, then I think his ideas are just dumb and I shouldn't bother listening to him.
> Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it
There's no way for a "script client" to use hypertext without implementing a fixed protocol on top of it, which is allegedly not-RESTful. Unless you count a search engine crawler as such a client, I guess, but that's secondary to the purpose of hypertext.
> An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software.[1] A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.
The server and browser are two different computer programs. The browser understands how to make an API connection to a remote server, take an HTML response it receives (if it gets one of that media type), and transform it into a display to present to the user, allowing the user to choose actions found in the HTML. It then understands how to take actions by the user and turn those into further API interactions with the remote system or systems.
The fact that the browser waits for a human to intervene and make choices (sometimes; consider redirects) doesn't make the overall system any less of a distributed one, with pieces of software integrating via APIs following a specific network architecture, namely what Fielding called REST.
Your intuition that this idea doesn't make a lot of sense for a script-client is correct:
More broadly, I dislike the characterization of the web browser as the "client" in this situation. After all, the browser isn't the recipient of the remote host's services: it's just the messenger or agent on behalf of the (typically human) user, who is the real client of the server, and the recipient of the hypermedia it offers via a hypertext protocol.
That is, the browser may be communicating with the remote server (using APIs provided by the local OS), but it is not itself interfacing with the server, i.e., being offered a service for its own benefit. It may possibly be said that the whole system of "user + browser" interfaces with the remote server, but then it is no longer an application.
(Of course, this is all assuming the classical model of HTML web pages presented to the user as-is. With JS, we can have scripts and browser extensions acting for their own purposes, so that they may be rightly considered "client" programs. But none of these are using a REST API in Fielding's sense.)
OK, I understand you dislike it. But by any reasonable standard the web is a client/server distributed system, where the browsers are the clients. I understand you don't feel like that's right, but objectively that's what is going on. The browser is interfacing with the remote server, via an API discovered in the hypertext responses, based on actions taken by the users. It is no different from, for example, an MMORPG connecting to an API based on user actions in the game, except that here the actions are discovered in the hypertext responses. That's the crux of the uniform interface of REST.
Yes. You used such an api to post your reply. And I am using it as well, via the affordances presented by the mobile safari hypermedia client program. Quite an amazing system!
I also use Google Maps, YouTube, Spotify, and Figma in the same web browser. But surely most of the functionality of those would not be considered HATEOAS.
Yes, very strongly agree. Browsers, through the code-on-demand "optional" constraint on REST, have become so powerful that people have started to build RPC-style applications in them.
Ironic that Fielding's dissertation contained the seed of REST's destruction!
I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
I think OData isn't used, and that's a proper standard and a lower bar to clear. HATEOAS isn't even benefiting from a popular standard, which is both a cause and a result.
You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.
The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
For a traditional web application, HATEOAS is exactly that. HTML as the engine of application state: the application state is whatever the server returns, and we can assess the application state at any time by using our eyeballs to view the HTML. For these applications, HTML is not just a presentation layer; it is the data.
The application is then auto-discoverable. We have links to new endpoints, URLs, that progress or modify the application state. Humans can navigate these, yes, but other programs, like crawlers, can as well.
Not totally sure I understand your question, sorry if I don't quite answer it here.
With REST you need to know a few things like how to find and parse the initial content. I need a browser that can go from a URL to rendered HTML, for example. I don't need to know anything about what content is available beyond that though, the HTML defines what actions I can take and what other pages I can visit.
RPC APIs are the opposite. I still need to know how to find and parse the response, but I need to deeply understand how those APIs are structured and what I can do. I need to know schemas for the API responses, I need to know what other APIs are available, I need to know how those APIs relate and how to handle errors, etc.
And what POST do you send? A bare POST with no data, or with parameters in its body?
What if you also want to GET the status of cancellation? Change the type of `method` to an array so you can `"method": ["POST", "GET"]`?
What if you want to cancel the cancellation? Do you do `POST /orders/123/cancel/cancel HTTP/...`, or `DELETE /orders/123/cancel HTTP/...`?
So, people adapt, turning an originally very pure and "based" standard into something they can actually use. After all, all of these things are meant to be productive rather than ideological.
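For what it's worth, one hedged sketch of how a hypermedia-style response could sidestep the URL-guessing above: the server simply advertises which transitions are currently available. The link names ("cancel", "reinstate") and payload shape here are purely illustrative, not from any standard:

  GET /orders/123 HTTP/1.1

  {
    "id": 123,
    "status": "pending",
    "_links": {
      "self":   { "href": "/orders/123" },
      "cancel": { "href": "/orders/123/cancellation", "method": "POST" }
    }
  }

After a successful cancellation the representation might carry a "reinstate" link instead of "cancel", so the client never has to invent /cancel/cancel; it just follows (or fails to find) the links the server returns.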
As someone who criticized a number of my employers' APIs for not being sufficiently RESTful, especially with regard to HATEOAS, I eventually realized the challenge is the clients. App developers and client developers mostly just want to deal with structured objects that they've built fixed-function UX around (including at the top level), and they want to construct URLs on the client. It takes a special kind of developer to want to build the little mini-browsers everywhere that HATEOAS would require, and to serve it properly from the server side.
I think LLMs are going to be the biggest shift in terms of actually driving more truly RESTful APIs, though LLMs are probably equally happy to take REST-ish responses, since they are able to deal effectively with arbitrary self-describing payloads.
MCP at its core seems to be designed around the fact that you've got an initial request to get the schema and then the payload, which works great for a lot of our not-quite-REST APIs, but you could see, over time, doing away with the extra ceremony and doing it all in one request, effectively moving back in the direction of true REST.
> By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides those semantics. Moving it from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all server APIs are defined in terms of URL patterns routing to handlers. This is explicit hardcoded encouragement to think in a style opposed to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic API's written in a purely HATEOAS style.)
With an HTML body the link will be displayed as content and so will be directly clickable. But if the body is JSON then the client has to somehow generate a UI for the user, which requires some kind of interpretation of the data, so I don’t understand that case.
Just call it an HTTP API and everyone is happy. People forget REST was never intended for APIs in the first place. REST was designed for information systems navigated by humans, not programs.
Similarly, I call Java programs "Object Oriented programs" despite Alan Kay's protests that it isn't at all what Object Orientation was described as in the early papers.
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
I struggle to believe that any API in history has been improved by the developer more faithfully following REST’s strictures. The closest we’ve come to actually decoupled, self describing APIs is MCP, and that required inventing actual AIs to understand them.
The most successful API in history – the World-Wide Web – uses REST principles. That’s where REST came from. It was somebody who was involved in the creation of the early web who looked at it and wrote down a description of what properties of the web made it so successful.
REST on the WWW only works because humans read and interpret the results. Arguably, that’s not an API (Application Programming Interface) but a UI (User Interface).
I have yet to see an API that was improved by following strict REST principles. If REST describes the web (a UI, not an API), and it’s the only useful example of REST, is REST really meaningful?
> REST on the WWW only works because humans read and interpret the results.
This is very obviously not true. Take search engine crawlers, for example. There isn’t a human operator of GoogleBot deciding which links to follow on a case-by-case basis.
> I have yet to see an API that was improved by following strict REST principles.
I see them all the time. It’s ridiculous how many instances of custom logic in APIs can be replaced with “just follow the link we give you”.
It’s not. It’s pretty much the opposite. This is what he’s talking about:
> our clever thinker invents a new, higher, broader abstraction
> When you go too far up, abstraction-wise, you run out of oxygen.
> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
REST is the opposite. REST is “We did this. It worked great! This is why.” And web developers around the world are using this every single day in practical projects without even realising it. The average web developer uses REST, including HATEOAS, all the time, and it works great for them. It’s just when they set out to do it on purpose, they often get distracted by some weird fake definition of REST that is completely different.
That's absolutely not what the essay is about. It's about the misassignment of credit for the success of a technology by people who think the minutiae of the clever implementation was important.
I think you bring up an interesting tangential point that I might agree with: that the people doing the misassigning are how architecture astronauts remain employed.
But the core of Joel Spolsky's three posts on Architecture Astronauts is his expression of frustration at engineers who don't focus on delivering product value. These "Architecture Astronauts" are building layer on layer of abstraction so high that what results is a "worldchanging" yet extremely convoluted system that no real product would use.
> "What is it going to take for you to get the message that customers don’t want the things that architecture astronauts just love to build."
> "this so called synchronization problem is just not an actual problem, it’s a fun programming exercise that you’re doing because it’s just hard enough to be interesting but not so hard that you can’t figure it out."
I don't think this is tangential at all. This whole conversation is exactly the same as Spolsky's point about Napster: it's hard to know what to say to someone who thinks the reason the web was successful was REST, rather than HTML letting you make cool web pages with images in them. And this has played out exactly as you'd expect: nobody cares at all about REST, because it's pure architecture astronaut stuff.
Academically it might be correct, but shipping real features will in most cases be more important than hitting some textbook definition of correctness.
Sure, you’re right: pragmatics, in practice, are more important than theory.
But you’re assuming that there is a real contradiction between shipping features and RESTful design. I believe that RESTful design can in many cases actually increase feature delivery speed through its decoupling of clients and servers and more deeply due to its operational model.
Notice that both of those are plural words. When you have many clients and many servers implementing a protocol a formal agreement of protocol is required. REST (which I will not claim to understand well) makes a formal agreement much easier, but you still need some agreement. However when there is just one server and just one client (I'll count all web browsers as one since the browser protocols are well defined enough) you can go faster by just implementing both sides and testing they work for a long time.
And have a dictionary in my server mapping method names to the actual functions.
All functions take one param (a dictionary with the data), validate it, use it, and return another single dictionary along with an appropriate status code.
You can add versions and such but at that point you just use JSON-RPC.
This kind of setup can be much better than REST APIs for certain use cases.
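A minimal sketch of the kind of dispatcher being described, assuming Node with Express (the framework choice and the method names are illustrative, not the poster's actual code):

  const express = require("express");
  const app = express();
  app.use(express.json());

  // dictionary mapping method names to functions; each takes one params object
  // and returns a single result object plus a status code
  const methods = {
    getBookings: () => ({ status: 200, body: { bookings: [] } }),
    addBooking: (params) =>
      params && params.roomId
        ? { status: 201, body: { bookingId: 123 } }
        : { status: 400, body: { error: "roomId is required" } }
  };

  // single endpoint: the method name selects the function to run
  app.post("/rpc", (req, res) => {
    const handler = methods[req.body.method];
    if (!handler) return res.status(404).json({ error: "unknown method" });
    const { status, body } = handler(req.body.params || {});
    res.status(status).json(body);
  });

  app.listen(3000);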
This makes automating things like retrying network calls hell. You can safely assume a GET will be idempotent, and safely retry on failure with delay. A POST might, or might not also empty your bank account.
If you're doing well-formed RPC over POST, as opposed to ad hoc RPC (which, let's be honest, is the accurate description for many "REST" APIs in the wild), then requests and responses should have something like an `id` field, e.g. in JSON-RPC:
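For instance, a minimal illustrative request/response pair (the id and method name are made up):

  request:  { "jsonrpc": "2.0", "id": "req-42", "method": "addBooking", "params": { "roomId": 7 } }
  response: { "jsonrpc": "2.0", "id": "req-42", "result": { "bookingId": 1001 } }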
Commonly, servers shouldn't accept duplicate request IDs outside of unambiguous do-over conditions. The details will be in the implementations of server and client, as they should be, i.e. not in the specification of the RPC protocol.
When you are retrying an API call, you are the one calling it, so you know whether it's a getBookings() or an addBooking() API. So write the client code based on that.
Instead of the API developer making sure GET /bookings is idempotent, he is going to be making sure getBookings() is idempotent. Really, what is the difference?
As for the benefits, you get a uniform interface, no quirks with URL encoding, no nonsense with browsers pre-loading, etc. It's basically full control with zero surprises.
The only drawback is with cookies. SameSite=Lax depends on you using GET for idempotent actions and POST for unsafe actions. However, I am advocating this only for the "fetch() + createElement() = UI" kind of app, where you will use tokens for everything anyway.
Ok I may have been wrong. I checked the thesis and couldn't see this aspect mentioned. Most of the thesis seems like stuff I agree with. Damn. I'm fighting an impression of REST I had.
It felt easier going through the post after reading these bits near the end:
> The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.
Yeah but why cause needless confusion? The colloquial definition of "RESTful" is better understood as just something you defined using the OpenAPI spec. All other variants of "HTTP API" are likely hot garbage nobody wants anyway.
I politely pointed out that this previous submission "Stop using REST for state synchronization" (https://news.ycombinator.com/item?id=43997286) was not in fact ReST at all, but just an HTTP API and I was down voted for it. You would think that programming is a safe place to be pedantic.
It's all HTTP API unless you're actually doing ReST in which case you're probably doing it wrong.
ReST and HATEOAS are great ideas until you actually stop and think about it, then you'll find that they only work as ideas in some idealized world that real HTTP clients do not exist in.
This doesn’t provide any good arguments for why Roy Fielding’s conception should be taken as the gospel of how things should be done. At best, it points out that what we call REST now isn’t what Roy Fielding wanted.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
Take this quote: “A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations.”
If the client application only understands media types and isn’t supposed to know anything about the interrelationships of the data or possible actions on it, and there is no user that could select from the choices provided by the server, then it’s not clear how the client can do anything purposeful.
Surely, an automated client, or rather its developer, needs a model (a schema) of what is possible to do with the API. Roy Fielding doesn’t address that aspect at all. At best, his REST API would provide a way for the client to map its model to the actual server calls to make, based on configuration information provided by the server as “hypertext”. But the point of such an indirection is unclear, because the configuration information itself would have to follow a schema known and understood by the client, so again wouldn’t be RESTful in Roy Fielding’s sense.
People are trying to fill in the blanks of what Roy Fielding might have meant, but in the end it just doesn’t make a lot of sense for what REST APIs are used in practice.
As I replied to the sibling comment, you're misunderstanding rest and hypermedia. The "schema" is html and the browser is the automated client that is exceptionally good at rendering whatever html the backend has decided to send.
Browsers are interactive clients, the opposite of automated clients. What you are saying supports the conclusion that Roy Fielding’s conception is unsuitable for non-interactive clients. However, the vast majority of real-world REST APIs are targeting automation, hence it doesn’t make sense for them to be “RESTful”.
Fielding was absolutely not saying that his REST was the One True approach. But it DOES mean something.
The issue at hand here is that he coined REST and the whole world is using that term for something completely unrelated (e.g. an HTTP JSON API).
You could start writing in binary here if you thought that that would be a more appropriate way to communicate, but it wouldn't be English (or any humanly recognizable language) no matter how hard you try to say it is.
If you want to discuss whether hypermedia/rest/hateaos is a better approach for web apps than http json APIs, I'd encourage you to read htmx.org/essays and engage with that community who find it to be an enormous liberation.
It may mean something, but Roy Fielding went out of his way, over many years, not to talk about the actual use cases he had in mind. It would have been easy for him to clarify that he was only talking about interactive browser applications. But he didn’t. And the people who came up with HATEOAS didn’t think he was. Nor did any of the blog articles espousing the alleged virtues of RESTfulness. So it’s not surprising that the term “REST” was appropriated for something else. In any case, it’s much too late to change that; it’s water under the bridge.
I’m only mildly interested in discussing hypothetical hypermedia browsers, for which Roy Fielding’s conception might be well and good (but also fairly incomplete, IMO). What developers care about is how to design HTTP-based APIs for programmatic use.
How are web browsers hypothetical? We're using one with rest/hateoas/hypermedia right now...
You don't seem to have even the slightest idea of what you're talking about here. Again, I suggest checking out the htmx essays and their hypermedia.systems book
In a non-interactive case, what is supposed to be reading a response and deciding which links to do something with, or what to do with them?
Let's say you've got a non-interactive program to get daily market close prices. A response returns a link labelled "foobarxyz", which is completely different to what the API returned yesterday and the day before.
How is your program supposed to magically know what to do? (without your input/interaction)
Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives. If the (backend) "program" development team decides that a foobarxyz link should be returned, then that's what is correct.
I suspect that your misunderstanding is because you're still looking at REST as a crud api, rather than what it actually is. That was the point of this article, though it was too technical.
> Why doesn't fielding's conception make sense for non-interactive clients?
> Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives.
Seems like you're contradicting yourself here.
If a non-interactive client isn't supposed to know anything and just "render" whatever it gets back, how can it perform useful work on the result?
If it can't, in which sense does REST still make sense for non-interactive clients?
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
> REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack has ("client decoupling, evolvability, dynamic interaction").
In the simple (albeit niche) case, a UI could populate a list of buttons based on the URIs/verbs that the REST API returns. So the UI would be totally dynamic based on the backend - and so, work pretty generically across REST APIs.
But for a client, UI or otherwise, to make use of a dynamic set of URIs/verbs would require it to either look for a specific keyword (hard coding the intents it can satisfy) or be able to semantically understand the API (which is hard, requires a human).
Oddly, all this stuff is full circle with the AI stuff. The MCP protocol is designed to give AIs text-based descriptions of APIs, so they can reason about how to use them.
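Going back to the "populate a list of buttons" idea above, here is a tiny sketch of what such a generic UI might do, assuming a "_links" convention on the response (the shape is an assumption; nothing in REST itself mandates it):

  // Render one control per action the server advertised, skipping "self"
  const renderActions = (links) =>
    Object.entries(links)
      .filter(([rel]) => rel !== "self")
      .map(([rel, link]) =>
        `<button data-href="${link.href}" data-method="${link.method || "GET"}">${link.title || rel}</button>`
      );

  renderActions({
    self:   { href: "/orders/1" },
    cancel: { href: "/orders/1/cancellation", method: "POST", title: "Cancel order" }
  });
  // -> [ '<button data-href="/orders/1/cancellation" data-method="POST">Cancel order</button>' ]

The hard part the comment identifies remains: knowing what "cancel" means still requires either a hard-coded keyword or a human (or an LLM) to interpret it.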
The simplest case, and the most common, is that of a browser rendering the HTML response from a website request. The HTML contains the URL links to other APIs that the user can click on. Think of navigating any website.
Htmx essays have already been mentioned, so here are my thoughts on the matter. I feel like to have a productive discussion of REST and HATEOAS, we must first agree on the basics. Repeating my own comment from a couple of weeks ago: H stands for hypermedia, and hypermedia is a type of media that uses a common format for representing some server-driven state and embedding hypermedia controls, which are presented by a back-end-agnostic hypermedia client to the user for discoverability and interaction.
As such, JSON-driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means there's no way to implement a hypermedia client that can present those controls to the user and facilitate interactions. Is there such an implementation? Yes: HTML is the hypermedia, <input>s and <button>s are the controls, and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions and overcomplication.
The Richardson maturity model is a clear indication of those problems; I see it as an admission of "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas - resource-based URLs are nice, using features of HTTP is reasonable - but under the name REST it leads to constant arguments between the "dissertation" crowd and the "the industry has moved on" crowd. The worst/best part is that both crowds are totally right, and this argument will continue for as long as we use HTTP.
I made it sound like JSON APIs can't be REST in principle, which is of course not true. If someone were to create a hypermedia control specification for JSON and implement a hypermedia client for it, it would of course match the definition. But since we don't have such a specification and a compliant client at this time, we can't do REST as it is defined.
> If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them and move on.
I think we should focus less on API schemas and more on just copying how browsers work.
Some examples:
It should be far more common for HTTP clients to have well-supported and heavily used cookie jar implementations.
We should lean on Accept headers much more, especially with multiple MIME types and/or wildcards.
HTTP clients should have caching plugins to automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
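On the Accept-header point above, a small illustration (the vendor media type and q-values are made up): a client can state several acceptable representations, including wildcards, and let the server pick the best match:

  GET /reports/42 HTTP/1.1
  Accept: application/vnd.example.report+json;q=0.9, application/json;q=0.5, */*;q=0.1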
My biggest takeaway from Roy Fielding's dissertation wasn't how to construct a RESTful architecture or what is the one true REST, but how to understand any computer architecture -- particularly their constraints -- in order to design and implement appropriate systems. I can easily identify anti-patterns (even in implementations) because they violate the constraints which in turns, takes away from the properties of the architecture. This also quickly allows me to evaluate and understand libraries, runtimes, topologies, and so forth.
I used to get caught up in what is REST and what is not, and that misses the point. It's similar to how Christopher Alexander's ideas about pattern languages get used now in ways that miss the point. Alexander was cited in the introductory chapter of Fielding's dissertation. These are all very big ideas with broad applicability and great depth.
When combined with Promise Theory, this gives a dynamic view of systems.
It is not sufficient to crawl the API. The client also needs to know how to display the forms which collect the data for the links presented by the API. If you want to crawl the API, you also have to crawl the whole client GUI.
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by writing it directly into the url bar), the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark.
This is great for APIs that only have a few actions that can be taken on a given resource.
REST APIs, then, are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best APIs I've seen mix and match both patterns: RESTful endpoints for data, and "function call" endpoints for often-used actions like voting, bulk actions, and other things the client needs to be able to do but where you want the API to be in control of how it is applied.
> REST APIs, then, are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
I don't disagree, but I've found (delivering LoB applications) that they are not homogeneous: the way REST is implemented right now makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e. a greenfield application) you can do better (in terms of reliability, product feature velocity, etc.) by not using a tree exchange form (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Bots, browsers that preload URLs, caching (both browser and backend and everything in between), the whole infrastructure of the Web that assumes GET never mutates and is always safe to repeat or serve from cache.
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
Then that does not conform to the HTTP spec. GET endpoints must be safe, idempotent, and cacheable. Otherwise you open the site up to cases where web crawlers/scrapers may wreak havoc.
Indeed, user-embedded pictures can fire GET requests while they cannot make POST requests. But this is not a problem if you don't allow users to embed pictures, or if you authenticate the GET request somehow. Anyway, GET requests are just fine.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, think about that the JS could also have for example created a form with the vote page as the target and clicked on the submit button. All completely unrelated to CORS.
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
That any bot crawling your website is going to click on your links and inadvertently mutate data.
Reading your original comment I was thinking "Sure, as long as you have a good reason for doing it this way, anything goes", but then I realized that you prefer to do it this way because you don't know any better.
If you rely on the HTTP method to authenticate users to mutate data, you are completely lost. Bots and humans can send any method they like. It's just a string in the request.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
> If you rely on the HTTP method to authenticate users to mutate data, you are completely lost
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method - that is the convention - and, yes, a malicious bot may also try to send POST and DELETE requests. For the latter, this is why you authenticate users, but that is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it except if it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
Because HTTP is a lot more sophisticated than anyone cares to acknowledge. The entire premise of "REST", as it is academically defined, is an oversimplification of how any non-trivial API would actually work. The only good part is the notion of "state transfer".
Not a REST API, but I've found it particularly useful to include query parameters in a POST endpoint that implements a generic webhook ingester.
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
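Something like the following, as a sketch (the path and parameter names are invented): the remote application is configured to deliver its events to

  POST /webhooks/ingest?source=billing&team=payments HTTP/1.1

and the single handler reads `source` and `team` from the query string, so adding a new event source becomes a configuration change on the remote side rather than a new route in our code.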
I used to do that, but I've been fully converted to the REST and CRUD gang. Once you establish the initial routes and objects, it's really easy to mount everything else on them and move fast with changes. Also, using tools like httpie it's super easy to test anything right in your terminal.
I've not done much with GraphQL myself, but a lot of my colleagues have, and they have all sworn off it except in very specific circumstances.
My impression is that it's far too flexible. Connecting it up to a database means you're essentially running arbitrary SQL queries, which means whoever is writing the GraphQL queries also needs to know how those queries will get translated to SQL, and therefore what the database structure/performance characteristics are going to be. That's a pain if you're using GraphQL internally, because now your queries are spread out, potentially across multiple codebases. But if you expose the GraphQL API publicly, now you don't even know what queries people are going to want to use.
Mostly these days we use RPC-style APIs for internal APIs where we can control everything and be really precise about what gets called when and where. And then more "traditional" REST/resource-oriented endpoints for public APIs where we might have more general queries.
The thing to internalize about "true" REST is that HN (and the rest of the web) is really a RESTful web service. You visit the homepage, a hypermedia format is delivered to a generic client (your browser), and its resources (pages, sections, profiles, etc) can all be navigated to by following links.
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on-demand", as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it come to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
The folly is in thinking that there's some design pattern out there that's better than REST, without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and then hard-coded it in a curl command, instead of conscientiously and meticulously reading documentation, semantically looking up the URL at runtime, following redirects, and handling failures gracefully.
I see a lot of people who read Fielding's thesis and found it interesting.
I did not find it interesting. I found it excessively theoretical and proscriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")
I always urge software architects (are they still around?) and senior engineers in charge of APIs to think very carefully about the consumers of the API.
If the only consumer is your own UI, you should use a much more integrated RPC style that helps you be fast. Forget about OpenAPI etc: Use a tool or library that makes it dead simple to provide data the UI needs.
If you have a consumer outside your organization: a RESTish API it is.
If your consumer is supposed to be generic and can "discover" your API, RESTful is the way to go.
But no one writes generic ones anymore. We already have the ultimate one: the browser.
HATEOAS might make a comeback, as it might be useful to expose an API to AI agents that would browse a service.
On the other hand, agents could just as well understand an OpenAPI document, as the description of each path/schema can be much more verbose than HATEOAS. There is a reason why OpenAPI-style APIs are favored: less verbosity in the payload. If the cost of agents is based on their consumption/production of tokens, verbosity matters.
This post follows the general, highly academic/dogmatic, tone that I’ve seen when certain folks talk about REST. Most of the article talks about what _not_ to do, and has very little details on how to actually do it.
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an API done the “wrong” vs. the “right” way. Say I have a TODO API, how do I make it so that it uses HATEOAS (also, who’s coming up with these acronyms… smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
Agreed. I wish there were some examples to better understand what the author means.
Like, in a web app, do I have any prior knowledge about the "_links" actions? Do I know that the server is going to return the actions "self" and "activate"? Is the idea to hide the routes from the user until the API call, while they still know that the API could return actions like "self", "activate" or "deactivate"? How do you communicate that an action requires a specific body? For example, the activate call is done with POST and expects a JSON body with a date inside. How do you tell that to the user?
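One hedged sketch of what such a response could look like: the "_templates" shape below is borrowed loosely from HAL-FORMS, and all field names are illustrative, not anything the article prescribes:

  {
    "id": "user-17",
    "status": "inactive",
    "_links": {
      "self": { "href": "/users/17" }
    },
    "_templates": {
      "activate": {
        "method": "POST",
        "target": "/users/17/activation",
        "properties": [
          { "name": "effectiveDate", "type": "date", "required": true }
        ]
      }
    }
  }

Even then, the client still needs out-of-band knowledge of what "activate" and "effectiveDate" mean, which is exactly the gap the questions above are pointing at.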
> However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way.
Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide context for the advantages of "true" ReST architecture, and contrived.
Some of this is sensible. I especially like the idea of an interactive starting point which gives you useful links and info, but I can see how that would be difficult with more complex calls — showing examples and providing rich documentation would be difficult. Otherwise, just follow the recommendations for REST verbs (so what if they mostly map to CRUD?), and document your API well. Tools like Swagger really make this quite easy.
"Reductio Ad Roy Feldium" is the internet addage[1] that as in a hacker news discussion about a rest api grows, the probabilty someone cites roy felding's dissertation approaches 1. I'm glad this post cut right to the chase!
[1] ok it's not an internet adage. I invented it and joke with friends about it
I think that all of the unemployed CS grads are rediscovering the "best practices" of the last 40 years in lieu of working. Well, just remember: every HATEOAS-conforming REST API, every chaos-monkey-enabled microservice-oriented architecture, every app where someone spent tons of time hacking down the cyclomatic complexity score, every meticulously UML-diagrammed four-tier architecture, has had its main engineers laid off and replaced by a crack team of junior engineers who adulterated it down to spaghetti code. In the post-AI world, features talk, architecture walks.
In my experience REST is just a code word for a distributed glob of function calls which communicate via JSON. It's a development and maintenance nightmare.
I am wondering if anyone can resolve this misunderstanding of REST for me…
If the backend provides a _links map which contains "orders", for example, in the list - doesn't the front end still need to understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?
I tried to follow the approach with hypermedia and discoverable resources/actions in my hobby projects. But I "failed" at the point where this would mean additional HTTP calls from a client to "discover" a resource and its actions. Given the relative latency of an HTTP call, this was not convincing to me.
ElasticSearch and OpenSearch are certainly egregiously guilty of this. Their API is an absolute nightmare to work with if you don't have a supported native client. Why such a popular project doesn't have an easy-to-use OpenAPI spec document in this day and age is beyond me.
If you want to produce better APIs, try consuming them. A lot of places have this clean split between backend and frontend teams. They barely talk to each other sometimes. And a pattern I've seen over and over again is that some product manager decides feature X is needed. The backend team goes to work and delivers some API for feature X and then the frontend team has to consume the API. These APIs aren't necessarily very good if the backend people don't understand how the frontend uses them.
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
HATEOAS + a Document Type Description which includes an (ideally internationalized) natural-language description in addition to the machine-readable one is what MCP should have been.
And not everything in reality maps nicely to hypermedia conventions. The problem with REST is trying to shoehorn a lot of problems into a set of abstractions that were initially created for documents.
At some point, we built REST clients so generic they could handle nearly any use case. Honestly, building truly RESTful APIs has been easy for ages, just render HTML on the server and send it to the browser. That's 100% REST with no fuss.
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
> A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.]
What the heck does this mean? Does it mean that my API isn’t REST if it can’t interpret “http://example.com/path/to/resource” in the same way it interprets “COM<example>::path.to.resource”? Is it saying my API should support HTTP, FTP, SMB, and ODBC all the same? What am I missing?
As far as I know, the only actual REST implementation as Fielding envisioned it (a system where you send the entire representational state of the program with each request) is the system Fielding coined the term REST to describe: the WEB.
Has any other system done this, where you send the whole application state with each request? Project Xanadu?
I do find it funny how Fielding basically said "hey, look at the web, isn't that a weird way to structure a program, let's talk about it" and everyone sort of suffered a collective mental brain fart and replied "oh, you mean HTTP, got it".
RESTful APIs are not RESTful because REST is meh. Our APIs include HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads).
It’s interesting that Stripe still even uses form-post on requests.
And rather than just using next-href your clients append next-id to a hardcoded things base URL? That seems like way more work than doing it the REST way.
I just spent a good portion of the day trying to figure out how GCP's allegedly "RESTful" (it's not) API names resources. If only there was a universal identifier for resources…
But no, a service account in GCP has no less than ~4 identifiers. And the API endpoint I wanted to call needed to know which resource, so the question then is "which of the 4 identifiers do I feed it?" The right answer? None of them.
The "right" answer is that you need to manually build a string, a concatenate a bunch of static pieces with the project ID and the object's ID to form a more IDer ID. So now we need the project ID … and projects have two of those. So the right answer is that exactly 1 of the 8 different permutations works (if we don't count the constant string literals involved in the string building).
Just give me a URI, and then let me pass that URI, FFS.
We collectively glazed over Roy Fielding's dissertation, didn't really see the point, liked the sound of the word "REST" and used it to describe whatever we wanted to do with http / json. Sorry, Roy, but you can keep HATEOAS - no one is going to take that from you.
LMAO at all the companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
I spent years fussing about getting all of my APIs to fit the definition of REST and to do HATEOAS properly. I spent way too much time trying to frame everything as an action on a resource. Now, don't get me wrong: it is quite helpful to try to model things as stateless resources with a limited set of actions on them, and to think about idempotency for specific actions, in ways I don't think we did properly in the SOAP days (at least I didn't). And in many cases it led to less brittle interfaces which were easier to reason about.
I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases which are not resources or are not stateless, and sure, you can find some obtuse way to make them into resources, but that at times either leads to bad abstractions that don't convey the vocabulary of the underlying system (and thus over time creates a rift in context between the interface and the underlying logic), or we end up exposing underlying implementation details just because they are easier to model as resources.
"REST" is our industry's most successful collective delusion: everyone knows it's wrong, everyone knows we're using it wrong, and somehow that works better than being right.
Eh. I won't write "pure" REST, because it's difficult to use, and I don't know if I have ever seen a tool that uses it as such. I know why it was designed that way, but I have never needed that.
I tend to use REST-like methods to select mode (POST, GET, DELETE, PATCH, etc.), but the data is usually a simple set of URL arguments (or associated data). I don't really get too bent out of shape about ensuring the data is an XML/JSON/Whatever match for the model structure. I'll often use it coming out, but not going in.
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.
But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.
This seems to mostly boil down to including links rather than just IDs and having the client "just know" how to use those IDs.
Django Rest Framework seems to do this by default. There seems very little reason not to include links over hardcoding URLs in clients. Imagine just being able to restructure your backend and clients just follow along. No complicated migrations etc. I suspect many people just live with crappy backends because it's too difficult to coordinate the rollout of a v2 API.
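Concretely, the difference is just this (hypothetical payloads, not DRF's exact output):

    // Hypothetical payloads: bare IDs vs. links the client can simply follow.
    const withIds = {
      id: 7,
      author: 3,  // the client must "just know" how to turn 3 into a URL
    };
    const withLinks = {
      url: "https://api.example.com/articles/7/",
      author: "https://api.example.com/users/3/",  // follow it, wherever it points today
    };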
However, this doesn't cover everything. There's still a ton of "out of band" information shared between client and server. Maybe there's a way to embed Swagger-style docs directly into an API and truly decouple server and client, but it would seem to take a lot more than just using links over IDs.
Still I think there's nothing to lose by using links over IDs. Just do it on your next API (or use something like DRF that does it for you).
I built a company that actually did implement HATEOAS in our API. It was a nightmare. So much processing time was spent on every request setting up all the URLs and actions that could be taken. And no one used it for anything anyways. Our client libraries used it, but we had full control over them anyways and, if anything, it made the libraries more complex.
While I agree it's an interesting idea in theory, it's unnecessary in the real world and has a lot of downsides.
Unless you really read and followed the paper, just call it a web api and tell your sales people to do the same. Calling it REST makes you sound like a manager that hasn't done any actual dev in 15 years.
I find it pretty shocking that this was written in 2025 without a mention of the fact that the only clients that are evolvable enough to interface with a REST API can be categorized to these three types:
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally designed against the common use case of a static, non-evolving client such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.
This might come across as a rant, but it is meant to be informative so I'll tell you what REST and HATEOAS are good for: Building micro browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORM into a html form or an android form or a form inside your MMO which happens to have a registration form built into the game itself, rather than say the launcher.
Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast majority of situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions by equating the simplest and most irrelevant problem with the hardest problem in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than an API where the "/user/{id}/orders" link happens to be stored under _links. The server is allowed to give the "/user/{id}/orders" link a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure oblivious way
> the only clients that are evolvable enough to interface with a REST API can be categorized to these three types
You mention swagger. Swagger is an anti-REST tech. Defining a media type is the REST equivalent of writing a swagger API description.
If you can define an API in swagger, you can define one via a media type. It's just that the latter is generally not done because to do it requires a JSON schema (or similar) and people mostly don't use that or think of that as how one defines an API.
Boss: we need an API for XYZ
Employee: sure thing boss, I'll write it in swagger and implement by Friday!
> Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
We're using actual REST right now. That's what SSR HTML uses.
The rest of your (vastly snarkier) diatribe can be ignored.
And, yet, you then said the following, which seems to contradict the rest of what you said before it...
> Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
> rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
Well, besides that, I don't see how REST solves the problem it says it addresses. So your user object includes an activate field that describes the URI you hit to activate the user. When that URI changes, the client doesn't even notice, because it queries for a user and then visits whatever it finds in the activate field.
Then you change the term from "activate" to "unslumber". How does the client figure that out? How is this a different problem from changing the user activation URI?
The system won't be able to remember why the user was created unless the content of the POST includes data saying it was a signup. That's important for any type of reporting like telemetry and billing.
So then one gets to bike-shed if "signup" it is in the request path, query parameters, or the body. Or that since the user resource doesn't exist yet perhaps one can't call a method on it, so it really should be /users:signup (on the users collection, like /users:add).
Provided one isn't opposed to adopting what was bike-shedded elsewhere, there is a fairly well specified way of doing something RESTful, here is a link to its custom methods page: https://google.aip.dev/136. Its approach would be to add information about signup in a request to the post to /users: https://google.aip.dev/133. More or less it describes a way to be RESTful with HTTP/1.1+JSON or gRPC.
That's correct, the example you are giving represents bike-shedding among request path variations.
I assumed most readers of my comment would get the idea that /users/signup is ambiguous as to whether that is supposed to be another resource, while /users:signup is less so.
Absolutely, it can work well when there is a team devoted to the schema registry and helping with adoption. But it needs to be worth it to be able to amortize the resources, so probably best for bigger organizations.
I keep pining for a stripped-down gRPC. I like the *.proto file format, and at least in principle I like the idea of using code-generation that follows a well-defined spec to build the client library. And I like making the API responsible for defining its own error codes instead of trying to reuse and overload the transport protocol's error codes and semantics. And I like eliminating the guesswork and analysis paralysis around whether parameters belong in the URL, in query parameters, or in some sort of blob payload. And I like having a well-defined spec for querying an API for its endpoints and message formats. And I like the well-defined forward and backward compatibility rules. And I like the explicit support for reusing common, standardized message formats across different specs.
But I don't like the micromanagement of field encoding formats, and I don't like the HTTP/2 streaming stuff that makes it impossible to directly consume gRPC APIs from JavaScript running in the browser, and I don't like the code generators that produce unidiomatic client libraries that follow Google's awkward and idiosyncratic coding standards. It's not that I don't see their value, per se. It's more that these kinds of features create major barriers to entry for both users and implementers. And they are there to solve problems that, as the continuing predominance of ad-hoc JSON slinging demonstrates, the vast majority of people just don't have.
Brb, I'm off to invent another language independent IDL for API definitions that is only implemented by 2 of the 5 languages you need to work with.
I'm joking, but I did actually implement essentially that internally. We start with TypeScript files as its type system is good at describing JSON. We go from there to JSON Schema for validation, and from there to the other languages we need.
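A rough idea of what one of those "service definition" files can look like (all names made up):

    // Made-up example: plain TypeScript types that later get converted to
    // JSON Schema and from there to the other languages.
    export interface CreateInvoiceRequest {
      customerId: string;
      currency: "EUR" | "USD";
      lines: { sku: string; quantity: number }[];
    }

    export interface CreateInvoiceResponse {
      invoiceId: string;
      total: number;
    }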
> Brb, I'm off to invent another language independent IDL for API definitions that is only implemented by 2 of the 5 languages you need to work with.
Watch out, OpenAPI is now 3 versions deep and supports both JSON and YAML.
If younger me had been told, "one day kid, you will miss working with XML", I'd have laughed.
YAML made me miss JSON. JSON made me miss XML.
The pattern I observe is that in old industries, people who paid the cost, try to come up with a big heavy solution (xml, xsd, xpath), but newcomers will not understand the need, and bail onto simpler ideas (json), until they hit the wall and start to invent their own (jsonschema, jquery).
same goes for java vs php/python
Definitely. And often, it's the right call, or the thing wouldn't generate any business value (such as money) at all in a reasonable time.
But boy, what messy spaghetti we get for it sometimes.
(Invent their own, badly, at first. Sigh.)
anything I could read to imitate that workflow ?
I haven't written anything up - maybe one day - but our stack is `ts-morph` to get some basic metadata out of our "service definition" typescript files, `ts-json-schema-generator` to go from there to JSON Schema, `quicktype-core` to go to other languages.
Schema validation and type generation vary by language. When we need to validate schemas in JS/TS land, we're using `ajv`. Our generation step exports the JSON Schema to a valid JS file, and we load that up with AJV and grab schemas for specific types using `getSchema`.
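A minimal sketch of that validation step, with module names and schema ids as assumptions rather than our actual layout:

    import Ajv from "ajv";
    import { schema } from "./generated/schema.js";  // the JSON Schema exported as a JS module

    const ajv = new Ajv();
    ajv.addSchema(schema, "api");

    // Grab the compiled validator for one specific type...
    const validateInvoice = ajv.getSchema("api#/definitions/CreateInvoiceRequest");

    // ...and run it over untrusted input.
    const payload: unknown = JSON.parse('{"customerId":"c1","currency":"EUR","lines":[]}');
    if (validateInvoice && !validateInvoice(payload)) {
      console.error(validateInvoice.errors);
    }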
I evaluated (shallowly) for our use case (TS/JS services, PHP monolith, several deployment platforms):
- typespec.io (didn't like having a new IDL, mixes transport concerns with service definition)
- trpc (focused on TS-only codebases, not multi language)
- OpenAPI (too verbose to write by hand, too focused on HTTP)
- protobuf/thrift/etc (too heavy, we just want JSON)
I feel like I came across some others, but I didn't see anyone just using TypeScript as the IDL. I think it's quite good for that purpose, but of course it is a bit too powerful. I have yet to put in guardrails that will error out when you get a bit too type happy, or use generics, etc.
Can't thank you enough. I'm gonna try these and see.
It's not that we like it, it's just that most other solutions are so complex and difficult to maintain that repetition is really not that bad a thing.
I was however impressed with FastAPI, a python framework which brought together API implementation, data types and generating swagger specs in a very nice package. I still had to take care of integration tests by myself, but with pytest that's easy.
So there are some solutions that help avoid schema duplication.
FastAPI + SQLModel does remove many layers, that is true, but you still have other services requiring lots of boilerplate.
My experience is that all of these layers have identical data models when a project begins, and it seems like you have a lot of boilerplate to repeat every time to describe "the same thing" in each layer.
But then, as the project evolves, you actually discover that these models have specific differences in different layers, even though they are mostly the same, and it becomes much harder to maintain them as {common model} + {differences}, than it is to just admit that they are just different related models.
For some examples of very common differences:
- different base types required for different languages (particularly SQL vs MDW vs JavaScript)
- different framework or language-specific annotations needed at different layers (public/UNIQUE/needs to start with a capital letter/@Property)
- extra attached data required at various layers (computed properties, display styles)
- object-relational mismatches
The reality is that your MDW data model is different from your Database schema and different from your UI data model (and there may be multiple layers as well in any of these). Any attempt to force them to conform to be kept automatically in sync will fail, unless you add to it all of the logic of those differences.
Anybody ever worked with model-driven methodologies? The central model is then used to derive the other definitions.
Having 12 different independent copies means nobody on your 30 people multi-region team is blocked.
I remember getting my hands on a CORBA specification back as a wide-eyed teen thinking there is this magical world of programming purity somewhere: all 1200 pages of it, IIRC (not sure what version).
And then you don't really need most of it, and one thing you need is so utterly complicated, that it is stupid (no RoI) to even bother being compliant.
And truly, less is more.
What RPC mechanisms, in your opinion, are the most ergonomic and why?
(I have been offering REST’ish and gRPC in software I write for many years now. With the REST’ish api generated from the gRPC APIs. I’m leaning towards dropping REST and only offering gRPC. Mostly because the generated clients are so ugly)
Just use gRPC or ConnectRPC (which is basically gRPC but over regular HTTP). It's simple and rigid.
REST is just too "floppy", there are too many ways to do things. You can transfer data as a part of the path, as query parameters, as POST fields (in multiple encodings!), as multipart forms, as streaming data, etc.
Just not in C++ code. gRPC has a bajillion dependencies, and upgrades are a major pain. If you have a dedicated build team and they are willing to support this - sure, go ahead and use it.
But if you have multiple targets, or unusual compilers, or don't enjoy working with build systems, stay away from complex stuff. Sure, REST may need some manual scaffolding, but no matter what your target is, there is a very good chance it has JSON and HTTP libs.
> REST is just too "floppy", there are too many ways to do things.
I think there is some degree of confusion in your reply. You're trying to compare a framework with an architecture style. It's like comparing, say, OData with rpc-over-HTTP.
You can mess up grpc just as much. Errors are a good place to start.
Wait until you hear about errors in REST...
What about errors in REST? It's HTTP status codes, and implementations are free to pick whatever approach they want for response documents. Some frameworks default to using Problem Details responses, but no one forces that.
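For reference, the Problem Details shape (RFC 9457, formerly RFC 7807) is small; a sketch:

    // Problem Details for HTTP APIs: roughly this shape, plus any extension members.
    interface ProblemDetails {
      type?: string;      // URI identifying the problem type
      title?: string;     // short human-readable summary
      status?: number;    // HTTP status code, repeated for convenience
      detail?: string;    // explanation of this specific occurrence
      instance?: string;  // URI identifying this specific occurrence
      [extension: string]: unknown;
    }

    // Roughly the example from the RFC, served as application/problem+json:
    const example: ProblemDetails = {
      type: "https://example.com/probs/out-of-credit",
      title: "You do not have enough credit.",
      status: 403,
      detail: "Your current balance is 30, but that costs 50.",
      instance: "/account/12345/msgs/abc",
    };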
You can't rely on them because they can come from middleboxes (load balancers, proxies, captive portals in hotels, etc.).
So you can't rely on having structured errors for common codes such as 401/403/404, it's very typical to get unstructured text in payloads for such errors. Not a few REST bindings just fail with unhelpful serialization exceptions in such cases.
People get stuff done despite all that.
I'd agree with your great-grandparent post... people get stuff done because of that.
There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards that sloppyREST has casually dispatched (pun fully intended) in the real world. After some 30+ years of highly prescriptive RPC mechanisms, at some point it becomes time to stop waiting for those things to unseat "sloppy" mechanisms and it's time to simply take it as a brute fact and start examining why that's the case.
Fortunately, in 2025, if you have a use case for such a system, and there are many, many such valid use cases, you have a number of solid options to choose from. Fortunately sloppyREST hasn't truly killed them. But the fact that it empirically dominates them in the wild even so is now older than many of the people reading this, and it bears examination in that light rather than casual dismissal. It's easy to list the negatives, but there must be some positives that make it so popular with so many.
> There has been no lack of heavyweight, pre-declare everything, code-generating, highly structured, prescriptive standards
Care to list them? REST mania started around early 2000-s, and at that time there was only CORBA available as a cross-language portable RPC. Microsoft had DCOM.
And that was it. There was almost nothing else.
It was so bad that ZeroC priced their ICE suite based on a PERCENTAGE OF GROSS SALES: https://web.archive.org/web/20040603094344/http://www.zeroc.... Their ICE suite was basically an RPC with a human-designed IDL and non-crazy bindings for C/C++/Java.
Then the situation got WORSE when SOAP came.
At this point, anything, literally anything, that didn't involve XML was greeted with enthusiasm.
I don't just mean the ones that existed at the time of the start of REST. I mean all the ones that have come up since then as well and failed to displace it.
Arguably the closest thing to a prescriptive winner is laying OpenAPI on top of REST APIs.
Also, REST defined as "a vaguely HTTP-ish API that carries JSON" would have to be put later than that. Bear in mind that even after JSON was officially "defined" it's not like it instantly spread everywhere. I am among the many people that reconstructed something like it because we didn't know about it yet, even though it was nominally years old by that point. It took years to propagate out. I'd put "REST as we are talking about it" as late 2000s at the earliest for when it was really popular, and only into the 2010s as to when you started expecting people to mean that when they said "Web API".
> Care to list them?
From the top of my head, OData.
https://www.odata.org/
This is a recent project. REST happened basically in the environment where your choices were CORBA, DCOM, SOAP and other such monstrosities.
Of course, REST won handily. We're not in this environment anymore, thankfully, and REST now is getting some well-deserved scrutiny.
> This is a recent project.
OData officially started out in 2007. Roy Fielding's thesis was published in 2000.
So it was a contemporary of Protobufs, Cap’n Proto, and other frameworks. Facebook had Thrift, Amazon had Coral, and so on.
They appeared almost simultaneously, for the very same reason: REST by itself is too vague and unreliable.
I mean... I used to get stuff done with CORBA and DCOM.
It's the question of long-term consequences for supportability and product evolution. Will the next person supporting the API know all the hidden gotchas?
The critical problem with gRPC is that it uses protocol buffers.
Which are...terrible.
Example: structured schema, but no way to require fields.
Well the competition is REST which doesn’t have a schema or required fields, so not much of a problem.
> Well the competition is REST which doesn’t have a schema or required fields, so not much of a problem.
A vague architecture style is not competition to a concrete framework. At best, you're claiming that the competition to gRPC is rolling your own ad-hoc RPC implementation.
What I expect to happen now is an epiphany. Why do most developers look at tools like gRPC and still decide it's a far better option to roll their own HTTP-based RPC interface? I mean, it's a rational choice for most. Think about that for a moment.
With Protobuf this is a conscious decision to avoid back-compat issues. I'm not sure if I like it.
That's exactly how these systems fail in the marketplace. You make one decision that's good for, say, 50% of cases but disqualifying for 50% of cases and you lose 50% of the market.
Make 5 decisions like that and you lost 31/32 of the market.
Infra teams like it, app devs don't like it.
I’m a dev and I like it.
Amen. Particularly ISO8601.
I always thought that a standard like ISO 8601 which always stores the date and time in UTC but appends the local time zone would be beneficial.
Sometimes you need your timestamps to be in a named timezone. If I have a meeting at 9am local time next month, I probably want it to still be at 9am even if the government suddenly decided to cancel daylight time.
unless the customer you're meeting is in another timezone where the government didn't cancel daylight time
Exchange/GMail/etc. already has this problem/feature. Their response is simple: Follow the organiser's timezone. If it's 9am on the organiser's calendar, it will stay at 9am on the organiser's calendar. Everyone else's appointment will float to match the organiser.
I don't think I ever needed something like that... Since most cases don't need local time zone, why not keep two separate fields?
It's a delimited string. There are many fields within that string already.
That contains, at a quick glance, at least 8 fields of information. I would argue the one field it does not carry, but probably should, is the _name_ of the timezone it is for.

ISO 8601 is really broad with loads of edge cases and differing versions. RFC 3339 is closer, but still with a few quirks. Not sure why we can't have one of these that actually has just one way of representing each instant.
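To make the field count concrete (the bracketed zone-name suffix is the newer RFC 9557 extension, if memory serves):

    // Illustrative timestamps. The first carries at least eight fields
    // (year, month, day, hour, minute, second, fraction, UTC offset);
    // the second adds the zone *name* on top.
    const withOffset   = "2025-07-09T14:30:05.123+02:00";
    const withZoneName = "2025-07-09T14:30:05.123+02:00[Europe/Berlin]";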
Related: https://ijmacd.github.io/rfc3339-iso8601/
That would be solved if JSON had a native date type in ISO format.
JSON doesn’t really have data types beyond very simple ones
> JSON doesn’t really have data types beyond very simple ones
What do you think primitive types are supposed to be?
The below type definition (TS) fits the ECMA schema for JSON:
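    // Presumably something like this recursive definition:
    type JSONValue =
      | string
      | number
      | boolean
      | null
      | JSONValue[]
      | { [key: string]: JSONValue };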
You didn't answer my question.
> Fielding won the war
It’s a bit odd to say fielding “won the war” when for years he had a blog pointing out all the APIs doing RPC over HTTP and calling it REST.
He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.
If that’s what you call victory, I guess Marx can rest easy.
> He formalised a concept and gave it a snappy name, and then the concept got left behind and the name stolen away from the purpose he created it for.
I'm not sure the "name was stolen" or the zealot concept actually never got any traction in production environments due to all the problems it creates.
> I'm not sure the "name was stolen" or the zealot concept actually never got any traction in production environments due to all the problems it creates.
That is a false dichotomy. Fielding gave a name to a specific concept / architectural style, the concept got ignored (rightly or wrongly, doesn’t matter) while the name he coined got recycled for something essentially entirely unrelated.
I'm not super familiar with SOAP and CORBA, but how is SOAP any more coherent than a "RESTful" API? It's basically just a bag of messages. I guess it involves a schema, but that's not more coherent imo, since you just end up with specifics for every endpoint anyways.
CORBA is less "incoherent", but I'm not sure that's actually helpful, since it's still a huge mess. You can most likely become a lot more proficient with RESTful APIs and be more productive with them, much faster than you could with CORBA. Even if CORBA is extremely well specified, and "RESTful" is based more on vibes than anything specific.
Though to be clear I'm talking about the current definition of REST APIs, not the original, which I think wasn't super useful.
SOAP, CORBA and such have a theory for everything (say, authentication). It's hard to learn that theory, you have to learn a lot of it to be able to accomplish anything at all, you have to deal with build and tooling issues, and if you look closely there will be all sorts of WTFs. Developers of standards like that are always implementing things like distributed garbage collection and distributed transactions, which are invariably problematic.
Circa 2006 I was working on a site that needed to calculate sales tax and we were looking for an API that could help with that. One vendor uses SOAP which would have worked if we were running ASP.NET but we were running PHP. In two days I figured out enough to reverse engineer the authentication system (docs weren't quite enough to make something that worked) but then I had more problems to debug. A competitive vendor used a much simpler system and we had it working in 45 min -- auth is always a chokepoint because if you can't get it working 100% you get 0% of the functionality.
HTTP never had an official authentication story that made sense. According to the docs there are Basic, Digest, etc. Have you ever seen a site that uses them? The world quietly adopted cookie-based auth that was an ad-hoc version of JSON Web Tokens; once we got an intellectually coherent spec, snake oil vendors could spam HN with posts about how bad JWT is because... it had a name and numerous specifics to complain about.
Look at various modern HTTP APIs and you see auth is all across the board. There was the time I did a "shootout" of roughly 10 visual recognition APIs, I got all of them working in 20-30 mins except for Google where I had to install a lot of software on my machine, trashed my Python, and struggled mightily because... they had a complex theory of authentication which was a barrier to doing anything at all.
Worse is better.
Agree with most of what you said, except about HTTP Basic auth. That is used everywhere - take a look at any random API and there is roughly 90% chance that this is the authentication mechanism used. For backends which serve a single frontend maybe not so much, but still in places.
> That is used everywhere - take a look at any random API and there is roughly 90% chance that this is the authentication mechanism used.
I have no idea where you got that idea from. I'm yet to work in a project where any service doesn't employ a mix of bearer token authentication schemes and API keys.
I've found recently that CORS doesn't work with it, which kills it for a lot of usecases.
> Have you ever seen a site that uses them?
I lost the thread...are we talking websites or APIs?
Both use HTTP, but those are pretty different interfaces.
I mean, HTTP is an RPC protocol. It has methods and arguments and return types.
What I object to about eg xml-rpc is that it layers a second RPC protocol over HTTP so now I have two of them...
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
Why do people feel compelled to even consider it to be a battle?
As I see it, the REST concept is useful, but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves. This is in line with the Richardson maturity model[1], where the apex of REST includes all the HATEOAS bells and whistles.
Should REST without HATEOAS classify as REST? Why not? I mean, what is the strong argument to differentiate an architectural style that meets all but one requirement? And is there a point to this nitpicking if HATEOAS is practically irrelevant and the bulk of RESTful APIs do not implement it? What's the value in this nitpicking? Is there any value to cite thesis as if they where Monty Python skits?
[1] https://en.wikipedia.org/wiki/Richardson_Maturity_Model
For me the battle is with people who want to waste time bikeshedding over the definition of "REST" and whether the APIs are "RESTful", with no practical advantages, and then having to steer the conversation--and their motivation--towards more useful things without alienating them. It's tiresome.
It was buried towards the bottom of the article, but the reason, to me:
Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
Of course, OpenAPI (and perhaps to some extent now AI) also means that clients don't need to be written; they are just generated.
However it is important perhaps to remember the context here: SOAP is and was terrible, but for enterprises that needed a complex and robust RPC system, it was beginning to gain traction. HATEOAS is a much more general yet simple and comprehensive system in comparison.
Of course, you don't need any of this. So people built APIs they did need that were not RESTful, but had an acronym that their bosses thought sounded better than SOAP, and the rest is history.
> Clients can be almost automatic with a HATEOAS implementation, because it is a self-describing protocol.
That was the theory, but it was never true in practice.
The oft-made comparisons to the browser really missed the mark. The browser was driven by advanced AI wetware.
Given the advancements in LLMs, it's not even clear that RESTish interfaces would be easier for them to consume (say vs. gRPC, etc.)
Then let developer-Darwin win and fire those people. Let the natural selection of the hiring process win against pedantic assholes. The days are too short to argue over issues that are not really issues.
Can we just call them HTTP APIs?
Defining media types seems right to me, but what ends up happening is that you use swagger instead to define APIs and out the window goes HATEOAS, and part of the reason for this is just that defining media types is not something people do (though they should).
Basically: define a schema for your JSON, use an obvious CRUD mapping to HTTP verbs for all actions, use URI local-parts embedded in the JSON, use standard HTTP status codes, and embed more error detail in the JSON.
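A tiny sketch of what I mean (all names purely illustrative):

    // The obvious CRUD mapping:
    //   POST   /articles      -> create
    //   GET    /articles/7    -> read
    //   PUT    /articles/7    -> replace
    //   DELETE /articles/7    -> delete
    const article = {
      id: "7",
      title: "Hello",
      author: "/users/3",                 // URI local-part instead of a bare ID
      comments: "/articles/7/comments",
    };

    // Standard status code on the wire (e.g. 404), richer detail in the body:
    const error = {
      code: "article_not_found",
      detail: "No article with id 7 exists",
    };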
> (...) and part of the reason for this is just that defining media types is not something people do (...)
People do not define media types because it's useless and serves no purpose. They define endpoints that return specific resource types, and clients send requests to those endpoints expecting those resource types. When a breaking change is introduced, backend developers simply provide a new version of the API where a new endpoint is added to serve the new resource.
In theory, media types would allow the same endpoint to support multiple resource types. Services would send specific resource types to clients if they asked for them by passing the media type in the Accept header. That is all fine and dandy, except this forces endpoints to support an ever more complex content negotiation scheme that no backend framework comes close to supporting, and it brings absolutely no improvement in the way clients are developed.
So why bother?
>the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Many server-rendered websites support REST by design: a web page with links and forms is the state transferred to client. Even in SPAs, HATEOAS APIs are great for shifting business logic and security to server, where it belongs. I have built plenty of them, it does require certain mindset, but it does make many things easier. What problems are you talking about?
complexity
Backend only and verbosity would be more correct description.
> Why do people feel compelled to even consider it to be a battle?
Because words have specific meanings. There’s a specific expectation when using them. It’s like if someone said “I can’t install this app on my iPhone” but then they have an android phone. They are similar in that they’re both smartphones and overall behave and look similar, but they’re still different.
If you are told an api is restful there’s an expectation of how it will behave.
> If you are told an api is restful there’s an expectation of how it will behave.
And today, for most people in most situations, that expectation doesn’t include anything to do with HATEOAS.
Words derive their meaning from the context in which they are (not) used, which is not fixed and often changes over time.
Few people actually use the word RESTful anymore, they talk about REST APIs, and what they mean is almost certainly very far from what Roy had in mind decades ago.
People generally do not refer to all smartphones as iPhones, but if they did, that would literally change the meaning of the word. Examples: Zipper, cellophane, escalator… all specific brands that became ordinary words.
We should probably stop calling the thing that we call REST, REST and be done with it - it's only tangentially related to what Fielding tried to define.
> We should probably stop calling the thing that we call REST (...)
That solves no problem at all. We have Richardson maturity model that provides a crisp definition, and it's ignored. We have the concept of RESTful, which is also ignored. We have RESTless, to contrast with RESTful. Etc etc etc.
None of this discourages nitpickers. They are pedantic in one direction, and so lax in another direction.
Ultimately it's all about nitpicking.
I’m with you. HATEOAS is great when you have two independent (or more) enterprise teams with PMs fighting for budget.
When it’s just yours and your two pizza team, contract-first-design is totally fine. Just make sure you can version your endpoints or feature-flag new API’s so it doesn’t break your older clients.
> Should REST without HATEOAS classify as REST? Why not?
Because what got backnamed HATEOAS is the very core of what Fielding called REST: https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
Everything else is window dressing.
> Why do people feel compelled to even consider it to be a battle?
Because September isn't just for users.
> but the HATEOAS detail ends up having no practical value and creates more problems than the ones it solves.
Only because we never had the tools and resources that, say, GraphQL has.
And now everyone keeps re-inventing half of HTTP anyway. See this diagram https://raw.githubusercontent.com/for-GET/http-decision-diag... (docs https://github.com/for-GET/http-decision-diagram/tree/master...) and this: https://github.com/for-GET/know-your-http-well
> Only because we never had the tools and resources that, say, GraphQL has.
GraphQL promised to solve real-world problems.
What real world problems does HATEOAS addresses? None.
GraphQL was "promising" something because it was a thing by a single company.
HATEOAS didn't need to "promise" anything since it was just describing already existing protocols and capabilities that you can see in the links I posted.
And that's how you got POST-only GraphQL which for years has been busily reinventing half of HTTP
HATEOAS adds lots of practical value if you care about discoverability and longevity.
Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows the semantics and logical names of the operations, so when the UI gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checking on the client side. This is the discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.
> You have got it wrong. Let's say I build some API with different user roles. Some users can delete an object, others can only read it. The UI knows the semantics and logical names of the operations, so when the UI gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checking on the client side.
Have you ever heard of HTTP's OPTIONS verb?
https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/...
Follow-up trick question: how come you never heard of it and still managed quite well to live without it?
Maybe you should reconsider the way you ask questions on this forum. Your tone is not appropriate and the question itself just demonstrates that you don't understand this topic.
Yes, I'm aware of this header and know the web standards well enough.
In hypermedia API you communicate to client the list of all operations in the context of the resource (note: not ON the resource), which includes not only basic CRUD but also operations on adjacent resources (e.g. on user account you may have an operation of sending a message to this user). Yes, in theory one could use OPTIONS with a non-standard response body to communicate such operations that cannot be expressed in plain HTTP verbs in Allow header.
However such a solution is not practical, because it requires an extra round trip for every resource. There's a better alternative, which is to provide the list of operations with the resource using one of the common standards - HAL, JSON-LD, Siren, etc. The example in my other comment in this thread is based on HAL. If you wonder what that is, look no further than Spring - it has supported HAL APIs out of the box for quite a long time. And of course there's an RFC draft and a Wikipedia article (https://en.wikipedia.org/wiki/Hypertext_Application_Language).
This is actually what we do at [DAYJOB] and it's been working well for over 12 years. Like any other kind of interface indirection it adds the overhead of indirection for the benefit of being able to change the producer's side of the implementation without having to change all of the consumers at the same time.
That's actually an interesting take, thank you.
How does the UI check if certain operations are available?
It’s literally in server response:
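Something along these lines (an illustrative HAL-style sketch, not an exact payload):

    {
      "id": "42",
      "name": "John Doe",
      "_links": {
        "self":   { "href": "/users/42" },
        "update": { "href": "." },
        "delete": { "href": "." },
        "send-message": { "href": "/users/42/messages" }
      }
    }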
In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform this operation on the resource's self link.

The promise of REST and HATEOAS was best realized not by building RESTful apps like, say, "my airline reservation app", but by building a programming system, spiritually like HTTP + HTML, in which you'd be able to declaratively specify applications, of which "my airline reservation app" could be one and "my sports gambling service" could be another. So some smart person would invent a new application protocol with rich semantics as you did above, a new type of user agent installed on desktops would understand how to present them to the user, and the app on the server would just assemble the resources in this rich format, directing users to their choices through the states of the program.
So that never got done (because it's complex), and people started building apps like "my airline reservation app", but then realized that to build that domain app you don't need all the abstraction of a full REST system.
Oh, interesting. So rather than the UI computing what operations should currently be allowed by, say, knowing the user's current role and having rules baked into it about the relationship between role and UI widgets, the UI can compute what should be shown simply off of explicit statements of capability from the server.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... But a full analysis of bandwidth to the client would have to factor in that you have to ship over a whole framework to implement those rules and keep those rules synchronized between client and server implementations.
I'd suggest that bandwidth optimization should happen once it becomes critical, and that the presence of hypermedia controls be toggled via a feature flag or header. This way the frontend becomes simpler, so FE dev speed and quality improve, but the backend becomes more complex. The main problem here is that most backend frameworks support RMM level 2, and hypermedia controls require a different architecture to keep the server code from getting verbose. Unfortunately REST wasn't understood well, so full support of it wasn't in focus of the open source community.
OPTIONS https://datatracker.ietf.org/doc/html/rfc2616
More links here: https://news.ycombinator.com/item?id=44510745
Or probably just an Allow header on a response to another query (e.g. when fetching an object, server could respond with an Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it’s read-only).
That’s a neat idea actually, I think I’ll need to read up on the semantics of Allow again…. There is no reason you couldn’t just include it with arbitrary responses, no?
I don’t see why not!
It’s something else. List of available actions may include other resources, so you cannot express it with pure HTTP, you need a data model for that (HAL is one of possible solutions, but there are others)
With HATEOAS you're supposed to return the list of available actions with the representation of your state.
Neo4j's old REST API was really good about that. See e.g. get node: https://neo4j.com/docs/rest-docs/current/#rest-api-get-node
That API doesn’t look like REST level 3 API. For example, there’s an endpoint to create a node. It is not referenced by root or anywhere else. GetNode endpoint does include some traversal links in response, but those links are part of domain model, not part of the protocol. HAL does offer a protocol by which you enhance your domain model with links with semantics and additional resources.
these levels? https://blog.restcase.com/4-maturity-levels-of-rest-api-desi...
Yes. Though more canonical link is here: https://martinfowler.com/articles/richardsonMaturityModel.ht...
I'm not saying it's perfect, but it's really good, and you could create a client for it in an evening.
I always thought soooo many REST implementations and explainers were missing a trick by ignoring the OPTIONS verb, it seems completely natural to me, but people love to stuff things inside of JSON.
> If it's for robots, then _maybe_ there's some value...
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS was really not designing an API protocol. It's a general use content delivery platform and not very useful for software development.
The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples:
https://news.ycombinator.com/item?id=44509745
LLMs also appear to have an easier time consuming it (not surprisingly.)
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
And that's fine, but then you're doing RPC instead of REST and we should all be clear and honest about that.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
It is true, if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard / familiar OpenAPI style json-over-http style API but will reserve a 5% probability that it is actually HATEOAS and have to deal with that. I'd say, if you are doing Roy Fielding certified REST / HATEOAS it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
What would it take for you to update your assumptions?
People in the real world referring to "REST" APIs (the kind that use HTTP verbs and have routes like /resource/id) as RPC APIs. As it stands, in the world outside of this thread nobody does that.
At some level language is outside of your control as an individual even if you think it's literally wrong--you sometimes have to choose between being 'correct' and communicating clearly.
To me, the most important nuance really is that just like "hypermedia links" (encoded as different link types, either with Link HTTP header or within the returned results) are "generic" (think that "activate" link), so is REST as done today: if you messed up and the proper action should not be "activate" but "enable", you are in no better position than having to change from /api/v1/account/ID/activate to /api/v2/account/ID/enable.
You still have to "hard code" somewhere what action anything needs to do over an API (and there is more missing metadata, like icons, translations for action description...).
Mostly to say that any thought of this approach being more general is only marginal, and really an illusion!
While I ask people whether they actually mean REST according to the paper or not, I am one of the people who refuse to just move on. The reason being that the mainstream use of the term doesn’t actually mean anything, it is not useful, and therefore not pragmatic at all. I basically say “so you actually just mean some web API, ok” and move on with that. The important difference being that I need to figure out the peculiarities of each such web API.
>> The important difference being that I need to figure out the peculiarities of each such web API
So if they say it is Roy Fielding certified, you would not have to figure out any "peculiarities"? I'd argue that creating a typical OpenAPI style spec which sticks to standard conventions is more professional than creating a pedantically HATEOAS API. Users of your API will be confused and confusion leads to bugs.
op's article could've been plucked from 2012 - this is one of my favorite rest rants from 2012: https://mikehadlow.blogspot.com/2012/08/rest-epic-semantic-f...
..that was written before swagger/openAPI was a thing. now there's a real spec with real adoption and real tools and folks can let the whole rest-epic-semantic-fail be an early chapter of web devs doing what they do (like pointing at remotely relevant academic paper to justify what they're doing at work)
So you enjoy being pedantic for the sake of being pedantic? I see no useful benefit either from a professional or social setting to act like this.
I don’t find this method of discovery very productive and often regardless of meeting some standard in the API the real peculiarities are in the logic of the endpoints and not the surface.
I can see a value in pedantry in a professional setting from a signaling point of view. It's a cheap way to tell people "Hey! I'm not like those other girls, I care about quality," without necessarily actually needing to do the hard work of building that quality in somewhere where the discerning public can actually see your work.
(This is not a claim that the original commenter doesn't do that work, of course, they probably do. Pedants are many things but usually not hypocrites. It's just a qualifier.)
You'd still probably rather work with that guy than with me, where my preferred approach is the opposite of pedantry. I slap it all together and rush it out the door as fast as possible.
>> "Hey! I'm not like those other girls, I care about quality,"
OMG. Pure gold!
What some people call pedantic, others may call precision. I normally just call the not-quite-REST API styles as simply "HTTP APIs" or even "RPC-style" APIs if they use POST to retrieve data or name their routes in terms of actions (like some AWS APIs).
Like all things in life it's about balance. If you say things like the person I replied to says he does, you are ultimately creating friction for absolutely no gain. Hence why I said being pedantic for the sake of being pedantic, or in other words, being difficult for no good reason. There is a time and place for everything, but over a decade plus of working and building many different APIs I see no benefit.
I cannot even recall a time when it caused me enough issues to even think about it later on; the peculiarities were in the business logic. I have had moments where I thought something was strange in an Elasticsearch API, but again it was of no consequence.
REST is pretty much impossible to adhere to for any sufficiently complex API and we should just toss it in the garbage
100%. The needs of the client rule, and REST rarely meets the challenge. When I read the title, I was like "pfff", REST is crap to start with, why do I care?
REST means, generally, HTTP requests with json as a result.
It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id/child/:child_id`.
It was probably an organic response to the complexity of SOAP/WSDL at the time, so people harping on how it's not HATEOAS kinda miss the historical context; people didn't want another WSDL.
>> /things/:id/child/:child_id
It seems that nesting isn't super common in my experience. Maybe two levels if completely composite but they tend to be fairly flat.
Generally only /companies/:companyId/buildings
And then you get a list of all buildings for this company.
Every building has a url like: /buildings/:buildingId
So you constantly get back to the root.
Only exception is generally a tenant id which goes upfront for all requests for security/scoping purposes.
This seems like a really good model. It keeps things flat and easy to extend.
I see both.
E.g. GitHub /repos/:owner/:repo/pulls/comments/:comment_id
But flat is better than nested, esp if globally unique IDs are used already (and they often are).
Yes, but with /comments/:comment_uuid having a parent at /pulls/:pull_uuid it is harder to map the hierarchy it belongs to.
Not really, if a URL link is added to the post in the comment response.
Also it is possible to embed a sub resource (or part of it).
Think a blog post.
/blogPost/:blogPostId
You can embed a blog object with the url and title so you can show the blogpost on a page with the name of the blog in one go.
If you need more details on the blog you can request /blogs/:blogId
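Just to sketch what that embedded representation could look like (the field names are made up, not from any particular API):
  {
    "id": "post-17",
    "title": "Hello world",
    "blog": {
      "id": "blog-7",
      "title": "My Blog",
      "url": "/blogs/blog-7"
    }
  }
The client can render the post together with the blog name in one request, and only follow the url if it needs the full blog resource.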
> instead of GET/POST for everything
Sometimes that's a pragmatic choice too. I've worked with HTTP clients that only supported GET and POST. It's been a while but not that long ago.
Not even just clients, but servers too would block anything not GET/POST/HEAD. And I believe PHP still to this day only has $_GET and $_POST as out-of-the-box superglobals to conveniently get data params. I recall some "REST" APIs would let you use POST for PUT/DELETE requests if you added a special var or header specifying the intended method.
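That override usually looks something like the sketch below; the exact header name varies by framework (X-HTTP-Method-Override is one common spelling, and some stacks use a _method form field instead):
  // Tunnel a DELETE through POST for clients/servers that only allow GET/POST.
  fetch('/items/42', {
    method: 'POST',
    headers: { 'X-HTTP-Method-Override': 'DELETE' },
  });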
> It also means they made some effort to use appropriate http verbs instead of GET/POST for everything, and they made an effort to organize their urls into patterns like `/things/:id`.
No not really. A lot of people don't understand REST to be anything other than JSON over HTTP. Sometimes, the HTTP verbs thing is done as part of CRUD but actually CRUD doesn't necessarily have to do with the HTTP verbs at all and there can just be different endpoints for each operation. It's a whole mess.
I also view it as inevitable.
I can count on one hand the number of times I've worked on a service that can accurately be modeled as just representational state transfer. The rest have at least some features that are inherently, inescapably some form of remote procedure call. Which the original REST model eschews.
This creates a lot of impedance mismatch, because the HTTP protocol's semantics just weren't designed to model that kind of thing. So yeah, it is hard to figure out how to shoehorn that into POST/GET/PUT/DELETE and HTTP status codes. And folks who say it's easy tend to get there by hyper-focusing on that one time they were lucky enough to be working on a project where it wasn't so hard, and dismissing as rare exceptions the 80% of cases where it did turn out to be a difficult quagmire that forced a bunch of unsatisfying compromises.
Alternatively you can pick a protocol that explicitly supports RPC. But that's not necessarily any better because all the well-known options with good language support are over-engineered monstrosities like GRPC, SOAP, and (shudder) CORBA. It might reduce your domain modeling headaches, but at the cost of increased engineering and operations hassle. I really can't blame anyone for deciding that an ad-hoc, ill-specified, janky application of not-actually-REST is the more pragmatic option. Because, frankly, it probably is.
xml-rpc (before it transmogrified into SOAP) was pretty simple and flexible. Still exists, and there is a JSON variant now too. It's effectively what a lot of web APIs are: a way to invoke a method or function remotely.
HTTP/JSON API works too, but you can assume it's what they mean by REST.
It makes me wish we stuck with XML based stuff: it had proper standards, strictly enforced by libraries that get confused by things not following the standards. HTTP/JSON APIs are often hand-made and hand-read, NIH syndrome running rampant because it's perceived to be so simple and straightforward. To the point of "we don't need a spec, you can just see the response yourself, right?". At least that was the state ~2012; nowadays they use an OpenAPI spec, but it's often incomplete, regardless of whether it's handmade (in which case people don't know everything they have to fill in) or generated (in which case the generators will often have limitations and MAYBE support for some custom comments that can fill in the gaps).
> HTTP/JSON API works too, but you can assume it's what they mean by REST.
This is the kind of slippery slope where pedantic nitpickers thrive. They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
In this sense, the term "RESTful" is useful to shut down these pedantic nitpickers. It's "REST-adjacent" still, but the right answer to nitpicking is "who cares".
> They start to complain that if you accept any media type other than JSON then it's not "REST-adjacent" anymore because JSON is in the name and some bloke wrote down somewhere that JSON was a trait of this architectural style.
wat?
Nowhere is JSON in the name of REpresentational State Transfer. Moreover, sending other representations than JSON (and/or different presentations in JSON) is not only acceptable, but is really a part of REST
> Nowhere is JSON in the name of REpresentational State Transfer.
If you read the message you're replying to, you'll notice you are commenting on the idea of coining the concept of HTTP/JSON API as a better fitting name.
Read messages before replying? It's the internet! Ain't no one got time for that
:)
Don't stress it. It happens to the best of us.
This. Or maybe we should call it "Rest API" in lowercase, meaning not the state transfer, but the state of mind, where developer reached satisfaction with API design and is no longer bothered with hypermedia controls, schemas etc.
Assuming the / was meant to describe it as both an HTTP API and a JSON API (rather than HTTP API / JSON API) it should be JSON/HTTP, as it is JSON over HTTP, like TCP/IP or GNU/Linux :)
I recall having to maintain an integration to some obscure SOAP API that ate and spit out XML with strict schemas and while I can't remember much about it, I think the integration broke quite easily if the other end changed their API somehow.
> it had proper standards
Lol. Have you read them?
SOAP in particular can really not be described as "proper".
It had the advantage that the API docs were always generated, and thus correct, but the most common outcome was that one software stack couldn't consume a service built with another stack.
> - The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I really wish people just used 200 status code and put encoded errors in the payloads themselves instead of trying to fuse the transport layer's (which HTTP serves as, in this case) concerns with the application's concerns. Seriously, HTTP does not mandate that e.g. "HTTP/1.1 503 Ooops\r\n\r\n" should be stuffed into the TCP's RST packet, or into whatever TLS uses to signal severe errors, for bloody obvious reasons: it doesn't belong there.
Like, when you get a 403/404 error, it's very bloody difficult to tell apart the "the reverse proxy before the server is misconfigured and somebody forgot to expose the endpoint" and "the server executed your request to look up an item perfectly fine: the DB is functional, and the item you asked for is not in there" scenarios. And yeah, of course I could (and should) look at and try to parse the response's body but why? This "let's split off the 'error code' part of the message from the message and stuff it somewhere into the metadata, that'll be fine, those never get messed up or used for anything else, so no chance of confusion" approach just complicates things for everyone for no benefit whatsoever.
The point of status codes is to have a standard that any client can understand. If you have a load balancer, the load balancer can mark backends as unhealthy based on the status code. Similarly, if you have some job scheduler or workflow engine that's calling your API, they can execute an appropriate retry strategy based on the status code. The client in most cases does not care about why something failed, only whether it has failed. Being able to tell apart whether the failure was due to the reverse proxy or the database or whatever is the server's concern, and the server can always do that with its own custom error codes.
> The client in most cases does not care about why something failed, only whether it has failed.
"...and therefore using different status codes in the responses is mostly pointless. Therefore, use 200 and put "s":"error" in the response".
> Being able to tell apart if the failure was due to reverse proxy or database or whatever is the server's concern.
One of the very common failures is for the request to simply never reach "the server". In my experience, one of the very first steps in improving the error handling quality (on the client's side) is to start distinguishing between the low-level errors of "the user has literally no Internet connection" and "the user has connected somewhere, but that thing didn't really speak the server protocol", and the high-level errors of "the client has talked with the application server (using the custom application protocol and everything), and there was an error on the application server's side". Using HTTP status codes for both low- and high-level errors makes such distinctions harder to figure out.
I did say most cases, not all cases. There are some concerns that are cross-cutting enough to have made it into the standard. For instance, many clients will handle a 401 by redirecting to an auth flow, handle a 429 rate limit by backing off before making another request, handle a 426 by upgrading the protocol, etc. Not all statuses may be relevant for a given system; you can club several scenarios under a 400 or a 500 and that's perfectly fine for many use cases. But when you have cross-cutting concerns, it's beneficial to follow fine-grained status codes. It gives you a lot of flexibility in how you can connect different parts of your architecture and reduces integration headaches.
I think a more precise term for what you're describing is transport errors vs business errors. You're right that you don't want to model all your business errors as HTTP status codes. Your business scenarios are most certainly numerous and need to be much more fine grained than what the standard offers. But the important thing is all errors business or transport eventually need to map to a HTTP status code because that's the protocol you're ultimately speaking.
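To illustrate the cross-cutting part, a minimal client-side sketch (not any particular library; the login redirect and the retry policy here are assumptions):
  async function callApi(url, options = {}) {
    const res = await fetch(url, options);
    if (res.status === 401) {
      window.location.assign('/login'); // assumed app-specific auth flow
      return res;
    }
    if (res.status === 429) {
      const seconds = Number(res.headers.get('Retry-After')) || 1; // assumes a seconds value
      await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
      return callApi(url, options); // back off, then retry
    }
    return res; // fine-grained business errors stay in the body
  }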
what is an unhealthy request? is searching for a user which was _not found_ by the server unhealthy? was the request successful? that's where different opinions exist.
Sure, there's some nuance to it that depends on your application, but it's the server's responsibility to do so, not the client's. The status code exists for this reason and the standard also classifies status codes under client error and server error so that clients can determine whether a server is unhealthy simply by looking at the status code.
Eh, if you're doing RPC where the whole request/response are already in another layer on top of HTTP, then sure, 200 everything.
But to me, "REST" means "use the HTTP verbs to talk about resources". The whole point is that for resource-oriented APIs, you don't need another layer. In which case serving 404s for things that don't exist, or 409s when you try to put things into a weird state makes perfect sense.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I had to chuckle here. So true!
I use the term "HTTP API"; more general. Context, in light of your definition: in many cases labeled "REST", there will only be POST, or POST and GET, and an HTTP 200 status with an error in JSON is used instead of HTTP status codes. Your definition makes sense as a weaker form of the original, but it is still too strict compared to how the term is used. "REST" = "HTTP with JSON bodies" is the most practical definition I have.
>HTTP 200 status with an error in JSON is used instead of HTTP status codes
This is a bad approach. It prevents your frontend proxies from handling certain concerns better, such as caching, rate limiting, or throttling abuse.
On the other hand, a functional app returning HTTP errors clouds your observability and can hide real errors. It's not always ideal for the client either. 404 specifically is bad. Do I have a wrong id, a wrong address, is it actually a 401/403, or is it just returned by something along the way? The code alone tells you nothing, so you might as well return 200 for a valid request that was correctly processed.
(devil's advocate, I use http codes :))
> HTTP 200 status with an error in JSON is used instead of HTTP status codes
I've seen some APIs that not only always return a 200 code, but will include a response in the JSON that itself indicates whether the HTTP request was successfully received, not whether the operation was successfully completed.
Building usable error handling with that kind of response is a real pain: there's no single identifier that indicates success/failure status, so we had to build our own lookup table of granular responses specific to each operation.
> I can safely assume [...] CRUD actions are mapped to POST/GET/PUT/DELETE
Not totally sure about that - I think you need to check what they decided about PUT vs PATCH.
Isn't that fairly straightforward? PUT for full updates and PATCH for partial ones. Does anybody do anything different?
PUT for partial updates, yes, constantly. What I worked with last week: https://docs.gitlab.com/api/projects/#edit-a-project
That's straightforwardly 'correct' and Fielding's thesis, yes. Yes people do things differently!
Lots of people make PUTs that work like PATCHes and it drives me crazy. Same with people who use POST to retrieve information.
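For the record, the conventional distinction looks like this (the endpoint and fields are made up):
  // PUT replaces the whole resource: send every field, even unchanged ones.
  fetch('/users/123', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Ada', email: 'ada@example.com' }),
  });
  // PATCH is a partial update: send only what changes.
  fetch('/users/123', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email: 'ada@example.org' }),
  });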
Well you can't reliably use GET with bodies. There is the proposed SEARCH but using custom methods also might not work everywhere.
No, QUERY. https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-saf...
SEARCH is from RFC 5323 (WebDAV).
The SEARCH verb draft was superseded by the QUERY verb draft last I checked. QUERY is somewhat more adopted, though it's still very new.
These verbs don't even make sense most of the time.
You sweet summer child.
It's always better to use GET/POST exclusively. The verb mapping was theoretical from someone who didn't have to implement. I've long ago caved to the reality of the web's limited support for most of the other verbs.
What is the limited support for CONNECT/HEAD/OPTIONS/PUT/DELETE ?
It was limited up until the last 10 years, and if someone hasn't updated their knowledge then it's still limited, I suppose.
Limited in what way?
XMLHttpRequest? fetch?
AFAIK, we're talking about HTML forms? But that's entirely irrelevant for the JSON-based APIs we're discussing.
Agreed... in most large (non-trivial) systems, REST ends up looking/devolving closer and closer to RPC, and you end up just using GET and POST for most things and end up with a REST-ish RPC system in practice.
REST purists will not be happy, but that's reality.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
401 Unauthorized. When the user is unauthenticated.
403 Forbidden. When the user is unauthorized.
Hell yeah. IMO we should collectively get over ourselves and just agree that what you describe is the true, proper, present-day meaning of "REST API".
I really hate my conclusions here, but from a limited freedom point of view, if all of that is going to happen...
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
So we should better start with a standard scaffolding for the replies so we can encode the errors and forget about status codes. Then the only thing generating an error status is an unhandled exception mapped to 500. That's the one design that survives people disagreeing.
> There's a decent chance listing endpoints were changed to POST to support complex filters
So we'd better just standardize that lists support both GET and POST from the beginning. While you are there, also accept queries on both the url and body parameters.
The world would be lovely if we could have standard error, listing responses, and a common query syntax.
I haven't done REST apis in a while, but I came across this recently for standardizing the error response: https://www.rfc-editor.org/rfc/rfc9457.html
I really like the idea of a type URL.
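For anyone who hasn't seen it, a problem-details response (media type application/problem+json) looks roughly like this, adapted from the RFC's own example:
  {
    "type": "https://example.com/probs/out-of-credit",
    "title": "You do not have enough credit.",
    "status": 403,
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc"
  }
The type URL identifies the error category, while detail and instance describe the specific occurrence.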
> - CRUD actions are mapped to POST/GET/PUT/DELETE
Agree on your other three but I've seen far too many "REST APIs" with update, delete & even sometimes read operations behind a POST. "SOAP-style REST" I like to call it.
Do you care? From my point of view, post, put, delete, update, and patch all do the same. I would argue that if there is a difference, making the distinction in the url instead of the request method makes it easier to search code and log. And what's the correct verb anyway?
So that's an argument that there may be too many request methods, but you could also argue there aren't enough. But then standardization becomes an absolute mess.
So I say: GET or POST.
> Do you care?
I don't. I could deliver a diatribe on how even the common arguments for differentiating GET & POST don't hold water. HEAD is the only verb with any mild use in the base spec.
On the other hand:
> correct status codes and at least a few are used contrary to the HTTP spec
This is a bigger problem than verb choice & something I very much care about.
I agree. From what I have seen in corporate settings, using anything more than GET/POST takes the time to deploy the API to a different level. Using PUT, PATCH etc. typically involves firewall changes that may take weeks or months to get approved and deployed, followed by a never-ending audit/re-justification process.
> From my point of view, post, put, delete, update, and patch all do the same.
That's how we got POST-only GraphQL.
In HTTP (and hence REST) these verbs have well-defined behaviour, including the very important things like idempotence and caching: https://github.com/for-GET/know-your-http-well/blob/master/m...
Yeah but GET doesn’t allow requests to have bodies (yeah, I know, technically you can but it’s not very useful), and this is a legitimate issue preventing its use in complex APIs.
There's no point in idempotency for operations that change the state. DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id. Should you do something like delete by email or product, you have to use another operation, which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
It's just absurd to mention idempotency when the state gets altered.
> There's no point in idempotency for operations that change the state.
Of course there is
> DELETE is supposed to be idempotent, but it can only be if you limit yourself to deletion by unique, non-repeating id
Which is most operations
> Should you do something like delete by email or product, you have to use another operation,
Erm.... No, you don't?
> which then obviously will be POST anyway. And there's no way to "cache" a delete operation.
Why would you want to cache a delete operation?
The defined behaviors are not so well defined for more complex APIs.
You may have an API for example that updates one object and inserts another one, or even deletes an old resource and inserts a new one
The verbs are only very clear for very simple CRUD operations. There is a lot of nuance otherwise that you need documentation for and having to deal with these verbs both as the developer or user of an API is a nuisance with no real benefit
> The defined behaviors are not so well defined for more complex APIs.
They are. Your APIs can always be defined as a combination of "safe, idempotent, cacheable"
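One concrete reason to care, as a sketch (the URL is made up): an idempotent DELETE can be blindly retried after a network failure, while retrying a non-idempotent POST could repeat the side effect.
  async function deleteWithRetry(url, attempts = 3) {
    for (let i = 1; i <= attempts; i++) {
      try {
        return await fetch(url, { method: 'DELETE' });
      } catch (err) {
        // Network error: we can't know whether the server already processed it,
        // but because DELETE is idempotent, retrying is harmless.
        if (i === attempts) throw err;
      }
    }
  }
  deleteWithRetry('/orders/42');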
I've had situations when I wanted a GET with a body :)
I actually had to change an API recently TO this. The request payload was getting too big, so we needed to send it via POST as a body.
> even sometimes read operations behind a POST
Even worse than that, when an API like the Pinboard API (v1) uses GET for write operations!
I work with an API that uses GET for delete :)
Exactly. What you describe is how I see REST being used today and I wish people accepted the semantic shift and stopped with their well-ackshually. It serves nothing.
Sounds about right. I've been calling this REST-ish for years and generally everyone I say that to gets what I mean without much (any) explanation.
> Like Agile, CI or DevOps you can insist on the original definition or submit to the semantic diffusion and use the terms as they are commonly understood
This is an insightful observation. It happens with pretty much everything
As it has been happening recently with the term vibecoding. It started with some definition, and now it’s morphed into more or less just meaning ai-assisted coding. Some people don’t like it[1]
1: https://simonwillison.net/2025/Mar/19/vibe-coding/
As long as it's not SOAP, it's great.
If I never have to use SOAP again in my life, I will die a happy man.
RESTful has gone far beyond the HTTP world. It's the new RPC with a JSON payload for whatever. I use it on embedded systems that have no database at all; POST/GET/PUT/DELETE etc. are perfectly simple to map onto write/read/modify/remove commands. As long as the API is documented, I don't really care about its HTTP origins.
Haha, our API still returns XML. At least, most of the endpoints do. Not the ones written by that guy who thinks predictability in an API is lower priority than modern code, those ones return JSON.
I present to you this monstrosity: https://stackoverflow.com/q/39110233
Presumably they had an existing API, and then REST became all the rage, so they remapped the endpoints and simply converted the XML to JSON. What do you do with the <tag>value</tag> construct? Map it to the name `$`!
Congratulations, we're REST now, the world is a better place for it. Off to the pub to celebrate, gents. Ugh.
I think people tend to forget these things are tools, not shackles
the last point got me.
How can you idiomatically do a read only request with complex filters? For me both PUT and POST are "writable" operations, while "GET" are assumed to be read only. However, if you need to encode the state of the UI (filters or whatnot), it's preferred to use JSON rather than query params (which have length limitations).
So ... how does one do it?
One uses POST and recognizes that REST doesn't have to be so prescriptive.
The part of REST to focus on here is that the response from earlier well-formed requests will include all the forms (and possibly scripts) that allow for the client to make additional well-formed requests. If the complex filters are able to be made with a resource representation or from the root index, regardless of HTTP methods used, I think it should still count as REST (granted, HATEOAS is only part of REST but I think it should be a deciding part here).
When you factor in the effects of caching by intermediate proxy servers, you may find yourself adapting any search-like method to POST regardless, or at least GET with params, but you don't always want to, or can't, put the entire formdata in params.
Plus, with the vagaries of CSRF protections, per-user rate-limiting and access restrictions, etc., your GET is likely to turn into a POST for anything non-trivial. I wouldn't advise trying for pure REST-ful on the merits of its purity.
POST the filter, get a response back with the query to follow up with for the individual resources: the POST creates a query resource and responds with its Location, and then you can make GET request calls against that resource. It adds in some data expiration problems to be solved, but it's reasonably RESTful.
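Roughly like this, as a sketch (the paths and payload are invented):
  async function searchItems(filters) {
    // 1. Create the query resource.
    const created = await fetch('/item-queries', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(filters),
    });
    const queryUrl = created.headers.get('Location'); // e.g. /item-queries/3f2a9c...
    // 2. Read (and re-read, cache, share) the results with plain GETs.
    return (await fetch(queryUrl)).json();
  }
  searchItems({ status: 'open', tags: ['urgent', 'billing'] });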
This has RESTful aesthetics but it is a bit impractical if a read-only query changes state on the server, as in creating the uuid-referenced resource.
There's no requirement in HTTP (or REST) to either create a resource or return a Location header.
For the purposes of caching etc, it's useful to have one, as well as cache controls for the query results, and there can be links in the result relative to the Location (eg a link href of "next" is relative to the Location).
Isn't this twice as slow? If your server was far away it would double load times?
The response to POST can return everything you need. The Location header that you receive with it will contain permanent link for making the same search request again via GET.
Pros: no practical limit on query size. Cons: permalink is not user-friendly - you cannot figure out what filters are applied without making the request.
There was a proposal[1] a while back to define a new SEARCH verb that was basically just a GET with a body for this exact purpose.
[1]: https://www.ietf.org/archive/id/draft-ietf-httpbis-safe-meth...
Similarly, a more recent proposal for a new QUERY verb: https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...
If you really want this idiomatically correct, put the data in JSON or another suitable format, zip it and encode it in Base64 to pass via GET as a single parameter. To hit the browser limits you would need such a big query that in many cases you hit UX constraints earlier (2048 bytes is 50+ UUIDs or 100+ polygon points, etc).
Pros: the search query is a link that can be shared, the result can be cached. Cons: harder to debug, may not work in some cases due to URI length limits.
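A sketch of that approach (Node-style; the endpoint, parameter name and filter shape are invented):
  const zlib = require('zlib');
  const filters = { status: 'open', tags: ['urgent', 'billing'] };
  // Client: compress the JSON and make it URL-safe.
  const q = zlib.deflateSync(JSON.stringify(filters)).toString('base64url');
  const url = `/items?q=${q}`; // shareable, cacheable GET
  // Server: reverse the transformation.
  const decoded = JSON.parse(zlib.inflateSync(Buffer.from(q, 'base64url')).toString('utf8'));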
"Filters" suggests that you are trying to query. So, QUERY, perhaps? https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-m...
Or stop worrying and just use POST. The computer isn't going to care.
HTML FORMs are limited to www-form-encoded or multipart. The length of the queries on a GET with a FORM is limited by intermediaries that shouldn't be limiting it. But that's reality.
Do a POST of a query document/media type that returns a "Location" that contains the query resource that the server created as well as the data (or some of it) with appropriate link elements to drive the client to receive the remainder of the query.
In this case, the POST is "writing" a query resource to the server and the server is dealing with that query resource and returning the resulting information.
Soon, hopefully, QUERY will save us all. In the meantime, simply using POST is fine.
I've also seen solutions where you POST the filter config, then reference the returned filter ID in the GET request, but that often seems like overkill even if it adds some benefits.
I describe mine as a JSON-Based Representational State SOAP API to other internal teams. When their eyes cross I get to work sifting through the contents of their pockets for linting errors and JIRA tickets.
Yeah
I can assure you very few people care
And why would they? They're getting value out of this and it fits their head and model view
Sweating over this takes you nowhere
I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.
When I think about some of the RESTy things we do like return part of the response as different HTTP codes, they don't really add that much value vs. keeping things on the same layer. So maybe the biggest value add so far is JSON, which thanks to its limited nature prevents complication, and OpenAPI ecosystem which grew kinda organically to provide pretty nice codegen and clients.
More complexity lessons here: look at oneOf support in OpenAPI implementations, and you will find half of them flat out don't have it, and the other half are buggy even in YOTL 2025.
> I've been doing web development for more than a decade and I still can't figure out what REST actually means, it's more of a vibe.
While I generally agree that REST isn't really useful outside of academic thought experiments: I've been in this about as long as you have, and it really isn't hard. Try reading Fielding's paper once; the ideas are sound and easy to understand, it's just with a different vision of the internet than the one we ended up creating.
You can also read Fielding’s old blog posts. He used to write about it a lot before before he stopped blogging.
This is very true. Over my 15 years of engineering, I have never suffered _that_ much with integrating with an API (assuming it exists). So the lack of "HATEOAS" hasn't even been noticeable for me. As long as they get most of the status codes right (specifically 200, 401, 403, 429) I usually have no issues integrating, and I don't even notice that they don't have some "discoverable API". As long as I can get the data I need or can make the update I need, I am fine.
I think good REST API design is more a service for the engineer than for the client.
> As long as they get most of the 400 status codes right (specifically 200, 401, 403, 429)
A client had built an API that would return 200 on broken requests. We pointed it out and asked if maybe it could return 500, to make monitoring easier. Sure thing, next version: "Http 200 - 500". They just wrote 500 in the message body; the return code remained 200.
Some developers just do not understand http.
I just consumed an API where errors were marked with a "success": false field.
The "success" is never true. If it's successful, it's not there. Also, a few endpoints return 500 instead, because of course they do. Oh, and one returns nothing on error and data on success, because, again, of course it does.
Anyway, if you want a clearer symptom that your development stack is shit and has way too much accidental complexity, there isn't any.
This is the real world. You just deal with it (at least I do) because fighting it is more work and at the end of the day the boss wants the project done.
I've seen this a few times in the past but for a different reason. What would happen in these cases was that internally there'd be some cascade of calls to microservices that all get collected. In the most egregious examples it's just some proxy call wrapping the "real" response.
So it becomes entirely possible to get a 200 from the thing responding to you, but it may be wrapping an upstream error that gave it a 500.
Sometimes I wish HN supported emojis so I could reply with the throw-up one.
I've had frontend devs ask for this, because it was "easier" to handle everything in the same then callback. They wanted me to put ANY error stuff as a payload in the response.
{ "statusCode": 200, "error" : "internal server error" }
Nice.
> So the lack of "HATEOaS" hasn't even been noticable for me.
I think HATEOAS tackles problems such as API versioning, service discovery, and state management in thin clients. API versioning is trivial to manage with sound API Management policies, and the remaining problems aren't really experienced by anyone. So you end up having to go way out of your way to benefit from HATEOAS, and you require more complexity both on clients and services.
In the end it's a solution searching for problems, and no one has those problems.
It isn't clear that HATEOAS would be better. For instance:
>>Clients shouldn’t assume or hardcode paths like /users/123/posts
Is it really net better to return something like the following just so you can change the url structure?
"_links": { "posts": { "href": "/users/123/posts" } }
I mean, so what? We've created some indirection so that the url can change (e.g. "/u/123/posts").
Yes, so the link doesn't have to be relative to the current host. If you move user posts to another server, the href changes, nothing else does.
If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes.
It's brittle and will break some time in the future.
>> If you move user posts to another server, the href changes, nothing else does
It isn't clear what insurance you are really buying here. You can't possibly mean another physical server. Obviously that happens all the time with any site but no one is changing links to point to the actual hardware - just use a normal load balancer. Is it domain name change insurance? That doesn't add up either.
>> If suddenly a bug is found that lets people iterate through users that aren't them, you can encrypt the url, but nothing else changes.
Normally you would just fix the problem instead of doing weird application level encryption stuff.
>> The bane of the life of backend developers is frontend developers that do dumb "URL construction" which assumes that the URL format never changes
If those "frontend" developers are paying customers as in the case of AWS, OpenAI, Anthropic then you probably want to make your API as simple as possible for them to understand.
> The team constantly bikesheds over correct status codes and at least a few are used contrary to the HTTP spec
I've done this enough times that now I don't really bother engaging. I don't believe anyone gets it 100% correct ever. As long as there is nothing egregiously incorrect, I'll accept whatever.
> I sympathize with the pedantry here and found Fielding's paper to be interesting, but this is a lost battle.
True. Losing hacking/hacker was sad but I can live with it - crypto becoming associated with scam coins instead of cryptography makes me want to fight.
I have seen monstrosities claiming to be rest that use HTTP but actually have a separate set of action verbs, nestled inside of HTTP's.
In a server holding a "deck of cards," there might be a "HTTP GET <blah-de-blah>/shuffle.html" call with the side-effect of performing a server-side randomization operation.
I just made that up because I don't want to impugn anyone. But I've seen API sets full of nonsense just like that.
this is most probably a 90% hit
100% agreed, “language evolves”
This article also makes the point of not focusing on the verbs themselves: the REST dissertation doesn't focus on them.
The other side of this is that the IETF RESTful proposals from 1999 that talk about the protocol for implementation are just incomplete. The obscure verbs have no consensus on their implementation, and libraries across platforms may do PUT, PATCH, DELETE incompatibly. This is enough reason to just stick with GET and POST and not try to be a strict REST adherent, since you'll hit a wall.
Importantly for the discussion, this also doesn't mean the push for REST api's was a failure. Sure, we didn't end up with what was precisely envisioned from that paper, but we still got a whole lot better than CORBA and SOAP.
The lowest common denominator in the REST world is a lot better than the lowest common denominator in SOAP world, but you have to convince the technically literate and ideological bunch first.
We still have gRPC though...
>- There's a decent chance listing endpoints were changed to POST to support complex filters
Please. Everyone knows they tried to make the complex filter work as a GET, then realized the filtering query is so long that it breaks whatever WAF or framework is being used because they block queries longer than 4k chars.
[flagged]
I disagree. It's a perfectly fine approach to many kinds of APIs, and people aren't "mediocre" just for using widely accepted words to describe this approach to designing HTTP APIs.
> and people aren't "mediocre" just for using widely accepted words
If you work off "widely accepted words" when there is disagreeing primary literature, you are probably mediocre.
So your view is that the person who coins a term forever has full rights to dictate the meaning of that term, regardless of what meaning turns out to be useful in practice and gets broadly accepted by the community? And you think that anyone who disagrees with such an ultra-prescriptivist view of linguistics is somehow a "mediocre programmer"? Do I have that right?
I have no dog in this fight, but 90% of technical people around me keep calling authentication authorization no matter how many times I explain the difference to those who even care to listen. It's misused in almost every application developed in this country.
Sometimes it really is bad and "everybody" can be very wrong, yes. None of us are native English speakers (most don't speak English at all), so these foreign sounding words all look the same, it's a forgivable "offence".
No. For all people who use "REST": if reading Fielding is the exception that gets you on HN, then not reading Fielding is what the average person does. Mediocre.
Using Fielding's term to refer to something else is an extra source of confusion which kinda makes the term useless. Nobody knows what the speaker exactly refers to.
The point is lost on you though. There are REST APIs (almost none), and there are "REST APIs" - a battle cry of mediocre developers. Now go tell them their restful has nothing to do with rest. And I am now just repeating stuff said in article and in comments here.
Why should I (or you, for that matter) go and tell them their restful has nothing to do with rest? Why does it matter? They're making perfectly fine HTTP APIs, and they use the industry standard term to describe what kind of HTTP API it is.
It's convenient to have a word for "HTTP API where entities are represented by JSON objects with unique paths, errors are communicated via HTTP status codes and CRUD actions use the appropriate HTTP methods". The term we have for that kind of API is "rest". And that's fine.
1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
2. So just "HTTP API". And that would suffice. Adding "restful" is trying to be extra-smart, or to fit in when everyone around is being extra-smart.
> 1. Never said I'm going to tell them. It's on someone else. I'm just going to lower my expectation from such developers accordingly.
This doesn't seem like a useful line of conversation, so I will ignore it.
> 2. So just "HTTP API".
No! There are many kinds of HTTP APIs. I've both made and used "HTTP APIs" where HTTP is used as a transport and API semantics are wholly defined by the message types. I've seen APIs where every request is an HTTP POST with a protobuf-encoded request message and every response is a 200 OK with a protobuf-encoded response message (which might then indicate an error). I've seen GraphQL APIs. I've seen RPC-style APIs where every "RPC call" is a POST request to an endpoint whose name looks like a function name. I've seen APIs where request and response data is encoded using multipart/form-data.
Hell, even gRPC APIs are "HTTP APIs": gRPC uses HTTP/2 as a transport.
Telling me that something is an "HTTP API" tells me pretty much nothing about how it works or how I'm expected to use it, other than that HTTP is in some way involved. On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it, and the documentation can assume a lot of pre-existing context because it can assume that I've used similar APIs before.
> On the other hand, if you tell me that something is a "REST API", I already have a very good idea about how to use it (...)
Precisely this. The value of words is that they help communicate concepts. REST API or even RESTful API conveys a precise idea. To help keep pedantry in check, Richardson's maturity model provides value.
Everyone manages to work with this. Not those who feel the need to attack people with blanket accusations of mediocrity, though. They hold onto meaningless details.
You're being needlessly pedantic, and it seems the only purpose to this pedantry is finding a pretext to accuse everyone of being mediocre.
I think the pushback is because you labelled people who create "REST APIs" as "mediocre" without any explanation. That may be a good starting point.
It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
Most of us are not writing proper RESTful APIs because we're dealing with legacy software, weird requirements, and the egos of other developers. We're not able to build whatever we want.
And I agree with the feature article.
> It’s the worst kind of pedantry. Simultaneously arrogant, uncharitable and privileged.
I'd go as far as to claim it is by far the dumbest kind, because it has no value, serves no purpose, and solves no problem. It's just trivia used to attack people.
I met a DevOps guy who didn't know what "dotfiles" are.
However, I'd argue people who use the term to describe it the same way as everyone else are the smart ones; if you want to refer to the "real" one, just add "strict" or "real" in front of it.
I don't think we should dismiss people over drifting definitions and lack of "foundational knowledge".
This is more like people arguing over "proper" English, the point of language is to communicate ideas. I work for a German company and my German is not great but if I can make myself understood, that's all that's needed. Likewise, the point of an API is to allow programs, systems, and people to interoperate. If it accomplishes that goal, it's fine and not worth fighting over.
If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given these days, and maybe XML, but why not plain text, why not PDF? My job isn't an academic paper; good enough to get the job done is going to have to be good enough.
I agree, though it would be really, really nice if an HTTP method like GET did not modify things. :)
> This is more like people arguing over "proper" English, the point of language is to communicate ideas.
ur s0 rait, eye d0nt nnno wy ne1 b0dderz tu b3 "proppr"!!!!1!!
</sarcasm>
You are correct that communication is the point. Words do communicate a message. So too does disrespect for propriety: it communicates the message that the person who is ignorant or disrespectful of proper language is either uneducated or immature, and that in turn implies that such a person’s statements and opinions should be discounted if not ignored entirely.
Words and terms mean things. The term ‘REST’ was coined to mean something. I contend that the thing ‘REST’ originally denoted is a valuable thing to discuss, and a valuable thing to employ (I could be wrong, but how easy will it be for us to debate that if we can’t even agree on a term for the thing?).
It’s similar to the ironic use of the word ‘literally.’ The word has a useful meaning, there is already the word ‘figuratively’ which can be used to mean ‘not literally’ and a good replacement for the proper meaning of ‘literally’ doesn’t spring to mind: misusing it just decreases clarity and hinders communication.
> If my API is supposed to rely on content-type, how many different representations do I need? JSON is a given anymore, and maybe XML, but why not plain text, why not PDF?
Whether something is JSON or XML is independent of the representation — they are serialisations (or encodings) of a representation. E.g. {"type": "foo","id":1}, <foo id="1"/>, <foo><id>1</id></foo> and (foo (id 1)) all encode the same representation.
>misusing it just decreases clarity and hinders communication
There is no such thing as "misusing language". Language changes. It always does.
Maybe you grew up in an area of the world where it's really consistent everywhere, but in my experience I'm going to have a harder time understanding people even two to three villages away.
Because language always changes.
Words mean a particular thing at a point in time and space. At another one, they might mean something completely different. And that's fine.
You can like it or dislike it, that's up to you. However, I'd say every little bit of negative thoughts in that area only serve to make yourself miserable, since humanity and language at large just aren't consistent.
And that's ok. Be it REST, literally or even a normal word such as 'nice', which used to mean something like 'foolish'.
Again, language is inconsistent by default and meanings never stay the same for long - the more a terminus technicus gets adapted by the wider population, the more its meaning gets widened and/or changed.
One solution for this is to just say "REST in its original meaning" when referring to what is now the exception instead of the norm.
> I work for a German company and my German is not great but if I can make myself understood, that's all that's needed.
Really? What if somebody else wants to get some information to you? How do you know what to work on?
Pretty much everyone speaks English too, it's the official language of the company. Though we all try to be respectful; if I can't understand them then they tell me again in English. I try to respond as much as possible in German and switch to English if needed - there's also heavy use of deepl on my side which seems to be a lot more idiomatic than Google, MS, or Apple translate.
What an incredibly bad take.
When I was working on my first HTTP-based API 13 years ago, based on many comments about true REST, I decided to first study what REST should really be. I read Fielding's paper cover to cover, I read the RESTful Web Services Cookbook from O'Reilly, and then proceeded to work around Django idioms to provide a REST API. This was a bit of cargo-cult thinking on my end; I didn't truly understand how REST would benefit my service. It took me several more years and several more HTTP APIs to understand that in the case of these services, there were no benefits.
The vision of an API that is self-discoverable and that works with a generic client is not practical in most cases. I think that perhaps the AWS dashboard, with its multitude of services, has some generic UI code that allows it to handle these services without service-specific logic, but I doubt even that.
Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing are left out of the paper. To make a truly discoverable API you need to specify a protocol for endpoint discovery, operation descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client. If your service is the only one that implements this client, you made a lot of extra effort to end up with the same solution that non-REST services implement - a service provides an API and JS code to work with the API (or a command line client that works with the API), but there is no client code reuse at all.
I also think that good UX is not compatible with REST goals. From a user perspective, app-specific code can provide better UX than generic code that can discover endpoints and provide UI for any app. Of course, UI elements can be standardized and described in some languages (remember XUL?), so UI can adapt to app requirements. But the most flexible way for such standardization is to provide a language like JavaScript that is responsible for building UI.
> The vision of API that is self discoverable and that works with a generic client is not practical in most cases. [..] Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs. It is an architecture, but the details of how clients should really discover the endpoints and determine what these endpoints are doing is left out of the paper. To make truly discoverable API you need to specify protocol for endpoints discovery, operations descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client
You said what I've thought about REST better than I could have put it.
A true implementation of a REST client is simply not possible. Any client needs to know what all those URLs are going to do. If you suddenly add a new action (like /cansofspam/123/frobnicate), a client won't know what to do with it. The client will need to be updated to add frobnication functionality, or else it just ignores it. At best, it could present a "Frobnicate" button.
This really is why nobody has implemented a REST server or client that actually conforms to Fielding's paper. It's just not realistic to have a client that can truly self-discover an API without being written to know what APIs to expect.
> A true implementation of a REST client is simply not possible
Sure it is, it's just not very interesting to a programmer. It's the browser. That's why there was no need to talk about client implementations. And why it's hypermedia driven. It's implicit in the description that it's meant to be discoverable by humans.
AirBnb rediscovered REST when they implemented their Server Driven UI Platform. Once you strip away all the minutiae about resources and URIs, the fundamental idea of HATEOAS is to ship the whole UI from the server and have the client be generic (the browser). Now you can't have the problem where the frontend gets desynced from the backend.
> It's the browser.
This cannot be overstated.
I'm watching with some interest to see if the LLM/MCP crowd gradually reinvents REST principles. LLMs are the only software we have invented yet which is smart enough to use a REST interface.
I think you're right. APIs have a lot of aspects to them, so describing them is hard. API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
So fully implementing a perfect version of REST is usually not necessary for most types of problems users actually encounter.
What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
> What REST has given us is an industry-wide lingua franca. At the basic level, it's a basic understanding of how to map nouns/verbs to HTTP verbs and URLs. Users get to use the basic HTTP response codes. There's still a ton of design and subtlety to all this. Do you really get to do things that are technically allowed, but might break at a typical load balancer (returning bodies with certain error codes)? Is your returning 500 retriable in all cases, with what preferred backoff behavior?
What was wrong with mapping all nouns and verbs to POST (maybe sometimes GET), and treating HTTP response codes other than 200 as meaning your request failed somewhere between the client code and the application server code? HTTP 200 means the application server processed the request and you can check the payload for an application-level indicator of success, failure, and/or partial success. If you work with enough systems, you end up going back to this, because the least common denominator works everywhere.
Either way, anything that isn't ***** SOAP is a good start.
>API users need to know typical latency bounds, which error codes may be retried, whether an action is atomic or idempotent. HATEOAS gets you none of these things.
Those things aren't always necessary. However API users always need to know which endpoints are available in the current context. This can be done via documentation and client-side business logic implementing it (arguably, more work) or this can be done with HATEOAS (just check if server returned the endpoint).
HTTP 500 being retriable sounds like a design error, when you can use HTTP 503 to explicitly say "try again later, it's temporary".
I think this hits the nail on the head. Complaining that the current understanding of REST isn't exactly the same as the original usage is missing the point that now REST gives people a good idea of what to expect and how to use the exposed interface.
It's actually a very analogous complaint to how object-oriented programming isn't how it was supposed to be and that only Smalltalk got it right. People now understand what is meant when people say OOP even if it's not what the creator of the term envisioned.
Computer Science, and even the world in general, is littered with examples of this process in action. What's important is that there's a general consensus of the current meaning of a word.
Yes, the field is littered with imperfection.
One thing though - if you do take the time to learn the original "perfect" versions of these things, it helps you become a much better system designer. I'm constantly worried about API design because it has such large and hard-to-change consequences.
On the other hand, we as an industry have also succeeded quite a bit! So many of our abstractions work really well.
The browser is "generic code" that provides the UX we use all day, every day.
REST includes allowing code to be part of the response from a server, there are the obvious security issues, but the browsers (and the standards) have dealt with a lot of that.
https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...
It's not just the original REST that usually has no benefits. The industry's reinterpreted version of weak REST also usually has little to no benefits. Who really cares that deleting a resource must necessarily be done with the DELETE HTTP verb rather than simply a POST?
The DELETE verb exists, there's no reason not to use it.
There is one reason. The DELETE absolutely must be idempotent. If it's not, then use POST.
The POST verb exists, there's no reason not to use it to ask a server to delete data.
In fact, there are plenty of reasons not to use DELETE and PUT. Middleboxes managed by incompetent security people block them, they require that developers have a minimum of expertise and don't break the idempotency rule, lots of software stacks simply don't support them (yeah, those stacks are bad, which still doesn't change anything), and most of the internet just doesn't use the benefit they provide (because they don't trust the developers behind the server not to break the rules).
There's a great reason: I'm using HTTP only as a transport layer, not a semantic layer.
And you just added more work to yourself to interpret the HTTP verb. You already need work to interpret the body of a POST request, so why not put the information of "the operation is trying to delete" inside the body?
You have to represent the action somehow. And letting proxies understand a wee bit of what's going on is useful. That's how you can have a proxy that lets your users browse the web but not login to external sites, and so on.
> To make truly discoverable API you need to specify protocol for endpoints discovery, operations descriptions, help messages etc. Then you need clients that understand your specification, so it is not really a generic client.
Generic clients just need to understand hypermedia and they can discover your API, as long as your API returns hypermedia from its starting endpoint and all other endpoints are transitively linked from that start point.
Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
This is what discoverability via HATEOAS is. True REST can be seen as exporting an object model with reflection capabilities. For clients that are familiar with your API, they are using hypermedia to access known/named properties and methods, and generic clients can use reflection to do the same.
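A toy version of that "reflection" over a hypermedia API, assuming HAL-style _links and a made-up /api entry point:
  async function discover(url, seen = new Set()) {
    if (seen.has(url)) return;
    seen.add(url);
    const doc = await (await fetch(url)).json();
    // Every representation advertises where you can go next.
    for (const [rel, link] of Object.entries(doc._links || {})) {
      console.log(`${url} --${rel}--> ${link.href}`);
      await discover(link.href, seen);
    }
  }
  discover('/api'); // walk everything reachable from the entry point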
> Let me ask you this: if I gave you an object X in your favourite OO language, could you use your language's reflection capabilities to discover all properties of every object transitively reachable from X, and every method that could be called on X and all objects transitively reachable from X? Could you not even invoke many of those methods, assuming the parameter types are mostly standardized objects or have constructors that accept standardized objects?
Sure this can be done, but I can't see how to build a useful generic app that interacts with objects automatically by discovering the methods and calling them with discovered parameters. For things like a debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users, the UI needs to be aware of what the available methods do and needs to be intentionally designed to provide intuitive ways of calling them.
> For things like debugger, REPL, or some database inspection/manipulation tool, this approach is useful, but for most apps exposed to end users
Yes, exactly, but the point is that something like Swagger becomes completely trivial, and so you no longer need a separate, complex tool to do what the web automatically gives you.
The additional benefits are on the server-end, in terms of maintenance and service flexibility. For instance, you can now replace and transition any endpoint URL (except the entry endpoint) at any time without disrupting clients, as clients no longer depend on specific URL formats (URLs are meaningful only to the server), but depend only on the hypermedia that provides the endpoints they should be using. This is Wheeler's aphorism: hypermedia adds one level of indirection to an API which adds all sorts of flexibility.
For example, you could have a set of servers implementing an application function, each designated by a different URL, and serve the URL for each server in the hypermedia using any policy that makes sense, effectively making an application-specific load balancer. We worked around scaling issues over the years by adding SNI to TLS and creating dedicated load balancers, but Fielding's REST gave us everything we needed long before! And it's more flexible than SNI because these servers don't even have to be physically located behind a load balancer.
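A minimal sketch of that last idea, with every hostname and link name invented for illustration: the "load balancing" is just the server choosing which URL to embed in the hypermedia it returns.

    // Hypothetical pool of servers that all implement the "reports" function.
    const reportServers = ["https://reports-a.example.com", "https://reports-b.example.com"];

    // Any policy works here: round robin, least-loaded, geo-affinity, ...
    function pickReportServer(): string {
      return reportServers[Math.floor(Math.random() * reportServers.length)];
    }

    // The entry resource embeds whichever server was picked for this response,
    // so clients are steered without any dedicated load balancer in the path.
    function entryResource() {
      return {
        _links: {
          reports: { href: `${pickReportServer()}/reports` },
        },
      };
    }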
There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
Was the client of the service that you worked on fully generic and application independent? It is one thing to be able to change URLs only on the server, without requiring a client code change, and such flexibility is indeed practical benefit that the REST architecture gives us. It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code. This goal is something that REST architecture tried to address, but IMO it was not realized in practice.
> There are many ideas in the REST paper that are super useful, but the goal of making a generic client working with any API is difficult if not impossible to achieve.
It's definitely possible to achieve: anywhere that data is missing you present an input prompt, which is exactly what a web browser does.
That said, the set of autonomous programs that can do something useful without knowing what they're doing is of course more limited. These are generic programs like search engines and AI training bots that crawl and index information.
> It is another thing to change say, a calendar application into a messaging application just by returning a different entry point URL to the same generic client code.
Web browsers do exactly this!
> Web browsers do exactly this!
Browsers provide a generic execution environment, but the client code (JavaScript/HTML/CSS) is not generic. Calendar application and messaging application entry points provide application-specific code implementing the calendar or messaging functions. I don't think this is what was proposed in the REST paper, otherwise we wouldn't have articles like 'Most RESTful APIs aren't really RESTful'.
> but the client code (JavaScript/HTML/CSS) is not generic
The HTML/hypermedia returned is never generic, that's why HATEOAS works at all and is so flexible.
The "client" JS code is provided by the server, so it's not really client-specific (the client being the web browser here--maybe should call it "agent"). Regardless, sending JS is an optimization, calendars and messaging are possible using hypermedia alone, and proves the point that the web browser is a generic hypermedia agent that changes behaviour based on hypermedia that's dictated solely by the URL.
You can start programming any app with a plain hypermedia version and then add JS to make the user experience better, which is the approach that htmx is reviving.
What I don't get from this and some other comments in this thread is that the argument seems to be that REST is practical, every web page is actually a REST app, it has one entry point, all the actions are discoverable by the user from this entry point, and application-specific JavaScript code is allowed by the REST architecture. But then, why are there so many articles and posts (also by Fielding) complaining that people claim to be doing REST, but are actually not doing it?
In all these discussion, I didn't see an article that would actually show an example of a successful application that does REST properly, all elements of it.
While I haven't looked too deeply, I think HN might be an example that follows REST. At least I don't see anything in the functionality that wouldn't be easily fulfilled by following REST with no change in the outwards behaviour. A light sprinkle of JS to avoid some page reloads and that's it.
I agree that not many frameworks encourage "true" REST design, but I don't think it's too hard to get the hang of it. Try out htmx on a toy project and restrict yourself to using literally no JS and no session state, and every UI-focused endpoint of your favoured server-side framework returns HTML.
> Generic clients just need to understand hypermedia
Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant. I'm pretty sure the reason we ended up settling on just tossing JSON blobs around and baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
(Besides: practically, for a web-served interface, the client may as well carry semantic understanding because the client came from the server).
> Yikes. Nobody wants to implement a browser to create a UI for ordering meals from a restaurant.
You don't need a full web browser. Fielding published his thesis in 2000, browsers were almost trivial then, and the needs for programming are even more trivial: you can basically skip any HTML that isn't a link tag or form data for most purposes.
> baking the semantics of them into the client is that we don't want the behavior of the application to get tripped up on whether someone failed to close a <b> tag.
This is such a non-issue. Why aren't you worried about badly formatted JSON? Because we have well-tested JSON formatters. In a world where people understood the value of hypermedia as an interchange format, we'd be in exactly the same position.
And to be clear, if JSON had links as a first class type rather than just strings, then that would qualify as a hypermedia format too.
If I'm going to do HTML that isn't HTML then I might as well not do HTML; there are a lot of sharp edges in that particular markup that I'd prefer to avoid.
> Why aren't you worried about badly formatted JSON?
Because the JSON spec is much smaller than the HTML spec, so it is much easier for the parser to prevalidate and reject invalid JSON.
Maybe I need to reread the paper and substitute "a good hypermedia language" for HTML conceptually, see if it makes more sense to me.
Fielding's thesis barely mentions HTML (20 times), and usually in the context of discussing standards or why JS beat Java applets, but he discusses hypermedia quite a bit (over 70 times).
If you extended JSON so that URLs (or URIs) were first-class, something like:
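    // purely illustrative sketch; the hostnames and the angle-bracket "link" syntax are made up
    {
      "name": "Ada",
      "orders": <https://api.example.com/users/123/orders>,
      "avatar": <https://img.example.com/ada.png>
    }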
it would form a viable hypermedia format because then you can reliably distinguish references from other forms of data. I think the only reason something like this wasn't done is that Crockford wanted JSON to be easily parsable by existing JS interpreters. You can work around this with JSON Schema to some extent, where the schema identifies which strings are URLs, but that's just way more cumbersome than the distinction being made right in the format.
> Fielding's paper doesn't provide a complete recipe for building self-discoverable APIs.
But it does though. An HTTP server returns an HTTP response to a request from a browser. The response is an HTML webpage that is rendered to the user with all discoverable APIs visible as clickable links. Welcome to the World Wide Web.
You describe how web pages work. Web pages are intended for human interaction; APIs are intended for machine interaction. How can a generic Python or JavaScript client discover these APIs? Such clients will request the JSON representation of a resource, because JSON is intended for machine consumption and HTML is intended for humans. Representations are equivalent: if you request the JSON representation of a /users resource, you get a JSON list; if you request the HTML representation, you get an HTML list, but the content should be the same. Should you return UI controls for modifying the list as part of the HTML representation? If you do, your JSON and HTML representations are different, and your Python and JavaScript clients still cannot discover what list-modification operations are possible; only a human can, by looking at the HTML representation. This is not REST, if I understand the paper correctly.
> You describe how web pages work, web pages are intended for human interactions
Exactly, yes! The first few sentences from Wikipedia...
"REST (Representational State Transfer) is a software architectural style that was created to describe the design and guide the development of the architecture for the World Wide Web. REST defines a set of constraints for how the architecture of a distributed, Internet-scale hypermedia system, such as the Web, should behave." -- [1]
If you are desiging a system for the Web, use REST. If you are designing a system where a native app (that you create) talks to a set of services on a back end (that you also create), then why conform to REST principles?
[1] - https://en.wikipedia.org/wiki/REST
Most web apps today use APIs that return JSON and are called by JavaScript. Can you use REST for such services, or does REST require a switch to HTML representations rendered by the server, where each interaction returns a new HTML page? How can such an HTML representation even use PUT and DELETE verbs, when these are available only to JavaScript code? What if I design a system where API calls can be made both from the web and from a command-line client or a library? Should I use two different architectures to cover both use cases?
> Most web apps today use APIs that return JSON and are called by JavaScript. Can you use REST for such services
You kind of could, but it's a bad idea. A core tenet of the REST architecture is that it supports a network of independent servers that provide different services (i.e. webpages) and users can connect to any of them with a generic client (i.e. a web browser). If your mission is to build a specialized API for a specialized client app (a JS web app in your example), then using REST just adds complexity for no reason.
For example, you could define a new content-type application/restaurantmenu+json and build a JS client that renders the content-type like a restaurant's homepage. Then you could use your restaurant browser JS client to view any restaurant's menu in a pretty UI... except your own restaurant's server is the only one that delivers application/restaurantmenu+json, so your client is only usable on your own page and you did a whole lot of additional work for no reason.
> does REST require a switch to HTML representation ... How such HTML representation can even use PUT and DELETE verbs
Fielding's REST is really just an abstract idea about how to build networks of services. It doesn't require using HTTP(S) or HTML, but it so happens that the most significant example of REST (the WWW) is built on HTTPS and HTML.
As in the previous example, you could build a REST app that uses HTTP and application/restaurantmenu+json instead of HTML. This representation could direct the client to use PUT and DELETE verbs if you like, even though these aren't a thing in HTML.
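A sketch of what such a representation might look like; the media type is the invented one from the example above, and every field name here is made up:

    // Content-Type: application/restaurantmenu+json   (invented media type)
    {
      "restaurant": "Chez Example",
      "items": [
        {
          "name": "Soup of the day",
          "price": 6.5,
          "actions": {
            "update": { "href": "/menu/items/1", "method": "PUT" },
            "remove": { "href": "/menu/items/1", "method": "DELETE" }
          }
        }
      ]
    }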
Thanks for the insight. This very well matches my experience from the top comment of this thread. I added discovery-related functionality to a JSON-based API in an attempt to follow REST and didn't see any benefits from the extra work and complexity. Understanding that REST is inherently for HTML (or a similar hypertext-based generic client) and that it doesn't make sense to try to match it with a JSON+JS based API is very refreshing. Even the article that sparked this discussion gives an example of a JSON-based API with discovery-related functionality added to it.
Keep in mind that Fielding used his "REST" principles to drive work on the release of HTTP 1.1 in 1999. He subsequently codified these RESTful principles in his dissertation in 2000. The first JSON message was sent in 2001. The reason RESTful is perfectly suited to the WWW is because REST drove HTTP 1.1.
Nowadays there are just so many use cases where an architecture is more suited to RPC (and POST). And trying to bend the architecture to be "more RESTful" just serves to complicate things.
Personally I never saw "self-discoverable" as a goal, let alone an achievable one, so I think you're overestimating the ambitions of simple client-design.
Notably, the term "discoverable" doesn't even appear in TFA.
From the article: 'The phrase “not being driven by hypertext” in Roy Fielding’s criticism refers to the absence of Hypermedia as the Engine of Application State (HATEOAS) in many APIs that claim to be RESTful. HATEOAS is a fundamental principle of REST, requiring that the client dynamically discover actions and interactions through hypermedia links embedded in server responses, rather than relying on out-of-band knowledge (e.g., API documentation).'
Fielding's idea of REST does seem pretty pointless. "Did you know that human-facing websites are made out of hyperlinked pages? This is so crazy that it needs its own name for everyone to parrot!" But a web application isn't going to be doing much beyond basic CRUD when every individual change in state is supposed to be human-driven. And if it's not human-driven, then it's protocol-driven, and therefore not REST.
REST is, roughly, a structured description of how HTML/HTTP/the web work. An example of a non-REST aspect of how a webpage works is how the favicon is by default fetched from a well-known URL, or how cookies use a magic list of domains to decide whether two origins are similar enough or not.
Other than things like this, the browser makes very few assumptions about how a website works: it just loads what the HTML tells it to load and shows the content to the user. Imagine the alternative where the browser by default assumed that special pages example.com/login and example.com/logout existed and would sometimes navigate you there by itself (like with a prompt "do you want to log in?").
If you wanted to design a new improved html alternative from scratch you likely would want the same properties.
The issue with REST APIs is that most of what we call APIs are not websites, and most of their clients are not browsers but servers or the JavaScript in the browser, where IDs are generally more useful than links.
REST is incredibly successful: HTML is REST, CSS is REST, even JavaScript itself is REST. But we do not call APIs that return HTML/CSS/JS/media "APIs", we call them websites.
Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
Most web APIs are not designed with this use-case in mind. They're designed to facilitate web apps that are much more specific in what they're trying to present to the user. This is both deliberate and valuable; app creators need to be able to control the presentation to achieve their apps' goals.
REST API design is for use-cases where the users should have control over how they interact with the resources provided by the API. Some examples that should be using REST API design:
Considering these examples, it makes sense that policing of what "REST" means comes from the more academically-minded, while the detractors of the definition are typically app developers trying to create a very specific user experience. The solution is easy: just don't call it REST unless it actually is.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
The funny thing is, that perfectly describes HTML. Here’s a document with links to other documents, which the user can navigate based on what the links are called. Because if it’s designed for users, it’s called a User Interface. If it’s designed for application programming, it’s called an Application Programming Interface. This is why HATEOAS is kinda silly to me. It pretends APIs should be used by Users directly. But we already have that, it’s called a UI.
The point is that your Web UI can easily be made to be a REST HATEOAS conforming API at the same time. No separate codepaths, no duplicate efforts, just maybe some JSON templates in addition to HTML templates.
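A rough sketch of what "no separate codepaths" can mean in practice, with all names invented and no particular framework assumed: one handler, two representations of the same resource, both carrying the same links.

    // Hypothetical handler: the representation differs, the resource and its links don't.
    function representOrder(accept: string, order: { id: string; status: string }): string {
      const cancelHref = `/orders/${order.id}/cancellation`;
      if (accept.includes("text/html")) {
        return `<h1>Order ${order.id} (${order.status})</h1>
    <form method="post" action="${cancelHref}"><button>Cancel order</button></form>`;
      }
      return JSON.stringify({ ...order, _links: { cancel: { href: cancelHref } } });
    }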
You're right, pure REST is very academic. I've worked with open/big data, and there's always a struggle to get realistic performance and app architecture design; for anything non-obvious, I'd say there are shades of REST rather than a simple boolean yes/no. Even academics have to produce a working solution or "application", i.e. that which can be actually applied, at some point.
When there is lots of data and performance is important, HTTP is the wrong protocol. JSON/XML/HTML is the wrong data format.
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
It's also useful when you're programming a client that is not a web page!
You GET a thing, you dereference fields/paths in the returned representation, you construct a new URI, you perform an operation on it, and so on.
Consider a directory / database application. You can define a RESTful, HATEOAS API for it, write a single-page web application for it -or a non-SPA if you prefer-, and also write libraries and command-line interfaces to the same thing, all using roughly similar code that does what I described above. That's pretty neat. In the case of a non-SPA you can use pure HTML and not think that you're "dereferencing fields of the returned representation", but the user and the user-agent are still doing just that.
> Government portals for publicly accessible information, like legal codes, weather reports, or property records
Yes, and it's so nice when done well.
https://www.weather.gov/documentation/services-web-api
> Where this kind of API design is useful is when there is a user with an agent (e.g. a browser or similar) who can navigate the API and interact with the different responses based on their media types and what the links are called.
> Most web APIs are not designed with this use-case in mind.
I wonder if this will change as APIs might support AI consumption?
Discoverability is very important to an AI, much more so than to a web app developer.
MCP shows us how powerful tool discoverability can be. HATEOAS could bring similar benefits to bare API consumption.
> Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier. It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
It "was perceived as" a barrier because it is a barrier. It "felt easier" because it is easier. The by-the-book REST principles aren't a good cost-benefit tradeoff for common cases.
It is like saying that your microwave should just have one button that you press to display a menu of "set timer", "cook", "defrost", etc., and then one other button you use to select from the menu, and then when you choose one it shows another menu of what power level and then another for what time, etc. It's more cumbersome than just having some built-in buttons and learning what they do.
I actually own a device that works in that one-button way. It's an OBD engine code reader. It only has two buttons, basically "next" and "select" and everything is menus. Even for a use case that basically only has two operations ("read the codes" and "clear a code"), it is noticeably cumbersome.
Also, the fact that people still suggest it's indispensable to read Fielding's dissertation is the kind of thing that should give everyone pause. If the ideas are good there should be many alternative statements for general audiences or different perspectives. No one says that you don't truly understand physics unless you read Newton's Principia.
This is a very good and detailed review of the concepts of REST, kudos to the author.
One additional point I would add is that making use of the REST-ful/HATEOAS pattern (in the original sense) requires a conforming client to make the juice worth the squeeze:
https://htmx.org/essays/hypermedia-clients
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans
What's often missed when this topic comes up is the question of who the back end API is intended for.
REST and HATEOAS are beneficial when the consumer is meant to be a third party that doesn't directly own the back end. The usual example is a plain old HTML page, the end user of that API is the person using a browser. MCP is a more recent example, that protocol is only needed because they want agents talking to APIs they don't own and need a solution for discoverability and interpretability in a sea of JSON RPC APIs.
When the API consumer is a frontend app written specifically for that backend, the benefits of REST often just don't outweigh the costs. It takes effort to design a more generic, better documented and specified API. While I don't like using tools like tRPC in production, it's hugely useful for me when prototyping, for much the same reason: I'm building both ends of the app and it's faster to ignore separation of concerns.
edit: typo
agree very strongly and think it goes even deeper than that!
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans
https://htmx.org/essays/hypermedia-clients
*HATEOAS
UI designers want control over the look of the page in detail. E.g. some actions that can be taken on a resource are a large button and some are hidden in a menu or not rendered in the UI at all.
A client application that doesn't have any knowledge about what actions are going to be possible with a resource, instead rendering them dynamically based on the API responses, is going to make them all look the same.
So RESTful APIs as described in the article aren't useful for the most common use case of Web APIs, implementing frontend UIs.
My experience with "RESTful APIs" rarely has much to do with the UI. Why even have any API if all you care about is the UI? Why not go back to server driven crap like DWR then?
My experience is that SPAs have been the way to make frontends, for the last eight years or so. May be coming to an end now. Anyway, contact with the backend all went through an API.
During that same time, the business also wanted to use the fact that our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends.
Backenders read about API design, they get the idea they should be REST like (as in, JSON, with different HTTP methods for CRUD operations).
And of course we weren't going to have two separate APIs, that we ran our frontends on our API was another selling point (eat your own dog food, proof that the API can do everything our frontend can, etc).
So: the UI runs on a REST API.
I'm hoping that we'll go back to Django templates with a sprinkle of HTMX here and there in the future, but who knows. That will probably be a separate backend that runs in front of this API then...
> our applications had an API as a selling point - our customers are pretty technical and some of them write scripts against our backends
It is a selling point. A massive one if you're writing enterprise software. It's not merely about "being technical", but mandatory for recurring automated jobs and integration with their other software.
Because UI toolkit independent APIs are more flexible than just returning HTML, and considering only HTML means that you offer subpar experiences on most platforms. Not just mobile software, where web apps are awful, but also desktop, where your software doesn't integrate well with the platform if it's just a webpage.
Returning purely data means being able to transform it in any way you want, no matter where you use it. And depending on your usecase, it also means being able to sell access to it.
This is wrong on many levels.
1. UX designers operate on every stage of software development lifecycle from product discovery to post-launch support (validation of UX hypotheses), they do not exercise control - they work within constraints as part of the team. The location of a specific action in UI and interaction triggering it is orthogonal to availability of this action. Availability is defined by the state. If state restricts certain actions, UX must reflect that.
2. From an architectural point of view, once you encapsulate the state-checking behavior, the following will work the same way: "if (state === something)" and "if (resource.links["action"] !== null)". The latter approach is much better, because in most cases any state-changing action will require validation on the server, and you can implement the logic only once (on the server).
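For example (a sketch only; the resource shape and the showButton helper are stand-ins for whatever the real app uses):

    // The server already applied its business rules when building the representation,
    // so the client only checks whether the link is present.
    function renderOrderActions(order: { _links?: { cancel?: { href: string } } }): void {
      const cancel = order._links?.cancel;
      if (cancel) {
        showButton("Cancel order", () => fetch(cancel.href, { method: "POST" }));
      }
      // No duplicated "if (order.status === 'shipped' || ...)" logic on the client.
    }

    // Assumed UI helper, standing in for whatever widget toolkit is in use.
    declare function showButton(label: string, onClick: () => void): void;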
I have been developing HATEOAS applications for quite a while and maintain HAL4J library: there are some complexities in this approach, but UI design is certainly not THE problem.
I'll never understand why the HATEOAS meme hasn't died.
Is anyone using it? Anywhere?
What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
> I'll never understand why the HATEOAS meme hasn't died.
> Is anyone using it? Anywhere?
As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol. If so (a cursory glance at RFC 8555 indicates that it may be), then it’s used by almost everyone who serves HTTPS.
Arguably HTTP, when used as it was intended, is itself a HATEOAS protocol.
> What kind of magical client can make use of an auto-discoverable API? And why does this client have no prior knowledge of the server they are talking to?
LLMs seem to do well at this.
And remember that ‘auto-discovery’ means different things. A link typed next enables auto-discovery of the next resource (whatever that means); it assumes some pre-existing knowledge in the client of what ‘next’ actually means.
> As I recall ACME (the protocol used by Let’s Encrypt) is a HATEOAS protocol.
In this case specifically, everybody's lives are worse because of that.
I'm not super familiar with ACME, but why is that? I usually dislike the HATEOAS approach but I've never really seen it used seriously, so I'm curious!
Yes. You used it to enter this comment.
I am using it to enter this reply.
The magical client that can make use of an auto-discoverable API is called a "web browser", which you are using right this moment, as we speak.
This is true, but isn’t this quite far away from the normal understanding of API, which is an interface consumed by a program? Isn’t this the P in Application Programming Interface? If it’s a human at the helm, it’s called a User Interface.
I agree that's a common understanding of things, but I don't think that it's 100% accurate. I think that a web browser is a client program, consuming a RESTful application programming interface in the manner that RESTful APIs are designed to be consumed, and presenting the result to a human to choose actions.
I think if you restrict the notion of client to "automated programs that do not have a human driving them" then REST becomes much less useful:
https://htmx.org/essays/hypermedia-clients/
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
AI may change this at some point.
If you allow the notion of client to include "web browser driven by humans", then what is it about Fielding's dissertation that is considered so important and original in the first place? Sure it's formal and creates some new and precise terminology, but the concept of browsing was already well established when he wrote it.
It formalized the network architecture of distributed hypermedia systems and described interesting characteristics and tradeoffs of that approach. Whether or not it did a GOOD job of that for the layman I will leave to you, only noting the confusion around the topic found, ironically, across the internet.
At that level, it would be infinitely clearer to say, "There is no such thing as a RESTful API, since the purpose of REST is to connect a system to a human user. There is only such a thing as a RESTful UI based on an underlying protocol (HTML/HTTP). But the implementation of this protocol (the web browser) is secondary to the actual purpose of the system, which is always a UI."
There is such a thing as a RESTful API, and that API must use hypertext, as is clearly laid out in Fielding's dissertation. I don't know what a RESTful UI is, but I do know what a hypertext is, how a server can return a hypertext, how a client can receive that hypertext and present it to a user to select actions from.
Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it, although it does change how useful the aspects of REST (in particular, the uniform interface) will be to that client.
> and that API must use hypertext
I'd say that my web browser is not using hypertext. It is merely transforming it so that I can use the resulting hypermedia, and thereby interface with the remote host. That is, my browser isn't the one that decides how to interface with the remote host; I am. The browser implements the hypertext protocol and presents me a user interface to the remote host.
Fielding might have a peculiar idea of what an "API" is, so that a "human + browser" is a programmatic application, but if that's what he says, then I think his ideas are just dumb and I shouldn't bother listening to him.
> Whether or not the API is being consumed by a script client or a browser client doesn't change the RESTful-ness of it
There's no way for a "script client" to use hypertext without implementing a fixed protocol on top of it, which is allegedly not-RESTful. Unless you count a search engine crawler as such a client, I guess, but that's secondary to the purpose of hypertext.
From wikipedia's article on API[1]:
> An application programming interface (API) is a connection between computers or between computer programs. It is a type of software interface, offering a service to other pieces of software.[1] A document or standard that describes how to build such a connection or interface is called an API specification. A computer system that meets this standard is said to implement or expose an API. The term API may refer either to the specification or to the implementation.
The server and browser are two different computer programs. The browser understand how to make an API connection to a remote server and then take an HTML response it receives (if it gets one of that media type) and transform it into a display to present to the user, allowing the user to choose actions found in the HTML. It then understands how to take actions by the user and turn those into further API interactions with the remote system or systems.
Because the browser waits for a human to intervene and make choices (sometimes, consider redirects) doesn't make the overall system any less of a distributed one, with pieces of software integrating via APIs following a specific network architecture, namely what Fielding called REST.
Your intuition that this idea doesn't make a lot of sense for a script-client is correct:
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
[1] - https://en.wikipedia.org/wiki/API
More broadly, I dislike the characterization of the web browser as the "client" in this situation. After all, the browser isn't the recipient of the remote host's services: it's just the messenger or agent on behalf of the (typically human) user, who is the real client of the server, and the recipient of the hypermedia it offers via a hypertext protocol.
That is, the browser may be communicating with the remote server (using APIs provided by the local OS), but it is not itself interfacing with the server, i.e., being offered a service for its own benefit. It may possibly be said that the whole system of "user + browser" interfaces with the remote server, but then it is no longer an application.
(Of course, this is all assuming the classical model of HTML web pages presented to the user as-is. With JS, we can have scripts and browser extensions acting for their own purposes, so that they may be rightly considered "client" programs. But none of these are using a REST API in Fielding's sense.)
OK, I understand you dislike it. But by any reasonable standard the web is a client/server distributed system, where the browsers are the clients. I understand you don't feel like that's right, but objectively that's what is going on. The browser is interfacing with the remote server, via an API discovered in the hypertext responses, based on actions taken by the users. It is no different from, for example, an MMORPG connecting to an API based on user actions in the game, except that the actions are discovered in the hypertext responses. That's the crux of the uniform interface of REST.
I don't know what "for its own benefit" means.
So, given a HATEOAS API, and stock Firefox (or Chrome, or Safari, or whatever), it will generate client views with CRUD functionality?
Let alone ux affordances, branding, etc.
Yes. You used such an api to post your reply. And I am using it as well, via the affordances presented by the mobile safari hypermedia client program. Quite an amazing system!
No. I was served HTML, not a JSON response that the browser discovered how to display.
Yes. Exactly.
The connection between the "H" in HTML and the "H" in HATEOAS might help you connect some dots.
HTML is the HATEOAS response.
The web browser is just following direct commands. The auto-discovery and logic are implemented by my human brain.
Yes.
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
I also use Google Maps, YouTube, Spotify, and Figma in the same web browser. But surely most of the functionality of those would not be considered HATEOAS.
Yes, very strongly agree. Browsers, through the code-on-demand "optional" constraint on REST, have become so powerful that people have started to build RPC-style applications in them.
Ironic that Fielding's dissertation contained the seed of REST's destruction!
Wait what? So everything is already HATEOAS?
I thought the “problem” was that no one was building proper restful / HATEOAS APIs.
It can’t go both ways.
The web, in traditional HTML-based responses, uses HATEOAS, almost by definition. JSON APIs rarely do, and when they do it's largely pointless.
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
I used it on an enterprise-grade video surveillance system. It was great - basically solved the versioning and permissions problem at the API level. We leveraged other RFCs where applicable.
The biggest issue was that people wanted to subvert the model to "make things easier" in ways that actually made things harder. The second biggest issue is that JSON is not, out of the box, a hypertext format. This makes application/json not suitable for HATEOAS, and forcing some hypertext semantics onto it always felt like a kludge.
https://htmx.org/ might be the closest attempt?
https://data-star.dev are taking things a bit further in terms of simplicity and performance and hypermedia concepts. Worth a look.
I think OData isn't used, and that's a proper standard and a lower bar to clear. HATEOAS isn't even benefiting from a popular standard, which is both a cause and a result.
You realize that anyone using a browser to view HTML is using HATEOAS, right? You could probably argue whether SPAs fit the bill, but for sure any server-rendered or static site is using HATEOAS.
The point isn't that clients must have absolutely no prior knowledge of the server, it's that clients shouldn't have to have complete knowledge of the server.
We've grown used to that approach because most of us have been building tightly coupled apps where the frontend knows exactly how the backend works, but that isn't the only way to build a website or web app.
HATEOAS is anything that serves the talking point now apparently
For a traditional web application, HATEOAS is that. HTML as the engine of application state: the application state is whatever the server returns, and we can assess the application state at any time by using our eyeballs to view the HTML. For these applications, HTML is not just a presentation layer, it is the data.
The application is then auto-discoverable. We have links to new endpoints, URLs, that progress or modify the application state. Humans can navigate these, yes, but other programs, like crawlers, can as well.
What do you mean? Both HATEOAS and REST have clear definitions.
Can you be more specific? What exactly is the partial knowledge? And how is that different from non-conforming APIs?
Not totally sure I understand your question, sorry if I don't quite answer it here.
With REST you need to know a few things like how to find and parse the initial content. I need a browser that can go from a URL to rendered HTML, for example. I don't need to know anything about what content is available beyond that though, the HTML defines what actions I can take and what other pages I can visit.
RPC APIs are the opposite. I still need to know how to find and parse the response, but I need to deeply understand how those APIs are structured and what I can do. I need to know schemas for the API responses, I need to know what other APIs are available, I need to know how those APIs relate and how to handle errors, etc.
Not weird at all if people don't strictly follow a standard.
The world of programming, just like the real world, has a lot of misguided doctrines that looked really good on paper, but not on application.
For example:
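    // a hypermedia-style action description (the href and field names here are invented for illustration)
    "_links": {
      "cancel": { "href": "/orders/123/cancel", "method": "POST" }
    }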
Why "POST"?And what POST do you send? A bare POST with no data, or with parameters in it's body?
What if you also want to GET the status of cancellation? Change the type of `method` to an array so you can `"method": ["POST", "GET"]`?
What if you want to cancel the cancellation? Do you do `POST /orders/123/cancel/cancel HTTP/...`, or `DELETE /orders/123/cancel HTTP/...`?
So, people adapt, turning an originally very pure and "based" standard into something they can actually use. After all, all of these things are meant to be productive, rather than ideological.
Have a /cancellation resource. This could be returned by /cancel or it could just be linked to directly
Now you have a noun, and some of the confusion around GET and DELETE etc. goes away.
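A sketch of how that could look, with the paths invented for illustration:

    POST   /orders/123/cancellation    // request cancellation of the order
    GET    /orders/123/cancellation    // check the cancellation's status
    DELETE /orders/123/cancellation    // withdraw the cancellation request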
As someone who criticized a number of their employers' APIs for not being sufficiently ReSTful, especially with regard to HATEOAS, I eventually realized the challenge is the clients. App developers and client developers mostly just want to deal with structured objects that they've built fixed-function UX around (including the top level), and they want to construct URLs on the client. It takes a special kind of developer to desire building the special mini-browsers everywhere that HATEOAS would require, and to desire it from the server side too.
I think LLMs are going to be the biggest shift in terms of actually driving more truly ReSTful APIs; though LLMs are probably equally happy to take ReST-ish responses, they are able to effectively deal with arbitrary self-describing payloads.
MCP at its core seems to be designed around the fact that you've got an initial request to get the schema and then the payload, which works great for a lot of our not-quite-ReST APIs, but you could see over time just doing away with the extra ceremony and doing it all in one request, effectively moving back in the direction of true ReST.
> By using HATEOAS and referencing schema definitions (such as XSD or JSON Schema) from within your resource representations, you can enable clients to understand the structure of the data and navigate the API dynamically.
I actually think this is where the problem lies in the real world. One of the most useful features of a JSON schema is the "additionalProperties" keyword. If applied to the "_links" subschema we're back to the original problem of "out of band" information defining the API.
I just don't see what the big deal is if we have more robust ways of serving the docs somewhere else outside of the JSON response. Would it be equivalent if the only URL in "_links" that I ever populate is a link to the JSONified Swagger docs for the "self" path for the client to consume? What's the point in even having "_links" then? How insanely bloated would that client have to be to consume something that complicated? The templates in Swagger are way more information dense and dynamic than just telling you what path and method to use. There's often a lot more for the client to handle than just CRUD links and there exists no JSON schema that could be consistent across all parts of the API.
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URL's are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.
At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides those semantics. Moving it from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all of the server API's are defined in terms of URL patterns routing to handlers. This is explicit hardcoded encouragement to think in a style in opposition to the HATEOAS philosophy.)
I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.
(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic API's written in a purely HATEOAS style.)
This is what I don’t understand either.
/user/123/orders
How is this fundamentally different than requesting /user/123 and assuming there’s a link called “orders” in the response body?
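In client code the difference is indeed small, which is probably why it feels underwhelming; the names below are illustrative:

    const id = "123"; // example user id

    // Out-of-band knowledge: the URL template lives in the client.
    const ordersA = await (await fetch(`/user/${id}/orders`)).json();

    // Hypermedia: the client only knows the link is *named* "orders";
    // the URL itself comes from the server's response.
    const user = await (await fetch(`/user/${id}`)).json();
    const ordersB = await (await fetch(user._links.orders.href)).json();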
With an HTML body the link will be displayed as content and so will be directly clickable. But if the body is JSON then the client has to somehow generate a UI for the user, which requires some kind of interpretation of the data, so I don’t understand that case.
Just call it an HTTP API and everyone is happy. People forget REST was never intended for APIs in the first place. REST was designed for information systems navigated by humans, not programs.
Similarly, I call Java programs "Object Oriented programs" despite Alan Kay's protests that this isn't at all what Object Orientation was described as in the early papers.
The sad truth is that it's the less widely used concept that has to shift terminology, if it comes into wide use for something else or a "diluted" subset of the original idea(s). Maybe the true-OO-people have a term for Kay-like OO these days?
I think the idea of saving "REST" to mean the true Fielding style including HATEOAS and everything is probably as futile as trying to reserve OO to not include C++ or Java.
I struggle to believe that any API in history has been improved by the developer more faithfully following REST’s strictures. The closest we’ve come to actually decoupled, self describing APIs is MCP, and that required inventing actual AIs to understand them.
The most successful API in history – the World-Wide Web – uses REST principles. That’s where REST came from. It was somebody who was involved in the creation of the early web who looked at it and wrote down a description of what properties of the web made it so successful.
REST on the WWW only works because humans read and interpret the results. Arguably, that’s not an API (Application Programming Interface) but a UI (User Interface).
I have yet to see an API that was improved by following strict REST principles. If REST describes the web (a UI, not an API), and it’s the only useful example of REST, is REST really meaningful?
> REST on the WWW only works because humans read and interpret the results.
This is very obviously not true. Take search engine crawlers, for example. There isn’t a human operator of GoogleBot deciding which links to follow on a case-by-case basis.
> I have yet to see an API that was improved by following strict REST principles.
I see them all the time. It’s ridiculous how many instances of custom logic in APIs can be replaced with “just follow the link we give you”.
This is, almost canonically, the subject of Joel Spolsky's architecture astronauts essay.
It’s not. It’s pretty much the opposite. This is what he’s talking about:
> our clever thinker invents a new, higher, broader abstraction
> When you go too far up, abstraction-wise, you run out of oxygen.
> They tend to work for really big companies that can afford to have lots of unproductive people with really advanced degrees that don’t contribute to the bottom line.
REST is the opposite. REST is “We did this. It worked great! This is why.” And web developers around the world are using this every single day in practical projects without even realising it. The average web developer uses REST, including HATEOAS, all the time, and it works great for them. It’s just when they set out to do it on purpose, they often get distracted by some weird fake definition of REST that is completely different.
That's absolutely not what the essay is about. It's about the misassignment of credit for the success of a technology by people who think the minutiae of the clever implementation was important.
I think you bring up an interesting tangential point that I might agree with--that the people doing the misassignment are how architecture astronauts remain employed.
But the core of Joel Spolsky's three posts on Architecture Astronauts is his expression of frustration at engineers who don't focus on delivering product value. These "Architecture Astronauts" are building layer on layer of abstraction so high that what results is a "worldchanging" yet extremely convoluted system that no real product would use.
A couple choice quotes from https://www.joelonsoftware.com/2008/05/01/architecture-astro...:
> "What is it going to take for you to get the message that customers don’t want the things that architecture astronauts just love to build."
> "this so called synchronization problem is just not an actual problem, it’s a fun programming exercise that you’re doing because it’s just hard enough to be interesting but not so hard that you can’t figure it out."
I don't think this is tangential at all. This whole conversation is exactly the same as Spolsky's point about Napster: it's hard to know what to say to someone who thinks the reason the web was successful was REST, rather than HTML letting you make cool web pages with images in them. And this has played out exactly as you'd expect: nobody cares at all about REST, because it's pure architecture astronaut stuff.
Academically it might be correct, but shipping real features will in most cases be more important than hitting some text book definition of correctness.
Sure, you’re right: pragmatics, in practice, are more important than theory.
But you’re assuming that there is a real contradiction between shipping features and RESTful design. I believe that RESTful design can in many cases actually increase feature delivery speed through its decoupling of clients and servers and more deeply due to its operational model.
> its decoupling of clients and servers.
Notice that both of those are plurals. When you have many clients and many servers implementing a protocol, a formal agreement on the protocol is required. REST (which I will not claim to understand well) makes a formal agreement much easier, but you still need some agreement. However, when there is just one server and just one client (I'll count all web browsers as one, since the browser protocols are well defined enough), you can go faster, for a long time, by just implementing both sides and testing that they work.
Drake meme for me:
REST = Hell No
GQL = Hell No.
RPC with status codes = Grin and point.
I like to get stuff done.
Imagine you were forced to organize your code files like REST. Folder is a noun. Functions are verbs. One per folder. Etc. It would drive you nuts.
Why do this for API unless the API really really fits that style (rare).
GQL is expensive to parse and hides information from proxies (200 for everything)
> RPC with status codes
Yes. All endpoints POST, JSON in, JSON out (or whatever) and meaningful HTTP status codes. It's a great sweet spot.
Of course, this works only for apps that fetch() and createElement() the UI. But that's a lot of apps.
If I don't want to use an RPC framework or whatever I just do:
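    // something along these lines (all names here are invented for illustration):
    const res = await fetch("/api", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ method: "createBooking", params: { roomId: 7 } }),
    });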
And have a dictionary in my server mapping method names to the actual functions. All functions take one param (a dictionary with the data), validate it, use it, and return another single dictionary along with an appropriate status code.
You can add versions and such but at that point you just use JSON-RPC.
This kind of setup can be much better than REST APIs for certain usecases
>All endpoints POST
This makes automating things like retrying network calls hell. You can safely assume a GET will be idempotent, and safely retry it on failure with a delay. A POST might, or might not, also empty your bank account.
HTTP verbs are not just for decoration.
If you're doing well-formed RPC over POST, as opposed to ad hoc RPC (which, let's be honest, is the accurate description for many "REST" APIs in the wild), then requests and responses should have something like an `id` field, e.g. in JSON-RPC:
https://www.jsonrpc.org/specification#request_object
Commonly, servers shouldn't accept duplicate request IDs outside of unambiguous do-over conditions. The details will be in the implementations of server and client, as they should be, i.e. not in the specification of the RPC protocol.
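For reference, a JSON-RPC 2.0 request carries that id alongside the method name and params (the method and params below are made up):

    { "jsonrpc": "2.0", "id": 42, "method": "transferFunds",
      "params": { "from": "acct-1", "to": "acct-2", "amount": 100 } }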
> not just for decoration
Still, they are just a convention.
When you are retrying an API call, you are the one calling the API: you know whether it's a getBookings() or an addBooking() call. So write the client code based on that.
Instead of the API developer making sure GET /bookings is idempotent, he is going to be making sure getBookings() is idempotent. Really, what is the difference?
As for the benefits, you get a uniform interface, no quirks with URL encoding, no nonsense with browsers pre-loading, etc. It's basically full control with zero surprises.
The only drawback is with cookies. SameSite=Lax depends on you using GET for idempotent actions and POST for unsafe actions. However, I am advocating this only for the "fetch() + createElement() = UI" kind of app, where you will use tokens for everything anyway.
> Imagine you are forced to organize your code filed like REST. Folder is a noun. Functions are verbs. One per folder. Etc. Would drive you nuts.
That’s got nothing to do with REST. You don’t have to do that at all with a REST API. Your URLs can be completely arbitrary.
Ok I may have been wrong. I checked the thesis and couldn't see this aspect mentioned. Most of the thesis seems like stuff I agree with. Damn. I'm fighting an impression of REST I had.
It felt easier going through the post after reading these bits near the end:
> The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience
> Therefore, simply be pragmatic. I personally like to avoid the term “RESTful” for the reasons given in the article and instead say “HTTP” based APIs.
I prefer to call them "REST-like" APIs.
Same. And REST for short ;)
Yeah but why cause needless confusion? The colloquial definition of "RESTful" is better understood as just something you defined using the OpenAPI spec. All other variants of "HTTP API" are likely hot garbage nobody wants anyway.
The article is seemingly accurate, but isn't particularly useful as it is written in FAR too technical of a style.
If anyone wants to learn more about all of this, https://htmx.org/essays and their free https://hypermedia.systems book are wonderful.
You could also check out https://data-star.dev for an even better approach to this.
I politely pointed out that this previous submission "Stop using REST for state synchronization" (https://news.ycombinator.com/item?id=43997286) was not in fact ReST at all, but just an HTTP API and I was down voted for it. You would think that programming is a safe place to be pedantic.
It's all HTTP API unless you're actually doing ReST in which case you're probably doing it wrong.
ReST and HATEOAS are great ideas until you actually stop and think about it, then you'll find that they only work as ideas in some idealized world that real HTTP clients do not exist in.
This doesn’t provide any good arguments for why Roy Fielding’s conception should be taken as the gospel of how things should be done. At best, it points out that what we call REST now isn’t what Roy Fielding wanted.
Furthermore, it doesn’t explain how Roy Fielding’s conception would make sense for non-interactive clients. The fact that it doesn’t make sense is a large part of why virtually nobody is following it.
Why doesn't Fielding's conception make sense for non-interactive clients?
Take this quote: “A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations.”
If the client application only understands media types and isn’t supposed to know anything about the interrelationships of the data or possible actions on it, and there is no user that could select from the choices provided by the server, then it’s not clear how the client can do anything purposeful.
Surely, an automated client, or rather its developer, needs a model (a schema) of what is possible to do with the API. Roy Fielding doesn’t address that aspect at all. At best, his REST API would provide a way for the client to map its model to the actual server calls to make, based on configuration information provided by the server as “hypertext”. But the point of such an indirection is unclear, because the configuration information itself would have to follow a schema known and understood by the client, so again wouldn’t be RESTful in Roy Fielding’s sense.
People are trying to fill in the blanks of what Roy Fielding might have meant, but in the end it just doesn’t make a lot of sense for what REST APIs are used in practice.
As I replied to the sibling comment, you're misunderstanding rest and hypermedia. The "schema" is html and the browser is the automated client that is exceptionally good at rendering whatever html the backend has decided to send.
Browsers are interactive clients, the opposite of automated clients. What you are saying supports the conclusion that Roy Fielding’s conception is unsuitable for non-interactive clients. However, the vast majority of real-world REST APIs are targeting automation, hence it doesn’t make sense for them to be “RESTful”.
Sorry, perhaps we're talking past each other.
Fielding was absolutely not saying that his REST was the One True approach. But it DOES mean something
The issue at hand here is that he coined REST and the whole world is using that term for something completely unrelated (eg an http json api).
You could start writing in binary here if you thought that that would be a more appropriate way to communicate, but it wouldn't be English (or any humanly recognizable language) no matter how hard you try to say it is.
If you want to discuss whether hypermedia/rest/hateaos is a better approach for web apps than http json APIs, I'd encourage you to read htmx.org/essays and engage with that community who find it to be an enormous liberation.
It may mean something, but Roy Fielding went out of his way, over many years, to not talk about the actual use cases he had in mind. It would have been easy for him to clarify that he was only talking about interactive browser applications. But he didn’t. And the people who came up with HATEOAS didn’t think he was. Nor did any of the blog articles that are espousing the alleged virtues of RESTfulness. So it’s not surprising that the term “REST” was appropriated for something else. In any case, it’s much too late to change that; it’s water under the bridge.
I’m only mildly interested in discussing hypothetical hypermedia browsers, for which Roy Fielding’s conception might be well and good (but also fairly incomplete, IMO). What developers care about is how to design HTTP-based APIs for programmatic use.
How are web browsers hypothetical? We're using one with rest/hateoas/hypermedia right now...
You don't seem to have even the slightest idea of what you're talking about here. Again, I suggest checking out the htmx essays and their hypermedia.systems book
It should be obvious that the thing doing the interpretation and navigation is a human, not an automated system.
I don't have any clue why people keep bringing up automated systems in this discussion. It's not relevant. Hypermedia - and REST - is for humans.
If you need an http json api for bots to consume, go for it. They are not mutually exclusive.
In a non-interactive case, what is supposed to be reading a response and deciding which links to do something with, or what to do with them?
Let's say you've got a non-interactive program to get daily market close prices. A response returns a link labelled "foobarxyz", which is completely different to what the API returned yesterday and the day before.
How is your program supposed to magically know what to do? (without your input/interaction)
Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives. If the (backend) "program" development team decides that a foobarxyz link should be returned, then that's what is correct.
I suspect that your misunderstanding is because you're still looking at REST as a crud api, rather than what it actually is. That was the point of this article, though it was too technical.
https://htmx.org/essays is a good introduction to these things
> Why doesn't fielding's conception make sense for non-interactive clients?
> Why does "your program" need to know anything? The whole point of hypermedia is that there isn't any "program" other than the web browser that agnostically renders whatever html it receives.
Seems like you're contradicting yourself here.
If a non-interactive client isn't supposed to know anything and just "render" whatever it gets back, how can it perform useful work on the result?
If it can't, in which sense does REST still make sense for non-interactive clients?
Good.
Strict HATEOAS is bad for an API as it leads to massively bloated payloads. We _should_ encode information in the API documentation or a meta endpoint so that we don't have to send tons of extra information with every request.
> REST isn’t about exposing your internal object model over HTTP — it’s about building distributed systems that behave like the web.
I think I finally understand what Fielding is getting at. His REST principles boil down to allowing dynamic discovery of verbs for entities that are typed only by their media types. There's a level of indirection to allow for dynamic discovery. And there's a level of abstraction in saying entities are generic media objects. These two conceptual leaps allow the REST API to be used in a more dynamic, generic way - with benefits at the API level that the other levels of the web stack have ("client decoupling, evolvability, dynamic interaction").
In what context would a user discover parts of a REST API dynamically?
In the simple (albeit niche) case, a UI could populate a list of buttons based on the URIs/verbs that the REST API returns. So the UI would be totally dynamic based on the backend - and so, work pretty generically across REST APIs.
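A minimal sketch of what that could look like, assuming a HAL-style _links map (the field names, media type and URLs here are illustrative, not from any particular API):

    // Hypothetical HAL-style response shape; "_links" and "title" are illustrative.
    type Links = Record<string, { href: string; title?: string }>;

    async function renderActions(resourceUrl: string): Promise<HTMLButtonElement[]> {
      const res = await fetch(resourceUrl, { headers: { Accept: "application/hal+json" } });
      const body: { _links?: Links } = await res.json();
      // One button per link the server chose to expose; the UI hard-codes nothing.
      return Object.entries(body._links ?? {}).map(([rel, link]) => {
        const button = document.createElement("button");
        button.textContent = link.title ?? rel;
        button.onclick = () => { window.location.href = link.href; };
        return button;
      });
    }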
But for a client, UI or otherwise, to make use of a dynamic set of URIs/verbs would require it to either look for a specific keyword (hard coding the intents it can satisfy) or be able to semantically understand the API (which is hard, requires a human).
Oddly, all this stuff is full circle with the AI stuff. The MCP protocol is designed to give AIs text-based descriptions of APIs, so they can reason about how to use them.
The simplest case, and the most common, is that of a browser rendering the HTML response from a website request. The HTML contains the URL links to other APIs that the user can click on. Think of navigating any website.
Htmx essays have already been mentioned, so here are my thoughts on the matter. I feel like to have a productive discussion of REST and HATEOAS, we must first agree on the basics. Repeating my own comment from a couple of weeks ago: H stands for hypermedia, and hypermedia is a type of media that uses a common format for representing some server-driven state and embedding hypermedia controls, which are presented by a back-end-agnostic hypermedia client to a user for discoverability and interaction.
As such, JSON-driven APIs can't be REST, since there is no common format for representing hypermedia controls, which means that there's no way to implement a hypermedia client which can present those controls to the user and facilitate interactions. Is there such an implementation? Yes, HTML is the hypermedia, <input>s and <button>s are the controls and browsers are the clients. REST and HATEOAS are designed for humans, and trying to somehow combine them with machine-to-machine interaction results in awkward implementations, blurry definitions and overcomplication.
The Richardson maturity model is a clear indication of those problems. I see it as an admission of "well, there isn't much practicality in doing proper REST for machine-to-machine comms, but that's fine, you can do only some parts of it and it still counts". I'm not saying we shouldn't use its ideas, resource-based URLs are nice, using features of HTTP is reasonable, but under the name REST it leads to constant arguments between the "dissertation" crowd and "the industry has moved on" crowd. The worst/best part is both those crowds are totally right, and this argument will continue for as long as we use HTTP.
I felt the need to clarify this point:
> As such, JSON driven APIs can't be REST
I made it sound like JSON APIs can't be REST in principle, which is of course not true. If someone were to create a hypermedia control specification for JSON and implement a hypermedia client for it, it would of course match the definition. But since we don't have such a specification and compliant client at this time, we can't do REST as it is defined.
Wasn't the entire point of calling an API RESTful, that it's explicitly not REST, but only kind of REST-like.
Also, who determined these rules are the definition of RESTful?
RESTful means that it respects REST constraints. One is an adjective and the other a noun (like "state" and "stateless").
> Also, who determined these rules are the definition of RESTful?
Roy Fielding.
I have always said that HATEOAS starting with “HATE” is highly descriptive of my attitude toward it.
It is a fundamentally flawed concept that does not work in the real world. Full stop.
> If you are building a public API for external developers you don’t control, invest in HATEOAS. If you are building a backend for a single frontend controlled by your own team, a simpler RPC-style API may be the more practical choice.
My conclusion is exactly the opposite. In-house developers can be expected (read: cajoled) to do things the "right" way, like follow links at runtime. You can run tests against your client and server. Internally, flexible REST makes independent evolution of the front end and back end easy.
Externally, you must cater to somebody who hard-coded a URL into their curl command that runs on cron and whose code can't tolerate the slightest deviation from exactly what existed when the script was written. In that case, an RPC-like call is great and easy to document. Increment from `/v1/` to `/v2/`, write a BC layer between them and move on.
At my FAANG company, the central framework team has taken calling what people do in reality HTTP bindings. https://smithy.io/2.0/spec/http-bindings.html
I think we should focus less on API schemas and more on just copying how browsers work.
Some examples:
It should be far more common for http clients to have well supported and heavily used Cookie jar implementations.
We should lean on Accept headers much more, especially with multiple mime-types and/or wildcards.
Http clients should have caching plugins to automatically respect caching headers.
There are many more examples. I've seen so much of HTTP reimplemented on top of itself over the years, often with poor results. Let's stop doing that. And when all our clients are doing those parts right, I suspect our APIs will get cleaner too.
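For example, leaning on Accept in a client might look something like this (a sketch, not any particular library; the URL is made up):

    async function fetchReport(url: string): Promise<unknown> {
      // Ask for what we can actually handle, with wildcard fallbacks, instead of assuming JSON.
      const res = await fetch(url, {
        headers: { Accept: "application/json, text/html;q=0.8, */*;q=0.1" },
      });
      const contentType = res.headers.get("Content-Type") ?? "";
      return contentType.includes("application/json") ? res.json() : res.text();
    }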
My biggest takeaway from Roy Fielding's dissertation wasn't how to construct a RESTful architecture or what is the one true REST, but how to understand any computer architecture -- particularly their constraints -- in order to design and implement appropriate systems. I can easily identify anti-patterns (even in implementations) because they violate the constraints which in turns, takes away from the properties of the architecture. This also quickly allows me to evaluate and understand libraries, runtimes, topologies, and so forth.
I used to get caught up in what is REST and what is not, and that misses the point. It's similar to how Christopher Alexander's ideas pattern languages gets used in a way now that misses the point. Alexander was cited in introductory chapter of Fielding's dissertation. These are all very big ideas with broad applicability and great depth.
When combined with Promise Theory, this gives a dynamic view of systems.
It is not sufficient to crawl the API. The client also needs to know how to display the forms, which collect the data for the links presented by the API. If you want to crawl the API you also have to crawl the whole client GUI.
Didn't we go through all this years ago and determined that we should invent a new term - REST-like and so were able to put this all to bed?
You know what type of API I like best?
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by writing it directly into the url bar), the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark. This is also how HN does it:
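Something along these lines (the parameter names are approximate, and the real link also carries an auth token tied to the session):

    GET /vote?id=123456&how=up&auth=<token>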
This is great for APIs that only have a few actions that can be taken on a given resource.
REST APIs then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best APIs I've seen mix and match both patterns: RESTful API endpoints for data, "function call" endpoints for often-used actions like voting, bulk actions and other things that the client needs to be able to do, but where you want the API to be in control of how they are applied.
> REST APIs then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
I don't disagree, but I've found (delivering LoB applications) that they are not homogenous: The way REST is implemented, right now, makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e. greenfield application) you can do better (ITO reliability, product feature velocity, etc) by not using a tree exchange form (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Can you give an example of an endpoint where you would prefer a "RESTful API endpoint"?
If you type it into the URL bar, it will use GET.
Surely you're not advocating mutating data with GET?
What's your problem with it?
Bots, browsers that preload URLs, caching (both browser and backend and everything in between), the whole infrastructure of the Web that assumes GET never mutates and is always safe to repeat or serve from cache.
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
So why is there no problem with vote/flag/vouch on HN being GET endpoints?
Then that does not conform to the HTTP spec. GET endpoints must be safe, idempotent, cacheable. Otherwise you open up a site to cases where web crawlers/scrapers may wreak havoc.
There is, it's bad. Luckily votes aren't very crucial.
Votes are crucial. HN goes to great lengths to prevent votes that do not stem from real user intent.
See this post for example:
https://news.ycombinator.com/item?id=22761897
Quotes:
"Voting ring detection has been one of HN's priorities for over 12 years"
"I've personally spent hundreds of hours working on this"
https://news.ycombinator.com/item?id=3742902
Indeed, user-embedded pictures can fire GET requests but cannot make POST requests. But this is not a problem if you don't allow users to embed pictures, or if you authenticate the GET request somehow. Anyway, GET requests are just fine.
The same would have worked with a POST endpoint.
The story URL would only have to point to a web page that creates the upvote POST request via JS.
That runs into CORS protections though.
CORS is a lot less strict around GET as it is supposed to be safe.
Nope, it would not have been prevented by CORS.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, consider that the JS could also have, for example, created a form with the vote page as the target and clicked the submit button. All completely unrelated to CORS.
> CORS prevents reading from a resource
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
Even mdn calls it "violating the CORS security rules" instead of SOP rules: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/COR...
Anyway, this is lame low effort trolling for some unknown purpose. Stop it.
That any bot crawling your website is going to click on your links and inadvertently mutate data.
Reading your original comment I was thinking "Sure, as long as you have a good reason of doing it this way anything goes" but I realized that you prefer to do it this way because you don't know any better.
If you rely on the HTTP method to authenticate users to mutate data, you are completely lost. Bots and humans can send any method they like. It's just a string in the request.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
You say that, but there are lots of security features like SameSite=Lax that are built on the assumption that GET requests are harmless.
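Concretely, a cookie set like this (the cookie name is made up):

    Set-Cookie: session=abc123; HttpOnly; Secure; SameSite=Lax

will still be sent on cross-site top-level GET navigations but withheld from cross-site POSTs, so that protection evaporates the moment a GET is allowed to mutate state.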
> If you rely on the HTTP method to authenticate users to mutate data, you are completely lost
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method—that is a convention and, yes, a malicious bot would try to send POST and DELETE requests. For the latter, this is why you authenticate users but this is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it except if it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
There was a post about Garage opener I read here sometime back. https://news.ycombinator.com/item?id=16964907
That’s pretty bad design. Only GETs should include a querystring. Links should only read, not create, update or delete.
> Only GETs should include a querystring.
Why?
Because HTTP is a lot more sophisticated than anyone cares to acknowledge. The entire premise of "REST", as it is academically defined, is an oversimplification of how any non-trivial API would actually work. The only good part is the notion of "state transfer".
Not a REST API, but I've found it particularly useful to include query parameters in a POST endpoint that implements a generic webhook ingester.
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
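For example (path and parameter names invented for illustration):

    POST /webhooks/ingest?source=billing&event=invoice.paid

The handler stays a single route; the query string carries whatever metadata we chose when configuring the webhook on the remote side.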
I used to do that but I've been fully converted to the REST and CRUD gang. Once you establish the initial routes and objects it's really easy to mount everything else on them and move fast with changes. Also, using tools like httpie it's super easy to test anything right in your terminal.
You're going to run into all kinds of security issues if you let GET endpoints have side effects.
I don't understand why no one, or barely anyone, is using GraphQL. It's the evolution of all that REST crap.
You ask for exactly the fields and relations you need, and so on, instead of having to manually glue together responses and relations. It's literally SQL over the wire without needing to write SQL.
The payload is JSON, the response is JSON. EZ.
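For example, a single request can pull an item together with its related records (the schema and field names here are made up):

    const query = `
      query ItemPage {
        item(id: "42") {
          name
          price
          seller { name rating }
          reviews(first: 3) { body stars }
        }
      }`;

    const res = await fetch("/graphql", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    });
    const { data } = await res.json();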
I've not done much with GraphQL myself, but a lot of my colleagues have and have all sworn off it except in very specific circumstances.
My impression is that it's far too flexible. Connecting it up to a database means you're essentially running arbitrary SQL queries, which means whoever is writing the GraphQL queries also needs to know how those queries will get translated to SQL, and therefore what the database structure/performance characteristics are going to be. That's a pain if you're using GraphQL internally and now your queries are spread out, potentially across multiple codebases. But if you expose the GraphQL API publicly, now you don't even know what the queries are that people are going to want to use.
Mostly these days we use RPC-style APIs for internal APIs where we can control everything and be really precise about what gets called when and where. And then more "traditional" REST/resource-oriented endpoints for public APIs where we might have more general queries.
REST almost never is worth it. It’s a nice idea, but in practice things often are more complicated.
API quality is often not relevant to the business after it passes the “mostly works” bar.
I’ll just use plain http or RPC when it’s not important and spend more time on things that make a difference.
Most databases aren't relational, either, in the sense that Codd defined relational. They are, instead, useful.
It's mostly just semantic drift. "REST" is less of a mouthful than "JSON over HTTP". Nobody ever realised the potential of discoverability.
The thing to internalize about "true" REST is that HN (and the rest of the web) is really a RESTful web service. You visit the homepage, a hypermedia format is delivered to a generic client (your browser), and its resources (pages, sections, profiles, etc) can all be navigated to by following links.
Links update when you log in or out, indicating the state of your session. Vote up/down links appear or disappear based on one's profile. This is HATEOAS.
Link relations can be used to alter how the client (browser) interprets the link—a rel="stylesheet" causes very different behavior from rel="canonical".
JavaScript even provides "code on-demand" as it's called in Fielding's paper.
From that perspective, REST is incredible. REST is extremely flexible, scalable, evolvable, etc. It is the pattern that powers the web.
Now, it's an entirely different story when it come to what many people call REST APIs, which are often nothing like HN. They cannot be consumed by a generic client. They are not interlinked. They don't ship code on-demand.
Is "REST" to blame? No. Few people have time or reason to build a client as powerful as the browser to consume their SaaS product's API.
But even building a truly generic client isn't the hardest thing about building RESTful APIs—the hardest thing is that the web depends entirely on having a human-in-the-loop and your standard API integration's purpose is to eliminate having a human in the loop.
For example, a human reads the link text saying "Log in" or "Reset password" and interprets that text to understand the state of the system (they do not have an authenticated session). And a human can reinterpret a redesigned webpage with links in a new location, but trivial clients can't reinterpret a refactored JSON object (or XML for that matter).
The folly is in thinking that there's some design pattern out there that's better than REST without understanding that the actual problem to be solved by that elusive, perfect paradigm is how you'll be able to refactor your API when your API's clients will likely be bodged-together JS programs whose authors dug through JSON for the URL they needed and then hard-coded it in a curl command instead of conscientiously and meticulously reading documentation and semantically looking up the URL at runtime, follows redirects, and handles failures gracefully.
I see a lot of people who read Fielding's thesis and found it interesting.
I did not find it interesting. I found it excessively theoretical and proscriptive. It led to a lot of people arguing pedantically over things that just weren't important.
I just want to exchange JSON-structured messages over HTTP, using the least amount of HTTP required to implement request and response. I'm also OK with protocol buffers over grpc, or really any decent serialization technology over any well-implemented transport. Sometimes it's CRUD, sometimes it's inference, sometimes it's direct actions on a server.
Hmm. I should write a thesis. JSMOHTTP (pronounced "jizmo-huttup")
I completely agree with you. The author's approach seems complex and unnecessary. My basic expectation when I see something labeled as a REST API is:
1. I can submit a request via HTTP
2. data is returned as JSON in the response
3. only the minimal amount of HTTP/pagination machinery is required
I always urge software architects (are they still around?) and senior engineers in charge of APIs to think very carefully about the consumers of the API.
If the only consumer is your own UI, you should use a much more integrated RPC style that helps you be fast. Forget about OpenAPI etc: Use a tool or library that makes it dead simple to provide data the UI needs.
If you have a consumer outside your organization: a RESTish API it is.
If your consumer is supposed to be generic and can "discover" your API, RESTful is the way to go.
But no one writes generic ones anymore. We already have the ultimate one: the browser.
HATEOAS might make a comeback, as it could be useful to expose an API to AI agents that would browse a service.
On the other hand, agents could just as well understand an OpenAPI document, as the description of each path/schema can be much more verbose than HATEOAS. There is a reason why OpenAPI-style APIs are favored: less verbosity in the payload. If the cost of agents is based on their consumption/production of tokens, verbosity matters.
This post follows the general, highly academic/dogmatic, tone that I’ve seen when certain folks talk about REST. Most of the article talks about what _not_ to do, and has very little details on how to actually do it.
The idea of having client/server decoupled via a REST api that is itself discoverable, and that allows independent deployment, seems like a great advantage.
However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way. Say I have a TODO api, how do I make it so that it uses HATEOAS (also who’s coming up with these acronyms…smh)?
Overall the article comes across more as academic pontification on “what not to do” instead of actionable advice.
Agreed. I wish there were some examples to better understand what the author means. Like, in a web app, do I have any prior knowledge about the "_links" actions? Do I know that the server is going to return the actions "self" and "activate"? Is the idea to hide the routes from the user until the API call, but they should know that the API could return actions like "self", "activate" or "deactivate"? How do you communicate that an action requires a specific body? For example, the activate call is done via POST and expects a JSON body with a date inside. How do you tell that to the user?
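For concreteness, is the idea that the response itself would carry something like this (a hedged HAL-FORMS-style sketch; the field names are just illustrative)?

    {
      "status": "inactive",
      "_links": {
        "self":     { "href": "/users/123" },
        "activate": { "href": "/users/123/activate" }
      },
      "_templates": {
        "activate": {
          "method": "POST",
          "contentType": "application/json",
          "properties": [
            { "name": "activationDate", "type": "datetime", "required": true }
          ]
        }
      }
    }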
> However, the article lacks even the simplest example of an api done the “wrong” vs the “right” way.
Unless the design and requirements are unusually complex or extreme, all styles of API and front end work well enough. Any example would have to be lengthy, to provide context for the advantages of "true" ReST architecture, and contrived.
Some of this is sensible. I especially like the idea of an interactive starting point which gives you useful links and info, but I can see how that would be difficult with more complex calls — showing examples and providing rich documentation would be difficult. Otherwise, just follow the recommendations for REST verbs (so what if they mostly map to CRUD?), and document your API well. Tools like Swagger really make this quite easy.
"Reductio Ad Roy Feldium" is the internet addage[1] that as in a hacker news discussion about a rest api grows, the probabilty someone cites roy felding's dissertation approaches 1. I'm glad this post cut right to the chase!
[1] ok it's not an internet adage. I invented it and joke with friends about it
Purist ideas rarely survive contact with reality, or something.
Likewise if the founders of the web took one look at a full on React based site they would shriek in horror at what's now the defacto standard.
I think that all of the unemployed CS grads are rediscovering the "best practices" of the last 40 years in lieu of working. Well, just remember, every HATEOAS-conforming REST API, every chaos-monkey-enabled Microservice-Oriented Architecture, every app that someone spent tons of time hacking down the cyclomatic complexity score, every meticulously UML-diagrammed four-tier architecture, has had their main engineers laid off and replaced by a crack team of junior engineers who adulterated it down to spaghetti code. In the post-AI world, features talk, architecture walks.
In my experience REST is just a code word for a distributed glob of function calls which communicate via JSON. It's a development and maintenance nightmare.
I am wondering if anyone can resolve this misunderstanding of REST for me…
If the backend provides a _links map which contains “orders” for example in the list - doesn’t the front end need to still understand what that key represents? Is there another piece I am missing that would actually decouple the front end from the backend?
I tried to follow the approach with hypermedia and discoverable resources/actions in my hobby projects. But I "failed" at the point where this would mean additional HTTP calls from a client to "discover" a resource and its actions. Given the added latency of those HTTP calls, relatively speaking, this was not convincing for me.
Worse, most if not all "REST" apps have security vulnerabilities because of how browser front-ends handle authentication.
To handle authentication "properly" you have to use cookies or sessions, which inherently make apps not RESTful.
ElasticSearch and OpenSearch are certainly egregiously guilty of this. Their API is an absolute nightmare to work with if you don't have a supported native client. Why such a popular project doesn't have an easy-to-use OpenAPI spec document in this day and age is beyond me.
See https://stackoverflow.com/a/29520505/771665
The term has caused so much bikeshedding and unnecessary confusion.
If you want to produce better APIs, try consuming them. A lot of places have this clean split between backend and frontend teams. They barely talk to each other sometimes. And a pattern I've seen over and over again is that some product manager decides feature X is needed. The backend team goes to work and delivers some API for feature X and then the frontend team has to consume the API. These APIs aren't necessarily very good if the backend people don't understand how the frontend uses them.
The symptom is usually if a seemingly simple API change on the backend leads to a lot of unexpected client side complexity to consume the API. That's because the API change breaks with some frontend expectation/assumption that frontend developers then need to work around. A simple example: including a userId with a response. To a frontend developer, the userId is not useful. They'll need a user name, a profile photo, etc. Now you get into all sorts of possible "why don't you just .." type solutions. I've done them all. They all have issues and it leads to a lot of complexity on either the server or the client.
You can bloat your API and calculate all this server side. Now all your API calls that include a userId gain some extra fields. Which means extra lookups and joins. So they get a bit slower as well. But the frontend can pretend that the server always tells it everything it needs. The other solution is to look things up from the frontend. This adds overhead. But if the frontend is clever about it, a lot of that information is very cachable. And of course graphql emerged to give frontend developers the ability to just ask for what they need from some microservices.
All these approaches have pros and cons. Most of the complexity is about what comes back, not about how it comes back or how it is parsed. But it helps if the backend developers are at least aware of what is needed on the frontend. A good way is to just do some front end development for a while. It will make you a better backend developer. Or do both. And by that I don't mean do javascript everywhere and style yourself as a full stack developer because you whack all nails with the same hammer. I mean doing things properly and experiencing the mismatches and friction for yourself. And then learn to do it properly.
The above example with the userIds is real. I've had to deal with that on multiple projects. And I've tried all of the approaches. My most recent insight here is that user information changes infrequently and should be looked up separately from other information asynchronously and then cached client side. This keeps APIs simple and forces frontend developers to not treat the server as a magical oracle and instead do sane things client side to minimize API calls and deal with application state. Good state management is key. If you don't have that, dealing with stateless network protocols (like REST) is painful. But state has to live somewhere and having it client side makes you less dependent on how the server side state management works. Which means it's easier to fix things when that needs to change.
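A rough sketch of that client-side pattern (the endpoint and type names are illustrative):

    type UserSummary = { id: string; name: string; avatarUrl: string };
    const userCache = new Map<string, Promise<UserSummary>>();

    // Resolve userIds from any API response through one cached lookup,
    // instead of having every endpoint join and embed user details.
    function getUser(userId: string): Promise<UserSummary> {
      let user = userCache.get(userId);
      if (!user) {
        user = fetch(`/users/${userId}`).then(r => r.json());
        userCache.set(userId, user);
      }
      return user;
    }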
Basically JSON-RPC really, and a better use of HTTP verbs, most of the time.
HATEOAS + Document Type Description which includes (ideally internationalized) natural language description in addition to machine readable is what MCP should have been.
Nooooo not this discourse again.
And not everything in reality maps nicely to hypermedia conventions. The problem with REST is trying to shoehorn a lot of problems in a set of abstractions that were initially created for documents.
Turn back lest you be dragged into the RESTy bikeshed
At some point, we built REST clients so generic they could handle nearly any use case. Honestly, building truly RESTful APIs has been easy for ages, just render HTML on the server and send it to the browser. That's 100% REST with no fuss.
The irony is, when people try to implement "pure REST" (as in Level 3 of the Richardson Maturity Model with HATEOAS), they often end up reinventing a worse version of a web browser. So it's no surprise that most developers stop at Level 2—using proper HTTP verbs and resource-based URIs. Full REST just isn't worth the complexity in most real-world applications.
How does hateoas work with parameters ?
I mean .. ok, you have the bookmark uri, aka the entrypoint
From there, you get links of stuff. The client still needs to "know" their identifiers, but anyway.
But the params of the routes .. and I am not only speaking of their type, I am also speaking of their meaning .. how would that work ?
I think it cannot, so the client code must "know" them, again via out-of-band mechanisms.
And at this point, the whole thing is useless and we just use OpenAPI.
Ironically it feels like GraphQL is more RESTful than most REST api's if we want to follow Fielding's paper.
Except for discoverability, nice URLs, and meaningful HTTP methods.
Did you just say "discoverability" is an issue with GraphQL with a straight face?
There are plenty of valid criticisms, but that is not one, in fact thats where it shines.
Discoverability of resources starting from a root URL is what I meant, which is probably moot, because GraphQL wants you to use just one. :D
> A REST API should not be dependent on any single communication protocol, though its successful mapping to a given protocol may be dependent on the availability of metadata, choice of methods, etc. In general, any protocol element that uses a URI for identification must allow any URI scheme to be used for the sake of that identification. [Failure here implies that identification is not separated from interaction.]
What the heck does this mean? Does it mean that my API isn’t REST if it can’t interpret “http://example.com/path/to/resource” in the same way it interprets “COM<example>::path.to.resource”? Is it saying my API should support HTTP, FTP, SMB, and ODBC all the same? What am I missing?
As far as I know, the only actual REST implementation as Fielding envisioned it, a system where you send the entire representational state of the application with each request, is the system Fielding coined the term REST to describe: the Web.
Has any other system done this, where you send the whole representation of the application with each state transition? Project Xanadu?
I do find it funny how Fielding basically said "hey, look at the web, isn't that a weird way to structure a program, let's talk about it" and everyone sort of suffered a collective mental brain fart and replied "oh, you mean HTTP, got it".
I just call them http APIs. Is this too far wrong? Actually a genuine question.
RESTful APIs are not RESTful because REST is meh. Our APIs include HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads).
It’s interesting that Stripe still even uses form-post on requests.
> Our APIs include HATEOAS links and I have never, not once, witnessed their actual use (but they do double the size of response payloads)
So your payloads look like this:
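(Roughly, with illustrative field names and a made-up base URL:)

    {
      "items": [ { "id": "a1" }, { "id": "a2" } ],
      "next-id": "a2",
      "next-href": "https://api.example.com/things?after=a2"
    }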
And rather than just using next-href, your clients append next-id to a hardcoded things base URL? That seems like way more work than doing it the REST way.
I love all the comments here that you can't build a proper UX/UI with a "perfect" REST API even though browsers do it all day, every day.
REST includes code-on-demand as part of the style, HTTP allows for that with the "Link" header and HTML via <script>.
Who cares, honestly? I never understood this debate; nobody has ever produced a perfect RESTful API anyway
[dead]
I just spent a good portion of the day trying to figure out how GCP's allegedly "RESTful" (it's not) API names resources. If only there was a universal identifier for resources…
But no, a service account in GCP has no less than ~4 identifiers. And the API endpoint I wanted to call needed to know which resource, so the question then is "which of the 4 identifiers do I feed it?" The right answer? None of them.
The "right" answer is that you need to manually build a string, a concatenate a bunch of static pieces with the project ID and the object's ID to form a more IDer ID. So now we need the project ID … and projects have two of those. So the right answer is that exactly 1 of the 8 different permutations works (if we don't count the constant string literals involved in the string building).
Just give me a URI, and then let me pass that URI, FFS.
We collectively glazed over Roy Fielding's dissertation, didn't really see the point, liked the sound of the word "REST" and used it to describe whatever we wanted to do with http / json. Sorry, Roy, but you can keep HATEOAS - no one is going to take that from you.
https://htmx.org/img/memes/dbtohtml.png
LMAO all companies asking for extensive REST API design/implementation experience in their job requirements, along with the latest hot frontend frameworks.
I should probably fire back by asking if they know what they're asking for, because I'm pretty sure they don't.
I spent years fussing about getting all of my APIs to fit the definition of REST and to do HATEOAS properly. I spent way too much time trying to conform everything as an action on a resource. Now, don't get me wrong. It is quite helpful to try to model things as stateless resources with a limited set of actions on them and to think about idempotency for specific actions in ways I don't think we did properly in the SOAP days (at least I didn't). And in many cases it led to less brittle interfaces which were easier to reason about.
I still like REST and try to use it as much as I can when developing interfaces, but I am not beholden to it. There are many cases which are not resources or are not stateless, and sure, you can find some obtuse way to make them be resources, but that at times either leads to bad abstractions that don't convey the vocabulary of the underlying system (and thus over time creates a rift in context between the interface and the underlying logic), or we expose underlying implementation details just because they are easier to model as resources.
"REST" is our industry's most successful collective delusion: everyone knows it's wrong, everyone knows we're using it wrong, and somehow that works better than being right.
Eh. I won't write "pure" REST, because it's difficult to use, and I don't know if I have ever seen a tool that uses it as such. I know why it was designed that way, but I have never needed that.
I tend to use REST-like methods to select mode (POST, GET, DELETE, PATCH, etc.), but the data is usually a simple set of URL arguments (or associated data). I don't really get too bent out of shape about ensuring the data is an XML/JSON/Whatever match for the model structure. I'll often use it coming out, but not going in.
Fine. They are not actually RESTful. But does it actually matter?
> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.
Eh, "a small change in a server’s URI structure" breaks links, so already you're in trouble.
But sure, embedding [local-parts of] URIs in the contents (or headers) exchanged is indeed very useful.
This seems to mostly boil down to including links rather than just IDs and having the client "just know" how to use those IDs.
Django Rest Framework seems to do this by default. There seems very little reason not to include links over hardcoding URLs in clients. Imagine just being able to restructure your backend and clients just follow along. No complicated migrations etc. I suspect many people just live with crappy backends because it's too difficult to coordinate the rollout of a v2 API.
However, this doesn't cover everything. There's still a ton of "out of band" information shared between client and server. Maybe there's a way to embed Swagger-style docs directly into an API and truly decouple server and client, but it would seem to take a lot more than just using links over IDs.
Still I think there's nothing to lose by using links over IDs. Just do it on your next API (or use something like DRF that does it for you).
I built a company that actually did implement HATEOAS in our API. It was a nightmare. So much processing time was spent on every request setting up all the URLs and actions that could be taken. And no one used it for anything anyway. Our client libraries used it, but we had full control over them anyway and, if anything, it made the libraries more complex.
While I agree it's an interesting idea in theory, it's unnecessary in the real world and has a lot of downsides.
Unless you really read and followed the paper, just call it a web api and tell your sales people to do the same. Calling it REST makes you sound like a manager that hasn't done any actual dev in 15 years.
Hot take: HATEOAS only works when humans are navigating.
Ah yes - nobody is doing REST correctly. My favorite form of bikeshedding.
Indeed, and I find it funny that the debate even exists.
I find it pretty shocking that this was written in 2025 without a mention of the fact that the only clients that are evolvable enough to interface with a REST API can be categorized into these three types:
1. Browsers and "API Browsers" (think something like Swagger)
2. Human and Artificial Intelligence (basically LLMs)
3. Clients downloaded from the server
You'd think that they'd point out these massive caveats. After all, the evolvable client that can handle any API, which is the thing that Roy Fielding has been dreaming about, has finally been invented.
REST and HATEOAS were intentionally designed against the common use case of a static, non-evolving client such as an Android app that isn't a browser.
Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
If you wanted to build e.g. the matrix chat protocol on top of REST, then Roy Fielding would tell you to get lost.
If what I'm saying doesn't make sense to you, then your understanding of REST is insufficient, but let me tell you that understanding REST is a meaningless endeavor, because all you'll gain from that understanding is that you don't need it.
In REST clients are not allowed to have any out of band information about the structure or schema of the API.
You are not allowed to send GET, POST, PUT, DELETE requests to client constructed URLs.
Now that might sound reasonable. After all HATEOAS gives you all the URLs so you don't need to construct them.
Except here is the kicker. This isn't some URL specific thing. It also applies to the attributes and links in the response. You're not allowed to assume that the name "John Doe" is stored under the attribute "name" or that the activate link is stored in "activate". Your client needs to handle any theoretical API that could come from the server. "name" could be "fullName" or "firstNameAndLastName" or "firstAndLastName" or "displayName".
Now you might argue, hey but I'm allowed to parse JSON into a hierarchical object layout [0] and JPEGs into a two dimensional pixel array to be displayed onto a screen, surely it's just a matter of setting a content type or media type? Then I'll be allowed to write code specific to my resource! Except, REST doesn't define or propose any mechanism for application specific media types. You must register your media type globally for all humanity at IANA or go bust.
This might come across as a rant, but it is meant to be informative so I'll tell you what REST and HATEOAS are good for: Building micro browsers relying on human intelligence to act as the magical evolvable client. The way you're supposed to use REST and HATEOAS is by using e.g. the HAL-FORMS media type to give a logical representation of your form. Your evolvable client then translates the HAL-FORM into a html form or an android form or a form inside your MMO which happens to have a registration form built into the game itself, rather than say the launcher.
Needless to say, this is completely useless for machine to machine communication, which is where the phrase "REST API" is most commonly (ab)used.
Now for one final comment on this article in particular:
>Why aren’t most APIs truly RESTful?
>The widespread adoption of a simpler, RPC-like style over HTTP can probably be attributed to practical trade-offs in tooling and developer experience: The ecosystem around specifications like OpenAPI grew rapidly, offering immediate benefits that proved irresistible to development teams.
This is actually completely irrelevant and ignores the fact that REST as designed was never meant to be used in the vast situations where RPC over HTTP is used. The use cases for "RPC over HTTP" and REST have incredibly low overlap.
>These tools provided powerful features like automatic client/server code generation, interactive documentation, and request validation out-of-the-box. For a team under pressure to deliver, the clear, static contract provided by an OpenAPI definition was and still is probably often seen as “good enough,”
This feels like a complete reversal and shows that the author of this blog post himself doesn't understand the practical implications of his own blog post. The entire point of HATEOAS is that you cannot have automatic client code generation unless it happens during the runtime of the application. It's literally not allowed to generate code in REST, because it prevents your client from evolving at runtime.
>making the long-term architectural benefits of HATEOAS, like evolvability, seem abstract and less urgent.
Except as I said, unless you have a requirement to have something like a mini browser embedded in a smartphone app, desktop application or video game, what's the point of that evolvability?
>Furthermore, the initial cognitive overhead of building a truly hypermedia-driven client was perceived as a significant barrier.
Significant barrier is probably the understatement of the century. Building the "truly hypermedia-driven client" is equivalent to solving AGI in the machine to machine communication use case. The browser use-case only works because humans already possess general intelligence.
>It felt easier for a developer to read documentation and hardcode a URI template like /users/{id}/orders than to write a client that could dynamically parse a _links section and discover the “orders” URI at runtime.
Now the author is using snark to appeal to emotions by equating the simplest and most irrelevant problem with the hardest problem, in a hand-waving manner. "Those silly code monkeys, how dare they not build AGI! It's as simple as parsing _links and discovering the "orders" URI at runtime." Except, as I said, you're not allowed to assume that there is an "orders" link, since that is out-of-band information. Your client must be intelligent enough to handle more than just an API where the "/user/{id}/orders" link is stored under _links. The server is allowed to give the link of "/user/{id}/orders" a randomly generated name that changes with every request. It's also allowed to change the URL path to any randomly generated structure, as long as the server is able to keep track of it. The HATEOAS server is allowed to return a human-language description of each field and link, but the client is not allowed to assume that the orders are stored under any specific attribute. Hence you'd need an LLM to know which field is the "orders" field.
>In many common scenarios, such as a front-end single-page application being developed by the same team as the back-end, the client and server are already tightly coupled. In this context, the primary problem that HATEOAS solves—decoupling the client from the server’s URI structure—doesn’t present as an immediate pain point, making the simpler, documentation-driven approach the path of least resistance.
Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
[0] Whose contents may only be processed in a structure oblivious way
> the only clients that are evolvable enough to interface with a REST API can be categorized to these three types
You mention swagger. Swagger is an anti-REST tech. Defining a media type is the REST equivalent of writing a swagger API description.
If you can define an API in swagger, you can define one via a media type. It's just that the latter is generally not done because to do it requires a JSON schema (or similar) and people mostly don't use that or think of that as how one defines an API.
Boss: we need an API for XYZ
Employee: sure thing boss, I'll write it in swagger and implement by Friday!
> Instead you get this snarky blog post telling people that they are doing REST wrong, rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
We're using actual REST right now. That's what SSR HTML uses.
The rest of your (vastly snarkier) diatribe can be ignored.
And, yet, you then said the following, which seems to contradict the rest of what you said before it...
> Bangs head at desk over and over and over. A webapp that is using HTML and JS downloaded from the server is following the spirit of HATEOAS. The client evolves with the server. That's the entire point of REST and HATEOAS.
> rather than pointing out that REST is something almost nobody needs (self discoverable APIs intended for evolvable clients).
Well, besides that, I don't see how REST solves the problem it says it addresses. So your user object includes an activate field that describes the URI you hit to activate the user. When that URI changes, the client doesn't even notice, because it queries for a user and then visits whatever it finds in the activate field.
Then you change the term from "activate" to "unslumber". How does the client figure that out? How is this a different problem from changing the user activation URI?
REST(ful) API issues can all be resolved with one addition:
Adding actions to it!
POST api/registration / api/signup? All of this sucks. Posting or putting on api/user? Also doesn't feel right.
POST to api/user:signup
Boom! Full REST for entities + actions with custom requests and responses for actions!
How do I make a restful filter call? GET request params are not enough…
You POST to api/user:search, boom!
(I prefer to use the description RESTful API, instead of REST API - everyone fails to implement pure REST anyways, and it's unnecessarily limited.)
What is the problem with posting to /user/signup that posting to /user:signup solves?
The system won't be able to remember why the user was created unless the content of the post includes data saying it was a signup. That's important for any type of reporting like telemetry and billing.
So then one gets to bike-shed whether "signup" is in the request path, query parameters, or the body. Or that since the user resource doesn't exist yet perhaps one can't call a method on it, so it really should be /users:signup (on the users collection, like /users:add).
Provided one isn't opposed to adopting what was bike-shedded elsewhere, there is a fairly well specified way of doing something RESTful; here is a link to its custom methods page: https://google.aip.dev/136. Its approach here would be to add the signup information to the body of the POST to /users: https://google.aip.dev/133. More or less it describes a way to be RESTful with HTTP/1.1+JSON or gRPC.
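Under that convention, the custom-method form looks roughly like this (resource names purely illustrative):

    POST /v1/users:signup
    POST /v1/users/123:activate

The verb rides after the colon, so the resource part of the path stays purely nouns.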
> So then one gets to bike-shed if "signup" it is in the request path, query parameters, or the body.
But that's not a difference between /user/signup and /user:signup .
That's correct, the example you are giving represents bike-shedding among request path variations.
I assumed most readers of my comment would get the idea: /users/signup is ambiguous as to whether it is supposed to be another resource, while /users:signup is less so.
You might not want a dedicated „Signup“ entity in your model and db.
you would POST to /users
what's the confusion? you're creating a new user entity in the users collection.
...so? Don't have one.
r/noshitsherlock
for a lot of places, POST with JSON body is REST