55555 a year ago

Check out this simple, easy API replacement: just set up Docker, APISIX, Grafana, Prometheus, etc.

Does anyone else miss the old days of just editing your site's code and adding new columns to your MySQL database while people were surfing your website? That was fun.

Tip: Another way to turn your DB into an API is to just connect to it and then write SQL queries.
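
A minimal sketch of exactly that, in case anyone has forgotten how little it takes (Python stdlib only; SQLite stands in for the real database, and the "items" table is made up):

  # poor_mans_api.py - a deliberately tiny "DB as API" sketch (stdlib only).
  # Assumptions: SQLite stands in for whatever database you actually run,
  # and the "items" table is hypothetical.
  import json
  import sqlite3
  from http.server import BaseHTTPRequestHandler, HTTPServer

  db = sqlite3.connect(":memory:", check_same_thread=False)
  db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
  db.execute("INSERT INTO items (name) VALUES ('widget'), ('gadget')")

  class Api(BaseHTTPRequestHandler):
      def do_GET(self):
          # One hard-coded query per endpoint: the whole "API layer".
          if self.path == "/items":
              cur = db.execute("SELECT id, name FROM items")
              cols = [c[0] for c in cur.description]
              rows = [dict(zip(cols, r)) for r in cur.fetchall()]
              body = json.dumps(rows).encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/json")
              self.end_headers()
              self.wfile.write(body)
          else:
              self.send_error(404)

  if __name__ == "__main__":
      HTTPServer(("127.0.0.1", 8000), Api).serve_forever()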

  • EdwardDiego a year ago

    I miss aspects of those days, because I made a good bit of money from people who used MyISAM because "it's more performant" (or just because it was the default, yes, I'm dating myself here), and then corrupted their data because MyISAM didn't enforce constraints.

    Other than the easy money, nah, don't miss the old days that much.

    Having to use Reflector to decompile a core .NET library to figure out wtf it was doing because MSDN was inadequate, and the source was very much not open.

    PHP 4 code bases that heavily leaned on dangerous globals.

    J2EE.

    Ah, the "good" ol days.

    • dmitriid a year ago

      > who used MyISAM

      I remember reading "High-Performance MySQL" in mid-200s. It was a real eye-opener: all the things you needed from a DB were somewhat randomly available across the different storage mechanisms, but not in any consistent form.

      Something like: Oh, you need query optimisation? Use one. You need constraints? Use the other. You need fast indexes? Use the first one again. And so on.

      > Ah, the "good" ol days.

      You'll pry my nostalgia from my cold dead hands!

      • jagged-chisel a year ago

        > mid-200s

        I am astounded to learn MySQL predates so much technological advancement! >.<

        • bee_rider a year ago

          Pretty sure they mean mid-2000’s. So actually MySQL won’t be invented for another 480 years or so.

        • sdiacom a year ago

          The code base certainly looks the part.

    • Aeolun a year ago

      It was all more fun at the time because of those reasons. You didn't need anything more than 'barely adequate' to be at the top of your game.

  • tptacek a year ago

    Grafana and Prometheus aren't necessary, and Docker is at this point pretty close to a Linux platform feature. APISIX is a reasonable complaint --- the "poor man's API" is "using a big open source project that provides generic APIs", sure, that's not all that clever.

    • jjallen a year ago

      The goal was simplicity and cheapness, not being clever. Using a simple, existing tool is better than unnecessary complication, no?

      • morelisp a year ago

        I've never looked at an Apache project, certainly not APISIX, and thought "wow, so simple".

  • ilrwbwrkhv a year ago

    Absolutely. It is shameful how much stuff needs to be run before you can even get started.

    • tommek4077 a year ago

      You still can do it; the main feature is that it just works. You don't have to follow the hype. One problem could be that new devs never experience the simplicity of such a setup, but that is not your problem.

    • dasloop a year ago

      Like, for example, an OS, hardware, even electricity! But if you take those for granted, why not Docker or APISIX?

    • jspash a year ago

      Just be thankful javascript isn't involved!

  • ls15 a year ago

    Or use https://github.com/mevdschee/php-crud-api and skip writing queries (for basic CRUD).

    > Howto: Upload "api.php" to your webserver, configure it to connect to your database, have an instant full-featured REST API.
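
    If I remember its URL scheme correctly (the /records/{table} path is from memory and the host/table are made up, so treat this as an assumption), consuming it is about as minimal as it gets:

      # Fetch rows from a php-crud-api endpoint (stdlib only).
      # Assumption: the default /records/{table} route; host and table are hypothetical.
      import json
      import urllib.request

      url = "https://example.com/api.php/records/posts"
      with urllib.request.urlopen(url) as resp:
          print(json.loads(resp.read()))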

  • quickthrower2 a year ago

    hostgator.com still exists. install some wordpress, edit the PHP files, add some tables in MySQL, have fun!

  • CharlieDigital a year ago

    You might like Directus.

    It is nominally a "headless CMS", but it's so close to the SQL that I think of it more like an API in a box.

tasuki a year ago

> Creating a full-fledged API [...]. You need to think about [...] the REST principles

I can assure you you don't need to do that. And most people don't.

It's perfectly possible to create a useful API without adhering to the REST principles.

[Edit, having read the rest of the article]: Case in point, this very PostgREST tool promoted in the article appears very useful, while not thinking about the REST principles.

  • abraae a year ago

    If you're talking about HATEOAS (which many zealots equate one to one with REST) you may be pleased to know there is no reference to it whatsoever in this article.

    • kreetx a year ago

      Or is it the other way around? I seem to remember that Roy Fielding did actually mean HATEOAS when saying REST, that REST isn't only CRUD but a discoverable and self-describing API.

      EDIT: And the corollary of the above is that most people, when saying REST, actually mean HTTP with correct method use.

      If not, then is there a good source on what you mean by REST and HATEOAS?

      • morelisp a year ago

        A lot of people confuse HATEOAS, the idea, with HAL, some attempt to standardize it.

    • morelisp a year ago

      > zealots

      REST is defined by four interface constraints: identification of resources; manipulation of resources through representations; self-descriptive messages; and, hypermedia as the engine of application state.

    • tasuki a year ago

      Perhaps I'm a zealot: HATEOAS is one required part of REST as per Roy Fielding, who came up with both.

      What does "REST" mean to you?

      • abraae a year ago

        "When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean — neither more nor less."

        To me, like most people, REST means an HTTP-based API where we use GET, POST, PUT, DELETE, etc. to interact with resources. This is with the full understanding that, under a literal interpretation of the enlightened one's (Roy Fielding's) utterances, REST mandates HATEOAS, meaning that there are only a few true REST APIs to be seen in the wild.

        I also usually use the term RESTful; this seems to be a somewhat widely accepted middle ground.

        • asalahli a year ago

          But that makes it hard to refer to, search for, or have a discussion about real REST, the architectural style. Why use a term that already means a different thing when you could just coin or use a new term for what you want to describe? REST, as Roy Fielding defined it, is an extremely interesting concept to me and I'd love to explore it more, but good luck doing any research on it without getting flooded with countless junk about "REST" or "RESTful" APIs.

          Sort of like how YouTube is filled with videos about "damascus steel" which is nothing more than steel of random, irrelevant composition with a certain visual pattern akin to actual Damascus steel. But again, good luck finding a video about the real deal.

          Same can be said about ASMR, and many other misappropriated terms people throw around for clickbait.

          Rant over.

      • criddell a year ago

        It seems like when taken to an extreme, strong HATEOAS would mean you could write a universal client that works across all APIs.

        I don't think service developers take HATEOAS super-seriously. From what I've seen, returning resources with a URI rather than an identifier is about all many services do to be HATEOAS-ish. Clients are coupled to the server, but that's often okay.

        • steve-chavez a year ago

          > It seems like when taken to an extreme, strong HATEOAS would mean you could write a universal client that works across all APIs.

          And if you go with HAL or JSON-LD, you have to use specific clients for those formats, so it's not universal anyway.

          I tend to agree with "HATEOAS is for Humans" [1]: HATEOAS makes sense when interacting with HTML.

          [1]: https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...

        • discreteevent a year ago

          > strong HATEOAS would mean you could write a universal client that works across all APIs.

          It would be hard not to make that client behave like a browser from the user's point of view.

          • marcosdumay a year ago

            Exactly. HATEOAS implies as much flexibility as a browser, and as much understanding of the server data as a browser. So if you want a HATEOAS API, just make it output HTML and go with the crowd.

  • quickthrower2 a year ago

    What is the acronym for when something needs to be done, so you create an endpoint to do that thing, maybe return HTML if convenient (if it is a page request) or JSON if it is XHR, and then move on to the next user story?

nattaylor a year ago

Reminds me of datasette, which does similar things for SQLite (and much more, too). Docs at https://docs.datasette.io/en/stable/json_api.html

Example: Retrieve JSON for https://fivethirtyeight.datasettes.com/polls/president_prima... simply with https://covid-19.datasettes.com/covid/ny_times_us_counties.j... (I added a filter for Texas)

  • bmn__ a year ago

    Big problems with the design of the response here, too. Example: https://fivethirtyeight.datasettes.com/polls/president_prima...

    The schema is ad-hoc and requires a human to interpret it. Link relations (at the bottom) are expressed the wrong way: a machine could understand the "next" or "license" relations, but not an arbitrary "next_url" or "license_url" JSON key.

    Recommendation: adopt relevant IETF/IANA standards and reformulate the response to take advantage of them.
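
    Concretely, something like this (a sketch; everything except the registered "next"/"license" relation names is made up):

      # Ad-hoc keys: a client has to be taught that "next_url" means pagination.
      ad_hoc = {
          "rows": ["..."],
          "next_url": "https://example.com/polls?page=2",
          "license_url": "https://example.com/license",
      }

      # IANA-registered link relations (RFC 8288, Web Linking): a generic client
      # can follow rel="next" or rel="license" without knowing this API at all.
      link_header = (
          '<https://example.com/polls?page=2>; rel="next", '
          '<https://example.com/license>; rel="license"'
      )

      # Or the same relations carried in the body instead of a Link header:
      with_links = {
          "rows": ["..."],
          "links": [
              {"rel": "next", "href": "https://example.com/polls?page=2"},
              {"rel": "license", "href": "https://example.com/license"},
          ],
      }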

colonwqbang a year ago

> when you’ve validated the requirements, you can keep the existing façade and replace PostgREST with your custom-developed API.

Why? What is it about PostgREST that makes it unsuitable for "real" usage? The author doesn't mention it.

  • madsbuch a year ago

    Postgres is amazing, but there are some issues you quickly hit with systems like this:

    1. Side effects: want to send an email when something happens?

    2. Conditional permissions: need access to data under certain circumstances not modelled by the security language?

    3. Subset access: need access to only parts of a row / document?

    4. Aggregate access: need access to e.g. a count of rows you cannot access?

    These are usually solved using serverless functions – or an API written in code.
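
    For example, point 1 in a hand-written API layer might look like this (a rough sketch; the connection string, table, and SMTP host are placeholders, and psycopg2 is assumed):

      # Sketch of "update a row, then send an email" in a hand-written API layer.
      # Assumptions: psycopg2 is installed; DSN, table, and SMTP host are made up.
      import smtplib
      from email.message import EmailMessage
      import psycopg2

      def close_ticket(ticket_id: int) -> None:
          conn = psycopg2.connect("dbname=app user=app")        # placeholder DSN
          with conn, conn.cursor() as cur:
              cur.execute(
                  "UPDATE tickets SET status = 'closed' WHERE id = %s RETURNING owner_email",
                  (ticket_id,),
              )
              (owner_email,) = cur.fetchone()

          msg = EmailMessage()                                   # the side effect
          msg["Subject"] = f"Ticket {ticket_id} closed"
          msg["From"] = "noreply@example.com"
          msg["To"] = owner_email
          msg.set_content("Your ticket has been closed.")
          with smtplib.SMTP("localhost") as smtp:                # placeholder SMTP host
              smtp.send_message(msg)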

    Personally I err on the side of just writing some code, maybe because I enjoy that part of the project. Then I might be more inclined to use no-code solutions for the frontend, where others want the freedom and flexibility.

    • cpursley a year ago

      Supabase handles all of this.

      I handle #1 by listening to database changes and then reacting as needed with a change capture / event-like system:

      https://github.com/cpursley/walex
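
      walex follows the WAL via logical replication; a much simpler (and less durable) cousin of the same idea, just to show the shape of it, is LISTEN/NOTIFY (a sketch; psycopg2 assumed, the DSN and channel name are made up):

        # Minimal change-capture sketch using LISTEN/NOTIFY (NOT how walex works:
        # walex reads the WAL via logical replication; NOTIFY is fire-and-forget).
        # Assumptions: psycopg2 installed; a trigger elsewhere calls
        #   pg_notify('table_changes', row_to_json(NEW)::text)
        import json
        import select
        import psycopg2

        conn = psycopg2.connect("dbname=app user=app")   # placeholder DSN
        conn.autocommit = True
        cur = conn.cursor()
        cur.execute("LISTEN table_changes")

        while True:
            # Block until the socket has something, then drain notifications.
            if select.select([conn], [], [], 60) == ([], [], []):
                continue
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                row = json.loads(note.payload)
                print("change captured:", row)        # react: send email, etc.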

      • giraffe_lady a year ago

        Does supabase do anything particular to address those or just hand you postgres's RLS? It's been a minute since I used supabase but when I last did you were basically on your own with that.

        Which is not necessarily bad; RLS is sophisticated enough to handle all the real-world cases I've come across. But if it's just that, I think it's more correct to say Postgres already handles all of this.

      • discreteevent a year ago

        The problem with event driven systems is that they can end up being a kind of spaghetti. It can be hard to track down all the side effects. There's no place where you can read some code to see what happens sequentially.

  • k__ a year ago

    I would go the other way around.

    Why start with an SQL database? When I start something new, I don't want to fiddle around with tables every time I change my model.

    • quickthrower2 a year ago

      I agree but the baby of “joins” is often thrown out with the bathwater of “schemas”

      Is there a relational schemaless DB I wonder?

      • alex_suzuki a year ago

        Most of my tables have a column called “metadata”, “extra” or “payload” of jsonb type… for the data I don’t want to model in a rigid way.

        It’s no longer relational vs. NoSQL, you can get the best of both worlds. Assuming of course you don’t need “planetscale”.

        • quickthrower2 a year ago

          Nice, and by the looks of it you can still do joins to what is inside the jsonb. I presume indexes are possible too (or if not just pull that data out to actual columns).

          • alex_suzuki a year ago

            Yup, it’s way more than a blob. You can query it.
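
            Something like this, for example (a sketch; psycopg2 and all the table/key names here are made up):

              # jsonb "escape hatch" column: you can filter on it, join through it, index it.
              # Sketch only - psycopg2 assumed, DSN and all table/key names made up.
              import psycopg2

              conn = psycopg2.connect("dbname=app user=app")   # placeholder DSN
              with conn, conn.cursor() as cur:
                  cur.execute("""
                      CREATE TABLE IF NOT EXISTS customers (id serial PRIMARY KEY, name text);
                      CREATE TABLE IF NOT EXISTS orders (
                          id     serial PRIMARY KEY,
                          extra  jsonb NOT NULL DEFAULT '{}'
                      );
                      -- GIN index keeps containment queries (extra @> ...) fast.
                      CREATE INDEX IF NOT EXISTS orders_extra_gin ON orders USING gin (extra);
                  """)
                  # Filter on and join through fields that live inside the jsonb blob.
                  cur.execute("""
                      SELECT o.id, c.name
                      FROM orders o
                      JOIN customers c ON c.id = (o.extra->>'customer_id')::int
                      WHERE o.extra->>'channel' = %s
                  """, ("web",))
                  print(cur.fetchall())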

      • falcolas a year ago

        There are, but they’re pretty niche because their performance is really bad. Niche enough I can’t recall a single name today.

        But really, my advice for early prototyping is split in two: either keep it in memory and serialize to disk, or use SQLite.

        Note that even using serialized memory constructs will require data migrations, aka schema updates.

      • Aeolun a year ago

        Sounds like a contradiction in terms to me.

        • layer8 a year ago

          No contradiction at all. Imagine a relational database where the only column type is a variant (union) of the usual datatypes, and where new tables and new columns are auto-created whenever an insert/update references them (and, possibly, unused tables and columns are garbage-collected). You’d still use regular SQL and have joins and everything, the tables would still be relations, but there’s no defined schema.
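
          A toy version of that is easy to sketch on top of SQLite, whose untyped columns already behave like variants (everything below is hypothetical, of course, and not safe for untrusted table/column names):

            # Toy "schemaless but relational" layer: tables and columns appear on
            # demand, every column is an untyped variant, and SQL/joins still work.
            import sqlite3

            def loose_insert(conn, table, row):
                cols = {r[1] for r in conn.execute(f"PRAGMA table_info({table})")}
                if not cols:
                    conn.execute(f"CREATE TABLE {table} ({', '.join(row)})")  # no types declared
                else:
                    for missing in row.keys() - cols:
                        conn.execute(f"ALTER TABLE {table} ADD COLUMN {missing}")
                placeholders = ", ".join("?" for _ in row)
                conn.execute(
                    f"INSERT INTO {table} ({', '.join(row)}) VALUES ({placeholders})",
                    list(row.values()),
                )

            conn = sqlite3.connect(":memory:")
            loose_insert(conn, "people", {"name": "Ada", "born": 1815})
            loose_insert(conn, "people", {"name": "Alan", "hobby": "puzzles"})  # new column, no migration
            print(conn.execute("SELECT name, born, hobby FROM people").fetchall())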

    • cpursley a year ago

      Then use a Postgres jsonb column. Makes it easier to migrate to a table when you have your model down.

  • quickthrower2 a year ago

    Business logic in your preferred language paradigm, rather than sprocs/triggers

  • jansommer a year ago

    Agree. I've used PostgREST in production and it works great. No need to rewrite the API.

maartenh a year ago

Hm, using Apache APISIX for DDoS protection on a single node? That won't really stop a real DDoS. There's not much you can do on a single server if a botnet is saturating the network links to your server(s), at least not without help from your infra provider.

This setup can be used to prevent the backends from being overloaded, which one can probably do from a single host anyway; depending on how much work the backends do per request, it doesn't take much bandwidth to overload a system with limited request-processing capacity.

I would argue that this is load management/shedding though, and not DDoS protection.

  • 411111111111111 a year ago

    Hasn't DDoS protection always been a small misnomer?

    You can't really protect against it, you can only have enough bandwidth to handle everything.

    • oogali a year ago

      The root of the issue has always been the definition of DDoS.

      Network engineers define it as one thing: a massive amount of abnormal traffic generated by a large number of sources (distributed) that you need adequate bandwidth to soak up without impacting normal traffic.

      Software engineers define it as another thing: unwanted traffic that causes resource exhaustion and should be blocked. They're almost always thinking of DoS but refer to it as DDoS.

      That's why when TFA talks about DDoS, the example immediately attached to it is rate limiting at what to me (network engineering background) seems like an absurdly low limit (1 request every 5 seconds).
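
      For illustration, that kind of limit is just an application-level throttle; the whole mechanism fits in a few lines (a sketch, not APISIX's actual implementation):

        # Per-client fixed-rate throttle: "1 request every 5 seconds".
        # This protects a slow backend from being overrun; it does nothing about a
        # botnet saturating your uplink. (Sketch only - not how APISIX implements it.)
        import time

        MIN_INTERVAL = 5.0            # seconds between requests per client
        last_seen: dict[str, float] = {}

        def allow(client_ip: str) -> bool:
            now = time.monotonic()
            previous = last_seen.get(client_ip, 0.0)
            if now - previous < MIN_INTERVAL:
                return False          # shed the request (HTTP 429 territory)
            last_seen[client_ip] = now
            return True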

    • bee_rider a year ago

      “Mitigation” would probably be a better name (and sounds fancier, but maybe not as definitive).

siscia a year ago

Maybe I am biased, but at this point I would find it simpler to just give access to the DB.

Let users write their own SQL queries and meter how much time they use, for billing or abuse prevention.
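
The metering part is tiny, something like this (a sketch; SQLite stands in for the real DB and the quota numbers are invented):

  # Run a customer's own SQL and bill/limit by execution time.
  # Sketch only: SQLite stands in for the real DB, quota numbers are invented.
  import sqlite3
  import time
  from collections import defaultdict

  QUOTA_SECONDS = 60.0                      # per user, per billing period
  used = defaultdict(float)                 # user -> seconds consumed

  def run_user_query(user: str, sql: str, conn: sqlite3.Connection):
      if used[user] >= QUOTA_SECONDS:
          raise RuntimeError("quota exhausted")
      start = time.monotonic()
      try:
          return conn.execute(sql).fetchall()
      finally:
          used[user] += time.monotonic() - start

  conn = sqlite3.connect(":memory:")
  print(run_user_query("alice", "SELECT 1 + 1", conn))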

  • falcolas a year ago

    Did you know that databases can read from disk?

    Databases are huge security vulnerabilities and should always have some kind of shim over them. Never expose your relational DB publicly if you want any control over it.

    • asalahli a year ago

      So can the services providing HTTP APIs. What does that have to do with security?

  • andai a year ago

    At that point you might as well just let them download the .sqlite file ;)

    Or (as is the fashion recently) compile SQLite to WASM and run it in the browser.

jlengrand a year ago

That's what I actually admire about Supabase [0].

The folks created an almost full-fledged Firebase replacement by clipping pieces together (similar to the article, with Kong in place of APISIX), and it allows them to iterate super fast.

Pragmatic, and nice if you're on the receiving end of it.

[0] https://github.com/supabase/supabase#how-it-works

KronisLV a year ago

Here's a page that talks a bit more about Apache APISIX: https://apisix.apache.org/docs/apisix/getting-started/

I find it a bit ironic that they don't support Apache2 as the web server, you know, it being an Apache project and all, instead going for Nginx or OpenResty (though admittedly they're great projects).

Even nowadays Apache2 is pretty okay: https://blog.kronis.dev/tutorials/how-and-why-to-use-apache-... (especially if you disable .htaccess for less disk I/O and use it with a single file based config)

senko a year ago

Or just use an API generator like https://apibakery.com , https://amplication.com , or another one for your favorite framework, and call it a day.

  • bmn__ a year ago

    API Bakery requires a user to tediously and manually replicate the DB schema. In my eyes, that makes it worse than the solution under discussion.

    • senko a year ago

      Oh, if you have a database already and want to expose it, then yeah, it does make sense to go with an API adapter bolted onto it.

      If you're starting from scratch (as the post is, since PostgreSQL is advertised as part of the solution), then you have to manually create the DB schema. I would argue that a Node developer would find it easier to do that in Prisma, or a Django developer in Django ORM, than to do raw CREATE TABLE queries manually.

  • ushakov a year ago

    Do I have to generate my API from scratch if the schema changes? If so, this solution is a no-go

tiborsaas a year ago

This is far from what I'd call a "poor man's" thing.

I'd want to host it for free, so Vercel and Netlify come to mind. But to go real poor-man style, I'd use Google Sheets as a backend and throw some caching layer on top of it. Firebase or one of its competitors is also a good way to get some API up and running for free.

  • cpursley a year ago

    Stick it on the Render free or cheap tier. Or Supabase for free in just a single click:

    https://supabase.com/pricing

    You're better off starting with Postgres than having to migrate from some kind of NoSQL setup later.

    • tiborsaas a year ago

      I'd love to have the migration problem because it indicates what I'm building has something to it.

      If nobody uses it then no regrets doing it stupid fast.

      • cpursley a year ago

        It's more than just a migration issue; it's about thinking about the structure of your data from the beginning. Your data is your application. The web framework flavor of the day should be a secondary consideration.

brianjacobs a year ago

Not to be mean, but from my reading this needs the Heroku free tier, which no longer exists? I am very sorry if I got that wrong.

  • edmundsauto a year ago

    Supabase has a free tier with all this included, plus more. It's the best install of Postgres I've seen, and I'm obsessed with how triggers, auth, etc. are all included.

    • rmason a year ago

      Did a user group presentation on Supabase; no one in the audience had ever heard of it. I came away quite impressed. I plan on building a complex API with it soon. If I were currently employed as a backend developer, I'd be worried.

      • edmundsauto a year ago

        This just removes the need for some backend engineering, but leaves the actual hard work (schema design, etc.).

        More and more, developers need to add value to the business, instead of being the key holders of arcane invocations. It used to be that knowing how to use a computer was a technical skill; now it’s evolved into understanding the business.

      • cpursley a year ago

        Don't worry, somebody still has to design all the database tables/relations, views, triggers, etc. Which means we all become better database developers, which is a very good thing.

        Also, these tools can't handle certain back-end logic (i.e., API interactions and integrations, long-running processes, etc.), which is the complex bit anyway.

    • is_true a year ago

      I think they should create a tier between free and pro, priced at USD 5, that doesn't pause projects and has backups.

      • edmundsauto a year ago

        Agree on the backups piece; that should even be in the free plan, as that much space is not likely to be expensive if they used cold storage.

        Otoh, if you aren’t sending 1 request per week to an installation, it’s really not active at all. The overhead of supporting this niche is probably too much for a company that is clearly in ruthless prioritization mode.

  • quickthrower2 a year ago

    NKOTB: https://neon.tech/

    Free Tier includes:

    compute up to 1 vCPU / 256 MB

    storage up to 10 GiB

    3 projects per user

    • boomskats a year ago

      Neon is fresh out of private beta (or should be soon). I can't wait to play with it. However they're all about Hasura lately rather than PostgREST, which this article talks about.

      Nikita, their CEO, was a guest on the Changelog[0] podcast a month or so ago. Well worth a listen, great episode.

      [0] https://changelog.com/podcast/510

      • cpursley a year ago

        Supabase requires Postgres logical replication, which Neon doesn't quite have yet (but it's on the roadmap):

        https://community.neon.tech/t/plans-for-logical-replication/...

        I'm also really excited about Neon.

        My dream stack is Neon (Postgres) + ReadySet (database caching proxy) + Supabase.

        • boomskats a year ago

          Interesting. I assume Supabase requires it because supabase/realtime depends on it?

          Personally I'm more interested in vanilla PostgREST than Supabase's implementation of it, or their realtime implementation. It seems really cool but I'm not 100% sold on their ability to evaluate complex RLS rules, which is an important one for me.

          ReadySet looks cool, I'll check it out. Thanks!

          • cpursley a year ago

            That's right (regarding listening to the WAL - replication). I actually just stole that part of the Supabase realtime code for my own lib (so I can do logic in Elixir).

            I haven't used their role-based stuff, but it looks pretty good.

          • kiwicopple a year ago

            > I'm not 100% sold on their ability to evaluate complex RLS rules

            You don’t need to be sold on our ability - all rules are run on the database itself. Supabase is just Postgres, we don’t run any forks. We run vanilla PostgREST too (behind a proxy)

            • boomskats a year ago

              I've always known that you run vanilla PostgREST, and that it evaluates RLS in-pg, and it all works great. However when I first looked at walrus[0] a while back (18 months ago?), I couldn't quite figure out what the mechanism of action was, i.e. whether the RLS rules were being evaluated in the database or being parsed in from the DDL and then (partially?) reinterpreted by your WAL subscriber. I hope that explains my comment.

              I've had another brief look at that repo, and either you've clarified a few things since I last looked at it, or I didn't look at it closely enough in the first place. It makes far more sense to me now, the impersonation + re-query mechanism puts me at 100%.

              Thanks for the response, I appreciate it.

              [0]: https://github.com/supabase/walrus

      • nikita a year ago

        Out of private beta on the 6th of December.

    • judge2020 a year ago

      Doesn't look like you're the owner, but are there any notable clients using this service? With Heroku it was legit enough that you knew enough enterprise users were subsidizing the free tier that it wouldn't just disappear overnight.

      • quickthrower2 a year ago

        No idea. This is an early stage startup so who knows. But if you need guarantees (it ain’t a hobby project) then best to just pay for the service from any of the big or small clouds.

    • quickthrower2 a year ago

      Edit: that was the beta free tier. Might change.

  • chairmanwow1 a year ago

    Render is a fantastic replacement

    • quickthrower2 a year ago

      No free psql tier

      • selcuka a year ago

        Fly supports PostgreSQL [1], but it's not managed as you would expect.

        [1] https://fly.io/docs/postgres/getting-started/what-you-should...

        • langsoul-com a year ago

          Use Laravel to generate the API. The boilerplate gets you 90% of the way. Host Laravel and Postgres on Fly for free.

          Could do other things too: Directus (handles the API generation), PlanetScale (free MySQL tier), or even Airtable as the DB, since it provides the API part.

          • selcuka a year ago

            Sorry, what I meant by "not managed" was they just support PostgreSQL as in "yet another executable running on their containers", so they don't actually optimise or configure PostgreSQL for you.

            I find Eloquent's approach of disconnecting models from migrations a bit dangerous, but it could be me, as I don't have much experience with Laravel. PlanetScale/Vitess doesn't support foreign keys, which is nice for distributed (planet scale, ha!) apps but could be a problem for certain types of applications. I'd rather stick with (stock-ish) PostgreSQL.

  • tptacek a year ago

    Bunch of providers have free tiers.

pram a year ago

Ehh this seems more annoying than just writing an MVP in (your language of choice) given you already have the expertise.

  • blowski a year ago

    Many projects grow to the point that the limitations of writing everything in views and stored procedures become more painful than writing the CRUD code in Python, Node, Ruby, PHP, or whatever.

    Personally, I’d give a shout for https://api-platform.com/.

    • cpursley a year ago

      What are the particular pain points for you?

      For me - using a similar system (Hasura), it's migrating things dependent on views. Feels like this should be a solved problem (database migration issue, not due to Hasura).

      But I still prefer it over writing CRUD.

      • blowski a year ago

        At some point CRUD goes from being pure CRUD to mostly CRUD with a few special requirements - e.g. complex RBAC rules, decorating data, API versions, integrating third-party APIs, scaling. They're easy to do in most app frameworks (Laravel, Django, Rails), tricky in something like PostgREST. I haven't used Hasura.

        • giraffe_lady a year ago

          It's not necessarily my preference, but for the last few years a lot of my work has been in systems that are just a thin API wrapper around Postgres.

          In practice the biggest issues with it are that the tooling is significantly behind the state of the art of mainstream dev tooling, both for working engineers and ops. It's also not built with this style of working in mind, so you accumulate a lot of little struggles that are hard on engineering morale and confidence in the deployed system. DBAs have their own practices, but admin is a different discipline from dev and they aren't always directly compatible.

          The other problem is that for most modestly sized money-making applications, the database is the single most resource-intensive and expensive necessary part of the stack. Putting a bunch of business logic in there will never help that, and query optimization is a difficult to acquire skill. You pretty much always end up with just a couple of people who can talk the query planner down off a cliff when you get into the shit.

        • cpursley a year ago

          You can get really far with database views (decorating data, especially), functions (all sorts of biz logic) and Postgres roles for RBAC. But I hear you on integrating with 3rd party APIs, etc. Fwiw, there are a number of PostgREST backend client libraries to help with that.
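
          As a rough sketch of the views/RLS/roles side (psycopg2 assumed; the web_user role, tables, and the app.current_user setting are placeholders I made up, not PostgREST's conventions):

            # Decorating data with a view, plus row-level security and a role for RBAC.
            # Sketch only: psycopg2 assumed; role, tables, and the app.current_user
            # setting are placeholder conventions.
            import psycopg2

            ddl = """
            -- "Decorated" shape exposed to the API layer.
            CREATE VIEW api_orders AS
              SELECT o.id, o.total, c.name AS customer_name
              FROM orders o JOIN customers c ON c.id = o.customer_id;
            -- On PG 15+ you can add WITH (security_invoker = true) so RLS applies to the caller.

            -- Row-level security: callers only see their own rows.
            ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
            CREATE POLICY own_orders ON orders
              USING (owner = current_setting('app.current_user', true));

            -- The role the API layer connects as.
            GRANT SELECT ON api_orders TO web_user;
            """

            with psycopg2.connect("dbname=app user=app") as conn:   # placeholder DSN
                conn.cursor().execute(ddl)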

hanzhangzeng a year ago

This is informative and reminds me of that funny GitHub repo, porsager/postgres. Thanks for sharing this!

bmn__ a year ago

> PostgREST is a fast way to construct a RESTful API.

That's false. The responses don't even have hyperlinks.

  • GolDDranks a year ago

    Just to clear things up a bit (as I remember not knowing about this side of "RESTfulness" a while ago myself): an important part of REST is relying on hypertext, that is, pointing to other resources by using URLs. Just returning the ID of a referenced object/resource as a plain number isn't very RESTful, but returning a URL for that object/resource is.
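
    In other words (illustrative payloads, made-up URLs):

      # Plain foreign key: the client must know how to turn 42 into a request.
      not_restful = {"id": 7, "title": "Hello", "author_id": 42}

      # Hypertext: the reference is a URL the client can simply follow.
      restful = {"id": 7, "title": "Hello",
                 "author": "https://api.example.com/users/42"}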

    • omnibrain a year ago

      Yes, but who really does this? I've never encountered an API doing it. (Not saying they don't exist!)

  • Aeolun a year ago

    If you already have relationships between tables, then it shouldn't be that hard to add, right?

    • bmn__ a year ago

      Right. The problem is not difficulty, but motivation. The responsible programmer, begriffs, doesn't understand why it is a good idea to do so, despite bug reporters on GitHub having spent years in discussions.

      • steve-chavez a year ago

        To be fair, begriffs never shut down any discussion of HATEOAS, previous contributors did - because at the time users were asking for more JSON capabilities from PostgREST and we needed to focus on those.

        We're still open to adding HATEOAS; I've just opened an issue for it: https://github.com/PostgREST/postgrest/issues/2579

edgyquant a year ago

Why not just use postgraphile?

  • cpursley a year ago

    Haskell (performant functional language) > JavaScript (soggy wet noodles)

12tvende a year ago

Pretty cool Postgres feature.

  • bmn__ a year ago

    It's third-party software.

sgt a year ago

Is APISIX any good?

  • jjzhiyuan a year ago

    Depends on your needs for an API Gateway, Ingress Controller, or Service Mesh.

    According to Apache APISIX Slack[1] channel discussions and GitHub Issues/Discussions, most people focus on security, feature richness, and performance.

    1. Security: as an Apache project, it has a lot of users and maintainers watching project activity, and the security team follows up very quickly.

    2. Feature richness: just join the Slack channel and you will find that lots of questions are related to "features".

    3. Performance: https://api7.ai/blog/apisix-kong-3-0-performance-comparison

    [1] https://apisix.apache.org/slack