shepherdjerred 3 years ago

> In 2021, during the 66-hour Amazon Prime Day shopping event, Amazon systems, including Alexa, the Amazon.com sites, and Amazon fulfillment centers, made trillions of API calls to DynamoDB, peaking at 89.2 million requests per second, while experiencing high availability with single-digit millisecond performance.

The scale that DDB operates at is mind-boggling. Where would someone even start when designing a system that can handle nearly 100 million requests per second?

  • Gh0stRAT 3 years ago

    Yeah, it totally blew my mind when I first heard these performance numbers as well. RE "nearly 100 million requests per second": as of Prime Day 2022, it has actually handled >100 Mreq/sec!

    >Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 105.2 million requests per second. [0]

    [0] https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-f...

  • ZephyrBlu 3 years ago

    Definitely insane scale, but I wonder how much of that is horizontal scaling.

    • yazaddaruvala 3 years ago

      Almost all of it.

      “During Prime Day” implies this is all read-dominated traffic. The RCUs are all going to be provisioned ahead of time so that the appropriate replicas are pre-created in the correct regions.
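
      For a concrete sense of what that pre-provisioning looks like from the client side, here is a minimal sketch using boto3. The table name and capacity numbers are made up for illustration; real deployments would drive this from capacity planning rather than a hard-coded call.

          import boto3

          # Hypothetical table name and capacity figures, purely illustrative.
          dynamodb = boto3.client("dynamodb", region_name="us-east-1")

          # Raise provisioned read capacity well before the event so the extra
          # partitions/replicas exist by the time traffic arrives.
          dynamodb.update_table(
              TableName="orders",
              ProvisionedThroughput={
                  "ReadCapacityUnits": 100_000,
                  "WriteCapacityUnits": 10_000,
              },
          )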

      Disclaimer: Used to work at Amazon.

  • explaingarlic 3 years ago

    > Where would someone even start when designing a system that can handle nearly 100 million requests per second?

    In the case of DynamoDB, it's just a series of use-case-appropriate sharding techniques, and a whole lot of scalability elsewhere :P

    I would imagine a good chunk of the DynamoDB team had to work on the requirements side of engineering, or at the very least do a lot of research into how DynamoDB would actually be used.

  • potamic 3 years ago

    You can easily hit 100k rps on a typical NoSQL cluster. Scaling that to 100 million is just a matter of running a few thousand instances. Of course, operating a system with thousands of nodes is an engineering feat, but from a design perspective it's not super complicated.

    • otterley 3 years ago

      “Just a matter of...”

      Anyone who trivializes the complexity of actually operating such a system should be forced to build and operate it themselves, and be held to account if it fails.

    • arinlen 3 years ago

      > Scaling it to 100 million is just a matter of running a few thousand instances.

      Please post the link to any GitLab/GitHub you own where you showcase running "a few thousand instances" of anything at all.

  • jjtheblunt 3 years ago

    A multicore machine? Perhaps realized across physical CPUs and OS instances, or tons of memory with a high-core-count CPU?

samsquire 3 years ago

I wrote a toy database that can be queried similarly to DynamoDB.

https://github.com/samsquire/hash-db

It's a trie in front of a HashMap.
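
Not the hash-db code itself, just a rough sketch of that shape: a trie indexes the keys so you get cheap prefix scans, while the values themselves live in a plain dict (the HashMap).

    class PrefixIndex:
        """Toy 'trie in front of a HashMap': keys indexed in a trie, values in a dict."""

        def __init__(self):
            self.root = {}    # trie node: char -> child node; "" marks a stored key
            self.values = {}  # the HashMap: full key -> value

        def put(self, key, value):
            node = self.root
            for ch in key:
                node = node.setdefault(ch, {})
            node[""] = True
            self.values[key] = value

        def get(self, key):
            return self.values.get(key)

        def scan(self, prefix):
            """Yield (key, value) for every stored key starting with prefix."""
            node = self.root
            for ch in prefix:
                node = node.get(ch)
                if node is None:
                    return
            stack = [(prefix, node)]
            while stack:
                key, node = stack.pop()
                if "" in node:
                    yield key, self.values[key]
                for ch, child in node.items():
                    if ch:
                        stack.append((key + ch, child))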

  • eatonphil 3 years ago

    Wow this is a really, really cool project!

    • samsquire 3 years ago

      Wow you are so kind.

      Thank you! I tried to keep the code simple and small. I'm trying to do the most basic thing that will work.

      I want to add document storage and unify the storage mechanism so the database can be multi-model, like OrientDB and ArangoDB. Graphs should be stored the same way as documents and SQL data.

      Currently the graph data model is separate from the SQL data model, so you cannot query a graph with SQL or vice versa.

gime_tree_fiddy 3 years ago

> because it doesn't need to, Shared Nothing systems look similar, and the authors know exactly who the readers of this paper are ^_^)

I didn't understand this. Who is the author referring to, and what is he implying?

  • c4pt0r 3 years ago

    I think a lot of shared-nothing systems look similar from a high level: connection handling / storage nodes (with small shards) / metadata (routing).

simlevesque 3 years ago

The link does not work for me.

  • binwiederhier 3 years ago

    Oh my. The author put his website on a hostname that includes an underscore: http://_.0xffff.me/dynamodb2022.html

    While underscores are valid in DNS names, they are not valid in hostnames used in URLs. Firefox enforces the hostname rules and fails the request, but Chrome seems more lenient and displays the page.

    To the author: please put your site on a valid hostname.

    Edit: Better explanation: https://stackoverflow.com/a/2183140
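
    For anyone curious, a quick way to see the distinction (a sketch using an RFC 1123-style label check, not what any particular browser implements):

        import re

        # RFC 1123 hostname label: letters, digits, hyphens; no underscores.
        LABEL = re.compile(r"^[A-Za-z0-9]([A-Za-z0-9-]{0,61}[A-Za-z0-9])?$")

        for label in ["_", "0xffff", "me"]:
            print(label, "->", bool(LABEL.match(label)))
        # "_" fails the hostname rules even though DNS itself will happily resolve it.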

    • philipkglass 3 years ago

      > Firefox enforces the hostname rules and fails the request, but Chrome seems more lenient and displays the page.

      I just successfully opened this link in Firefox 103.

      • LilBytes 3 years ago

        Doesn't work in the Firefox Android app.

        • binwiederhier 3 years ago

          Yeah that's what I tested with. Firefox (desktop) works. Firefox mobile (Android) does not.

    • buzer 3 years ago

      Works for me on Firefox Nightly. One thing to note is that it's an HTTP-only site, so if you have some kind of extension/setting to force HTTPS it's not going to work.

      • binwiederhier 3 years ago

        It doesn't work with Firefox mobile (Android).

xthrowawayxx 3 years ago

Here are my notes on DynamoDB: how to spend $100k on what would cost $10k with a SQL server, for a 100x worse service.

  • sass_muffin 3 years ago

    This is the video I recommend to others when working with DynamoDB. It's by Rick Houlihan, about DynamoDB data modeling. In my experience, most developers who complain about DynamoDB don't fully understand it.

    https://www.youtube.com/watch?v=HaEPXoXVf2k

    • SPBS 3 years ago

      DynamoDB can model relational data just fine, if you're okay with setting your query access patterns in stone and never changing them again.
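
      To make that concrete, a minimal sketch of what "setting access patterns in stone" looks like with a single-table design (boto3; the table name and key layout are hypothetical):

          import boto3
          from boto3.dynamodb.conditions import Key

          # Hypothetical single-table layout: composite keys encode exactly the
          # relationships you plan to query, and only those.
          table = boto3.resource("dynamodb").Table("app")

          table.put_item(Item={"PK": "USER#42", "SK": "PROFILE", "name": "Ada"})
          table.put_item(Item={"PK": "USER#42", "SK": "ORDER#2022-07-12", "total": 31})

          # The query you designed for: one user and their orders.
          resp = table.query(
              KeyConditionExpression=Key("PK").eq("USER#42")
              & Key("SK").begins_with("ORDER#")
          )

          # Anything you didn't bake into PK/SK (say, "all orders over $30 across
          # all users") means adding a GSI, scanning, or migrating data.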

    • newlisp 3 years ago

      And many developers don't fully understand that he's a good salesman and can't see through the BS.

      • sass_muffin 3 years ago

        All technologies have their pros and cons. They have use cases where they make sense and use cases where they don't. The job of an engineer is to decide which tool fits which use case. To dismiss a useful technology as "BS", especially one used by companies all over the world for over a decade, without any backing data seems a bit disingenuous.

        • newlisp 3 years ago

          > All technologies have their pros and cons. They have use cases where they make sense and use cases where they don't. The job of an engineer is to decide which tool fits which use case.

          Exactly. But that's not how he paints it. I have seen him bash RDBMSes as being a thing of the past, and present his promoted way of data modeling and "new" database technology as what companies should be starting with today or moving to.

  • shepherdjerred 3 years ago

    > In 2021, during the 66-hour Amazon Prime Day shopping event, Amazon systems, including Alexa, the Amazon.com sites, and Amazon fulfillment centers, made trillions of API calls to DynamoDB, peaking at 89.2 million requests per second, while experiencing high availability with single-digit millisecond performance.

    Yeah, good luck beating DDB on that one.

    • Demiurge 3 years ago

      Even better luck building an Amazon scale business after spending 100k on dynamodb.

  • kumarvvr 3 years ago

    I think it is more of an issue of not being able to effectively model your data to suit the DDB paradigm.

    DDB absolutely shines when you have to scale. I mean, have you ever tried setting up a cluster of SQL servers? It's a nightmare.

    DDB is breezingly easy, as long as you know how to model your data effectively.

    • hw 3 years ago

      An RDBMS is breezingly easy, as long as you know how to operate your clusters effectively

      • fubbyy 3 years ago

        I could be wrong, but at truly large scale an RDBMS can’t compete, right? SQL simply can’t horizontally scale in the same way?

        • oceanplexian 3 years ago

          Sure it can, and I’ve operated MySQL (Percona) at large scale for a social media company. You shard requests by user or something else; it doesn’t matter if you have 50 DBs or 50,000. However, in most cases you have to write the sharding mechanism yourself, and understand your workload and what such a system can and cannot do.
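
          The routing layer itself can be almost boring. A minimal sketch of hash-based shard picking (shard count and DSNs are made up; real setups add re-sharding, replicas, and connection pooling):

              import hashlib

              # Hypothetical shard map; in practice this comes from config or service discovery.
              SHARDS = [f"mysql://db-{i:03d}.internal/app" for i in range(64)]

              def shard_for_user(user_id: str) -> str:
                  """Deterministically map a user id to one backend DB."""
                  digest = hashlib.md5(user_id.encode()).hexdigest()
                  return SHARDS[int(digest, 16) % len(SHARDS)]

              # Every query for this user goes to the same shard.
              print(shard_for_user("user-42"))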

        • fragmede 3 years ago

          Given perfect knowledge of access patterns, I bet you could. Especially since it's basically all reading and not writing. Horizontally scaled with many, many read-only replicas. But then there are lies, damn lies, and benchmarks. All the big companies running huge Oracle db installations are busy running their workloads, which probably don't look like Amazon Prime day traffic.

          It's also impossible to have perfect knowledge of access patterns.

      • dalyons 3 years ago

        even if you know how to do it, running large sharded rdbms clusters is incredibly far from "breezingly easy"

  • __alexs 3 years ago

    At a previous job I moved a giant Postgres server to Dynamo and it was cheaper, faster and had better resiliency.

    Years later some people moved it back to SQL and made it cost 2x as much...

    Bad engineering is possible with all technologies.

  • atwood22 3 years ago

    A master carpenter always blames his tools.

    • btown 3 years ago

      A master carpenter is knowledgeable enough to be critical of those who aggressively peddle tools to unsuspecting customers who are likely to be unfamiliar with just how dangerous and potentially project-destroying those tools can be when not the right tools for the job.

    • otterley 3 years ago

      That’s not the saying.

  • boruto 3 years ago

    I was part of a project where we moved user transaction lists from pg to dynamo.

    While there are pros like ease of scaling and all, the biggest was being able to tell product and higher-ups that the out-of-place feature built on group-bys was simply not possible, thereby ending the whole discussion.

  • icedchai 3 years ago

    You need to use the right tool for the right job. I know people using DynamoDB for a tiny dataset that would easily fit in sqlite (or any other DB) running on a $20/month VPS. That wouldn't be serverless, of course, so it's a no-go.

    • slyall 3 years ago

      Not sure what you mean by "tiny dataset", but DynamoDB is great for something with 100 or a few thousand items. Especially if these are only occasionally accessed but need to be shared.

      Half the time it'll fit in the free tier, or cost perhaps $1/month. Certainly cheaper than creating an instance.

      • icedchai 3 years ago

        I’m basically talking about a couple gigabytes of data. Something non-trivial, but that also doesn’t need a massive distributed DB.

    • arinlen 3 years ago

      > I know people using DynamoDB for a tiny dataset that would easily fit in sqlite (or any other DB) running on a $20/month VPS.

      I have to say your comment comes off as very ignorant. If you are an AWS customer then you either pick one of the database offerings, such as DynamoDB or Amazon RDS, or run your own database on an EC2 instance. Except running your own DB on EC2 can cost around the same as running Amazon RDS, and DynamoDB has a very roomy free tier.

      So the piece of info you somehow left out is that DynamoDB is free for "a tiny dataset", and you do not have to manage anything at all with DynamoDB either.

      • icedchai 3 years ago

        I already know all that. I’ve been using AWS for over 10 years. I’ll just say I prefer the relational model when starting out and leave it there. I’ve had good luck with RDS.

        I’ve seen people paint themselves into a corner by screwing up their DDB keys too many times and having to export and reload all their data. If you don’t think ahead about your access patterns this is very easy to do. Nobody thinks ahead with “agile.” You’re better off starting with SQL and migrating things to Dynamo where it makes sense.

    • cebert 3 years ago

      That same dataset could then be modeled and stored in DynamoDB for even less than that, right?

      • BreakfastB0b 3 years ago

        Yeah, I have no idea what icedchai is talking about; the DynamoDB free tier is super generous: https://aws.amazon.com/dynamodb/pricing/on-demand/. It's going to cost you nothing until you have enough customers to afford to pay for it. Correctly modelling a single-table design, on the other hand ...

        • glenngillen 3 years ago

          I use it for lots of stuff like this. The pay-per-use/on demand pricing makes it incredibly cheap even if I get occasional bursts of activity. With much better availability than SQLite running on a single VPS.

        • icedchai 3 years ago

          1) Latency. 2) Ease of data manipulation.

          Using Dynamo for a small data set is overkill. You can manipulate the data way faster on a local server, where it is basically in memory (disk cache), and not have to deal with any modelling issues.

          I guess some people like the DynamoDB API? I find it incredibly awkward.

          • blackoil 3 years ago

            You can be even faster if you store the data in the client. Though different use cases call for different solutions.

    • Closi 3 years ago

      > I know people using DynamoDB for a tiny dataset that would easily fit in sqlite (or any other DB) running on a $20/month VPS.

      Depending on the use case, there are plenty of reasons you might want to go down a NoSQL route other than price. Being schemaless makes it much easier and quicker to hack together new projects, for instance (and it's more fun too!).

  • osigurdson 3 years ago

    SQL Server seems like an odd choice outside of the enterprise. I'd suggest running Postgres on Linux.