dismalpedigree a year ago

I was a very early adopter of AWS, back when instances did not have persistent storage options. AWS and other cloud providers were a huge part of my career for over a decade.

I have found myself shunning them in recent times, not only for the cost but also for the excessive complexity. Not every application needs to hyperscale. Most of the time a simple server takes care of it and we can move on to better value adds.

With gig+ fiber now commonly available, plus good open source tools, I find myself hosting things at home or on simple clouds like Linode, DigitalOcean, or OVH.

Recently got an i7 NUC with dual 2 TB SSDs, 8 cores/16 threads, and 64 GB of RAM for $1200. I run Proxmox on it as a hypervisor and management layer. I front it with a $5 HAProxy box on Linode and connect it all up with the Nebula mesh VPN. It works really well.
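
If anyone's curious about the HAProxy piece, the Linode box is just a dumb TCP pass-through to the NUC over the overlay network. A minimal haproxy.cfg sketch, assuming TLS terminates on the NUC (the overlay IP below is made up; use whatever address you assigned the NUC in Nebula):

    frontend tcp_in
        bind *:443
        mode tcp
        option tcplog
        default_backend home_nuc

    backend home_nuc
        mode tcp
        # 192.168.100.10 is a hypothetical Nebula overlay address for the NUC;
        # substitute the overlay IP from your own Nebula config.
        server proxmox 192.168.100.10:443 check

Since it's mode tcp, the Linode box never sees plaintext; it just shovels bytes into the Nebula tunnel.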

  • tatersolid a year ago

    Without ECC I’d be wary of catastrophic data corruption on that rig. It’s laptop parts in a tiny desktop case. There’s a reason all us Enterprise(TM) types shell out for “real servers”, and it’s not because we’re stupid or enamored with the vendor’s ability to pay for rounds of golf.

    • dismalpedigree a year ago

      Yes. If I were running mission-critical systems it would be another story. In this case I have triple backups, so worst case I take some downtime.

      Still easier (and faster) than running the same servers on AWS, which is where I migrated them from.

    • otabdeveloper4 a year ago

      Run a proper high-load, high-availability architecture instead. You'd need it anyway, and it will turn out cheaper in the end.

      • tatersolid a year ago

        You misunderstand the impact of silent RAM errors: when data is corrupted in RAM and then written to disk, you have persistent, unrecoverable corruption. Your backups are also corrupt, and no common HA mechanism will even detect the failure unless it causes a crash. This sort of memory corruption happens in the real world and is often written off as software bugs.
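
        To make it concrete, here's a toy sketch in Python (the path is hypothetical) of why even checksummed backups don't save you: the checksum is computed from the same already-corrupted buffer, so the corrupted backup verifies cleanly.

            import hashlib

            def backup(buf: bytes, path: str) -> str:
                # buf may already contain a flipped bit from a silent RAM error
                with open(path, "wb") as f:
                    f.write(buf)
                # the digest is computed from the same (corrupted) buffer,
                # so it faithfully describes the corrupted data
                return hashlib.sha256(buf).hexdigest()

            def verify(path: str, digest: str) -> bool:
                with open(path, "rb") as f:
                    return hashlib.sha256(f.read()).hexdigest() == digest

            data = bytearray(b"important record")
            data[3] ^= 0x01  # simulate a one-bit flip before the write
            digest = backup(bytes(data), "/tmp/backup.bin")
            print(verify("/tmp/backup.bin", digest))  # True: the corruption sails through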

  • ksec a year ago

    >Not every application needs to hyperscale. Most of the time a simple server takes care of it and we can move on to better value adds.

    Yes. And that wasn't true in the early days of AWS. Now we are marching towards 128 ARM cores with 128 GB of memory on a single node; you could fit 2 to 3 nodes in a 1U unit. Or 128-core AMD EPYC Zen 4c with 256 vCPU threads.

User23 a year ago

Back in the early 2000s getting compute as an internal Amazon developer or dev manager was a rather ridiculous process. Officially you were supposed to get it from the infrastructure team, but there was no real spare capacity to give out. So savvy dev managers would over-request and then hoard. You'd even see managers engaging in horse-trading with other managers to get capacity! Even infra devs had to play the game, although they had it a little bit easier for obvious reasons.

rektide a year ago

The pull quote in the center is so divine; it tells the entire story of where we were vs. where we are:

> Instead of working on the core of the code and focusing on the performance of a self-contained application, developers are now forced to act as some kind of monstrous manual management layer between hundreds of various APIs, puzzling together whether Flark 2.3.5 on a T5.enormous instance will work with a Kappa function that sends data from ElephantStore in the us-polar-north-1 region to the APIFunctionFactoryTerminal in us-polar-south-2.

From Vicki Boykis a couple of days ago: https://vickiboykis.com/2022/12/05/the-cloudy-layers-of-mode...

encoderer a year ago

I don’t know what’s happened to me, but I just cannot get through big “magazine” pieces like this with tons of build-up and fluff anymore. Just please get to your point early and then tell me stories to support it.

bitwize a year ago

I keep hearing a lot about "edge computing", so here's my guess.

Back in the 90s, companies would buy rooms full of servers for their big workloads and the user-facing stuff would run on people's individual PCs.

So the next big thing is that cloud will combine with edge computing to reverse this relationship: users' PCs will do analytics on behalf of big companies locally, and all the user-facing stuff will run in big companies' datacenters, i.e., the cloud.

  • treeman79 a year ago

    Yep. I had a pair of workstations that I had to flash the BIOS on to strip off the corporate security so I could run Linux. Ran 4 call centers off those machines. Had huge signs on them saying not to touch. Anything I tried to do officially would cost 10-50 times as much and take a year to approve. I still refuse to work with IBM for infrastructure because of that job.

  • ghaff a year ago

    Edge is certainly part of the answer, and it's already happening to a significant degree, both in terms of what's genuinely new and in terms of essentially rebranding things that have been around for a while, like distributed retail (which is also increasingly pulling in technologies like Kubernetes for various reasons).

    But I'm not sure that really captures what the platforms that evolved through server virtualization and then container orchestration will look like next. There's certainly lots of rationalization and standardization still to happen in the current cycle, but I'm not sure there's a real view into what happens in the next major platform cycle. Edge really introduces more complexity, not less.

willnonya a year ago

I have to disagree with the premise that cloud infrastructure was either a faster horse or a car. It's more like a chariot: useful in specific circumstances against certain challenges, but hardly the universal, flexible good that the automobile was.

ghaff a year ago

There was also some discussion of "what's next" at KubeCon and, somewhat to Steve's point, discussion about addressing complexity for both developers and operations. Layered on top of that is the tension between simplification and locking people into a vendor-specific abstraction.