greatjack613 7 days ago

To put this in perspective: the new $500 Ryzen 9 3900X matches this $1700 Xeon server in multi-core Geekbench, and beats it in single-core performance at a fraction of the power usage.

  • snuxoll 7 days ago

    It also won’t have iLO, etc.

    Even for a homelab these are useful features to have, I don’t want to go hooking up my monitor and keyboard every time I need to troubleshoot some boot issue or install a new OS. I can also get used DDR3 ECC memory for a hell of a lot cheaper than DDR4 right now.

    Unfortunately the person who wrote this article is in the EU, where the second-hand market has much slimmer pickings. I can buy a Dell R620 for $200-300, two 10-core Xeons for another $300 if it doesn't come with enough cores, and 128GB of 16GB DDR3 RDIMMs for $200 - total price is under $1000 USD, and that's excluding any components that come with the server that I may opt NOT to replace.

    • mey 7 days ago

      Useful iDRAC features like remote display and remote ISO install aren't included in the base Dell iDRAC license. So I'd still need to hook up a keyboard/monitor if it's anything more than a reboot.

      Edit: iDRAC is the Dell LOM, iLO is HP

      • herpderperator 7 days ago

        Supermicro motherboards, which come in regular form factors too (for example the X9SCM-F [0] or X11SSL-F [1]), ship with IPMI (that's what the -F designates), which is exactly what iLO, iDRAC, etc. provide, in a normal form factor. You don't have to buy fancy jet-engine "server" boxes when you can get the same features on an mATX board. No license needed.

        I have been using the X9SCM-F for the last 9 years, and enjoy being able to modify the BIOS remotely, boot remote ISO images, or troubleshoot any boot problems without physical presence.



        • aetimmes 7 days ago

          The original parent comment was talking about installing a Ryzen consumer CPU, which is generally mutually exclusive with boards (SMC, Dell, HP, or otherwise) that provide any sort of baseboard management controller.

          Some of the SMC boards listed support i3 processors, but generally, if you want a BMC, you're stuck with enterprise-grade processors.

          • walterbell 7 days ago

            ASRock X470D4U is a server board, with HTML5-based KVM via BMC, which uses Ryzen consumer CPUs.

            • ksec 6 days ago

              Wow, I wonder how many people know about this? I was also expecting BMCs to be exclusive to EPYC and enterprise platforms.

              • walterbell 5 days ago

                It's often been sold out at Newegg, even with purchase limits. Price recently increased from $240 to $350.

                No BMC license fee. Officially supports ECC UDIMM, Linux and PCIe bifurcation.

            • sitkack 6 days ago

              Mr Erbell, you made my day!

          • herpderperator 7 days ago

            I listed two motherboards I personally researched in the past and knew about already. There are plenty of consumer Supermicro boards that have IPMI. Here's one I found just by browsing the first page of their workstation motherboard section. [0]


            • rhizome 7 days ago

              None of yours are Ryzen, though.

              • bsder 7 days ago

                Um, these are AMD EPYC (the server processor):


                All Supermicros for EPYC have IPMI.

                This EPYC is $599:

                • rhizome 6 days ago

                  I'm not disputing, just noting: this is on the page from your previous comment:

                  Intel® 8th/9th Generation Core™ i9/Core™ i7/Core™i5/Core™i3/Pentium®/Celeron® series Processor, Intel® Xeon® E-2100 Processor, Intel® Xeon® E-2200 Processor. Single Socket LGA-1151 (Socket H4) supported, CPU TDP support Up to 95W TDP

                • aetimmes 6 days ago

                  Right, but the distinction we're making upthread is not Intel vs AMD, it's consumer CPU vs server CPU.

                  You can get out-of-band management with Xeon motherboards, or with Epyc motherboards, but you're likely not going to get a motherboard with out-of-band management with a Ryzen CPU or a Core i[3579].

                  • oarsinsync 6 days ago

                    But hilariously you can get IPMI with Atom motherboards

        • dahfizz 7 days ago

          > they come with IPMI... which is exactly what iLO, DRAC, etc., is

          iLO / iDRAC implement IPMI, but they are not the same thing. iLO and iDRAC provide web interfaces, remote ISO mounting, granular boot control, remote VGA, etc. There are a lot of useful features not available in a "standard" IPMI implementation.

          • herpderperator 7 days ago

            You are confusing IPMI 1.0 with IPMI 2.0.

            > Systems compliant with IPMI version 2.0 can also communicate via serial over LAN, whereby serial console output can be remotely viewed over the LAN. Systems implementing IPMI 2.0 typically also include KVM over IP, remote virtual media and out-of-band embedded web-server interface functionality, although strictly speaking, these lie outside of the scope of the IPMI interface standard. [0]
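
            For what it's worth, a stock IPMI 2.0 BMC exposes the serial console, power control and sensors to standard `ipmitool`; the BMC address and credentials below are placeholders for illustration:

```shell
# Placeholder BMC address and credentials; substitute your own.
BMC=10.0.0.42
IPMI_USER=ADMIN
IPMI_PASS=changeme

# Serial-over-LAN console: watch POST, BIOS setup, and the bootloader remotely.
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" sol activate

# Out-of-band power control and sensor readout.
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" chassis power status
ipmitool -I lanplus -H "$BMC" -U "$IPMI_USER" -P "$IPMI_PASS" sensor list
```

            KVM-over-IP and virtual media, as the quote notes, sit outside the IPMI spec itself and are reached through the BMC's web interface instead.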


          • mopsi 7 days ago

            I used to have X9SCM-F, and its implementation also had a web interface, remote ISO mounting and remote VGA.

            • herpderperator 7 days ago

              Agreed; I also have X9SCM-F, and was able to do all that (which I already specified in my parent comment.)

        • jlgaddis 7 days ago

          I've got Supermicro -F servers and I've got PowerEdge servers with iDRAC Enterprise. Yes, the Supermicro servers have basic remote management features but the iDRAC Enterprise offers so much more.

          If you're just running a couple servers in a home lab, yeah, it's probably not worth it but for a company with a bunch of servers the additional management features are absolutely worth it, IMO.

          Redfish [0] is looking like it's gonna be pretty awesome too.


          • bubblethink 7 days ago

            They are both different levels of bad; at least Supermicro's is free. The whole low-level BIOS and BMC space is stuck in some '90s parallel universe. coreboot+OpenBMC can't come to the masses soon enough. It would be like what Linux felt like after Windows.

      • snuxoll 7 days ago

        Sure, but the licenses to unlock them are easy to acquire.

        Dell ties them to the service tag, which is quite easy to set via racadm and you can buy a license off eBay. Older Dell systems required a $30 part you can find on eBay. HP just uses a key they send you on paper.

        Hell, many of these decoms still have the license applied to them. I have 2x R210 IIs, an R320, an R520, and formerly had an HP ML10 and a Lenovo TD340. The Lenovo was the biggest pain to get vKVM working on because of the stupid, hard-to-find dongle.

        Don’t buy Lenovo servers for a homelab, BTW. Dell lets you override the ramp-up in fan speed when a non-Dell-branded PCIe card is installed; newer Lenovo servers ramp them to 100% and you can’t turn it off (I had to leave my TD340 on the initial BIOS revision, security issues and all, because of this).

        • bubblethink 7 days ago

          The 100% fan speed on non-whitelisted cards is such a dick move. Is there a single legit reason for this? Did someone's building burn down because they didn't buy the Quadro GPU that Dell sells?

          • snuxoll 7 days ago

            It’s not even limited to GPUs; a quad-port Intel NIC in my TD340 causes it to max the fan speeds if I’m not running the A01 BIOS.

            The argument is that they don’t know the thermal requirements of the card because it’s not OEM-branded, so they max the fans “to be safe”, but not letting you override it is just bullshit.

            • sitkack 6 days ago

              They do something similar on Chromebooks with 3rd-party mini PCIe WiFi cards. If you insert a non-Lenovo-branded card, it will fail to even enter the BIOS screen, with a Dick Move error.

              Also, same stupid justification: “this configuration hasn’t passed FCC EMI tests”, when the cards are qualified directly; they are the FCC certification boundary.

              • bubblethink 6 days ago

                I think lenovo has stopped doing that recently. However, we now have soldered wifi cards.

        • lozaning 7 days ago

          Back when I was in the HP VAR game, iLO licenses were some of my highest money-makers. MSRP for those licenses was like $250.00; I was buying them grey-market from ESISO for like $35 a pop.

          • vbezhenar 7 days ago

            I bought an iLO Advanced license for my MicroServer for $5 on eBay. It worked just fine.

            • gsich 7 days ago

              Probably from a keygen or duplicate use. But it won't matter anyway; the iLO (or HP) will never know.

        • mey 7 days ago

          I have some principles with my homelab, and not doing grey-market licensing is one of them. That's my personal preference, but I just wanted to point out that iDRAC Enterprise is an additional cost.

        • louwrentius 7 days ago

          > Hell, many times these decoms still have the license applied to them

          I was indeed that lucky. Thank you <bank>.

      • phasor 6 days ago

        Actually, remote display is possible with the base Dell iDRAC Express license by enabling SSH console COM2 redirection. This is not enabled by default, and it's not very obvious that it's even possible (probably because Dell wants to sell those iDRAC licenses). For me, COM2 redirection is all I need for a server in a datacenter without physical access: if the server OS is not booting, I can still SSH to the iDRAC and change BIOS options or select alternative boot options in GRUB. For OS installs I use chroot. So really, no need for an iDRAC license in my case.
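
        For anyone trying this, the flow looks roughly like the following; the hostname is a placeholder, and the exact console command varies by iDRAC generation (on iDRAC6-era firmware it is typically `console com2`):

```shell
# SSH into the iDRAC itself, not the host OS (hostname is a placeholder).
ssh root@idrac.example.com

# At the iDRAC prompt, attach to the redirected serial console, e.g.:
#   /admin1-> console com2
# BIOS output, the GRUB menu, and the OS serial console then appear
# in this SSH session, assuming serial redirection is enabled in the BIOS.
```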

    • unethical_ban 7 days ago

      Meh. I bought a $55 7" LCD screen and a $10 tenkeyless mini keyboard. Much cheaper and more flexible than iLO for how often I need to have non-network access.

    • AdrianB1 7 days ago

      I've been running home servers for ~10 years, and I never felt the need for iLO, even when I moved the stuff into the basement. I have iLO on the servers at work, so I'm not dismissing its utility, just the need at home.

      • rhizome 7 days ago

        It can happen once in a while. I've been not-at-home the past few months and my mail server fell over one day. I went to my apartment, turned it on, made sure mail was flowing, then went home. It had turned itself off again in the meantime. I drove out again and turned it on, babysitting it for a few hours to see what might be happening. It's been up ever since, so: mystery. I would have liked a telephone/internet rebooter during all of this!

      • snuxoll 7 days ago

        I’ve got 4 servers in my rack at home, I don’t like getting behind it to swap peripherals around. I plan on getting more eventually for a Ceph cluster too.

        If you’ve got one or two, then, sure.

        • jagger27 7 days ago

          This is where I really appreciate front-side VGA ports.

      • vbezhenar 7 days ago

        I reinstall Linux a few times per year, so iLO comes in very handy.

    • mrweasel 7 days ago

      > It also won’t have iLO, etc.

      You could install the free version of VMware's ESXi and use that as a sort of iLO. In most cases that would be a more pleasant experience.

      • rzzzt 7 days ago

        I'd really love to see a hypervisor that emulates IPMI/iLO/AMT/whichever flavor of management hardware on machines that lack the capability. It should run a single VM only and pass through all other hardware, but also cordon off e.g. a single TCP port and allow the user to power-cycle the VM, along with remote control (which could be done through VNC). Bonus points for exposing temperature and other sensors.

        • snuxoll 7 days ago

          But when that hypervisor fails to boot you are SOL. I have vFlash cards in all my Dell servers with the OS install media and a recovery image so I can fix broken installs quickly. Oh, and that stupid PCIe training error from a NIC that went bad would have required I plug in my monitor as well.

          It would be useful, but why not just run VMWare or Proxmox at that point?

      • Nextgrid 7 days ago

        At a potential performance penalty and significant extra attack surface (you now have to worry about your hypervisor’s attack surface and your actual OS’ attack surface, whereas on bare metal you only have a single OS to worry about). Not to downplay the advantages of virtualisation, but if all you needed was a bare-metal server, then running a hypervisor adds one extra thing that can go wrong that you didn’t ask for.

        • amanzi 7 days ago

          Most (if not all) home lab users will be using virtualisation to spin up multiple VMs.

        • zrm 7 days ago

          > you now have to worry about your hypervisor’s attack surface and your actual OS’ attack surface

          But the same is true of iLO. Worse in that case, because the iLO won't receive security updates for as long and is less well scrutinized in general than common Linux distributions.

    • dcm360 7 days ago

      You can get a Tyan Tomcat EX or one of the ASRock Rack AM4 boards that have an Aspeed AST2500 chip to provide remote management. However, it might be difficult (and expensive) to get 4 32GB UDIMMs to work properly.

    • zamadatix 7 days ago

      I have an x570 board with IPMI, it's not unheard of.

      Waiting 2 months for Threadripper would probably be more sensible unless you need it RIGHT NOW though.

    • paulmd 6 days ago

      IPMI boards are increasingly available for AMD systems now. There is the Asrock X470D4U and X399D8A-2T and Supermicro also had a couple in the works I think.

  • btgeekboy 7 days ago

    That $1700 includes the rest of the computer, though. That adds up.

    • awinder 7 days ago

      I almost went down this sort of road and went 3900X instead. I think the one non-replicated benefit of the older server route is cheap and plentiful ECC RAM, but you're also getting slower DDR3, which is why the pricing works out the way it does. I was in the same pricing ballpark with a new-build Ryzen with a 2070 Super; I think you could make it pretty much equivalent, maybe trading 64GB of fast DDR4 for the 128GB of DDR3. As long as you can make do with 24 CPU threads, I think you'll be in a much better position at same-ish cost, but with new, fully warrantied components. That's just my math :-).

    • greatjack613 7 days ago

      Correct, but you can build a full system with the Ryzen 9 plus 128GB of RAM etc. for definitely less than $1700, especially given the drop in RAM and SSD prices these days.

      • pedrocr 7 days ago

        128GB of ECC RAM by itself is going to be ~$800-1000, as far as I can find any that fits a Ryzen build. So it won't be trivial to reach that price point for a full build, although Ryzen is clearly great value anyway.

        • greatjack613 7 days ago

          True, but only if ECC is necessary. In this case where he is mainly using the server for experiments and to try software, regular ram will do absolutely fine.

          Here is a 128GB kit for under $600.

          • close04 7 days ago

            > only if ECC is necessary

            If your workloads require 128GB of RAM then you probably want ECC. You'd be skimping in the wrong place.

          • mciancia 7 days ago

            This is a kit of 8 sticks. Ryzen supports a max of 4 DIMMs, so you need 32GB per stick.

          • ska 7 days ago

            > True, but only if ECC is necessary.

            big "if" there really, and that makes it an apples-and-oranges comparison IMO.

        • abarringer 7 days ago

          I received a price quote from HPE this week for ECC server RAM at $230 per 64GB stick. So make that $500 with shipping and tax for 128GB.

          • pedrocr 7 days ago

            Consumer Ryzen requires unbuffered ECC RAM so most usual RAM options are not compatible.

            • mamon 7 days ago

              SUPPORTS ECC RAM, but does not require it. You can use plain old RAM chips with it.

              • kadoban 7 days ago

                Grandparent's point, presumably, was that _buffered_ ECC RAM would not work. Which is definitely correct. I'm not sure if the sticks in question are buffered or not though.

                • snuxoll 7 days ago

                  Likely registered, not buffered - FBDIMMs fell out of favor some time ago. Technically still “buffered”, but only the control lines and it maintains electrical compatibility with UDIMMs.

                  • kadoban 7 days ago

                    Ah, thanks for that. I looked up the difference, I'll have to update my terminology.

    • louwrentius 7 days ago

      Exactly, watch what happens to the price when you add 128GB of DDR4 memory.

    • burtonator 7 days ago

      I mean, why isn't this the first comment reply?

      The motherboard, RAM, SSDs, CPU, case, and power supply all have to be factored in.

      This is an apples-to-oranges comparison.

      IMO it always makes more sense to build your own PC vs. buy a used server.

      It's going to have better ventilation and be much quieter in a home setup.

  • tgtweak 6 days ago

    Yes, but build it with 128GB of RAM and try to get a lights-out KVM on it.

    Much better value can be had than OP's buy - I just picked up a Dell R820 (quad 8-core 3.2GHz v2 Xeons; hyperthreading is overrated, yo) with 768GB of RAM and dual 10G NICs for $1500. It idles around 170W (about $5/mo where I live). The real magic: 8 full PCIe 3.0 x16 slots.

    The 3900x is a beast for the workstation though, and I have it running there with nothing but praise.

    • sireat 6 days ago

      That's a great value!

      I just checked eBay, and it seems an R820 with 128GB and 4x v2 Xeons goes for $1200-1300.

      How did you manage to get the extra 640GB for $200-300?

      PS: 6 months ago I upgraded my 2x Xeon 2697v2 Dell Precision T7610 and paid $200 for 128GB of DDR3 ECC (8x16GB). Didn't expect to see another price drop.

      • tgtweak 6 days ago

        They come up often enough - someone will be selling a lot of 20+. They go quick but you usually have enough time to send a few messages before they sell out.

  • buildbuildbuild 7 days ago

    Good point. If you're looking for server features on AM4, I highly recommend the ASRock Rack X470D4U motherboard for its IPMI. Buying used HPE Gen8/Gen9 servers is still a great deal when you need redundant power and large amounts of RAM. Their 10G and 40G FlexibleLOM cards can be found on the cheap too. TechMikeNY is a great source of refurbished servers in the NYC area.

  • parsimo2010 6 days ago

    It seems like the author wasn't targeting benchmarks exclusively, and comparing the price of a CPU to the price of a whole system isn't fair. I spent about $2200 on a system targeting compute value: I have 4x E7-4890 v2 for a total of 60 cores and can get >80000 on multi-core Geekbench. I also have 512GB of RAM, which I think is more than Ryzen can handle.

    I paid $85 per CPU, so while a new Ryzen 9 does make sense for a lot of people with its fast single threaded speed and low power consumption, the old server gear I bought still wins in highly parallel tasks for less money.

    Edit for proof of my geekbench score: That was before I expanded the RAM, but the score didn’t change with the extra memory.

    • nessunodoro 6 days ago

      Thanks for this. How are the energy needs/costs?

      • parsimo2010 6 days ago

        I haven't measured power consumption with a meter, but each CPU has a TDP of 155 W, so potentially 620 W plus whatever is required to keep the motherboard, RAM, and disks alive. My ballpark guess is 750 W at full load and under 200 W at idle. It runs off a 1600 W power supply, but I suspect that is overkill. It's definitely less power-efficient than a new Ryzen CPU.

  • srmatto 7 days ago

    I bought an HP ProLiant DL380 G7 with dual hex-core Xeon X5660s at 2.8GHz, 76GB of DDR3 RAM, and about 1TB of 15K HDDs for $370.

    • bluedino 7 days ago

      I always forget that used servers are dirt cheap in the USA compared to other countries.

      The other good thing about getting a Dell/HP rack server is that you can easily have 16 drive bays (or more).

    • greatjack613 7 days ago

      The actual hardware is not bad; it's just that the perf-per-$ ratio is not great when you compare it to Zen 2.

      Kudos to AMD

      • srmatto 7 days ago

        Fair, but at $370 I think the ratio gets much better.

        • roel_v 7 days ago

          Not after you price in the power. I have a Xeon system that is plenty powerful for my (C++) dev work, but I'm thinking of replacing it with an AMD build, as it'll probably be cheaper over time. Then again, this is at Dutch energy costs; I'd imagine at (say) Texas prices it'd be different.

        • ThrowawayR2 7 days ago

          Not sure it's significantly better after factoring in power and auxiliary costs like cooling.

    • aantix 7 days ago

      Where from?

  • rubyn00bie 7 days ago

    I have an almost identical setup hardware-wise (I opted for dual 2687 v2s [better single-core performance]) and it only cost me $1000. It also supports 20 physical disks with no extra hardware or configuration needed (aside from just plugging in the disks and adding 'em to ZFS).

    RAM (DDR3) and storage capacity (cheap SAS drives) in these systems are really where the savings are... I don't think I'll ever own a "regular" desktop again.

  • Aperocky 7 days ago

    Well, that's used stuff, so Moore's Law's ghost is still in effect.

    The biggest problem I have with a used server is the inability to incorporate a GPU into the build. But I guess that really depends on what you want to do with it; serving a website will absolutely not require a GPU.

    • ivalm 7 days ago

      IIRC the DL380 Gen8 supports 2x risers for GPUs.

    • gnopgnip 7 days ago

      What kind of server does not support a GPU?

      • roel_v 7 days ago

        The form factor is (or can be) an issue. I used a 4U case for my build a few years ago, and to fit my 1070 I had to bring out the Dremel and remove some parts of the crossbars.

  • patrioticaction 7 days ago

    No worries, I am sure the Gen8 server makes for a good doorstop when it dies, have tested before.

  • gameswithgo 7 days ago

    With the Ryzen 3950X, with 16 cores, coming soon for a bit more!

    • tgtweak 6 days ago

      Notably more...

      Epyc2 is what I'm fantasizing over lately.

  • paxys 7 days ago

    Something that was $1700 in 2013 is cheaper now? Shocker.

    FYI the E5-2680 is like $300 on Amazon right now, so that's a better comparison.

    • r00fus 7 days ago

      The CPU was $1700. The entire setup is now less.

eropple 7 days ago

Neat machine, but 96W idle is a lot for a home server, IMO. Maybe you're somewhere where power's super cheap (and hopefully clean), but a lot of folks aren't.

I run my old desktop (an i7-6700K) in a rack in my basement now, with 64GB of RAM, a Mellanox ConnectX for 10G networking, and half a dozen disks, and it idles under 15W. The entire rack, UniFi stuff/PoE WiFi APs and all, sits around 50W. 96W just for a single machine is A Lot.

  • atonse 7 days ago

    In addition to wattage, my other concern for a home server was fan noise. If it's anywhere that I am, I really don't want to hear it. So I got these amazing Noctua fans and simply can't hear the server. (Updating language based on replies:) a datacenter-style server probably isn't designed for that, so it wouldn't necessarily work out.

    I have an 8 core i5. I bought Intel (18 months ago) because I didn't want to deal with any AMD incompatibility, especially since I also wanted to run it as a gaming machine using VT-d. If I were doing it again, I'd definitely go with Ryzen.

    • skunkworker 7 days ago

      I’m not sure what you’re getting at with the home server fans not working out. I’ve been using a Noctua on my Xeon home server without any issues, and I reconfigured the fan sensors and control using IPMI. There’s no problem making a quiet Xeon home server.

      • snuxoll 7 days ago

        I’ve got four servers and two switches in my home office; noise is around 45dBA, and with the door closed it’s inaudible. Total power draw at idle is around 340W (3.1A measured by my PDU @ 110V).

      • atonse 7 days ago

        Sorry I wasn't clear. I meant something like a Rack mounted server designed for a datacenter. Would it become whisper quiet if you just replaced the fan? Is it that simple?

        • snuxoll 7 days ago

          Replacing the fans in server systems is tricky; many “quiet” fans are quiet because they have a lower max RPM, which will freak the hell out of your management controller. Also, there’s only so much optimization you can do with 40mm fans in a 1U chassis.

          The fans in my 1U Dell servers are fairly quiet when idling around 3600RPM. They make noise, and you wouldn’t want to be sleeping or watching TV in the same room, but with the door to my office closed they can’t be heard in the hallway.

          • skunkworker 7 days ago

            Ah, I understand what they were getting at. If you're using a server motherboard in a regular 1-4U case, this becomes a different issue entirely. I'm using a Supermicro X9 with dual Xeons in an upright Enthoo Pro case with its own fans. You're right about the management controller freaking out, because the IPMI threshold settings expect the Supermicro 3-fan assembly instead of different case fans. Fortunately, with some tweaking and ipmitool, you can use fans with a much larger or smaller range of acceptable RPMs without your management controller thinking the fans have dropped into "lower non-recoverable".

            Right now my Noctuas are running at about 1000RPM and keeping the Xeons around 40C (under load this will increase, with minimal dB).
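
            As a concrete sketch, that threshold change can be done with `ipmitool`; the sensor names and RPM values here are examples, so check your own board's `ipmitool sensor list` output first:

```shell
# Show current sensor readings and thresholds.
ipmitool sensor list

# Lower the fan floor so slow-spinning fans don't trip the BMC.
# The three values after "lower" are: non-recoverable, critical,
# and non-critical RPM, in that order.
ipmitool sensor thresh "FAN1" lower 100 200 300
ipmitool sensor thresh "FAN2" lower 100 200 300
```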

            • paulmd 6 days ago

              My solution was to take some spare Noctua low-noise adapters (basically inline resistors with fan connectors attached) and just drop the max fan speed. The fan controller can freak out all it wants, it can never go above about half rpms, and it still generates a noticeable breeze through the hotswap bays.

              My CPU is a 7100 (ECC supported!) with a Noctua L9i so I never have problems there either. Power draw is a little high at about 70W with 8 3.5" drives spinning, but most of that is the HDDs (rule of thumb is 5W per drive) and the alternative would be spinning them down, which isn't ideal.

    • bitmadness 7 days ago

      I think you meant an 8-core i7, there are no 8-core i5's.

      • atonse 6 days ago

        Oops 6 core i5.

  • stordoff 7 days ago

    Worth noting:

    > By default, this server is turned off. When I want to do some work, I turn the server on using wake-on-lan. I do this with all my equipment except for my router/firewall.

    If it is only on when it is doing work, idle power doesn't matter _too_ much.

    • dr_zoidberg 7 days ago

      The Intel ARK site says the TDP for that Xeon is 115W, and there are two of them. When running at full power (which could happen), the CPUs alone draw over 200W. Another pointer to high power consumption is the 750W PSU. All in all, we're not looking at a particularly efficient machine (even if it's unfair to compare it to 2019 hardware).

      • dredmorbius 7 days ago

        The PSU's rating is its potential to supply power, whilst the components drawing power (CPUs, disks, etc.) are rated by use.

        The actual efficiency is determined by capacity, load, and miscellaneous other factors, but the overhead should be in the neighbourhood of 20% of delivered power.

        If the system idles at ~100 watts, the PSU might add another 20-30 watts to the mix, not 750.

        • brokenmachine 6 days ago

          Agreed with all of that, but I think the OP was referring to the fact that a higher-rated PSU will use more power at idle than a lower-rated one.

          • dredmorbius 6 days ago

            Running a PSU at ~50% of its rated max is generally recommended. At 115W x 2, plus other draws (disks, mobo, blinkenlights), that's around 300W.

            Again: peak draw, when on.

            I'm not arguing that this system is particularly efficient, only that you don't want to add 750W for the PSU to the draw.

            I've not specced out low-power systems myself. I doubt you could get 40 hyperthreads running way below this, though there are definitely some low-power systems which might have a total budget below 50W. Reddit's HomeLabPorn may have some more useful guidance:

            (Not my area of expertise, I've never really stayed current in HW. Though I'm aware power/thermal budget has been a major focus, both mobile and server, for the past decade or so.)

            • dr_zoidberg 6 days ago

              The 24-core/48-thread EPYCs that come with 128MB of L3 cache have a TDP between 155W and 180W. If you want a lower core count (because you have a more modern architecture), you could even go with the 16c/32t EPYC 7282 (64MB L3), which sits at 120W. All in all, you're looking at almost half the power draw (AMD measures it differently than Intel, so about the same) for the same performance.

              I specifically mention the L3 cache size because, while not decisive, a large cache can get you 10 to 20% better performance due to less CPU stalling on cache misses. For comparison, the Xeon in question has 25MB of L3, so we'd be looking at 50MB total, split across two dies (so it doesn't quite work as one 50MB block of cache).

              greatjack613 also mentioned the fact that an AMD Zen 3900x matches the multicore performance (and has better single core perf) for a fraction of the TDP, see

              All in all that was the general idea: yes, you can get a used server for a good price, but we shouldn't forget the efficiency aspect of it, compared to new hardware.

              • dredmorbius 6 days ago

                Good points, and again, not something I keep current in, generally, as the field changes so quickly, particularly relative to my own replacement cycles.

  • louwrentius 7 days ago

    I turn this server off when I'm done experimenting. I turn it on remotely using wake-on-lan.
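
    For the curious, wake-on-lan is just a UDP broadcast of a "magic packet": six 0xFF bytes followed by the target MAC address repeated 16 times. A minimal sketch in Python (the MAC below is a placeholder):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a WOL magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# wake("00:11:22:33:44:55")  # placeholder MAC; use your server NIC's address
```

    The NIC has to have WOL enabled in the BIOS/firmware for the packet to do anything.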

    • apexalpha 6 days ago

      What exactly do you use it for, if it isn't on 24/7?

      I bought a used Intel NUC a while back with 15W idle. It runs 24/7 and does everything I need, from downloading to Nextcloud to Plex, my website, my email, etc.

      I really wonder what you do with this kind of power.

  • CommieBobDole 7 days ago

    You know, that raises a question - how expensive does power get, anyway? Last time I posted my home lab setup (couple of old R610s, Cisco small business switch, ASA 5510), I got a lot of pushback on how much power that consumes and how expensive it is to run. I did the math, and I was looking at maybe $20-$30/month depending on load. Which is not really a lot of money considering how much use I get out of it.

    I'm definitely getting cheaper power than a lot of people at $.08/kWh, but it looks like the US average is only about $.12 - are there places where a couple hundred watts is going to be a significant financial burden on the average IT worker?

    • eropple 7 days ago

      To me it's less a financial burden and more the discomfort of burning electricity just to run a few Docker containers and some databases for personal projects.

      I expect that once, next summer, we get the house kitted with solar and batteries, I will feel a little more okay about it.

  • egwynn 7 days ago

    To contextualize this, ~100W will probably cost between $10 and $15 per month to run 24/7.

    EDIT: Assuming a US household paying between $0.10-$0.20 per kWh.
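
    Sanity-checking that estimate with the arithmetic (my numbers, nothing from the parent):

```python
def monthly_cost(watts, usd_per_kwh, hours=730):
    """Cost of a constant load over an average month (~730 hours)."""
    return watts / 1000 * hours * usd_per_kwh

# 100 W continuous at typical US household rates:
print(f"${monthly_cost(100, 0.10):.2f} - ${monthly_cost(100, 0.20):.2f} per month")
# -> $7.30 - $14.60 per month
```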

    • jdnenej 7 days ago

      Where I am it costs 37c per kWh. It's no wonder almost everyone has solar.

  • abstractbarista 7 days ago

    Some people like me will justify it due to being a huge hobby for them. It takes the place of what other people would spend going out often, traveling, streaming, etc...

    I've got a rack that pulls ~600W 24/7, but that's across 4 machines, 3 UPSes, >30 HDDs yielding >20TB across multiple zpools, and >256GB RAM. And I'm actually using most of that capacity, not just idling away.

    I do hope to upgrade soon to power-sipping platforms like you mention, but currently I'm still on R710/R810 stuff (Westmere Xeons).

    • louwrentius 7 days ago

      That is awesome. Do you have something written up about your setup online?

    • Dayshine 7 days ago

      I have to ask, using it to do what?

      • abstractbarista 7 days ago

        I wish I had something written up to share. But it's honestly a mix of things I love to play with. Several full blockchain nodes (especially uploading to light clients), bittorrent client (seeding tons), plex, home assistant, boinc, tor bridge, grafana, web and email servers, zoneminder CCTV, mandelbulber (clustered rendering of 3d fractals), four different Minecraft servers (some modded), ADS-B plane tracking, weather station and radiation monitoring, and a bunch more.

        It's really just one big toy for me. I'm using Proxmox for high availability of over 30 VMs (I still want to play with containers soon).

        EDIT: OPNSense firewall says I uploaded over 15TB in the last month. Fortunately Google fiber doesn't care!

      • berbec 7 days ago

        Linux ISO plex server.

  • AdrianB1 7 days ago

    Probably an error in measuring the power consumption; there is no way to get down to 15W. Double that or more.

    • rhinoceraptor 7 days ago

      It's believable (if only obtained with a lot of BIOS tweaking); I run an older Dell 1U server as my router. It's got an E3-1220 V2 (2012 era, 69W TDP), one 2.5" SSD, 8GB of RAM, and after a bit of tweaking it sits at around 30 watts.

      • eropple 7 days ago

        It wasn't a lot of BIOS tweaking, but yes, some undervolting was required. ;)

        I also hdparm'd it such that my spinning disks park pretty aggressively.
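
        In case it saves someone a man-page trip: the value passed to `hdparm -S` has an odd encoding. A sketch of the mapping as I understand it (double-check `man hdparm` before trusting it):

```python
def hdparm_spindown_value(minutes):
    """Convert a desired idle timeout to the value for `hdparm -S`.
    Values 1-240 count in 5-second units (up to 20 minutes);
    241-251 count in 30-minute units (up to 5.5 hours)."""
    seconds = int(minutes * 60)
    if seconds <= 240 * 5:
        return max(1, seconds // 5)
    return min(251, 240 + max(1, int(minutes // 30)))

# e.g. park after 10 minutes idle: hdparm -S 120 /dev/sdX
print(hdparm_spindown_value(10))  # -> 120
```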

  • rhn_mk1 7 days ago

    Did you do anything particular to get 15W? I have recently measured my AMD AM1 system, which sits at 40W with 8GiB RAM, 2 HDDs and a couple mostly idle VMs on it.

    • rhinoceraptor 7 days ago

      I'd check the BIOS to see if there are any power saving options, and/or try underclocking/undervolting the CPU.

      2.5" HDDs and SSDs use about the same amount of power, but 3.5" HDDs use 2-3x more. So you could potentially save 12 watts if you're using 3.5" drives.

      Make sure your PSU is a high-efficiency one, and isn't too low or too high in wattage. From what I've read, maximum efficiency is in the 40-80% load range.

      • scns 7 days ago

        Maximum efficiency is at 50% load

    • eropple 7 days ago

      I do undervolt it and I aggressively spin down the drives. (My concern is mostly storage and redundancy, not performance.)

  • tbyehl 7 days ago

    Or it's a ton of compute in a single system for just 96W. Average US rates are roughly $1/yr per continuous watt.
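
    That rule of thumb checks out; a continuous watt works out to ~8.76 kWh/yr:

```python
HOURS_PER_YEAR = 24 * 365                      # 8760
kwh_per_watt_year = HOURS_PER_YEAR / 1000      # 8.76 kWh per continuous watt
usd_per_watt_year = kwh_per_watt_year * 0.12   # at the ~$0.12/kWh US average
print(f"${usd_per_watt_year:.2f}/yr per continuous watt")  # -> $1.05/yr
```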

  • Havoc 7 days ago

    >idles under 15W

    Damn...meanwhile my fairly modern Asus laptop idles at 27W.

    (Probably the GPU...think I forced it to dedicated only)

  • wil421 7 days ago

    I have a UniFi USG, a UniFi 150W switch (drawing maybe 30W), and a FreeNAS box with an i3-8100, a mATX Supermicro board, and six 4TB WD Red drives. My backup power supply says maybe 120W.

    • CSDude 7 days ago

      My USG runs very hot, do you experience it?

      • atonse 7 days ago

        All my UniFi stuff runs hot. The ERLite-3 and switches, etc. I think they count on using the cases as heatsinks.

        • eropple 7 days ago

          Same here. Those things get toasty. I accidentally placed an RPi3 (passively cooled but with a metal case) on top of my USG once, and it got hot enough not just to throttle, but also to cause HDMI artifacting.

      • wil421 7 days ago

        Not really. I have a NanoHD and my father in law has one too. His runs really hot and mine is cool to the touch. Mine has more traffic and was made before his NanoHD.

        • CSDude 7 days ago

          I also have a NanoHD and it's acceptable, but the USG (non-pro) runs so hot that I fear leaving it in the non-air-conditioned cupboard where my internet connection comes in

      • michaelper22 6 days ago

        Yup. My USG and US-8-60W both run hot enough that I don't want my hand on them for very long (the power supplies are pretty warm too).

  • Aeolun 7 days ago

    That costs you maybe $0.50 a day? Maybe $1 if you're unlucky. This seems fairly reasonable?

  • louwrentius 7 days ago

    I've done a measurement with a power meter and it's actually 110W (that includes iLO).

  • fnord77 7 days ago

    would newer hardware idle at even lower wattage than 15W? Or is that about the best one could expect?

    • rhinoceraptor 7 days ago

      For a full desktop-class chip, I think that's about as low as it can go. You could go lower with something like a NUC, but 15 watts is really good. 15 watts idle would cost something like $1.50 to $2 per month to run 24/7.

    • eropple 7 days ago

      Newer, maybe slightly but not hugely. (It's all still kinda warmed-over Skylake.) You could get cheaper with lower-TDP chips, but there are definitely diminishing returns on the floor.

old-gregg 7 days ago

Ugh... that's crazy. Instead, look at the ASRock X470D4U AM4 server motherboard (with IPMI and ECC) paired with something like a Ryzen 2700, which should give you near-complete silence, 25W idle power usage, and 8 modern cores boosting up to 4GHz, for well under $1K


  • whalesalad 7 days ago

    Where can you find 128gb of ECC memory for that board for under 1k (w/ processor and board)?

zokier 7 days ago

I got bitten by homelab fever a few years back and got myself a small server. I had such grand dreams for it that never materialized. Now it sits unplugged in the corner depreciating itself :(

Stuff that I was planning to do:

* Managed VM platform (~"EC2")

* Centralized auth (FreeIPA)

* ZFS NAS (also possibly Ceph) + backups

* Container platform

* Your typical web/email stuff

* Monitoring/alerting/log management

* VPN endpoint (and other more advanced networking stuff)

* Probably something more I have already forgotten

I realized that building a private cloud actually takes serious effort, not just putting some Lego pieces together. There is also a bit of circular dependency in there that makes bootstrapping more difficult, especially on one single box.

  • mey 7 days ago

    To make it simple, since you already have the hardware, I would suggest setting up the server as a VM Host to allow for experimentation with little effort.

    VMware vSphere Hypervisor, Proxmox, or Microsoft Hyper-V Server 2019 are all free options. It makes it easy to experiment with the above in parallel.

    My current VM Host has

    * One VM as a docker host (turtles all the way down) for development tools. Build server, bug tracker, private artifact repo (Sonatype Nexus).

    * 3 VMs as a Kubernetes cluster for experimentation

    Building a NAS is the only thing that would take significant effort and is a project unto itself.

    Unless you have a strong desire to experiment with failure modes (network dropping out, killing iSCSI, online VM fail over), stick to a single box.

  • Art9681 6 days ago

    I provisioned a cloud in my home PC using the free open source equivalents of the Red Hat Cloud Suite. It's not trivial, but I did it in about two weeks.

    * Red Hat CloudForms -> ManageIQ (to manage your virtual private clouds, hypervisors, etc.)
    * Red Hat OpenStack -> DevStack (the cloud itself)
    * Red Hat OpenShift -> OKD (container orchestration)
    * Red Hat Virtualization -> oVirt (for VMs)
    * Red Hat Ansible Tower -> AWX (to automate everything, including deploying all the previous software listed)

    If you plan on doing all of this from one machine, understand that you will need to enable nested virtualization which will require some BIOS/OS configuration to make it work.

  • jdnenej 7 days ago

    Try out YunoHost. It makes this kind of thing trivial: they've written all the config files and gotten single sign-on working with everything, so all you have to do is push a button to install services.

  • bsder 7 days ago

    > ZFS NAS (also possibly ceph) + backuping

    This part, though, is really easy nowadays. FreeNAS is idiot friendly.

    At some point, I need to migrate my current FreeBSD/ZFS setup to something newer, and I'll probably use FreeNAS next round simply because it's so much easier to manage. (Yes, I can do it from the command line--but I do it so rarely that I always have to go reload all the ZFS command set into my working memory.)

    Why didn't you at least do this?

  • vorpalhex 7 days ago

    I've been pretty successful in meeting my private cloud dreams. I use FreeNAS as the bare metal OS, and run Ubuntu VMs which host Docker Containers (using NFS to keep actual persistent data on the underlying FreeNAS box).

    Docker/Docker-Compose isn't quite lego.. but it's awfully close.

  • roel_v 7 days ago

    If you want something that does these things and mostly just works, get a Qnap. More expensive, of course.

867-5309 7 days ago

$1700 seems like an awful lot to spend on this setup, even with 1yr warranty. I recently bought the following from eBay on a £500 budget:

* ASRock Rack C602 mobo
* 2x Xeon E3-2650Lv2 (20c/40t)
* 8x 8GB Samsung DDR3 ECC
* generic EATX case with fans
* 2x CPU heatsinks
* 3x case fans
* XFX fanless modular PSU

if the price and specs alone aren't compelling enough:

* it runs idle at ~50W
* it has a similar Passmark score of >15000
* it has 4x GbE ports
* it has 4x PCIe 3.0 slots (!)

I'd never heard of iLO, which other commenters mention as a selling point, but a quick search leads me to believe this is HP's take on IPMI, which this mobo has.

originally built as an HTPC server, I had 2 main criteria for my build: cool and quiet. hence opting for low powered processor versions, 0db PSU and PWM case fans. if you don't require these criteria you can knock ~20% off the budget.

there was so much power being unused that I binned a few other devices (namely crappy ISP-provided router and tv box) and made this build the heart of my home network. it is now my family's router, firewall, adBlocker, movie and tv server, game server, free cloud storage manager (synced to every household device), OS updates cache, music streamer, torrent client/server, VM server, web server, database server, VPN client, proxy server, etc., the list is virtually endless. these are all run simultaneously with ample resources leftover for frequent workstation usage.

I should admit that I thoroughly researched every component's specifications and price, and as such it took me around 3 months of waiting to source them.

I also admit this use case and learning curve is not for everyone, but it was ultimately a rewarding experience for both my brain and wallet.

  • louwrentius 7 days ago

    That's the first good example I've seen that really undercuts my price, but it also has 60% of the single-threaded performance. Probably still enough; I can see that it works for you.

    I admit I was lazier; I have a separate room for this machine so I don't mind the noise that much. I just configured what I wanted and ordered it. I only did some research on the CPU performance.

    • 867-5309 7 days ago

      yes I only really notice the single core performance on OpenVPN streams which, despite the CPUs' AES-NI and offloading to mobo's Intel-branded Ethernet chips, still cap out at ~80Mbps. it's fine for most internet needs and a few simultaneous video streams, but really bottlenecks torrenting and shifting large files around the web, on 1GBps FTTH. I toy with the idea of buying a generic fanless Chinese 4-port i7-8th Gen 15W 'U' variant to handle most networking duties, thus rendering the behemoth to be purely on-demand, WoL, semi-idle, etc. which would cut costs on electricity long term, but with such devices currently priced ~$300 and with Brexit threatening to bloat that, I am not in any rush. plus it gives me more time to research / discover / await / implement a multi-core VPN solution

      • 867-5309 7 days ago

        edit/corrections: E5-2650Lv2 not E3-2650Lv2 1Gbps not 1GBps (I wish! :)

ChuckMcM 7 days ago

For lab set ups "last year's" enterprise hardware can be a good deal. A couple of notes on the article

1) Setting up a RAID array isn't that difficult and it makes for more reliable storage.

2) Using dual supplies actually lowers the fan noise because the supply running at half power generates less heat than one running at full power. You can plug them both into the same outlet strip :-)

3) These things have a "lifetime" which is the point where things are easily found on the web which support them. And then they become "anchors" without all that support. Very carefully and diligently download and archive all of the necessary software, drivers, manuals, and extra cables for the system so that in another 5 years when it breaks you can reconstruct it successfully.

  • louwrentius 7 days ago

    1) That is true. My storage is all SSD and I'm deploying everything through Ansible, so I can always redeploy.

    If I really need data backup, I can use one of the other SSDs or even the single spinning disk I put in as a backup target.

    2) That's interesting, I don't think the noise comes from the PSU though, mainly the six case fans.

    3) True point. I'm not too worried. The machine is fully supported by Linux (no drivers required) and the latest SPP is applied. I never expect to do hardware changes/upgrades down the road. And in five years, whoever's around then can deal with it ;-)

kart23 7 days ago

Boot time of 4 minutes! Is that normal for servers? And why would it be this long if so?

  • sh-run 7 days ago

    It's been a while since I was in a position where I regularly watched servers boot, but 2015ish I was doing a lot of 'boots on the ground' sysadmin and virtualization work. Servers do a lot of additional testing during boot-up, temperature sensors, RAID cache batteries, memory and RAID arrays are all checked. Some of those checks can be disabled, but you don't really reboot production servers regularly so you typically wouldn't want to. The extra 3 minutes of boot time is much easier to deal with than a bad host coming online.

    On top of all that it's pretty standard (at least in the VMware world) to store the OS on an SD card. So the OS has to be read into memory and ESX is kinda slow to boot even if installed to a disk.

    • kart23 7 days ago

      makes sense. thanks for the answer!

  • nodja 7 days ago

    Yes. It's just not a priority for servers.

    The reason it takes this long is that it does a bunch of self tests, and then it has to load all the ROMs for the components (NICs, HBAs, etc.) which often trigger messages like "X Loaded, press CTRL+L to configure" which stay on screen for 5-10 seconds each.

  • whalesalad 7 days ago

    Yes and no. There is a longer boot time because the POST process is a bit more intense; ECC RAM is a big part of that. Chances are this particular case is because the boot order needs to be reconfigured and it's hunting for PXE/network stuff before hitting a timeout?

    If you yield control to some kind of Broadcom controller it'll do all kinds of shit before giving up and handing you back to boot a disk.

    • louwrentius 7 days ago

      I boot straight from HDD so no waiting on PXE boot timeouts or other shenanigans (boot media, etc).

  • aetimmes 7 days ago

    I rebooted a series of Oracle SPARC machines a couple of weeks ago that took about 20 minutes to boot/self-test.

    Servers are built to be reliable. It is better for them to boot slowly and correctly than quickly with silent memory failures.

  • edwintorok 7 days ago

    Unfortunately it is quite common for servers to boot more slowly than desktops if they have a lot of CPUs, a lot of memory, or just a very slow BIOS implementation that spends a lot of time probing and initializing everything. Servers with hundreds of cores are a lot slower to boot than my laptop too.

Havoc 7 days ago

I wonder if an old server can be used for heating - literally exhaust the air into a duct and pipe it into the desired room.

In that context, crazy inefficient boxes make way more sense. Esp. with the reduced e-waste contribution

  • opportune 7 days ago

    Yep, it can. Although you would want the server to be spending time doing something productive rather than just idling or spinning worthless cycles

    There are people who use GPUs and ASICs mining cryptocurrency as heaters for greenhouses. Spending the energy on crypto mining is basically a way to recoup some of your energy costs.

    • Havoc 7 days ago

      >doing something productive rather than just idling or spinning worthless cycles

      Saw a site recently that rents out home GPUs for TensorFlow, Airbnb-style. Nifty idea

      (don't recall the name sorry)

      • dest 7 days ago


  • eemil 6 days ago

    This is exactly what I will be doing next month. Moving into a house with electric heating, so I'll leave my desktop and home server on to mine cryptocurrency most of the time. Since the electricity will be effectively "free", I can make some profit. And since I can also deduct electricity used from taxed income (in my jurisdiction, "expenses for the production of income") that should in itself reduce heating costs around 25-30%.

  • gdubs 7 days ago

    They make water heaters now that use ambient heat in a room to transfer that heat to the water — I believe they’re essentially heat pumps. Maybe you could put your water heater in a closet with the server and make use of some of the wasted energy.

  • jdnenej 7 days ago

    Using it only as a heater would be a bad idea. Heat pumps are much, much more efficient than resistance heaters, which is all your server would be.

otter-in-a-suit 7 days ago

Sometimes I wonder whether going the "homelab" route would have been easier/cheaper for me. I built my server a couple of months ago, from scratch.[1]

However, being forced to use a proprietary tool (ssacli) and limited drive compatibility do not sound desirable. This seems like an odd limitation - is this a normal thing with these types of projects/machines?


  • louwrentius 7 days ago

    Nice blogpost!

    The tool is not mandatory. You can fully configure storage by booting into the storage utility of the RAID controller.

    It just saves you a ton of time.

hrangozz 7 days ago

My home lab system of choice is the lenovo m92p tiny (or updated version)

It's a headless system in a tiny enclosure. i5 processor with up to 16GB ram. Power consumption is nice and low. And they feature remote management via serial over IP, remote power cycle, etc.

They can be had for less than $100 on ebay, and for me have been rock solid.

muro 7 days ago

I didn't buy a single computer currently at home new; hardware a few years old is almost new and indistinguishable in performance. However, I would stay away from servers - workstations (e.g. Z440) can be bought with almost the same hardware for similar cost, yet are quiet.

  • davidgerard 6 days ago

    been eyeing up workstations for a gaming rig. Second-hand, 6-12 months' warranty, a ton of CPUs, a ton of memory, Nvidia Quadro cards aren't directly equivalent to the same generation of GTX but your game will run just fine ...

    • davidgerard 6 days ago

      aaand we just ordered a Dell T3610 (from 2014) with four-core Xeon E5-1620v2 and Quadro K4200 4GB (also from 2014) and 32GB RAM. Just under £600. The loved one is looking forward to her new video production workstation and gaming rig.

      • muro 5 days ago

        Maybe a newer graphics card would make sense (depending on the games), otherwise nice find!

        • davidgerard 4 days ago

          oh yeah, a K4200 is about £100-200 second-hand. About equivalent to a GTX 960/970 for gaming. Hot stuff for 2014!

          The loved one spent about five minutes yesterday evening just watching the rendering of the water in WoW. Quite the improvement over her HP Envy laptop, l33t as it was for a laptop ...

          The PC is a huge black monolith. Easy to open, add cards to, etc - extremely maintainable. Also just about silent, hauntingly so.

          But really - everyone after a new box should go to eBay, PCs, search on "workstation". There's lots of ex-corporate beast machines just waiting for a home. Put 20 threads to work on your compilation!

          • muro 20 hours ago

            I think a K4200 is the equivalent of gtx760, so a generation earlier than 960.

            The K is for Kepler :)

blackflame7000 7 days ago

I built a very similar computer, and tbh I prefer to use an 8th-gen Core i7 at 4.8GHz, as it is faster and less power-hungry for most everyday applications. JavaScript, for example, is mostly single-threaded, and you can see a noticeable difference in web page loading times.

sgt 7 days ago

Been there, done that. Today it's obvious that your home server should be one place only; in the Amazon cloud, running as individual lambdas and perhaps some m4.large EC2 instances.

Jokes aside, I ran a DL160 server at home for a couple of years until the motherboard started acting up. The fans would all go to 100%, and then back down again, then remain at 100% for a while.

Then there were intermittent crashes.

The only solution was to replace the entire motherboard. At this point I stopped looking and replaced the whole darn thing with an old headless Macbook Pro running Linux.

For my purposes it is fine, and I haven't looked back. The power savings are great too.

  • louwrentius 7 days ago

    I haven't touched upon this in the article, but for $1700 you can get quite a lot of cloud capacity.

    Still, even with power cost factored in, I think I'll be better off with my own hardware.

    • sgt 6 days ago

      I also prefer having something local. Then you're also completely immune to any network issues outside of your house.

briffle 7 days ago

I bought a 2013 Era Workstation last year. (Dell Precision T3600) It had a 6 core xeon, and 32GB of ECC ram, and was $300 (I had my own hard drives, and got a decent video card)

It works great as a Linux workstation, but it's just so hot. My office is the warmest room in the house. I actually have to run a window air conditioner in the room to make it comfortable (I work from home, so I use the office all day). I imagine it's probably one of the largest power consumers (including the cost of cooling with the window AC) in the house.

I look forward to tax season; I am going to replace it with a new AMD 3700X-based system next year.

jagger27 7 days ago

I picked up a Dell R820 with quad 8-core (E5-4650L) and 96GB (24x4GB) RAM for USD$700 in March this year. Because of the memory mezzanines, it’s only half full in this configuration. And if I find a decent deal on E5-46xx V2s at some point I could get up to 96 threads.

I even managed to get it to boot from a PCIe NVME drive with an internal USB stick running the Clover bootloader (yes, the Hackintosh bootloader) to bootstrap into Ubuntu. It makes for a great VM server.

Also helps that iDRAC 7 is aeons ahead of the horrible iDRAC 6 servers I was using before.

  • whalesalad 7 days ago

    > iDRAC 7

    Being able to completely control every aspect of the thing from my Mac via a simple Browser + VNC combo is incredible (even when it is powered off)

parkaboy 7 days ago

When I was looking at Monero mining ages ago, I built a rig using the Dell PowerEdge R810. It has 4x CPU sockets, in which I have Xeon E7-4860s (for mining that uses AES) that each have 10 cores/20 threads - 40 cores/80 threads total. I found cheap deals on both by scouring eBay. The entire setup cost me < $500. It was noisy AF and consumed something like a few hundred watts, maybe as much as 500. Needless to say, I have not been running it. Does anyone have some tips on where to get cheap power? ;)

  • OrangeMango 7 days ago

    > Does anyone have some tips on where to get cheap power?

    Do you live in a location that offers real-time pricing? Where I live, you can opt-in to such a scheme and then monitor an API from the power company and adjust/schedule your power usage to favor times in which electricity is very inexpensive. [1] Sometimes, you might even get paid to consume electricity:

    > Negative Prices: With real-time hourly market prices, it is possible for the price of electricity to be negative for short periods of time. This typically occurs in the middle of the night and under certain circumstances when electricity supply is far greater than demand. In the market, some types of electricity generators cannot or prefer not to reduce electricity output for short periods of time when demand is insufficient, and as a result some generators may provide electricity to the market at prices below zero. Since Hourly Pricing participants pay the market price of electricity, they are actually being paid to use electricity during negative priced hours. Delivery charges still apply.


  • semi-extrinsic 7 days ago

    Depends on where you live, but round here, I use both the GPU mining rig and the home server (Dell R630) to heat my basement. For >70% of the year I would be running electric heaters down there anyway, so it's "free" electricity.

zantana 7 days ago

I'm always wondering what are people doing in these homelabs that they need this type of hardware. I worked for an MSP and sometimes people would take home an old DL380 or something, but it always seemed like a waste.

With a 16GB NUC I can easily provision 6-7 small VMs without any issues, which is enough for general self-hosting and exam prep. With Docker you can run a simple instance of just about anything and leave it up all the time.

  • louwrentius 7 days ago

    Maybe I have to admit my purchase is not entirely rational. But it's fun!

    • zantana 6 days ago

      Well I can't argue with that.

sigmonsays 7 days ago

Where do you store this machine? How is the noise? also, I wonder about the heat generated. Does it keep a room toastier than the rest of the house?

  • OJFord 7 days ago

    Many examples on - including even more power-hungry systems (or several of them in a rack or racks).

    The short answer is Yes, servers built to be servers are designed to get the heat out and keep internal temp. down - so noisy fans and as much heat as you generate outside of the box.

    Of course, nothing stops you replacing the fans with quieter ones (sacrificing at least one of cost or airflow) or putting consumer hardware (which has different design goals that you might prefer in the home) in a rack-mount chassis.

  • sannee 7 days ago

    I live in a dorm room and keep an R610 about three meters from my bed. You can use some magical IPMI commands to disable the internal fan feedback loop and replace it with a custom one (the defaults usually cool the server to around 30C, which is not really necessary). This makes the server quieter than the small fridge I also have in the room.
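
    For other Dell owners: the "magical IPMI commands" are raw ipmitool writes. A sketch that only builds the commands (the 0x30 0x30 opcodes are Dell-specific and undocumented, so treat them as folklore and test carefully):

```python
def dell_fan_commands(duty_percent):
    """Build the Dell-specific (undocumented) ipmitool raw commands that
    disable the automatic fan curve and pin fans to a fixed duty cycle."""
    duty = max(0, min(100, duty_percent))
    return [
        # take manual control of the fans
        ["ipmitool", "raw", "0x30", "0x30", "0x01", "0x00"],
        # set all fans (0xff) to the given duty cycle
        ["ipmitool", "raw", "0x30", "0x30", "0x02", "0xff", f"0x{duty:02x}"],
    ]

# 20% duty cycle; run these via subprocess on the actual host
for cmd in dell_fan_commands(20):
    print(" ".join(cmd))
```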

    The 100W power consumption definitely makes the room warmer. It is approximately like having another person present.

  • neogodless 7 days ago

    The article has some answers to these questions:

    > My very subjective opinion is that at 50 dB the sound level is reasonable for a server like this, but it's definitely not quiet. I would not be able to work, relax or sleep with this server in the same room.

    > Although this server is fairly quiet at idle, it does need its own dedicated room. When the door is closed, you won't hear it at idle, but under sustained load, you will hear high pitched fan noise even with a closed door.

Havoc 7 days ago

This is the reason I browse /r/homelab on Reddit. It doesn't make any sense to me, but it is somehow cool anyway

dev_dull 7 days ago

> The server reports around 98 Watt of power ... by default, this server is turned off.

This is a poor tradeoff to me. A low power computer can be left on all the time, and be there for you when you need it for things like openvpn or ssh tunnels etc.

Leaving this on and idle would cost around $20 a month in electricity here in the bay area.

  • louwrentius 7 days ago

    It would probably cost me 2 euros per year per watt. So I think it's about the same cost.

    I have other low-power hardware for the OpenVPN / firewall /routing stuff so I don't need this machine turned on.

adolph 7 days ago

It would be interesting to compare this approach with an updated Atwood's Scooter Computer.

  • louwrentius 7 days ago

    sysbench --test=cpu --cpu-max-prime=20000 run

    I got 10 seconds single thread on the DL380p. The scooter is indeed the opposite of this (awesome too btw).

    So to have fun: 1 core is ~3 scooters. So this box can do the work of 60 scooters ;) (only counting real cores).

wodenokoto 7 days ago

In this context, what is a lab setup? A LAMP stack? A tensor flow training server?

  • louwrentius 7 days ago

    In my case a KVM server to spin up virtual machines in which I can test anything from playing with Kubernetes to Grafana, Elasticsearch, Ubuntu MAAS, or whatever I want to toy with.

timw4mail 7 days ago

So...what's the real advantage of using server-grade hardware for this use-case? Wouldn't a desktop/workstation with consumer parts work just as well, and give better performance for a similar price?

  • louwrentius 7 days ago

    Some examples are reported by people in this post. Where I live, I could not get 20c/40t + 128GB for this price point any other way.

holy_city 7 days ago

I think for $1700 with the memory/storage capacity that's a great deal, but those benchmarks can be topped by the higher-tier consumer CPUs today.

The point on people being happy with slower CPU cores is kind of weird to bring up with a server. Most games don't push CPUs that hard, you usually need a really expensive GPU before you see noticeable benefits in gaming from faster processors.

Having done some core-critical work for the last few years (media processing/systems programming), my recent upgrade from a 4th-gen i7 to a Zen 2 CPU is paying off in spades. If I were building a server to do some of the batch processing stuff I'd like, I would definitely invest in a faster, cooler, more power-efficient machine. But that's just me. I don't think I could beat that price point though.

  • whalesalad 7 days ago

    Dunno... I have an R720 with similar specs (half the RAM, though) and it was only $400. I also idle around ~100W because I ripped all the SAS 10K drives out and put SSDs in.

    It sits powered-off most of the time though because I haven't been able to put it to good use, yet.

    For a while it was running my UniFi controller + Pi-hole... but you don't need the UniFi controller unless you are actively performing maintenance, and Pi-hole happily hums along on an RPi 3 that uses far less power.

aussieguy1234 6 days ago

How good would this box be at mining CPU based cryptocurrencies, like Monero?