I've been eyeing a NanoPi M4 [1] with a "SATA HAT" [2] for replacing my ageing QNAP NAS. It should have specs similar to the OP's build, with the added advantage of a 4-port SATA controller sitting directly on a PCIe bus.
I tried searching the blog post and this thread for use cases but didn't find even a mention. It seems that people use these NAS boxes with something like 4 to 16 TB of capacity at home, but I can't imagine a non-professional reason to store so much data.
Personally, if I wanted to store valuable stuff locally, like photos, I'm guessing 250GB of space would be enough for a lifetime.
So what do people use these huge personal storage systems for?
I have 96TB at home, though that's much more than I'm currently using.
I store the usual music (298GB), movies (3.2T) and TV shows (3.5T) which I think is what most normal people would do.
Where the less normal storage comes in is my own photos (5.2T) and videos (2.6T), an archive of all my "rewards" from Patreon (765G), an archive of English manga translations (5.8T), an archive of anime/manga fanart (7.2T) and a bunch of home security camera footage (2.2T).
In total I'm currently at 42T used but I see that increasing, hence the 96T of available disk.
You can find a lot of people archiving YouTube videos, educational content, porn, games, pretty much anything really, over on Reddit's /r/DataHoarder. Almost nobody there is a professional.
VMs will eat up that space like nobody's business. Especially if you like to keep specific versions of OSes for various reasons (exploit development, home lab environments, etc.).
Especially with the cost of cameras that shoot 4K video coming down. You can get a good 4K camera for under $1,000 (e.g. the Blackmagic Pocket Cinema Camera), and there are cheap cameras (under $100) that can shoot 4K.
I have about 2.5TB of data, quite a lot of which is movies and TV. I would guess around 0.5TB are personal data such as photographs, important documents, my company records etc.
I have a 5TB capacity NAS, which ought to be OK for the foreseeable future.
Other folks I know with larger storage requirements tend to be into virtualisation tech/openstack etc.
My setup, which I need to finish, is an old laptop.
You can get a SATA hard drive adapter for the optical slot on many laptops, so you can put in two 2TB hard drives. My laptop only has USB 2.0, which is a downside. But you get a built-in UPS (the battery), and my battery is an OEM high-capacity one which still works OK.
So the only new parts are the adapter (< $10 USD) and the SATA 2.5in hard drives. Just be aware of the height restrictions: some internal drive bays on laptops can only take 7mm or 9.5mm drives, not the 15mm drives that can be used in a NUC.
My old laptop is a quad-core AMD (which is OK-ish) and has 8GB of RAM, so it will be running some other stuff as well.
Nice, how much will it end up costing for the spec and accessories you want? I bought a HP microserver a while back and am curious how the cost compares.
I shudder at the idea of using USB connectors for important data. I have recently purchased a Kobol NAS https://kobol.io/. It is amazing because it has the following features at a good price point:
It only says "preorder now", or am I missing something? And given the price point of 300 USD (with taxes and shipping), you are well into x86 Atom territory; heck, you might even get a good deal on a Xeon-D...
I built a very similar home NAS with the newer RockPro64 and a pair of 4TB HDDs in RAID 1, all on top of Debian[0]. I found OpenMediaVault to be overkill, and it kept me from really understanding what was going on. Plus there are a million guides to setting up SMB, rsync, Borg, etc.
The RockPro64 hardware is great. Very performant, especially when using the PCIe to SATA card instead of USB like the OP did.
I asked this on another thread, but I don't understand why you would see real world difference between PCIe vs USB3.
When using this setup as a NAS, isn't your bottleneck the gigabit network from machine to machine? You'll saturate the 1 Gigabit link (or even 2 Gigabit duplex) before you can reach 5 Gigabits (USB 3) or 6 Gigabits (SATA III over PCIe).
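For the curious, a back-of-envelope comparison in MB/s (decimal units, ignoring protocol overhead; the 8b/10b figure for USB 3.0 is the usual rule of thumb):

```shell
# Rough throughput ceilings in MB/s.
# 1 GbE carries 1000 Mbit/s; USB 3.0 signals at 5 Gbit/s but uses 8b/10b
# encoding, so only ~4 Gbit/s of that is payload.
gbe=$(( 1000 / 8 ))
usb3=$(( 5000 * 4 / 5 / 8 ))
echo "GbE ceiling:   ${gbe} MB/s"
echo "USB 3 ceiling: ${usb3} MB/s"
```

So yes: over a single gigabit link, the network is the ceiling long before the disk interface is.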
RAID seems very popular for 2-disk setups, but I don't like the cost/benefit. With RAID under about 5 disks you aren't getting much speedup, and under 4 disks you can't do a proper fail-out and rebuild of a larger multi-disk volume.
I use 2 x 4TB disks in my homeserver, but I keep one online and have a script that brings up the other one periodically and rsyncs everything. This gives me a local backup, something RAID lacks, so I'm protected from fat fingering or accidentally deleting stuff. I also have very minimal downtime, because I can mount that drive in place of the primary drive in just a few seconds.
I run xfs on the primary drive, and btrfs on the mirror, so I can take snapshots after I rsync and maintain differentials easily.
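A minimal sketch of that kind of job, assuming hypothetical mount points (the `run` stub only prints each step, so you can eyeball the sequence before swapping it out to actually execute):

```shell
#!/bin/sh
# Periodic mirror: bring up the offline drive, rsync, snapshot, put it away.
# SRC/DST are made-up paths; adjust to your own layout.
SRC=/mnt/primary            # xfs primary (assumption)
DST=/mnt/mirror             # btrfs mirror (assumption)
SNAPDIR="$DST/.snapshots"

run() { echo "+ $*"; }      # dry-run stub: change body to "$@" to execute

run mount "$DST"
run rsync -aHAX --delete "$SRC/" "$DST/current/"
run btrfs subvolume snapshot -r "$DST/current" "$SNAPDIR/$(date +%F)"
run umount "$DST"
```

The read-only snapshot after each rsync is what gives you the cheap differentials mentioned above.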
My point is, you should consider getting rid of RAID and just using a bare drive or LVM volume.
Agreed, RAID makes little sense for quiet home storage that is used only occasionally. The additional disk is better used for regular backups. If you need to minimize downtime and have the storage accessible 24/7, then RAID makes sense.
FWIW, I run a 10 x 4TB raidz2 NAS with 4GB of RAM using ZFS on Linux.
ZFS on Linux isn't as memory-efficient as it is on FreeBSD, whose file system cache / memory manager architecture is much closer to Solaris's, but it works fine.
ZFS memory consumption really rockets when you want to use dedupe, which basically wants to store a hash of every block in memory. The sweet spot for its use is things like multiple VM images, where there's a lot of duplication inside large files that are otherwise different. But there are often ways to structure things to gain back the same space, e.g. with stackable file systems.
Without dedupe, and for a NAS scenario where there isn't going to be a massive working set (typically source or sink for backups, streaming video, etc.), 4G has been more than sufficient for me for years.
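To put a number on "rockets": ZFS keeps an in-core dedup-table entry per unique block, commonly quoted at roughly 320 bytes each. A quick estimate for a hypothetical 40TiB pool of 128K blocks:

```shell
# Back-of-envelope ZFS dedup-table (DDT) RAM estimate.
# The ~320 bytes/entry figure is the commonly quoted rule of thumb,
# not a measured value; pool size and block size are assumptions.
pool_tib=40          # hypothetical pool size, TiB
block_kib=128        # ZFS default recordsize
entry_bytes=320      # rule-of-thumb in-core DDT entry size
blocks=$(( pool_tib * 1024 * 1024 * 1024 / block_kib ))
ddt_gib=$(( blocks * entry_bytes / 1024 / 1024 / 1024 ))
echo "~${ddt_gib} GiB of RAM just to hold the dedup table"
```

Which is why "don't enable dedupe" and "4GB is plenty" go together.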
Have you considered Btrfs? I'm using it on my primary OS drive and my backup data drive. I keep my primary data on xfs to give me some cross platform resiliency. Btrfs allows me to take snapshots and uses space very efficiently. I haven't tried the deduplication features.
I've been running an 8TB homeserver for the cost of an Optiplex on eBay ($80) and two 8TB external drives on Amazon ($300). I cronjob an rsync every night and that's that. Cheap, works great, and (surprisingly) I understand how it works. I also have cloud storage for the subset of things I don't want to lose.
A thing I'm working on right now is for the rsync disk to be on a switch I control with an Arduino/pi so it's only powered during the backup -- I reckon the drive will last longer that way. Homeservers are fun :)
If you're OK losing 24 hours of data and aren't a performance junkie, the backup plan becomes trivial and RAID isn't really necessary.
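In case it's useful to anyone copying this pattern, the nightly job can be a single crontab line (paths and times here are invented):

```
# /etc/crontab entry: nightly mirror at 03:30, output logged for review
# m  h  dom mon dow  user  command
30   3  *   *   *    root  rsync -a --delete /srv/data/ /mnt/backup8tb/ >> /var/log/nightly-rsync.log 2>&1
```

The trailing slash on the source matters: it copies the directory's contents rather than the directory itself.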
Daily rsync alone isn't enough. Any data which gets corrupted on the main drive will, within 24 hours, also be corrupted on the 'backup'.
If you're confident enough that your computer won't get stolen, burn, or zapped by a power surge, then you can keep the same hardware setup you have, but:
1. Use borg backup (or restic) instead of rsync (so that you can restore to any day in the past, not only to yesterday)
2. Disconnect the 'backup' drive when it's not being used (so that it's protected from malware or fat fingers).
Even better would be:
3. Have a separate computer host the borg backups, and have it run 'borg serve' in append-only mode (so that, no matter what you do the main computer, you cannot destroy or corrupt past backups).
Even better:
4. Have this separate computer in a different physical location (so that you're protected from fire, theft and the like).
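For anyone wanting to set up point 3: on the backup host, the append-only restriction can be pinned to the client's SSH key in the backup user's ~/.ssh/authorized_keys (repository path and key below are placeholders):

```
# Forces every connection from this key into an append-only borg serve,
# so a compromised client can add backups but never delete or rewrite old ones.
command="borg serve --append-only --restrict-to-path /srv/borg/laptop",restrict ssh-ed25519 AAAA...example laptop-backup
```

Note that append-only mode defers deletions rather than forbidding the commands outright, so read borg's docs on it before relying on this for ransomware protection.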
Not the parent, but I use a similar setup. I avoid this by keeping my 2nd drive formatted as btrfs (primary is xfs). After I rsync to the btrfs drive, I take a snapshot.
I only rsync every week; my data doesn't change often, and if it does, I can trigger a manual rsync using the same script in my /etc/cron.weekly/.
edit: I do use borg to backup to my cloud storage server, since it's encrypted.
I didn't think corrupted data would change the size of the file or the mtime? If it doesn't then rsync wouldn't overwrite the second drive as it would see it as an existing, good copy. Unless you had rsync using checksum comparisons of course.
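Right: rsync's default "quick check" only compares size and mtime. A small demo of how same-size corruption slips past it (scratch files in the current directory, removed afterwards):

```shell
# Two files: same length, different bytes; then copy the mtime across.
printf 'good data' > orig.txt
printf 'bad! data' > copy.txt
touch -r orig.txt copy.txt
# rsync -a (without --checksum) compares size+mtime, which now match:
quick=$([ "$(stat -c '%s %Y' orig.txt)" = "$(stat -c '%s %Y' copy.txt)" ] && echo same)
bytes=$(cmp -s orig.txt copy.txt && echo same || echo differ)
echo "quick check sees: $quick; actual bytes: $bytes"
rm -f orig.txt copy.txt
```

Passing --checksum makes rsync compare content (much slower), or you can checksum at the filesystem layer as the btrfs/ZFS folks in this thread do.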
You're probably right for corruption caused by hardware failure or filesystem bugs. I was thinking about corruption in the wider sense, e.g. caused by application bugs.
borg looks awesome. It comes up all the time when I research efficient data storage replication (buzzhash) so now I'm really going to try it out, thanks for the tips! :)
The heads fly on a fluid cushion when spun up, they must physically land when the thing spins down. The head landing/takeoff operations are sources of physical wear.
Assuming a brushless motor and quality sealed ball bearings, it could be argued that the drive wears less kept spun up all the time.
My understanding is that spinning them down is primarily a power-saving measure.
When I last read up on this, which admittedly was over a decade ago, head parking was just a matter of locating the heads over an unused region of the platters. The head still goes through a landing and takeoff cycle, so the head still sees physical contact. It's just avoiding doing it over the data portion of the platters.
zfs or bust. If you are not using zfs for your long term storage and backups, you might want to consider what exactly you are doing and why you aren’t paying someone else to make sure your 1s stay as 1s and 0s as 0s.
To be fair, ZFS isn't the only game in town. Depending on what you're doing, gluster, ceph, minio, restic/borg/etc. may prove sufficient. I like ZFS, don't get me wrong, but if the goal is bitrot protection there are a number of different tools to solve it at different places in the stack.
For a NAS, I don’t think backup tools with checksumming would be sufficient. I also am not a fan of them as the only tool to prevent bitrot. If borg detects that a backup is corrupt, my option is what? To ditch that backup? zfs can not only detect it, but also recover from it. That’s not something that most tools can do.
To be fair, I don’t know about the capabilities of gluster, ceph, and minio.
Has it gotten less crash-happy and prone to eating your data stash? Last I checked, distros were putting it into their install options while it was still happy to crash the FS with major data loss. I haven't checked on whether it's worth it since.
It's been the default filesystem for openSUSE for at least a few years. I've been using it on root filesystems and data volumes with no problems. I was backing up to a 1TB btrfs volume; it filled up with snapshots, but cleaning them out didn't cause any problems. I've since moved to a 4TB drive, and I rsync another drive to it weekly and take a snapshot. I've had no issues.
"I was averaging 60-113mbps depending on the size of files being moved. This was a successful build in my mind! After selling my QNAP NAS, I came out ahead on my expenses for the build."
113Mbps is considered "high performing"?
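It's probably a units mix-up: if those figures are actually megabytes per second rather than megabits, the top end is simply a saturated gigabit link.

```shell
# 113 MB/s expressed in Mbit/s -- right at gigabit Ethernet's ~125 MB/s ceiling
mbps=$(( 113 * 8 ))
echo "113 MB/s = ${mbps} Mbit/s"
```

Read that way, the build is performing about as well as the network allows.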
"on a shoestring budget with the Rock64 SBC"
"TOTAL: $172.39 CAD before tax"
This is without the cost of the hard drives, doesn't seem to include the cost of an enclosure, and results in:
* 1 Quad-core A53
* 2GB RAM (the 4GB version is $45 USD without shipping, so it couldn't have been this one), which you can't ever upgrade
He is from Canada; your suggestion is clearly not in CAD and, from a brief eBay search, is at least $500 CAD, which is 3x what he paid and much more than his maximum budget.
He also switched because of the noise of the previous setup. I feel like your suggestion doesn't cover that at all.
Other than higher performance (which doesn't seem to be a requirement for him, considering his QNAP gave much worse performance and that still wasn't an argument for new gear), what would he get for your 3x cost?
Yup, I've been running an N40L microserver for almost 10 years now. Put 16GB in it and some hard drives and never looked back. It just works.
I first ran FreeNAS, but eventually moved it to Ubuntu 18.04 so that I could run some containerized services on it (file sharing, Plex, and a remote virtual desktop).
16GB RAM is about right for running ZFS. 8GB worked OK, but I'd notice slowdowns from time to time.
If this machine ever decides to pack it in, I'll get a gen10.
If you ever get the urge to play around with FreeNAS again, they really improved linux emulation and virtualization; I've been pleasantly surprised with how much I can get done with bhyve.
That's only if you feel like tinkering, though; Ubuntu's a great choice as well.
I had the N40L. The power supply fan was very loud considering how dog-slow it was. I hope they fixed that. I use a Raspberry Pi now, which is also slow, but it's much, much smaller and cheaper in every way.
I always suspected mine must be faulty since it was so loud. You could swap the power supply to something else but that was quite costly and left you with a sea of wires sticking out of your otherwise very smart case.
I have been running an N36L with FreeNAS for 7 years now. I don't keep it running all the time, so it's not really a "NAS" anymore, but more of a backup system since I configured the disks to use RAID-Z.
Well, your mistake is thinking that the Rock64 has power consumption anything like an Opteron's.
A low-power Opteron is something like 55W just for the processor; a Rock64 is something like 5W for the whole SBC.
All of the always-on parts of my home network are built out of compact low-power parts that are faster than the Rock64, and, no surprise, it's expensive.
I used a nano-ITX Celeron that's much faster than a Rock64 and gives me 2 GbE links to bond. It would have cost him 10 dollars more and would use 2-5x the power.
When I built an ARM based NAS, I chose to use the Banana Pi BPI-R2 because it was one of the very few boards with 2x SATA ports using PCI-E. It worked fine and got good speeds. It's difficult to run a current kernel on the BPI-R2 though (it is slowly creeping towards mainline). If I built another NAS again I'd just use an x86_64 or aarch64 SBC with a PCI-E port and connect a good SATA controller.
I'm surprised I don't see more people with my NAS setup. I run a HP Gen8 MicroServer which cost about £120 for the base machine a few years back. The default specs were a bit feeble but perfectly serviceable for running a NAS, though mine has now been modified a bit since then to sport a quad core E3 and 16GB of ECC RAM. If you pick up an adapter lead you can fit an SSD in the optical drive bay (which I have done). One nice thing about the microserver is it has two built-in ethernet ports, so mine run in a link-aggregation group, which is well utilised (I run a lot off this little server, like Plex, ZoneMinder etc).
I run ZFS on Ubuntu (ZFS is superb and one of the best things I've learnt in years) with a RAID 10 setup (2x2TB mirrored pair, 2x4TB mirrored pair). I'm actually just resilvering one of the 4TB drives right now as I finish migrating to them from a couple of old 1TB drives. RAM usage was high when I used dedup but now I've ditched that it's running much lower. I think the hardware will start to struggle once I get up over 10TB of usable space, but considering the cost of the actual machine I really don't feel like I've wasted any money at all.
I picked RAID 10 for this because I feel like with only 4 drives available it makes the most sense, weighing up resilver speed vs slightly better failure recovery if it was RAIDZ. On bigger servers I tend to go for RAIDZ3 instead.
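For reference, that pool shape is just striped mirror vdevs; the creation command looks something like the following (held in a string here rather than executed, since the device names are placeholders; in real life you'd use stable /dev/disk/by-id paths):

```shell
# Striped mirrors ("RAID 10" in ZFS terms): two mirror vdevs in one pool.
# sda..sdd are placeholders for illustration only.
cmd='zpool create tank mirror sda sdb mirror sdc sdd'
echo "$cmd"
```

Resilvering a failed disk then only has to read its mirror partner, which is where the resilver-speed advantage over raidz comes from.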
For backups I use syncthing on various machines to copy data to the server, and then rsync the stuff I care about every hour (with a lockfile to stop overlap) to rsync.net, which then performs all the snapshots. I used to run borg but I swapped back to plain rsync because I felt more confident in being able to restore from it. rsync.net is fantastic. I'm currently storing about 1TB with them, and although it looks more expensive than other options for home backup, I feel like the lack of hassle is really worth it for me.
I'm not usually a fan of HP btw, but these little microservers are really well built and I actually ended up using a ton of them at work as well; we only ever had one failure (which I believe was due to PoE somehow being delivered to it, which was really fun to debug :|). When this one eventually can't keep up I suspect I'll just go for a newer model of it. The form factor is very convenient for the home, especially if you don't have the convenience of a spare room in which to put a rack.
I couldn't get the low voltage Xeons for a sane price in Germany, so I settled for a second-hand i3, which turned out to be more than enough for what I'm throwing at it.
I'm running ESXi as a hypervisor managing two Ubuntu VMs at the moment: a router and a fileserver managing two volume groups (6 TB dump storage for movies, music, etc. that's not too important and thus unmirrored, 2x2TB mirrored storage for photos and other important data that should not get lost). Thanks to the two network ports I can completely isolate the fileserver from the open internet, as only the router VM has access to that port. If I want to access my stuff from outside, I can VPN into the router and thus have access to the fileserver.
The next point on the list is Plex or something similar, so I don't have to manually re-encode movies when my TV doesn't like the audio codec. Here the i3 might struggle a bit, but we'll see.
As for the cost: I spent 200€ on the server itself, 88€ on 16GB RAM, ~20€ for the SSD case and power adapter cable and ~65€ on the SSD. The i3 was ~25€, so I'm up to roughly 400€ for the entire server (plus storage costs, but you'll have those anyways). Compared to what you'll get for that price it's an absolute steal, especially when you compare it with several "low budget"/homemade NAS builds floating around the net/youtube.
Bonus points: you can stack them on top of each other.
Heh. I've still got a gen7 happily chugging away - it is happier streaming 1080p x264 rather than x265, but aside from that we never notice it's so feeble. That's easy enough - we only rip x264 now. It's heavily used, ZFS performs flawlessly - running on FreeNAS, and runs a surprising amount of plugins and jails without issue.
I haven't looked at gen 8 or 10 (where was 9?) since HP's arbitrary policy switch to requiring an active support contract to download a BIOS update. For most people buying a MicroServer, that means BIOS updates in the first year only, which is far too hostile.
I'm more likely to self-build using a U-NAS 8 bay or something else with more bays next time around, or pick up a rack off ebay.
Yeah those E3s are really awkward to find. I ended up buying it secondhand from the US for about £85. I think you'll be fine with the i3 for Plex encoding though; I can get a few decent streams going on mine before it starts to tax it.
I used to run a dual network setup similar to that actually, and had my server's only access to the internet via a VPN tunnel on the router. I've stopped doing that recently as I got more interested in the link-aggregation setup, but I might do something more complex at the router-end to achieve a similar outcome at some point.
I'm running a very similar setup. One of the HUGE benefits of these boxes is the iLO for fixing GRUB, hardware changes, etc. It saves schlepping a monitor over and just makes life so much easier.
Most people don't realize you can SSH to the iLO and use it to access a virtual serial port on your server. That gives you console access without an iLO license.
It can be a little complicated depending on your Linux distribution, but if you google "iLO virtual serial port" you should be able to find some decent directions.
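To save some of the googling, the rough shape from memory (treat it as a sketch; details vary by iLO generation, and the host OS needs a serial console configured too):

```
# SSH straight to the iLO's own address, not the host OS:
ssh Administrator@ilo.example.lan
# At the </>hpiLO-> prompt, start the virtual serial port:
vsp
# On the Linux side, enable a console on the matching port (often ttyS1),
# e.g. console=ttyS1,115200 on the kernel command line, plus a getty on it.
```

Once that's in place you get GRUB and a login prompt over plain SSH, no license needed.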
I must admit that I never got around to setting up the iLO interface on mine! It's on my list of things to do in the next few months as I've finally got my home network set up how I want it and a large enough switch sat next to it now :D
I did think about doing that but I actually choose not to do much in the way of snapshots on my server and instead use rsync.net's snapshotting, mostly to save myself money on storage costs locally (as files change quite a lot but I rarely need to restore anything). Not sure if the costs work out in my favour long-term or not so I might revisit that in future :D
I was still picking them up in the UK for about £120 until last year, maybe the stock has finally run out. I did notice that the price of the Gen10 does seem a bit steep, although I would say that the base specs are quite notably better than the Gen8 base model so perhaps it works out reasonable by that point. Mine has probably cost about £400 after the upgrades I made to it.
Mine is built around an even cheaper used Atom fanless Mini-itx board plus Nas4Free (1) which is more than enough for serving multiple full HD videos (2) around the house, so speed isn't that important; the network would be the limit anyway.
(1) I know it's old, but attempting a live upgrade to XigmaNas would be risky, so I'll reinstall it at the next maintenance stop.
(2) as files through NFS and SMB shares, not transcoded and streamed; ie, the player does the dirty work, not the NAS.
I ran NAS4Free for years but found that I could not configure it to email me an alert when the chassis fan failed or the RAID array degraded. You can guess what happened - fan failed and a drive cooked. Was this ever addressed/fixed? Otherwise I thought it was pretty great.
You will probably pay that much ($60) more per year in power than using an ARM SBC. I'm not saying it's not a good option, just want to be fair. I think using a regular old x86 PC with lots of space gives you far more flexibility. It's just larger, noisier and more expensive to operate. They both have their pros and cons. Personally I use a small NAS enclosure with a mini-itx board and x86 CPU.
I used an aluminium enclosure for a long time with no heat problems, and perhaps that's why there were no lifetime issues either. "Dog slow"? Yeah, I agree, but mine was USB 2.0. It's much less of an argument with USB 3.0, and even less again if you don't need high performance for your use case (e.g. backups only).
So I don't think we have answer yet as to why you wouldn't use USB.
After having wrecked many MicroSD cards with ARM SBCs, I have found that the Samsung High Endurance MicroSD cards[1] have held up to consistent use so far, including being used as backing scratch stores / swap space for calculations on an ODROID-C2. They're also pretty cheap for how reliable they have been. Of course, it's just anecdata, so YMMV.
Yeah, even the name brand SD cards blow up for me after about 6 months. I've switched over to the Samsung "FIT" USB sticks that fit flush with the port, and boot off that. And leave the SDXC slot free for importing images from the camera.
So far, my experience has been SD cards faceplant by going read-only. They will mount rw, but a single write will cause a long hang, followed by a bunch of sd driver errors, and then it goes read only. It is still readable but will accept no writes, including blkdiscard (which gets translated into whatever the sd card equivalent for TRIM is).
I’ve done something like this with a SolidRun CuBox and a cheap external 2-disk RAID1 enclosure connected via eSATA... 8 years ago. It’s still running - I’ve only had to replace disks as they failed, which is expected, with zero data loss. It appears as a Time Machine on the network, backing up a couple of laptops, and as a shared SMB drive for bits and bobs.
To be on the safe side, I mounted /var on the enclosure, so the SD card in the CuBox is not stressed, and that has not failed once in 8 years. It wasn’t the cheapest thing (and it’s absolutely wasted on this task - it has good media capabilities and wifi that I don’t ever touch), but it sits in a small recess out of the way, and has been rock-solid. Every few months I dump all its contents to Glacier because I fear the enclosure will fail before the little box does.
I expect I’ll eventually replace the CuBox with a Raspberry Pi when it dies, but the overall model is sound, for something as simple as a fileserver. Just don’t open it to the internet - a lot of these small boards stop getting OS updates from manufacturers after a year or two, and their custom chipsets make it difficult to work with vanilla Linux - particularly if you need wifi or bluetooth.
> my old QNAP NAS would average around 17-22mbps transfer speeds
This seems a bit too low in my opinion, I would expect an average consumer-grade NAS to perform a little bit better than that. Perhaps there was a bottleneck somewhere (e.g. network issues)?
I didn't test transfer speeds on my cheap Thecus NAS (with 2X WD Red 3TB in RAID 1) but now I'm curious so I think I will check when I get home.
What's your experience with NAS transfer speeds, HN people?
22 MBps sounds like what my former DNS-320 achieved. IIRC, encrypted data transfer (WebDAVs instead of NFS) usually reduced the transfer speed a lot, due to a weak CPU. The value above is the one I recall for encrypted transfers (note Bytes, not bits).
Does either the Rock64 or RockPro64 run mainline Debian with the Debian generic arm64 kernel yet? Or do you have to use a board-specific kernel like the ones that Armbian provides?
I've been thinking of setting up a NAS like this for a while. Am I missing something, though, or does a setup like this need 5 or 6 power adapters to work (one for the Rock64, one for each USB enclosure, and possibly one for the USB hub)?
Not a deal killer, for sure, but it becomes a mess of cables quickly.
Check the allowable voltage input ranges for each and if they have overlap just buy a big power brick (like a laptop charger), cut the ends off of each of the supplied wall warts and solder+shrinkwrap them all together into a nice neat harness. This has the added benefit of saving a tiny bit of power assuming you get a decent main supply.
Tip: thread the shrinkwrap first before soldering. This will save you much swearing.
Did you do any testing to figure out how many amps you're drawing from the single wall wart? Although, modem/routers in normal usage are pretty low draw.
ZFS has a nifty, sort of low-rent safety that can be used for this sort of thing - the 'copies' param. Just be sure to understand what it does and doesn't give you.
I personally value the data I store long-term more than this, but most likely would have played around with it when I was more cash strapped and daring/stupid about it.
Storing files in a resilient way (usually just RAID 1). Allowing those files to be accessed from all my devices. Only doing what would be guessed by the words Network Attached Storage.
Don't give me stuff for security cameras, transcoding, things for my thermostat, virtualization, email, etc etc.
Basically just a personal Dropbox.
By the way, when using backup software like Borg or Restic that is supposed to have integrity verification built in, do you still need things like RAID (if you intend to use the drive only for backup)?
Why don't these people just buy one of those power-efficient i3s and throw it in an old PC case? Seems like a lot more power and flexibility compared to using Rock64s. My 'NAS' is just a headless Linux computer with a bunch of drives in it. My only complaint is that 6 HDD ports aren't enough. It can stream HD video to multiple devices at the same time no problem, which I feel would choke a Rock64.
I use a GnuBee NAS. It's an open-source hardware NAS that can run Debian. It's not a shoestring-budget device, but it's a lot cheaper than the commercial ones.
But "recommended RAID levels are 0 and 1 under LVM and MD, and Linux MD RAID 10"[0]?
What you likely want is a 1, and if so it seems easier to keep two drives in your PC and rsync from one to the other[1]. (Also, such is the awesomeness of rsync that a sync from one 10 TB drive to another literally takes seconds when little has changed...)
I thought NASs' niche was more complex RAID levels which mitigate the data corruption our last-gen filesystems are still prone to. :D
1: Cron "rsync -aPq --delete {$source} {$destination} >/dev/null 2>&1". (The scary delete flag deletes files on the destination that are no longer on source, handling renames and unwanted data as RAID would.)
It's a little more than a shoestring budget, but I built my NAS from an old Supermicro server I bought on eBay. I found a 4U 24-bay machine for $1200 with dual Xeons, 256GB (!) of RAM, and 2x PSUs. All I had to add was a RAID controller (~$100) and a flash drive for the OS (because I was being greedy and wanted all 24 bays for storage).
It works great -- although it is as loud as a jet engine.
I have a Raspberry Pi 3 B+ hooked up to two LaCie Porsche Design P'9220 1TB drives set up in RAID 1. I had to increase the USB current by setting max_usb_current=1 in /boot/config.txt and also use a good USB charger and cable (really, some of the cables I had were not good enough) to get it to work reliably.
It's been working perfectly. Sure, it's not the fastest thing ever, but mostly the limiting factor is our home WiFi, not the RasPi.
EDIT: The USB drives I use do not need an external power supply, so the whole thing needs only two cables: power and Ethernet. The Ethernet cable is not necessary if you are OK with WiFi speeds.
I have a USB hard drive connected to an ODROID-C2. It used to be connected to an RPi3, but if I'm not mistaken I got 2-3x the write performance when using the ODROID. I have an ODROID-N2 now, but I haven't benchmarked it.
This is pretty cool! Back in 2012, as a hackathon project, we compiled our then product (MagFS) to run on raspberry pi - and it worked beautifully (minimal work needed, as the client was in C++, mostly compiler ifdef's).
For context: MagFS (sold to EMC, as part of the acquisition of Maginatics) was a full-fledged cloud NAS (mostly following SMB2/3 semantics, although it was also mostly POSIX-compatible) and used cloud objects (e.g. S3 or OpenStack) for raw data storage instead of HDD blocks.
I got tired of ARM-board NAS boxes a few years ago; the performance has always been pretty bad, usually because of the USB-attached storage. Although I had an Orion5x-based WD ShareSpace back then with 4 SATA connectors, and that was terrible too!
These days I'm running a SFF PC with 7TB of SSDs wedged in it, seems to be doing the job nicely.
I've got a Qotom PC for this purpose. Mine, unfortunately, is an older model with only USB 2 and SATA. My Orico RAID box has both USB and eSATA, but since I run borg backups in the background I don't miss much performance over USB (I've been too lazy to reconnect it via SATA).
If you consider peace of mind, backup in the event of fire, not having to maintain hard drives after failure, energy cost (and Apple running iCloud on renewables)... the $10/month iCloud account (2 TB) account is quite cost competitive with doing it myself...
My local network speed and latency is much better to the point that I can use my NAS as a local disk for some applications (ie. Lightroom) - no way I could do that with a cloud only storage solution.
Depends on your requirements. I have 2x12TB drives in a NAS that I rsync. I back up some of it to the cloud (software repos, processed photos, misc documents, etc.; maybe 200-300GB) and a lot of it just stays local (raw video, Linux ISOs, all my original unprocessed photos, etc.).
I also have an old 10TB disk in a USB enclosure encrypted with LUKS that sits in my cabinet at work which serves as my off-site backup. Every ~6 months or so I bring it back and rsync it with a script and take it back to work.
Unfortunately, Apple doesn't even offer anything larger than 2TB, which is terribly small for anyone who's serious about photos or video. I have hours and hours of 4k60 (~400MB/min) video from events and vacations that I want to store. And this was taken using the smartphone in my pocket; I'm not a professional.
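The arithmetic is stark at that bitrate (using decimal TB, as the storage tiers do):

```shell
# ~400 MB/min of 4k60 footage vs a 2TB (decimal) cloud tier
rate_mb_per_min=400
minutes=$(( 2 * 1000 * 1000 / rate_mb_per_min ))
echo "~$(( minutes / 60 )) hours of 4k60 footage fills 2TB"
```

A couple of years of family videos at that rate and the largest tier is gone.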
What’s your estimate for the ~cost of the setup, and how many years do you think it will last? Curious if the cost/month is similar to what I calculated.
Good question, I never considered it. The motherboard, CPU (i7 4770) and RAM are a very, very old desktop setup.
I checked Amazon and the hard drives were purchased on 10/8/2017 and they were $479/ea. The U-NAS case was $150 and the SSD was a random 120GB SSD I had after upgrading my desktop (it was the smallest of the "spare" SSDs I had in a drawer in my desk).
The 8TB HDD were Seagate Archive Drives that used to be the primary storage until I upgraded, so they became offsite backups. Those were purchased 5/8/2016 for $248. The external enclosures I use were $19.99 ("ORICO Toolfree blah blah" I can get you the exact model if you're interested but I assume you can find something newer and better).
So direct cost, I'm not really sure. A lot of it was "free" for me in the sense that it was spare hardware. But that should be enough for you to guesstimate what a setup would cost.
How long it will last is really hard to say. I monitor the disk health via SMART, and by keeping two copies I feel pretty safe. Luckily I'm also a Google Fiber customer (which means 1TB of free Google Drive) and I sync a lot of critical stuff (software repos, personal photos, documentation like taxes, etc) to Google Drive for a third backup (following the 3-2-1 rule). Right now I still have >1TB free on my primary 12TB drive, and the growth rate isn't very high. The most intensive thing I store is video from my iPhone, so that's really what dictates my rate of growth.
I've been eyeing a NanoPi M4 [1] with a "SATA HAT" [2] for replacing my ageing QNAP NAS. It should have similar specs as the OP, with the added advantage of a 4-port SATA controller sitting directly on a PCIe bus.
[1] https://www.friendlyarm.com/index.php?route=product/product&...
[2] https://www.friendlyarm.com/index.php?route=product/product&...
I tried searching the blog post and this thread for use cases, but didn't find even a mention; it seems that people use these NASes with something like 4 to 16 TB capacities at home, but I can't imagine a non-professional reason to store so much data.
Personally, if I wanted to store valuable stuff locally, like photos, I'm guessing 250GB of space would be enough for a lifetime.
So what do people use these huge personal storage systems for?
I have 96TB at home, though that's much more than I'm currently using.
I store the usual music (298GB), movies (3.2T) and TV shows (3.5T) which I think is what most normal people would do.
Where the less normal storage comes in is my own photos (5.2T) and videos (2.6T), an archive of all my "rewards" from Patreon (765G), an archive of English manga translations (5.8T), an archive of anime/manga fanart (7.2T) and a bunch of home security camera footage (2.2T).
In total I'm currently at 42T used but I see that increasing, hence the 96T of available disk.
You can find a lot of people archiving YouTube videos, educational content, porn, games, pretty much anything really, on Reddit's /r/DataHoarder. Pretty much nobody there is a professional.
640k should be enough for anyone?
I need more than 250GB just to download and install all the games I've bought on Steam at the same time.
VMs will eat up that space like nobody's business. Especially if you like to keep specific versions of OSes for various reasons (exploit development, home lab environments, etc.).
My iCloud photo library is 250GB and I don't post anything on social media or take a particularly large number of photos.
Also a new game install plus DLC can be 50-100GB.
So there are valid reasons. But also Plex.
Video and photos keep getting larger and larger.
Especially with the cost of cameras that shoot 4K video coming down. You can get a good camera that shoots 4K video for under $1000 (e.g. the Blackmagic Pocket Cinema Camera), and there are cheap cameras (under $100) that can shoot 4K.
I have about 2.5TB of data, quite a lot of which is movies and TV. I would guess around 0.5TB are personal data such as photographs, important documents, my company records etc.
I have a 5TB capacity NAS, which ought to be OK for the foreseeable future.
Other folks I know with larger storage requirements tend to be into virtualisation tech/openstack etc.
Probably in large parts movies and TV shows, both ripped from bought discs and acquired from other sources.
>> I'm guessing 250gb of space would be enough for a lifetime.
HA! God no. Not for anyone that shoots RAW, at least. I probably shoot 100GB of photos a year. Plus I've got 300GB of music I'm not letting go.
This SBC looks amazing, how have I not discovered it sooner?! My current setup is an old laptop hooked up to a USB hard drive.
My setup, which I need to finish, is an old laptop.
You can get a SATA hard drive adapter for the optical slot on many laptops. So you can put in two 2TB hard drives. My laptop only has USB 2.0, so that's a downside. But you have a built-in UPS (the battery), and my battery is a OEM high-capacity one which still works OK.
So the only new parts are the adapter (< $10 USD) and the SATA 2.5in hard drives. Just be aware of the height restrictions, some internal hard drive cavities on laptops can only take the 7mm or 9.5mm high drives, and not the 15mm drives that can be used on a NUC.
My old laptop is a quad-core AMD (which is OK-ish) and has 8GB of RAM, so it will be running some other stuff as well.
Nice, how much will it end up costing for the spec and accessories you want? I bought a HP microserver a while back and am curious how the cost compares.
How do you power the harddrives? Are there power adapters with sata power connectors?
Yep. The SATA hat takes in 12v, converts it to 5v power for the nanopi and has a molex connector for powering the hdds.
I shudder at the idea of using USB connectors for important data. I have recently purchased a Kobol NAS https://kobol.io/. It is amazing because it has the following features at a good price point:
1. Open source
2. 2GB ECC ram
3. 4 x SATA
4. Hackable GPIOs
It only says "preorder now", or am I missing something? And given the price point of $300 USD (with taxes and shipping) you are well into the terrain of x86 Atoms; heck, you might even get a good deal on a Xeon-D...
It's the same as the Helios in OP's article, seems to be a Kickstarter doing limited runs.
What about USB makes you shudder? Error detection and correction is built into the protocol and should be quite reliable.
The power connector on that thing makes me shudder.
Why? It seems like some kind of mini-DIN and this is pretty common on industrial T&M equipment around here (think CANbus et al)
I built a very similar home NAS with the newer RockPro64 and a pair of 4TB HDDs in RAID 1, all on top of Debian[0]. I found OpenMediaVault to be overkill, and it kept me from really understanding what was going on. Plus there are a million guides to setting up SMB, rsync, Borg, etc.
The RockPro64 hardware is great. Very performant, especially when using the PCIe to SATA card instead of USB like the OP did.
[0] https://ameridroid.com/blogs/ameriblogs/how-to-build-your-ow...
I asked this on another thread, but I don't understand why you would see real world difference between PCIe vs USB3.
When using this setup as a NAS, isn't your bottleneck the gigabit network from machine to machine? You'll saturate the 1 Gigabit link ( or even 2 Gigabit duplex ) before you can reach 5 Gigabits ( USB3 ) or 6 Gigabits ( PCIe )
On a low-power CPU, overhead for USB protocol handling may be less efficient than PCIe DMA?
RAID seems very popular for 2-disk setups, but I don't like the cost/benefit. With fewer than about 5 disks you aren't getting much speedup, and with fewer than 4 disks you can't do a proper fail-out and rebuild of a larger multi-disk volume.
I use 2 x 4TB disks in my homeserver, but I keep one online and have a script that brings up the other one periodically and rsyncs everything. This gives me a local backup, something RAID lacks, so I'm protected from fat-fingering or accidentally deleting stuff. I also have very minimal downtime, because I can mount that drive in place of the primary drive in just a few seconds.
I run xfs on the primary drive, and btrfs on the mirror, so I can take snapshots after I rsync and maintain differentials easily.
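That bring-up/rsync/snapshot cycle could be sketched roughly like this (the device label, mount points, and the hdparm spin-down are my assumptions, not the poster's actual script):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: mount the btrfs mirror, sync from the xfs
# primary, take a dated read-only snapshot, spin the disk back down.
mirror_and_snapshot() {
    local dev="${1:-/dev/disk/by-label/mirror}"
    mount "$dev" /mnt/mirror
    rsync -aHAX --delete /data/ /mnt/mirror/current/
    # read-only snapshots named by date give cheap differentials
    btrfs subvolume snapshot -r /mnt/mirror/current \
        "/mnt/mirror/snap-$(date +%F)"
    umount /mnt/mirror
    hdparm -Y "$dev"   # put the mirror drive to sleep until the next run
}
```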
My point is, you should consider getting rid of RAID and just using the bare drive or an LVM volume.
Agreed, RAID makes little sense if you want quiet home storage that is used only occasionally. The additional disk is better used for regular backups. If you need to minimize downtime and have the storage accessible 24/7, then RAID makes sense.
IIRC, SATA over USB3 can offer competitive performance for sequential I/O, but PCIe will give better random I/O performance and lower CPU usage.
tkaiser of the armbian project has a lot to say about this stuff. I find his posts in the cnx-software.com comments and armbian forums very helpful.
https://forum.armbian.com/topic/8097-nanopi-m4-performance-a...
https://github.com/ThomasKaiser/Knowledge
PCIe 3 x1 is 8 gigabit/s. But yes, you are right, the gigabit network should be the bottleneck either way.
For transfers, yes, but for housekeeping activities such as scrubbing, defragmenting, indexing, and deduplicating, not at all.
Are you using ZFS by any chance? I was considering a Helios, but the RAM requirements of ZFS turned me away from their offering.
FWIW, I run 10 x 4TB raidz2 NAS with 4GB using ZFS on Linux.
ZFS on Linux isn't as memory-efficient as it would be on FreeBSD, which has a much closer file system cache / memory manager architecture to Solaris, but it works fine.
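For reference, a pool like that is a one-liner to create. A sketch (the disk IDs are placeholders, and the ashift/compression settings are my usual defaults, not necessarily what the parent uses):

```shell
# Hypothetical sketch of a 10 x 4TB raidz2 pool on ZFS on Linux.
# ashift=12 aligns to 4K sectors; lz4 compression is near-free on modern CPUs.
make_pool() {
    zpool create -o ashift=12 -O compression=lz4 -O atime=off \
        tank raidz2 \
        /dev/disk/by-id/ata-disk{0..9}   # use stable by-id names, not /dev/sdX
}
```

With raidz2, any two of the ten disks can fail without data loss, at the cost of two disks' worth of capacity.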
ZFS memory consumption really rockets when you want to use dedupe, which basically wants to store a hash of every block in memory. The sweet spot for its use is in things like multiple VM images - where there's a lot of duplication inside large files that are otherwise different. But there's often ways to structure things to gain back the same space, e.g. with stackable file systems.
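A back-of-the-envelope for that dedupe RAM cost, using the commonly cited ballpark of roughly 320 bytes of dedup-table entry per unique 128KiB block (both figures are rough estimates, not exact numbers):

```shell
# Rough estimate of ZFS dedup-table (DDT) RAM for a given amount
# of unique data at the default 128KiB recordsize.
ddt_ram_mib() {
    local data_gib=$1
    local blocks=$(( data_gib * 1024 * 1024 / 128 ))  # number of 128KiB blocks
    echo $(( blocks * 320 / 1024 / 1024 ))            # ~320 bytes per DDT entry
}
ddt_ram_mib 1024   # 1TiB of unique data -> 2560 MiB of DDT
```

So every TiB of unique data wants roughly 2.5GiB of RAM just for the table, which is why dedupe only pays off for highly duplicated data like VM images.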
Without dedupe, and for a NAS scenario where there isn't going to be a massive working set (typically source or sink for backups, streaming video, etc.), 4G has been more than sufficient for me for years.
Have you considered Btrfs? I'm using it on my primary OS drive and my backup data drive. I keep my primary data on xfs to give me some cross platform resiliency. Btrfs allows me to take snapshots and uses space very efficiently. I haven't tried the deduplication features.
I've been running a 8TB homeserver for the cost of an optiplex on eBay (80) and 2 8tb external drives on Amazon (300). I cronjob an rsync every night and that's that. Cheap, works great, and (surprisingly) I understand how it works. I also have cloud storage for the subset of things I don't want to lose. A thing I'm working on right now is for the rsync disk to be on a switch I control with an Arduino/pi so it's only powered during the backup -- I reckon the drive will last longer that way. Homeservers are fun :)
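The whole automation can be a single cron entry; a hypothetical /etc/cron.d line (paths and log location are examples, not the poster's setup):

```
# /etc/cron.d/nas-backup -- nightly mirror at 03:30
30 3 * * * root rsync -a --delete /mnt/primary/ /mnt/backup/ >>/var/log/nas-backup.log 2>&1
```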
If you're OK losing 24 hours of data and not a performance junkie, the backup plan becomes trivial and the RAID isn't really necessary.
Daily rsync alone isn't enough. Any data which gets corrupted on the main drive will, within 24 hours, also be corrupted on the 'backup'.
If you're confident enough that your computer won't get stolen, burn, or zapped by a power surge, then you can keep the same hardware setup you have, but:
1. Use borg backup (or restic) instead of rsync (so that you can restore to any day in the past, not only to yesterday)
2. Disconnect the 'backup' drive when it's not being used (so that it's protected from malware or fat fingers).
Even better would be:
3. Have a separate computer host the borg backups, and have it run 'borg serve' in append-only mode (so that, no matter what you do the main computer, you cannot destroy or corrupt past backups).
Even better:
4. Have this separate computer in a different physical location (so that you're protected from fire, theft and the like).
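Sketching steps 1-3 together (the host name, repo path, and compression choice are made up for illustration):

```shell
# Hypothetical borg setup with an append-only remote.
# On the backup host, pin the client's key in ~/.ssh/authorized_keys:
#   command="borg serve --append-only --restrict-to-path /srv/borg" ssh-ed25519 AAAA...
# The repo is initialised once with:
#   borg init --encryption=repokey ssh://backup@backup-host/srv/borg/main
nightly_backup() {
    borg create --stats --compression zstd \
        "ssh://backup@backup-host/srv/borg/main::{hostname}-{now}" \
        /home /etc
    # Note: under --append-only, deletes/prunes sent from the client are
    # only applied once an admin lifts append-only mode on the server.
}
```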
rsync supports versioning, eg https://netfuture.ch/2013/08/simple-versioned-timemachine-li...
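The core trick in those versioned-rsync schemes is --link-dest: files unchanged since the previous run become hard links, so every dated directory looks like a full backup but only changed files consume new space. A minimal sketch (paths are placeholders):

```shell
# Minimal Time-Machine-style versioning with rsync hard links.
versioned_rsync() {
    local src=$1 dst=$2
    local today; today=$(date +%F)
    mkdir -p "$dst"
    # unchanged files are hard-linked against the previous snapshot
    rsync -a --delete --link-dest="$dst/latest" "$src" "$dst/$today/"
    ln -snf "$today" "$dst/latest"   # "latest" always points at the newest copy
}
```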
Not the parent, but I use a similar setup. I avoid this be keeping my 2nd drive formatted as btrfs, primary is xfs. After I rsync to the btrfs drive, I take a snapshot.
I only rsync every week, my data doesn't change often and if it does, I can trigger a manual rsync using the same script in my /etc/cron.weekly/
edit: I do use borg to backup to my cloud storage server, since it's encrypted.
Time4vps has super cheap storage servers, 1TB for about $20 a quarter. (my affiliate if anyone is interested; https://billing.time4vps.eu/?affid=1881)
I didn't think corrupted data would change the size of the file or the mtime? If it doesn't then rsync wouldn't overwrite the second drive as it would see it as an existing, good copy. Unless you had rsync using checksum comparisons of course.
You're probably right for corruption caused by hardware failure or filesystem bugs. I was thinking about corruption in the wider sense, e.g. caused by application bugs.
borg looks awesome. It comes up all the time when I research efficient data storage replication (buzzhash) so now I'm really going to try it out, thanks for the tips! :)
:)
Here is the documentation for borg serve with append-only:
https://borgbackup.readthedocs.io/en/stable/usage/serve.html
The heads fly on a fluid cushion when spun up, they must physically land when the thing spins down. The head landing/takeoff operations are sources of physical wear.
Assuming a brushless motor and quality sealed ball bearings, it could be argued that the drive wears less kept spun up all the time.
My understanding is that spinning them down is primarily a power-saving measure.
Hence head parking.
That's automatic in virtually all HDDs.
When I last read up on this, which admittedly was over a decade ago, head parking was just a matter of locating the heads over an unused region of the platters. The head still goes through a landing and takeoff cycle, so the head still sees physical contact. It's just avoiding doing it over the data portion of the platters.
Does that do anything against bitflips/rot? I feel like for any real use I should have checksums/scrub available.
zfs or bust. If you are not using zfs for your long term storage and backups, you might want to consider what exactly you are doing and why you aren’t paying someone else to make sure your 1s stay as 1s and 0s as 0s.
To be fair, ZFS isn't the only game in town. Depending on what you're doing, gluster, ceph, minio, restic/borg/etc. may prove sufficient. I like ZFS, don't get me wrong, but if the goal is bitrot protection there are a number of different tools to solve it at different places in the stack.
For a NAS, I don’t think backup tools with checksumming would be sufficient. I also am not a fan of them as the only tool to prevent bitrot. If borg detects that a backup is corrupt, my option is what? To ditch that backup? zfs can not only detect it, but also recover from it. That’s not something that most tools can do.
To be fair, I don’t know about the capabilities of gluster, ceph, and minio.
btrfs has checksums to prevent bitrot.
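Worth noting that the checksums only catch rot when blocks are actually read; you have to schedule scrubs to sweep the whole disk. A sketch of the relevant btrfs-progs commands (the mount point is a placeholder):

```shell
# Verify every block against its checksum; on RAID1 profiles btrfs
# rewrites a bad copy from the good mirror automatically.
scrub_check() {
    local mnt=${1:-/mnt/data}
    btrfs scrub start -B "$mnt"   # -B waits for completion
    btrfs device stats "$mnt"     # cumulative per-device error counters
}
```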
Has it gotten less crashy and prone to eating your data? Last I checked, distros were putting it into their install options while it was still happy to corrupt the FS with major data loss. I haven't checked whether it's worth it since.
It's been the default filesystem for openSUSE for at least a few years. I've been using it on root filesystems and data volumes with no problems. I was backing up to a 1TB btrfs volume but it filled up with snapshots, and cleaning them up didn't cause any problems. I've since moved to a 4TB drive; I rsync another drive to it weekly and take a snapshot. I've had no issues.
https://en.opensuse.org/SDB:BTRFS
edit: I've had a couple hard crashes in this period. My homeserver's power button attracts little fingers.
Synology are using it on many of their NAS units, alternative being ext4fs.
"Building a high performing home NAS"
"I was averaging 60-113mbps depending on the size of files being moved. This was a successful build in my mind! After selling my QNAP NAS, I came out ahead on my expenses for the build."
113Mbps is considered "high performing"?
"on a shoestring budget with the Rock64 SBC"
"TOTAL: $172.39 CAD before tax"
This is without the cost of the hard drives, doesn't seem to include the cost of an enclosure, and results in:
* 1 Quad-core A53
* 2GB ram (the 4GB version is $45 USD without shipping, so it couldn't have been this one), which you can't ever change
* 0 SATA ports
* 1 USB 3 port
* 2 USB 2 ports
* 1x GbE
I would probably rather consider an HP Micro Server (e.g. https://www.amazon.com//dp/B079MFYDSL ), which for $282 provides:
* 1.6GHz dual-core Opteron
* 2 DIMM slots, 8GB DDR4 provided
* 5x SATA ports (4x in the disk bays, 1 in the optical bay)
* 4 USB3 ports, 2 USB2 ports, internal USB port
* 2x GbE
* 2x PCIe slots
* Hardware RAID support (I wouldn't use it, but it's available).
It would meet all of these requirements the poster had:
* Shoestring budget, under $300 preferred
* Scalable, not locked into a certain number of drive bays
* Gigabit ethernet
* USB 3.0 (for fast backups to my backup drives)
* Preferred: compatibility with open source NAS offerings (OpenMediaVault, FreeNAS, etc.)
* Preferred: low power usage (I don’t have high computing requirements for this system, so it doesn’t need to be driving up my electricity bill to run every month)
(I guess the Rock64 may have lower power consumption, but I don't think vastly lower).
I have a much older MicroServer G7 (an N54L), which was much cheaper (I paid about $100), and it's great.
Edit: formatting
He is from Canada; your suggestion is clearly not in CAD and, from a brief eBay search, is at least $500 CAD, which is 3x what he paid and much more than his maximum budget.
He also switched because of the noise of the previous setup. I feel like your suggestion doesn't cover that at all.
Other than higher performance (which doesn't seem to be a requirement for him, considering his QNAP gave much worse performance and even that wasn't an argument for new gear), what would he get for 3x the cost?
Yup, I've been running an N40L microserver for almost 10 years now. Put 16GB in it & some hard drives and never looked back. It just works.
I first ran FreeNAS, but eventually moved it to Ubuntu 18.04 so that I could run some containerized services on it (file sharing, Plex, and a remote virtual desktop).
16GB RAM is about right for running ZFS. 8GB worked OK, but I'd notice slowdowns from time to time.
If this machine ever decides to pack it in, I'll get a gen10.
If you ever get the urge to play around with FreeNAS again, they really improved linux emulation and virtualization; I've been pleasantly surprised with how much I can get done with bhyve.
That's only if you feel like tinkering, though; Ubuntu's a great choice as well.
I run openSUSE on my home "NAS". I get so much more with a full Linux install, and it's still headless and low power.
I just use an old desktop with a pretty efficient CPU.
I had the N40L. The power supply fan was very loud considering how dog-slow it was. I hope they fixed that. I use a Raspberry Pi now, which is also slow, but it's much, much smaller and cheaper in every way.
I have the N54L - it's pretty ok, it's silent. But it's absolutely not fast and also wasn't really back when I bought it :P
I always suspected mine must be faulty since it was so loud. You could swap the power supply to something else but that was quite costly and left you with a sea of wires sticking out of your otherwise very smart case.
I have been running an N36L with FreeNAS for 7 years now. I don't keep it running all the time, so it's not really a "NAS" anymore, but more of a backup system since I configured the disks to use RAID-Z.
I was disappointed too
Well your mistake is thinking that the rock64 has power consumption anything like an opteron. A low power opteron is like 55W just for the processor and a rock64 is like 5W for the whole SBC.
All of the always-on parts of my home network are built out of compact low-power parts that are faster than the Rock64, and, no surprise, it's expensive. I used a nano-ITX Celeron that's much faster than a Rock64 and gives me 2 GbE links to bond. It would have cost him only about 10 dollars more, though it would use 2-5x the power.
This shit gets really tricky
60-113Mbps is absolutely TERRIBLE performance. Now if the author means MBps (bytes not bits) it's better but still bad.
Just needed to vent, sorry, but why the heck do people frequently not know there is a difference between lowercase b(it) and uppercase B(yte)...
When I built an ARM based NAS, I chose to use the Banana Pi BPI-R2 because it was one of the very few boards with 2x SATA ports using PCI-E. It worked fine and got good speeds. It's difficult to run a current kernel on the BPI-R2 though (it is slowly creeping towards mainline). If I built another NAS again I'd just use an x86_64 or aarch64 SBC with a PCI-E port and connect a good SATA controller.
Full build log: https://bburky.com/NAS/
I'm surprised I don't see more people with my NAS setup. I run a HP Gen8 MicroServer which cost about £120 for the base machine a few years back. The default specs were a bit feeble but perfectly serviceable for running a NAS, though mine has now been modified a bit since then to sport a quad core E3 and 16GB of ECC RAM. If you pick up an adapter lead you can fit an SSD in the optical drive bay (which I have done). One nice thing about the microserver is it has two built-in ethernet ports, so mine run in a link-aggregation group, which is well utilised (I run a lot off this little server, like Plex, ZoneMinder etc).
I run ZFS on Ubuntu (ZFS is superb and one of the best things I've learnt in years) with a RAID 10 setup (2x2TB mirrored pair, 2x4TB mirrored pair). I'm actually just resilvering one of the 4TB drives right now as I finish migrating to them from a couple of old 1TB drives. RAM usage was high when I used dedup but now I've ditched that it's running much lower. I think the hardware will start to struggle once I get up over 10TB of usable space, but considering the cost of the actual machine I really don't feel like I've wasted any money at all.
I picked RAID 10 for this because I feel like with only 4 drives available it makes the most sense, weighing up resilver speed vs slightly better failure recovery if it was RAIDZ. On bigger servers I tend to go for RAIDZ3 instead.
For backups I use syncthing on various machines to copy data to the server, and then rsync the stuff I care about every hour (with a lockfile to stop overlap) to rsync.net, which then performs all the snapshots. I used to run borg but I swapped back to plain rsync because I felt more confident in being able to restore from it. rsync.net is fantastic. I'm currently storing about 1TB with them, and although it looks more expensive than other options for home backup, I feel like the lack of hassle is really worth it for me.
I'm not usually a fan of HP btw, but these little microservers are really well built and I actually ended up using a ton of them at work as well; we only ever had one failure (which I believe was due to PoE somehow being delivered to it, which was really fun to debug :|). When this one eventually can't keep up I suspect I'll just go for a newer model of it. The form factor is very convenient for the home, especially if you don't have the convenience of a spare room in which to put a rack.
A second vote for HP's Gen8 MicroServer!
I couldn't get the low voltage Xeons for a sane price in Germany, so I settled for a second-hand i3, which turned out to be more than enough for what I'm throwing at it.
I'm running ESXi as a hypervisor managing two Ubuntu VMs at the moment: a router and a fileserver managing two volume groups (6 TB dump storage for movies, music, etc. that's not too important and thus unmirrored, and 2x2TB mirrored storage for photos and other important data that should not get lost). Thanks to the two network ports I can completely isolate the fileserver from the open internet, as only the router VM has access to that port. If I want to access my stuff from outside, I can VPN into the router and thus reach the fileserver.
The next point on the list is Plex or something similar so I don't have to manually re-encode movies when my TV doesn't like the audio codec. Here, the i3 might struggle a bit, but we'll see.
As for the cost: I spent 200€ on the server itself, 88€ on 16GB RAM, ~20€ for the SSD case and power adapter cable and ~65€ on the SSD. The i3 was ~25€, so I'm up to roughly 400€ for the entire server (plus storage costs, but you'll have those anyways). Compared to what you'll get for that price it's an absolute steal, especially when you compare it with several "low budget"/homemade NAS builds floating around the net/youtube.
Bonus points: you can stack them on top of each other.
Heh. I've still got a gen7 happily chugging away - it is happier streaming 1080p x264 rather than x265, but aside from that we never notice it's so feeble. That's easy enough - we only rip x264 now. It's heavily used, ZFS performs flawlessly - running on FreeNAS, and runs a surprising amount of plugins and jails without issue.
I haven't looked at gen 8 or 10 - where was 9? - since the arbitrary HP switch in policy to say you need an active support contract to download a BIOS update. For most people buying a MicroServer that's BIOS updates in first year only, which is far too hostile.
I'm more likely to self-build using a U-NAS 8 bay or something else with more bays next time around, or pick up a rack off ebay.
Yeah those E3s are really awkward to find. I ended up buying it secondhand from the US for about £85. I think you'll be fine with the i3 for Plex encoding though; I can get a few decent streams going on mine before it starts to tax it.
I used to run a dual network setup similar to that actually, and had my server's only access to the internet via a VPN tunnel on the router. I've stopped doing that recently as I got more interested in the link-aggregation setup, but I might do something more complex at the router-end to achieve a similar outcome at some point.
I'm running a very similar setup. One of the HUGE benefits of these boxes is the iLO for fixing GRUB, hardware changes, etc. It saves schlepping a monitor around and just makes life so much easier.
Most people don't realize you can ssh to the iLO and use it to access a virtual serial port on your server. That gives you console access without an iLO license.
It can be a little complicated depending on your Linux, but if you google "iLO virtual serial port" you should be able to find some decent directions.
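The short version, from memory (the iLO address, user, and serial-port settings are placeholders; check your server generation's docs):

```
# 1. ssh to the iLO itself, not the host OS:
#      ssh admin@ilo.example.lan
# 2. at the iLO CLI, attach the virtual serial port:
#      vsp
# 3. on the server side, point a console at that port, e.g. kernel
#    arguments console=ttyS1,115200 plus a getty running on ttyS1
```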
I must admit that I never got around to setting up the iLO interface on mine! It's on my list of things to do in the next few months as I've finally got my home network set up how I want it and a large enough switch sat next to it now :D
Have you looked at syncing ZFS snapshots for backups? (that way if you just move a file, it doesn't reupload the whole file; just the disk metadata)
I did think about doing that but I actually choose not to do much in the way of snapshots on my server and instead use rsync.net's snapshotting, mostly to save myself money on storage costs locally (as files change quite a lot but I rarely need to restore anything). Not sure if the costs work out in my favour long-term or not so I might revisit that in future :D
Well, part of the issue is that it's no longer available, and HP seem much more resistant to deep-discounting the Gen10 replacement.
I was still picking them up in the UK for about £120 until last year, maybe the stock has finally run out. I did notice that the price of the Gen10 does seem a bit steep, although I would say that the base specs are quite notably better than the Gen8 base model so perhaps it works out reasonable by that point. Mine has probably cost about £400 after the upgrades I made to it.
Yep, and new old stock of the gen8 were listed around £400 last I checked. Any word if there'll be a new model that improves on the gen10?
You could also get a Dell Optiplex with a 2nd gen i5 from EBay for about $60 + the cost of hard drives.
Mine is built around an even cheaper used Atom fanless Mini-itx board plus Nas4Free (1) which is more than enough for serving multiple full HD videos (2) around the house, so speed isn't that important; the network would be the limit anyway.
(1) I know it's old, but attempting a live upgrade to XigmaNas would be risky, so I'll reinstall it at the next maintenance stop.
(2) as files through NFS and SMB shares, not transcoded and streamed; ie, the player does the dirty work, not the NAS.
I ran NAS4Free for years but found that I could not configure it to email me an alert when the chassis fan failed or the RAID array degraded. You can guess what happened - fan failed and a drive cooked. Was this ever addressed/fixed? Otherwise I thought it was pretty great.
You will probably pay that much ($60) more per year in power than using an ARM SBC. I'm not saying it's not a good option, just want to be fair. I think using a regular old x86 PC with lots of space gives you far more flexibility. It's just larger, noisier and more expensive to operate. They both have their pros and cons. Personally I use a small NAS enclosure with a mini-itx board and x86 CPU.
USB for a nas?! nope nope nope
A ~2012 mini-ITX AMD C60 does 60Mb/sec; at the time it was ~60€.
Can you explain why?
Most external USB enclosures I've used run hot, die early and are dog slow...
I used an aluminium enclosure for a long time and no problems with heat. And perhaps that's why there were also no lifetime issues either. "Dog slow"? Yeh I agree but mine was USB 2.0. It's much less of an argument with USB 3.0 and even less again if you don't need high performance for your use-case (e.g. backups only).
So I don't think we have an answer yet as to why you wouldn't use USB.
How did that bad reputation arise? A combination of factors:
* USB 2.0
* Platforms like Raspberry Pi, where both disk and ethernet are on the same USB link.
* Sellers marketing products with the USB link bitrate (5 Gbit/s) rather than the rate of data transfer achievable from the actual disk.
* Embedded platforms that don't have the performance needed to saturate a gigabit ethernet link (or 5 gigabit USB 3 link)
* Articles like this, reporting "60-113mbps" without going into enough detail to determine which components are the limiting factors on performance.
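One way to tell those factors apart is to benchmark each hop in isolation instead of trusting an end-to-end file copy. A sketch (host names, devices, and paths are placeholders):

```shell
# Measure each stage separately; the slowest one is the bottleneck.
net_test()  { iperf3 -c nas.local; }      # raw network path only
disk_test() {                             # raw disk, page cache dropped first
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/dev/sda of=/dev/null bs=1M count=2048 status=progress
}
share_test() {                            # full stack: disk + CPU + network
    dd if=/mnt/nas/bigfile of=/dev/null bs=1M status=progress
}
```

If net_test saturates gigabit but share_test doesn't, the disk or the board's CPU is the limit, not the network.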
"32GB MicroSDXC – $17.98 CAD"
You don't want to do this. Buy a Swissbit card if you don't want data corruption on your system.
After having wrecked many MicroSD cards with ARM SBCs, I have found that the Samsung High Endurance MicroSD cards[1] have held up to consistent use so far, including being used as backing scratch stores / swap space for calculations on an ODROID-C2. They're also pretty cheap for how reliable they have been. Of course, it's just anecdata, so YMMV.
[1] https://www.amazon.com/Samsung-Endurance-64GB-Micro-Adapter/...
Get an industrial grade microSD indeed.
Swissbit is one of the brands who deliver such. But there are other options as well [1]. I evaluated a bunch of ATP cards and thus far I'm satisfied.
[1] https://www.digikey.com/products/en/memory-cards-modules/mem...
Yeah, even the name brand SD cards blow up for me after about 6 months. I've switched over to the Samsung "FIT" USB sticks that fit flush with the port, and boot off that. And leave the SDXC slot free for importing images from the camera.
So far, my experience has been SD cards faceplant by going read-only. They will mount rw, but a single write will cause a long hang, followed by a bunch of sd driver errors, and then it goes read only. It is still readable but will accept no writes, including blkdiscard (which gets translated into whatever the sd card equivalent for TRIM is).
Sounds like the SD card internals for boosting voltage to do the erase go bad.
I still wonder why EEPROMs don't have another pin for a high-voltage supply.
Especially for SSDs with multiple chips, it would make sense to have one efficient voltage booster instead of on-die charge pumps on each chip.
I’ve done something like this with a SolidRun CuBox and a cheap external 2-disk RAID1 enclosure connected via eSATA... 8 years ago. It’s still running - I’ve only had to replace disks as they failed, which is expected, with zero data loss. It appears as a Time Machine on the network, backing up a couple of laptops, and as a shared SMB drive for bits and bobs.
To be on the safe side, I mounted /var on the enclosure, so the SD card in the CuBox is not stressed, and that has not failed once in 8 years. It wasn’t the cheapest thing (and it’s absolutely wasted on this task - it has good media capabilities and wifi that I don’t ever touch), but it sits in a small recess out of the way, and has been rock-solid. Every few months I dump all its contents to Glacier because I fear the enclosure will fail before the little box does.
I expect I’ll eventually replace the CuBox with a Raspberry Pi when it dies, but the overall model is sound, for something as simple as a fileserver. Just don’t open it to the internet - a lot of these small boards stop getting OS updates from manufacturers after a year or two, and their custom chipsets make it difficult to work with vanilla Linux - particularly if you need wifi or bluetooth.
> my old QNAP NAS would average around 17-22mbps transfer speeds
This seems a bit too low in my opinion, I would expect an average consumer-grade NAS to perform a little bit better than that. Perhaps there was a bottleneck somewhere (e.g. network issues)?
I didn't test transfer speeds on my cheap Thecus NAS (with 2X WD Red 3TB in RAID 1) but now I'm curious so I think I will check when I get home.
What's your experience with NAS transfer speeds, HN people?
I generally don't have trouble nearly saturating a gigabit Ethernet link on an entry-level 2-bay Synology NAS (800+Mbps).
22 MBps sounds like what my former DNS-320 achieved. IIRC, encrypted data transfer (WebDAVs instead of NFS) usually reduced the transfer speed a lot, due to a weak CPU. The value above is the one I recall for encrypted transfers (note Bytes, not bits).
IIRC I had about 60 MB/sec in 2008 with some old Phenom of mine with dedicated Areca raid controller plus 4x 1 TB and encrypted storage.
I get several Gbps from my NAS (serving multiple ESXi hosts, plus NFS/SMB for laptops and desktops) but my NAS is a "real" 2U server.
Does either the Rock64 or RockPro64 run mainline Debian with the Debian generic arm64 kernel yet? Or do you have to use a board-specific kernel like the ones that Armbian provides?
Yep, the community has done quite a bit of work to mainline support, Mali (old and new) drivers were just recently mainlined.
https://forum.libreelec.tv/thread/17540-early-mainline-image...
I've been thinking of setting up a NAS like this for a while. Am I missing something, though, or does a setup like this need 5 or 6 power adapters to work (one for the Rock64, one for each USB enclosure, and possibly one for the USB hub)?
Not a deal killer, for sure, but it becomes a mess of cables quickly.
Check the allowable voltage input ranges for each and if they have overlap just buy a big power brick (like a laptop charger), cut the ends off of each of the supplied wall warts and solder+shrinkwrap them all together into a nice neat harness. This has the added benefit of saving a tiny bit of power assuming you get a decent main supply.
Tip: thread the shrinkwrap first before soldering. This will save you much swearing.
You can also find barrel splitters. I use one so that my 12V router and 12V modem both run off 1 wall-wart.
Look to be common for security cameras.
Like so: https://www.ebay.ca/itm/113713154440
Did you do any testing to figure out how many amps you're drawing from the single wall wart? Although, modem/routers in normal usage are pretty low draw.
Probably just my paranoia, but I'd be a bit nervous about corruption running many large drives from a single usb3 port.
ZFS has a nifty, sort of low-rent safety that can be used for this sort of thing - the 'copies' param. Just be sure to understand what it does and doesn't give you.
https://linux.die.net/man/8/zfs
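For reference, turning that knob on is a one-liner; the pool/dataset name below is hypothetical. Note that copies=2 stores two copies of each block on the same device, so it guards against bad sectors and localized corruption, not against a dead drive:

```
# Hypothetical dataset name; applies only to data written afterwards
zfs set copies=2 tank/backups
zfs get copies tank/backups
```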
I personally value the data I store long-term more than this, but most likely would have played around with it when I was more cash strapped and daring/stupid about it.
Personally I don't trust cheap USB enclosures at all. I had a 2-drive IcyBox die and take one of the drives with it.
Is there any NAS that has design and simplicity as a primary focus?
I have hated all NAS that I've tried due to each one wanting to have more features than the next and all resulting in slow complicated messes.
Which features would you keep/prioritize?
Storing files in a resilient way (usually just RAID 1). Allowing those files to be accessed from all my devices. Only doing what would be guessed from the words Network Attached Storage. Don't give me stuff for security cameras, transcoding, things for my thermostat, virtualization, email, etc. Basically just a personal Dropbox.
And maybe make it look nice and also small. Maybe use laptop drives. Make it wireless too. Focus on nice UI instead of looking like Windows 95
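For what it's worth, the "just Network Attached Storage" core of that wishlist is only a few lines of Samba configuration on any small Linux box. A minimal sketch (the share name, path, and user are placeholders, and RAID 1 would be handled separately by mdadm or ZFS underneath):

```
# /etc/samba/smb.conf -- a single share and nothing else
[share]
   path = /srv/nas
   read only = no
   valid users = alice
```

The complexity the parent is complaining about is mostly the vendor UI layered on top; the underlying protocol service really is this small.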
By the way, when using backup software like Borg or Restic that is supposed to have replication built in, do you still need things like RAID (if you intend to use the drive only for backup)?
EDIT: I guess no : https://borgbackup.readthedocs.io/en/stable/faq.html#can-pro...
With fewer than 3 drives, RAID is wasted. Just mirror the drives periodically and you get better performance and a temporary backup, with minimal failover time.
I think the answer is actually yes, you need RAID as borg is not taking care of replication for you.
Yes sorry, I meant yes, you still need RAID.
Why don't these people just buy one of those power efficient i3's and throw it in an old PC case ? Seems like a lot more power and flexibility compared to using Rock64's. My 'nas' is just a headless linux computer with a bunch of drives in there. My only complaint is that 6 HDD ports aren't enough. It can stream HD video to multiple devices at the same time no problem which I feel like would choke if you tried it on the Rock64.
I use a GnuBee NAS. It's an open-source hardware NAS that can run Debian. It's not a shoestring-budget device, but it's a lot cheaper than the commercial ones.
Lol the GnuBee is adorable!
But "recommended RAID levels are 0 and 1 under LVM and MD, and Linux MD RAID 10"[0]?
What you likely want is RAID 1, and if so it seems easier to keep two drives in your PC and rsync from one to the other[1]. (Also, such is the awesomeness of rsync that incremental syncs from one 10 TB drive to another take literally seconds...)
I thought NASs' niche was more complex RAID levels which mitigate the data corruption our last-gen filesystems are still prone to. :D
0: https://www.crowdsupply.com/gnubee/personal-cloud-2
1: Cron job: rsync -aPq --delete "$SOURCE" "$DESTINATION" >/dev/null 2>&1. (The scary --delete flag deletes files on the destination that are no longer on the source, handling renames and unwanted data as RAID would.)
You can set it up any way you like really. You have access to a root debian shell and can do anything at all that can be done on debian.
It's a little more than a shoestring budget, but I built my NAS from an old Supermicro server I bought on ebay. I found a 4U 24-bay machine for $1200 with dual xeons, 256GB (!) of RAM, 2x PSUs. All I had to add was a RAID controller ($100~) and a flash drive for the OS (because I was being greedy and wanted all 24 bays for storage).
It works great -- although it is as loud as a jet engine.
(2018)
The ROCKPro64 with the PCIe slot has been out for a while.
Does PCIe vs USB3 matter when building a NAS in this case? Your bottleneck is going to be the gigabit network anyway.
PCIe might be good so you can use a used enterprise RAID card in JBOD mode. You can probably attach way more drives that way.
how good is it?
The reviews of the rk3399 boards are very positive. It's great bang for the buck with plenty of performance.
Just go with a full tower, you will be able to fit more cheap HDD's. And use ZFS mirrors.
I have Raspberry Pi 3 B+ hooked with two LaCie Porsche Design P'9220 1T drives set up in RAID 1. I had to increase the USB current by configuring max_usb_current=1 in /boot/config.txt and also use good USB charger and cable (really, some of the cables I had were not good enough) to get it to work reliably.
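The relevant fragment on a Pi 3 is a single line (this raises the total current available to the USB ports from 600 mA to 1.2 A, assuming your power supply can deliver it):

```
# /boot/config.txt -- allow up to 1.2 A total on the USB ports
max_usb_current=1
```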
It's been working perfectly. Sure, it's not the fastest thing ever, but mostly the limiting factor is our home WiFi, not the RasPi.
EDIT: The USB drives I use don't need an external power supply, so the whole thing needs only two cables: power and Ethernet. The Ethernet cable isn't necessary if you're OK with WiFi speeds.
I have a USB hard drive connected to an ODROID-C2. It used to be connected to an RPi3, but if I'm not mistaken I got 2-3x the write performance when using the ODROID. I have an ODROID-N2 now, but I haven't benchmarked it.
rpi3b+ got pretty good upgrades for networking and usb speeds compared to rpi3b. But sure, it is not the fastest thing in town.
This is pretty cool! Back in 2012, as a hackathon project, we compiled our then product (MagFS) to run on raspberry pi - and it worked beautifully (minimal work needed, as the client was in C++, mostly compiler ifdef's).
For context: MagFS (sold to EMC, as part of the acquisition of Maginatics) was a full-fledged cloud NAS (mostly following SMB2/3 semantics, although it was also mostly POSIX-compatible) and used cloud objects (e.g. S3 or OpenStack) for raw data storage instead of HDD blocks.
I got tired of ARM-board NAS boxes a few years ago; the performance has always been pretty bad, usually because of the USB-attached storage. Although I had an orion5x-based WD ShareSpace with 4 SATA connectors, and that was terrible too!
These days I'm running a SFF PC with 7TB of SSDs wedged in it, seems to be doing the job nicely.
I've got a Qotom PC for this purpose. Mine, unfortunately, is an older model with only USB 2 and SATA. My Orico RAID box has both USB and eSATA, but since I run borg backups in the background, I don't miss much performance over USB (I've been too lazy to reconnect it to SATA).
I just bought a used Synology NAS for about $30 and added HDD. Synology OS is awesome, with everything I need wrapped in a nice GUI.
What do you think of using an Odroid XU4 for that (with a big USB3 HDD, basically for multimedia)?
Also for about $300 you can buy a used workstation with relatively beefy specs and two 40Gbps fiber cards.
If you consider peace of mind, backup in the event of fire, not having to replace hard drives after failure, energy cost (and Apple running iCloud on renewables)... the $10/month iCloud account (2 TB) is quite cost-competitive with doing it myself...
My local network speed and latency are much better, to the point that I can use my NAS as a local disk for some applications (e.g. Lightroom) - no way I could do that with a cloud-only storage solution.
Depends on your requirements. I have 2x12TB drives in a NAS that I rsync. I backup some of it to the cloud (software repos, processed photos, misc documents, etc - maybe 200-300GB) and a lot of it just stays locally (raw video, linux iso, all my original, unprocessed photos, etc).
I also have an old 10TB disk in a USB enclosure encrypted with LUKS that sits in my cabinet at work which serves as my off-site backup. Every ~6 months or so I bring it back and rsync it with a script and take it back to work.
Unfortunately, Apple doesn't even offer anything larger than 2TB, which is terribly small for anyone who's serious about photos or video. I have hours and hours of 4k60 (~400MB/min) video from events and vacations that I want to store. And this was taken using the smartphone in my pocket; I'm not a professional.
What’s your estimate for the ~cost of the setup, and how many years do you think it will last? Curious if the cost/month is similar to what I calculated.
Good question, I never considered it. The motherboard, CPU (i7 4770) and RAM are a very, very old desktop setup.
I checked Amazon and the hard drives were purchased on 10/8/2017 and they were $479/ea. The U-NAS case was $150 and the SSD was a random 120GB SSD I had after upgrading my desktop (it was the smallest of the "spare" SSDs I had in a drawer in my desk).
The 8TB HDD were Seagate Archive Drives that used to be the primary storage until I upgraded, so they became offsite backups. Those were purchased 5/8/2016 for $248. The external enclosures I use were $19.99 ("ORICO Toolfree blah blah" I can get you the exact model if you're interested but I assume you can find something newer and better).
So direct cost, I'm not really sure. A lot of it was "free" for me in the sense that it was spare hardware. But should be enough for you to guestimate what a setup would cost.
How long it will last is really hard to say. I monitor the disk health via SMART, and by keeping two copies I feel pretty safe. Luckily I'm also a Google Fiber customer (which means 1TB free Google Drive) and I sync a lot of critical stuff (software repos, personal photos, documentation like taxes, etc) to Google Drive for a third backup (following the 3-2-1 rule). Right now I still have >1TB free on my primary 12TB drive, and the growth rate isn't very high. The most intensive thing I store is video from my iPhone, so that's really what dictates my rate of growth.
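A hypothetical root crontab entry for that kind of SMART monitoring might look like this (device name and log path are placeholders; smartd from the smartmontools package can also do this natively with mail alerts):

```
# Weekly SMART health check, Sundays at 03:00
0 3 * * 0 /usr/sbin/smartctl -H /dev/sda >> /var/log/smart-sda.log 2>&1
```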
I like to keep local backups in addition to cloud backups.