I'm of the "my NAS/home server is an old Laptop and a bunch of more or less random drives in a box" persuasion, but this is great work and great documentation! :)
The solution works really well though - a reliable, name-brand SSD (sic) in an old $100 Dell Latitude as the workhorse. The Latitudes allow changing the charging profile in BIOS to "always plugged in" among other power-friendly things (disable TurboBoost, etc.). Built-in UPS, keyboard, monitor, wifi/eth; portable, quiet, and with the right tuning (disabling a lot of modules in GRUB or module blacklists) it runs most of the time without kicking the fan on. I run two: one doing all the work and another backing it up. I run with only a partition encrypted (requires manual unlock after boot, my choice) so that I can remote login after boot to mount that, and never have to actually open the laptop unless, say, wifi goes south. If the laptop dies (they never do), just pull the SSD and slap it in another one; it's Linux, it'll boot.
I agree, especially about the integrated UPS and peripherals.
In my case, I currently use a Macbook Air M1 (my personal one which I currently don't need as a laptop). Absolute overkill and in some aspects annoying - only two USB-C ports for example - but it was available and it's fanless and doesn't need much power.
What storage enclosure (Thunderbolt?) and network filesystem do you use?
When there's a MacOS security update and the Macbook reboots, do NAS functions work before a local user has logged in?
Just a random powered USB-C Hub with a few external drives on one port and a Thunderbolt SSD I had from an old project on the other. For now, I just use the SMB server built into MacOS because I've not gotten around to installing Asahi on it.
I think I turned auto update off on this machine but if it reboots, you would have to login first. Doesn't bother me, though. I don't have any uptime requirements.
You can fully control when macOS does updates of any kind with a couple of UI toggles.
Any launch daemon service will load and/or start at boot; only user-domain services (like the Dock or menu bar) require a login.
How do you deal with the very low number of SATA ports and slow Ethernet speeds?
I see the appeal but I’m curious about the performance.
Depends on what you need or want to do with it, I guess. Storage via USB-C or Thunderbolt and Ethernet over an adaptor works fine and is plenty fast for my music and video library and as a backup target.
> I’ll end up releasing the raw shapr3D files after a month or two of debut online; I like to make sure I work out any earth shattering issues there may be prior to dumping the source for everyone to poke freely
Taking a look at the 3D files for the chassis was literally the reason I opened the blog post; sadly they haven't been published yet!
In the meantime, what are people's favorite 3D printed desktop/NAS chassis? I'm currently sitting and sketching one myself and looking for inspiration :)
There is a makerworld link at the bottom of the page:
https://makerworld.com/en/models/1644686-n5-mini-a-3d-printe...
Isn't it a bad idea to put magnets right next to magnetic hard drives?
I've got a handful of 4-bay Mediasonic Proboxes, they're about as barebones of a JBOD enclosure as you can get and run about $110 each unless they're on sale. I have stuffed these things with over 40TB of drives and they've worked solidly for many years.
They typically present as 4 USB-connected external drives, which you can softraid into whatever. They also support eSATA. At one point I had 3 of these plugged into a spare Intel NUC and used it as a NAS running first Windows, then Ubuntu, then unRAID over about 5 years. It wasn't blazing fast, but got the job done for home use.
I love seeing cheap builds like this. IMHO NAS boxes are way overpriced and underpowered. I got multiples of the performance and capability by piecemeal hacking together my old setup versus buying the all-in-one systems that will lose vendor coverage in 2 years.
Mine would randomly (or at least not consistently) put drives into hibernation and spin them down. This created havoc.
It's already obsolete. For USD 699 (no RAM and no storage) you can get this:
6 SATA ports, 5 NVMe slots, OCuLink and much more.
https://aoostar.com/products/aoostar-wtr-max-amd-r7-pro-8845...
Guessing you just read the first sentence? The build the article is about is a $200 DIY build, not the N5 mentioned in the intro.
It's $200 plus board/CPU; the $699 AOOSTAR includes the board/CPU.
The intention is to get one of those ~$100 N100 minipcs for the guts. Still coming in under budget vs the commercial option.
All that being said, there are more affordable NAS cases that would eliminate the labor. This is both an art piece and a because-I-can kind of venture.
Thanks for posting this, it looks like an amazing value. I need to rebuild my home server/NAS soon, and finding a removable-sled backplane is hard enough by itself. I'd prefer a 1U layout, but this is pretty good.
Thanks for posting this. I have several family members & friends at work who can use this AOOSTAR unit. I'd love to install TrueNAS SCALE on it, plus Jellyfin, Tailscale, Docker, and perhaps some VMs via Proxmox. I've been looking for a product just like this and the price is reasonable.
I need to get a NAS, and this just gave me a whole new rabbit-hole to go down. The other builds on their site e.g. (archive.org link because the WP site is struggling) https://web.archive.org/web/20250505085226/https://jackharve... are also very interesting.
I should probably just hook something up with, say, the old Mac Mini I have lying around, but it's fun to consider the options.
Having built a NAS, the actual build is only half the story. There's a whole heap of nonsense you have to go through to tune the software. I ended up bailing and just buying a NAS. Just too many quirks and glitches.
There are good all-in-one software solutions for all this stuff (unraid comes to mind) and I really feel this kind of article is unfinished without going into that kind of thing.
> Having built a NAS, the actual build is only half the story. There's a whole heap of nonsense you have to go through to tune the software.
Having done it, the list for me is pretty much: install using the file system you fancy, set up the RAID you want, install Samba and NFS, optionally Avahi and encryption, and that's pretty much it.
I wouldn’t personally call that nonsense. Is there something I am missing?
> install using the file system you fancy, set up the RAID you want, install Samba and NFS, optionally Avahi and encryption, and that's pretty much it.
It isn’t though.
I got that far and I wasn’t happy. There’s a lot of extra stuff you have to think about when you want a system to run unattended, vs something that’s a workstation.
First thing I noticed was the noise and the heat. Had to install a daemon to power down the hard disks when not in use. Then when I wanted RAID I had to go spend half a day learning to do that. Then there were software updates. So on and so forth.
I also had some lingering concerns about power consumption which meant I was going to have to get out some power monitors and read some stuff about Linux power tuning.
After a while the sense of “all in one” like packages TrueNAS and unraid started to make sense.
I was all set for unraid but luckily I came into a bit of extra money and I was able to buy some of my life back with a simple box with all of this designed and configured by people who know what they are doing.
Now all I need to do is manage storage and services which is all I should have to do with a NAS.
All this stuff is interesting to learn but there’s only so many hours in the day.
By all means, if you want a hobby project, or you're not concerned with non-functional aspects like reliability, power consumption, security, and maintenance, or if you're very poor, then building your own NAS makes sense.
> Had to install a daemon to power down the hard disks when not in use
It depends on the drives you buy, of course, but unnecessary spin-up/spin-down cycles will wear out drives faster. Many NAS drives will keep running intentionally for this reason. If the drives didn't spin down by themselves, it's possible they weren't designed to start/stop that often.
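If you want to see how often your drives actually cycle, SMART keeps counters for it. A minimal sketch (assuming smartmontools is installed; the attribute names and device paths below are examples and vary by drive):

    # Sketch: print start/stop and load-cycle counters via smartctl.
    import subprocess

    def spin_cycle_counts(device: str) -> None:
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=False).stdout
        for line in out.splitlines():
            if "Start_Stop_Count" in line or "Load_Cycle_Count" in line:
                cols = line.split()
                print(device, cols[1], "raw =", cols[-1])

    for dev in ["/dev/sda", "/dev/sdb"]:  # adjust to your drives
        spin_cycle_counts(dev)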
My personal NAS solution is Debian (for stability and unattended updates) on a motherboard set to low-power mode, with ZFS plus the usual software for accessing the NAS installed. I could write a script that reboots the system based on /var/run/reboot-required, but a monthly scheduled reboot works fine too.
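For what it's worth, a minimal sketch of that reboot-required check (assuming a Debian-like system where unattended-upgrades drops the flag file; you'd run it from cron or a systemd timer at a quiet hour):

    # Sketch: reboot only when Debian has flagged that a reboot is needed.
    import os
    import subprocess

    FLAG = "/var/run/reboot-required"

    if os.path.exists(FLAG):
        pkgs = FLAG + ".pkgs"  # packages that requested the reboot, if the file exists
        if os.path.exists(pkgs):
            with open(pkgs) as f:
                print("Reboot requested by:", ", ".join(line.strip() for line in f))
        subprocess.run(["systemctl", "reboot"], check=True)
    else:
        print("No reboot required.")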
In my experience, this setup rarely requires any maintenance. It's quite boring for a homelab project. Once every few years I need to upgrade to a newer version of Debian (last time I went from 10 to 11 to 12 in one go) but even that isn't much of a spectacle if you don't mess with non-Debian package repositories.
I have basically the same setup using Arch, because I don't trust distributions that patch a lot, and btrfs, because I use disks of mismatched sizes.
I used to change the frequency governor for my cpu on my previous NAS but the default Linux setup (schedutil) is now perfectly adequate. Same with disk power, default is fine.
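If you want to double-check which governor you're actually on, here's a quick sketch reading the standard Linux cpufreq sysfs path (assumes cpufreq is available):

    # Sketch: print the active frequency governor for each CPU.
    from pathlib import Path

    cpus = Path("/sys/devices/system/cpu")
    for gov in sorted(cpus.glob("cpu[0-9]*/cpufreq/scaling_governor")):
        print(gov.parent.parent.name, gov.read_text().strip())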
The whole thing just chugs along happily with basically no effort on my side, apart from the occasional upgrade requiring manual intervention.
It certainly didn't require anything which I would consider nonsense - and I have seen plenty of things I would call that in the past, looking at you, OpenLDAP and PAM. Sure, you need to have a vague idea of what RAID is, but then again, building a NAS and expecting to need zero knowledge of storage seems extremely ambitious to me. Then again, I realise that what I consider basic knowledge taken for granted might not seem so basic from someone else's point of view.
I thought TrueNAS was mostly plug and play for standard use cases nowadays?
Yes TrueNAS is supposed to be good too.
The black-and-white pic just to show the brown color of the Noctua fan is taking me out
To add something useful, this is very cool but I wish it came in JBOD form so I could run it off a Mac Mini instead, and the pricing listed in TFA seems like an idealistic lower limit.
Why so much power for a simple NAS? Wouldnt a Pi5 be sufficient?
Pis are a pain when it comes to power, storage and expansion.
They're hardly competitive on price compared to an old NUC, either. Even a new N100 with RAM and SSD can be purchased cheaper than a barebones Pi 5.
Pis lack IO, so they have quite poor performance with a lot of drives. They also aren't really very competitive price-wise compared to low-end AMD64 machines, which have considerably more computation and IO capability at this low-end price point. What Pis are good at is very low power consumption and being small; once you add 4 hard drives, small and low-power don't matter much anymore, and you won't take up much more power and space moving to mini-ITX.
ARM64 also limits what software you can run, as does the anaemic GPU hardware. Pis are good for other things and very small servers (like Pi-holes), but once multiple drives are involved they aren't the right solution anymore.
Thanks.
It would be possible with a Pi or any other ARM or RISC-V board, but most of those don't have NVMe or SATA and have very limited bandwidth to the USB ports, PCIe, or Ethernet. If you don't need high performance that can be perfectly usable, but it will run into bottlenecks if you push beyond basic use.
Plex/Jellyfin, torrents, Immich, other server stuff.
Interesting, I was just mumbling to myself the other day that I really needed to find someone to fold sheet metal before I needed another computer case.
But this article points out that there really isn't any reason not to make this stuff out of printable plastic.
I'd guess the only real issue is cost.
Or: don't build a NAS at all and just put the hard drives in the machine you intend to use. This has many advantages including: price, performance, reliability, not having to run an entire second computer (just leave the computer on and sharing files if other computers on your LAN also use the files), intuitive and natural file management using non-proprietary software, simple drive formats that are easily recoverable with simple/normal tools if there is a problem, no silly tuning stuff because the files are already there.
NASes do make sense in some contexts. But over the last decade they've become something people do just because it's hip. It usually makes much more sense to just put your storage drives in the computer you'll use them in.
I wouldn't be surprised if a lot of people setting up NASes didn't have a desktop to put drives into. Lots of people are only using laptops or phones nowadays
That's true. But those aren't the people thinking about setting up NASes. Or at least not in my tech support experience. Those people will just have giant fragile octopuses of external USB drives.
But it is a problem that most people don't have capable computers anymore. :\
Your comment makes much more sense if you also want to have a desktop, and only in some network configurations. It's most likely the minority use case, but worth considering if it might fit.
A lot of people just use laptops and tablets / phones with limited storage expansion space at home, and want a dedicated device for a firewall/router anyways (no worries about reboots or other interruptions due to personal work on that computer, etc).
One more powerful computer to use as both NAS and firewall (and for various shared services...) makes a lot of sense much of the time - hence the popularity of Proxmox.
I don't understand this. I use a laptop that I don't want to carry many files on, and that I often bring with me when I leave the house. I have a NAS at home to hold my data and run services, including CI. It seems perfectly fine, even wonderful. I use mdadm and cryptsetup, which are non-proprietary pieces of software that set up simple, documented drive formats that are easily recoverable with simple/normal tools if there is a problem.
I have no idea how having a NAS would make somebody hip. Very strange disco. Everybody should have one.
The M.2 to SATA adapter: does this serialise the disk IO or does it use multiple lanes of the PCIe bus?
I mean, price-wise and avoiding USB, it's a nice build. I have something not a million miles off using a ZimaBlade and a PCIe card doing SATA, with an AliExpress SATA disk box and an ATX PSU. I even got 12V off the ATX to power the ZimaBlade via DC-in.
> Does this serialise the disk IO or does it use multiple lanes of the PCIe bus?
That's not how PCIe lanes work. Regardless of how many PCIe lanes are used to connect the CPU to the HBA, only one packet will be traversing that link at a time (striped across the lanes), but that imposes no direct constraints on what the other side of the HBA is doing with its several SATA links simultaneously. With 16Gb/s of PCIe bandwidth, that HBA should have no trouble keeping more than one 6Gb/s SATA link busy.
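Rough numbers, as a sketch (raw line rates only; real payload throughput is lower once you account for encoding and protocol overhead):

    # Back-of-envelope: PCIe gen3 x2 uplink vs. the SATA links behind it.
    PCIE_GEN3_GBPS_PER_LANE = 8          # raw line rate per lane
    lanes = 2                            # e.g. an x2 M.2 SATA controller
    uplink_gbps = PCIE_GEN3_GBPS_PER_LANE * lanes   # ~16 Gb/s to the host
    sata_gbps = 6                        # SATA III link rate per port

    print(f"uplink ~{uplink_gbps} Gb/s, one SATA port {sata_gbps} Gb/s")
    print(f"SATA links it can keep saturated at once: ~{uplink_gbps // sata_gbps}")
    # A spinning disk tops out around 2 Gb/s sequential, so several spinners
    # together still rarely saturate the x2 uplink in practice.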
Thanks for the cluestick hit. This implies storage on the HBA or SSD to buffer data being committed, to exploit faster bus speed than write speed to media.
I guess you could run SATA SSDs with this, but I suspect more people would be running spinners... Spinners certainly have buffers to manage the difference between bus speeds and media speeds; I think most SATA SSDs do too.
The HBA may not have or need much buffering, as transfer to/from system memory with DMA should be pretty fast... I'd guess enough memory to fully buffer one outgoing command for each port, and one or two to buffer a response for each port as well; they may have more, but not much is necessary.
Some multi-port SATA cards do terrible things though. Rather than using a proper controller for the number of ports, they have a one-port controller and then a SATA port multiplier. These do result in a meaningful restriction in bandwidth, and some multipliers require waiting for a response before communicating with another drive, which would result in poor throughput for most NAS workflows.
A particularly terrible idea would be a port multiplier hooked up as an M.2 SATA device, rather than a PCIe SATA controller. I don't know that I've seen that one (edit: found one on Amazon), but I'm pretty sure I've seen the controller + port multiplier combo. It's definitely worth finding out what controller is on the board before buying to avoid surprises.
I think this latter design pattern is what the M.2 six-port SATA cards do. It's typically something like an ASM1166 chip (the example below) or a JMicron equivalent.
They say the bandwidth far exceeds the individual SATA port speeds. But there's little to no visible buffer on the card.
It's not an HBA so much as a port "multiplier":
https://www.newegg.com/orico-pm2ts6-bp-pci-express-to-m-2-ca...
Most of them seem to be x2 Gen3, so a decent fit for an N100 build that is Gen3 anyway.
I've seen some reports of people running into heat issues with them if they run them full, though, so they end up using 3 out of 5 slots, etc.
I dreamed of diskless and got six 2TB SSDs in SATA form factor. They run hot. I wound up with a Noctua fan for the disks; the ZimaBlade itself is fanless, passively cooled. And now with an ATX PSU and a mini-PC case I have a power-supply fan, and the 6-disk chassis I got has very quiet fans built in. But it is quiet at least.
Probably depends on the type of SSD. They shouldn't really run hot.
I went with a bunch of second-hand enterprise ones (also SATA) and they seem to be working out great.