jrrv 16 minutes ago

What makes a motherboard a NAS motherboard, precisely? I've got a decent Mini-ITX sitting around and I've been contemplating setting up/getting a NAS. Would be nice if I could re-use what I already have and save some money.

  • nickdothutton 10 minutes ago

    For me: ECC RAM, a large enough number of SATA ports, and the ability to run the rest of my software stack well enough (in my case FBSD and ZFS).

mvkel 8 hours ago

Wait. You build a new one every -year-?! How does one establish the reliability of the hardware (particularly the AliExpress motherboard), not to mention data retention, if its maximum life expectancy is 365 days?

  • SirFatty 4 hours ago

    How else is one to get the clicks?

    • cube00 3 hours ago

      Plus the commission from the undisclosed Amazon affiliate links in the post.

      They're tagged for the post and year, so it must be worth it to go to that trouble rather than using a generic tag for the whole blog.

      tag=diyans2024-20, tag=diynas2025-20, tag=diynas2026-20

  • p1necone 8 hours ago

    Looks like they built a new NAS but kept using the same drives, which, given the number of drive bays in the NAS, probably make up a large majority of the overall cost of something like this.

    Edit: reading comprehension fail - they bought drives earlier, at an unspecified price, but they weren't from the old NAS. I agree: when drive lifetimes are measured in decades and huge amounts of TBW, it seems pretty silly to buy new ones every time.

    • adastra22 6 hours ago

      MB and other elements are more concerning than the drives.

      • zdragnar 3 hours ago

        For system failure, yes, but not if data retention and recovery is your primary concern.

        When building a device primarily used for storing personal things, I'd much prefer to save money on the motherboard and risk that failing than to skimp on the drives themselves.

        • bostik 16 minutes ago

          Don't skimp on the power supply either. A dodgy PSU can torch all devices attached to it.

          How do I know? I've had two drives and one MB fail in quick succession thanks to a silently failing power supply.

        • aynyc 2 hours ago

          You actually want a reliable MB & RAM to ensure data doesn't get corrupted in memory first, since you have various ways of writing data to disks that offer you resiliency.

        • embedding-shape 3 hours ago

          Eh, cheap motherboards aren't harmless to the rest of the hardware. I personally don't skimp on motherboards, and would much rather skimp on the drives themselves, as I have redundancy and 1-2 drives failing wouldn't hurt too much. And data retention is my top priority.

          Motherboards have fried connected hardware before: poor grounding/ESD protection, firmware bugs combined with aggressive power management, wiring weirdness, and power-related faults have all broken people's drives.

          What I've never heard about is a drive breaking something else in a system, but broken motherboards have taken friends with them more than once.

    • throwaway2037 3 hours ago

      This is the funniest edit I have read in a while.

VTimofeenko 10 hours ago

Built a NAS last winter using the same case. Temps for the HDDs used to be in the mid-50s C with no fan and about 40 with the stock fan. The case-native backplane thingamajig does not provide any sort of PWM control if the fan is plugged in, so it's either full blast or nothing. I swapped the fan for a Thermalright TL-B12 and the HDDs are now happily chugging along at about 37 with the fan barely perceptible. hddfancontrol ramps it up based on the output of smartctl.
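
In case anyone wants to wire this up by hand, here's a minimal sketch of that temperature-to-PWM loop in Python (the hwmon path, drive list and curve breakpoints are my assumptions; the real hddfancontrol handles spun-down drives and much more):

    import subprocess, time

    PWM = "/sys/class/hwmon/hwmon2/pwm1"   # fan duty, 0-255; path varies per board
    DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

    def temp(dev):
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            # SMART attribute 194; the raw value (10th column) is degrees C
            if "Temperature_Celsius" in line:
                return int(line.split()[9])
        return 0

    while True:
        hottest = max(temp(d) for d in DRIVES)
        # Linear ramp: floor duty at 35 C and below, full blast at 50 C
        frac = min(1.0, max(0.0, (hottest - 35) / 15))
        with open(PWM, "w") as f:
            f.write(str(int(60 + frac * 195)))   # 60 keeps the fan from stalling
        time.sleep(30)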

The case can actually fit a low-profile discrete GPU; there's about a half-height card's worth of space.

  • embedding-shape 3 hours ago

    > any sort of pwm control if the fan is plugged in, so it's either full blast or nothing

    Got a new network switch that runs somewhat hot (TP-Link) and it's behaving the same way: the built-in fan runs either not at all or at 100% (and noisily at that). I briefly installed OpenWRT on it, before discovering the 10GbE NIC didn't work with OpenWRT, and it had much better fan control. Why is it so hard to just place a basic curve on the fan control based on the hardware temperature? All the sensors and controllers are apparently there; it's just a software thing...

    • citrin_ru 2 hours ago

      I have the impression that neither noise level nor power consumption is a priority for home network equipment manufacturers. After moving to a new house and connecting to another ISP, I've got an ISP modem-router which: 1. has a fan, and while it's quiet it's not silent; 2. consumes around 20 W - not much, but working 24x7 it would cost around £45/year at current electricity rates.

      I think it's technically possible to make a modem which consumes less power and uses passive cooling, but I don't think they (the ISP and the device manufacturer) care.

anentropic 40 minutes ago

I used to have big HDDs attached to my Thunderbolt dock.

But it was always annoying having to 'eject' them before unplugging the laptop from the dock. Or sometimes overnight they would disconnect themselves and fill up my screen with dozens of "you forgot to eject" notifications. Yes I'm on macOS.

Do NASes avoid this issue? Or do you still have to mount/unmount?

Why does there seem to be much more of a market for NAS than for direct-attached external HDDs?

Eventually I got a new laptop with bigger SSD, started using BackBlaze for backups, and mostly stopped using the external HDDs.

I always assumed NAS would be slower and even more cumbersome to use. Is that not the case?

  • yabones 25 minutes ago

    A NAS will use a network file protocol (SMB/NFS/AFP/SFTP etc) to access data rather than direct disk access, so the types of failures are different. Generally you don't really have to "eject" but disconnecting during a large transfer can cause incomplete writes.

    The main risk with directly attached storage is that most kernels will do "buffered writes" where the data is written to memory before it's committed to disk. Yanking the drive before writes are synced properly will obviously cause data loss, so ejecting is always a good idea.
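
    A toy illustration of that distinction in Python (the mount point is made up; flush() only drains the userspace buffer, fsync() is what forces the kernel's cache out to the platters):

        import os

        with open("/mnt/external/backup.bin", "wb") as f:   # hypothetical path
            f.write(b"important data")
            f.flush()                # userspace buffer -> kernel page cache
            os.fsync(f.fileno())     # page cache -> disk; only now is a yank safe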

    Generally, NAS is a bit safer for this type of storage because the protocols are built with the assumption that the network can and will be interrupted. As a result, things are a bit slower since you're dealing with network overhead. So, like everything, there are some trade-offs to be made.

dllu 11 hours ago

Very sad that HDDs, SSDs, and RAM are all increasing in price now, but I just made a 4 x 24 TB ZFS pool with Seagate Barracudas on sale at $10/TB [1]. This seems like a pretty decent price, even though the Barracudas are rated for 2400 hours per year [2] - but that is the same spec the refurbished Exos drives are rated for.

By the way, it's interesting to see that OP has no qualms about buying cheap Chinese motherboards, but splurged on an expensive Noctua fan when the Thermalright TL-B12 performs just as well for a lot less (although the Thermalright could be slightly louder and perhaps have a slightly more annoying noise spectrum).

Also, it is mildly sad that there aren't many cheap low-power (< 500 W) power supplies in the SFX form factor. The SilverStone Technology SX500-G 500 W SFX that was mentioned retails for the same price as 750 W and 850 W SFX PSUs on Amazon! I've heard good things about getting Delta Flex 400 W PSUs from Chinese websites --- some companies (e.g. YTC) mod them to be fully modular, and they are supposedly quite efficient (80 Plus Gold/Platinum) and quiet, but I haven't tested them out yet. On Taobao, those are like $30.

[1] https://www.newegg.com/seagate-barracuda-st24000dm001-24tb-f...

[2] https://www.seagate.com/content/dam/seagate/en/content-fragm...

  • meindnoch 4 hours ago

    >I just made a 4 x 24 TB ZFS pool

    How much RAM did you install? Did you follow the 1GB per 1TB recommendation for ZFS? (i.e. 96GB of RAM)

    • dwood_dev an hour ago

      That's only for ZFS deduplication, which you should never enable unless you have very, very specific use cases.

      For normal use, 2GB of RAM for that setup would be fine. But more RAM is more readily available cache, so more is better. It is certainly not even close to a requirement.

      There is a lot of old, often repeated ZFS lore which has a kernel of truth but misleads people into thinking it's a requirement.

      ECC is better, but not required. More RAM is better, not a requirement. L2ARC is better, not required.
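
      If you're curious what the ARC actually holds, Linux/OpenZFS exposes the counters in /proc; a quick sketch (the key names are standard arcstats fields):

          def arcstats():
              stats = {}
              with open("/proc/spl/kstat/zfs/arcstats") as f:
                  for line in f.readlines()[2:]:   # skip the two header lines
                      name, _, value = line.split()
                      stats[name] = int(value)
              return stats

          s = arcstats()
          print(f"ARC size {s['size'] / 2**30:.1f} GiB "
                f"(target {s['c'] / 2**30:.1f}, max {s['c_max'] / 2**30:.1f})")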

    • execution 2 hours ago

      I think you should be fine with 64GB (4x16GB ECC), I have 8x10TB RAID-Z2 and it uses around 34GB.

    • zfs-myths 2 hours ago

      Some myths never die, I guess..

  • Glemkloksdjf 4 hours ago

    Are you running this in RAID-Z2?

    I'm way too bothered by how long it would take to resilver disks that size.

  • WarOnPrivacy 11 hours ago

    > $10 / TB

    That's a remarkably good price. If I had $1.5k handy I'd be sorely tempted (even tho it's Seagate).

    • dddgghhbbfblk 4 hours ago

      It's a good price but the Barracuda line isn't intended for NAS use so it's unclear how reliable they are. But it's still tempting to roll the dice given how expensive drive prices are right now.

    • execution 2 hours ago

      I was tempted by 4 x 28TB (Recertified Seagate ST28000NM000C) but could not work out what I would use it for.

    • rubatuga 11 hours ago

      I've recently shucked some Seagate HAMR 26TB drives; hopefully they last.

  • ghthor 11 hours ago

    Not surprised by the fan; once I went Noctua I didn't go back.

dewey 30 minutes ago

Maybe I'm out of the loop, but I've never heard of "Topton". As this brand is mentioned 16 times in this one blog post, I'm just assuming it's a sponsored blog post and not an objective overview.

  • jffry 16 minutes ago

    Every time I've looked into doing a DIY NAS in the last few years, Topton seems to come up - as far as I can tell it's because they make Mini-ITX boards with a boatload of SATA ports.

mzhaase 8 hours ago

I would like to point people to the Odroid H4 series of boards: N97 or N355, 2x 2.5GbE, 4x SATA, 2 W at idle. There are also extension boards to turn it into a router, for example.

The developer, hardkernel, also publishes all the relevant info, such as board schematics.

  • andruby an hour ago

    I've had an H3 for a few years and it runs amazingly: very low power usage, small footprint and great stability. I run it with an M.2 SSD for power considerations.

    Before that I had a full-size NAS with an efficient Fujitsu motherboard, pico-PSU, 12V adaptor and spinning HDDs. That required so much extra work for so little power-efficiency gain vs the Odroid.

  • antonkochubey 5 hours ago

    And the best feature is they have in-band ECC, which can correct one-bit and detect two-bit errors. No other Alder Lake-N or Twin Lake SBC exposes this feature in UEFI.

  • kajika91 7 hours ago

    I also have an older Odroid HC4; it's been running smoothly for years. Not only can I not justify $1000 for a NAS as the current post implies, but the power consumption seems crazy to me for mere disk-over-network usage (using a 500W power supply).

    I like the extensive benchmarks from hardkernel. The only issue is that any ARM-based product is very tricky to boot, and the only savior is Armbian.

speff 11 hours ago

Q - assuming the NAS is strictly used as a NAS and not as a server with VMs, is there a point in having a large amount of RAM? (large as in >8GB)

I'm not sure what the benefit would be since all it's doing is moving information from the drives over to the network.

  • aunty_helen 19 minutes ago

    People get carried away with their home lab setups. There's a distinct type of person that thinks they need 100TB of storage in their own house.

    If you're running a NAS for a company that has many users and multi-disk access at the same time, sure. But then you're probably not buying HDDs to shuck and cheap components off eBay.

  • firecall 9 hours ago

    I am not at all an expert, I can only share my anecdotal unscientific observations!

    I'm running a TrueNAS box with 3x cheap shucked Seagate drives.*

    The TrueNAS box has 48GB RAM, is using ZFS and is sharing the drives as a Time Machine destination to a couple of Macs in my office.

    I can un-confidently say that it feels like the fastest TM device I've ever used!

    TrueNAS with ZFS feels faster than Open Media Vault(OMV) did on the same hardware.

    I originally set up OMV on this old gaming PC, as OMV is easy. OMV was reliable, but felt slow compared to how I remembered TrueNAS and ZFS feeling the last time I set up a NAS.

    So I scrubbed OMV and installed TrueNAS, and purely based on seat-of-pants metrics, ZFS felt faster.

    And I can confirm that it soaks up most of the 48GB of RAM!

    TrueNAS reports ZFS Cache currently at 36.4 GiB.

    I don't know why or how it works, and it's only a Time Machine destination, but there we are - those are my metrics and that's what I know, LOL

    * I don't recommend this. They seem unreliable and report errors all the time. But it's just what I had sitting around :-) I'd hoped by now to be able to afford to stick 3x 4TB/8TB SSDs of some sort in the case, but prices are tracking up on SSDs...

  • mewse-hn 11 hours ago

    ZFS uses a large amount of RAM; I think the old rule of thumb was 1GB of RAM per 1TB of storage.

    • yjftsjthsd-h 10 hours ago

      That's only for deduplication.

      https://superuser.com/a/993019

      • Lammy 10 hours ago

        I do like to deduplicate my BitTorrent downloads/seeding directory with my media directories so I can edit metadata to my heart's content while still seeding forever without having to incur 2x storage usage. I tune the `recordsize` to 1MiB so it has vastly fewer blocks to keep track of compared to the default 128K, at the cost of any modification wasting very slightly more space. Really not a big deal though when talking about multi-gibibyte media containers, multi-megapixel art embeds, etc.

        • zenoprax 7 hours ago

          Have you considered "reflinks"? Supported as of OpenZFS 2.2: https://github.com/openzfs/zfs/pull/13392

          I haven't used them myself yet, but it seems like a nice fit for things like minor metadata changes to media files: the bulk of the file is shared and only the delta between the two is stored.
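
          On Linux the clone itself is just the FICLONE ioctl (the same thing cp --reflink=always uses); a sketch with made-up paths, assuming both files live on the same block-cloning-enabled dataset:

              import fcntl

              FICLONE = 0x40049409   # from linux/fs.h: make dst share src's blocks

              with open("seeding/movie.mkv", "rb") as src, \
                   open("library/movie.mkv", "wb") as dst:
                  fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
              # dst now shares storage with src; later metadata edits to dst only
              # allocate fresh blocks for the records that actually change.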

        • yegle 9 hours ago

          cross-seed: https://www.cross-seed.org/

          • bscphil 8 hours ago

            I believe they are saying they literally edit the media files to add / change metadata. Cross-seeding is only possible if the files are kept the same.

      • ekropotin 7 hours ago

        ZFS also uses RAM for a read-through cache, aka the ARC. However, I'm not sure how noticeable the effect of increased RAM would be - I assume it mostly benefits read patterns with high data reuse, which is not that common.

    • 01HNNWZ0MV43FF 11 hours ago

      Huh. More than just the normal page cache on other filesystems?

      • WarOnPrivacy 11 hours ago

        Yes. Parent's comment matches everything I've heard. 32GB is a common recommendation for home lab setups. I run 32 in my TrueNAS builds (36TB and 60TB).

        • magicalhippo 8 hours ago

          You can run it with much less. I don't recall the bare minimum but with a bit of tweaking 2GB should be plenty[1].

          I recall reading some running it on a 512MB system, but that was a while ago so not sure if you can still go that low.

          Performance can suffer though, for example low memory will limit the size of the transaction groups. So for decent performance you will want 8GB or more depending on workloads.

          [1]: https://openzfs.github.io/openzfs-docs/Project%20and%20Commu...

      • tekla 11 hours ago

        ZFS will eat up as much RAM as you give it, since it caches files in memory as they're accessed.

        • ac29 10 hours ago

          All filesystems do this (at least all modern ones, on linux)

  • PikachuEXE 11 hours ago

    If you use ZFS you might need more RAM for performance?

  • loloquwowndueo 11 hours ago

    Caching files in ram means they can be moved to the network faster - right?

    • ac29 10 hours ago

      Depends on the network speed. At 1Gbps a single HDD can easily saturate the network with sequential reads. A pair of HDD could do the same at 2.5Gbps. At 10Gbps or more, you would definitely see the benefits of caching in memory.

      • butvacuum 10 hours ago

        Not as much as expected. I have several toy ZFS pools made of ancient 3TB WD Reds, and anything remotely home-grade (striped mirrors; 4-, 6-, 8-wide raidz1/2) saturates the disks before 10gig networking does. As long as it's sequential, 8GB or 128GB doesn't matter.

    • speff 11 hours ago

      Makes sense. I didn't know if the FS used RAM for this purpose without some specialized software. PikachuEXE and mewse-hn mentioned ZFS; looks like it has native support for caching frequent reads [0]. Good to know.

      [0]: https://www.truenas.com/docs/references/l2arc/

  • thefz 5 hours ago

    ZFS cache.

  • justsomehnguy 10 hours ago

    As others have said already, if you have more RAM you can have more cache.

    Honestly it's not that needed, but if you would really use 10Gbit+ networking, then 1 second is ~1.25GB. So depending on your usage you may never see more than 15% utilization, or have it almost maxed out if you're constantly running something on it, e.g. torrents or using it as a SAN/NAS for VMs on some other machine.

    But for rare, occasional home usage, neither 32GB nor this monstrosity and complexity makes sense - just buy some 1-2 bay Synology and forget about it.

    • ekropotin 7 hours ago

      I won’t be able to sleep having my data just on 1 disk

      • kalleboo 41 minutes ago

        No matter what you should have an off-site backup as well in case of lightning, flood, fire, virus, etc.

cm2187 4 hours ago

HDDs have to be bought new, as does anything mechanical (e.g. fans). But for motherboards, CPUs, RAM and SSDs, there is great value in buying used enterprise hardware on eBay. It is generally durable hardware that spent a quiet life in a temperature-controlled datacentre, and server motherboards from 5 years ago are absolute aircraft carriers in terms of PCIe lanes and functionality. Used enterprise SSDs are probably more durable than a new retail SSD, plus they have power-loss protection and better performance.

The only downside is slightly higher power consumption. But I just bought a 32-core 3rd-gen Xeon CPU + motherboard with 128GB RAM, and it idles at 75W without disks, which isn't terrible. And you can build a more powerful NAS for a third of the price of a high-end Synology. It's unlikely that the additional 20-30W of idle power consumption will cost you more than that saving.

  • Helmut10001 4 hours ago

    I wouldn't say that being new is an absolute requirement. I recently upgraded my ZFS pool from SATA to SAS HDDs. Since SAS HDDs have much better firmware for early error detection and monitoring, I decided to buy 50% refurbished. Even if I lost half of them, I would still be safe. I also have offsite backups. This setup worked really well for me, and I feel completely confident that my data is safe while not wasting unnecessary resources. Whether to use new or used equipment therefore depends on the setup.

    • cm2187 4 hours ago

      Agreed, but that's taking a risk with your data (whereas if a MB fails, you likely just need to replace it and your data is fine), and HDDs kind of have a finite number of hours in them. Where buying them used makes sense, I think, is for a backup server that you leave off except for the few hours a week when you do an incremental backup. Then it doesn't really matter that the drives have already been running for 3 or 4 years.

    • Glemkloksdjf 4 hours ago

      So you buy used enterprise disks because their error detection is 'better'?

      Do you have any source for this claim? Why would the firmware be so different? Software is cheap; I don't think they would be that different.

      I mean, a used enterprise disk gets sold after running under heavy load for a long time. Any consumer HDD will have a lot less runtime than an enterprise disk.

  • pmontra 4 hours ago

    Maybe 75 W without disks is not terrible, but it's not good. My unoptimized ARM servers idle at about 3 or 4 W and add another 1 to 10 W when their SSDs or HDDs are switched on.

    75 W probably needs active cooling. 4 W does not.

    Anyway, you can probably do many more things with that 75 W server.

  • nicolaslem 4 hours ago

    75W idle is 650kWh a year; that's quite significant in the context of a home.
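
    The arithmetic, for anyone who wants to plug in their own tariff (the prices below are just illustrative):

        idle_w = 75
        kwh_per_year = idle_w * 24 * 365 / 1000        # = 657 kWh
        for price in (0.10, 0.30, 0.40):               # per kWh; varies a lot by region
            print(f"{price:.2f}/kWh -> {kwh_per_year * price:.0f} per year")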

    • cm2187 29 minutes ago

      Well, a Synology NAS would probably consume like 30-40w, so we are talking about an excess of $70-100 a year where I live. Depends on one's budget of course, but not really a big deal for me. And certainly less than what I am saving on the upfront cost.

    • Glemkloksdjf 4 hours ago

      260 Euros in Germany. And this heat also has to be moved out

  • gorbachev 4 hours ago

    The datahoarder community frequently utilizes used hard drives.

    That's perfectly fine if your NAS has redundancy, you can recover from 1-2 disk failures, and you're buying the drives from a reputable reseller.

    • t-3 3 hours ago

      I usually buy used hard drives, but prices are strange for all electronics right now. It's a bad time to buy anything computer-related, but especially used goods, which aren't discounted as much as usual and are priced higher due to massive inflation (to the point that refurbished drives I bought 5 years ago have a better dollar/GB ratio than refurbs I can buy today).

  • Glemkloksdjf 4 hours ago

    Enterprise hardware is very seldom a good idea.

    The hardware has different form factors (19"), dual power supplies, and is very loud and very power hungry.

    There are so many good combinations of old and still-functional consumer hardware.

    My main PC 6 years ago had a powerful CPU and an idle load of 15 watts, thanks to the combination of motherboard and the number of components in it (one RAM stick instead of 2, and so on).

    And often enough, by the time you can buy the enterprise hardware, it is so outdated that a current consumer system would beat it without breaking a sweat.

    And if you then need to replace something, it's hard to find, or it's different, like the power supply.

zdw 10 hours ago

The Jonsbo N3 case, which takes 8x 3.5" drives, has a smaller footprint than this, which might be better for most folks. It needs an SFX PSU though, which is kind of annoying.

If you get an enterprise-grade ITX board that has a PCIe x16 slot which can be bifurcated into 4 M.2 form factor PCIe x4 connections, it really opens up options for storage:

* A 6x SATA card in M.2 form factor from Asmedia or others will let you fill all the drive slots even if the logic board only has 2/4/6 ports on it.

* The other ports can be used for conventional M.2 NVMe drives.

  • ehnto 9 hours ago

    That's what I built! It's a great case, the only components I didn't already have lying around were the motherboard and PSU.

    It's very well made, not as tight on space as I expected either.

    The only issue, as you noted, is that you have to be really careful with your motherboard choice if you want to use all 8 bays for a storage array.

    Another gotcha was making sure to get a CPU with integrated graphics; otherwise you have to waste your PCIe slot on a graphics card and have no space for the extra SATA ports.

vbezhenar an hour ago

No ECC, no remote KVM. HP Microserver remains the only viable option.

  • pi-rat 24 minutes ago

    Built-in KVM is not that important any longer, with all the new options for adding an external one: GL.iNet Comet KVM, NanoKVM Pro, JetKVM, etc.

  • conorcleary 26 minutes ago

    For thousands of dollars - plus, what if you don't want remote-in?

  • avhception 40 minutes ago

    Came here to remark about ECC as well.

    The remote KVM options from HP and Dell and whatnot are usually so useless they might as well not exist, except for remote power up/down, so I don't really care about that.

starky 6 hours ago

I think the worry about power consumption in the article is a bit overblown. My NAS has an i5-12600 + Quadro P4000 and uses maybe 50% more power than the one in the article under normal conditions. That works out to maybe $4/month more in cost. Given the relatively small delta, I'd encourage picking hardware based on what services you want to run.

  • silversmith 6 hours ago

    Less power, less heat. Less heat, less cooling required. At some point that allows you to go fanless, and that's very beneficial if you have to share a room with the device.

    • embedding-shape 3 hours ago

      Since this is about NAS, you very likely have a bunch of HDDs connected to it. And if you do, I feel like they'll "out-noise" a lot of cooling solutions as long as the fans are not spinning at max by default.

  • execution 2 hours ago

    Indeed. I always compare it with what I'd pay if I ran it via cloud services, and the electricity cost pales in comparison.

    My NAS is around 100W (6-year old parts: i3 9100 and C246M) which comes to $25/£18 per month (electricity is expensive), but I can justify it as I use many services on the machine and it has been super reliable (running 24/7 for nearly 6 years).

    I will try to see if I can build a more performant/efficient NAS from a mix of spare parts and new parts this coming month (still only Zen 3: 5950X and X570), but it is more of a fun project than a replacement.

  • rr808 2 hours ago

    $4/mo is more than I expected. I always compare to cloud storage and $50/yr is significant.

  • dontlaugh 6 hours ago

    It depends how much electricity costs where you live. I’m quite pleased mine idles at ~15W.

  • queenkjuul 5 hours ago

    I'm with you, but my "NAS" is also really just a server, running tons of other services, so that justifies the power consumption (it's my old 2700X gaming rig, sans GPU).

    But i do have to acknowledge that the US has relatively low power costs, and my state in particular has lower costs than that even, so the equation is necessarily different for other people.

aynyc an hour ago

I did some shopping recently, and the market is very weird right now. Given recent hardware pricing, a pre-built NAS is now actually on par, price-wise, with DIY.

andruby an hour ago

Looking at the Power Consumption section:

How can the total average Wattage be lower than any of the lines it consists of?

Total average power is 66.49W, yet average _Idle_ power is noted as 66.67W.

  • Mashimo an hour ago

    I think the Total is not the combination of the items listed above it. The listed items are just subcategories. See the "Duration" column.

    Out of 108h, he did an 18h burn-in.
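
    In other words it's a duration-weighted average, and any stretch that averaged below idle pulls the total under the idle line. With the durations from the table (the non-idle wattage here is back-solved, not from the post):

        hours = [90, 18]          # idle vs. the rest, out of 108 h total
        watts = [66.67, 65.6]     # idle as quoted; 65.6 is the back-solved filler
        avg = sum(h * w for h, w in zip(hours, watts)) / sum(hours)
        print(avg)                # ~66.49 W, just below the 66.67 W idle figure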

evanjrowley 7 hours ago

I would have chosen the i3-N305 version of that motherboard because it has In-Band ECC (IBECC) support - great for ZFS. IBECC is a very underrated feature that doesn't get talked about enough. It may be available on the N150/N355, but I have never seen confirmation.

  • zenoprax 7 hours ago

    Can you explain why ECC is great for ZFS in particular as opposed to any other filesystem? And if the data leaves the NAS to be modified by a regular desktop computer then you lose the ECC assurance anyway, don't you?

    • supermatt 6 hours ago

      ZFS is about end-to-end integrity, not just redundancy. It stores checksums of data when writing, checks them when reading, and can perform automatic restores from mirror members if mismatches occur. During writes, ZFS generates checksums from blocks in RAM. If a bit flips in memory before the block is written, ZFS will store a checksum matching the corrupted data, breaking the integrity guarantee. That’s why ECC RAM is particularly important for ZFS - without it you risk undermining the filesystem’s end-to-end integrity. Other filesystems usually lack such guarantees.
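
      A toy model of that write path (nothing like ZFS's real on-disk format, just the idea of checksum-at-write, verify-at-read, heal-from-mirror):

          import hashlib

          def write_block(disk, addr, data):
              # checksum is computed while the block sits in RAM -- if a bit
              # already flipped there, we faithfully "protect" the garbage
              disk[addr] = (bytes(data), hashlib.sha256(data).digest())

          def read_block(disk, addr, mirror=None):
              data, stored = disk[addr]
              if hashlib.sha256(data).digest() != stored:
                  if mirror is not None:
                      return read_block(mirror, addr)   # self-heal from the good copy
                  raise IOError("checksum mismatch and no redundant copy")
              return data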

    • adastra22 6 hours ago

      The oversimplified answer is that ZFS’ in-memory structures are not designed to minimize bitflip risk, as some file systems are. Content is hashed when written to memory cache, but it can be a long time before it then gets to disk. Very little validation is done at that point to protect against writing bad data.

  • Alive-in-2025 7 hours ago

    What is the impact on performance? Does it require special RAM? I just heard about this here.

    • gforce_de 7 hours ago

      sorry, the original comment is German - ECC is mandatory!

      Obligatory copypasta: "16GB of RAM is mandatory, no ifs and buts. ECC is not mandatory, but ZFS is designed for it. If data is being read and something somehow ends up wrong in RAM, an actually intact file on the disk could be 'corrected' with an error. So yes to ECC. The problem with ECC is not the ECC memory itself, which costs only a little more than conventional memory; it's the motherboards that support ECC. Watch out with AMD: it often says ECC is supported, but what's meant is that ECC memory runs while the ECC function goes unused. LOL. Most boards with ECC are server boards. If you don't mind used hardware, you can get a bargain with, for example, an old socket 1155 Xeon on an Asus board. Otherwise the ASRock Rack line is recommended: expensive, but power-efficient. A general downside of server boards: booting takes an eternity. Consumer boards spoil you with short boot times; servers often need a good 2 minutes before the actual boot process even begins. So Bernd's server consists of an old Xeon, an Asus board, 16GB of 1333MHz ECC RAM and 6x 2TB disks in a RaidZ2 (Raid6). 6TB is usable net. I somehow like old hardware. I enjoy pushing hardware until it can't go any more. The disks are already 5 years old but don't act up. Speed is great: 80-100MB/s over Samba and FTP. By the way, I don't leave the server running; I switch it off when I don't need it. What else? Compression is great. Although I mostly store data that can't be compressed further (music, videos), the built-in compression gained me 1% of storage space. On 4TB that's about 40GB saved. The Xeon is still a bit bored anyway. As a test I tried gzip-9 compression; that did make it sweat."

bhattisatish 3 hours ago

Are there any tape-based solutions which can be used at home? I don't care about retrieval time; it's more for home archival purposes.

I have two NAS servers (both Synology-based). But I need something where I can back things up and forget about them until I want to restore the stuff. I am looking at a workflow of, say, weekly backups to tape. Update the index. Whenever I want to restore a directory or file, I search the index, find the tape and load it for retrieval.

The NAS can be used for continuous backup (aka Time Machine and Timeshift), and archival happens at a weekly level.

  • progbits 3 hours ago

    If you "back up and forget" there is a good chance you will not be able to restore the tapes when the time comes.

    At least with drives you can run regular health checks a corruption scans. Tape is good for large scale but you must have automation that keeps checking the tapes.

  • mm0lqf 3 hours ago

    Tape drives are generally SAS, so you will need a controller card.

    I've got an HP StorageWorks Ultrium 3000 drive (it's LTO-5 format) connected to one (an LSI SAS9300-4i) in my NAS/file server (HP Z420 workstation chassis). Don't go lower than LTO-5, as you will want LTFS support.

    About £150 all in for the card and drive (including SFF-8643 to SFF-8482 cables etc.) on eBay.

    Tapes are 1.5TB uncompressed and about £10 each on eBay; you'll also want to pick up a cleaning cartridge.

    I use this and RDX (1TB cartridges are 2-4 times the price, but the drives are a lot cheaper and are SATA/USB3, and you can use them like a disk) for offline backup of stuff at home.

    • embedding-shape 3 hours ago

      Not OP, but similar situation, trying to figure out tape archiving, already using SAS.

      However, are there no open formats? The whole LTO ecosystem of course reeks of enterprise, and I'd have expected by now that at least one hardware hacker would have pieced together some off-the-shelf components to build something that is a magnitude cheaper to acquire, maintain and upgrade.

      • uniqueuid 2 hours ago

        Short answer: no

        Tape is really complicated and physically challenging, and there is no incentive for people to invest insane amounts of time in something that has almost no fan base. See the blog post from some time ago about why you don't want tape.

        Edit: https://blog.benjojo.co.uk/post/lto-tape-backups-for-linux-n...

        • embedding-shape 2 hours ago

          > there are no incentives for people investing insane amounts of time for something that has almost no fan base

          Like that has stopped anyone before? :p It probably explains why we haven't seen anything FOSS in that ecosystem yet, though.

fmajid 6 hours ago

I upgraded my home backup server a couple of months ago to a Minisforum N5 Pro, and am very happy with it. It only has 4 3.5” drive slots, but I only use two with 2x20TB drives mirrored, and two 14TB external drives for offsite backups. The AMD AI 370 CPU is plenty fast so I also run Immich on it, and it has ECC RAM and 10G Ethernet.

Keyframe 3 hours ago

This is all fine, but the price comes out around the same as a UGREEN DXP8800, if we're considering price alone.

StrLght 4 hours ago

I did something similar last year. The market for mITX NAS boards is pretty bad. I went for the ASRock N100DC-ITX – it has 2x SATA ports, but there's also a PCIe 3.0 x4 slot.

The main benefits of this board were:

* it's not from an obscure Chinese company

* integrated power supply – just plug in DC jack, and you're good to go

* passive cooling

Really hope they make an Intel N150 version.

  • enchanted-gian 3 hours ago

    Question: what's the problem with a motherboard being from an obscure Chinese company? Is it because it's harder to find replacements, or some other reason? I ask because I recently built my own homelab like 4 months ago and sourced all my parts from AliExpress, which was way cheaper than any name brand on Amazon, and they're all relatively obscure. My homelab is running perfectly though, so why the apprehension?

    • StrLght 2 hours ago

      I am more worried about them in the long term. Reviews are usually not as detailed as I'd like them to be (shout out to ServeTheHome – they're doing a great job on that front), or they're nearly impossible to find.

      For me personally, there are two things I am concerned about:

      1. Issues that can only be resolved via BIOS update. Almost all obscure Chinese SBCs won't get any updates, so you're stuck with whatever issues you encounter.

      2. In case of hardware failures, there's a 0% chance for RMA. You are not getting a replacement or your money back.

esskay 4 hours ago

> HDD have to be bought new

In a DC environment, sure. In a home NAS, not so much. I'm on Unraid and just throw WD recertified drives of varying sizes at it (plus some shucked external drives when I find them on offer); that's one of its strengths and makes it much cheaper to run.

dbalatero 11 hours ago

I researched a bunch of cases recently and the Jonsbo, while it looked good, came up as having a ton of issues with airflow to cool the drives. Because of this, I ended up buying the Fractal Node 804 case, which seemed to have a better overall quality level and didn't require digging around AliExpress for a vendor.

  • no_time 8 hours ago

    Lol, same. All my parts arrived except the 804. The supply chain for these cases appears to be imploding where I live (Hungary): the day after I ordered, it either went out of stock or went up by +50% in all the webshops that are reputable here.

    I'm still a bit torn on whether I made the right call getting the 804, or whether the 304 would've been enough, with a significantly smaller footprint and -2 bays. Hard to tell without seeing them in person, lol.

    Are you satisfied with it? Any issues that came up since building?

    • nicolaslem 4 hours ago

      I have been running my NAS in the 304 for 5 years. It natively fits 6 HDDs, but I think it is possible to cram in two more with a bit of ingenuity. It is tucked away in an IKEA cabinet whose back I have drilled for airflow.

exmadscientist 10 hours ago

Are there any NAS solutions for 3.5" drives, homebrew or purchased, that are slim enough to stash away in a wall enclosure? (This sort of thing: https://www.legrand.us/audio-visual/racks-and-enclosures/in-... , though not that particular model or height.) I'd like to really stash something away and forget about it. Height is the major constraint: whatever goes in can only be ~3.5" tall. And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure.

  • jmb99 10 hours ago

    > And before anyone says anything about 19" rack stuff, don't bother. It's close but just doesn't go, especially if it's not the only thing in the enclosure.

    Do you have to use that particular wall enclosure thing? A 1U chassis at 1.7” of height fits 4 drives (and a 2U at ~3.45” fits 12), and something like a QNAP is low-enough power to not need to worry about cooling too much. If you’re willing to DIY it would not be hard at all to rig up a mounting mechanism to a stud, and then it’s just a matter of designing some kind of nice-looking cover panel (wood? glass in a laser-cut metal door? lots of possibilities).

    I guess my main question is, what/who is this for? I can’t picture any environment that you have literally 0 available space to put a NAS other than inside a wall. A 2-bay synology/qnap/etc is small enough to sit underneath a router/AP combo for instance.

    • exmadscientist 9 hours ago

      > Do you have to use that particular wall enclosure thing?

      It's already there in the wall. All the Cat5e cabling in the house terminates there, so all the network equipment lives in there, which makes me kind of want to also put the NAS in there.

  • butvacuum 10 hours ago

    1-liter PCs (Tiny/Mini/Micro), or some N100-type build plus an external bay, are likely your best bet. If it's really that small, you might have heat issues.

p1mrx 9 hours ago

I recently got a used QNAP TS-131P for cheap, that holds one 3.5" drive for offsite backup at a friend's house. It's compact and runs off a common 12V 3A power supply.

There is no third-party firmware available, but at least it runs Linux, so I wrote an autorun.sh script that kills 99% of the processes and phones home using ssh+rsync instead of depending on QNAP's cloud: https://github.com/pmarks-net/qnap-minlin

disambiguation 10 hours ago

I too was in the market recently for a NAS, downgrading from a 12-bay server because of YAGNI - it's far too big, too loud, runs hot, and uses way too much energy. I was also tempted by the Jonsbo (it's a very nice case), but prices being what they are, it was actually better to get a premade 4-bay model for under $500 (batteries included, HDDs not). It's small, quiet, power-efficient, and didn't break the bank in the process. Historically DIY has always been cheaper, but that's no longer the case (no pun intended).

mtlynch 3 hours ago

I appreciate Brian's posts and they've helped me learn to build my own NAS systems, but there's a scammy angle to his articles.

All of the merchant links are affiliate links, which he (illegally) does not disclose.[0] He's effectively acting as a sales rep for these brands, but he's presenting himself as an unbiased consumer.

The affiliate relationship incentivizes Brian to recommend more expensive equipment and push readers to the vendors that pay Brian the most rather than the vendors that are the best for consumers.

I recognize that it's an unfortunate truth that affiliate links are one of the few ways to make money writing non-AI content about computer hardware. I'm fine with affiliate links, but the author should disclose the conflict of interest at the top of the post before getting into the recommendations.

In the interest of full disclosure, I also write about NAS builds on my blog, so I somewhat compete with Brian's posts, but I stopped using affiliate links five years ago because of the conflict of interest.

If you're not familiar with how affiliate relationships create dangerous incentives, I recommend reading the article, "The War To Sell You A Mattress Is An Internet Nightmare."[1] tl;dr - All the top mattress-in-a-box reviewers were just giving favorable reviews to the company that paid the best affiliate rates, even going so far as to retroactively update old reviews if the payout rates changed.

[0] https://www.ftc.gov/business-guidance/resources/ftcs-endorse...

[1] https://www.fastcompany.com/3065928/sleepopolis-casper-blogg...

  • alias_neo 3 hours ago

    Just skimming over the article, I feel like the blue link coloured word "Topton" has been seared into my retinas.

    That aside, as someone who has been building computers for nearly 3 decades, and NASes for a decade plus, I dislike almost everything about this build.

    Spending a lot on the PSU is a good move, but the motherboard is a bad choice for the price when a much more capable socketed board + CPU could be had for around the same price, and the use of no-name SSD and NVMe is an absolute no-no for me.

    The impression I got from so many linked mentions of Topton this and Topton that, is that this was mostly done to push that particular brand for a sponsorship or affiliate program.

    YouTube has long since become untrustworthy for advice on this sort of thing due to sponsorships, affiliates, etc. Perhaps I should blog my advice and experience, which nobody pays to influence, in a more generic sense for those who actually need guidance on where to focus when building hardware like PCs and NASes.

    I'm not going to suggest the hardware I chose for my "NAS", as it would be universally bad advice for most people, but there is some generic knowledge to be shared here.

    Sometimes it feels just like telling my kids to "learn from my mistakes", does anyone actually want to hear it?

gorbachev 4 hours ago

The motherboard seems quite expensive.

kotaKat 3 hours ago

While not DIY, I would like to also call out an interesting discovery lately.

https://www.ugreen.com/blogs/news/ugreen-makes-strategic-ent...

UGREEN has apparently inked deals to drop their DXP2800s into (some) Walmarts, which also meant bringing in some 10/12TB Toshiba N300 Pro drives to go with them on the shelves. Being a super-rural American, I was a bit surprised to see this on my local shelf as a nearly turnkey solution, in an area where there's nothing remotely close to even a Best Buy.

Even more surprisingly: they've been sold by Walmart below UGREEN's minimum advertised price a few times already...

  • Xiol32 3 hours ago

    I had one of the DXP4800 Plus NASes for about a month before RMAing it.

    The CPU would immediately hit 100C with even the slightest whiff of load.

The entire thing was also unstable and would regularly just lock up without any kernel panic or other error message available; I couldn't even get kdump to gather anything (I'd binned their dodgy NAS OS and installed Debian).

    It also seemed to amplify the noise of the hard drives within. Every thunk of a drive head moving around would be audible from a different room. Not sure how they managed to do that, but it's an acoustic nightmare.

  • AlanYx 3 hours ago

    One thing to be aware of is that many (all?) current UGREEN boxes can't support ECC RAM. For anyone looking to use ZFS like the article linked here, that may be an issue, depending on one's view of the debate about whether ECC is necessary with ZFS.

aetherspawn 8 hours ago

Obligatory comment every time one of these threads comes up: with Synology, sure, the hardware is a bit dated, but as far as set-and-forget goes:

I’ve run multiple Synology NAS at home, business, etc. and you can literally forget that it’s not someone else’s cloud. It auto updates on Sundays, always comes online again, and you can go for years (in one case, nearly a decade) without even logging into the admin and it just hums along and works.

  • PeterStuer 7 hours ago

    Until you get the blue flashing light of death. Luckily I was able to source an identical old model off eBay to transfer the disks to.

  • imiric 6 hours ago

    What makes you think that Synology hardware is special in that sense?

    Most quality hardware will easily last decades. I have servers in my homelab from 2012 that are still humming along just fine. Some might need a change of fans, but every other component is built to last.

    • aetherspawn 4 hours ago

      It’s the software and stability of the software (between updates for example) that’s impressive.

DeathArrow 8 hours ago

I wonder how many consumer-level HDDs in RAID5 it would take to saturate a 10Gbps connection. My napkin math says that from 1,250 MB/s we can achieve around 1,150 MB/s due to network overhead, so it would take about 5 Red Pro / IronWolf Pro drives (reading at about 250–260 MB/s each) in RAID5 to saturate the connection.
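
Spelling out the same napkin math (the numbers are the assumptions above, not measurements):

    import math

    usable = 1150                # MB/s: 10 Gbps is 1250 MB/s raw, minus overhead
    per_drive = 255              # MB/s sequential, midpoint of 250-260
    print(math.ceil(usable / per_drive))   # -> 5 drives if every spindle serves data
    # If only the n-1 data shares of each RAID5 stripe count, budget one extra.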

  • ekropotin 7 hours ago

    I thought RAID5 was highly discouraged.

    • Mashimo 6 hours ago

      I can't remember the details, but wasn't that specifically about hardware RAID controllers? 2000s style.

      I think for home use with mdadm, or raidz2 on ZFS, it's just gucci. It's cost-effective.

      • Maakuth 3 hours ago

        Z2 means you'll have two parity disks, like in RAID-6. That should be okay. The trouble with RAID-5 is the rebuild times, which rise to multiple days with modern disk sizes. The stretch of time you run effectively without redundancy grows uncomfortably long, especially if you don't have a hot or even a cold spare around.
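
        Even the best case is sobering; a back-of-envelope rebuild time for one modern disk (sequential, full speed, no competing load -- real rebuilds are far slower):

            size_tb = 24
            mb_per_s = 250                      # optimistic sustained rate
            hours = size_tb * 1e6 / mb_per_s / 3600
            print(f"{hours:.0f} h")             # ~27 hours before any real-world load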

        • Mashimo an hour ago

          Ah yes, I mixed up raid5 and 6.

          I think it's still fine for casual home setups. Depending on data and backup strategy.

jaimex2 9 hours ago

What's the plan if your house burns down?

  • aurea 5 hours ago

    Ideally: off-site backup and archive-tier object storage in the cloud.

  • ekropotin 7 hours ago

    The loss of your vacation photos will be the least of your worries.

pSYoniK 6 hours ago

TL;DR - please stop wasting tons of resources putting together new servers every year and turning this into yet another outlet for "I have more money than sense and hopefully I can buy myself into happiness". Just get old random hardware and play around with it; you'll learn so much that you will be able to truly appreciate the difference between consumer and enterprise hardware.

This seems awfully wasteful. One of the main reasons I built my own homeserver was to reduce resource usage. One could probably argue that the carbon footprint of keeping your photos in the cloud and running services there is lower than building your own little datacentre copy locally - and where would we be if everyone built their own server? But I think that paying Google/Apple/Oracle/etc. money so that they continue their activities has a bigger carbon footprint than me picking up old used parts and running them on a solar/wind-only electricity plan. I also think I'm going a bit overboard with this, and I'm not suggesting you vote with your wallet, because that doesn't work. If you want real change, it needs to come from the government. You not buying a motherboard won't stop a corporation from making another 10 million.

Anyway, except for the hard drives, all components were picked up used. I like to joke that it's my little Frankenstein's monster, pieced together from discarded parts no one wanted or had any use for. I've also gone down the rabbit hole of trying to build the "perfect" machine, but I guess I was thinking too highly of myself and the actual use case. The reason I'm posting this is to help anyone who might not build a machine because they've been told they need ECC (and that without ECC, ZFS is useless), and Enterprise drives, and 128 GB of RAM, and that they could also pick up used enterprise hardware, and so on...

If you wish to play around with this, the best way is to just get into it. The same way Google started with consumer-level hardware, so can you. Pick up a used motherboard, some used RAM and a used CPU, throw them into a case and let it rip. You'll learn so much initially that it alone is worth every penny.

When I built my first machine, I wasn't finding any decently priced used former office desktop from HP/Lenovo/Dell, so I found a used i5 8500T for about $20, 8 GB of RAM for about $5, a used motherboard for $40, a case for $20 and a PSU for $30. All in all the system was $115, and for storage I used an old 2.5-inch SSD as the boot drive and 2 new NAS hard drives (which I still have, btw!). This was amazing. Not having ECC, not having a server motherboard/system, not worrying about all that stuff allowed me to get started. The entry bar is even lower now, so just get started, don't worry. People talk about flipped bits as if they happen all day, every day.

If you are THAT worried, then yeah, look for a used server barebone, or even a used server with support for ECC, and do use ZFS. But I want to ask: how comfortable are you making the switch 100% overnight, without ever having spent any time configuring even the most basic server that NEEDS to run for days/weeks/months? Old/used hardware can bridge this gap, and when you're ready it's not like you have to throw the baby out with the bathwater. You now have another node in a Proxmox cluster. Congrats! The old machine can run LXCs and VMs, it could be a firewall, it could do anything - and when it fails, no biggie.

Current setup for those interested:

i7 9700t

64 GB DDR4 (2x32)

8, 10, 12, 12, 14 TB HDDs (snapraid setup and 14 TB HDD is holding parity info)

X550 T2 10Gbps network card

Fractal Design Node 804

Seasonic Gold 550watts

LSI 9305 16i

  • nicolaslem 4 hours ago

    The author is not suggesting anyone should rebuild their NAS every year. Instead he is investigating which options make sense in year X. I remember reading his recommendations back when I built my NAS in 2021 but that doesn't mean I bought new hardware since then.

  • imiric 6 hours ago

    It's a bit patronizing to tell people what to do with their money. If you care more about the environment than enjoying technology, then go ahead and do what you suggest. If you want to be really green, how about giving up technology altogether? Go full vegan, abandon all possessions, and all that? Or if you really want to help the planet, have you considered suicide?

    There's always more you can do. I'd rather enjoy my life, and not tell others how to enjoy theirs unless it's impacting mine. Especially considering that the impact of a single middle-class individual pales in comparison to the impact of corporations and absurdly wealthy individuals. Your rant would be better directed at representatives in government than at tech nerds.

    • hexbin010 2 hours ago

      > have you considered suicide

      How uncouth, even just as rhetoric.

    • pSYoniK 3 hours ago

      It is however very patronising to tell people to "Go full vegan, abandon all possessions, and all that".

      It also isn't useful to reduce the conversation and assume that a critique of the idea that you necessarily have to go out and buy new hardware is a critique of technology or ownership - but, myself included, we do seem to read what we want. You also missed the point I made when I clearly said voting with your wallet doesn't work. And you didn't address the other, more salient point I was trying to get across: when starting out, don't worry too much, just get whatever and start learning. Questions are easier to answer once you already have some hardware.

      Anyway, enjoy your day

    • gjvc 4 hours ago

      > It's a bit patronizing to tell people what to do

      on this website?!

      > with their money

      in this economy?!