How long should I expect a NAS to last?

akilou@sh.itjust.works to Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ@lemmy.dbzer0.com – 90 points –

First off, I'd normally ask this question on a datahoarding forum, but this one is way more active than those and I'm sure there's considerable overlap.

I have a Synology DS218+ that I got in 2020. It's a 6-year-old model by now, but only 4 years into its service. There's absolutely no reason to believe it'll start failing anytime soon; it's completely reliable. I'm just succession planning.

I'm looking ahead to my next NAS, wondering if I should get the newer version of the same model again (whenever that comes out) or expand to a 4-bay.

The drives are 14 TB shucked Easystores, for what it's worth, and not even half full.

What are your thoughts?


The NAS itself will likely outlive the drives inside; that's just the nature of things. Hard drives follow a sort of bathtub curve when it comes to failure: most fail either almost immediately or after a few tens of thousands of hours of run time. Other factors include the drives running too hot, the number of hard power events, and vibration.

Lots of info on drive failure can be found on Backblaze's Drive Stats page. I know you have shucked drives; those are likely white-label WD Red drives, which are close to the 14 TB WD drives Backblaze uses.

Yeah, they're Reds. Is there a way I can check how many hours they have on them? 10,000 hours is just over a year, and they're a couple of years old now.

I'm not too concerned about them failing; I can afford to replace one without notice, and they're mirrored. And backed up on some other Easystores.

smartctl
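A minimal sketch of checking the hours with it, assuming you can SSH into the box and that /dev/sda is the drive in question (the device path will vary, and some USB/SATA bridges need an extra `-d sat` flag):

```
# list the drives smartctl can see
sudo smartctl --scan
# dump the SMART attributes and pull out the power-on hours counter
sudo smartctl -a /dev/sda | grep -i power_on_hours
```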

But 10,000 hours seems on the low side. I have 4 datacenter Toshiba 10 TB disks with 40k hours and expect them to do at least 80k, but you can have bad luck and one can fail prematurely.
If it's within warranty, you can get it replaced; if not, tough luck.

Always have stuff protected in RAID/ZFS and backed up if you value the data or don't want a weekend ruined because you now have to reinstall.
And with big disks, consider having extra disks as redundancy, since another one might hit a bit error while you're rebuilding from the failed one (check the drive's unrecoverable read error rate in the datasheet).
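Rough numbers, assuming a datasheet figure of one unrecoverable read error per 10^14 bits read (typical for consumer drives): rebuilding onto a replacement 14 TB drive means reading roughly 1.1 × 10^14 bits from the surviving disk(s), so with only single redundancy you'd statistically expect on the order of one unreadable sector somewhere during the rebuild. With double redundancy (RAID6/raidz2) a single read error mid-rebuild is still recoverable. Datasheet figures are worst-case-ish, so treat this as an argument for extra redundancy and backups rather than a prediction.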

I wouldn't start worrying until 50k+ hours.

There should be a way to view the SMART info, and that includes an hour count.

That info can be found in the SMART data for the drives. But I didn't mean 10,000 hours, more like >50,000.

I believe Synology DSM should have a feature for this. Try the Storage Manager app; it should show you the SMART info.

I've got a 12 TB Seagate Desktop Expansion, which contains a Seagate IronWolf drive. According to the link you shared, I'd better look for a backup drive ASAP.

Edit: the ones in the Backblaze reference are all Exos models, but I still don't have profound trust in Seagate.

Yes, according to their historical data, Seagate drives appear to be on the higher side of failure rates. I've also experienced it myself; my Seagate drives have almost always failed before my WD drives.

Interesting. When I researched drives for my NAS, the general conclusion was to avoid the Reds and go with IronWolf.

🤷🏻‍♂️

Mine aren't even on the list :(

My Synology NAS was running for 6+ years before I replaced it last year. And the only reason I replaced it was to upgrade the hardware to be able to act more like a home server running some more demanding services.

I've since given the NAS away to a friend who is still running it... As always, back up your data just in case, but I wouldn't expect the hardware to crap out on you too soon.

Oh, I don't need to back it up because I have two drives running in RAID.

😜

As others have said, you should really be careful treating your RAID as a backup. I, for one, do all of my backing up on PlayStation 1 memory cards... I had to buy a couple of storage containers to store them all, but I guess that technically counts as off-site.

Dude, it was a joke. I've heard the advice a million times

Ehm, probably 2 disks bought from the same batch. They usually die together. ;)

Same here. Last year I upgraded from a DS214+ and it was still running great. The only reason I upgraded to the DS220+ was so I could run docker containers.

I sold it for $200 which meant I ran it for 9 years for about $57 a year (CAD). I'm hoping to get even better bang for the buck with the new unit.

I still have my DS1812 which I bought for ~1200 when it came out in 2012/2013 as well.

It only runs NFS/SMB storage services. It's still an amazing unit, and it has been through 7 house moves, 2 complete failures, and about 4 RAID rebuilds.

Considering it's 2024 now and it's been running for nearly 12 years, it's the reason I recommend paying out the arse for Synology hardware even if it is overpriced. I still get security patches, and I got a recent (2 years ago?) OS upgrade.
It can still run the occasional Docker container for when I need to grab the latest ISOs or run rclone for backups.

If I bought a new unit, I would no doubt be happy with it for another 10+ years, as long as I put as much RAM in it as possible, because the 3 GB of RAM in this unit is what really kills the functionality, aside from the now-slow CPU.

I have an 1813+ and it's also been a champ. Unless the computer inside it dies, I will continue to use it indefinitely.

However, I have offloaded all server duties other than storage to other devices. I do not ask my Syno to run Plex or any other service besides DNS. As long as those SMB shares stay up, it's doing what it needs to do. And gigabit will be fast enough for a long time to come.

With a free OS: indefinitely.

With a proprietary OS: hope and pray the maker doesn't discontinue it tomorrow.

From what I've heard in this regard: Synology bad, QNAP good?

Your NAS will last as long as your storage medium.

HDDs last 5-10 years; SSDs last like 10+.

Not the batch of WD Red SSDs I got in 2022. 3 of the 4 have failed, and I'm assuming the 4th is going to die any day now. Fortunately, WD honors their warranties, and only one drive died at a time, so my RAID was able to stay intact.

I feel like I must have gotten 4 from the same bad batch or something. One dying felt like bad luck, but when another died every 3 months it seemed like more than a coincidence. And none of the replaced ones have died, just the original batch.

So how long does an SSD last? YMMV.

IMHO there's no reason to change or upgrade if your current setup works and you're satisfied with it. Keep your money; you'll see what the market has to offer when you need it.

I had a Drobo 5N for over 10 years. It lasted longer than the company itself.

I bought a Synology DS415+ back in December 2014, so it just turned 9 and it's still kicking. (Even with the C2000 fix.)

Although Synology stopped delivering updates, I'll keep it as long as it does what I need it to. However, my next device will be a TerraMaster that I'll install OMV on. You can't get a NAS with a custom OS in a smaller form factor.

I'd say 6-12 years, maybe including about 1 hard disk failing; I forget what the mean time to failure is for a hard disk. And in a decade I'll probably have all the disks filled to the brim, my usage pattern will have changed, and a new unit will have 10x the network speed, 4x the storage, and be way faster in every aspect.

I had my DS213+ for a bit over 10 years, with no failures of any kind, just a bit of drive swapping for more storage space. Finally upgraded last year to a 4-bay with better performance and Docker support, but I would have kept using the old one otherwise.

What do you mean by "last"? I know it's a common term, but when you dig deeper, you'll see why it doesn't really make sense. For this discussion, I'm assuming you mean "How long until I need to buy a newer model?"

First, consider the reasons you might have for buying a newer model. The first is hardware failure. The second is obsolescence: the device can't keep up with newer needs, such as speed, capacity, or interface. The third is losing security updates and support from the vendor.

The last one is easy enough to check from a vendor's product lifecycle page. I'll assume this isn't what you're concerned about. Up next is obsolescence. Obviously it meets your needs today, but only you can predict your future needs. Maybe it's fine for a single 1080p* stream today, and that's all you use it for. It will continue to serve that purpose forever. But if your household grows and suddenly you need 3x 4k streams, it might not keep up. Or maybe you'll only need that single 1080p stream for the next 20 years. Maybe you'll hit drive capacity limits, or maybe you won't. We can't answer any of that for you.

That leaves hardware failure. But electronics don't wear out (mechanical drives do, to an extent, but you asked about the NAS). They don't really have an expected life span in the same way as a car battery or an appliance. Instead, they have a failure rate. XX% fail in a given time frame. Even if we assume a bathtub curve (which is a very bold assumption), the point where failures climb is going to be very unclear. The odds are actually very good that it will keep working well beyond that.

Also of note, very few electronics fail before they are obsolete.

*Technically it's about bitrate, but let's just ignore that detail for simplicity. We'll assume that 4K uses 4x as much space as 1080p.
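(For rough, illustrative numbers, with assumed bitrates rather than anything from this thread: a 1080p stream at around 8 Mbps works out to about 3.6 GB per hour, while a 4K stream at around 32 Mbps is about 14.4 GB per hour; actual bitrates vary a lot with codec and source.)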

TL;DR: It could fail at any moment from the day it was manufactured, or it could outlast all of us. Prepare for that scenario with a decent backup strategy, but don't actually replace it until needed.

Using Unraid is nice because you can keep replacing drives with larger ones as you need, or add new drives to the array. It's very flexible that way, despite some of its shortcomings.

I’m still running a DS414 filled with WD Red drives. I’ve only swapped out one of the drives as it was starting to have issues. I’ve considered upgrading for more features (Docker, Home Assistant etc) but can’t justify the expense just for “nice to have” instead of “need”. Realistically it only stores Linux ISOs that I get with Download Station.

Yeah, performance is not an issue for me. I stream some Linux ISOs and so do a few friends. Pihole, photos backup, documents backup. That's about it.

I just recently upgraded from my 2 bay NAS, simply because I ran out of storage and attaching more drives via USB just seemed silly at this point (I was already at 5).

I now have a 12-bay DS2422+ with 6x 20 TB disks. And I very much expect the NAS to last past 10 years. HDDs can be added and replaced if you have RAID set up, which isn't very feasible in a 2-bay NAS.

What about building your own NAS? If you have the time and skills, you can pick a very small mini-ITX motherboard and a NAS case and build your own. This way you can run open-source software, and it will have more features and expandability and potentially last way longer.

Do you have any examples of a NAS case? I'm looking at possibilities for redoing my setup which is currently an old Ryzen PC stuffed with 9 or so drives running Windows, SnapRAID, and DrivePool. I'd love to scale it down horsepower wise to make it more efficient and reliable (Windows sucks for this) along with separating it from my general PC usage. Some sort of 8-bay drive enclosure that can directly connect to a thin client PC, and handle different capacity drives (6TB-14TB) would be ideal.

Could you run open source software on something like a Synology if they stop supporting it?

Not always, and that's the reason why I would never buy their hardware. There are some older models that can be hacked to install generic Linux, but the majority can't. It's just easier to get something truly open.

That sucks. Well, I'm still fine with my purchasing decision. It's a good stepping stone into learning how to network. In the future, I'll definitely build it myself and get my own domain for remote access.

You don't need to buy a domain for that. There are plenty of free dynamic DNS options that will give you a free subdomain that works perfectly fine for that.

I'd be more concerned about the longevity of the drives than any NAS itself. I moved from commercial NAS appliances to a self-built one. It turns out that they cost about the same (depending on the hardware configuration you end up choosing, evidently), but are MUCH better performance-wise.

I built my 10ish TB (usable after raidz2) system in 2015. I did some drive swaps but I think it might have actually been a shoddy power cable that was the problem and the disks may have been fine.

I had a DS212j for about 10 years before I replaced it, and it was working just fine, so I sold it on eBay. It just couldn't keep up with the Plex transcoding I was using it for. Heck, for 7 of those years it was running on a shelf in my garage getting covered in dust and spiderwebs.

I imagine a + model will last even longer than that.

Both the DS220+ and DS224+ have been a pleasure to set up, but I wouldn't replace your DS218+ just because. Just make sure your RAID is healthy, and your backup too.

An alternative to a standalone NAS is to set up your own little ITX server. Only if you enjoy tinkering, though; Synology is definitely easier.
At home I'm currently running a server/NAS/gaming PC all in one.
It's a Debian 12 KVM/QEMU host with an M.2 NVMe disk for the host OS + VM OS and 2x 16 TB Seagate Exos disks in RAID1 for data storage. The other hardware is a B650 ITX motherboard, an AMD Ryzen 7600 CPU, 2x32 GB DDR5 RAM, an AMD Radeon 6650 XT, and a Seasonic FOCUS PX 750W PSU.
With my KVM/QEMU host, game server, and Jellyfin server online it eats about 60-65 W, so not that bad.
The GPU and a USB controller are passed through with VFIO to a virtual Fedora machine that I use as a desktop and gaming client.
Just make sure to have a sound-dampening PC case so you can keep the servers online without being bothered. The GPU goes silent when the gaming VM is off.

I've had my Synology DS215 for almost ten years. I've recently thought about replacing it, but I don't really see the benefit. I'll just replace the drives some time.

The NAS will most likely outlive the software support, and by far outlive the HDDs you put in it.

Ah, I see you also choke yours with softwarr...

I have the same model as you, and I also wonder when it will explode lol (mostly because I have it in my room and can hear when it's struggling).

I run lots of Docker containers on it (I can't help it, it's my only server), and the drives never stop spinning.

I actually don't recall when I got it, but it must be a similar age to yours as well...

Just recently I started cleaning up containers and such, mostly because I fucked up (I used Portainer to delete all my unused volumes, which, strangely enough, got rid of Portainer's own volume; I had to recreate every one of my stacks/composes in Portainer, so I cleaned up some stuff in the process).

It'll last as long as it's useful to you, barring any disasters. I've got an HP Gen8 MicroServer that I've been running as a FreeNAS/TrueNAS box for 8-9 years now, and I'm only thinking of replacing it because I need more performance than its CPU can give.

Had my Western Digital My Cloud since 2015.

My longest-running drive is a WD, probably the same vintage with less uptime.

shucked

Oh, you are dancing with the devil. Not sure there's a way to check actual SMART data in Synology's OS, but I would be very interested in those logs.

I've found over the years that the second I think about backing up, the drive is about to fail.

I would upgrade to a 4-bay and invest in actual NAS drives. (And I will personally be looking for 10 GbE LAN, but this isn't homelab.)

There's nothing wrong with shucked drives, and they are frequently relabelled NAS drives anyway.

just a dice you don't need to roll

Packaging a drive for sale in an external enclosure doesn't make it any more prone to failure compared to one that wasn't.

except you don't know what you're buying.

the fact it's typically cheaper than buying the naked drive should tell you everything you need to know about the risk involved.

This is misinformation; I have always known what drives to expect when shucking. Not only that, but you can tell what drive is inside just by plugging it in and checking before you shuck. I've shucked over 16 drives so far, and all were exactly as expected.

The WD drives are white-label, but they're WD Reds. They're cheaper because they're consumer-facing, no more, no less. Have you been bitten by shucking in the past? I'm confused why else you'd be saying it's a risk. The only associated risk is warranty-related.

You have an idea of what you’re buying and you know what you have once you’ve shucked it. The worst case scenario is that it’s not what you expected, isn’t suited for that use case, you can’t find another use for it, and you can’t return it… but it’s not like anyone is forcing you to add an unsuitable drive to your setup.

There isn't even any proof from independent media that specially certified drives have a longer lifespan. You can see it when you compare OEM prices for different drives. Quite often, datacenter-labeled drives are more expensive than the prosumer drives, because consumers are idiots and buy into marketing.

There are other problems with shucking, like warranty, but a reliability dice roll certainly isn't one of them.

That the market buying internal drives is generally willing to pay more for the product vs. the people buying an external drive? Because the cost of the parts (AKA the Bill of Materials, or BOM) is only a small part of what determines the price on the shelf.

The fact that WD has a whole thing about refusing to honor the warranty (likely in violation of the Magnuson-Moss Warranty Act) should tell you what you really need to know.

They're cheaper because they sell a higher volume of them to consumers, but that doesn't mean they're any worse than bare drives. I've been running numerous shucked drives in my PC 24/7 since 2018 and have had zero issues thus far. Folks from r/datahoarder have collectively been buying large quantities of these drives for years and still recommend them. If they were a bunch of lemons, you'd think people would have made a big stink online about it by now.

People have tested them long term at this point. Outside of a few rare exceptions, there's not a noticeable difference in reliability between shucked drives and "normal" drives. They're the same stock, just rebranded, and they have to be cheaper because they're marketed primarily at retail consumers as opposed to enthusiast/enterprise buyers who are willing to pay more.