Is RAID still needed? (programming.dev)

A friend and I were discussing this the other day, and he said RAID is no longer needed. His reasoning was that SSDs have gotten so big, and that apparently you can replace sectors within them if a problem occurs, which is why having an array is not needed.

I replied that arrays provide redundancy, which means faster recovery and better uptime when something goes wrong and a drive needs to be replaced. Depending on what you're doing, that is more valuable than just trusting the new technology, especially because, depending on the configuration, RAID's redundancy can rebuild lost data if needed.

What do you all think?

[-] mindlight@lemm.ee 50 points 7 months ago

Yeah, and the Titanic was unsinkable.

If the controller in your SSD fries, it doesn't matter how many unused gigabytes your SSD has got for relocating bad sectors. It is still fried. For you, that data is forever gone.

This is why you have redundancy. Full redundancy. You can go for RAID1, where one disk can die with no data loss, or go bananas with RAID6, where two full disks can die and you're still going strong.

PS: Spinning hard drives have had hidden sectors reserved for relocating bad sectors for ages. It's nothing new. If you have too much time on your hands, Google "hard drive hidden sectors nsa".

[-] Doombot1@lemmy.one 34 points 7 months ago

…absolutely, positively, super false. I work in a sector where we're constantly dealing with huge-capacity enterprise SSDs - 15 and 30 terabytes at times. Always using RAID. It's not even a question. Not only can you have controller malfunctions, but even with what's known as "over-provisioning" on the SSDs, you still need to watch out for total disk failures!

[-] redcalcium@lemmy.institute 33 points 7 months ago* (last edited 7 months ago)

Unlike HDDs, I never experienced graceful disk failures on SSDs. Instead, they just randomly decided to die at the most inconvenient time. RAID 1 has saved my hide a couple of times now from those SSD failures.

[-] r00ty@kbin.life 5 points 7 months ago

Yep. It has been ages since I had an SSD failure at home, but I have had two SSD failures in the last 10 years in server hardware. In the first case it was a RAID stripe and I needed to restore from backup. In the second case it was part of a RAID 1 array, so I just requested a replacement and got on with my day.

In my house I have non-RAID SSDs in my own PC, but the important stuff is on my NAS, made up of 4 HDDs in RAID 5 (with the important folders also backed up to an encrypted cloud).

RAID still has a place in an overall data security solution. Especially for servers that you want to keep up.

[-] lemmyreader@lemmy.ml 23 points 7 months ago

Reminds me of the days when CD-ROMs were brand new and advertised as indestructible, with photos of elephants walking over them. Having said that, I assume SSDs can break just like other hard disks can, and in that case RAID can save a lot of time getting a computer back up, especially when a lot of data is involved.

[-] JusticeForPorygon@lemmy.world 6 points 7 months ago

Had a microSD card literally break in half last week. They're definitely not invincible.

[-] Clydesdalecrusher@programming.dev 2 points 7 months ago

Yeah, they sometimes get touted as that.

[-] winnie@lemmy.ml 1 points 7 months ago

Was that a SteamDeck? 🙃

[-] JusticeForPorygon@lemmy.world 1 points 7 months ago
[-] winnie@lemmy.ml 2 points 7 months ago

OK. 'Cause it's really common for Steam Deck users to forget to remove the SD card when disassembling the device. Lots of cards have been lost.

load more comments (1 replies)
[-] howrar@lemmy.ca 4 points 7 months ago

Funny. Growing up, I was taught to be extra careful with CDs because the moment you look at them wrong, all your data gets corrupted.

[-] thorbot@lemmy.world 16 points 7 months ago

This is a total load of bullshit; your friend is wrong.

[-] winnie@lemmy.ml 16 points 7 months ago

you can replace sectors within them if a problem occurs

That won't help you if the sector where your data is located dies!

[-] originalucifer@moist.catsweat.com 16 points 7 months ago

It's not about the individual drive... it's about total drive failure. If that SSD's controller dies, it doesn't matter whether it has extra data sectors.

That said, I moved on from RAID by mirroring multiple un-RAIDed NAS devices for redundancy, with data stored on the drives specifically in a way that eliminates cross-disk logical volumes.

[-] Dekkia@this.doesnotcut.it 15 points 7 months ago* (last edited 7 months ago)

I don't think the internal wear leveling and overprovisioning of SSDs can or should replace RAID. Shrugging off a dead sector without losing capacity is great, but it won't help you when (for example) the controller dies.

Depending on the amount of data you're storing, SSDs also might be too expensive.

The only exception is maybe RAID 0 in a normal PC. There, it's probably better to just get one disk for each logical drive.

[-] Revan343@lemmy.ca 5 points 7 months ago

RAID0 has always been playing with fire

[-] spaghetti_carbanana@krabb.org 11 points 7 months ago

It's very much still needed and heavily utilised in the enterprise world. Volume size is usually the lowest priority when it comes to arrays; redundancy and IOPS (the number of concurrent transactions to the storage) are typically the priority. The exception here would be backup and archive storage, where IOPS is less important and volume size is more important.

As far as replacing sectors goes, I've never heard of this, and I might just be ignorant on the subject, but as far as I know you can't "replace" a bad sector, only mark it as bad and stop using it; whatever was there before is gone. This has existed since the HDD days. This is also why we use RAID: parity across disks to protect data.
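
To make "parity across disks" concrete, here's a minimal Python sketch of how single-parity (RAID-5-style) reconstruction works. The three hard-coded "disk blocks" are made up for illustration; real arrays do this per stripe in firmware or in the kernel:

```python
# Toy single-parity (RAID-5-style) demo: the parity block is the XOR
# of all data blocks, and XOR-ing the survivors with the parity block
# reconstructs any single lost block.
from functools import reduce

def xor_blocks(blocks):
    # byte-wise XOR across equal-length blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]      # pretend each is one disk's block
parity = xor_blocks(data)               # lives on the parity disk

lost = data.pop(1)                      # "disk 1" dies
rebuilt = xor_blocks(data + [parity])   # XOR survivors with parity
assert rebuilt == lost                  # the lost block is recovered
```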

Generally, production storage will be in RAID-10, and backup/archive storage in RAID-6, or in some cases RAID-60, but I'm personally not a fan of the latter.

You'd also consider how many disks are in the volume, because there is a sweet spot. Too many disks means a higher likelihood of total array failure due to simultaneous disk failures, and more data lost in the event it does fail; too few disks, and you won't have good redundancy, capacity or performance either (depending on the RAID level).
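
To put rough numbers on that sweet spot, here's a back-of-the-envelope sketch. The 5% annual per-disk failure rate is an assumption, and it treats failures as independent, which real arrays (same batch, rebuild stress) famously violate:

```python
# Back-of-the-envelope odds that a RAID-6 volume loses data in a year:
# RAID-6 survives any 2 failures, so data loss needs 3+ disks failing.
# ASSUMPTIONS: 5% annual failure rate per disk, independent failures.
from math import comb

def p_raid6_loss(n_disks, p_disk=0.05):
    p_survive = sum(comb(n_disks, k) * p_disk**k * (1 - p_disk)**(n_disks - k)
                    for k in range(3))   # 0, 1 or 2 failures are tolerated
    return 1 - p_survive

for n in (6, 12, 24, 48):
    print(f"{n:2d} disks: {p_raid6_loss(n):.3%} chance of >2 failures")
```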

The biggest change I see in RAID these days is moving away from hardware RAID cards and into software-based solutions like Microsoft Storage Spaces, md, ZFS and similar. These all have their own way of doing things and some can even synchronise the data with other hosts.

Hope this helps!

[-] Blue_Morpho@lemmy.world 4 points 7 months ago

As far as replacing sectors goes, I've never heard of this and I might just be ignorant on the subject but as far as I know you can't "replace" a bad sector.

SSDs maintain stats on cell writes and move data when a cell nears its end of life. They keep spare capacity hidden from the end user for this. Leaving part of the drive unused also increases this spare capacity.

However, SSDs do fail, and moving data to spare cells doesn't change that.
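
For anyone who wants the idea in code form, here's a toy sketch of that remapping. The numbers and the mapping scheme are invented for illustration; a real flash translation layer is vastly more complex:

```python
# Toy wear-leveling sketch: a logical block stays readable while the
# firmware retires a worn physical cell and remaps the block to a
# spare from the hidden over-provisioned pool.
WEAR_LIMIT = 3  # invented for the demo; real NAND endures far more writes

class ToySSD:
    def __init__(self, user_cells, spare_cells):
        self.mapping = {i: i for i in range(user_cells)}  # logical -> physical
        self.spares = list(range(user_cells, user_cells + spare_cells))
        self.writes = {}  # per-physical-cell write counts

    def write(self, logical, data):
        phys = self.mapping[logical]
        self.writes[phys] = self.writes.get(phys, 0) + 1
        if self.writes[phys] >= WEAR_LIMIT and self.spares:
            self.mapping[logical] = self.spares.pop(0)  # remap to a spare
        # once the spares run out, worn cells stay in use: the drive is dying

ssd = ToySSD(user_cells=4, spare_cells=2)
for _ in range(10):
    ssd.write(0, b"hot data")      # hammer a single logical block
print(ssd.mapping[0], ssd.spares)  # block 0 now lives on a former spare
```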

[-] AlternateRoute@lemmy.ca 8 points 7 months ago
  • Bit rot is still a problem; you need a high-integrity filesystem and/or RAID to avoid it (see the sketch below).
  • Full drive failure is still about as likely, i.e. the main reason for RAID across multiple drives in the first place.
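
A minimal sketch of the integrity-checking idea, for the curious. This is hand-rolled (the ".sha256" sidecar naming is made up) and only detects rot; ZFS/btrfs checksum per block and can self-heal from a mirror or parity:

```python
# Minimal bit-rot scrub: record a checksum per file, re-hash later and
# compare. Detects silent corruption but cannot repair it.
import hashlib
from pathlib import Path

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub(directory):
    for path in Path(directory).rglob("*"):
        if path.is_dir() or path.name.endswith(".sha256"):
            continue
        sidecar = Path(str(path) + ".sha256")
        if sidecar.exists():
            if sidecar.read_text().strip() != sha256_of(path):
                print(f"possible bit rot (or a legit edit): {path}")
        else:
            sidecar.write_text(sha256_of(path))  # first pass: record hashes
```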

A good read on the problems with SSDs: SSD 101: How Reliable are SSDs?

[-] Clydesdalecrusher@programming.dev 2 points 7 months ago

I found this article from the one you posted. It's crazy to think DNA could be used for storage one day.

storage tech of the future

[-] dbilitated@aussie.zone 6 points 7 months ago

I do recall Google apparently stopped using RAID in some data centres, but it was because they had whole-machine redundancy.

RAID is probably redundant for some of the uses it used to have, like optimising read performance by spreading it across many drives (SSDs are fast), and honestly I suspect SSDs are probably more reliable, as they don't have a bunch of platters, bearings and screaming rotational speeds.

So if you needed it for a base level of reliability, an SSD on its own may have exceeded that.

I suspect there are still uses for drive redundancy in some high-availability setups... although your friend might be right: if the likelihood of drive failure is lower than that of other parts in the machine, and you need high redundancy for availability, it might make more sense to replicate the whole machine rather than the drives.

It's possible redundancy specifically for the drives was an artifact of unreliable drives back in the day 🤔 so they might have a point! I think it's likely still useful at times, though.

I'd rather hotswap a drive than set up a new server, even if it's a less likely scenario.

[-] neidu2@feddit.nl 5 points 7 months ago* (last edited 7 months ago)

I wholeheartedly agree with you. It's worth noting that a lot of RAID's use cases can now be solved in software, but there are some places where hardware RAID still shines, such as redundancy. Yes, software can also provide redundancy, but I still haven't seen a software solution equivalent to a proper RAID controller with a dedicated battery that keeps the I/O buffer alive in case of hardware failure. That one has saved me a few times.

Source: I'm in charge of 6 storage clusters at work. Beegfs is what takes care of the actual clustering, resulting in each cluster clocking in at 1.2PB of storage. Each cluster consists of four machines with three storage volumes each.
Each storage volume consists of 12 drives in a RAID6 configuration.

I can yank faulty drives, toss them out, and have them replaced with no downtime. I know some like to set up hot spares, but I for one don't. I've even had entire servers die on me, and thanks to the additional redundancy provided by BeeGFS, I've swapped motherboards with no cluster downtime either. Just move the drives over to an identical machine (yes, each cluster has a dedicated spare machine), import the RAID, and you're good to go.

[-] xkforce@lemmy.world 4 points 7 months ago

Higher-end Samsung SSDs were dying a lot faster than they should. I don't know what drugs your friend is on thinking they can't fail, but they'd better have brought enough for the rest of the class.

[-] SkaveRat@discuss.tchncs.de 3 points 7 months ago

due to how big SSDs have gotten and that apparently you can replace sectors within them if a problem occurs

True, but that's something an SSD does internally and is just there to prolong the lifespan.

You definitely still want RAID if you want to keep a system running through a disk failure. No amount of extra sectors and wear leveling will save you from that.

[-] dbilitated@aussie.zone 1 points 7 months ago

Yeah, but if an SSD failing is now less likely than other parts of the machine failing, it might be better to focus on a redundant server to fail over to... it's an interesting thought. RAID isn't obsolete, I don't think, but it's an interesting question.

[-] szczuroarturo@programming.dev 2 points 7 months ago

Hmm, but in a server environment, wouldn't it be possible for SSDs to reach their wear limit much faster and therefore fail because of it (depending on the workload, of course)?

[-] lemmylommy@lemmy.world 3 points 7 months ago

This has nothing to do with SSDs or their size. Hard disks also have a small spare area (though not as big) and can mark and remap failing sectors.

RAID (1) is still (possibly) good for the only thing it ever was (possibly) good for: keeping the system running long enough for you to put in a new hard disk if one fails.

Think of industrial systems where every minute of downtime can cost thousands of dollars. And even there the usefulness of RAID can be questioned: should you not in that case have a whole spare system, easy to swap in, because more than just storage can fail?

And what about the RAID controller itself? Does it not add complexity and another point of failure to the whole system?

And most importantly: will anyone actually get notified of a failing disk and replace it quickly? Or will the whole thing just prolong the inevitable?

Would you even trust a system that had one disk fail already to keep going in a critical place? Or would it not be safer to just replace the whole thing anyway after one failure?

[-] redcalcium@lemmy.institute 1 points 7 months ago

And what about the RAID controller itself? Does it not add complexity and another point of failure to the whole system?

This is why people prefer software RAID these days instead of hardware RAID.

[-] tobogganablaze@lemmus.org 3 points 7 months ago

He said it was due to how big SSDs have gotten and that apparently you can replace sectors within them if a problem occurs which is why having an array is not needed.

Buying SSDs matching the capacity of my 70TB NAS (after RAID 6) would cost almost triple what my whole setup (including the NAS) cost.

So unless you shit money, SSDs are not an option for anything with a decent capacity.

[-] bluGill@kbin.social 2 points 7 months ago

RAID setups often come with snapshots, which can recover from your mistakes. Often they can even recover after malware encrypts your disk. You still need offline, offsite backups for the best protection, but RAID is still a useful part of your data safety net.

[-] WeirdGoesPro@lemmy.dbzer0.com 2 points 7 months ago* (last edited 7 months ago)

Is your friend going to pay for enough SSDs for me to store my 60 TB of data?

[-] scrubbles@poptalk.scrubbles.tech 2 points 7 months ago

SSDs, man. I personally still don't trust them for primary storage. My data array is Unraid, several spinning disks. Spinners just always work for me; there are gotchas around being jostled or turned off incorrectly, but if you treat them well they'll last a real long time. Plus my array has double redundancy, and I'm very happy with it. (Plus I don't see 20TB SSDs on the market for 300 bucks either.)

SSDs, though, wear out: they only have so many writes in them. I had some in a traditional RAID and it just ate through them; too many writes, and 5 of the 6 failed on me. I use them now as cache drives. In Unraid you can set a faster drive to store data temporarily, and it will later move it off the cache drive onto the main array. That's a level of risk I'm happy with.

[-] LemmyHead@lemmy.ml 2 points 7 months ago

I'd say "old" RAID could be dead if you have proper backups and the ability to replace a defective drive fast in cases where uptime is crucial. But there's also modern RAID, like btrfs and ZFS, which can repair corrupted files, caused by bit rot for example. Old RAID can't do that, and hardware-based RAID couldn't either back when I last used it years ago. Maybe that has changed, but I don't see the point of hardware-based RAID in most cases anymore.

[-] mosiacmango@lemm.ee 10 points 7 months ago* (last edited 7 months ago)

Hardware RAID can 100% do all of the above, and has always been able to. You need an actual RAID card though, not some half-assed baked-in mobo RAID.

Hardware RAID was doing all of the above before software RAID was available to end users.

[-] winnie@lemmy.ml 1 points 7 months ago

I wonder how you tell a real RAID card from a simple switch? I guess look at the price, and it should be really high?

[-] mosiacmango@lemm.ee 2 points 7 months ago

Most discrete RAID cards will do the job, but look for on-card caching and a battery for "quality."

[-] winnie@lemmy.ml 1 points 7 months ago

But AFAIK real RAID doesn't perform CRC checks; it relies on the drive to report bad sectors. If the data on one drive got corrupted, it would just return data from one drive or the other, in the case of mirroring. If we aren't talking about RAID 6, I think.

[-] winnie@lemmy.ml 1 points 7 months ago

AFAIK the only officially supported RAID modes in btrfs are RAID0 and RAID1.

RAID56 is officially considered unstable.

[-] LemmyHead@lemmy.ml 1 points 7 months ago

RAID56 is a risky one in more filesystems than just btrfs, though. But if you have a UPS as a backup, you should be fine.

[-] winnie@lemmy.ml 2 points 7 months ago

A UPS won't protect you from a kernel panic, sadly.

[-] LemmyHead@lemmy.ml 1 points 7 months ago
[-] winnie@lemmy.ml 1 points 7 months ago

What about dm-raid? Is it still risky? I guess so, since it's separate devices. So any software RAID 5/6 would be problematic?

[-] BorgDrone@lemmy.one 1 points 7 months ago

I'd say "old" RAID could be dead if you have proper backups and have the ability to replace a defect drive fast in the case uptime is crucial.

RAID and backups serve different purposes. Backups are to prevent data loss, RAID is to prevent downtime in case of hardware failure. They are not interchangeable.

[-] LemmyHead@lemmy.ml 1 points 7 months ago

Different purposes, true, but not exclusively. RAID only helps with drive failure specifically. If downtime is intolerable, just using RAID isn't the right solution; you should look into total redundancy of the hardware and more. It also comes with performance bottlenecks or improvements depending on the setup, which is another factor to take into account. So in the end it really depends on your requirements, and backups can actually serve as an alternative, depending on your setup and as long as they meet your RTO.

[-] KeepFlying@lemmy.world 1 points 7 months ago

Maybe, maybe, MAYBE for a prosumer desktop situation it's less necessary than it used to be. But it's absolutely still needed; your friend is dumb and reckless with their data.

Drives fail all the time, not just sectors.
