RAID Advice


Jdiesel

I'm in the process of setting up my new server and am looking for some advice on how to set up my storage. I will have 12x2TB SAS drives at my disposal. My priorities, in order, are:

 

1. Maximize usable storage

2. Maximize read speeds

3. Fault tolerance

 

Am I correct with my decision to go with a RAID 6 setup? If not, why? I unfortunately won't have the option to use an SSD.
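For a quick sanity check on the trade-offs, here is a small sketch (mine, not from the thread) comparing usable capacity and guaranteed fault tolerance for 12x2TB drives under the common RAID levels:

```python
# Usable capacity and guaranteed fault tolerance for 12 x 2 TB drives
# under common RAID levels. Drive count and size come from the post;
# the formulas are the standard ones for each level.

N_DRIVES = 12
DRIVE_TB = 2

def raid5(n, size):   # single parity: lose 1 drive's worth of space
    return (n - 1) * size, 1

def raid6(n, size):   # double parity: lose 2 drives' worth of space
    return (n - 2) * size, 2

def raid10(n, size):  # mirrored stripes: half the raw capacity
    # guaranteed to survive only 1 failure; more only if the "right" drives fail
    return n // 2 * size, 1

for name, fn in [("RAID 5", raid5), ("RAID 6", raid6), ("RAID 10", raid10)]:
    usable, tolerated = fn(N_DRIVES, DRIVE_TB)
    print(f"{name}: {usable} TB usable, survives {tolerated} failure(s) guaranteed")
```

RAID 6 lands at 20TB usable with any-two-drive protection, which is why it tends to win for priorities 1 and 3 together.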


Guest asrequested

I'd suggest not using RAID at all and using drive pooling instead. You'll use all of the space, have different options for fault tolerance, and can use striped reading for bandwidth. And it's much easier to maintain: building a RAID takes hours; pooling takes seconds.


Jdiesel

And I think that one SAS port supports one SAS device. If I'm right, you'll need 12 SAS ports.

 

Most SAS ports support 4 drives each. You can also use expanders to add even more drives.


PenkethBoy

If you have 12 drives I would build two arrays, as high spindle counts (above about 8) are best avoided.

 

So with, say, 2x RAID 6 arrays of 6 disks each, you will have 2x 6.8TB usable space - it depends if that's enough - plus the overhead of managing two arrays.

 

I have used RAID for years and it has its plus and minus points - but in the last year or so I have moved away to DrivePool - Drive Bender does essentially the same thing.

 

A single DP pool of 12 disks gives ~20TB usable space - with 2x duplication this comes down to ~10TB, obviously :) - but the key point is that you can add as many drives as you like, as they work "independently", unlike RAID, so disks are only spun up when needed. You can of course add disks to RAID arrays - the downside is you then have to rebuild the whole array, which can take hours or even days, with the potential of a disk failure while rebuilding - one of the reasons RAID is not a backup solution.
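The ~20TB and ~10TB figures work out once you account for the decimal-TB vs binary-TiB difference; a quick sketch of the arithmetic (the unit conversion is standard, the 2x duplication factor is from the post):

```python
# Why "12 x 2 TB" shows up as roughly 20 TB usable, and ~10 TB with 2x
# duplication: drives are sold in decimal TB (10^12 bytes) but report
# capacity in binary TiB (2^40 bytes).

DRIVES = 12
DECIMAL_BYTES = 2 * 10**12                   # bytes per advertised 2 TB drive

tib_per_drive = DECIMAL_BYTES / 2**40        # ~1.82 TiB as the OS reports it
pool_tib = DRIVES * tib_per_drive            # plain pool: no parity overhead
duplicated_tib = pool_tib / 2                # 2x duplication stores every file twice

print(f"per drive:  {tib_per_drive:.2f} TiB")
print(f"pool:       {pool_tib:.1f} TiB")       # ~21.8, the "~20TB" in the post
print(f"duplicated: {duplicated_tib:.1f} TiB") # ~10.9, the "~10TB" figure
```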

 

If a drive dies in DP you can "just" pull it and replace it - DP then re-duplicates the pool. It's a bit more complicated than that, but it's a simpler and easier process than a RAID rebuild etc.

 

A RAID of more than 2 disks will have a greater read speed than DP or DB, as more spindles are providing the data - but what do you need to read off the disks? A single disk will saturate a 1Gb network - so what are your usage requirements etc.? That will help narrow the options down a bit :)
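The "single disk will saturate a 1Gb network" claim is easy to check. The ~150 MB/s sequential figure below is an assumed ballpark for a 7200rpm SAS drive, not a number from the thread:

```python
# Back-of-envelope check that one spinning disk can saturate gigabit
# Ethernet: the link's line rate in MB/s vs a typical sequential read.

GIGABIT_MBPS = 1_000_000_000 / 8 / 1_000_000   # 1 Gb/s expressed in MB/s
HDD_SEQ_MBPS = 150                              # assumed 7200rpm SAS sequential read

print(f"1 Gb/s link:  {GIGABIT_MBPS:.0f} MB/s")
print(f"single drive: ~{HDD_SEQ_MBPS} MB/s sequential")
print("drive can saturate link:", HDD_SEQ_MBPS >= GIGABIT_MBPS)
```

So for sequential streaming over gigabit, extra spindles add nothing; the read-speed advantage of RAID only matters for local or multi-client workloads.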

 

Are you thinking of using the RAID card to build the arrays or going with software raid?

 

Most SAS RAID controllers can do 4 drives per port and have two ports - so 8 drives. Are you planning on using 2 controllers or a SAS expander? Enterprise-class controllers can usually support up to 256 drives, so depending on your intended setup one card should be enough. Or do you have a server case with a SAS expander built in?

 

One other option that some use is unRAID "underneath" DP, running with no duplication - you just lose one disk to parity. Not something I have tried, as it seems an overcomplicated solution with extra management overhead - but each to their own :)

 

Anyway, a few things to think about - if you can provide more detailed info we can help you decide what's best for you :)

 

Have fun


Jdiesel

The last server I owned was built around unRaid using a single parity drive, which I assume was similar to RAID 5, and I was lucky enough to never have a failure over its lifespan. I have since retired it and have moved to renting dedicated servers. My current server is set up as 4 individual 2TB drives with no pooling or RAID, and I manually balance the files over the 4 drives. At one point I had a HDD failure which wiped out 2TB of movies - not the end of the world, but a pain to replace. I would like to minimize the potential loss of data, as it is not practical for me to back up 24TB of data. I see the value in a JBOD setup where, if a drive fails, I only lose the data on that drive, but I would rather minimize the loss altogether.

 

RAID 10 results in too little usable space for me. My data isn't important enough to justify dropping to only 12TB of usable space.

RAID 5 does not give me enough protection in the event I have a disk fail and need to rebuild. The loss of 2TB of usable space going to RAID 6 is worth the added peace of mind.

 

My server runs Linux and I plan to utilize the RAID capabilities of my storage controller should I go that route. I believe the RAID card is an LSI MegaRAID 9260-16i. Seeing as the equipment is rented, I am not concerned about drive wear, spin-up, or power usage. I want the quickest read times possible without going to an SSD, which is not an option.


Guest asrequested

With drive pooling, you can set what redundancy you want, even after the pool is made. It's very flexible: 1x duplication (survives 1 drive loss), 2x duplication (2 drive losses), etc. And the software will automatically balance everything. You can add any drive at any time.


PenkethBoy

If it's Linux then ZFS is an option, as long as you can set the card to HBA mode and pass through the disks.

 

RAID 6 is better than RAID 10 in that you have one extra level of redundancy: if you lose the wrong two drives, the RAID 10 is toast and your data with it. With today's CPUs the parity calc has a minimal hit on RAID speed, so RAID 10 is only slightly faster - it's a balance of risk vs speed. If it's a hosted solution and you are accessing it remotely, then speed is only going to affect local usage.
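The "wrong two drives" risk can be quantified. A small sketch (my arithmetic, not from the thread) for a 12-drive RAID 10 laid out as 6 mirrored pairs, assuming the second failure strikes a remaining drive uniformly at random:

```python
# Chance that a second random drive failure kills a 12-drive RAID 10
# (6 mirrored pairs): the array only dies if the second failure hits
# the one surviving partner of the already-failed drive.

from fractions import Fraction

pairs = 6
drives = 2 * pairs

# After one drive fails, drives - 1 = 11 remain, and exactly 1 of them
# is the dead drive's mirror partner.
p_fatal = Fraction(1, drives - 1)

print(f"P(second failure is fatal) = {p_fatal} = {float(p_fatal):.1%}")
# RAID 6, by contrast, survives ANY two simultaneous drive failures.
```

So roughly a 1-in-11 chance that a second failure lands on the worst possible drive, versus zero such scenarios for RAID 6.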

 

RAID 5 is OK for, say, 4 drives, but beyond that I would go RAID 6.

 

One thing to consider/confirm with your supplier: if the card dies, will they replace it like for like? If you have to change the card and your RAID setup is stored on the card, importing the setup might be an issue.

Link to comment
Share on other sites

Jdiesel

One thing to consider/confirm with your supplier: if the card dies, will they replace it like for like? If you have to change the card and your RAID setup is stored on the card, importing the setup might be an issue.

 

Good point

Link to comment
Share on other sites

Guest asrequested

I just think that RAIDs are too rigid. If something fails, it's really difficult to correct; even a 'simple' rebuild is a pain. I recently had a drive die, and the replacement was so easy: plug it in, no formatting required, add it to the pool, walk away.


Jdiesel

No specific reason other than that I don't have much experience with pooling. My only strict requirement is that I run Ubuntu Server as the OS. I have used LVM in the past but made a mistake that took out my entire logical volume, so I am a bit reluctant to experiment on my production server.


Guest asrequested

Ah, I see. I don't remember if StableBit supports that OS, but I would imagine it does. And you can test it for a month for free. You could try it on two random old drives and see if you like it. It's pretty straightforward and easy to use. And as I said, you can change the settings at any time; nothing you do is set in stone. I'd encourage you to give it a whirl.


dcrdev

Use Linux lol

 

Not Ubuntu though...

 

If you're insistent on RAID, I would highly recommend ZFS over traditional hardware RAID; it negates the need to hunt down the same controller to recover your data should the current one die for any reason. It also does automatic error recovery, checksumming, and instantaneous immutable snapshots. ZFS is really the only valid option for a software RAID implementation; the only downside is it's very heavy on RAM.

 

Drive pooling, however, is something I know very little about - a lot of people seem to like it. If you don't want to use Windows (and I get the many, many reasons you wouldn't lol) then I think unRAID is built around drive pooling. It's based on Slackware and is more of an embedded system, but you can run KVM instances on it - so you could run another distro in tandem.


BAlGaInTl

I'm also considering moving from my 5 disk mdadm (Linux Rocks) raid to a pooled solution. 

 

My data is fairly static, so I think pooling with redundancy could work.  Other than unRaid, I haven't found a way that I'm really comfortable with yet.

 

I use OMV as my NAS OS, and would like to stay on FOSS software.


dcrdev

I'm also considering moving from my 5 disk mdadm (Linux Rocks) raid to a pooled solution. 

 

My data is fairly static, so I think pooling with redundancy could work.  Other than unRaid, I haven't found a way that I'm really comfortable with yet.

 

I use OMV as my NAS OS, and would like to stay on FOSS software.

 

Heard some good things about snapraid...

 

But seriously zfs...


BAlGaInTl

Heard some good things about snapraid...

 

But seriously zfs...

 

If I wanted to use ZFS, I think I would have to change my whole environment and go with something like FreeNAS. I'm not interested in learning a whole new system for my personal NAS.

 

Why not btrfs, since I'm on *nix?


dcrdev

If I wanted to use ZFS, I think I would have to change my whole environment and go with something like FreeNAS. I'm not interested in learning a whole new system for my personal NAS.

 

Why not btrfs, since I'm on *nix?

 

Wouldn't you then have to learn btrfs?

 

Don't use btrfs for anything except RAID 0/1, because up until recently there have been massive bugs around parity. This is now much improved, but only very recently, and it's far too early to use in production; on their own wiki they even say that RAID 5/6/10 is only "mostly stable". btrfs is good, though - just not ready yet.

 

As for ZFS on Linux - it's pretty much at parity with FreeBSD, but it has the slight added complication of being out-of-tree, i.e. it comes in the form of kernel modules; on Ubuntu, though, this is a non-issue as it's in the base repos. Ultimately, if you just want a NAS, go FreeNAS; if you want to do other things with your system, go ZFS on Linux.


BAlGaInTl

Wouldn't you then have to learn btrfs?

 

Don't use btrfs for anything except RAID 0/1, because up until recently there have been massive bugs around parity. This is now much improved, but only very recently, and it's far too early to use in production; on their own wiki they even say that RAID 5/6/10 is only "mostly stable". btrfs is good, though - just not ready yet.

 

As for ZFS on Linux - it's pretty much at parity with FreeBSD, but it has the slight added complication of being out-of-tree, i.e. it comes in the form of kernel modules; on Ubuntu, though, this is a non-issue as it's in the base repos. Ultimately, if you just want a NAS, go FreeNAS; if you want to do other things with your system, go ZFS on Linux.

OMV is Debian-based, but I believe they have support for ZFS.

 

I'll have to look in to it.

 


aptalca

btrfs is still in development and has issues with filesystem corruption as well as high disk I/O (at least as experienced on unRAID).

 

Why not use unraid since you already have a license?

 

It has come a long way in the last couple of years. It is not RAID; it is actually drive pooling with up to 2 parity drives. It also allows cache drives or pools (btrfs-based) for fast writes.

 

It has full support for docker and kvm with very nice interfaces.

 

And it was created for media storage specifically.

 

You get to mix and match different drive sizes and file systems. It's super flexible (I stick with XFS).

 

Over the years I've had a few drive failures but never lost any data. I can now tolerate two simultaneous drive failures and still not lose data (without having to rely on backups), although I also have a second unRAID server for backing up crucial data.

