
Storage Spaces and ReFS



JeremyFr79

Alright, so tomorrow is going to start an adventure for me.  I have been a lifelong hardware RAID guy.  My current configuration includes a 24-drive RAID50 array and several other RAID5s.  Mind you, I don't rely on RAID for data integrity/redundancy, but I do want high throughput capabilities.  I use several layered backup methodologies for the "important" data on my home network; everything else is easily replaced.

 

Sadly, though, I currently find myself highly unsatisfied with the performance of my various arrays in one aspect or another.

 

My current main fileserver runs on dual quad-core Xeons, 48GB of RAM, a 60GB SSD, and 24 1TB disks running off an Adaptec 5805 controller.

 

As of tomorrow I'll be upgrading to a faster 120GB SSD for the OS, and I'm going to be finally upgrading from Server 2k8 R2 to 2012R2.  I have, after careful consideration, decided to go with Storage Spaces and ReFS.  I've also decided that I'll share what I glean from my experience and try to share some performance numbers, etc.  I'm fairly excited due to some of the great features offered like deduplication, thin provisioning, self-healing, etc.
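
For anyone wanting to follow along, the basic provisioning should look roughly like this in PowerShell (a sketch using the Storage module cmdlets in 2012R2; the pool and disk names and the sizes are placeholders, not a real layout):

    # Grab every disk that's eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the pool on the Storage Spaces subsystem
    New-StoragePool -FriendlyName "MainPool" `
        -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks $disks

    # Thin-provisioned mirrored space; capacity can be grown later
    New-VirtualDisk -StoragePoolFriendlyName "MainPool" -FriendlyName "Data" `
        -ResiliencySettingName Mirror -ProvisioningType Thin -Size 20TB

    # Bring it online as a formatted ReFS volume
    Get-VirtualDisk -FriendlyName "Data" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -NewFileSystemLabel "Data"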

 

Now mind you, this box is only a fileserver; Emby runs on a dedicated server with much more horsepower under the hood.

 

So I will end with this: if you have any questions feel free to ask, and I'll try to answer as best I can and share my honest thoughts and opinions about my experiences with this.  What has really piqued my interest is the simplicity with which you can upgrade to larger disks without constant rebuilds and downtime, and how easily you can expand and shrink pools and volumes.  Personally I think this is going to be awesome.
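
That expansion story boils down to just a couple of cmdlets (again a sketch; the disk name, volume letter, and size are examples):

    # Add a new, larger disk to the existing pool
    Add-PhysicalDisk -StoragePoolFriendlyName "MainPool" `
        -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk25")

    # Grow the virtual disk, then the partition underneath the volume
    Resize-VirtualDisk -FriendlyName "Data" -Size 24TB
    Resize-Partition -DriveLetter D `
        -Size (Get-PartitionSupportedSize -DriveLetter D).SizeMax

No rebuild of the whole array, which is exactly the part hardware RAID makes painful.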


Beardyname

I would like to know how it turns out with ReFS :)

 

My next box will be a dedicated storage box, and I have not yet decided between ReFS and ZFS (MS vs. Unix, really).  Getting someone to talk about their experience is worth more than reading articles on the subject :)

The build is probably a few months off, but the more info I have the merrier :D


legallink

Sadly, though, I currently find myself highly unsatisfied with the performance of my various arrays in one aspect or another.

 

My current main fileserver runs on dual quad-core Xeons, 48GB of RAM, a 60GB SSD, and 24 1TB disks running off an Adaptec 5805 controller.

 

This has you unsatisfied?  How many people are you serving with this?  That is a beefy machine.

 

I am also curious as to how ReFS works out.


JeremyFr79

Storage is a PITA sometimes, especially in a setup like mine.  As I type this I'm finishing up moving all the data currently on the server to a couple of NAS systems.  Moving nearly 12TB of data takes a while lol.

 

Once that's complete I'll do some benchmarks on the current array and post up, and then I'll be taking it completely offline for the SSD & OS upgrades.  And then, playtime.


JeremyFr79

This has you unsatisfied?  How many people are you serving with this?  That is a beefy machine.

 

I am also curious as to how ReFS works out.

LOL, there are things I've had issues with that I have not been able to remedy even with RAID tuning, etc.  Copying a file on the array, for instance, I typically only see copy speeds around 50-75MBps.  This pales in comparison to previous setups I had, where with an 8-drive RAID5 I'd see file copy speeds of 300+MBps on the array.  Read/write performance is all over the place and I've never been able to figure out why.  I think in all honesty it's an IO and parity calculation issue with the controller and the number of drives in the array.  Other issues I have are not directly related to this and are more network issues, which I'll be playing around with more after I get everything back onto the main server.  For instance, my throughput to my 2 Qnap TS-809s is ABYSMAL.  I have quad bonded gigabit on the file server and each Qnap has dual bonded, and I can't even saturate a single gigabit link between them.  Yet going to other devices on the network I can easily saturate 1-2Gbps without issue.
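
A reasonable starting point for chasing the network side might be something like this (this assumes the bonding is done in Windows with the in-box LBFO teaming rather than in the NIC driver, which may not match this setup):

    # Is the team up, and what teaming/load-balancing mode is it in?
    Get-NetLbfoTeam

    # Per-adapter counters show whether traffic actually spreads
    # across the bonded links or piles onto one of them
    Get-NetAdapter | Where-Object Status -eq "Up"
    Get-NetAdapterStatistics | Sort-Object ReceivedBytes -Descending

    # For 2012R2-to-2012R2 copies, SMB Multichannel can also be checked
    Get-SmbMultichannelConnection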

 

To answer your question though I have about 15 devices both local and remote that rely on this server on a regular basis.


legallink

Is it because you have so many small drives?  I'm just curious because I have a 16TB setup, running just with a Core i5, and I get throughput/write speeds of around 100MBps and read speeds higher than that.


JeremyFr79

Is it because you have so many small drives?  I'm just curious because I have a 16TB setup, running just with a Core i5, and I get throughput/write speeds of around 100MBps and read speeds higher than that.

Honestly, like I said, it seems to be an IOPS issue with the controller itself.  The drives are all enterprise class, each by itself able to yield sustained throughput of around 120 or so MBps.  Now don't get me wrong, there are times I can completely saturate the PCI-E bus, maxing out throughput at around 600MBps, but it's very, very rare now.  Whereas before, in my older setup, I could do that all day long.
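
One way to pin down whether it really is the controller running out of IOPS is a synthetic tester like Microsoft's diskspd, which takes file-copy caching effects mostly out of the picture (the test file path and sizes below are just examples):

    # 60s random 8K test: 8 threads x 8 outstanding I/Os, 30% writes,
    # 10GB test file; -L adds latency stats
    .\diskspd.exe -c10G -d60 -t8 -o8 -b8K -r -w30 -L E:\iotest.dat

    # Same test sequential with large blocks (drop -r) for comparison
    .\diskspd.exe -c10G -d60 -t8 -o8 -b512K -w30 -L E:\iotest.dat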

 

Ironically, I had actually grabbed a couple of new controllers with larger cache and was going to go with dual SAS RAID cards to split the load, but after seeing how easily drives can be added/removed and storage can be expanded/shrunk with Storage Spaces/Pools in WS2012R2, it just seems like a much easier route in the long run.


If you're doing Storage Spaces, do NOT use parity. It is dreadfully slow. Mirroring is OK. I tried this setup about a year ago and my opinion is it's really not that ready for prime time. A solution like Stablebit DrivePool is exponentially better than Storage Spaces.
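
For reference, parity behavior is partly baked in at creation time: the column count and interleave can only be set when the virtual disk is made. A sketch (the values here are examples, not recommendations):

    # What resiliency settings would this pool use by default?
    Get-StoragePool -FriendlyName "MainPool" | Get-ResiliencySetting

    # Columns/interleave are fixed at creation and affect parity speed
    New-VirtualDisk -StoragePoolFriendlyName "MainPool" `
        -FriendlyName "ParitySpace" -ResiliencySettingName Parity `
        -NumberOfColumns 8 -Interleave 256KB `
        -ProvisioningType Thin -Size 10TB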


JeremyFr79

If you're doing Storage Spaces, do NOT use parity. It is dreadfully slow. Mirroring is OK. I tried this setup about a year ago and my opinion is it's really not that ready for prime time. A solution like Stablebit DrivePool is exponentially better than Storage Spaces.

Did you use it under 2012 or 2012R2?  I only ask because everything I've read indicates vast improvements in 2012R2 for Storage Spaces/ReFS.


JeremyFr79

OK, so here are benchmarks of the current setup (well, most of it).

 

The top benchmark is the current SSD drive used for the OS only.  It's an older Kingston V200 60GB; not too bad performance, really, for what it is.  It's getting replaced with a Patriot Blaze 120GB tonight.

 

The next benchmark is the main array.  As stated, it is a 24-drive (1TB Hitachi Ultrastars) RAID50 array: a pair of 12-drive RAID5s striped into RAID0.  Now, sequential read/writes are "OK," but as you can see it falls on its face just about everywhere else.  And this also seems to be really hit or miss.  I can copy a file from one location on the array to another and it will run about 300MBps; other times it crawls at 50MBps.  The files are always large 8-20GB files, so it should all be sequential writes.

 

Now, the last benchmark pictured is one of my 2 Qnap NASes through iSCSI.  It's running an 8-drive RAID5 array with 10TB of capacity.  As you can see, even its sequential R/W blows the main array away, and that's running through iSCSI on top of it all.

 

Hopefully this gives some insight into my disappointment with the array.

 

Lastly, ignore the "error" showing for the main array; it's complaining about failed fans on the backplane that don't exist.

[Attached screenshot: prebench.jpg, the benchmark results described above]


2012R2.  For mirroring, performance is good, but parity was just awful.  I have a similar server setup to yours.


MrFlibbles

I'm fairly excited due to some of the great features offered like deduplication, thin provisioning, self-healing, etc.

 

I'm pretty sure you can't use deduplication with ReFS.  I would still consider using NTFS, as the version in 2012 R2 is superior to the one in 2008 R2.
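
If it helps, dedup in 2012 R2 is its own role feature and is enabled per NTFS volume, roughly like this (the drive letter is an example):

    # Install the feature, then enable dedup on the data volume
    Install-WindowsFeature FS-Data-Deduplication
    Enable-DedupVolume -Volume "D:" -UsageType Default

    # Kick off an optimization pass and check the savings
    Start-DedupJob -Volume "D:" -Type Optimization
    Get-DedupStatus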


JeremyFr79

I'm pretty sure you can't use deduplication with ReFS.  I would still consider using NTFS, as the version in 2012 R2 is superior to the one in 2008 R2.

You are correct, NTFS it is; I was looking forward to the enhanced resiliency, oh well.  Currently getting things up and running as I type.  The new SSD is WAY faster than the old one was, at least it feels faster lol.  Got my pool built, working on provisioning volumes at the moment.  Also got my iSCSI situation figured out.  I'm now running six NICs, four of which are dedicated to MPIO to the two NASes.  Just in a few test file copies the difference in performance is quite apparent.
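
For the curious, the MPIO side comes down to roughly this (assuming the in-box MPIO feature with the Microsoft DSM; the round-robin policy is just an example, not necessarily what was used here):

    # Install MPIO and let the Microsoft DSM claim iSCSI disks
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI

    # Round-robin the I/O across the dedicated NICs
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

    # Sanity-check sessions and claimed paths
    Get-IscsiSession
    mpclaim -s -d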


JeremyFr79

Well, this is going to be a quick thread lol.  After some testing I just couldn't get Storage Spaces to produce the kind of performance I wanted, especially on writes.  Reads were actually a bit better, but writes were ABYSMAL.  So I'm gonna have to play around with tuning the hardware RAID and see if I can eke out some better performance.  May still end up going with dual HBAs and splitting the drives 12 and 12 on each HBA.

