
My new setup (A little bragging, a little help)


JustEric78


JustEric78

I will start by saying I am an IT geek who is about to finish a master's degree and has been an IT manager for years, so this build is not aimed at the general public. That said, I had to do MANY hours of research to get to my new build, so I thought I would share what I learned in the hope of saving someone else the time and grief I went through.

 

The build: a true back-end server (Server 2012 R2) with 16TB of usable storage on a 24TB RAID 6.

 

The struggle: To find the cheapest way to build a storage server that can grow indefinitely for the future.

What I started with: an Intel motherboard and a 3rd-gen i7 that I was given by Intel at one of their events, plus 32GB of DDR3. It was running in an Antec server case with an 8TB RAID 5 that was quickly running out of space.

 

My chore: to figure out how to create a RAID 6 run by a RAID controller capable of pushing 20 drives without being bottlenecked by bandwidth.

 

Solution:

Server case: http://www.newegg.com/Product/Product.aspx?Item=N82E16811219033

It was on sale for $270 during their holiday sale; it usually sells for $330 and is still a good value at that price. It is capable of holding 20 hot-swappable SATA drives. The case is not bad for the price as long as you know you are not buying an enterprise-quality case. The backplanes are stable but are missing a few features the enterprise community would require. This is basically a low-cost home storage server option only.

 

RAID controller: this is the problem here; they get very expensive, very fast. My solution uses an LSI 9260CV-4i.

This controller has a single x4 SAS/SATA connector which, as it sounds, can connect 4 SATA drives at 6Gb/s each.

I wanted, though, to control up to 20 drives, so how is this done? The bulk of my research was in this category. There were a few options, but I ended up going with Intel's SAS expander solution.

 

Raid Card:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118166&Tpk=LSI%20LSI00280

SAS Expander

http://www.newegg.com/Product/Product.aspx?Item=N82E16816117207

 

The RAID controller connects to one of the ports on the Intel card. The remaining 5 ports on the Intel card connect to the backplanes in the Norco case, tying all 20 potential drives back to the one input on the RAID controller. Since the RAID controller's link is 4 x 6Gb/s, the bandwidth to the drives is roughly 24Gb/s, which is plenty for media streaming. The card is capable of RAID 6, which gives two-drive parity, meaning you can lose up to two drives without losing any data and then rebuild as you replace them.
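
For anyone following along, here is the rough back-of-the-envelope math (nominal link rates only, ignoring 8b/10b encoding and protocol overhead, so real-world throughput will be a bit lower):

```python
# Back-of-the-envelope bandwidth check for the 4-lane SAS/SATA uplink
# feeding the expander (nominal rates only; real-world throughput is lower
# once encoding and protocol overhead are accounted for).

lanes = 4                 # SFF-8087 connector on the RAID card
gbps_per_lane = 6         # SATA III / SAS 2.0 nominal rate per lane
drives = 20               # max drives hung off the expander

total_gbps = lanes * gbps_per_lane            # 24 Gb/s aggregate uplink
per_drive_mbps = total_gbps * 1000 / drives   # if all 20 drives stream at once

print(f"Aggregate uplink: {total_gbps} Gb/s")
print(f"Worst case per drive with {drives} drives busy: {per_drive_mbps:.0f} Mb/s")
# -> 24 Gb/s aggregate, ~1200 Mb/s per drive, far above a ~50 Mb/s Blu-ray stream
```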

 

Drives:

I went with Seagate's NAS-class drives, which sit somewhere between enterprise and consumer products. They are not the fastest, but they come with a good warranty and will work fine for my purposes. I got 6 of these, and since I will lose two drives' worth of capacity to parity, that leaves me with the 16(ish)TB of storage. The benefit here is that I can simply pop in a new drive when I start to run low on space and expand my RAID 6 in real time to include it. I plan to add a second RAID card when I reach 10 drives and create a second RAID 6 for safety reasons; 20 drives is a lot even when you can lose two of them.
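
If you want to sanity-check the capacity math, a quick sketch (assuming 4TB drives, two parity drives, decimal TB, and ignoring filesystem overhead):

```python
# Rough RAID 6 usable-capacity estimate (two drives' worth of parity,
# decimal TB, no filesystem overhead considered).

def raid6_usable_tb(num_drives, drive_tb):
    if num_drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(6, 4))    # current array: 6 x 4TB -> 16 TB usable
print(raid6_usable_tb(10, 4))   # after online expansion to 10 drives -> 32 TB usable
```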

 

Other inclusions: I added a nice 650W PSU to push all the drives, which will be plenty when basically the only thing it is pushing is drives (no video card or other extras).

 

A slim DVD drive, since the case will not take a standard-sized drive. Amazon also had a sale going on the new Intel 530 series 240GB SSDs, which I took advantage of; I will ghost my OS to it from my existing SSD.

 

The solution, although not cheap, was very reasonable for what I have gained in future expansion possibilities.

 

Cost:

What I already owned:

Intel motherboard (~100)

I7 proc (~300)

Ram (~300)

 

What I purchased: 

Norco case (~300)

Raid Controller (~300)

SAS Expander (~300)

HDDs (~$200/ea = $1,200)

650W bronze-rated single-rail PSU for the many drives (~80)

Slim DVD drive (~30)

Misc cables/adapters/connectors (~50)

 

I may switch the RAID card to a newer-generation card with 1GB of RAM on board and a dual-core RAID chip. I haven't decided yet if I would even see any benefit with my setup. My current plan is to move up to 10 drives and then get a second card running a RAID 6 for the other ten drives. If you have any input on this, feel free to post; I was looking at the LSI 9360 card for reference.

 

 

If anyone has any questions please feel free to ask. 

 

joshstix

Some discussion points, no criticism meant.

 

In terms of expansion does that RAID controller support adding disks to an active array as need be?  My previous experience with hardware RAID has not allowed this.

 

Is there a reason why you chose to go with hardware RAID rather than one of the modern software alternatives?

 

I've got a very similar hardware setup in a Norco 4224, but I have ESXi set up to virtualise my storage server and also run a media server in its own environment. I run OpenIndiana with all the storage disks in two ZFS RAID-Z2 arrays, currently 8x1.5TB and 8x2TB, with the SAS controllers passed through to that VM to give it complete access to the disks. I have a separate Windows VM on the server to run Media Browser Server. This setup has been 100% reliable for me, but I am going to move away from ZFS and traditional RAID solutions in search of power savings, as the performance is simply not needed. I'm going to rebuild, replacing the OI VM with a Server 2008 VM using FlexRAID to pool and protect the storage, initially with 8x4TB Seagate disks with two used for parity, move all of the current data onto that, then move the old disks over and get rid of the OI install completely. Moving away from a striped solution will allow me to sleep the unused disks, which I'm assuming will be a significant power reduction on a machine with 24 3.5" disks that runs 24x7.
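
For context, a rough sketch of the usable space in the pools I'm describing (decimal TB, parity overhead only; ZFS metadata/slop space ignored, and the FlexRAID figure assumes the two dedicated parity disks mentioned above):

```python
# Usable-space comparison between the two RAID-Z2 vdevs described above
# and the planned FlexRAID pool (decimal TB, parity overhead only).

def usable_tb(num_disks, disk_tb, parity_disks=2):
    return (num_disks - parity_disks) * disk_tb

zfs_pool = usable_tb(8, 1.5) + usable_tb(8, 2.0)   # 9 + 12 = 21 TB usable
flexraid = usable_tb(8, 4.0)                        # 24 TB usable with 2 parity disks

print(f"RAID-Z2 pool: {zfs_pool} TB usable")
print(f"FlexRAID (8 x 4TB, 2 parity): {flexraid} TB usable")
```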


JustEric78

I looked into FlexRAID and unRAID etc. when putting this build together. The problem I had with just going with an HBA and saving money over a hardware RAID was that I could not for the life of me find any definitive benchmarks or other performance statistics on how well the software RAID performs. My experience with software RAID in the past has been less than pleasant, and we are talking about potentially losing my collection, which is too large to realistically back up. I felt more comfortable with this setup because I know that if the card goes bad I can replace it with nothing lost, and similarly if a drive or two goes bad I can replace those as well. With a software RAID, if something goes wrong with my OS or the software itself, I am just SOL. Again, this is experience from the past and the technology may have come a long way; I just could not find enough information to make me feel comfortable moving forward with it. I would love to hear more about it, because I plan on getting another card after I hit ten drives, and an HBA would save me a ton over a hardware RAID card.


joshstix

My ZFS array can easily be moved between any OS installs that support a fairly recent version of the ZFS pool; it was actually built in OpenSolaris, but I forgot the root password so I rebuilt with OpenIndiana. Performance of ZFS when set up correctly is VERY good.

 

As to FlexRAID, the disks are just NTFS disks that can be read in any Windows box; even if you lose more disks than your parity tolerance level, the remaining FlexRAID disks are readable. FlexRAID performance, being non-striped, is going to be very dependent on the number of reads happening and how your data is laid out. Each file is only stored on a single disk, so you're only going to get single-disk performance for reads or writes of a single file, but in theory you could be reading or writing as many files concurrently as you have disks and get total throughput about as high as a striped solution. The reality for a media server, though, is that the read speed of a decent single disk is enough to feed multiple full-bit-rate Blu-ray streams, so it's a question of how much performance is enough?

 

Windows software RAID in the past has definitely been terrible but the world of storage has moved a long way on from those times.


JustEric78

If it is using the cache from each individual disk on its own, it would seem to me that the performance of the array would be quite a bit less than the combined cache of my 6 disks in a RAID 6. Is it just that FlexRAID is fast ~enough~, or does it actually compete, given the lack of the speed loss you get from parity in a hardware RAID? I really wish they would put out some benchmarks. I feel that if I ran a program like that and knew I could compete, I would WANT to post benchmarks, but they are nowhere to be found. This tells me that for the average consumer they do work, since they are easy and cheap in comparison to a true hardware RAID, but they do not meet the same performance that one is paying for with a nicer card such as the one I purchased. Perhaps you wouldn't mind downloading a free program to benchmark your reads/writes and posting the results? I would like to see a few tests with a 5GB chunk if you have a little time. Just run the test 2-3 times and post screens of the results from your FlexRAID or whichever other program you use.

 

This is safe, I use it on a regular basis for work. If you don't feel comfortable downloading it let me know and I will give you a copy of it directly zipped. This request goes out to anyone with one of these software raid programs, would love to see the results. 

 

http://www.totusoft.com/files/LAN_SpeedTest.exe
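
If you would rather not run an unfamiliar .exe, a rough Python equivalent along the same lines would look something like this (the UNC path is just a placeholder, and OS/filesystem caching makes the numbers approximate, so treat it as a ballpark comparison only):

```python
import os, time

# Minimal sequential write/read throughput test against a network share.
# TEST_FILE is a placeholder path; adjust the size as needed.

TEST_FILE = r"\\server\share\speedtest.bin"   # hypothetical UNC path
SIZE_MB = 5 * 1024                            # ~5GB test file
CHUNK = b"\0" * (1024 * 1024)                 # 1MB writes

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                      # force data out before timing stops
write_mbps = SIZE_MB * 8 / (time.time() - start)

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(1024 * 1024):                # sequential read back
        pass
read_mbps = SIZE_MB * 8 / (time.time() - start)

os.remove(TEST_FILE)
print(f"Write: {write_mbps:.0f} Mb/s   Read: {read_mbps:.0f} Mb/s")
```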


joshstix

I only have a single gigabit link between my server and my clients; since a single disk can generally flood that and provide more bandwidth than I need for client playback, I don't see a lot of need to benchmark. When I built the ZFS array it was benching at 1.2GB/s, but the only time you could get that transfer rate would be locally, so it makes no difference for media serving.

 

As I said, the outright speed for a single file transfer in FlexRAID is going to be no faster than the individual hard disks you're using. FlexRAID is in no way a high-performance enterprise RAID solution. It is very much an efficient solution designed to work well for people with home media servers, with features such as being able to add a disk full of data straight into an active array and have it all protected once the parity has been calculated, using as many parity disks as you want, taking the disks to any other Windows machine and reading the data as though it were a normal disk, etc.

 

ZFS on the other hand is a serious enterprise RAID solution and is the basis of a lot of SAN solutions being sold these days.  The days of hardware RAID being the only fast reliable solution are far gone.

 

Back to the topic of your expansion plans for the future, I've looked at the manual for the RAID card you're using and I see no mention of being able to extend an active array.  What is your plan for getting to a 10 disk array from your current 6 disks before you then start a second array?  I have never used a RAID card that can have an additional disk added to the array and then extend the array to use the additional storage presented by the new disk.  I had the same issues with ZFS which is another of the reasons I've decided to move away from traditional RAID.  My setup had tied me into adding storage in chunks of 8 disks at a time with 2 of those disks used as parity to protect the data.  That gets expensive, I recently spent $1200AU on 8 new 4TB disks when they were on special but I'd much rather be able to add these a single disk at a time as needed and when the prices are best.


BrianG

...The reality for a media server though is that the read speed of a decent single disk is enough to feed multiple full-bit-rate Blu-ray streams so it's a question of how much performance is enough?...

 

I have found this to be true, at least with regular DVD ISO files. I don't run any kind of RAID setup, just a couple of 3TB drives with DVD ISOs on them, and I have found that I can stream to at least 3 devices at once with no lag or buffering from a single drive. The drives are just your regular Seagate 3TB SATA drives connected to motherboard SATA ports over a gigabit LAN - nothing special. I think this is because most of the load is sequential reads. If I were running a high-traffic SQL database, web server, and/or file server to many (10+) clients, I'm sure the performance would suffer.

 

For giggles, I ran that LAN Speed Test that JustEric78 posted, and on my measly system I get 725.7Mbps write and 762.22Mbps read with a 5GB chunk. Drives in a media server are likely to spend most of their time reading, though, so the read portion is probably the more relevant test. I then ran three simultaneous instances of the program against a single drive (to different sub-folders on the same share) and got an average write speed between 231-268Mbps and an average read speed of 221-231Mbps. Much slower, but still plenty fast enough to serve at least 3 clients at once. I can't imagine any RAID setup that wouldn't beat the pants off a single drive.

 

JustEric: Yeah, it's an apples-to-oranges comparison, but I think your system (which sounds sweet, by the way) will run great. I can't imagine you'd see any kind of bottleneck unless you are planning on feeding 50+ clients. :)


Brendon

Nice setup. Same as you, I spent forever researching to get the optimum build; I didn't score freebies like you, but I own a PC store, so it was still cheaply done :)

 

Personally, I only went with an i3; I don't see a media server as CPU intensive - it's more about RAM in my experience - so I loaded that out to 16GB with some Corsair sweetness and used a Gigabyte motherboard, the same one in both the server and the HTPC (the theory being that if it dies, it's easier to replace the board in the HTPC than in the server, so I basically have a backup I can swap out, leaving me with only the job of rebuilding an HTPC and not a server). I also don't use RAID; there is nothing overly important on the server, it's all media that can be replaced easily enough, and I would much rather have the storage (already out to 27TB, 9x 3TB WD Red). It's all housed in a Cooler Master case; I would love to swap that for a server case but have been unable to find one in Oz at a decent price.


JustEric78

Joshstix: Most entry-level enterprise cards and up are capable of adding storage, changing RAID types (parity), and rebuilding on the fly these days. This is the primary reason I am upgrading, aside from being able to use 2TB+ disks. If you go to the card's documentation and pull up the specs you will find this:

 

• Online Capacity Expansion (OCE)
• Online RAID Level Migration (RLM)
• Auto resume after loss of system power during array rebuild or reconstruction (RLM)
 
I would not be getting this card if it were not capable of the above. I will look into ZFS, though, because as I said, it is more expensive to go the route that I did, but I wanted to future-proof myself. I intend to be able to stream 1080p Blu-ray-quality rips to multiple machines simultaneously. This is something my current build cannot do because of hard drive limitations. This card is still not top of the line, but it is leaps and bounds ahead of where I am now, so I should be good for some time. I plan on my second 10-disk array running on a 12Gb/s card that I will prioritize for my streaming. Further, when I do the second array I plan on replacing my current mobo/proc and adding a total of 4 NICs: the two built into the server motherboard at that point plus a second, dual-port NIC, with NIC teaming set up. This will give me four gigabit NICs dishing out media across the house. I know it will be overkill, but it is not very cost prohibitive, and I will not have to worry about buffering regardless of which guests are visiting and where media is being streamed. With all that being said, however, if I can accomplish the same with ZFS I will do it in a second, so I will begin researching it. I had contemplated doing the 12Gb/s card this time around, which is why I had the question, but given the price and the fact that it will probably still be overkill in 2-3 years, I will take your and others' opinions that I should just not touch it right now, as it would be a waste of money.
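
To put the teaming plan in rough numbers (assuming ~60Mb/s per high-bitrate 1080p stream and ignoring teaming-mode and protocol overhead, which in practice will also limit how much a single client can pull):

```python
# Rough headroom estimate for 4 x 1GbE NIC teaming (nominal rates,
# no protocol or teaming-mode overhead considered).

nics = 4
link_mbps = 1000          # gigabit per NIC
stream_mbps = 60          # assumed high-bitrate 1080p Blu-ray rip

aggregate_mbps = nics * link_mbps
print(f"Aggregate: {aggregate_mbps} Mb/s "
      f"(~{aggregate_mbps // stream_mbps} streams at {stream_mbps} Mb/s each)")
# -> 4000 Mb/s, roughly 66 concurrent streams in the best case
```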
 
Lastly, my distributor has some 2000VA UPSs for cheap, so I will be adding two of them to the mix to make sure the server does not lose power and I do not lose my 1,200+ movies to power loss/brownouts, lol!
 
BrianG: Those numbers are, as you stated, more than adequate to stream a normal-sized 1080p rip to a single machine. My fear, however, is that it would be inadequate to push a full uncompressed Blu-ray rip of LoTR at a stream of 50-60Mb/s and still function without interruption if anyone else is doing anything at all through the server! To be honest, I do not even think it would be capable of streaming that ridiculously large 80GB movie, lol. This is my main concern with the investment and why I did not go software RAID. It would be adequate for the average user streaming a 5-10GB movie to a single machine. I wanted to make sure I was not putting my investment into a machine that would not meet my potential need for the future, which to a point is unknown. I plan to build on this with a second machine further down the road and do not want to have to rebuild the server a 3rd time (I already plan to rebuild a 2nd time in 2-3 years).
 
With all that being said, however, thank you very much for posting the stats on what your machine is capable of streaming. The small amount of data I could find (including another thread in this forum) suggests that the software RAIDs currently available deliver literally 10% of the performance you get from even an entry-level hardware RAID card. I just do not think they are ready for the kind of bandwidth I would like to get from my machine in the future. I will still, though, put some time into researching ZFS, and perhaps dedicate some drives in the next few months for testing before adding them to my RAID 6. If anyone is interested I will post my findings in comparison to my hardware RAID; although I am sure it will not compare, if it is enough to saturate my network then it will be worth the cost savings.
 
Brendon: You are right, there was no reason to go to the extent that I did for just a media server. I, however, use this as an RDP server as well. So, to add to the mess of stats and specifications above, I also wanted to be able to work on the machine as a desktop from anywhere while doing homework etc. while encoding a movie (still in school for another six months, so happy to be almost done!). I am sure you understand that this is a tall order regardless of the hardware, so I am hoping my little 3rd-gen i7 can keep up, because I know the i5 that has been in there has not even come close. Keep in mind the only real difference between the i5 and i7 is Hyper-Threading, so I truly hope the extra threads come through for me! I know it will be faster to encode, but I further hope I can encode while working on my papers, or even just browsing for that matter, lol.
 
You are right, though, about not needing more than what you have in your server if the reason for the server is purely streaming. Hell, an i3 is overkill for that :)
 
Good plan on having a backup mobo, but I am much more concerned than you about having to replace my collection, as I have to meet ratio *wink* and no longer have the downloads for 99% of what is on my disks. I would hate to have to start over from scratch; some of what I have was very hard to find! I would much rather have to rebuild my domain from scratch on a new machine than lose my data, and this setup takes me in that direction as opposed to worrying about losing the server build itself. I will reinstall and set up my server 10x before having to rebuild my collection. I wish there were a good way to back up this much data without spending $500+/mo.; even with the services we provide to our customers, at my cost it would be insane to keep this backed up short of having a duplicate RAID of the same size being mirrored or backed up to. I may contemplate that in the future if I am not using the space for my movie collection, to be honest.
 
Since you mentioned looking for a case, though, I would strongly recommend looking into that Norco. They are still on sale for what I paid as of this posting, and you are not going to find a case capable of that storage capacity that includes backplanes for anywhere near that cost. It is a pretty nice case for the money, but then again you have to invest in a way to connect 5 SFF-8087 backplanes to a card of some sort. It is, though, only a few hundred dollars for a card capable of that if you do not plan to build a RAID and instead just require an HBA.
 
On a side note: it is fun seeing all of the spelling corrections programs want to make when speaking in geek :P

joshstix

Good to hear on the RAID card, as I mentioned I haven't bothered to look at physical RAID cards in quite a while.

 

Definitely look into ZFS; it is a proper striping RAID solution that has massive performance potential if that's what you need.

 

Do keep in mind, though, that an 80GB Lord of the Rings Blu-ray is ~50 megabits per second, while a typical modern hard drive has a sequential read speed of >1000 megabits per second.
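
Roughly, the math looks like this (the run time is my assumption for an extended-edition film; a shorter film at the same size would just mean a proportionally higher bitrate):

```python
# Average bitrate of a large Blu-ray rip vs. single-disk sequential read.
# Run time is an assumption (~3.5 hours for an extended-edition film).

size_gb = 80
runtime_hours = 3.5                # assumed run time
disk_read_mbps = 1000              # typical modern HDD sequential read, per the post above

avg_mbps = size_gb * 8 * 1000 / (runtime_hours * 3600)
print(f"Average bitrate: {avg_mbps:.0f} Mb/s")                            # ~51 Mb/s
print(f"Streams one disk could feed: {disk_read_mbps // avg_mbps:.0f}")   # ~19
```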

 

If the intention for this server is to be a LAN party leeching beast, then for sure go for a striped RAID system, but it is definitely not needed for any kind of media streaming that would be done in a single household in the foreseeable future.

 

What you need to recognise in performance terms for solutions like FlexRAID and SnapRAID etc. is that they are not really RAID at all; the closest real RAID designation to them would be RAID 4, because they store all of the parity info on a dedicated disk, but in RAID 4 the data is still striped across multiple disks, which does not happen in these solutions. The only "RAID" characteristics that these solutions have are pooling, to present multiple disks as a single target, and parity, for disk failure recovery.

 

For the multitasking work you're having the server do, I would strongly suggest looking into virtualising with ESXi or Hyper-V. That way you can dedicate resources to your media serving, making sure that it is impossible for other workloads to rob resources from the OS serving your media and keeping the quality of service consistent. Plus virtualisation is just cool tech to play with.


JustEric78

Yeah, I do RDP setups on a regular basis for work and have the licenses to set this up at home, but I was pushing away from it for ease. You may be right, though, about setting up a second VM through Hyper-V dedicated to just my encoding tasks and keeping the media streaming machine separate. Since Server 2012 comes with two VM licenses, perhaps I should even push off my RDP services to a second VM so I can work even while something is being encoded on the server. Seems obvious for something I would promote to my customers; not sure why I did not think to go that route myself at home, lol. Thank you so much for the suggestion! I will look into the software RAIDs again if nothing else from this post. God knows hardware RAID is expensive, and software moves forward almost faster than hardware these days... I hope I find something promising to compete with them!


BrianG

Even at 50-60Mbps, a single drive is capable of feeding several clients. I'm not trying to say stay away from RAID, just providing some single-drive benchmarks for comparison purposes, basically reinforcing the fact that I doubt you'll have trouble with any kind of RAID given the performance of a single drive.

 

As far as the CPU goes, an i3 is adequate for basic server functions. But, if you have to transcode (not just stream direct) to one or more clients, then that i7 will come in handy. My server, which is an i5, functions as a SQL/IIS/Media server, and it works fine. But when I add transcoding to the mix (like to a web client), then the CPU usage spikes pretty heavily. I think I read on here that the MediaBrowser server does not use the video card to assist in transcoding, which is too bad since even a mid-range video card could take substantial load off the CPU for transcoding. Not sure how much transcoding you plan to do, so all this may not apply.

 

One other little thing I want to mention is the power supply. 650W is plenty for a typical mid to mid-high gaming system, but most of today's power supplies have the lion's share of the power dedicated to the 12V line. Having close to 20 drives may stress the 5V line, so that may be something to verify. According to WD, the WD Red drives draw as much as 280mA @ 5V and 250mA @ 12V during read/write (4.4 watts total), so you are probably OK. If you are using something like the WD Black, though, power is doubled, and you may run into limits faster once you add in the rest of the system's 5V load.
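
As a rough sketch of that 5V budget (the rail limit below is just an example figure; check the label on the actual 650W unit, which will state its own 5V rating):

```python
# Rough 5V-rail budget for a drive-heavy build (WD Red figures from above;
# the 22A rail limit is an assumed example -- check your PSU's label).

drives = 20
amps_5v_per_drive = 0.28      # WD Red read/write draw on 5V
amps_12v_per_drive = 0.25     # and on 12V

rail_5v_limit_a = 22          # hypothetical 5V rating for a 650W unit

total_5v = drives * amps_5v_per_drive     # 5.6 A
total_12v = drives * amps_12v_per_drive   # 5.0 A
watts = total_5v * 5 + total_12v * 12     # ~88 W for the drives while active

print(f"5V draw: {total_5v:.1f} A of {rail_5v_limit_a} A   12V draw: {total_12v:.1f} A")
print(f"Drive power (active): {watts:.0f} W")
# Spin-up surge on the 12V line is several times higher per drive, so
# staggered spin-up on the RAID card/expander helps as well.
```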


JustEric78

The 650W PSU is a single rail, so it will make no difference which rail the load is pulling from, which is why I chose it. It can devote all of its wattage to whatever rail requires it; that used to be very expensive, but it looks to be fairly common these days.

 

I also run Serviio on the server, which transcodes to just about any device both on the network and externally, so that was a consideration with the higher-end processor, along with the RDP. I am going to move forward with setting up Hyper-V on my server and installing two VMs so that I can have one dedicated to transcoding/encoding and keep RDP and streaming separate. I am glad I made this posting; as I said before, I cannot believe I did not think of doing that before.


moviefan

I just moved from my old setup, which was using an Adaptec 6805 RAID card with eight 4TB disks, to a Synology RS10613xs. My boss just bought me this as a reward, or I would never have dropped the $10k it cost for the unit and the drives.

 

During my time using the Adaptec hardware RAID I had a few issues which were concerning. I think at this point, outside of a fully enclosed solution like a Synology, I would prefer software RAID like FlexRAID or SnapRAID for my media storage. I think that the modern software RAIDs are actually more flexible and reliable than hardware RAID cards, which mostly rely on a sort of dark magic you just hope works, without a lot of actual detail.

 

I have a lot of videos, just like you, and could not imagine the possibility of having to recover them. Several times with the Adaptec card I had issues creep up and cause data corruption. IME, I just don't think regular hardware RAID controllers do a great job of detecting silent data corruption, and software RAID can do a better job of this.

 

Anyway, good luck on the setup.  Sounds like you have devoted a lot of time for sure.

 

BTW, Joshstix  

I have never used a RAID card that can have an additional disk added to the array and then extend the array to use the additional storage presented by the new disk.

 

 

I have never used a RAID card that didn't support OCE.  This is EXTREMELY common and most hardware RAID cards will support this today.
