dcook - Posted April 12, 2018
My current Emby server uses RAID. I'm looking to build a new server and use DrivePool instead. Most of the motherboards I've been looking at have either 4 or 6 SATA ports. What do you all use with DrivePool to get enough SATA ports? I'm looking to put in 10 x 4TB SATA drives in total. Thanks!
CBers - Posted April 12, 2018
Not DriveBender, but I have 2 RAID cards with 4-to-1 connections, so 2 RAID cards = 8 drives.
Happy2Play - Posted April 12, 2018
Any card that supports JBOD.
KMBanana - Posted April 12, 2018
If you're planning to have any redundancy, I would avoid DrivePool for 10 drives. RAID 5 lets you lose 1 disk by sacrificing the capacity of 1 drive, so the available storage would be 90% of your raw capacity. RAID 6 lets you lose 2 disks at the cost of 2 drives' worth, leaving you with 80% of your capacity available. DrivePool, though, duplicates everything to provide redundancy, so it's a 50% loss: you'd only have the available storage of 5 of your drives.
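For the 10 x 4TB build being discussed, the capacity trade-off works out like this (a quick sketch, assuming full 2x duplication in DrivePool; partial per-folder duplication would land somewhere in between, and the function names here are just for illustration):

```python
# Usable capacity of 10 x 4 TB drives under each redundancy scheme.

def raid5_usable(drives: int, size_tb: float) -> float:
    """RAID 5: one drive's worth of capacity goes to parity."""
    return (drives - 1) * size_tb

def raid6_usable(drives: int, size_tb: float) -> float:
    """RAID 6: two drives' worth of capacity goes to parity."""
    return (drives - 2) * size_tb

def drivepool_2x_usable(drives: int, size_tb: float) -> float:
    """DrivePool with full 2x duplication: every file is stored twice."""
    return drives * size_tb / 2

print(raid5_usable(10, 4))         # 36 TB usable (90% of 40 TB raw)
print(raid6_usable(10, 4))         # 32 TB usable (80%)
print(drivepool_2x_usable(10, 4))  # 20 TB usable (50%)
```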
mediacowboy - Posted April 12, 2018
I'm currently running DriveBender with this card and it's working great.
https://rover.ebay.com/rover/0/0/0?mpre=https%3A%2F%2Fwww.ebay.com%2Fulk%2Fitm%2F152937435505
dcook - Posted April 12, 2018 (author)
I have been using RAID for 15 years and am looking to build something better and more future-proof, so to speak. RAID performance is now substantially less than what I can get using more advanced systems like DrivePool or DriveBender. Also, I want to be able to add disks of various sizes without having to rebuild a RAID array. So basically, what you are all saying is that I can use any SATA RAID card, as long as it has JBOD mode, and that will work fine for DrivePool or DriveBender?
PenkethBoy - Posted April 12, 2018 (edited)
KMBanana wrote: "If you're planning to have any redundancy I would avoid DrivePool for 10 drives. RAID 5 allows you to lose 1 disk by sacrificing the storage capacity of 1 drive so the available storage would be 90% of your capacity. RAID 6 allows you to lose 2 disks at the cost of 2 disks' worth, leaving you with 80% of your capacity available. DrivePool though duplicates everything to give redundancy, so it's a 50% loss. You'd only have the available storage of 5 of your drives."
So 33 drives is a bad idea then - lol. Actually, you have 100% of your drives usable - it just depends what duplication you want, which is very different to most RAID solutions - and your parity can be on other drives if you use, say, unRAID as well as DP. And you can have any mix of drives you want.
PenkethBoy - Posted April 12, 2018
dcook wrote: "My current Emby server uses RAID, I am looking to build a new server and use DrivePool instead. Most of the motherboards I have been looking at have either 4 or 6 SATA ports. What do you all use with DrivePool to get all the SATA ports? I was looking to put in 10 x 4TB SATA drives in total. Thanks!"
I have an LSI 9211 with an HP SAS expander for 24 drives; the rest are on the SATA ports of the motherboard.
KMBanana - Posted April 12, 2018
PenkethBoy wrote: "So 33 drives is a bad idea then - lol. Actually you have 100% of your drives usable - it just depends what duplication you want, which is very different to most RAID solutions - and your parity can be on other drives if you use, say, unRAID as well as DP. And you can have any mix of drives you want."
DrivePool is pretty cool - I use it myself, actually. I back up to another machine, though, and don't really need any redundancy in my pool. If you're replacing a RAID array, though, it's really important to be aware of how differently they handle redundancy. I have heard you can use unRAID/SnapRAID to add traditional RAID-like parity redundancy to a DrivePool, but I haven't experimented with that myself.
dcook - Posted April 12, 2018 (author)
I have a separate backup of all my media, so I'm not worried about redundancy - I'm looking to increase capabilities and performance.
Guest asrequested - Posted April 12, 2018
Lots of discussion here:
https://emby.media/community/index.php?/topic/45382-drive-pooling/page-1#
shaefurr - Posted April 12, 2018
I use FlexRAID. I don't use the snapshot feature anymore, just the storage pooling. My motherboard only has 6 SATA ports (2 of which are now dead), so I have a PCIe SATA card to support all 8 of my drives. Nothing fancy, just a cheap SATA card like this:
https://www.amazon.com/Rosewill-RC-209-EX-32bit-66Mhz-Controller/dp/B00552PLCK
The read/write speeds kind of suck (around 75 MB/s write), but streaming from the drives works fine.
dcook - Posted April 13, 2018 (author)
Can anyone give any real-world performance stats for DrivePool or DriveBender? Read and write speeds? How do they compare to a standard RAID 5 array? Or to Windows Storage Spaces, for that matter? With my RAID 5 NAS right now I am getting write speeds of about 90 MB/s via Gb LAN.
Guest asrequested - Posted April 12, 2018 (edited)
With DrivePool and not using SSDs, I get the full write speed of my drives, around 160 MB/s. You only have a Gb network, so you won't get more than 125 MB/s anyway. But most drive-pooling software should allow you to add SSDs as a 'cache' drive. With those, I get 350-400 MB/s. As for reading, I have 2x duplication enabled, and read striping, so it allows reading from both copies at the same time. I don't have an actual test result, but I'm sure it's far in excess of my needs.
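The 125 MB/s ceiling quoted above comes straight from the link rate: 1 Gb/s is 1000 Mb/s, and there are 8 bits per byte. A quick conversion sketch (this ignores Ethernet/IP/SMB framing overhead, which in practice trims real-world SMB copies to roughly 110-118 MB/s on GbE):

```python
# Theoretical maximum payload rate of a network link, ignoring protocol
# overhead. Real-world SMB on GbE typically tops out around 110-118 MB/s.

def link_max_mb_per_s(gigabits_per_second: float) -> float:
    """Convert a link rate in Gb/s to MB/s (1000 Mb per Gb, 8 bits per byte)."""
    return gigabits_per_second * 1000 / 8

print(link_max_mb_per_s(1))   # 125.0  -> a single GbE link caps at 125 MB/s
print(link_max_mb_per_s(10))  # 1250.0 -> 10GbE raises the ceiling to 1250 MB/s
```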
dcook - Posted April 12, 2018 (author)
Thanks. With the new server I am probably going to get 2 or 4 Gb NICs and set them up in a team to increase throughput.
Quoting asrequested: "With DrivePool and not using SSDs, I get the full write speed of my drives, around 160 MB/s. You only have a Gb network, so you won't get more than 125 MB/s anyway. But most drive-pooling software should allow you to add SSDs as a 'cache' drive. With those, I get 350-400 MB/s."
PenkethBoy - Posted April 12, 2018
Teaming does not work that way. Think of it as more lanes on a motorway rather than a bigger pipe: a client will only get 1Gb no matter how many network ports (motorway lanes) you team together. The advantage of teaming is that you can have multiple clients all getting 1Gb each, but no more. One thing that can help if you are on Windows is SMB Multichannel, which Windows manages between supporting Windows OSes - then your NICs would work the way you want. I have this between my 2012 R2 server and my Win10 PCs - any copy between the two goes down both connections and is recombined at the other end (magic!).
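The motorway analogy can be made concrete with a toy model. LACP-style teaming hashes each flow onto exactly one physical link, so a single client's copy is pinned to one lane; SMB Multichannel opens multiple TCP connections for one session and stripes the transfer across all links. The numbers below are assumptions for illustration (~118 MB/s usable per GbE link after overhead):

```python
# Toy model: why LACP teaming doesn't speed up a single client,
# while SMB Multichannel can.

LINK_MB_S = 118  # assumed realistic per-link GbE throughput after overhead

def lacp_throughput(links: int, clients: int) -> int:
    """LACP pins each client's flow to one link; aggregate grows with
    the number of concurrent clients, up to the number of links."""
    return min(clients, links) * LINK_MB_S

def multichannel_throughput(links: int) -> int:
    """SMB Multichannel stripes one client's transfer across every link."""
    return links * LINK_MB_S

print(lacp_throughput(links=4, clients=1))  # 118 -> one client still sees one lane
print(lacp_throughput(links=4, clients=4))  # 472 -> four clients share the team
print(multichannel_throughput(links=4))     # 472 -> one client uses all four lanes
```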
Guest asrequested - Posted April 12, 2018 (edited)
Can't you bond them, so they behave as one? But obviously the clients would only get 1Gb.
PenkethBoy - Posted April 12, 2018
dcook wrote: "Can anyone give any real-world performance stats for DrivePool or DriveBender? Read and write speeds? How do they compare to a standard RAID 5 array? Or to Windows Storage Spaces, for that matter? With my RAID 5 NAS right now I am getting write speeds of about 90 MB/s via Gb LAN."
With my 853A I get 118 MB/s read and write (large files) when copying from or to my QNAP - this is the max I can get. Internally, running speed tests, my RAID 5 (8 disks) will do ~400 MB/s - mainly as it's across 8 drives - but that's meaningless, as it can't be used outside of the box. Your NAS is the same generation as my 459 Pro II, which has an Atom processor and could just about do 118 MB/s at a push with 4 disks, but the CPU was panting to keep up. When I tried Storage Spaces last year the performance was horrible and I had loads of issues - on 2016 Server I would guess it's better, but it won't be something I would trust my data to for a long while. That's why I went with DP, and the performance is great, especially if you add in SSD caching - then a network copy at 500 MB/s (essentially the max of SATA SSDs) is very possible if you have a 10G network.
PenkethBoy - Posted April 12, 2018 (edited)
asrequested wrote: "Can't you bond them, so they behave as one? But obviously the clients would only get 1Gb."
That's what teaming does - to other PCs on your network my NAS only has one IP address, although it's a team of 4 NICs. Bonding/trunking etc. are other names for the same thing.
[edit] One important thing I left out: your switch has to support teaming as well - without that you are not going to get very far. Normal consumer-grade switches do not have LACP functionality.
Guest asrequested - Posted April 12, 2018
I'm considering selling my spare D-Link 10G switch.
CBers - Posted April 12, 2018
Under Windows 10, and I assume Windows Server 2016, teaming is automatic.
dcook - Posted April 13, 2018 (author)
I was considering a 10Gb network, but none of my other devices (laptops, Roku, etc.) support 10Gb, so I don't think there is any point.
Guest asrequested - Posted April 13, 2018
dcook wrote: "I was considering a 10Gb network, but none of my other devices (laptops, Roku, etc.) support 10Gb, so I don't think there is any point."
I like having 10G so I can't saturate it if I'm moving data around while streaming.
SHSPVR - Posted April 14, 2018
PenkethBoy wrote: "I have an LSI 9211 with an HP SAS expander for 24 drives; the rest are on the SATA ports of the motherboard."
I assume you're referring to the HP SAS expander like in this link? Does it support JBOD mode, and what's the max disk size? How come it has 8 ports instead of 6?
PenkethBoy - Posted April 15, 2018
@SHSPVR yes, that's the one. It has 8 ports because otherwise you can't link your existing SAS card to it: 2 ports connect to the two ports on your existing card, and the other 6 serve 24 drives via breakout cables. The 9th is for external connection - but it assumes a tape drive, and I've not tried to use it. As for JBOD, that relies on your SAS card, not the expander - it's a very dumb card that "just" passes the disks through to the SAS controller. It works with 6TB disks no problem; I've not tried bigger, but I doubt it's limited.