
File Server Upgrade


Swynol

After seeing everyone's shiny kit, I thought I would treat my server to an upgrade.

 

At the moment my specs are:

 

Random DIY PC case

Asrock 1155 mobo

Intel i3 3420 

16GB DDR3 RAM

SuperTrak 8650 RAID card

a few sound cards for Zone control

 

The server runs 24/7 and hosts a list of apps: Emby, iTunes server, NZBGet, Sonarr, HTPC Manager, UniFi Controller and a few others.

 

My eventual aim is a more powerful CPU, but for the moment I will only be upgrading a few parts. So far I have bought/found the following kit:

 

Logic Case 4320S (similar to the Norco 4220S)

IBM M1015 RAID card (LSI 9201-8i), flashed to IT mode

Broadcom BCM5719 quad 1Gb network card x2

QLogic QLE2560 8Gb Fibre Channel card x2

 

Here are the pics

 

The Logic Case: it can hold a mix of 20x SAS or SATA drives and has a 6Gbps backplane. It comes with a 3x 120mm fan wall and 2x 80mm exhaust fans, and fits a standard-size ATX mobo and PSU. It also holds a few internal 2.5" hard drives and a slim DVD drive.

 

 

[image: IMG_6094.jpg]

 

 

 

[image: IMG_6096.jpg]

 

 

I managed to pick up an M1015 RAID card from eBay, which runs the LSI 9201-8i firmware and can be flashed to IT mode.

 

[image: sl1600.jpg]
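For anyone else doing this card, the usual IT-mode crossflash (run from a DOS/UEFI boot stick) looks roughly like the below - the firmware filenames and SAS address are placeholders, so follow a current guide for the exact files:

```shell
# Typical M1015 -> LSI IT-mode crossflash sketch (run from a DOS boot disk).
# Filenames and the SAS address below are placeholders - check a current guide.
megarec -writesbr 0 sbrempty.bin   # blank the IBM SBR so the card reads as plain LSI
megarec -cleanflash 0              # wipe the existing RAID firmware
# ... reboot here before flashing ...
sas2flsh -o -f 2118it.bin -b mptsas2.rom   # flash IT-mode firmware (+ optional boot ROM)
sas2flsh -o -sasadd 500605bXXXXXXXXX       # restore the SAS address from the card's sticker
```

If you don't need to boot from the card you can skip the `-b mptsas2.rom` part, which also makes POST a bit quicker.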

 

I also picked up some SFF-8087 to SFF-8087 cables, as previously I was using a forward breakout cable. I still need to purchase a SATA-host to SFF-8087 reverse breakout cable so that I can run one bank of drives from four SATA ports on the mobo.

 

Next up are the Broadcom 4x 1Gb NICs. I managed to find two of these cards for free. No idea if they are any good or even working, but they might be an upgrade over the single 1Gb link I currently have, as long as I can team them. Along with the Broadcom NICs I also found 2x 8Gb fibre cards. No idea if these work either; my switch only has an SFP port and not SFP+, so I don't think I will run these cards, but they might be good for a future upgrade if I can get a new switch.

 

[image: IMG_6092.jpg]

 

So that's where I am currently. I also have plans to build a small 10U rack which will hold my server, plus have room for either my CCTV PC or maybe a rack-mount UPS to replace my desktop-type UPS.


PenkethBoy

Nice :)

 

Couple of things to note :)

 

Those 8Gb cards are Fibre Channel cards - and I am 99% sure they will not work with a normal switch - I think you need a Fibre Channel switch. They will work connecting two machines directly though - say your main PC to the server etc.

 

With the quad cards - if you run Windows then 8-way multipath would be a thing to see working :) - no need to team them unless you want the redundancy etc. To see the full benefit of multipath will require a multi-threaded CPU at either end, ideally an i7.

 

Lots of new toys to play with :)


mgworek

Did you replace the fans on that case yet? I can't say for sure, but I replaced the fan wall on my Norco case (I believe it had 4 fans that were loud as hell), so the new one had only 3 spots for fans, but I made sure to get good quiet fans that people recommended.

 

I have actually started to go in reverse. As soon as 10TB drives come down in price I am ditching my Norco case and going smaller. Fewer drives and water cooling.


Swynol


Thanks @PenkethBoy. They are Fibre Channel cards; does that mean they wouldn't work with an SFP+ switch? I wasn't planning on using them anyway, but it's good to know that point-to-point will work with the cards.

 

The machine is currently running Win10, so I will check out multipath. The server is currently only running an i3; would it be better running them teamed so that I get 4x 1Gb throughput to my switch that lots of devices can use? I have a few i7 and i5 machines in the house, but they only have single NICs, so I'm not sure they will see any difference.


Swynol


Not yet. The case turned up this morning and I haven't had a chance yet; will see what they are like first. I have some Corsair SP120s that I'm not using, so I might swap out the 120mm fan wall fans for those. I used to just have a small case with only a few drives, but as my files grow I need more space, and I have just been butchering my case to fit more in. I like the idea of having multiple drives so that if a drive does die I don't lose a lot of data and it is cheaply replaced. I just found 4 brand new 1TB drives that I may add into my drive pool for now, or keep as backup drives.


Guest asrequested

SFP+ are copper, not fiber. I replaced my fans with Noctua fans - a huge difference. I highly recommend doing that. But it looks like fun times ahead. Very nice!


colejack


I'm sure you are planning for it when you go water cooling, but make sure you still have some kind of airflow over your drives.

 

 

 


SFP+ can be copper in the form of DAC cables, but most setups probably use transceivers and fiber for longer distances.


Guest asrequested


So there are dual connections? My SFP+ cables have transceivers with copper terminals.


colejack

 

You can use either copper DACs or transceivers and fiber with SFP+. DACs are used for short distances (like within a rack) and fiber is for longer distances (like between racks).

 

DACs have the cable and connectors all together, while transceivers and fiber are separate pieces. There are also different types of transceivers and fiber (single-mode, multi-mode).


colejack

Ah! OK, so I have DAC cables with copper SFP+ plugs. So in the SFP+ socket, there are fiber ports? When I get my new switch, I'm gonna thoroughly poke around

 

No, transceivers connect to the connector and convert the signal into light for the fiber. DACs are just copper the whole way.

 

This is a fiber transceiver - see the two ports? One for each half of the fiber pair. This would plug into the SFP+ socket on your switch or NIC, and then you would plug fiber into the transceiver.

[image: SFPG1320C.Main.jpg]

 

Here is the fiber end

[image: 372.jpg]


Guest asrequested

Ah! Right, thanks. That's why it wasn't adding up in my head, lol. So SFP+ isn't fiber unless you use transceivers.


Swynol

I have a few of those fibre transceivers. We call them GBICs; we use them in the SFP ports. I tried one in my switch, but because it's only SFP and not SFP+ it was still only 1Gb.

 

 



PenkethBoy


Yes, you need a Fibre Channel switch - these are used with SANs in enterprise environments and are not NICs.

 

Yes, you can get NICs with fibre transceivers, but they are different to Fibre Channel - I am 99% sure you can't mix them. If you could, there would be no need for Fibre Channel switches.

 

For multipath you need to do nothing more than have Win8+ as the OS on the server and your client PCs - i.e. a quad-port NIC in the server and a PC would give you 4-way MP - IF the CPU at each end is man enough.

 

I have a dual 10Gb NIC in my server and my main PC - i5 and i7 respectively - and MP works well: you see both NICs being used to transfer data rather than the normal single connection.

 

Teaming does not give you 4Gb bandwidth - any single connection will be limited to 1Gb - but it does allow multiple client connections to each run at a possible 1Gb if you use LACP, which requires a layer 2/3 switch. There are IIRC 6 types of teaming which do different things with multiple NICs - some require the switch to support them, LACP being one.

 

To get MP to work each PC needs more than one NIC; other than that it generally just works with good CPUs. With teaming, the key is for the server to have multiple NICs, and you only see a benefit if more than one PC is sending/receiving data at the same time - and only if the server can supply/receive data that fast, which then comes down to the disks :)
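To make the teaming point concrete, here's a toy model (purely illustrative, not any specific switch's hash) of why LACP doesn't speed up a single transfer: the switch hashes each flow onto one physical link, so one client maxes out at a single link's speed while several clients can fill several links:

```python
# Toy model of LACP-style link aggregation: each flow is hashed to ONE
# physical link, so per-flow throughput never exceeds a single link's speed.

LINK_SPEED_GBPS = 1.0   # four 1Gb ports in the team
NUM_LINKS = 4

def link_for_flow(src: str, dst: str) -> int:
    """Pick a link by hashing the flow's endpoints (a stand-in for the
    switch's real hash, which may also use MACs/IPs/ports)."""
    return hash((src, dst)) % NUM_LINKS

def aggregate_throughput(flows) -> float:
    """Total throughput when each link is capped at LINK_SPEED_GBPS and
    flows sharing a link split it between them."""
    per_link = {}
    for flow in flows:
        per_link.setdefault(link_for_flow(*flow), []).append(flow)
    # every active link contributes at most one link's worth of bandwidth
    return sum(LINK_SPEED_GBPS for _ in per_link)

single = aggregate_throughput([("client-a", "server")])
many = aggregate_throughput([(f"client-{i}", "server") for i in range(20)])
print(single)  # one flow -> one link -> 1.0, no matter the team size
print(many)    # many flows can spread across links, up to 4.0 total
```

This is also why multipath (which opens multiple connections per transfer) can beat teaming for a single fast client, as described above.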


Swynol

OK, good to know. Will probably give multipath a go; the idea behind it sounds good and at least this won't be a bottleneck in my setup. Out of interest, if my server had multipath set up with 4 NICs and all my other devices are set up with only 1 NIC, will it have the same effect as teaming? i.e. if 4 devices connect to my server, will they each get around a 1Gb connection, obviously depending on the CPU at the server end?

 

I have an LACP-capable switch which is currently full, so I couldn't utilise the 4x NIC connection anyway. Although when I found the NICs and the Fibre Channel cards there were also 2x Cisco 3560G 48-port PoE switches, which I may try to get working.


PenkethBoy

MP requires both ends to support it - and I am sure multiple NICs as well, as with a single NIC it can't be multipath :P

 

MP is not teaming, so don't assume it will work that way - and there is no setup to do.

 

Oh, and where can we find these free bits of kit you are getting your hands on? :P


Swynol


 

Cheers. Well, I've thrown everything into the case and booted it up; all is looking good. At the same time I swapped my drives from the SuperTrak 8650 to the M1015, which I flashed to IT mode, and thankfully StableBit found all the drives and my pool is all good.

 

Currently only running 10 drives: 1x SSD on SATA to the mobo, 1x 3TB drive on SATA to the mobo, and 8x 3TB from the M1015 controller. I am currently trying to find another 2x SFF-8087 cables so that I can use the SuperTrak card for another 8 drives, which will future-proof it somewhat, and finally a 4x SATA to SFF-8087 reverse breakout cable for the last 4 drive bays.
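As a sanity check on that cabling plan, the port budget for the 20 bays works out exactly (a trivial tally, with the counts taken from the plan above - each SFF-8087 connector carries 4 drive lanes):

```python
# Tally of host ports vs drive bays for the 20-bay case plan.
# Each SFF-8087 connector carries 4 lanes, i.e. 4 drives off the backplane.

LANES_PER_SFF8087 = 4

sources = {
    "M1015 (2x SFF-8087)": 2 * LANES_PER_SFF8087,          # 8 drives, in use now
    "SuperTrak 8650 (2x SFF-8087)": 2 * LANES_PER_SFF8087,  # 8 more, cables on order
    "mobo SATA via reverse breakout": 4,                    # last bank of 4 bays
}

total = sum(sources.values())
print(total)  # 20 - matches the 20 hot-swap bays
```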

 

The case - very impressed with it. The fan wall is awesome with 3x 120mm hot-swap fans, which makes it easy to change them and take one out if I need better access to the backplane. The fans are fairly noisy but they do pump out a lot of air, so I may leave them unless the 80mm Noctua fans pump out more air - can anyone confirm? These are the current fans; I can't find any details on them.

 

A few issues: my mobo only has 2 PCIe x16 slots and ideally I need 3 or 4 - 2x for the controllers, 1x for the 4-port NIC, and 1 spare. So at the moment it's running on the onboard NIC until I upgrade the mobo and CPU at a later date.

 

As to the free bits of kit: there was an accident at a local data storage site where a burst water pipe ran through a light fitting, down the power cables and into the server PSUs. Most of the kit was under 12 months old and all of it was replaced with new, however a lot of the kit that came out was fine. Some PSUs were blown, and a motherboard or two, but I managed to salvage a few drives, Xeon CPUs, some Dell RAID cards, NICs, fibre cards and much more. There is also a rack-mount UPS which is working; it just needs new batteries.


Swynol

Yeah, I have the 120mm fan wall; my rear fans are 80mm. I ran the server earlier using the Seagate tools to test the disks and they were hitting 40C idle and a little warmer when working hard. In my previous case they were maxing out at 36C, so now I am a little worried about trying to keep the temps down.

 

I can't really add any more cooling into my cupboard, so I might look at improving the airflow through the case somehow. Unfortunately I can't find the specs of the standard fans, so I'm wondering if the SP120s I have will be any better. Or possibly leave the top of the case off but make some type of shroud from the backplane to the fan wall so that the 120mm fans will still be sucking air through the drives.


Guest asrequested

Your case is almost the same as mine. The drives will run warmer, but SMART reports they are within safe parameters, and no errors are reported. The ambient temperature of my room is pretty cozy, too. The fans that came with it run at 5000rpm and are loud. The Noctua fans I put in only run at 2200rpm. The 120s I'm going to put in almost double the airflow, so I'll see what difference it makes.


Happy2Play

My 4TB drives run between 38-40C idle, and my 2TB drives 30-33C idle, in a NORCO RPC-4224. I'm only using the stock 4x 80mm fan wall, with the rear 2x 80mm fans disconnected. I haven't taken the time to replace the fan wall with my 3x 120mm replacement kit. Just make sure the vents are open on each drive tray.


Guest asrequested

That's almost exactly the same as mine. It's interesting to see that your temps with the stock fans are around the same as mine, with fans running at half the speed.


Happy2Play

I can't seem to find the exact specs, but there are two different speed fans in this case; only the rear 2x 80mm fans are screamers (at least in my case).


Guest asrequested

I've got a 4216 with a 4x 80mm fan wall, like yours. When I replaced them, I held one in my hand and plugged it in - it was pretty noisy. I'm pretty sure they're between 4000 and 5000rpm. The 120s I'm gonna use are 1200rpm with double the airflow.
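The "120s at 1200rpm with double the airflow" claim is plausible if you apply the fan affinity laws: free-air flow scales roughly with rpm x diameter cubed, so a big slow fan can nearly match a small screamer. A rough comparison using the rpm figures from this thread (this ignores static pressure, which matters a lot when pulling air through a wall of drives):

```python
# Rough free-air flow comparison using the fan affinity relation Q ~ rpm * d^3.
# Static pressure is ignored, so treat this as a ballpark only.

def relative_flow(rpm: float, diameter_mm: float) -> float:
    """Relative volumetric flow in arbitrary units (Q proportional to rpm * d^3)."""
    d = diameter_mm / 1000.0
    return rpm * d ** 3

stock_80 = relative_flow(5000, 80)    # loud stock 80mm fan
quiet_120 = relative_flow(1200, 120)  # slow 120mm replacement

print(round(quiet_120 / stock_80, 2))  # ~0.8: close, despite a quarter of the rpm
```

Static pressure is also why fans like the SP120 mentioned earlier can pull more air through the drive wall than a higher-CFM airflow-optimised fan.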


Swynol

So after some light work I saw my 3TB Seagate SAS drives hit 48C. Not happy with that, so I swapped one of the 120mm fan wall fans for my SP120. I noticed the SP120 pulling more air than the standard fans, so I will probably start by swapping those out. Noise doesn't bother me; I would rather get the HDD temps down to under 40C, preferably to around what they were before.

