Storage server upgrade


sooty234

sooty234
1 hour ago, MRobi said:

I'm curious, why seal off the side vents?

To pull enough air through the drives, you have to create significant negative air pressure (suction). If the side vents are left open, air will be pulled through them instead of through the drives; airflow always takes the path of least resistance. That's also why I added weather stripping to the sides of the fan wall. I considered sealing some of the fan wall cutouts as well, but since the 80mm fans are also pulling air, that helps counteract backflow through the cutouts.


2 minutes ago, sooty234 said:

To pull enough air through the drives you have to create significant negative air pressure (suction). [...]

That does make sense in theory, although I have to wonder why they put those vents there in the first place, since even the factory fans need to pull air over the drives to cool them. Possibly because the dual-CPU boards in these cases use passive heatsinks, and the side vents ensure the fans move enough air over those?

I didn't cover mine and things are OK, but now I'm wondering if it could be better...


sooty234

I'm not using passively cooled Xeons; that's why the chassis is designed like that and why they give you an air shroud. I'm just using a standard ATX board with an i7, which has its own active cooling.

I'll update the post to clarify that.


  • 3 weeks later...
rbjtech

The key issue with cooling high-density drives with very little air gap around them is that you need very high positive or negative pressure in a sealed area. The other aspect you may not have realised is that you removed 38mm-deep, high-current fans (~1.5A) and replaced them with 25mm-deep, low-current 'PC' fans (0.6-0.8A). These simply cannot produce the pressure you need, and to be honest I'm surprised you are getting adequate cooling.

I run 80mm x 38mm fans in my case (individual HDD enclosures with said fans) driven by a high-current PWM fan controller. The noise is pretty low, as they only run at ~20% to keep the drives at 30-40C.

You can test the airflow pretty easily by holding a 'streamer' in front of the exhaust vent; it's also a great visual indicator that the fan on that enclosure is working. ;)

Final note: beware the PWM connectors on server fans. A lot are NOT standard 'PC' pinouts, even though they look like they are.

 


  • 1 month later...
CharleyVarrick

My twin servers are regular, decent mid towers, but with the 3x 5.25" bays replaced by a 4x 3.5" caddy.

6x internal SATA + 4x on a SATA expansion card = 10 SATA ports each (I also have 2x 2.5" mounts, plus the motherboard's M.2 slot available).

I can't physically add more 3.5" drives, so my main expansion option is replacing smaller-capacity drives with higher-capacity ones.

Over the last 20 years, I've upgraded to 500GB, then 1TB, 2TB, 6, 8, and in the last few years 10 and 12TB.

I have a box full of smaller drives (6TB and less) that were used in the past but still pass all tests.

I have looked a bit at server rack cases like the Norco RPC-4224, and they're so expensive I always end up cheaping out and postponing.

Being somewhat handy at woodworking, I could easily build my own custom 16- or 24-bay (or more) drive enclosure for "free".

But that would only be half the solution; I have no idea what to look for to connect those 24+ SATA drives to.

Any keyword suggestions?


CharleyVarrick
On 1/10/2021 at 8:28 AM, MRobi said:

I'm curious, why seal off the side vents?

I have a desktop with 10x 3.5" drives.

StableBit Scanner reports an average running temp of 37C with the side panels on, but 49C without them.

Normally you'd think the more open the better, but forced air pushed through the drives cools better than open-case airflow.

Not sure if that's what you meant.
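If you want to spot-check drive temperatures outside of StableBit Scanner (say, on a Linux box with smartmontools installed), SMART attribute 194 carries the value tools like Scanner report. A minimal Python sketch that pulls it out of `smartctl -A` output; the sample text below is illustrative, not from a real drive:

```python
import re

# A trimmed, illustrative sample of `smartctl -A /dev/sdX` output
# (the values here are made up for the example).
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   116   105   000    Old_age   Always       -       37
"""

def drive_temp_celsius(smart_output):
    """Return the raw value of attribute 194 (Temperature_Celsius), or None."""
    for line in smart_output.splitlines():
        if re.match(r"\s*194\s+Temperature_Celsius\s+", line):
            return int(line.split()[-1])
    return None

print(drive_temp_celsius(SAMPLE))  # -> 37
```

Some drives pack min/max values into the raw field as well, so treat this as a starting point rather than a universal parser.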


rbjtech

With high-density drives (i.e. small gaps between the caddies/drives), air pressure is key to forcing the air through. The fan's CFM rating is not important: a high-CFM fan may move lots of air in free flow but perform poorly in a restricted enclosure.

The drives/fans HAVE to be sealed on the intake side (if pushing) OR the exhaust side (if pulling); otherwise you would not get any pressure build-up, and the air would simply flow 'around' the drives, taking the easiest path.


CharleyVarrick
On 3/14/2021 at 4:50 AM, CharleyVarrick said:

My twin servers are regular decent mid towers, but with 3x 5.25" bay replaced by 4x 3.5" caddy. [...]

But that would only be half the solution, I have no idea what to look for to connect those 24+ sata drives to.

Any keyword suggestions ?

bump

Reminder: I'm not asking for specific brand names or models here, just general info on what to look/search for to put inside da box.


rbjtech

With that many drives, you need to look at LSI SAS controllers, likely multiple 12-port or 16-port cards. Assuming these are SATA drives, you can buy SAS breakout cables that fan out to SATA, or ideally get a SAS backplane for each enclosure. Second-hand controllers pulled from server hardware are your best bet, but it will take a bit of research to make sure what you are buying will work with your brand/capacity of disks; some are vendor-locked.

If you have lots of PCIe slots, you could instead go for multiple 6-port SATA3 cards, but this is not ideal, and your SATA data and power cabling will become a mess really quickly.
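As a quick sanity check on how many cards that takes, the sizing is just a ceiling division (the port counts below are examples, not recommendations):

```python
import math

def hbas_needed(drive_count, ports_per_hba):
    """How many controllers it takes to cover a given drive count."""
    return math.ceil(drive_count / ports_per_hba)

# 24 SATA drives on 16-port cards vs 8-port cards:
print(hbas_needed(24, 16))  # -> 2
print(hbas_needed(24, 8))   # -> 3
```

A typical 8-port SAS card exposes two SFF-8087 connectors, each of which breaks out to four SATA drives via a forward-breakout cable, which is where those port counts come from.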

 


MRobi
On 14/03/2021 at 07:02, CharleyVarrick said:

Stablebit scanner reports on avg running temp 37c with side panels on, but 49c without side panels.

On 16/03/2021 at 11:41, rbjtech said:

The drives/fans HAVE to be sealed on the input side (if pushing) OR exhaust side (if pulling) for this reason or you would not get any pressure build up and the air will simply flow 'around' the drives, taking the easiest path.

So if this is the case, why did Supermicro design these cases with vents on the side? It's not like the case was modified to put the row of drives in the front, so you'd think that if sealing it were better for cooling, they'd have sealed it from the factory?

 


rbjtech
8 hours ago, MRobi said:

So if this is the case, why did Supermicro design these cases with the vents on the side? It's not like the case was modified to put the row of drives in the front, so you'd think if it was better for cooling they'd have sealed it from the factory?

 

It doesn't matter where the vents are, as long as the enclosure can maintain positive OR negative pressure. The easiest/most efficient way to do this is to have multiple smaller enclosures, as that lets you maintain different pressures in different parts of the case: high for the HDDs, but medium/low for the CPU/memory etc.

In a 19" rack, it's pretty standard to have either front-to-back or back-to-front cooling. This is due to having hot and cold aisles of racks, with all hardware drawing in cold air on one side and exhausting on the other; mixing the two leads to air recycling. On most decent servers you can reverse the fan modules, so you can choose the airflow direction.

Because 19" racks have a small gap between them, side entry is also an acceptable way to draw in air for low-powered devices such as switches (the front/rear is high density and has no room for a vent), which then exhaust through the front/rear. But it is not ideal, as the side air is stale and is not cycled the way the front/rear air is.

Modern data centre design takes this to the extreme: each 19" rack becomes a sealed 'enclosure' with its own temperature control, although usually this is done in 'pods' of 4, 8 or 16 19" racks, where the pod becomes one big sealed enclosure with its own door.

It's all about moving the minimum amount of air needed to maintain a 'normal' temperature for the equipment without hot spots.

 

 

   

 


  • 1 month later...
sooty234
On 1/28/2021 at 9:54 AM, rbjtech said:

The key issue with cooling high density drives with very little air gap around them is you need very high positive or negative pressure in a sealed area to do this. [...]

Final note - beware the PWM connectors on server fans - a lot are NOT standard 'PC' pinouts - even though they look like they are ..

 

OK, so let's clarify.

The fans I used are Noctua NF-F12 industrialPPC-3000 PWM; they are not standard PC fans. The fans that came with the Supermicro chassis are Nidec V80E12BHA5-57.

The airflow of the Noctua fans is 186.7 m³/h; the Nidecs are rated at 124.02 m³/h (73 CFM). I am using 3 fans, and the chassis came with 7.

Noctua: 3 x 186.7 = 560.1

Nidec:   7 x 124.02 = 868.14

Of course we expect more airflow when using more than twice the number of fans. But the original chassis design needs more airflow and static pressure because the fans not only have to pull air through the drives, they also have to push air through the air shroud and over the passively cooled Xeons in a 2U space (it's a 4U chassis split top and bottom, with the CPUs on top). And with the side vents open, the fans have to work much harder to maintain enough static pressure to achieve the necessary airflow through the front drive bays.

With what I have done (no passively cooled CPUs), I only need to maintain enough static pressure and airflow for the front drives. My CPU has its own fan; I'm not using a server motherboard or CPU.

(560.1 / 868.14) x 100 = 64.51%

So, using the 3 Noctua fans, I only lose 35.49% of the airflow.

I sealed the side vents to increase the negative-pressure potential. This means the higher static-pressure capability of the Nidec fans is no longer required. A standard 120mm PC fan would not provide as much static pressure as the Noctua fans I chose.

I also wanted to make sure the back drive bays get decent airflow. So I added two 80mm fans (these always run at 100%) behind the 120mm fans, to direct more air toward the back bays. These two fans also help the 120mm fans create a little more negative pressure for the front drives.

The 80mm fan's air flow is 55.5 m³/h 

(2 x 55.5) + (3 x 186.7) = 671.1

(671.1 / 868.14) x 100 = 77.3%

So I'm actually only losing 22.7% of the airflow, with all fans running at 100%.
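The comparison above can be collected into a few lines of arithmetic (these are the rated free-air figures from the spec sheets; actual flow through the restriction of the drive bays will be lower for every fan):

```python
# Rated free-air flow in m³/h, as quoted in the post above
NOCTUA_NF_F12_3000 = 186.7    # 120mm replacement fans (x3)
NIDEC_V80E12BHA5 = 124.02     # stock Supermicro 80mm fans (x7)
NOCTUA_80MM = 55.5            # added 80mm fans (x2)

stock = 7 * NIDEC_V80E12BHA5               # 868.14 m³/h
replacement = 3 * NOCTUA_NF_F12_3000       # 560.1 m³/h
with_80mm = replacement + 2 * NOCTUA_80MM  # 671.1 m³/h

print(round(replacement / stock * 100, 1))  # -> 64.5 (% of stock flow)
print(round(with_80mm / stock * 100, 1))    # -> 77.3 (% of stock flow)
```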

The final solution works much better. Most of the time, the 120mm fans run at 62-68%, which keeps the drives between 35 and 41C.

We've been having 90F (32C) weather, and I don't have air conditioning. To my delight, the fans didn't go above 80% and the drive temps stayed in the mid-40s. This is down to the fan curve I made; I could make it more aggressive, but that doesn't appear to be necessary at this point. The drives' S.M.A.R.T. data hasn't reported any warnings.

It's been running for a long time, and is stable and quiet.
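For reference, a fan curve like the one described (low duty most of the time, ramping with drive temperature) is just a piecewise-linear map from temperature to PWM duty. A sketch with made-up curve points, not the actual curve from this build:

```python
def fan_duty(temp_c, curve=((30, 40), (40, 68), (50, 100))):
    """Interpolate PWM duty (%) from drive temperature (C).
    `curve` is a sequence of (temperature, duty) points; the
    values here are illustrative, not from the build above."""
    pts = sorted(curve)
    if temp_c <= pts[0][0]:
        return pts[0][1]
    if temp_c >= pts[-1][0]:
        return pts[-1][1]
    for (t0, d0), (t1, d1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)

print(fan_duty(35))  # -> 54.0, midway between the 30C and 40C points
```

Clamping at both ends keeps the fans from stopping entirely when idle and pins them at 100% past the top point.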


rbjtech

All sounds good. At the end of the day, the drive temps tell you whether you have enough cooling, and by the looks of it, you do.

35 to 41C is perfect. I personally warn/alert at 45C and shut the server down if any drive reaches 55C, but they can probably take 60C without damage, so I'm being overcautious.
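Those thresholds translate directly into monitoring logic. A minimal sketch; the drive names and the mechanism that actually polls the temperatures are assumed, not shown:

```python
WARN_C = 45      # warn/alert threshold from the post above
SHUTDOWN_C = 55  # shut the server down at this point

def check_drives(temps):
    """Return the action implied by the hottest drive.
    `temps` maps a drive name to its current temperature in C."""
    hottest = max(temps.values())
    if hottest >= SHUTDOWN_C:
        return "shutdown"
    if hottest >= WARN_C:
        return "warn"
    return "ok"

print(check_drives({"sda": 38, "sdb": 41}))  # -> ok
print(check_drives({"sda": 48, "sdb": 41}))  # -> warn
```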

 


  • 3 months later...
Dizzy49
On 3/20/2021 at 9:43 AM, CharleyVarrick said:

bump

Reminder I am not asking specifics brand name or model here, just general info on what to look/search for to put inside da box.  

Check out the Backblaze Storage Pods. They have very detailed build instructions AND a parts list, so you can get an idea of the generic parts you need from there (HBA, backplane, etc.). They include brands/models as well.

