
Migrating to new case/components



mastrmind11

I've been considering migrating the guts of my NAS to a new case w/ more HDD capacity, but I'm finding that none of the modern cases (that aren't ugly towers) support full ATX boards anymore. Not a big deal, I'll just get some new guts; the NAS components are 10 years old at this point anyway. I'll obviously be keeping the existing HDDs, and I think I know the answer to these questions already, but wanted to get a few opinions from others.

 

My current HBA is the standard IBM LSI chipset flashed to IT mode for use w/ my ZFS pools. I assume I can safely just move the controller and attached drives over to a new mobo without hosing anything up? I also assume this won't affect the integrity of the ZFS pools?

 

I'm looking to move to a 10-bay case. I currently have 6 drives and will eventually have 10. This will be headless, and since it's a NAS, I'll be getting the cheapest/most efficient CPU I can find, so the power draw will be negligible. Any guidance on PSU size considering those factors?

 

I know a few of you run ZFS.  Back when I built the original NAS, ECC RAM was highly encouraged, so that's what I have, 4GB of it.  Is it really required?  Any experience w/ the standard stuff?

 

Appreciate the insight.


dcrdev

Moving the ZFS pool to a new system won't cause any issues, though you'll probably have to force-import the pool, since ZFS pools are tied to the hostid of the machine that last imported them; that's as simple as passing "-f" to the import command. Once that first import is done, it'll be tied to the hostid of your new machine.
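Roughly, the sequence looks like this from the command line ("tank" is just a placeholder pool name, substitute your own):

# on the old box, cleanly export the pool before pulling the drives
zpool export tank

# on the new box, scan for pools available to import
zpool import

# import it, with -f to force past the old hostid if needed
zpool import -f tank

# confirm all the vdevs came over healthy
zpool status tank

If you export cleanly first, you may not even need the -f.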

 

Although I use ECC memory, a lot of the fuss around it and ZFS is completely inaccurate and perpetuated by the know-it-all types on the FreeNAS forum. The idea that ZFS is more susceptible to data corruption without ECC than any other file system is a big misconception. Realistically, the only way you're going to end up hosing your data as a result of not using ECC memory is if a flipped bit causes a hash collision during a scrub, and given that OpenZFS can use SHA-256 for hashing, you can imagine how astronomically unlikely that is.
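If you're curious what your own pool actually uses (fletcher4 is the OpenZFS default; sha256 is available as an option), you can check it and run a scrub to verify the data, again with "tank" as a placeholder:

# show the checksum algorithm in use
zfs get checksum tank

# start a scrub and watch for checksum errors
zpool scrub tank
zpool status tank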

 

I'd highly recommend watching this:

 

But again, ECC memory is good to have in general for a server use case; if you have the budget for it, I'd recommend it.

 

In regards to the PSU, I'd say always go for the most efficient one you can afford, with good caps, and always over-provision; at the end of the day, a beefier power supply isn't going to consume any more power than a smaller one at the same load.
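As a rough back-of-the-envelope for a 10-drive build (typical figures, not measured): a 3.5" drive pulls maybe 5-8W at idle but 25-30W during spin-up, and the worst case is every drive spinning up at once.

10 drives x ~30W spin-up     ~300W
CPU (modest TDP)             ~65W
Mobo, RAM, HBA, fans         ~30-50W
-----------------------------------
worst-case transient         ~400-450W

So a quality 450-550W unit leaves comfortable headroom, and staggered spin-up, if your HBA and drives support it, cuts that peak down considerably.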

 

I've got 8x WD Reds, 1x 2.5" SSD, 1x NVMe drive, and an 85W Kaby Lake Xeon in my system, and I use a 500W FSP 80+ Platinum PSU.


mastrmind11

Perfect.  Great info.  Thanks!


mastrmind11

Update. Found a case that supports a full-size ATX mobo and isn't completely ugly, so I ordered it. I honestly couldn't justify dropping $500+ on new guts when the existing components serve my media just fine. My main concern with the existing case, which I noticed while I was in there replacing some fans, is that the drives run hotter than I'd like. The last time I checked this was years ago, pre-kids, when there wasn't much activity on the box. Anyway, picked this up as a stopgap until I finally bite the bullet and move everything over to a rack. I'd like to move this stuff into a closet anyway, so the form factor won't be an issue in another couple of weekends.


dbailey75

I was just looking at this case today. I have a large ATX case with 12 external bays that I populated with two 4 x 3.5" backplanes, so having the backplane already in the case at this price is a great deal.


mastrmind11

Agreed. It comes today so I'll report back on my initial impressions.

 



mastrmind11

Got it a couple hours ago. Switched guts in about an hour. Everything is up and running. First impressions: it's a decent case, but you get what you pay for; there's a fair bit of plastic. Also, when migrating, make sure you connect the drives to the right ports on the backplane. I used the SAS ports and couldn't figure out why my RAID didn't spin up... after shining a light into the case, I switched to the SATA ports and it literally booted right up. It's a smaller form factor than I expected, but it definitely fits the drives. FYI, the drive cages are 100% plastic, but whatever, it's better than buried drives, and there are plenty of supplied screws in case you're short. Overall, I recommend it.
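For anyone who hits the same thing, a quick sanity check before assuming the pool is hosed (commands are generic, nothing specific to this case):

# confirm the OS actually sees all the disks
lsblk -o NAME,SIZE,MODEL

# scan for importable pools without touching anything
zpool import

If the disks don't show up in lsblk, it's the cabling or backplane port, not ZFS.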

 

PS: I'm running 6x 3TB drives on a single Molex to the backplane and everything is fine. I don't pound the drives, so I don't anticipate needing more juice, but I'm throwing it out there in case anyone reads this. I'll probably get a splitter in the coming weeks just cuz.
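Rough numbers on why that works, and why a splitter is still worthwhile (typical figures, not measured for these exact drives): a 3.5" drive idles around 0.5A on the 12V rail but can pull ~2A at spin-up.

6 drives x ~0.5A idle on 12V      ~3A  (~36W)
6 drives x ~2A spin-up on 12V    ~12A  (~144W)

A single 4-pin Molex contact is commonly rated around 10-11A, so idle is trivial but simultaneous spin-up is right at the edge; splitting the load across two connectors gives some margin.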


dbailey75

Still a good deal even with the plastic trays. I'm running this case with 2 of these backplanes, which also have plastic trays; I paid close to $200 if I recall. Again, good deal on your case.

