Guest asrequested Posted December 26, 2016 Check out Supermicro for rackmount cases. I had a Norco way back in the day and found the Supermicros to be much higher quality, and you can find 'em all over eBay for great prices. And they're true "enterprise grade": built like tanks and built to last, redundant power supplies, etc. I'll take a look. I need to find one that looks good. My system is on display, so it needs to fit in well with the other components.
Guest asrequested Posted December 26, 2016 (edited) This is the only Supermicro case that might work. It's twice the price, even on eBay. https://www.amazon.com/Supermicro-CSE-743T-500B-Rackmount-Server-Chassis/dp/B003ZU14C0/ref=sr_1_5?ie=UTF8&qid=1482725766&sr=8-5&keywords=supermicro+4u+case Can you put a standard PSU in these? I want to use what I have. It's silent. Edited December 26, 2016 by Doofus
JeremyFr79 Posted December 27, 2016 This is the only Supermicro case that might work. It's twice the price, even on eBay. https://www.amazon.com/Supermicro-CSE-743T-500B-Rackmount-Server-Chassis/dp/B003ZU14C0/ref=sr_1_5?ie=UTF8&qid=1482725766&sr=8-5&keywords=supermicro+4u+case Can you put a standard PSU in these? I want to use what I have. It's silent. No, they use hot-swappable PSUs.
Guest asrequested Posted December 27, 2016 No, they use hot-swappable PSUs. Down the line, when I get 'serious', I'll look at Supermicro. I think they will be too loud for where my rack is. And I'm getting low on money right about now, lol. I want to get a 10G network pretty soon, so I need to hold off a little. I've got another short list of stuff I'm gonna get next (the Norco case is part of it). It'll work well for my HTPC; that's what it's for. The Fractal case doesn't quite fit the way I want it to.
colejack Posted December 27, 2016 (edited) You can get their Titanium "SQ" series PSUs. They are very quiet but cost a bit more. I have dual 500 W Platinum PSUs in mine now, and they are probably the quietest thing in my rack aside from my R210 II. Down the line, when I get 'serious', I'll look at Supermicro. I think they will be too loud for where my rack is. And I'm getting low on money right about now, lol. I want to get a 10G network pretty soon, so I need to hold off a little. I've got another short list of stuff I'm gonna get next (the Norco case is part of it). It'll work well for my HTPC; that's what it's for. The Fractal case doesn't quite fit the way I want it to. Edited December 27, 2016 by colejack
Guest asrequested Posted December 27, 2016 That's good to know. I'll remember that for the future.
JeremyFr79 Posted December 27, 2016 Down the line, when I get 'serious', I'll look at Supermicro. I think they will be too loud for where my rack is. And I'm getting low on money right about now, lol. I want to get a 10G network pretty soon, so I need to hold off a little. I've got another short list of stuff I'm gonna get next (the Norco case is part of it). It'll work well for my HTPC; that's what it's for. The Fractal case doesn't quite fit the way I want it to. Gotcha, I thought you were looking to build another server. I've thought about going 10G, but honestly I just LACP the stuff I need better speed on. Each of my servers runs 6 trunked gigabit links, and my desktop is running dual links. It's all fast enough for me, lol.
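For anyone wanting to reproduce the trunking described above, a minimal sketch of creating an LACP team with PowerShell on Windows Server 2012 R2 might look like this. The team name and adapter names are placeholders, and the matching switch ports have to be configured for LACP as well:

```powershell
# List the physical NICs first to get their exact names
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed

# Bond two gigabit NICs into an LACP team ("Ethernet 1"/"Ethernet 2" are placeholders)
New-NetLbfoTeam -Name "StorageTeam" `
                -TeamMembers "Ethernet 1", "Ethernet 2" `
                -TeamingMode Lacp `
                -LoadBalancingAlgorithm Dynamic

# Verify the team is up and both members are active
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "StorageTeam"
```

The same cmdlets scale to the six-link teams mentioned above; you just list more team members.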
Guest asrequested Posted December 27, 2016 Gotcha, I thought you were looking to build another server. I've thought about going 10G, but honestly I just LACP the stuff I need better speed on. Each of my servers runs 6 trunked gigabit links, and my desktop is running dual links. It's all fast enough for me, lol. I've thought about doing that, but I don't have a lot of networking experience. I just figured it would be easier to 10G it. The switch is only about $400, so it's not VERY expensive, just a little, lol.
JeremyFr79 Posted December 28, 2016 I've thought about doing that, but I don't have a lot of networking experience. I just figured it would be easier to 10G it. The switch is only about $400, so it's not VERY expensive, just a little, lol. Just seems like a lot to spend when very little would ever be able to saturate that. That's just over 1 GB/s; even my 24-drive RAID 50 array maxes out at 650 MB/s or so for read/write, and that's because it's saturating the PCI Express bus. Gig, or even bonded, is still more than sufficient for 99% of stuff out there. Bonding isn't hard to do at all, especially with any modern OS that supports SMB 3.0. Hell, even the few data stores I keep on my file server for the VMs are only linked through a 2 Gbps link and have never needed more.
DAVe3283 Posted December 28, 2016 (edited) If you're only hitting 650 MB/s on your RAID controller, it is either old, misconfigured, or broken. Mine will pull about 7,000 MB/s (yes, that's 56 Gbps) reading and writing to the cache, and 1,000-2,000 MB/s reading or writing to the array. I'm running a 24-drive RAID 60 (2 groups) on a $100 eBay Adaptec card. Really, most videos are way less than 100 Mbps, so why do you even need gigabit, let alone 10GigE? Because we don't want to wait and watch a file transfer. It's all about what we prioritize. I'm still waiting for 10GigE switches to get a bit cheaper, and then I'll upgrade my whole house. I already have Cat 6a in the walls; I was planning ahead when I wired everything. Sent from my FlashScan V2 Edited December 28, 2016 by DAVe3283
Guest asrequested Posted December 28, 2016 (edited) Just seems like a lot to spend when very little would ever be able to saturate that. That's just over 1 GB/s; even my 24-drive RAID 50 array maxes out at 650 MB/s or so for read/write, and that's because it's saturating the PCI Express bus. Gig, or even bonded, is still more than sufficient for 99% of stuff out there. Bonding isn't hard to do at all, especially with any modern OS that supports SMB 3.0. Hell, even the few data stores I keep on my file server for the VMs are only linked through a 2 Gbps link and have never needed more. Yeah, I know. I've had trouble accessing the BIOS of the switch I have. It doesn't seem to agree with Windows 10. And there weren't many settings. Maybe it needs to be upgraded. Edited December 28, 2016 by Doofus
JeremyFr79 Posted December 28, 2016 If you're only hitting 650 MB/s on your RAID controller, it is either old, misconfigured, or broken. Mine will pull about 7,000 MB/s (yes, that's 56 Gbps) reading and writing to the cache, and 1,000-2,000 MB/s reading or writing to the array. I'm running a 24-drive RAID 60 (2 groups) on a $100 eBay Adaptec card. Really, most videos are way less than 100 Mbps, so why do you even need gigabit, let alone 10GigE? Because we don't want to wait and watch a file transfer. It's all about what we prioritize. I'm still waiting for 10GigE switches to get a bit cheaper, and then I'll upgrade my whole house. I already have Cat 6a in the walls; I was planning ahead when I wired everything. Sent from my FlashScan V2 The RAID card is fine, but I'm stuck with a shitty backplane right now: only 4 SAS channels, and it only runs SATA at 1.5 Gbps.
PenkethBoy Posted December 28, 2016 Yeah, I know. I've had trouble accessing the BIOS of the switch I have. It doesn't seem to agree with Windows 10. And there weren't many settings. Maybe it needs to be upgraded. Have you upgraded to the latest firmware for the switch? I have a very similar model and have no issues with Win 10 accessing the firmware. I'm just waiting myself for Intel/MS to release the teaming drivers they disabled, to get LACP bonding on my 10G cards from Win 10. Currently my M.2 NVMe card can still beat the speed of the 10G network on its own - just looking for an M.2 to put in the server as a cache to see if SMB 3 multipath actually works - it appears I just need more speed to see for sure.
MSattler Posted December 28, 2016 Really, most videos are way less than 100 Mbps, so why do you even need gigabit, let alone 10GigE? Because we don't want to wait and watch a file transfer. It's all about what we prioritize. One thing to remember is that not everyone is running Emby on their storage server. In that case, unless you are using Kodi or Emby Theater, none of the movies will direct play straight from the storage server source. So Emby in those cases is requesting the stream and then feeding it to the client. So 1080p Blu-ray rips, say 35 Mbps each, end up being more like 70 Mbps. If the content is being transcoded, then the file may be copied over even faster, although the destination stream will be smaller. I've gotten around this by giving my Emby server 4 bonded 1 Gb interfaces, and my storage servers 2 bonded 1 Gb interfaces. When my house is full, network utilization goes up quickly, and that 1 Gb link starts becoming highly utilized.
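As a rough illustration of that arithmetic (the figures below are examples, not measurements): a relayed stream costs the Emby server roughly double its bitrate, once in from storage and once out to the client, so a handful of Blu-ray-quality plays starts to crowd a single gigabit link:

```powershell
# Back-of-the-envelope estimate of NIC load on an Emby server that relays
# streams from a separate storage box. Example figures only.
$bitrateMbps = 35     # typical 1080p Blu-ray rip
$streams     = 6      # concurrent plays
$linkMbps    = 1000   # single gigabit link

$loadMbps = $bitrateMbps * 2 * $streams          # in from storage + out to the client
$percent  = [math]::Round(100 * $loadMbps / $linkMbps)
"Estimated load: $loadMbps Mbps, about $percent% of a $linkMbps Mbps link"
```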
Guest asrequested Posted December 28, 2016 Have you upgraded to the latest firmware for the switch? I have a very similar model and have no issues with Win 10 accessing the firmware. I'm just waiting myself for Intel/MS to release the teaming drivers they disabled, to get LACP bonding on my 10G cards from Win 10. Currently my M.2 NVMe card can still beat the speed of the 10G network on its own - just looking for an M.2 to put in the server as a cache to see if SMB 3 multipath actually works - it appears I just need more speed to see for sure. I haven't, but I may try it if I decide not to buy the other switch.
JeremyFr79 Posted December 28, 2016 Have you upgraded to the latest firmware for the switch? I have a very similar model and have no issues with Win 10 accessing the firmware. I'm just waiting myself for Intel/MS to release the teaming drivers they disabled, to get LACP bonding on my 10G cards from Win 10. Currently my M.2 NVMe card can still beat the speed of the 10G network on its own - just looking for an M.2 to put in the server as a cache to see if SMB 3 multipath actually works - it appears I just need more speed to see for sure. You don't need LACP or teaming for SMB 3.0 to work; you just need both ends to support SMB 3.0. I have dual NICs in my desktop and, like you, don't have a teaming driver (I wouldn't count on them ever coming back in Win 10, BTW), and I get the full 2 Gbps transfer rate between it and my Server 2012 R2 box through SMB 3.0. There is nothing to configure; it just works.
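If you want to see that working rather than take it on faith, a quick sanity check from the client side looks something like the sketch below. Run it during a large copy; it only uses the stock SMB cmdlets available on Windows 8 / Server 2012 or newer:

```powershell
# While a big file copy to the server is running, list the connections SMB
# Multichannel has opened - more than one row per server means both NICs are in use
Get-SmbMultichannelConnection

# Interfaces SMB considers usable on this machine (link speed, RSS capability)
Get-SmbClientNetworkInterface
```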
Guest asrequested Posted December 28, 2016 You don't need LACP or teaming for SMB 3.0 to work; you just need both ends to support SMB 3.0. I have dual NICs in my desktop and, like you, don't have a teaming driver (I wouldn't count on them ever coming back in Win 10, BTW), and I get the full 2 Gbps transfer rate between it and my Server 2012 R2 box through SMB 3.0. There is nothing to configure; it just works. So it's plug 'n' play? Now I'm interested.
PenkethBoy Posted December 28, 2016 Yes, I wasn't saying you need teaming to get multipath to work, but it would be nice to get teaming working - they "say" they have found the issue that was stopping it working, so we might get lucky. Do you have a similar CPU in both machines? I have seen that you need multi-threaded CPUs to get it to work reliably, i.e. i7 to i7 is OK, i7 to i3 no chance, etc.
JeremyFr79 Posted December 28, 2016 Yes, I wasn't saying you need teaming to get multipath to work, but it would be nice to get teaming working - they "say" they have found the issue that was stopping it working, so we might get lucky. Do you have a similar CPU in both machines? I have seen that you need multi-threaded CPUs to get it to work reliably, i.e. i7 to i7 is OK, i7 to i3 no chance, etc. Most of my servers are VMs residing on a Hyper-V host on 2012 R2; it's running a total of 4 Xeon L7555s and 6 Intel NICs (4 are dedicated to VMs and set up as 4 vSwitches, with VMs teamed as needed on a per-VM basis; 2 are host traffic only). The file server is running Server 2012 R2 on a pair of Xeon L5520s with 6 Intel NICs all teamed together as one link, and my workstation is running Win 10 Pro on a Xeon X5570 with, I believe, Broadcom NICs (not teamed, of course). SMB 3.0 works like a charm every time and easily saturates my 2 Gbps link to my workstation.
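One way to approximate that per-VM setup on a 2012 R2 Hyper-V host is sketched below; the adapter, switch, and VM names are placeholders, and the guest can then team the vNICs itself or simply let SMB Multichannel use both:

```powershell
# One external vSwitch per physical NIC dedicated to VM traffic
New-VMSwitch -Name "vSwitch1" -NetAdapterName "NIC3" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch2" -NetAdapterName "NIC4" -AllowManagementOS $false

# Give the VM a vNIC on each switch so it has two independent paths
Add-VMNetworkAdapter -VMName "FileServer01" -SwitchName "vSwitch1"
Add-VMNetworkAdapter -VMName "FileServer01" -SwitchName "vSwitch2"
```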
JeremyFr79 Posted December 28, 2016 As a note, I don't see much if any overhead when using multipath either. Back when I had some QNAPs set up in a mini "SAN", I was running MPIO to them from my file server, and it couldn't touch what I can get with SMB 3.0 nowadays.
Guest asrequested Posted December 28, 2016 (edited) So what hardware do I need? I'll need to update the switch firmware. If I get 10G network cards, will they be backward compatible? And I don't have a server motherboard. Edited December 28, 2016 by Doofus
JeremyFr79 Posted December 28, 2016 SMB 3.0 is hardware independent; it's handled at the software layer. So you need an SMB 3.0-capable OS on each end - essentially Windows 8 / Server 2012 or higher for Microsoft OSes.
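A quick way to confirm both ends actually negotiated SMB 3.x is to check the dialect of an existing connection from the client; this is a sketch using the stock cmdlets, nothing vendor-specific:

```powershell
# Open the share first (browse to it or run dir \\server\share), then:
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
# A Dialect of 3.0, 3.02, or 3.1.1 supports multichannel; 2.x means one end is too old
```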
Guest asrequested Posted December 28, 2016 SMB 3.0 is hardware independent; it's handled at the software layer. So you need an SMB 3.0-capable OS on each end - essentially Windows 8 / Server 2012 or higher for Microsoft OSes. So once I update the switch firmware and install the network cards, I just have to connect the cables? I guess I need to trunk them in the switch, too?
JeremyFr79 Posted December 28, 2016 So once I update the switch firmware and install the network cards, I just have to connect the cables? I guess I need to trunk them in the switch, too? Neither the switch nor the NICs play any role in SMB 3.0; you just need more than one network connection on each device, and the OS will (or rather should) handle the rest.
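Multichannel is on by default, but if it has been switched off somewhere, this is roughly how you would check and re-enable it on each end (stock SMB cmdlets again; the Set- lines are commented out so nothing changes by accident):

```powershell
# On the server (Windows 8 / Server 2012 or newer)
Get-SmbServerConfiguration | Select-Object EnableMultiChannel
# Set-SmbServerConfiguration -EnableMultiChannel $true

# On the client
Get-SmbClientConfiguration | Select-Object EnableMultiChannel
# Set-SmbClientConfiguration -EnableMultiChannel $true
```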