mediacowboy
Posted
*I know this forum is for Emby and that I am asking about another product. I feel more comfortable asking here and then going to the FreeNAS forums after.*
 

**Let me start off by saying I haven't read or looked too deeply into FreeNAS. I know it's free and that there is an Emby plugin. I am also an IT guy, so I'm not afraid of learning something new.**

 

So with this topic I would like to find out what other users recommend as far as hardware and running Emby on FreeNAS.

 

Okay, so my current setup is in my signature. I want to turn this into my everyday machine and build something smaller and more set-and-forget, with the exception of adding more hard drives as my collection grows. Also, the wife is mad that I am always rebooting and doing other things on the current setup that cause clients to buffer and act really strangely. So without further ado, here is the list of parts I am thinking of.

 

New Parts:

 

Case: DS380D

MoBo: ASRock H170M-ITX/DL

CPU: Intel Core i5-6500

Memory: G.Skill

Power: SFX ST30SF

Primary Drive: SAMSUNG 850 EVO

 

Okay, that is the basic system. The OS will go on the EVO. I am torn between getting a second one and creating a RAID of some sort for the OS, or getting a second one that is dedicated to transcoding. Most of my content is already in a stream-friendly format, but the times I would transcode are for external connections and Live TV. Okay, so now you ask what I am going to do for storage.

 

Storage System:

Controller Card: IBM M1015

Storage Hard Drive: Seagate Archive

 

Okay, so the storage hard drives will be bought as I need them, going up to 8. I will start with 3 and go from there. This system will be primarily dedicated to Emby. I may add other programs (read: SABnzbd, Sonarr, and other such programs) at a later date. Also, on this thought, I do not want to have to manage each drive individually; I want to add a drive to the chassis and then have FreeNAS pick it up and pool it with the others to form one big drive.

 

I know the CPU may be too much power for FreeNAS, but like I said about the EVO drive, I do transcode and would like it to finish sooner rather than later, and I would also like to be able to use the Intel GPU transcoding that is now built into Emby.

 

I also know I can probably get a cheaper motherboard for that case, but I may one day use the other port for home automation and security.

 

When I have a free day during the week without the kids and the wife, I would also like to take the FreeNAS training class.

 
So let me know your thoughts and advice.

 

 

 

legallink
Posted

So, a couple of points, as I have previously gone down this road.

 

1. The case is nice, but I chose the Node 804 because it fits the same number of drives but is also micro-ATX, which means motherboard selection is better (especially if you need/want dual NICs in the future). It's also easier to work in, from what I have heard, but you lose hot-swap; however, I doubt you'll be swapping drives all that often.

 

2. FreeNAS doesn't require a ton of hardware, so you should really build to what Emby needs. Transcoding, depending on user count, can require a lot.

 

3. Are you intending to go the ZFS route?

 

4. You don't need a second SSD for transcoding. FreeNAS fits on a thumb drive and even then doesn't require a ton of space or usage. You should be fine for longer than the machine is probably in service. That being said, make sure to keep your metadata and cache on the SSD; it will populate everything much faster.

 

5. If you go micro-ATX, you don't need the IBM card. Supermicro boards give you enough ports, dual NICs, and IPMI.

 

6. Lastly, while overkill now, if you are thinking of doing some VMs on the system, I would bump it to an i7 if the budget allows.

mediacowboy
Posted

1. I liked the hot-swap case for the ability to add a hard drive without having to shut down the machine. That is one of the wife's biggest complaints: why do you always have to reboot and shut down?

 

2. Right now I see transcoding of possibly 3 Live TV streams at once and maybe 2-3 external connections.

 

3. I'm not sure on the ZFS. I don't know a lot about it yet. Like I said above, I want the option to pool drives, and it looks like ZFS is the way to go for that.
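For what it's worth, ZFS grows a pool by adding whole vdevs, not by slotting single disks into an existing raidz, so "add one drive and have FreeNAS pool it" needs a little planning. A rough sketch of the shell side, with hypothetical pool and device names (the FreeNAS GUI wraps the same operations):

```shell
# Create a pool "tank" from the first three drives as one raidz1 vdev
# (device names here are hypothetical placeholders)
zpool create tank raidz1 /dev/ada1 /dev/ada2 /dev/ada3

# Later, grow the pool by adding another whole vdev of three drives;
# a single extra disk cannot be folded into the existing raidz vdev
zpool add tank raidz1 /dev/ada4 /dev/ada5 /dev/ada6

# Verify the layout
zpool status tank
```

So if you plan to go from 3 drives to 8 one disk at a time, it's worth sketching the vdev layout up front.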

 

4. So you think a single 120 would be okay, or would you RAID 1 two of them for redundancy?

 

5. I'll have to look into that recommendation.

 

6. I do like the idea of VMs in the future, to be able to run maybe home automation and home security software.

sleeplessone
Posted (edited)

FreeNAS + Emby is amazing. I'm running it now and have been running FreeNAS standalone without Emby for years.

That said, putting anything you care about on a ZFS pool without ECC memory is a bad idea. As it's a self-healing file system, if the data in RAM is corrupted by a bad section of RAM, then your entire pool can become corrupt before you've even noticed. Also, ZFS performance is very RAM-dependent.

 

My current setup is

  • Xeon E3 1225 v5
  • 16GB ECC RAM (this is lacking for the storage it's now handling; realistically I need to add another 16GB)
  • IBM ServeRAID M1015 that's been crossflashed into IT mode (basically no longer a RAID card, just passthrough; there isn't even a BIOS on it, so you couldn't boot off the card)
  • 6 x 4TB drives in a RAIDz2 pool
  • 2 x 2-drive mirrored vdev pool (basically a 4-drive RAID 10)
  • Boot drive is a 16GB SATA DOM (basically a tiny SSD that powers off the SATA port designed for it on some motherboards)
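As a sanity check on that layout: raw usable space in a RAIDz2 vdev is (drives − 2 parity drives) × drive size, before ZFS metadata overhead. Assuming the mirrored pool also uses 4TB drives (that size isn't stated above), a quick sketch:

```shell
# RAIDz2 reserves two drives' worth of parity per vdev
DRIVES=6
SIZE_TB=4
RAIDZ2_TB=$(( (DRIVES - 2) * SIZE_TB ))
echo "raidz2 pool: ${RAIDZ2_TB} TB usable (before overhead)"   # 16 TB

# Two 2-drive mirrors striped together (RAID 10-style): half the raw space
MIRROR_TB=$(( 2 * SIZE_TB ))
echo "mirror pool: ${MIRROR_TB} TB usable"                     # 8 TB
```

That 16 TB figure is why RAIDz2 is popular at this drive count: two-drive fault tolerance for a one-third capacity cost.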
Edited by sleeplessone
legallink
Posted (edited)

I personally would go larger than 120. I have a 120 right now, and with metadata, cache, etc., I only have about 30GB free. It doesn't leave a ton to feel comfy with.

 

I don't think the RAID on the boot drive is going to add very much. SSD failure is very rare at this point.

 

And if you are thinking of going ZFS, then I agree with sleepless: you need to step up to a processor that can handle ECC RAM. I think some i7s do, as well as Xeons, naturally.

Edited by legallink
Posted

I'd only be concerned about the Seagate Archive drives. Last I read Seagate themselves were not recommending them for NAS applications.

 

I've only used Western Digital Red NAS drives since their creation and they've been really solid. Seagate has their own branded NAS drives, if that's the company you prefer.

NomadCF
Posted


On a Synology NAS 1511+ or 1515 they get marked as bad or drop out of the array (first-hand experience). With ZFS and Linux RAID I had zero issues with them, even under stressing loads for extended periods (dd, ZFS resyncs (send|receive), and array rebuilds). Although, funny enough, with Windows Storage Spaces I needed to disable drive sleeping or else it would mark the drive as "missing". And both the Dell H700 and H800 have each dropped drives or had drives "missing" at boot time until I increased the wait time for a missing drive & drive spin-up.

 

It seems 99% of the problems I've had with them are due to the sluggish wake-up of the drives. But except for the Synology boxes, I was able to overcome this issue, and they've been running "flawlessly" since.

 

 

 

**With the M1015 you can have the best of both worlds: crossflash it to IR instead of IT. This way it's still passthrough for anything non-RAID but is still bootable, AND you can have a RAID 1/etc. for your boot drives. It does add a very noticeable amount of time to your boot, though.**

 

 

**SSD Drives - Researchers at Carnegie Mellon University** (Source: http://www.storagereview.com/first_large_scale_in_field_ssd_reliability_study_done_at_facebook)

  • SSDs go through several distinct failure periods – early detection, early failure, usable life, and wearout – during their lifecycle, corresponding to the amount of data written to flash chips.
  • The effect of read disturbance errors is not a predominant source of errors in the SSDs examined.
  • Sparse data layout across an SSD’s physical address space (e.g., non-contiguously allocated data) leads to high SSD failure rates; dense data layout (e.g., contiguous data) can also negatively impact reliability under certain conditions, likely due to adversarial access patterns.
  • Higher temperatures lead to increased failure rates, but do so most noticeably for SSDs that do not employ throttling techniques.
  • The amount of data reported to be written by the system software can overstate the amount of data actually written to flash chips, due to system-level buffering and wear reduction techniques.

  

mediacowboy
Posted

@NomadCF,

 

Are you saying that you are currently running the Seagate Archive drives in your setup?

NomadCF
Posted

I currently have 12 of them; 6 per Dell PowerEdge FS12-TY C2100, each set in a RAID 6 via ZFS, using the M1015 in IR mode (RAID 1 for my OS and bootability, passthrough for the data drives).

 

 

I had first tried 5 of them in my Synology 1511+ (extra backup server) and then in a 1515 (client testing server). I then tried them in other Dell servers with the H700 & H800 controllers. And finally in my testing server running Server 2012 (at that time).

sleeplessone
Posted (edited)

I don't think the RAID on the boot drive is going to add very much. SSD failure is very rare at this point.

 

I agree 100% on this one. I could have done a mirrored SATA DOM boot for an extra $40, but with ZFS for home use I didn't really see the point. If the drive dies, I just buy a new one, do a fresh install, and restore my config backup. The data pools will be imported automatically.
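That recovery path is short enough to sketch; after a fresh install, an existing data pool comes back with an import (pool name "tank" is a hypothetical placeholder):

```shell
# List pools that ZFS can see on the attached disks
zpool import

# Import the data pool by name (FreeNAS's "Import Volume" does the same)
zpool import tank
```

The pool's layout and data live on the disks themselves, which is why a dead boot drive is a minor event.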

 

I've had bad luck with Seagate drives, though I've never used the Archive line. I usually stick with WD Red or HGST NAS drives; HGST NAS will usually perform better than Red since they are 7200 RPM drives. Looking at the page for the Archive drives, it looks like they are meant more for cold storage, meaning you aren't using them often, more in line with the WD Ae datacenter drives.

Edited by sleeplessone
