Hardware advice - moving from Plex to Emby



Hi everyone.


I'm done with Plex; the support stinks, and I have lifetime Emby from the MB3 days. If they ever fix their stuff, I have lifetime there as well.


Currently my content is all MKV-format video from DVD/Blu-ray: 25TB of storage on FlexRAID (snapshot) with 2 parity drives.


I have a P9X79-WS workstation mobo with a 3930K, 32GB of 2133MHz DDR3, an IBM M1015 flashed to IT mode (JBOD), a Hauppauge 2250 dual tuner, and an EVGA GT 730 2GB.


We have a couple of iPads, a couple of Androids, a Roku 3, an Xbox One S, and a couple of laptops/desktops. Playback is primarily through the Xbox and the Roku (still doing 1080p).


I wanted to ask about a couple of upgrades to my hardware. First, I ordered an E5-2670 v2 ten-core, which is a hit on single-thread performance but pushes my PassMark up over 15,000. It also enables true PCIe 3.0 and the other improvements Ivy Bridge-E brought over Sandy Bridge-E (an overall performance gain). For Plex transcodes, this is the right move. For Emby?
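For what it's worth, there's a widely-quoted rule of thumb for turning a PassMark score into transcode capacity: roughly 2,000 points per simultaneous 1080p (10 Mbps) software transcode. That figure comes from Plex's old guidance, but since both servers lean on FFmpeg for the heavy lifting, it's a reasonable ballpark for Emby too. A quick sketch, where the 2,000-points-per-stream number is the assumption:

```python
# Rule-of-thumb transcode capacity from a PassMark score.
# ASSUMPTION: ~2,000 PassMark points per simultaneous 1080p
# software transcode (Plex's old guideline; Emby should be in
# the same ballpark since both drive FFmpeg).
POINTS_PER_1080P_TRANSCODE = 2000

def estimated_transcodes(passmark_score: int) -> int:
    """Whole number of simultaneous 1080p software transcodes to expect."""
    return passmark_score // POINTS_PER_1080P_TRANSCODE

# An E5-2670 v2 lands a bit over 15,000 on PassMark:
print(estimated_transcodes(15000))  # -> 7
```

So even as a rough estimate, the chip has plenty of headroom for a handful of streams.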


Second, is it worth dropping in a PCIe SSD for the transcode folder? The 128GB model is only $100 right now.


Third, would it help network throughput to jam another 2-4 gigabit Ethernet ports in? I already have 2 gig-E ports on the mobo.



Any feedback is welcome!


I'm not remotely qualified to answer (you lost me at FlexRAID), but it might help the hardware types here if you mention how many concurrent transcoding sessions you expect to be doing.




1. The more CPU horsepower the better - transcoding (without GPU assistance) is the biggest load in Emby, and with lots of cores you can run multiple transcodes at once.

2. A PCIe disk for the cache and metadata will make the Emby server much snappier to use; the transcode folder will see some benefit if you have several streams being transcoded at the same time.

3. Having multiple network ports is only going to help if you enable port trunking, i.e. combining them so they appear as one interface. Then multiple clients can connect at the same time and each receive up to 1Gb of throughput - not that they will need it, even with a 4K stream - and this assumes your disks can supply data to several clients at once. Before you go down that road, though, I would try running several clients against the server to see whether you actually get poor performance; getting port trunking to work can be fiddly, and you need an expensive switch that supports it.
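To make the trunking idea concrete: on a Linux server this is usually done with the kernel bonding driver in 802.3ad (LACP) mode. A minimal sketch, assuming interfaces named eth0/eth1 and a switch with LACP enabled on the matching ports (on Windows you'd use the NIC vendor's teaming utility instead):

```shell
# Sketch only: 802.3ad (LACP) bonding with iproute2 on Linux.
# eth0/eth1 are assumed interface names; the switch ports must
# be configured for LACP or the bond will not come up properly.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # example address
```

Note that LACP balances per-flow, not per-packet: a single client still tops out at one link's speed, which is exactly the "up to 1G each" behaviour described above.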


If you give us some more info, as the guys above indicated, we can help you with any other questions/issues.


Have fun 


Nice CPU. That is a 10-core with Hyper-Threading (https://ark.intel.com/products/75275?ui=BIG). I don't know whether the chip supports HEVC. Are you going to run Windows or Linux?


How many simultaneous transcodes do you expect to run?


Here, I generally recommend placing the OS and software on an SSD. If you can afford it, cache folders should go on a separate SSD. All data should live on a quality NAS or on enterprise-level HDDs, and the HDDs should be set up in a proper hardware-supported array/JBOD.


10 cores with hyperthreading should allow you to run more threads for transcodes.


Multiple NICs would be useful if you either set up a NIC team or segregate different types of traffic to different NICs.

To do a NIC team, your switch needs to support it. I don't think Emby Server can separate the types of traffic it handles (internal clients, external clients, Live TV, etc.) by NIC.




Edited by Tur0k

Got it - Thanks all.


My guess is at most 3-4 transcodes at a time. My MKV collection comes from either MakeMKV or DVDFab passthrough, so it's all big files, uncompressed sound, etc.
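Those big remux files are also easy to sanity-check against the network question from earlier. Assuming a full Blu-ray remux peaks somewhere around 40 Mbps (my assumption; DVD rips are far lower), even four direct-play streams sit comfortably inside a single gigabit link:

```python
# Back-of-the-envelope: do N direct-play remux streams fit in 1 GbE?
# ASSUMPTION: ~40 Mbps per Blu-ray remux stream (DVD content is much less).
GIGABIT_MBPS = 1000
STREAM_MBPS = 40

def fits_in_gigabit(num_streams: int, mbps_per_stream: float = STREAM_MBPS) -> bool:
    """True if the combined bitrate stays under a single gigabit link."""
    return num_streams * mbps_per_stream < GIGABIT_MBPS

print(4 * STREAM_MBPS)     # -> 160 (Mbps total for four remux streams)
print(fits_in_gigabit(4))  # -> True
```

So for 3-4 streams the existing single gig-E port is nowhere near saturated; the CPU, not the NIC, is the constraint.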


My Blu-ray movies are still played via disc in my Oppo BDP-105, so Blu-ray-quality transcodes are limited to TV series and older movies.


For the CPU, the 10-core/20-thread part seemed to be the way to go for Plex; I was crossing my fingers when I ordered it for Emby.


I have a GeForce GT 730 that is basically dedicated to transcodes if needed (and if it works); I never actually have a monitor on when the server is running unless I'm doing maintenance.


For the rest: Windows 7 Ultimate x64 with WMC, a dedicated Vertex 4 SSD for the OS and drivers, a second Vertex 4 SSD for the apps, plus the FlexRAID pool of HDDs.


My thinking on the PCIe SSD was exactly that: speed up transcoding with a small dedicated drive at stupid-fast speed.


With this mobo I already have two gig-E ports, but I didn't team them because my switches are unmanaged and I'm not looking to sink a ton into that side until I go 10-gig... which is years away at this point.


I'll run some quick tests on teaming them and stream rates and see how it looks.


That sounds like a sweet rig. I would note that the GeForce card will support 2 transcodes at a time. I would recommend running with no hardware acceleration first to see how that goes, then dabbling with hardware acceleration if needed. I would also recommend getting a good copy of FFmpeg loaded.
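If you do get around to dabbling, the NVIDIA hardware path in FFmpeg is the NVENC encoder. A minimal sketch of what that invocation looks like (the file names and the 8M bitrate are placeholders, and note that some GT 730 variants are Fermi-based with no NVENC at all, so confirm `h264_nvenc` shows up under `ffmpeg -encoders` before assuming it will work):

```shell
# Sketch: software decode + NVENC hardware encode with FFmpeg.
# input.mkv, output.mkv, and the 8M video bitrate are placeholders.
ffmpeg -i input.mkv -c:v h264_nvenc -b:v 8M -c:a copy output.mkv
```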




I don't think you are going to have a problem with the E5 handling those transcodes.


I agree with @Tur0k...


Run it with no hardware first and see if it works.  You will save yourself a lot of headache trying to get hardware transcoding to work.


Ultimately, Plex and Emby both use similar techniques for transcoding, so I don't think you made a bad decision.


So this FFmpeg thing - this is the biggest difference between Plex and Emby.


What does a "good version" look like? Where do I even start? lol


It's my understanding that Plex just uses a custom build of FFmpeg.


I use the Docker version of Emby and just use the included FFmpeg. If you aren't going to do any special hardware transcoding, just start with the version that ships with Emby.
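Before swapping anything out, it's worth asking the bundled build what it can already do. These are standard FFmpeg flags for inspecting a build:

```shell
# Ask the FFmpeg build what it supports before replacing it.
ffmpeg -version              # build string and configure flags
ffmpeg -encoders | grep 264  # available H.264 encoders (e.g. libx264, h264_nvenc)
ffmpeg -hwaccels             # hardware acceleration methods compiled in
```

If the bundled build already lists the encoders you care about, there's usually no reason to hunt for a different copy.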


So, this got expensive.


Part of the reason I was doing this is that my OS suffered a catastrophic failure on the SSD after a hard power-on/power-off event. Silly me didn't have write-behind caching turned off, and it corrupted... lots of stuff.


So... now I'm expanding. The case I'm using is a Chenbro 11069, and I have 10 hot-swap bays. Currently they are filled with 5 3TB 7200rpm and 5 2TB 7200rpm enterprise drives. 2 of the 3TB drives are parity; the rest are data only.


Since I'm using an M1015 in IT mode, I'm out of space in the case. I also want to bump up my total storage volume.


So, I'm expanding out. I bought 2 8TB WD Reds to replace the parity drives, plus a 3TB and a 4TB Red for storage.


So, I'm going to have 4 DRUs at 8TB (pooled to 32TB) and 2 PPUs at 8TB each for parity.


Unfortunately this means I have 14 HDDs to deal with... enter a 9200-8e in IT mode, run out to a 12-bay hot-swap 4U case with a Supermicro JBOD power board, an 8087-to-8088 slot adapter, a 650W SilverStone PSU, and an Intel RAID expander card.


That leaves me 8 more bays to fill and I can use it as a stand for my monitor on the desk.


BUT...the extra unplanned coin drop lol

