
Disk Activity/Latency when running Emby in a VM on Hyper-V


RedBaron164

I've been running Emby in a VM on Hyper-V for a couple of years now. Recently I've been trying to resolve some disk activity/latency issues that have been causing delays while loading and moving around in Emby from various clients. I think I've finally found my solution and wanted to share what I learned on the off chance it might help someone else.

 

Long story short: move the cache, metadata, and transcoding-temp directories to a separate virtual hard disk (VHD).

 

I noticed that when an Emby client would take a long time to load or start to time out, disk activity on the server would jump noticeably. I also found that running the disk cleanup task would cause the same kind of usage. While monitoring disk performance in Task Manager and Resource Monitor I would see disk activity max out: the disk queue length would sit around 0.90-1.5 and the average response time for the disk would hit 500 ms, even though disk throughput was only being reported at about 500 kbps.
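For anyone who would rather log those counters than sit watching Resource Monitor, something like this should work in PowerShell (the _Total instance is just an example; substitute the specific disk if you know it):

```powershell
# Sample physical-disk queue length and response time every 2 seconds.
# 'Avg. Disk sec/Transfer' is reported in seconds, so 0.5 = 500 ms.
Get-Counter -Counter @(
    '\PhysicalDisk(_Total)\Avg. Disk Queue Length',
    '\PhysicalDisk(_Total)\Avg. Disk sec/Transfer'
) -SampleInterval 2 -Continuous
```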

 

I had two VHDXs on the VM: a fixed-size disk for C: and another fixed-size disk for D:, which is where shows are recorded. The first thing I did to try to improve the situation was reconfigure my Hyper-V host's storage as a RAID 10 array instead of RAID 5. This helped a little but not significantly, and the issues were still present.

I kept monitoring disk usage and noticed that the cache directory was being accessed rather frequently, so I decided to move the Emby cache to a different disk. I added a third 10 GB fixed-size disk and changed the cache path. This helped a little, but I would still see performance issues when the Library.db was being accessed heavily. After looking into whether I could move the Library.db to a different directory (and finding out it would be rather difficult), I noticed that the metadata directory would show up frequently, so I decided to move that instead. I moved the metadata location to the same drive as the cache, and after refreshing the metadata in my libraries I finally noticed a significant improvement in performance. I also moved the transcoding-temp directory to the third drive, just to cover all my bases.
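If anyone wants to script adding that third disk from the host, a rough sketch (the VM name and paths are placeholders, not my actual setup):

```powershell
# Create a 10 GB fixed-size VHDX on the host, then attach it to the VM's
# SCSI controller. 'EmbyVM' and the paths are placeholders.
New-VHD -Path 'D:\Hyper-V\Disks\emby-cache.vhdx' -SizeBytes 10GB -Fixed
Add-VMHardDiskDrive -VMName 'EmbyVM' -ControllerType SCSI `
    -Path 'D:\Hyper-V\Disks\emby-cache.vhdx'
# Inside the guest: bring the disk online and format it, then point the
# cache, metadata, and transcoding-temp paths at it from the Emby dashboard.
```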

 

I originally did not think that moving the cache/metadata directories to a different virtual disk would make a difference, since all the VHDs were on the same physical volume. But doing just that appears to have resolved my issue, and now I regret not doing it sooner. I've been running performance tests since making the changes, and so far I have not been able to reproduce my original performance issues with a variety of clients, including FireTV, Web, and Theater. My Emby clients are much more responsive, and not having to sit around and wait while browsing my music library is refreshing. And if I never see that VolleyError timeout message again I'll be very happy.

 

I'm going to keep a close eye on my disk utilization and performance for the next few days but wanted to share what I was experiencing and what I did in case anyone else runs into the same issue.

 

Also, on a side note: I did try enabling Quality of Service Management on the virtual disks, but it ended up making the situation worse. I'm also not sure whether this solution applies only to Hyper-V or to VMware as well. I'm not running VMware ESXi at home, so I can't say for certain. But if you are running Emby in VMware and are having a similar issue, then maybe this will help you too.
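If you want to check whether QoS limits are set on your own disks, or back them off after testing, the relevant cmdlets look roughly like this (VM name and controller location are placeholders):

```powershell
# Show any IOPS limits currently applied to the VM's virtual disks.
Get-VMHardDiskDrive -VMName 'EmbyVM' |
    Select-Object Path, MinimumIOPS, MaximumIOPS

# A MaximumIOPS of 0 removes the cap (IOPS are normalized 8 KB I/Os).
Set-VMHardDiskDrive -VMName 'EmbyVM' -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 0
```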


Hi, thanks for the info! If you think there's an Emby issue somewhere, then the best thing to do is discuss an example and attach the information requested in How to Report a Problem. Thanks!


happpyg

Thanks for the detailed post!  

 

I too am running Emby in a Hyper-V VM and have been plagued by random slow page load times (about 3-4 per week, accessing Emby via the web 10-15 times a week). I have also noted high disk usage almost identical to your description, particularly on the library.db file and metadata files, as you suggest. Even though the VHD is on a RAID 10 volume with fast SAS disks, it still randomly takes 30-60 seconds to load the home page and occasionally times out.

I have been asked to remove plugins, submit logs, etc., which I did, but I never got anywhere with it, as I cannot find a pattern, and had almost given up.

 

I'll be trying this out and hopefully it resolves the issue!

 

Cheers!


RedBaron164

happpyg said:

Thanks for the detailed post! I too am running Emby in a Hyper-V VM and have been plagued by random slow page load times... I'll be trying this out and hopefully it resolves the issue!

 

I hope my experience helps you. One other thing I noticed last night during testing is that I do see a difference in disk utilization depending on where I store the cache VHD. On my Hyper-V host I have standalone Hyper-V installed on an SSD, while all the VMs are stored either on the local RAID 10 storage array or on my Synology via iSCSI. I tried putting the cache VHD on the SSD and saw additional performance gains and reduced disk activity. I'm still experimenting with the cache VHD on the host's SSD. If that continues to show improvements over having it on the same physical volume as the other Emby VM disks, I may get a small 32/64 GB SSD and use it just to store the cache VHD.
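Moving just the one VHD without touching the others can be done as a live storage migration from the host; a sketch (VM name and paths are placeholders):

```powershell
# Migrate only the cache VHDX to the host SSD while the VM keeps running,
# leaving the OS and recordings disks where they are.
Move-VMStorage -VMName 'EmbyVM' -VHDs @(
    @{ SourceFilePath      = 'D:\Hyper-V\Disks\emby-cache.vhdx'
       DestinationFilePath = 'C:\VMStorage\emby-cache.vhdx' }
)
```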


happpyg

RedBaron164 said:

I hope my experience helps you. One other thing I noticed last night during testing is that I do see a difference in disk utilization depending on where I store the cache VHD...

 

Thanks RedBaron, sounds like your setup is similar to mine! Will see how it goes over the next few weeks.


Swynol

I'm not using Hyper-V, but I'm tempted to move my metadata and cache folders to an SSD to see if it makes a difference.


  • 3 months later...
TimFromFL

RedBaron164 said:

I've been running Emby in a VM on Hyper-V for a couple of years now... Long story short: move the cache, metadata, and transcoding-temp directories to a separate virtual hard disk (VHD).

 

@@RedBaron164 I have a similar setup to yours and had a couple of questions. I saw on another thread that you were experimenting with moving the Emby-Server\data folder by using symbolic links and reported some success with it. Is it something you would recommend for others, or was it not worth the trouble? Also, just curious about your Hyper-V host's config: are you still using RAID 10 for your VHD storage, and if so, how many disks? Do you also have an SSD RAID set, or did you just move everything to the SSD that the host is installed on?

 

Thanks in advance,

 

TimFromFL


RedBaron164

I have been using a symbolic link for the data directory since I made this post and have not had any issues with it. It was definitely worth the trouble for me. Since moving the data directory to an SSD, Emby has performed considerably better; night and day, really.
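For anyone wanting to try the same thing, the procedure is roughly the following, from an elevated PowerShell prompt (the paths are placeholders for wherever your Emby-Server data and the SSD-backed disk live, and the service name varies by install):

```powershell
# Stop Emby, copy the data directory to the SSD-backed disk, then replace
# the original directory with a symbolic link to the new location.
Stop-Service 'EmbyServer'        # adjust to your service name
robocopy 'C:\ProgramData\Emby-Server\data' 'E:\Emby\data' /E /COPYALL
Rename-Item 'C:\ProgramData\Emby-Server\data' 'data.old'
New-Item -ItemType SymbolicLink -Path 'C:\ProgramData\Emby-Server\data' -Target 'E:\Emby\data'
Start-Service 'EmbyServer'
```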

 

As far as a recommendation goes, I think having the entire Emby OS VHD on an SSD is probably the best scenario. But if you're like me and that wasn't an option, just moving the data to an SSD worked wonderfully.

 

My Emby VM is still sitting on a local RAID 10 volume with 6x 3TB Western Digital Red drives.

 

I do not have an SSD RAID set. I just created the small 10 GB VHD on the host SSD. I do plan on adding another, larger SSD at some point in the future, and when I do, I will move the entire Emby OS VHD to that SSD.


TimFromFL

@@RedBaron164 As an update, I moved my VHD to a spare SSD that I had and, as you pointed out, that has made a dramatic difference. Ideally I would have liked all of my VHDs to be on my RAID set, but until I get the performance where it needs to be, I have a workable solution. Thanks again for your help.


Lotus503

I run a 2008 R2 Hyper-V VM.

 

I have the VM on an SSD, and I moved everything configurable to another drive via disk pass-through. I think I accomplished the same thing, because I don't have any issues.


TimFromFL

Lotus503 said:

I run a 2008 R2 Hyper-V VM. I have the VM on an SSD, and I moved everything configurable to another drive via disk pass-through...

I was seeing all of my disk activity on the database files, and I didn't want to have to uninstall and reinstall to a custom location or use a portable installation. I wanted to keep things in a fault-tolerant setup, so writing directly to a drive was not something I wanted. I now know what to look for, and I plan on ordering some more drives to get the performance I need, or maybe setting up an SSD RAID set. The big reason I moved everything to a VM was so that every time I had a hardware issue I wasn't starting from scratch rebuilding things. With my current setup, I can survive most single-component failures without an outage.

 

I do have a question for you, @@Lotus503 and @@RedBaron164: do either of you use RemoteFX graphics cards in your VMs for hardware-accelerated transcoding?


RedBaron164

As far as hardware acceleration goes, I do not have a RemoteFX graphics card. I am running on AMD hardware, so I could not use the Intel/NVIDIA tech, but I did enable VA-API acceleration and saw a noticeable drop in CPU usage when transcoding.
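If anyone wants to sanity-check VA-API inside a Debian-based guest before enabling it in the dashboard, something like this (assumes the vainfo package; the device path can differ):

```bash
# Confirm a DRM render node is exposed to the guest.
ls /dev/dri
# List the VA-API driver and the codec profiles it supports.
sudo apt install vainfo
vainfo
```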


TimFromFL

RedBaron164 said:

As far as hardware acceleration goes, I do not have a RemoteFX graphics card... I did enable VA-API acceleration and saw a noticeable drop in CPU usage when transcoding.

I have a mid-range NVIDIA card that should help, so I will try that out. I'll report back my findings so others can benefit.


  • 4 weeks later...
snake98

TimFromFL said:

I have a mid-range NVIDIA card that should help, so I will try that out. I'll report back my findings so others can benefit.

Did it help? Do you have an Intel CPU you could try Quick Sync with? I'm interested, as I've had a live TV buffering issue in VMware Workstation on a Skylake 6600K.


  • 2 years later...
markkundinger

 

RedBaron164 said:

Long story short: move the cache, metadata, and transcoding-temp directories to a separate virtual hard disk (VHD).

Bumping an old thread, but for folks who had problems: did any of them include a delay before a direct-play movie would start playing (like 15+ seconds instead of a couple)?

 

Asking because I'm having a problem with an ESXi VM right now. Thanks!


pjshots

I run Emby on a Debian VM on Server 2012 R2 Hyper-V. I have the VM on an SSD, and I have no issues with the responsiveness of Emby. It actually runs really well and I'm more than happy with it. I map a drive for my media and temporary conversions and share that as well. No RAID, just a single drive.


  • 1 year later...
shooftie

@pjshots, can I just clarify your last post?

You have Emby running on a Debian VM and the media where?

I am currently running a Server 2019 host with, among others, a Server 2019 guest containing both my media and an Emby installation, but I am having horrible lag, sometimes even when I am simply accessing my library. I think the machine is a little underpowered at times, but I don't think it should be struggling just to browse.

Host machine is an i5 3570K 32GB RAM.
Guest gets 4 (virtual) cores and 16GB.
Emby sits on a dedicated SSD that is passed into the guest.
Media is sitting in DrivePool across many drives that are also passed into the guest.

As I understand it, Emby should only be using DrivePool when accessing the media for playback/indexing; otherwise it should be working from its own dedicated SSD.

I was considering upgrading to something stupid until I read this thread. WDYT? Do you think it would be worth my while creating a dedicated VM for Emby and then sharing the media via SMB? It certainly feels like I should, from a security perspective.


pjshots
1 hour ago, shooftie said:

Do you think it would be worth my while creating a dedicated VM for Emby and then sharing the media via SMB?

@shooftie
Yes, that's what I do. I have Emby on my VM SSD, and Windows/StableBit does the drive pool and share; then I mount that in the Emby VM. It's been working fine, no lag; having Emby on the SSD cured that stuff.

It's funny you post, actually. I've been moving to Proxmox. I used Clonezilla for the VMs. Everything else went fine, but Emby is broken; transcoding doesn't work at all. I tried a Debian reinstall and restored my Emby data, same thing. I still have my Hyper-V disk, so I'm going to check that and see. I suspect it's because Windows SMB is newer, but Emby will play fine if it doesn't transcode. Strange.


shooftie

@pjshots you are a f**king godsend!

Moving Emby into a separate (Ubuntu Server) VM, the VHDX of which is sitting on a dedicated SSD, has almost fixed everything. I have managed to reduce the number of cores and the memory dedicated to the storage VM, and Emby is happy playing alone with 4 (virtual) cores and 4 GB+ of memory.
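For reference, resizing the guests from the Hyper-V host is a one-liner each; a rough sketch (the VM name is a placeholder, and the VM has to be off to change the core count):

```powershell
# Give the Emby guest 4 virtual cores and a fixed 4 GB of RAM.
Set-VMProcessor -VMName 'Emby' -Count 4
Set-VMMemory -VMName 'Emby' -DynamicMemoryEnabled $false -StartupBytes 4GB
```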

One major problem I had was porting across the DB, which I gave up on in the end and just rebuilt everything.

I wish I could offer some kind of advice regarding your current migration, but unfortunately I have only a basic grasp of my own situation and so can only offer moral support... You got this!

Now I need another reason to convince the other half that upgrading to something stupid is still a good idea.


pjshots

35 minutes ago, shooftie said:

@pjshots you are a f**king godsend!

Glad you got some good performance finally.

 

36 minutes ago, shooftie said:

I wish I could offer some kind of advice regarding your current migration, but unfortunately I have only a basic grasp of my own situation and so can only offer moral support... You got this!

I did get it resolved; it turns out Emby is really fussy about CIFS shares. I use the same fstab info for all of my Debian VMs, but Emby needed me to change it. I don't know why, as it's been running for years on Hyper-V. At least I know what it is.
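For context, a CIFS entry in /etc/fstab looks something like the line below (server, paths, and credentials are placeholders; the vers= protocol option is one example of the kind of setting that can behave differently between hosts):

```
# Illustrative CIFS mount for the media share; all values are placeholders.
//nas/media  /mnt/media  cifs  credentials=/etc/smb-cred,uid=emby,gid=emby,vers=3.0,_netdev  0  0
```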

