
Emby Docker Instances Consuming 50GB+ RAM Each and Not Releasing Memory Until Restart



Posted

Hi Team,

I’m currently running multiple Emby instances in Docker, and I’ve noticed a consistent memory usage issue. Each instance gradually consumes up to 50GB+ of RAM and never seems to release it back to the system, even when idle or after streams have stopped. The only way to free the memory is to restart the container, which temporarily resolves it.

Setup details:

  • Environment: Docker (running on an Ubuntu host)

  • RAM: 256GB total

  • Emby Version: 4.9.1.80

  • Number of instances: 2 [One for Family, One for Friends]

  • Transcoding: Enabled

What I’ve observed:

  • Each container starts with ~2–3GB of usage but slowly grows beyond 50GB after several streams or over time.

  • Memory is not reclaimed even after streams end or idle periods.

  • When checking htop (the command I use to rank per-process memory is after this list), I frequently see processes like:

    • /app/emby/bin/ffprobe -restartexitcode 3

  • Restarting the container resets memory usage to normal levels.
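
In case it helps others reproduce this, here is how I check which processes inside a container are actually holding the memory. The container name emby-family is from my setup (adjust to yours), and it assumes ps is available in the image:

  # rank processes in the container by resident memory, highest first
  docker exec emby-family ps -eo pid,rss,comm --sort=-rss | head -n 10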

What I’ve tried:

  • Updated to the latest Emby build.

  • Monitored Docker stats to confirm container-level memory usage.

  • Limited Docker memory (example after this list), but this only delays the inevitable OOM kill.

  • Checked transcoding temp folders — no unusually large files remain.
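
For reference, this is roughly how I applied the limit; the container name, paths, and the 32g figure are from my setup:

  # pin both limits so the container cannot fall back to swap
  docker run -d --name emby-family \
    --memory=32g --memory-swap=32g \
    -v /opt/emby/family/config:/config \
    -v /srv/media:/media \
    emby/embyserver
  # confirm the limit and watch usage at the container level
  docker stats --no-stream emby-family
  # untested idea: Emby runs on .NET, so the runtime's GC heap-limit settings
  # might cap the managed heap earlier; I have not tried them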

Questions:

  • Is this a known memory leak or caching behavior related to transcoding or ffprobe?

  • Could the -restartexitcode 3 flag indicate a crash/retry loop that's leaking memory?

  • Any recommended flags or settings to safely limit Emby's memory footprint per container?

Any insights or debugging steps would be greatly appreciated; I can share logs or metrics if needed.

Thanks!

Azra3l

Posted (edited)
15 minutes ago, Luke said:

Hi there, please attach the Emby server log from when the problem occurred:

Thanks!

 

It was at 700MB when I restarted the container 4 hours ago. Now it's at 11GB.
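
I've also started logging container memory on an interval so I can line the growth up with the server log (simple loop; the interval and log path are arbitrary):

  # append a timestamped memory snapshot for all running containers every 5 minutes
  while true; do
    date >> ~/emby-mem.log
    docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' >> ~/emby-mem.log
    sleep 300
  done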

embyserver.txt

Edited by Azra3l
Posted
2 hours ago, Azra3l said:

It was at 700MB when I restarted the container 4 hours ago. Now it's at 11GB.

embyserver.txt 115.37 MB

Hi,

Please try removing these plugins:

2025-10-21 15:47:36.935 Info App: Loading InfuseSync, Version=1.5.2.0, Culture=neutral, PublicKeyToken=null from /config/plugins/InfuseSync.dll
2025-10-21 15:47:36.935 Info App: Loading statistics, Version=3.4.2.0, Culture=neutral, PublicKeyToken=null from /config/plugins/Statistics.dll
2025-10-21 15:47:36.935 Info App: Loading Emby.Notifications.Discord, Version=1.2.3.0, Culture=neutral, PublicKeyToken=null from /config/plugins/Emby.Notifications.Discord.dll
2025-10-21 15:47:36.935 Info App: Loading playback_reporting, Version=2.1.0.7, Culture=neutral, PublicKeyToken=null from /config/plugins/playback_reporting.dll
2025-10-21 15:47:36.935 Info App: Loading DiskSpace, Version=1.0.6.4, Culture=neutral, PublicKeyToken=null from /config/plugins/DiskSpace.dll
2025-10-21 15:47:36.935 Info App: Loading Emby.MediaInfo, Version=1.0.1.20, Culture=neutral, PublicKeyToken=null from /config/plugins/Emby.MediaInfo.dll
2025-10-21 15:47:36.935 Info App: Loading RPDB, Version=1.0.3.2, Culture=neutral, PublicKeyToken=null from /config/plugins/RPDB.dll
2025-10-21 15:47:36.935 Info App: Loading ACdb, Version=2.3.0.3, Culture=neutral, PublicKeyToken=null from /config/plugins/ACdb.dll

Then restart the server and see how things compare. Thanks.
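
If it's easier than uninstalling them one by one from the dashboard, the DLLs can also be moved aside from the host while the container is stopped. A sketch, assuming the container is named emby-family and /config is bind-mounted at /opt/emby/family/config (adjust to your mapping):

  # stop the server, move every plugin DLL out of the load path, then restart
  docker stop emby-family
  mkdir -p /opt/emby/family/config/plugins.disabled
  mv /opt/emby/family/config/plugins/*.dll /opt/emby/family/config/plugins.disabled/
  docker start emby-family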

Posted
14 hours ago, Luke said:

Hi,

Please try removing these plugins: InfuseSync, Statistics, Emby.Notifications.Discord, playback_reporting, DiskSpace, Emby.MediaInfo, RPDB, ACdb.

Then restart the server and see how things compare. Thanks.

I just did; it's still the same. But I've had these plugins since the start, and the issue only started recently.

solidsnakex37
Posted
On 10/22/2025 at 3:59 AM, Azra3l said:

I just did; it's still the same. But I've had these plugins since the start, and the issue only started recently.

I've had this issue many times, and I had to reduce the frequency of library scans. 

What is your library scan interval set to right now?

Posted

I have disabled frequent scans; instead, I have RTM enabled and Autoscan mapped.

 

Is there any way for me to use the same metadata between the two instances? I think all of this started when I refreshed metadata for all my libraries in both instances.

Posted
3 hours ago, Azra3l said:

I have disabled frequent scans; instead, I have RTM enabled and Autoscan mapped.

 

Is there any way for me to use the same metadata between the two instances? I think all of this started when I refreshed metadata for all my libraries in both instances.

Hi, do you have nfo files next to your media? That would be the best way.

Posted
7 hours ago, Luke said:

Hi, do you have nfo files next to your media? That would be the best way.

I don't have nfo files, actually 😅

[screenshot: container memory usage]

This is the recent memory usage. I have restarted the containers again.

Posted
3 hours ago, Azra3l said:

I don't have nfo files, actually 😅

[screenshot: container memory usage]

This is the recent memory usage. I have restarted the containers again.

Enabling nfo saving on one of the servers would be the best way to share metadata between the two.
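
For context, this works because nfo files and artwork are saved next to the media itself, so any instance that mounts the same tree can read them. A minimal sketch, assuming both containers bind-mount the same host media path (names and paths are illustrative):

  # instance 1: enable the "Nfo" metadata saver in its library settings;
  # it will write nfo files and images next to the media under /media
  docker run -d --name emby-family \
    -v /opt/emby/family/config:/config -v /srv/media:/media emby/embyserver
  # instance 2: mounts the same tree and picks up the saved metadata
  # instead of refreshing everything from the online providers again
  docker run -d --name emby-friends \
    -v /opt/emby/friends/config:/config -v /srv/media:/media emby/embyserver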
