andrewmcd7 4 Posted June 21, 2023 Please bear with me as I'm not very tech savvy. I have an Emby server that becomes unresponsive about once a day. The home screen loads the libraries and that's it; it won't show the home screen content. Sometimes it takes 20-40 minutes to become responsive again. I noticed the RAM is VERY bloated at that time. I will upload the logs. Hoping ANYONE can assist. Emby Log 2.pdf Emby Log 1.pdf
Luke 38498 Posted June 22, 2023 Hi, can you please attach the server log files in their original plain text formats? Thanks.
andrewmcd7 4 (Author) Posted June 22, 2023 7 hours ago, Luke said: Hi, can you please attach the server log files in their original plain text formats? Thanks. Sorry, I was doing this from my iPad so I had a hard time opening/downloading the logs. I will need to wait for the issue to happen again. For now, can you use what's attached?
seanbuff 995 Posted June 22, 2023 37 minutes ago, andrewmcd7 said: Sorry, I was doing this from my iPad so I had a hard time opening/downloading the logs. I will need to wait for the issue to happen again. For now, can you use what's attached? The logs should still be available on the server if you know the date/time the issue occurred.
andrewmcd7 4 (Author) Posted June 22, 2023 Here we go. Let me know if this helps. hardware_detection-63822989021.txt embyserver-63822989015.txt
andrewmcd7 4 (Author) Posted June 22, 2023 In addition, it looks like memory is just bloating as well.
Q-Droid 803 Posted June 22, 2023 If this is correct then it may also be your problem. You appear to be using RAM for your transcoding temp path. Quote: 2023-06-22 00:00:35.843 Info App: Transcoding temporary files path: /dev/shm/transcoding-temp
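For context, /dev/shm is a RAM-backed tmpfs, so transcoding segments written there count against system memory until they are cleaned up. A rough sketch of how you could confirm that and move the temp path onto disk instead - the host paths and the container name "emby" below are examples, not taken from the logs:

# Check the size of the RAM-backed tmpfs and how much the transcode temp is using
df -h /dev/shm
du -sh /dev/shm/transcoding-temp

# Example only: recreate the container with a disk-backed folder mounted for transcoding,
# then point the transcoding temporary path at /transcode in Emby's Transcoding settings.
docker run -d --name emby \
  -v /mnt/user/appdata/emby:/config \
  -v /mnt/user/emby-transcode:/transcode \
  emby/embyserver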
andrewmcd7 4 (Author) Posted June 22, 2023 19 minutes ago, Q-Droid said: If this is correct then it may also be your problem. You appear to be using RAM for your transcoding temp path. I will change this and see if that resolves the issue... thanks for your thoughts.
andrewmcd7 4 (Author) Posted June 22, 2023 Sadly that did not resolve the issue. It's currently unresponsive now. New logs uploaded. embyserver.txt
Q-Droid 803 Posted June 22, 2023 You have many errors in your server log; most look like file access problems. It would be best to resolve those to clear the clutter out of the log. There are also a good number of clients connecting and failing to play/stream/whatever. It's a bit of a mess that makes it harder to spot the problem and could very well be contributing to it.
andrewmcd7 4 (Author) Posted June 22, 2023 I just realized the logs cycle after a restart. Here is a different set of logs from before the restart. Hoping someone on the Emby team can help me figure out why my server is bloating randomly. embyserver-63823053301.txt
isamudaison 8 Posted June 22, 2023 What distro/version are you running this on? How are you executing the Emby process (systemd? custom script?)? I'd recommend you enable debug logging and restart the server; this should give some more insight into what is happening. (Probably not what you wanted to hear, but I notice you have Docker running - have you considered trying out the dockerized version?)
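If it helps with those questions, a quick sketch of how to confirm how the process is being run and what it is doing - the container name "emby" is an assumption, adjust to your setup:

# A native install would show up as a systemd service:
systemctl status emby-server

# A Docker install would show up here instead; this also prints the restart policy and mounts
docker ps --filter name=emby
docker inspect emby --format '{{.HostConfig.RestartPolicy.Name}} {{json .Mounts}}'

# Follow the container output alongside Emby's own server log
docker logs -f --tail 100 emby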
andrewmcd7 4 (Author) Posted June 23, 2023 20 hours ago, isamudaison said: What distro/version are you running this on? How are you executing the Emby process (systemd? custom script?)? I'd recommend you enable debug logging and restart the server; this should give some more insight into what is happening. (Probably not what you wanted to hear, but I notice you have Docker running - have you considered trying out the dockerized version?) Debug logging is on now. The host is Ubuntu 18.04, and Emby is the latest official Docker container.
andrewmcd7 4 (Author) Posted June 25, 2023 Hey Emby devs, could you possibly weigh in?
Carlo 4451 Posted June 25, 2023 How's your memory right now? Be careful with debug logging on, as it has always made the problem worse for me. I've narrowed the problem down on my end and can reproduce it at will. What I've found is that when library scans are taking place, any error events trap memory that isn't freed until the scan is complete. If you have the ability, get a graph going of memory used for the day, or starting at the beginning of the test. Restart your Emby server so it's using the least amount of RAM to start with. After it restarts, give it a minute or two to calm down, then go into Scheduled Tasks and run the Scan Media Library job by clicking the arrow to the right. You can run this with the debug log. If you see memory starting to climb, go into the logs menu and click the 2nd icon from the right on the top server log to open the log in a new window. Are you seeing errors in the log that match the time the memory starts to climb? Carlo
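If it helps to capture that graph from the command line, here is a rough sampler, assuming the container is named "emby" (adjust to match your setup). It logs a timestamped memory reading every 30 seconds so spikes can be lined up against errors in the server log:

# Append a timestamped container memory reading to emby-mem.log every 30 seconds
while true; do
  echo "$(date '+%Y-%m-%d %H:%M:%S') $(docker stats --no-stream --format '{{.MemUsage}}' emby)"
  sleep 30
done >> emby-mem.log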
andrewmcd7 4 (Author) Posted June 25, 2023 2 hours ago, Luke said: @andrewmcd7? I'm not sure what the ? is for. I've uploaded logs. It happens occasionally, maybe once a day, and requires a server reboot. I have a friend who has his own Emby server and he's having the same issue.
Luke 38498 Posted June 26, 2023 On 6/25/2023 at 3:49 PM, andrewmcd7 said: I'm not sure what the ? is for. I've uploaded logs. It happens occasionally, maybe once a day, and requires a server reboot. I have a friend who has his own Emby server and he's having the same issue. It was to answer the questions that Carlo had asked. Thanks!
andrewmcd7 4 (Author) Posted June 26, 2023 This is happening when a scan is NOT running. I specifically set the library scan to start at 2 am to avoid this issue, but it still happens on occasion, so it has nothing to do with library scans. Don't the logs show anything? It's all gibberish to me.
Neminem 617 Posted June 27, 2023 I'm seeing this also, and it started after updating Unraid to 6.12.0. From the changelog for Unraid 6.12.0 I see they updated from Docker version 20.10.18 to Docker version 23.0.6. There are several open issues about Docker on the Unraid side, so that tells me that something is now different, not sure what. In the pic below you can see the Emby docker's memory usage and when I updated Unraid; the update was done on 06-17-2023. Yesterday I decided to limit the Emby docker's memory by using this extra docker command: --memory="[memory_limit]" [docker_image] Now Emby has a memory cap of 4GB. I have not seen the memory bloat since, and Emby is running smoothly again.
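For anyone wanting to try the same thing, a sketch of what that limit looks like as a full command outside of Unraid's template - the 4GB value matches what was set here, while the container name and config path are just examples:

# Example only: cap the Emby container at 4GB of RAM; setting --memory-swap to the same
# value also keeps it from spilling into swap. On Unraid this goes in "Extra Parameters".
docker run -d --name emby \
  --memory=4g --memory-swap=4g \
  -v /mnt/user/appdata/emby:/config \
  emby/embyserver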
Carlo 4451 Posted June 28, 2023 On 6/26/2023 at 4:17 PM, andrewmcd7 said: This is happening when a scan is NOT running. I specifically set the library scan to start at 2 am to avoid this issue, but it still happens on occasion, so it has nothing to do with library scans. Don't the logs show anything? It's all gibberish to me. On 6/27/2023 at 12:56 AM, jaycedk said: I'm seeing this also, and it started after updating Unraid to 6.12.0. From the changelog for Unraid 6.12.0 I see they updated from Docker version 20.10.18 to Docker version 23.0.6. There are several open issues about Docker on the Unraid side. What I was hoping for is to be able to compare a log file covering a short span of minutes; that would really be useful. On my system it's a slow climb over about 7 hours, which is how long my library scan takes.
Neminem 617 Posted June 28, 2023 Posted June 28, 2023 (edited) @CarloI do not think this is a Emby issue, in my case. I have other dockers showing the same behavior. So in my case it's about Unraid and the docker engine version, they updated to. Since this all started after the Unraid update and the docker engine they added. Im still testing the memory limit I set on emby, if that continues to works great. Then I will test with other dockers. I only posted, so that op could se if he/she was on the same Docker version, and if he/she had tried with memory limit. Edited June 28, 2023 by jaycedk