ryzen5000 17 Posted September 10, 2024

Hello, I am wondering why, when my users download media remotely from Emby for offline viewing on camping trips, my docker utilization goes up to 75% and I get warnings, and then the docker image goes back to its normal size after they finish downloading. Are my volume mappings not set right, so the data is being stored in the image instead of persistent storage? Is Emby temporarily storing files in an unmapped path inside the docker image as it downloads, before moving them into the mapped storage area? Can I fix this by modifying the template, e.g. mapping a cache or data path to a share on my Unraid server? Which path do I need to map to persistent storage so that people can download media for offline viewing without errors?
seanbuff 1315 Posted September 10, 2024

24 minutes ago, ryzen5000 said: Can I fix this by modifying the template, e.g. mapping a cache or data path to a share on my Unraid server?

Any unmapped paths can indeed cause the docker image file to become bloated. Ensure you have your appdata path mapped; internally Emby should then be able to write to the default /sync dir.

I would also check whether your users are converting before downloading, which will cause a spike in transcoding data during the conversion task. It's controlled via this Emby user permission:
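(As a sketch, one quick way to confirm whether a suspect container path is actually backed by host storage: a path on the same filesystem device as the container root lives in the image's writable layer, while a mapped volume sits on a different device. The /config and /transcode paths below are just examples, not necessarily what your Emby UI uses.)

```shell
# Sketch: run this inside the container (e.g. `docker exec -it emby sh`).
# A path on the same filesystem device as the container root (/) lives in
# the docker image's writable layer; a mapped volume is a different device.
same_device() {
  [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

check_path() {
  if [ ! -e "$1" ]; then
    echo "$1: does not exist"
  elif same_device / "$1"; then
    echo "$1: UNMAPPED (writes here bloat the docker image)"
  else
    echo "$1: mapped to host storage"
  fi
}

# Example paths; substitute whatever your Emby settings actually point at:
check_path /config
check_path /transcode
```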
ryzen5000 17 (Author) Posted September 10, 2024 (edited)

19 minutes ago, seanbuff said: Any unmapped paths can indeed cause the docker image file to become bloated. Ensure you have your appdata path mapped; internally Emby should then be able to write to the default /sync dir.

It's mapped like this in the official Emby template:
AppData Config Path: /mnt/user/appdata/emby/
Container Path: /config

19 minutes ago, seanbuff said: I would also check whether your users are converting before downloading, which will cause a spike in transcoding data during the conversion task. It's controlled via this Emby user permission:

Edited September 10, 2024 by ryzen5000
seanbuff 1315 Posted September 10, 2024

6 minutes ago, ryzen5000 said: AppData Config Path: /mnt/user/appdata/emby/ Container Path: /config

Yes, and that should be fine, as long as you haven't specified any other 'custom' paths within the Emby UI that haven't been mapped in the docker template.

What about this?

7 minutes ago, ryzen5000 said: I would also check whether your users are converting before downloading, which will cause a spike in transcoding data during the conversion task. It's controlled via this Emby user permission:
ryzen5000 17 (Author) Posted September 10, 2024 (edited)

Conversion of media is disabled; that is a separate checkbox. Is that the one you're referring to? The files are H.264 and HEVC and shouldn't require any transcoding during download. My users are downloading with the Chrome web browser, not the official Emby Theater, if that makes any difference.

I have it mapped like this:
Host Path 2: /mnt/user/
Container Path: /mnt

It's mapped to a share on the cache drive. The conversions and transcoding paths weren't mapped; I just filled those in with the same location, if that is OK. I wonder if that will fix my problem with users downloading and the docker image filling up. I am going to test it.

Edited September 10, 2024 by ryzen5000
seanbuff 1315 Posted September 10, 2024

2 hours ago, ryzen5000 said: Host Path 2: /mnt/user/ Container Path: /mnt. It's mapped to a share on the cache drive.

FYI: if it's meant to be mapped to your cache pool, for performance reasons you should probably map the host path to /mnt/cache instead of /mnt/user. The reason is that /mnt/cache bypasses the FUSE abstraction layer and writes directly to the cache pool.
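(For context, the difference is only on the host side of the mapping; the container-side path stays the same, so Emby sees no difference. A minimal sketch of the idea as equivalent docker run flags; the pool name `cache` and the exact paths are assumptions, so use whatever your template actually shows:)

```shell
# /mnt/user/* routes through Unraid's shfs FUSE layer, while /mnt/cache/*
# (or /mnt/<poolname>/*) talks to the pool's filesystem directly, which is
# faster. Media libraries that span array disks can still go via /mnt/user.
docker run -d --name emby \
  -v /mnt/cache/appdata/emby:/config \
  -v /mnt/user:/mnt \
  emby/embyserver
```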
ryzen5000 17 (Author) Posted September 10, 2024

How can I mount to cache when my media files are not on the cache? When I map to /mnt/user I can access any path and any share I need for media files. I don't understand how mapping to /mnt/cache would let me access my media.
guunter 49 Posted September 10, 2024

8 hours ago, ryzen5000 said: Conversion of media is disabled; that is a separate checkbox... I wonder if that will fix my problem with users downloading and the docker image filling up. I am going to test it.

Why not use your RAM for transcoding? I do this to avoid unnecessary wear and tear on my SSDs.
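(For anyone wanting to try this, the usual approach is to point Emby's transcoding temp path at RAM-backed storage. A sketch with two common variants; the container name, paths, and the 8g cap are assumptions, and capping the tmpfs matters so a runaway transcode can't exhaust system RAM:)

```shell
# Option A: mount a bounded tmpfs inside the container, then set Emby's
# transcoding temporary path (Settings > Transcoding) to /transcode:
docker run -d --name emby \
  --mount type=tmpfs,destination=/transcode,tmpfs-size=8g \
  -v /mnt/cache/appdata/emby:/config \
  emby/embyserver

# Option B: map a directory under the host's /tmp (RAM-backed on Unraid,
# but shared with everything else on the host, so less isolated):
docker run -d --name emby \
  -v /tmp/emby-transcode:/transcode \
  -v /mnt/cache/appdata/emby:/config \
  emby/embyserver
```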
Neminem 1518 Posted September 10, 2024

@guunter What happens if you are watching the live Le Mans 24h race and it needs to be transcoded? Would you have enough spare RAM to do that? In my experience the transcoding temp is only cleared after the stream ends.
Neminem 1518 Posted September 10, 2024 Posted September 10, 2024 (edited) I'm not sure if op has setup an new docker path for H/H transcoding share. Edited September 10, 2024 by JayceDK
guunter 49 Posted September 10, 2024

12 minutes ago, JayceDK said: @guunter What happens if you are watching the live Le Mans 24h race and it needs to be transcoded? Would you have enough spare RAM to do that? In my experience the transcoding temp is only cleared after the stream ends.

I don't do live TV like that, so it wouldn't affect me. I do have 128GB of RAM, though, and have never seen it close to max. And I host Jellyfin, Plex, and Emby the same way.
guunter 49 Posted September 10, 2024

4 minutes ago, JayceDK said: How many streams at a time?

For me the most I get is 8, on a weekend. On the regular it's 4-5.
Neminem 1518 Posted September 10, 2024

OK, fair enough. I always use a scrap SSD for transcoding. Misuse, mistreat, abuse, and throw away.
guunter 49 Posted September 10, 2024

1 minute ago, JayceDK said: OK, fair enough. I always use a scrap SSD for transcoding. Misuse, mistreat, abuse, and throw away.

I mean, I could probably get away with using my NVMe SSD, but I have the extra RAM, so why not use it. How many users do you have that the 64GB of RAM isn't enough? Just curious, as a baseline.
Neminem 1518 Posted September 10, 2024

5 users, but I have seen 1-2 streams eat up a 500GB SSD: unattended live TV left running, or users just flat out leaving their streaming stick on when shutting off their TV.
guunter 49 Posted September 10, 2024

2 minutes ago, JayceDK said: 5 users, but I have seen 1-2 streams eat up a 500GB SSD...

Okay, that's good to know. Knowing my users, that would happen to me too...
Neminem 1518 Posted September 10, 2024

And in your case it could end up as OOM (Out Of Memory) kill issues. Guessing you are using docker.
guunter 49 Posted September 10, 2024

40 minutes ago, JayceDK said: And in your case it could end up as OOM kill issues.

Yep, Unraid, so docker life.
ryzen5000 17 (Author) Posted September 10, 2024 (edited)

I have 4 NVMe SSDs in one of my 5 servers. They are all about 2 years 9 months old now; the one that held up worst was the Crucial P3, and the highest-TB-written ones were the WD Blue, Fanxiang, and XPG Gammix S11 Pro. I don't use RAM for transcoding because I only have 64GB, and I am using it to run about 40 apps on my server, plus I want to install a bunch of VMs that I can leave running rather than shutting them all down. The following are my SSD endurance attributes, in case anyone is interested:

WD Blue SN550 1TB: Critical warning 0x00 - Temperature 30 Celsius - Available spare 100% - Available spare threshold 10% - Percentage used 30% - Data units read 733,836,299 [375 TB] - Data units written 1,050,492,052 [537 TB] - Host read commands 1,957,295,315 - Host write commands 7,151,459,702 - Controller busy time 21,083 - Power cycles 473 - Power on hours 25,346 (2y, 10m, 21d, 2h) - Unsafe shutdowns 349 - Media and data integrity errors 0 - Error information log entries 5,033 - Warning comp. temperature time 0 - Critical comp. temperature time 0 - SSD endurance remaining 70%

Crucial P3 1TB: Critical warning 0x00 - Temperature 24 Celsius - Available spare 100% - Available spare threshold 5% - Percentage used 29% - Data units read 149,640,172 [76.6 TB] - Data units written 441,829,916 [226 TB] - Host read commands 183,400,974 - Host write commands 498,689,191 - Controller busy time 5,953 - Power cycles 770 - Power on hours 976 (1m, 9d, 16h) - Unsafe shutdowns 142 - Media and data integrity errors 0 - Error information log entries 562 - Warning comp. temperature time 0 - Critical comp. temperature time 0 - Temperature sensor 1 24 Celsius - SSD endurance remaining 71%

WD Blue SN550 1TB (second unit): Critical warning 0x00 - Temperature 31 Celsius - Available spare 100% - Available spare threshold 10% - Percentage used 19% - Data units read 371,142,993 [190 TB] - Data units written 649,074,069 [332 TB] - Host read commands 703,562,826 - Host write commands 4,595,154,056 - Controller busy time 8,426 - Power cycles 472 - Power on hours 25,034 (2y, 10m, 8d, 2h) - Unsafe shutdowns 349 - Media and data integrity errors 0 - Error information log entries 4,253 - Warning comp. temperature time 0 - Critical comp. temperature time 0 - SSD endurance remaining 81%

XPG Gammix S11 Pro 2TB: Critical warning 0x00 - Temperature 31 Celsius - Available spare 100% - Available spare threshold 10% - Percentage used 25% - Data units read 728,709,937 [373 TB] - Data units written 826,199,846 [423 TB] - Host read commands 1,978,196,692 - Host write commands 2,592,275,459 - Controller busy time 40,117 - Power cycles 263 - Power on hours 23,628 (2y, 8m, 10d, 12h) - Unsafe shutdowns 177 - Media and data integrity errors 0 - Error information log entries 0 - Warning comp. temperature time 0 - Critical comp. temperature time 0 - SSD endurance remaining 75%

It seems my docker image is still filling up when I let my users download; multiple downloads at once in particular fill it completely.

Edited September 10, 2024 by ryzen5000
ryzen5000 17 (Author) Posted September 11, 2024

Here is a demonstration of my users' issue downloading from Emby. The docker image keeps slowly filling up with each additional download; it does revert back to its original size (47%) afterwards, but it was up as high as 60% in this mp4 video. If multiple people are downloading, it also becomes almost unwatchable despite my mighty processor and 64GB of RAM; docker becomes slow and sluggish. Is there any way to fix these problems? For the time being I have had to disable downloads, because I couldn't watch live TV as snappy and responsive as it should be.

Docker image during downloading.mp4
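(As a diagnostic sketch while a download is in progress, the writable-layer sizes can be watched from the Unraid host; the "Size" column in `docker ps -s` is data written inside the image rather than to mapped volumes. The `emby` container name is an assumption, and the `du` flags assume GNU coreutils inside the container:)

```shell
# Per-container writable-layer size, to confirm which container is growing:
docker ps -s --format 'table {{.Names}}\t{{.Size}}'

# Inside the suspect container, find the large unmapped files; -x stays on
# one filesystem, so mapped volumes are skipped:
docker exec emby sh -c 'du -xh -d 2 / 2>/dev/null | sort -rh | head -20'
```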
guunter 49 Posted September 11, 2024 (edited)

4 hours ago, ryzen5000 said: Here is a demonstration of my users' issue downloading from Emby. The docker image keeps slowly filling up with each additional download...

Have you tried moving the Emby appdata to a separate drive? The appdata for Emby doesn't have to be shared with your other dockers; you can point it to a different mount.

Edited September 11, 2024 by guunter
ryzen5000 17 (Author) Posted September 11, 2024 (edited)

That's something to consider, except that I run appdata backup on all my docker containers so that I can restore them in case they fail one day. If I chose another location for appdata, I would only have the Emby backups to fall back on. The way it's set up now, I can restore the XML and the container in a few moments should Emby fail to start. I have had to delete the library database and start over before, and users are unhappy if they lose their 'continue watching' positions and have to search for the titles again manually.

Edited September 11, 2024 by ryzen5000
guunter 49 Posted September 11, 2024

34 minutes ago, ryzen5000 said: That's something to consider, except that I run appdata backup on all my docker containers so that I can restore them in case they fail one day.

You can add more than one location to back up, if you're talking about the backup/restore plugin in Unraid.
ryzen5000 17 (Author) Posted September 11, 2024

I see that I can add another line where it says Appdata Sources. OK, thanks. I will try moving it to a different location and see if that improves performance.