me@jackbenda.com (Author)
Posted April 17

I've been experimenting with Emby's built-in conversion feature and ran into a problem I suspect others have hit too: conversions writing to the system disk until it's completely full. The Conversions settings page only gives me two controls: a temporary file path and the full speed toggle. There's no native option to limit how much disk space conversions are allowed to use, or to cap the queue size.

My setup: Emby running natively on Ubuntu Server 24.04, 231GB NVMe system disk. When I kicked off a batch of conversions, Emby happily wrote to /var/lib/emby/sync/ until the drive was full. No warnings, no throttling, just a dead server.

A few questions for the community:

1. Is there a setting I've missed that limits conversion disk usage? I've only found the temporary file path and the speed toggle.
2. Is there a way to limit how many conversions run concurrently, to at least control the rate at which disk is consumed?
3. For those running conversions on a system disk: what's your approach? Redirect to a separate drive, or something else?

If this genuinely isn't possible natively, I'll put in a feature request: a maximum disk usage cap for conversions seems like a fairly essential guardrail.

Keen to hear how others are handling this before I go the OS-level workaround route: create a dedicated directory on the NVMe for conversions, then enforce a hard size limit using a fixed-size loopback mount. Emby hits the ceiling and stops, and the system disk is safe. I'd point the "Temporary file path" at that mount; the cap is whatever I set, say 50GB, and nothing else on the disk is at risk. I'm just worried that could freeze Emby up entirely, though.
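For reference, here's roughly what the loopback workaround would look like (untested sketch; the paths and the 50GB figure are just examples, and I'm assuming the service user is emby, the default for the native package):

```bash
# Create a 50GB file to back a dedicated conversions filesystem
sudo fallocate -l 50G /var/lib/emby-conversions.img

# Put a filesystem on it (-F because the target is a regular file, not a block device)
sudo mkfs.ext4 -F /var/lib/emby-conversions.img

# Mount it and hand it to the Emby service user
sudo mkdir -p /mnt/emby-conversions
sudo mount -o loop /var/lib/emby-conversions.img /mnt/emby-conversions
sudo chown emby:emby /mnt/emby-conversions

# Make it survive reboots
echo '/var/lib/emby-conversions.img /mnt/emby-conversions ext4 loop 0 0' | sudo tee -a /etc/fstab
```

Then I'd point the Conversions "Temporary file path" at /mnt/emby-conversions. Worst case it fills to 50GB and writes start failing with ENOSPC, which is exactly the failure mode I'm not sure Emby handles gracefully.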
Q-Droid
Posted April 17

Conversions should not fill up the space unless there's a problem with the job or a bug. Each file that's converted is copied back into the library based on the settings (replace or new version), and the temporary work file should be deleted.

If there's a problem copying the file back then work files could be left behind, usually because the library folder (directory) or the source media file itself has the wrong ownership and/or permissions, which keeps Emby from creating or overwriting. Other problems or failures during conversion could also leave files behind.

The activity should be logged along with possible errors. Enable debug logging to get more detail in the logs.
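A quick way to sanity-check from a shell (assuming the native package's default emby service user; the library path here is an example):

```bash
# Confirm which user the server runs as
ps -o user= -C EmbyServer

# Check that user can actually create and delete files in the library folder
sudo -u emby touch /path/to/library/.write-test && sudo -u emby rm /path/to/library/.write-test

# Inspect ownership and permissions on the folder and a source file
ls -ld /path/to/library
ls -l /path/to/library/episode.mkv
```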
Luke
Posted April 17

Quote
If there's a problem copying the file back then work files could be left behind, usually because the library folder (directory) or the source media file itself has the wrong ownership and/or permissions, which keeps Emby from creating or overwriting.

Hi, yes this is the most common reason for conversions taking up a lot of disk space.
me@jackbenda.com (Author)
Posted April 17

This is a really helpful pointer. This issue arose just before I went on holiday on the 1st of April, and I don't have those logs any more. I've just updated all the permissions and I'm running a few things as a trial run. Hopefully this will work! Big thanks to both.
me@jackbenda.com (Author)
Posted April 18

No, I don't think so. I double-checked and I did already have all the permissions set. I honestly don't know what happened last time, but it does seem to be working now, and I've got a Telegram bot to warn me if the directory starts ballooning again.
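In case it's useful to anyone, the watchdog is just a cron job along these lines (sketch; the threshold, the path, and the <TOKEN>/<CHAT_ID> placeholders are things you'd fill in yourself):

```bash
#!/usr/bin/env bash
# Warn via Telegram if the conversion temp directory grows past a threshold.
# Run from cron, e.g.: */10 * * * * /usr/local/bin/check-emby-temp.sh

LIMIT_KB=$((50 * 1024 * 1024))   # 50GB expressed in KB
DIR=/var/lib/emby/sync           # conversion temp path
USED_KB=$(du -sk "$DIR" | cut -f1)

if [ "$USED_KB" -gt "$LIMIT_KB" ]; then
    # Telegram Bot API sendMessage call
    curl -s "https://api.telegram.org/bot<TOKEN>/sendMessage" \
        --data-urlencode "chat_id=<CHAT_ID>" \
        --data-urlencode "text=Emby conversion temp dir is at ${USED_KB}KB, over the ${LIMIT_KB}KB limit"
fi
```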
me@jackbenda.com (Author)
Posted April 18

OK, so after enabling debug logging and running some test conversions, I can confirm the conversion feature itself works well: QuickSync hardware encoding, files writing correctly to the temp directory, replacements landing in the right place. No permissions issues (the emby user is in the correct group with write access to the NAS shares).

The problem I hit was a resilience issue rather than a conversion bug. Here's what happened:

1. Emby was mid-conversion, writing finished files back to the NAS via the Transfer media task.
2. My (slightly crap) NAS became temporarily unresponsive under load.
3. The CIFS/SMB mount hung at the kernel level, which I think is a known Linux behaviour where processes with open file handles on an SMB share block indefinitely waiting for the server to respond.
4. Emby's Transfer media task stalled on "Stopping" and couldn't be cancelled from the UI.
5. The only recovery was a full server reboot.

This isn't really an Emby bug, it's a Linux kernel CIFS behaviour. But it does raise a question about whether Emby could handle NAS disconnects more gracefully during conversion transfer, perhaps with a configurable timeout or retry mechanism rather than hanging indefinitely.

On the Linux side, the fix is to add echo_interval and echo_retries options to the fstab CIFS mount (example below), which reduces the time before the kernel declares the server dead from ~5 minutes to ~60 seconds. But even with that, processes in uninterruptible sleep will still hang temporarily.

Is there a cleaner way to handle this? Specifically, does Emby have any internal timeout on file transfer operations that could be configured?
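For anyone searching later, the fstab entry I'm trying looks something like this (illustrative; the server, share, and credentials paths are placeholders). echo_interval is the documented cifs mount option for tuning how quickly the kernel notices a dead server; I've also added soft, which makes stalled operations return an error instead of blocking forever:

```
//nas.local/media  /mnt/media  cifs  credentials=/etc/smb-credentials,uid=emby,gid=emby,soft,echo_interval=15,_netdev  0  0
```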
Q-Droid
Posted April 18

If you're running Linux to Linux then NFS might be more reliable.
Luke
Posted April 18

Quote
4. Emby's Transfer media task stalled on "Stopping" and couldn't be cancelled from the UI.

When you stop any scheduled task, the stopping occurs at the next point when it is possible to stop. Since the raw file transfers can't be cancelled in-progress, the stop doesn't occur until the file transfer either succeeds or fails. Thus there is no stall, just a delay.
me@jackbenda.com (Author)
Posted April 18

OK, that makes sense, though it was a four-hour delay last time. I will embark on remounting to NFS at some point.

One other thing to ask quickly: one episode from a series failed (for some reason) and I can't seem to find a way to tell it to try again... Maybe I'm being dense, but do I just have to cancel the whole job and then re-do it?
Luke
Posted April 19

It will try again on the next run of the server scheduled task.
me@jackbenda.com (Author)
Posted April 19

So I had a server reboot last night (Docker had a meltdown and had to reboot), and a bunch of my conversions failed. Logs:

Quote
2026-04-18 01:35:58.002 Info App: Disposing MediaEncoder
2026-04-18 01:35:58.002 Info App: Disposing SsdpDevicePublisher
2026-04-18 01:35:58.011 Info VideoEncoder: AppendExtraLogData - Read graph file: /var/lib/emby/logs/ffmpeg-transcode-e66f64af-e975-479d-a706-ba919e724395_1graph.txt
2026-04-18 01:35:58.012 Info VideoEncoder: AppendExtraLogData - Deserialized GraphData fileStream: 13,188.00 bytes Graph Count: 1
2026-04-18 01:35:58.012 Info VideoEncoder: AppendExtraLogData - File Deleted
2026-04-18 01:35:58.013 Info VideoEncoder: ProcessRun 'Encoding e66f64' Process exited with code 255 - Failed

I'm trying to run the scheduled task manually and they just won't start up again. Am I going to have to delete all my conversion tasks and start from scratch? This would be a bit of a pain because I dialled in about 50 conversion jobs to slightly different specs each time...
me@jackbenda.com (Author)
Posted April 20

The scheduled task runs fine, it's not that it won't start. The issue is that the individual job items that were marked as failed during the reboot aren't being picked up and retried when the task runs again. They sit in a failed state permanently. The task completes, but those specific items remain failed with no way to retry them short of deleting the jobs and re-queuing from scratch.

Is there a way to reset failed job items so they get picked up on the next run, without having to delete and re-create the entire conversion job?
Q-Droid
Posted April 20

Have you tried removing the individual failed items from the conversion list? I believe that might re-queue them the next time the scheduled task runs.
me@jackbenda.com (Author)
Posted April 20

Tried that with a single Sopranos episode: removing the failed item just deleted it permanently, it didn't re-queue it on the next run. Is there any way to reset failed items back to a pending state without deleting and re-creating the entire conversion job? If not, that seems like a useful feature to have... especially when failures are caused by something external like a server reboot rather than a problem with the file itself.

Also, as an FYI: I remounted my NAS to NFS and, in the process of investigating, discovered someone had tried to drop ransomware on it. The TOS web UI on my ancient TerraMaster had been exposed to the internet for a while before I got WireGuard set up, and an automated scanner had exploited a command injection vulnerability in the shared folder permissions UI: it injected shell commands as fake usernames, which got executed by the backend. It had planted a file-upload web shell and staged a ransomware binary, but the binary was x86-64 and the NAS is aarch64, so it couldn't actually run. The web shell had no hits in the nginx access logs either, so the data appears to be intact. Cleaned everything out directly from the SQLite config database via SSH.

So thank god you guys prodded me to do the NFS migration, or I probably wouldn't have found it for a long time!
Q-Droid
Posted April 21

Quote
1 hour ago, me@jackbenda.com said:
Tried that with a single Sopranos episode: removing the failed item just deleted it permanently, it didn't re-queue it on the next run.

That's odd. It's not how it behaves for me, unless the episode already matches the desired output and there's nothing for it to do. Even then I think they go back on the list as "Converted". I wish there was a way to remove or age off this history, because there are hundreds of items in my job. But I'm running an older version (4.9.1.90) and the latest stable or beta releases might handle re-queuing differently.

Sucks about the malware, good thing you found it.
Luke
Posted April 21

We used to automatically retry on every run, but then this led to the problem of tying up server resources by trying and failing repeatedly. So currently you have to remove and re-add, although there's room for improvement there.
me@jackbenda.com (Author)
Posted April 21

Yeah, that makes sense. Honestly, if we just had a retry button next to groups of jobs / individual episodes, that'd be super helpful. Is there still a retry command I could prod the server with through the terminal? Or no?
Q-Droid
Posted April 21

How many failed items do you have? If just a few, you could manually convert them individually and be done. Then hope no new ones fail now that you've reconfigured the storage. If it's a long list there might be an option through the API.
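If you want to poke at it, something like this is where I'd start (sketch only; I haven't verified these against the latest server, so check the live API docs on your own install, and the api_key/host/Id values are placeholders):

```bash
# List scheduled tasks and find the Id of the conversion/transfer task
curl -s "http://localhost:8096/emby/ScheduledTasks?api_key=<API_KEY>"

# Kick a task manually by Id
curl -s -X POST "http://localhost:8096/emby/ScheduledTasks/Running/<TASK_ID>?api_key=<API_KEY>"

# Conversions write to /var/lib/emby/sync, which suggests they use the sync
# machinery, so the job queue may be visible under the Sync endpoints
curl -s "http://localhost:8096/emby/Sync/Jobs?api_key=<API_KEY>"
```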
me@jackbenda.com (Author)
Posted April 21

Bloody loads of them, haha. Maybe 30, some of which are half-finished TV series and the rest are films. Each is dialled in slightly differently too, depending on the film (1930s-1970s films more aggressively compressed, same for cartoons, etc.).
me@jackbenda.com (Author)
Posted April 23

Any ideas on how I could wrangle the API into restarting the tasks? Or am I resigned to an evening of writing out all my configs and setting it off again?
Q-Droid
Posted April 23

I don't see why you would have to redo the higher-level conversion jobs, unless of course it would take less work to recreate them than just dealing with the failed items. But an entire evening? You can do one-off conversions of movies, episodes, or entire series to catch up. Around 30 of them doesn't sound like all that much.
me@jackbenda.com (Author)
Posted April 23

Yeah, I guess not. I think the problem is I'm scared of getting stung again, and I'd like to have guard rails to restart the job. What I don't want to do is go through and reset everything only for another failure... I suppose with NFS set up now it's less likely, but meh, I'd rather not throw good time after bad...