
FreeNAS jail crashing...high disk activity?


Dunuin


Hi,

 

I was running the FreeNAS plugin for months without a problem, and now I have created my own Emby jail to get a bit more flexibility.

The jail and Emby run fine, but every few hours the jail seems to crash the whole system. First the system gets extremely slow and unresponsive, the FreeNAS middleware crashes, it takes seconds to minutes to open a folder on the SMB share, and then the FreeNAS WebGUI, the Emby WebGUI, and SSH stop answering entirely. Sometimes I can use the mainboard's IPMI web interface to shut the system down properly if the shell still works there, but sometimes even that doesn't work.

 

If I run "top" to check the CPU (Xeon E3-1230 v3) and RAM (32GB ECC), they aren't busy: around 25-50% CPU usage and 16GB of RAM free. Could it be that Emby creates so much disk access that the system slows down because the HDDs/SSDs stop answering? I can't see the disk usage in the FreeNAS WebGUI because the service that collects that data is crashing or not answering either.
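Since that reporting service is down, disk activity can still be watched directly from a FreeNAS shell (via SSH or the IPMI console while it still responds). A minimal sketch using standard FreeBSD tools:

gstat -p       # per-disk I/O load and latency (%busy), physical providers only
iostat -x 1    # extended per-device statistics, refreshed every second

If the pools are the bottleneck, gstat should show the disks pinned near 100% busy while the library scan runs.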

Starting a library scan really makes the SMB shares unresponsive.

 

FreeNAS is installed on two mirrored 120GB SSDs.

My movies, music, etc. are stored on a 4x 8TB HDD ZFS RAID pool.

My jails are stored on a 3x 480GB SSD ZFS RAID pool.

 

The FreeNAS version is 11.2-U7, the jails run 11.3-p5, and I installed Emby version 4.3.1.0 from the "latest" rather than the "quarterly" pkg repo.
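For reference, pointing a jail's pkg at the latest branch instead of quarterly is normally done with a repository override file; a minimal sketch following the standard pkg convention (the actual file in my jail may differ):

# /usr/local/etc/pkg/repos/FreeBSD.conf (inside the jail)
FreeBSD: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
}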

 

Can someone point me in the right direction to find out what's going wrong?


I was doing a "scan all libraries" task, and that worked fine; the system stayed responsive. Right after it finished, SMB, Emby, and FreeNAS became unresponsive. FreeNAS also printed this error on the console:

Jan  6 20:49:35 BM-Homeserver uwsgi: [sentry.errors:674] Sentry responded with an API error: RateLimited(None)
Jan  6 20:49:35 BM-Homeserver uwsgi: [sentry.errors.uncaught:702] ['ClientException: Failed connection handshake', '  File "django/core/handlers/exception.py", line 42, in inner', '  File "django/core/handlers/base.py", line 244, in _legacy_get_response', '  File "freenasUI/freeadmin/middleware.py", line 296, in process_request', '  File "freenasUI/middleware/auth.py", line 8, in authenticate', '  File "freenasUI/middleware/client.py", line 20, in __enter__', '  File "middlewared/client/client.py", line 316, in __init__']

Now, one hour later, it is responsive again without restarting the host or the jail.

 

There are also some errors like this:

ImageMagickSharp.WandException: ImageMagickSharp.WandException: unable to open image `/var/db/emby-server/metadata/library/6e/6e84ff7786e2f92e2f0f72ff141aadb9/backdrop.jpg': Too many open files @ error/blob.c/OpenBlob/2881
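That "Too many open files" error means a file descriptor limit is being hit. A sketch of how to check it with standard FreeBSD tools (mono-sgen is the Emby server process name taken from the dmesg log below; pgrep may return several PIDs):

sysctl kern.maxfiles kern.openfiles     # system-wide cap and current usage (jails share the host kernel)
ulimit -n                               # per-process soft limit in this shell
procstat -f $(pgrep mono-sgen) | wc -l  # rough count of descriptors held by the Emby process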

The dmesg log of the jail from today:

...
vnet0:9: promiscuous mode enabled
sonewconn: pcb 0xfffff800ba53fcb0: Listen queue overflow: 193 already in queue awaiting acceptance (1 occurrences)
sonewconn: pcb 0xfffff800ba53fcb0: Listen queue overflow: 193 already in queue awaiting acceptance (7 occurrences)
sonewconn: pcb 0xfffff800ba53fcb0: Listen queue overflow: 193 already in queue awaiting acceptance (6 occurrences)
...
sonewconn: pcb 0xfffff800ba53fcb0: Listen queue overflow: 193 already in queue awaiting acceptance (8 occurrences)
sonewconn: pcb 0xfffff800ba53fcb0: Listen queue overflow: 193 already in queue awaiting acceptance (4 occurrences)
pid 76333 (mono-sgen), uid 89: exited on signal 6
...
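Those sonewconn messages mean the accept queue of some listening socket is full, with 193 connections waiting. A sketch of how to identify the socket and raise the ceiling as a test (netstat -L and the kern.ipc sysctl are standard FreeBSD; 512 is just an example value, and the change does not survive a reboot):

netstat -Lan                    # listening sockets with queue depths/limits; match the overflowing one
sysctl kern.ipc.somaxconn       # global accept-queue ceiling (alias of kern.ipc.soacceptqueue)
sysctl kern.ipc.somaxconn=512   # temporarily raise it as a test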

This is the disk report from that time for one of the HDDs in the media RAID array:

[attached screenshot: disk report for one media-pool HDD]

 

And this is one of the SSDs containing the jails:

[attached screenshot: disk report for one jail-pool SSD]

 

 

Attachments: log.txt, top_freenas.txt, top_emby_jail.txt

