I'm running the latest stable version of Emby on a CentOS 7 VM on Hyper-V 2012 R2 with the latest updates. The VM is configured with 3 GB of RAM and 4 vCPUs (the host runs an Intel Xeon E3-1230 V2 @ 3.30 GHz, 4 cores + HT, 8 logical cores). The vNIC is backed by a NIC team on the host, so the network connection is redundant and has extra bandwidth available. I only have 2 concurrent users at most, from time to time, but usually it's just me.

A couple of nights ago I was watching an episode of one of my shows with the Samsung TV app when playback suddenly stopped and the app went back to the episode information page reporting a "Network connection error". This has already happened a couple of times, always with the same app, but it was only the other night that I started to dig into it. An important detail is that the Samsung app is not doing any transcoding on the server side, just direct streaming.

So far I couldn't find any network-related issue (the network actually performs very well) or any problem in the VM or the hypervisor itself. What I did find, and what surprised me, is that the emby-server logs show more than a few responses taking a very long time, on the order of a couple of seconds or even more. While tailing the emby-server log during normal playback I also checked CPU, disk and memory usage. Since no transcoding is done, the CPUs are practically idle, less than 2% used. The disks (RAID 1+0 on an HP Smart Array P222 hardware RAID card) are also barely used, yet I keep seeing responses from time to time that take more than 2 or 3 seconds. Another thing that caught my attention is that many, many responses return a code 500 (internal server error?), but I don't see any effect on the user experience at all.

Please find attached a couple of server logs in case you want to take a look at them. Right now I'm pretty much lost as to why these responses return a 500 code, and even more surprised that the response times are sometimes so high even when the VM is practically idling.

server-63584956811.txt.gz server-63585043210.txt.gz server-63585129600.txt.gz server-63585206083.txt.gz server-63585206858.txt.gz server-63585216010.txt.gz
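For anyone who wants to scan the attached logs the same way, here is a rough Python sketch that pulls out the 500 responses and anything slower than about 2 seconds. The "HTTP Response ..." / "Time: ...ms" line format is an assumption on my part and may differ between Emby versions, so adjust the regexes to match your own log.

#!/usr/bin/env python3
# Rough sketch: list HTTP 500 and slow responses from an uncompressed emby-server log.
# The log line format matched here is assumed, not guaranteed; tweak the regexes as needed.
import re
import sys

SLOW_MS = 2000  # flag anything slower than 2 seconds

status_re = re.compile(r"HTTP Response (\d{3})")   # assumed status-code fragment
time_re = re.compile(r"Time:\s*([\d.]+)\s*ms")     # assumed elapsed-time fragment

log_path = sys.argv[1] if len(sys.argv) > 1 else "server.txt"

with open(log_path, errors="replace") as log:
    for line in log:
        status = status_re.search(line)
        elapsed = time_re.search(line)
        code = int(status.group(1)) if status else None
        ms = float(elapsed.group(1)) if elapsed else None
        if code == 500 or (ms is not None and ms >= SLOW_MS):
            print(line.rstrip())

Running it as "python3 scan_log.py server-63584956811.txt" (after gunzip) prints only the suspicious lines, which makes it easier to see whether the 500s and the multi-second responses line up with the playback drops.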