Direct play takes forever to start, transcode is ok



Zaphod414

I am using Cloudflare as my public DNS, but I am bypassing their proxy caching.  This setup hasn't changed in over a year. 

I believe I have fixed my issue, though I don't have a good explanation of how or why. I am using NFS to access the media that I am sharing out, so just to see whether it made any difference I tried Samba shares instead. Well, it did: the delay was gone when using Samba shares instead of NFS. That was unexpected, but encouraging.

I'm not going to use Samba to share files between Linux hosts as a permanent solution while there's a native option, so I started playing around with my NFS settings. I noticed that the NFS service on my storage array (TrueNAS) was set to NFS v3. I switched it to NFS v4 and adjusted the mounts on my Docker host to use NFS v4 as well. I'm not sure why, but that has fixed the delay I was having. Videos now start playing quickly whether transcoding or direct playing, and the delay is also gone when connecting remotely. I wish I had a better analysis of the whole issue, but for now I'm just glad it's gone. I have no idea how simply switching from NFS v3 to NFS v4 made the delay vanish, or how NFS v3 could have caused it in the first place, especially since it affected only direct play sessions while transcoding seemed immune. It's all working nice and slick now, though.
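
For anyone who wants to try the same change, it boils down to the nfsvers option on the client mounts. A minimal before/after sketch of an /etc/fstab entry; the hostname and paths are placeholders, not my real ones:

# before: export mounted with NFS v3
# truenas.local:/mnt/tank/media  /mnt/media  nfs  nfsvers=3,rw,noatime  0  0

# after: same export mounted with NFS v4
truenas.local:/mnt/tank/media  /mnt/media  nfs  nfsvers=4,rw,noatime  0  0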


12 hours ago, Zaphod414 said:

It's all working nice and slick now, though.

Thanks for the feedback.


  • 1 month later...
Zaphod414

Hi,

No caching involved. The issue was present locally as well as remotely, and Cloudflare caching would only affect the remote experience. Incidentally, I do use Cloudflare as my external DNS, but with caching disabled. My problem turned out to be something with my NFS mappings. I modified the mappings in fstab to switch to NFS v4 and the problem went away. Switching to Samba shares also fixed it. So just something funky with my NFS mounts... which I've been using for 3 years without issue. Docker didn't seem to like them.
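
If you want to confirm which version an NFS mount is actually using, nfsstat will report it; this is a generic check rather than anything specific to my setup:

nfsstat -m
# lists each NFS mount point with its flags, including vers=3 or vers=4.x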


Vaddale

I have exactly the same problem as Zaphod414 describes. My Emby server also runs as a Docker container on unRAID, and Cloudflare handles the DNS without caching. Everything went fine for months. Since the update described above, it is no longer possible to start direct streaming remotely; several clients have been tested from different remote locations, always with the same result: the spinner rotates endlessly and then gives up after about 30 seconds. Playback only works when transcoding. My files are local, on the same server the Emby Docker container is installed on, running unRAID 6.9. My Emby server Docker version is 4.6.7.0. What can I do? Thanks for the help.

I already have the SMB share added in my folder shares inside Emby, but I don't want to create new users on my unRAID system when they already exist in Emby. I don't think this can be the solution anyway, since everything worked before the December 2021 update. Thanks.


Vaddale

Update: It doesn't matter whether I remove the optional SMB share or leave it in place. Remote playback is still not working.


29 minutes ago, Vaddale said:

Update: It doesn't matter whether I remove the optional SMB share or leave it in place. Remote playback is still not working.

Hi, try lowering the in-app quality setting so that the server will convert it to a lower bitrate.


Vaddale
20 hours ago, Luke said:

Hi, try lowering the in-app quality setting so that the server will convert it to a lower bitrate.

Hi, thank you for your answer, but in my eyes this is only a workaround, not a solution, sorry.

I can tell you that yesterday I installed a Jellyfin Docker container with exactly the same rules and configuration as my Emby container. The result: direct streaming works perfectly remotely. No lag, nothing. Simply good.

For me that is one more argument, besides the fact that my Emby and server configuration worked for many, many months until your December update. Since that day, direct streaming over the internet no longer works with Emby and my Fire TV app. I am also an Emby Premiere member and I want to continue using Emby, but I expect this bug to be fixed soon, not in a year or later, given how many delayed tasks I read about here. Thank you.


GoodOleHarold
On 05/12/2019 at 18:00, Luke said:

Hi there, are you using a reverse proxy?

Out of interest, what would you suggest if the answer to that was yes?

I ask because I sometimes have this issue, especially when watching on my phone. I have seen the phone stop loading, then a few minutes later the media randomly starts playing.

I do use a reverse proxy. I do not use the likes of NFS or SMB; the media is local to Emby.


On 3/6/2022 at 4:58 PM, GoodOleHarold said:

Out of interest, what would you suggest if the answer to that was yes?

 

We've seen cases where users have had their reverse proxies configured in ways that might cause this. The leading examples we see from time to time are range request headers not being passed along properly and, most recently, the issue caused by Cloudflare caching. That is why I always refer users to @pir8radio's nginx configuration.
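
As a rough illustration only (this is not pir8radio's actual configuration), the important part is that range request headers reach the Emby server and that the proxy doesn't buffer or cache media responses; the upstream address below is a placeholder:

location / {
    proxy_pass http://127.0.0.1:8096;   # placeholder upstream; point at your Emby server
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    # pass range requests through untouched so direct play can stream and seek
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;
    # don't buffer or cache media responses at the proxy
    proxy_buffering off;
    proxy_cache off;
}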


  • 3 months later...
rhodges

I also had this error after rebuilding everything. Here is my setup:

OpenMediaVault (bare metal) sharing video library with NFS
Storage is ZFS with sync=disabled
Emby running in docker on an Ubuntu 22.04 VM hosted on Proxmox (not the same machine as OpenMediaVault)
10Gb network
Using volumes for NFS

Here is part of the docker compose:

  emby:
    container_name: emby
    image: emby/embyserver:latest
    restart: unless-stopped
    ports:
      - 8096:8096
    environment:
      - VIRTUAL_HOST=<REDACTED>
      - VIRTUAL_PORT=8096
      - UID=1000
      - GID=100
      - UMASK=002
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ${PWD}/emby/config:/config
      - ${PWD}/emby/backup:/mnt/backup
      - video:/mnt/video

volumes:
  video:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=3,addr=<REDACTED_DNS_NAME>,rw,tcp,noatime,rsize=1048576,wsize=1048576
      device: ":/export/cache"

This configuration would cause exactly 30 seconds of "buffering" or whatever. Kodi as a client, the official Emby Android TV client, web browsers, etc.: always 30 seconds.

After changing the volume, videos play instantly:

volumes:
  video:
    driver: local
    driver_opts:
      type: nfs
      o: nfsvers=4,addr=<REDACTED_IP>,rw,tcp,noatime
      device: ":/video"

I changed from NFSv3 to NFSv4, switched from using the DNS name to the IP address, and dropped the rsize/wsize arguments. I may try adding rsize/wsize back to see if that was it. So it was one of those three changes that fixed my issue.
 

Edit: I added the rsize and wsize back and that wasn't the problem. I changed to nfsvers=4.2 and it continues to work, so I'm guessing it was the change from NFS v3 to v4.
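
For reference, pinning the minor version is just a change to the options line in the volume definition above; a sketch of that one line:

      o: nfsvers=4.2,addr=<REDACTED_IP>,rw,tcp,noatime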


  • 2 weeks later...
On 6/20/2022 at 1:53 PM, rhodges said:

I changed to nfsvers=4.2 and it continues to work, so I'm guessing it was the change from NFS v3 to v4.

Great, thanks for sharing!


  • 5 months later...
deepfriedbutter

I have confirmed this is still an issue in 2022.

Docker container in an Ubuntu VM
NFS mount: almost instant transcode; direct play takes ~17 s (the 200 response takes 6000+ ms)
CIFS mount: almost instant transcode; direct play is also almost instant (see the compose sketch below)
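
For anyone who wants to compare, a CIFS share can be attached as a docker volume in much the same way as the NFS volumes shown above; the server address, share name and credentials here are placeholders:

volumes:
  video:
    driver: local
    driver_opts:
      type: cifs
      # credentials and SMB protocol version are placeholders; adjust to your share
      o: username=<REDACTED>,password=<REDACTED>,vers=3.0
      device: "//<REDACTED_IP>/video"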
 

