
High Latency - TCP vs UDP


Solved by yocker


anothergraham
Posted

Relatively new Emby Premiere user here, and so far I'm very excited about the possibilities. 

I'm usually based in Europe but travel to Asia a lot, and was hoping to be able to stream my content when traveling. However, despite having good bandwidth at both ends (1 Gbps server side and 500 Mbps client side), I can only achieve 2-3 Mbps reliably when streaming - just about good enough for a full HD stream using an H.265 transcode.

Latency between client and server is 330ms RTT (ouch) <- This is the problem.

iperf3 results:
TCP: 2-3 Mbps
UDP: reliably 40-50 Mbps without packet loss (after the first second).
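
For reference, the test invocations were along these lines (server name is a placeholder, and the exact flags are from memory):

iperf3 -c emby.example.net             (TCP, default settings)
iperf3 -c emby.example.net -u -b 50M   (UDP at a 50 Mbit/s target rate)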

This leads me to believe that I'm suffering from the TCP Window effect (see "bandwidth-delay product"). To quote from those wiser than I:

>>>

High latency conditions can impede the sender as it might have to pause and wait for acknowledgments before it can transmit more data, effectively reducing the bandwidth of the connection — a concept known as the “bandwidth-delay product.”

This phenomenon is the TCP window effect, where throughput is constrained by high latency.

<<<
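
Back-of-the-envelope, TCP can move at most one window per round trip, i.e. throughput ≈ window / RTT. A quick sketch with my measured RTT (the window sizes are illustrative assumptions - the real one in play depends on OS tuning and the path):

def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_seconds / 1e6

rtt = 0.330  # seconds - my measured Europe <-> Asia RTT

for window_kb in (64, 128, 1024):
    mbps = max_tcp_throughput_mbps(window_kb * 1024, rtt)
    print(f"{window_kb:>5} KB window -> {mbps:5.1f} Mbit/s ceiling")

# Prints roughly:
#    64 KB window ->   1.6 Mbit/s ceiling
#   128 KB window ->   3.2 Mbit/s ceiling   <- about what I actually see
#  1024 KB window ->  25.4 Mbit/s ceiling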

To test the theory, I tried iperf3 from our office in Asia (same ISP), where we have an SD-WAN product that offers "TCP acceleration" to mitigate exactly this issue: a POP local to the sender (the Emby server) answers with ACKs at low RTT, so the sender doesn't have to wait for the real recipient's ACKs to arrive after the full delay. It works: 40+ Mbps is achievable without issue with TCP acceleration enabled.

So, all this to ask: Is it possible to somehow force clients to stream from the server using UDP instead of TCP to avoid this TCP Window effect? 

yocker
  • Solution
Posted

Without knowing for sure, my best bet would be looking into a VPN for that.

Just remember that, depending on where in Asia you are, VPNs might be illegal - in China, for example.

rbjtech
Posted (edited)
6 hours ago, yocker said:

Without knowing for sure, my best bet would be looking into a VPN for that.

Just remember that, depending on where in Asia you are, VPNs might be illegal - in China, for example.

A VPN 'may' work better, but only if it's using a lightweight (UDP-based) protocol and all connections go through it. If you just connect to the VPN via TCP, then you are adding latency, not removing it. Maybe worth trying for sure.

The only way to get lower latency on a WAN connection is to lower the hop count (which translates into 'distance'), or to use an optimised WAN-based protocol - UDP, QUIC, Lightway etc. are options as long as the application layer supports them. HTTP (a web server) does not work over raw UDP! The issue with raw UDP is that it has no form of session/error control, so it is usually paired with 'bolt on' protocols - RTSP, RTP, WebRTC etc. - or used by apps that don't 'care' if data is lost (NTP doesn't care, for example; it just requests the time again). Or bring the content 'closer' to you - meet CDNs ;)
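
To illustrate the 'no session/error control' point, here's a minimal Python sketch (address and port are placeholders) - the sender fires the datagram off with no handshake, no ACK, and nothing is ever retransmitted:

import socket

# Raw UDP is fire-and-forget: if the datagram is lost in transit, it's gone.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
sock.sendto(b"frame-0001", ("198.51.100.7", 9000))  # placeholder address

try:
    data, addr = sock.recvfrom(2048)  # arrives only if the application replies
except socket.timeout:
    data = None  # TCP would retransmit here; with UDP that's the app's job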

I did an experiment a few years ago now, when I streamed Emby (over TCP) using various VPN endpoints around the world. As you say, I think I got to several hundred ms of latency before it all rapidly fell apart. That was before Lightway etc. even existed - so I may have to revisit... 🤔 I'll try and dig up the thread.

Edited by rbjtech
anothergraham
Posted

Thanks for the feedback - yeah, I realize you can't just run a whole web interface over UDP, but I was wondering if the internal Emby parts that do the actual media streaming from server to client could be set to use UDP instead of TCP. It doesn't sound like it, and I'm sure there are good reasons for that.

I thought the VPN idea was pointless initially, so I hadn't tried it until it was suggested here. But thinking about it, a well-placed VPN could split the long path between server and client into two segments, in theory roughly halving the RTT on each segment and so easing the TCP window issue. A good VPN provider may also have better routes than I'm getting from my ISP - fewer hops, etc. So I tried Proton VPN with the exit server placed in Singapore, and the results speak for themselves:

[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   806 KBytes  6.60 Mbits/sec    0    137 KBytes       
[  5]   1.00-2.00   sec  2.22 MBytes  18.6 Mbits/sec    0    245 KBytes       
[  5]   2.00-3.00   sec  4.57 MBytes  38.3 Mbits/sec    0    436 KBytes       
[  5]   3.00-4.00   sec  6.97 MBytes  58.5 Mbits/sec    1    602 KBytes       
[  5]   4.00-5.00   sec  7.22 MBytes  60.6 Mbits/sec    0    611 KBytes       
[  5]   5.00-6.00   sec  6.54 MBytes  54.9 Mbits/sec    0    619 KBytes       
[  5]   6.00-7.00   sec  5.92 MBytes  49.7 Mbits/sec    0    625 KBytes       
[  5]   7.00-8.00   sec  6.05 MBytes  50.7 Mbits/sec    0    645 KBytes       
[  5]   8.00-9.00   sec  8.58 MBytes  72.0 Mbits/sec    0    685 KBytes       
[  5]   9.00-10.00  sec  9.19 MBytes  77.1 Mbits/sec    0    742 KBytes       
[  5]  10.00-11.00  sec  9.01 MBytes  75.6 Mbits/sec    0    815 KBytes       
[  5]  11.00-12.00  sec  8.15 MBytes  68.3 Mbits/sec    0    914 KBytes       
[  5]  12.00-13.00  sec  12.4 MBytes   104 Mbits/sec    0   1.02 MBytes       
[  5]  13.00-14.00  sec  12.3 MBytes   103 Mbits/sec    0   1.17 MBytes       
[  5]  14.00-15.00  sec  15.0 MBytes   126 Mbits/sec    0   1.35 MBytes       
[  5]  15.00-16.00  sec  11.2 MBytes  94.4 Mbits/sec    0   1.57 MBytes       
[  5]  16.00-17.00  sec  13.8 MBytes   115 Mbits/sec    0   1.83 MBytes       
[  5]  17.00-18.00  sec  17.5 MBytes   147 Mbits/sec    0   2.14 MBytes       
[  5]  18.00-19.00  sec  22.5 MBytes   189 Mbits/sec    0   2.48 MBytes       
[  5]  19.00-20.00  sec  12.5 MBytes   105 Mbits/sec    3   1.89 MBytes       
[  5]  20.00-21.00  sec  10.0 MBytes  83.9 Mbits/sec    9   1.41 MBytes       

Tada! I can certainly live with that.

With regard to routes, I also note that the total RTT over the two segments is now only 145 ms (server to VPN server) + 85 ms (client to VPN server) = 230 ms, so I've shaved 100 ms off the end-to-end RTT by using the VPN. Looking at the routes, this is because my ISP favors sending traffic between Asia and Europe via the US, which is a far longer way around than going via the Middle East - probably because it's cheaper from a peering perspective.

Thanks for the help here! 

Posted

What is the VPN provider using for the transport? That will have a big influence here, I think. ExpressVPN gives you the option of TCP, UDP or Lightway, for example. I'm planning to experiment when I get some time - it's an interesting topic (for me, anyway..) ;)

anothergraham
Posted

These tests were done using WireGuard on my laptop, which runs over UDP.

I'm using a Google Chromecast 4K to do the actual streaming, and I believe Proton's Android VPN app defaults to WireGuard too, but maybe there are options for others.
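
For anyone curious, the relevant detail is that WireGuard's Endpoint is a UDP host:port (51820 is the default port). A minimal client config sketch, with placeholder keys and addresses:

[Interface]
PrivateKey = <client-private-key>
Address = 10.2.0.2/32

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0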
