I usually only deal with the distribution side, as an end user, and personally I tend to watch about two live events a year (I'd watch New Year's, but that's clearly pointless, so that leaves Eurovision and maybe an election programme), so I don't have much experience with that side of things.
It did amuse me when we were looking at the latency of a programme coming over a ropey bit of connectivity that we were using ARQ on. We were debating whether we could push the latency up from 2 seconds to 6 seconds (the link kept dropping out for 2 or 3 seconds at a time), since it was sport. Then we realised there was a good 30-40 seconds of downstream processing before it even left for the CDN!
I still don't understand half of what Streampunk [1] are trying to do with their NMOS grain workflows [0], but they are talking about sub-frame HTTP units.
It's not an approach that supports line-synced timing, so it may not be appropriate for live sports action that needs extremely low latency. However, for many current SDI workflows that can tolerate a small delay, it's sufficient.
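To make "sub-frame HTTP units" a bit more concrete, the rough idea is that each grain becomes its own HTTP resource. The sketch below is purely illustrative and not arachnid's actual API; the URL layout, header names and one-PUT-per-grain model are my own assumptions.

```python
# Hypothetical sketch of "one grain = one HTTP resource". This is NOT
# arachnid's real API; the URL scheme and headers are invented for illustration.
import requests

def put_grain(base_url: str, flow_id: str, origin_timestamp: str, grain: bytes) -> int:
    """PUT a single grain (a sub-frame chunk of essence) to a hypothetical grain store."""
    url = f"{base_url}/flows/{flow_id}/grains/{origin_timestamp}"
    resp = requests.put(
        url,
        data=grain,
        headers={"Content-Type": "application/octet-stream"},
        timeout=1.0,  # grains are small, so failures should surface quickly
    )
    resp.raise_for_status()
    return resp.status_code
```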
With UHD you're talking about 20 MB for a single frame, with each "grain" (a subdivision of a frame) being in the order of a millisecond and a megabyte.
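As a sanity check on those numbers, here's the back-of-the-envelope version (assuming uncompressed 10-bit 4:2:2 UHD at 50fps; the 20-grains-per-frame split is just for illustration):

```python
# Rough UHD frame/grain sizing. Assumptions: 3840x2160, 10-bit 4:2:2
# uncompressed, 50 fps, and (purely for illustration) 20 grains per frame.
width, height = 3840, 2160
bits_per_pixel = 20           # 10 bits luma + 10 bits shared chroma (4:2:2)
fps = 50
grains_per_frame = 20

frame_bytes = width * height * bits_per_pixel // 8
grain_bytes = frame_bytes // grains_per_frame
grain_ms = 1000 / fps / grains_per_frame

print(f"frame: {frame_bytes / 1e6:.1f} MB")                      # ~20.7 MB
print(f"grain: {grain_bytes / 1e6:.1f} MB, ~{grain_ms:.1f} ms")  # ~1.0 MB, ~1.0 ms
```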
I think I prefer this approach to the SMPTE 2110 approach, to be honest, especially given the timing windows that 2110 requires (it doesn't lend itself well to a COTS virtualised environment when your packets have to be emitted at a specific microsecond).
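To put a number on those timing windows: the exact schedule depends on the ST 2110-21 sender profile, but just spreading an uncompressed UHD frame's worth of ~1400-byte RTP payloads evenly across the frame period (roughly what a "narrow" sender has to do) gives the order of magnitude:

```python
# Why 2110 pacing is hard on COTS/virtualised kit: packet spacing for the
# same uncompressed UHD 4:2:2 10-bit 50p stream as above, assuming ~1400
# bytes of video payload per RTP packet, evenly paced across each frame.
frame_bytes = 3840 * 2160 * 20 // 8      # ~20.7 MB per frame
fps = 50
payload_bytes = 1400

packets_per_frame = frame_bytes / payload_bytes
packets_per_second = packets_per_frame * fps
spacing_us = 1e6 / packets_per_second

print(f"~{packets_per_frame:,.0f} packets/frame, ~{packets_per_second / 1e6:.2f} Mpps")
print(f"one packet every ~{spacing_us:.2f} microseconds")        # ~1.35 us
```

Holding that sort of emission schedule consistently is the bit that's painful without hardware-assisted pacing.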
But I digress; this is all very off-topic.
[0] https://github.com/Streampunk/arachnid

[1] https://www.streampunk.media/