Open CDN: an approach to overcoming live streaming limitations

Executive summary

Streaming technology has become the standard for video consumption. While it is very convenient for delivering individualized content, it consumes a staggering amount of network resources for popular live events, sports in particular, creating challenging limitations both in network capacity and in service quality for viewers.

A simple and extremely efficient solution is, however, already available and is poised to develop rapidly: content service providers (CSPs) can rely on the existing network optimizations of internet service providers (ISPs) and thus gain immediate access to a vast streaming capacity, one that becomes virtually infinite when the network supports IP multicast. All parties win in this cooperation: the service provider resolves its limitations, and the ISP no longer needs to deploy massive extra capacity that would otherwise remain largely unused.

Live streaming limitations today

There is a wide consensus today that video streaming technology is progressively, but inexorably, replacing all historical television broadcast. This evolution towards streaming is essentially complete in video-on-demand (VOD), where global platforms such as Netflix or Amazon Prime now control most of the market, and the movement is well under way for traditional linear channels as well. As an illustration, even the BBC, possibly the most famous broadcaster in the world, more recently followed by Ofcom, foresees a complete switch-off of terrestrial broadcast in the UK by 2030, replaced by full OTT streaming.

However, one particular type of content is still struggling to migrate: big sports events. These events are, by far, the most popular and impactful programs, but the most powerful streaming actors have only recently started acquiring their exclusive rights, with for example Amazon (in 2022), YouTube (2023) and even Netflix (2024) obtaining NFL games in the US, or Amazon and DAZN (since 2021) investing in European football, respectively the Champions League in Germany, Italy, the UK and Ireland, and national championships in Italy, Spain and France.

With these first experiences, the actual streaming of these sports events has proved to be extremely challenging technically and has often generated quite some frustration for all parties involved: high delivery costs for streaming service providers, steep extra infrastructure investment for network providers, and frequent complaints from users about the lack of video quality compared to what they were used to with traditional broadcast.

A few notable fiascos have already been reported in the news, and surveys consistently show that these topics, network capacity and end-user quality of experience (QoE), are also top technical concerns in the day-to-day operation of these services.

Examples of headlines reporting streaming issues during big live events

These limitations are already impacting viewers’ experience today, but they are also blocking the many innovations that the industry has already promised. If delivery networks are already struggling today and cannot be optimized, it is difficult to imagine that bandwidth-intensive applications such as 3D, virtual reality, or even 4K resolution and decent latency, will reach our screens anytime soon.

High-level technical considerations

Adaptive Bitrate (ABR), the underlying technology of streaming, has brought a lot of benefits to end users in terms of experience, allowing in particular viewing on any screen, easy access to trick play (pause, rewind) and content personalization. It has also made the processing and delivery of video content much more accessible, relying on the existing Internet infrastructure and common IP and HTTP standards for delivery rather than on older video-specific transport protocols and on a costly dedicated infrastructure.

The Internet, however, lacks one feature needed to be as efficient as traditional broadcast for live events: the ability to deliver in a one-to-many mode, that is, from one source to all users at once. It was designed and developed for individual one-to-one connections, which is probably also the source of its flexibility and success, but this also means that, when a central station wishes to distribute a sports game to potentially millions of viewers at the same time, the network has to bear huge traffic peaks formed by just as many HTTP connections.

Content delivery network (CDN) servers have been widely deployed throughout the world to mitigate this drawback. Their principle is essentially to serve as distributed sources, each receiving the content once from the originating point and establishing the many individual connections only from that point in the network onwards.

Difference in video delivery with traditional broadcast vs streaming, with or without caching

CDN technology works extremely well, and it is already critical to containing the huge traffic generated by a surge of viewers connecting to the same live event, but it only applies to the infrastructure upstream of where the cache servers sit in the network; the issue remains unchanged downstream.

Streaming optimization is therefore largely about pushing these caching servers as deep as possible into the network. The video programs included in ISPs’ own offers use caches that are typically placed deep in regional PoPs, but this is usually not the case for other direct-to-consumer services, which have to stream from much higher in the network.

Private caches: an anachronism

Alternatively, some of the most important streaming services have managed to place their own exclusive caches in the ISPs’ PoPs, a good illustration being Netflix’s Open Connect program (18,000 cache servers deployed as of 2023), but this raises serious concerns when it comes to generalizing the approach.

Primarily, such a strategy implies that each content service provider (CSP) builds its own infrastructure on top of the others’. These systems are dimensioned for traffic peaks that are only reached during scattered popular events and are therefore largely unused the rest of the time.

Considering that a server consumes a comparable amount of energy whether it is fully used or idle, the consequence is not only a waste of deployed equipment but also of electricity, unnecessarily increasing the overall delivery cost of the service, not to mention an avoidable environmental impact that would likely be pointed out at a time when the energy sobriety of digital technologies is under thorough scrutiny.

The situation can become even more absurd if the service provider that owns the rights to a particular live event suddenly loses them to a competitor: the first provider is left with dedicated capacity that has become totally useless, while the second has to build an equivalent one right next to it.

Typical server power consumption vs actual CPU usage

Of course, the cost of deploying its own streaming capacity in ISP networks is not something that every CSP can afford, and it might be limited to a few important actors only, the very ones that already own significant live event rights. Still, these companies already represent most of today’s streaming volume and could, between them, multiply several times over the infrastructure required in these networks. It also raises a question of fairness: limiting this opportunity to optimize costs and improve QoE to today’s dominant players exclusively might be considered in contradiction with the general principle of net neutrality.

From the ISPs’ standpoint, incorporating into their infrastructure third-party components over which they have no control makes their daily operation and capacity planning very uncertain. This is even more true considering that these black boxes are not necessarily designed to optimize streaming and to use network resources sparingly, since those resources are entirely paid for by the ISP.

A typical example of such behavior would be serving very high video resolutions to small smartphone screens during traffic peaks, which brings only a marginal benefit to perceived video quality while significantly impacting the network sizing.

Finally, the private cache approach technically consists in piling up physical boxes in a datacenter in an old-school form of deployment: delivery by truck, rack space management, dedicated wiring, etc. In an era of virtualization and container orchestration, such static and monolithic methods seem quite outdated and definitely not very future-proof.

Having caching and streaming instances virtualized would already be much more convenient to deploy and scale, and it would also leave the opportunity to adapt to newer technologies expected in the coming years, such as streaming further at the edge, dynamically distributing resources based on real-time network usage, or adding personalized edge computing.

In summary, if the black box strategy does serve the purpose of alleviating upstream network traffic, it also has to overcome a number of difficulties:

  • An unnecessary number of servers deployed, when the industry is looking to reduce costs and everyone is encouraged towards energy sobriety
  • Unfair access to this option between the different service providers, when the reality of equal access to the internet is increasingly questioned
  • A difficult position for ISPs, which incur liability and cost for a system that is not really under their control, when they are already struggling to cope with the ongoing exponential growth of internet traffic
  • A static, box-based approach, when all IT infrastructure is consensually moving towards cloud technologies

And one last limitation is that, downstream of these boxes, the issue of traffic peaks remains unsolved.

One simple solution: cooperation

There is, however, another way to push streaming caches deep into ISP networks without facing the difficulties mentioned above: simply sharing a single physical infrastructure and making it open to all services, a so-called “Open CDN”.

For ease of operation and maintenance, the most natural choice would be to leave this responsibility to the ISP itself, as most of them already have a streaming system in place for their own video service, even though other actors could be considered provided they work in close coordination with the ISP. Technically speaking, there is nothing particularly intimidating; the complexity probably lies more in a shift of mindset, both in the way CDNs are designed and in the way content service providers and ISPs interact with each other.

On the technical side, the workflow principles are quite straightforward and can be split into four sequential steps:

  1. The CSP contracts capacity from the ISP; the ISP automatically configures the delivery of the contracted content in its CDN and provides the CSP with the entry-point URL to use in order to stream through its network

  2. At the event start, the CSP application clients include this URL in their multi-CDN logic and use it when it is deemed the best delivery option (see the sketch after this list)

  3. The contracted live content, already preconfigured in the ISP CDN, is streamed via the most relevant cache server and with the most appropriate optimization tools and settings.

  4. Relevant reporting data generated by the ISP CDN are shared with the CSP for monitoring the service.
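As an illustration of step 2, here is a minimal sketch of what a player’s multi-CDN selection could look like once the ISP entry point has been provisioned. All names, URLs and selection criteria below are hypothetical and only illustrate the principle; real clients typically combine richer signals such as on-net detection, real-time QoE measurements and business rules.

    # Minimal multi-CDN selection sketch (hypothetical names and URLs).
    # The ISP Open CDN entry point is simply one more candidate, preferred
    # when the viewer is on that ISP's network and the event is provisioned there.
    from dataclasses import dataclass

    @dataclass
    class CdnCandidate:
        name: str
        base_url: str   # entry-point URL obtained at contracting time
        on_net: bool    # is the viewer on this provider's network?
        healthy: bool   # result of a recent availability probe

    def pick_cdn(candidates: list[CdnCandidate]) -> CdnCandidate:
        """Prefer a healthy on-net Open CDN entry point, else fall back to a public CDN."""
        healthy = [c for c in candidates if c.healthy]
        on_net = [c for c in healthy if c.on_net]
        return on_net[0] if on_net else healthy[0]

    candidates = [
        CdnCandidate("isp-open-cdn", "https://opencdn.isp.example/live/match123/", on_net=True, healthy=True),
        CdnCandidate("public-cdn-a", "https://cdn-a.example/live/match123/", on_net=False, healthy=True),
    ]
    manifest_url = pick_cdn(candidates).base_url + "master.m3u8"
    print(manifest_url)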

Content provider and network provider cooperative activities

In opposition to today’s complex entanglement of competitive and cooperative activities, with ISPs creating their own TV channels or VOD platforms and CSPs overbuilding their own delivery networks, the Open CDN scheme has the advantage of making the split of responsibilities as simple as it can be: each actor remains in its core domain; the content provider provides the content, focusing on getting the most valued programs for its users, and the network operator operates the network, with the best possible streaming efficiency.

Now, for such an Open CDN service to be attractive, it needs to meet CSPs’ expectations at least as well as public CDN alternatives do. This includes:

  • Easy contracting: CSPs shall be able to automate access to the service, which implies publicly available interfaces and documented protocols and APIs.
  • Competitive pricing: Open CDN pricing is expected to be consistent with the pricing already applied to existing streaming delivery alternatives.
  • Adaptation to streaming specificities: streaming clients usually have technical requirements that differ from one service to another and that the delivery shall be able to comply with, for example specific HTTP headers, caching policies, TLS versions or support for security tokens.
  • Reliable reporting: the Open CDN service shall at least be able to faithfully report to CSPs the quantity of traffic going through its delivery process, since this is generally the metric used for charging, but also additional information demonstrating that the delivery was executed properly and how efficient it was, since superior performance is the promise that likely motivated the CSP to prefer this service over a public CDN alternative in the first place (a hypothetical report payload is sketched below).
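As a purely illustrative example, and assuming a simple JSON reporting interface rather than any actual specification, a per-event delivery report shared by the ISP with the CSP could look like the following sketch.

    # Hypothetical per-event delivery report an ISP Open CDN could share with a CSP.
    # Field names are illustrative only; real deployments would follow an agreed schema
    # (for example the SVTA Open Caching specifications mentioned below).
    import json

    report = {
        "event_id": "match123",
        "csp": "example-csp",
        "window_utc": {"start": "2024-06-14T19:00:00Z", "end": "2024-06-14T22:00:00Z"},
        "delivered_bytes": 412_000_000_000_000,   # billing metric: total traffic served
        "peak_throughput_gbps": 950,
        "sessions": 1_250_000,
        "multicast_offload_ratio": 0.91,          # share of traffic served via multicast
        "error_rate": 0.0004,                     # failed requests / total requests
    }
    print(json.dumps(report, indent=2))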

Theoretically, all these expectations are well within reach, as an ISP Open CDN can rely on a system that is, by nature, more efficient, with more and better distributed caches. More efficiency translates into less infrastructure required, and therefore more margin to propose attractive prices, as well as a better overall streaming quality leading to greater satisfaction for end users.

Regarding ease of access to the service, the transactions between CSP and ISP, both for provisioning the content in the CDN and for the reporting API, are usually not very complex, and further standardization can make the process even smoother and more scalable. This is work that the Streaming Video Technology Alliance (SVTA) has undertaken quite thoroughly over the past years through its Open Caching working group, and its public specifications today constitute a fairly consensual reference.

Multicast-ABR, the ISP’s weapon of mass reduction

On top of having a finer granularity of cache nodes in their delivery infrastructure, ISPs have another structural advantage at their disposal, one particularly well suited to live streaming: multicast.

Multicast is the ability to deliver one physical stream to as many users as needed at the same time, as opposed to regular HTTP unicast, which requires one connection per user even when all users receive strictly the same content. In our earlier example of millions of users viewing the same sports game, normal HTTP delivery produces just as many connections downstream of the cache and puts a considerable strain on the network, whereas the same program delivered in multicast only requires carrying one single stream, which can secure perfect quality at any video resolution without any impact on the network load.

One-to-many live streaming with IP multicast

The multicast one-to-many principle can be leveraged for adaptive bitrate (ABR) streaming with Multicast-ABR (MABR) technology. Very briefly summarized, MABR is composed of a single transcaster server, usually placed in the video headend, which ingests streaming content from a standard origin server, encapsulates it into multicast and sends it in one single copy throughout the network. At the other end, an agent embedded in a home device, typically an internet gateway, receives the multicast and acts as a mini personal server streaming the content in a standard protocol to any streaming device, without the need for any particular adaptation.
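To make the agent side more concrete, here is a highly simplified sketch of what such a home-gateway agent could do, assuming a hypothetical framing where each UDP datagram carries a segment name followed by the next chunk of that segment. Real MABR systems use dedicated transport protocols with error repair and dynamic channel management; this only illustrates the “receive multicast once, serve locally over HTTP” principle, and the multicast address, port and framing are assumptions.

    # Simplified MABR agent sketch: join a multicast group, reassemble segments,
    # and expose them to in-home players through a tiny local HTTP server.
    import os
    import socket
    import struct
    import threading
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    MCAST_GRP, MCAST_PORT = "239.1.2.3", 5000   # hypothetical multicast address/port
    SEGMENT_DIR = "/tmp/mabr-segments"

    def receive_multicast():
        """Join the multicast group and append received chunks to local segment files."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", MCAST_PORT))
        mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        while True:
            datagram, _ = sock.recvfrom(65535)
            name, _, chunk = datagram.partition(b"\n")   # assumed "<name>\n<bytes>" framing
            with open(os.path.join(SEGMENT_DIR, name.decode()), "ab") as f:
                f.write(chunk)

    os.makedirs(SEGMENT_DIR, exist_ok=True)
    threading.Thread(target=receive_multicast, daemon=True).start()
    # Any standard player in the home fetches manifests and segments from this local
    # server exactly as it would from a CDN cache.
    handler = partial(SimpleHTTPRequestHandler, directory=SEGMENT_DIR)
    HTTPServer(("127.0.0.1", 8080), handler).serve_forever()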

MABR distribution is usually complemented with some HTTP unicast, for example for users streaming from outside their home or for pause and start-over applications, but the proportion of unicast typically remains below 10% of the overall streamed content, thus dividing by roughly 10 the capacity that would otherwise have been required with unicast only.
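As a back-of-envelope illustration of that ratio, and using assumed figures rather than measurements, the peak capacity needed for a single popular event can be compared for pure unicast and MABR delivery as follows.

    # Back-of-envelope peak-capacity comparison for one live event.
    # All input figures are assumptions chosen for illustration only.
    viewers = 2_000_000               # concurrent viewers of the event
    avg_bitrate_mbps = 8              # average delivered bitrate per viewer
    unicast_share = 0.10              # share of viewers still served in HTTP unicast
    abr_ladder_mbps = [2, 4, 8, 16]   # ABR profiles, each carried once in multicast

    unicast_peak_tbps = viewers * avg_bitrate_mbps / 1_000_000
    mabr_peak_tbps = (viewers * unicast_share * avg_bitrate_mbps
                      + sum(abr_ladder_mbps)) / 1_000_000

    print(f"Pure unicast peak: {unicast_peak_tbps:.2f} Tbps")   # 16.00 Tbps
    print(f"MABR peak        : {mabr_peak_tbps:.2f} Tbps")      # about 1.60 Tbps

With these assumptions, the multicast streams themselves are negligible; what remains is essentially the residual unicast share, hence the roughly tenfold reduction.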

Multicast ABR standard architecture

The potential impact of using multicast on networks is huge. A network is dimensioned for the highest anticipated level of traffic, and the Akamai graph below, for example, illustrates very well how these maximums are reached during very temporary traffic peaks, peaks that grow higher every year and are generally generated exclusively by high-popularity live events.

Traffic growth on Akamai - 2021 investor summit

This leads to an absurd situation where a good proportion of the network has only two states: either it is doing nothing, most of the time, or it is stuffed with the exact same bytes repeated over and over for each viewer of the event. Knowing that a high-popularity event nowadays easily adds a few Tbps (terabits per second) to global traffic, and that each Tbps of infrastructure capacity is typically assessed to cost ISPs between 5 and 10 million dollars, it is quite surprising to see that this obvious lack of optimization is so often overlooked.
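Continuing the same back-of-envelope logic, and again with assumed figures, the capacity cost attributable to such a peak can be estimated as follows.

    # Rough cost of dimensioning a network for one unicast live peak, using the
    # 5-10 M$/Tbps range quoted above; the peak size itself is an assumption.
    peak_added_tbps = 3.0            # assumed extra peak traffic from one popular event
    cost_per_tbps_musd = (5, 10)     # infrastructure cost range per Tbps of capacity

    low, high = (peak_added_tbps * c for c in cost_per_tbps_musd)
    print(f"Capacity cost for this peak: {low:.0f} to {high:.0f} M$")   # 15 to 30 M$
    # Serving ~90% of that traffic via multicast would cut the required headroom,
    # and hence this cost, by roughly a factor of ten.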

Systematically using multicast for these events drastically erases these peaks and allows ISPs to rationalize the sizing of their networks. For example, British Telecom has assessed that not implementing multicast in the upcoming full transition from legacy broadcast to streaming would cost the UK £16.5B overall.

BT’s assessed savings in streaming

As a note, it is worth mentioning that software downloads, in particular popular online game updates, are also starting to generate problematic peaks in some cases; the issue is quite similar, and it seems likely that Open CDN and multicast principles will also play a role in how these growing concerns are addressed in the near future.

A multicast delivery system can be used not only for the ISP’s own video service but can also be opened to third parties, following the same Open CDN principle but with the particularity of relying on MABR technology, a use case we can refer to as “Open MABR”. It offers CSPs a nearly unlimited capacity for live events, constrained only by the roughly 10% of unicast traffic remaining, and a guaranteed quality of experience for users, enabling 4K resolutions or more and preventing typical streaming hiccups such as start errors or video rebuffering.

One example of how Open MABR has brought important benefits to both an ISP and a CSP is the collaboration between the DAZN streaming service and two reference ISPs, TIM in Italy and Orange in Spain. DAZN has been streaming football games in these two countries for three years, using the multicast capacity of these ISPs as often as possible.

Put simply, the high-level feedback from this operation is that it has brought significantly better quality for DAZN users, with about 10 times fewer streaming errors than with HTTP unicast, and no obligation for the ISPs to invest in more network infrastructure, thanks to multicast dividing traffic peaks during the games by 10.

Typical event traffic timeline report (in green: streaming served from multicast, i.e. not impacting the network capacity)

On top of that, multicast delivery resolves the latency limitation often observed in streaming applications. In HTTP unicast, the latency measured between video capture and display on a screen is nowadays typically between 20 and 60 seconds, which can generate quite some frustration, since the appeal of attending a live event is to watch it live and not a minute later.

This unicast latency is mainly due to the inherent delivery variability of the HTTP protocol, but this does not apply to multicast, which is much steadier and can therefore reach the latency levels users are accustomed to with traditional IPTV (which also relies on IP multicast) or other broadcast means. With its MABR delivery network, a reference ISP in Latin America was, for example, able to stream the Copa America and the Euro football tournament this year with less than one second of latency difference compared to its legacy cable system.

HTTP vs multicast distribution mode

Conclusion

Ultimately, there are three possible scenarios for networks to cope with the growing capacity demand generated by live streaming traffic:

  1- Public CDNs: This option is already well established, globally available and easy to use but, as demonstrated in numerous cases, it often proves insufficient for massive live streaming.

  2- Private caches: CSPs owning and deploying caches in ISP networks is an approach already largely used by streaming giants like Netflix or Apple, but this option will likely remain exclusive to dominant actors, each building its own infrastructure on top of the others’ in a quite static and old-fashioned way, implying a wasteful amount of resources and increased operational complexity and, in any case, leaving the live peaks issue unresolved downstream of these caches.

  3- Open CDN: Such collaboration between CSPs and ISPs is still in its development phase but holds a lot of promise: live traffic peaks are pushed to the very edge of the network, or even totally erased when using multicast, providing a nearly infinite capacity and liberating along the way numerous video innovations currently constrained by physical resources (UHD, VR and others), all without impacting network investments.

While our future delivery systems will probably consist of a mix of these three possibilities, the Open CDN approach appears to be a true win-win opportunity for service and network providers, and it is logically expected to develop in the coming years.

On one side, CSPs can secure all the capacity they need for their live events and guarantee QoE for their most demanding customers, while, on the other side, ISPs can regain some control over third-party traffic, rationalize their infrastructure and provide streaming optimizations at no extra cost to CSPs. On top of that, both can together reduce the overall power consumption and carbon footprint often attributed to video streaming in general, for the benefit of all of us living on this planet.

In a way, this could be seen as a natural extension of the super-aggregator role that ISPs already play when integrating streaming services’ apps into their set-top boxes and applications, which has ultimately proved beneficial to all parties, but now applied to content delivery as well: a form of “network super aggregation”.

Author: Damien Sterkers, Video Solutions Marketing Director

Damien Sterkers, based in France, is Video Solutions Marketing Director at Broadpeak, with experience from previous roles at Broadpeak and Harmonic. He holds a Master of Engineering (M.Eng.) from CentraleSupélec and has expertise in Digital TV, MPEG, IPTV, VOD and DVB.
