
5G Superfluidity and the future of streaming video

The next generation mobile network, referred to as 5G, targets extremely ambitious key performance indicators (KPIs) such as sub-millisecond latency, 10–100 times higher data rates, 10–100 times more devices and 1000 times higher mobile data volume. The 5G Superfluidity project proposes an architecture and technological framework to achieve these challenging KPIs. The Superfluidity architecture is a cloud-native and converged edge system for future mobile network technology that has ‘superfluid characteristics’. An earlier blogpost focused on the network architecture that is part of this project. This blogpost details some of the technologies under development and focuses on their integration. It also presents how these technologies can improve the deployment and efficiency of streaming video in mobile networks.

The overall idea of the 5G Superfluidity approach is to run network processing virtualized and on demand on third-party infrastructure located throughout the network:

  • At the core in data centers
  • At micro data centers at points of presence (PoPs) in telecom networks
  • At the edge, in radio access networks (RANs) next to base stations and at aggregation sites

Several technologies need to be developed to make this approach agile and flexible, i.e. superfluid. The superfluid nature can be described by the following properties of the network architecture:

  • Fast instantiation times (in milliseconds)
  • Fast migration (in hundreds of milliseconds or less)
  • High consolidation (running thousands of instances on a single server)
  • High throughput (10 Gb/s and higher)

A superfluid network can advance the mobile network beyond the current state of the art by achieving the 4 big I’s (for Independence):

  • Scale Independence => scale from 1 to millions of users
  • Location Independence => Run services at any location in the network (including near the access network)
  • Time Independence => fast instantiation, migration, low latency
  • Hardware Independence => run on x86, FPGA or ARM hardware (e.g. Raspberry Pi), including as unikernels

Each of these types of independence is illustrated below, using the relevant technologies under development in the 5G Superfluidity project.

Scale independence

To create a superfluid video cloud, we have experimented with on-the-fly data center scaling for video streaming with Unified Origin. This work was done with Citrix, Intel, Nokia and Red Hat. The data center scaling consists of an offline and an online phase.

Using on the fly data center scaling, resources at the server side can be matched to actual demand.

In the offline phase, a traffic generator based on Citrix Hammer is used to generate realistic video traffic. The Unified Origin is deployed on an OpenStack node, which is monitored at the virtual machine level using telemetry software such as OpenStack Ceilometer or Intel’s Snap.

The monitor collects data from the virtual machines (VMs) at both the client and server sides. Examples include throughput at the client and server, error rate and the average number of requests served per second. For video streaming, throughput, delay and error rate at the client side largely determine the quality perceived by the end user.

The data collected from the client and server are fed into an analytics engine and used to find server-side KPIs that influence the Quality of Experience (QoE) at the user (client) side. These server-side KPIs are then used to develop a scaling model that defines the levels to which a VM should scale.
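As an illustration of this offline analytics step, the sketch below correlates server-side metrics with a client-side QoE score to rank the KPIs and derive a simple scaling threshold. The column names, the CSV layout and the QoE floor are illustrative assumptions, not the project’s actual data or analytics engine.

```python
# A minimal sketch of the offline analytics step: correlating server-side
# metrics with a client-side QoE indicator to pick KPIs worth alarming on.
# Column names and the CSV layout are hypothetical.
import pandas as pd

# Each row: one monitoring interval, with server-side metrics from the
# Unified Origin VM and a client-side QoE score from the traffic generator.
df = pd.read_csv("monitoring_samples.csv")

server_kpis = ["cpu_util", "mem_util", "requests_per_s", "net_tx_bytes"]
qoe_metric = "client_qoe"  # e.g. derived from throughput, delay and error rate

# Rank server-side KPIs by how strongly they correlate with client QoE.
correlations = df[server_kpis].corrwith(df[qoe_metric]).abs().sort_values(ascending=False)
print(correlations)

# For the strongest KPI, derive a simple scaling threshold: the lowest KPI
# value at which observed QoE drops below an acceptable level.
best_kpi = correlations.index[0]
threshold = df.loc[df[qoe_metric] < 3.5, best_kpi].min()  # 3.5 = hypothetical QoE floor
print(f"scale out when {best_kpi} exceeds {threshold:.2f}")
```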

In the online phase, the telemetry module can generate an alarm when KPIs reach a certain level, triggering a scaling event. In the case of our experiment, Nokia’s orchestration tool CloudBand spins up a new VM running Unified Origin. In front of these VMs, a load balancer distributes incoming traffic over the different VMs in a weighted manner, offloading network and compute resources.
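The control loop below sketches this online phase under simplifying assumptions: the telemetry poller, orchestrator client and load balancer objects are hypothetical stand-ins for the Ceilometer/Snap alarms, Nokia CloudBand and the load balancer used in the actual experiment.

```python
# A minimal sketch of the online scaling loop. All three collaborators are
# hypothetical interfaces, not real Ceilometer, Snap or CloudBand APIs.
import time

SCALE_OUT_THRESHOLD = 0.8   # hypothetical: fraction of per-VM capacity in use


def control_loop(telemetry, orchestrator, load_balancer, poll_interval=10):
    while True:
        # 1. Poll the KPI selected in the offline phase (e.g. requests/s per VM).
        load = telemetry.current_load()           # hypothetical call
        if load > SCALE_OUT_THRESHOLD:
            # 2. Alarm fired: ask the orchestrator for a new Unified Origin VM.
            vm = orchestrator.spawn_origin_vm()   # hypothetical call
            # 3. Add the new VM to the load balancer with a small initial
            #    weight, so traffic shifts over gradually.
            load_balancer.add_backend(vm.address, weight=1)
        time.sleep(poll_interval)
```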

We call the above approach out-scaling and use it to match the resources at the server side to the actual demand at the client side. This creates scale independence at the core data center, which, in the case of our experiment, runs an OpenStack-based private cloud.

Location independence

Another way to improve video streaming is to run different network video streaming functions at different network locations. A gain can be achieved by running some of those functions as close to the user as possible.

Running Unified Origin on an edge cloud is an improvement over CDNs when streaming multiple video protocols.

As an example, we implemented such an approach, based on the ETSI NFV architecture, for traffic offloading. This was done by Unified Streaming together with Altice Labs, and it makes the Unified Origin run in the edge cloud. Mobile traffic is offloaded directly from the core network to this edge cloud. The Unified Origin running there generates segments for different protocols (Apple HLS, Microsoft Smooth Streaming, MPEG-DASH) on the fly (trans-muxing).

This approach has the advantage that the edge can now cache and request byte ranges, reducing backhaul traffic and cache usage. Experiments have confirmed that this approach is an improvement compared to traditional CDNs when streaming multiple protocols. We presented this approach at the ACM Multimedia Conference 2016; for more details, we refer to our paper. In further experiments we will aim at edge-based trans-DRM and personalization, two other applications that can benefit significantly from running at the edge.
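To make the byte-range idea concrete, the following simplified sketch shows an edge component that keeps a single cached copy per asset and fetches only missing byte ranges from the origin, rather than caching every output format separately. The origin URL and cache layout are illustrative, not the actual Unified Origin implementation.

```python
# A simplified sketch of edge byte-range caching: one cached source per asset,
# with the edge wrapping the bytes into whichever segment format the client
# asked for (HLS, Smooth or DASH). URLs and cache layout are illustrative.
import requests

ORIGIN = "http://origin.example.com"    # hypothetical origin endpoint
cache = {}                              # (asset, start, end) -> bytes


def get_range(asset: str, start: int, end: int) -> bytes:
    """Return the requested byte range, fetching it from the origin on a miss."""
    key = (asset, start, end)
    if key not in cache:
        resp = requests.get(
            f"{ORIGIN}/{asset}",
            headers={"Range": f"bytes={start}-{end}"},
            timeout=5,
        )
        resp.raise_for_status()
        cache[key] = resp.content
    return cache[key]

# Because the cache stores byte ranges of a single source rather than one copy
# per protocol, backhaul traffic and cache usage no longer scale with the
# number of streaming protocols served.
```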

Mobile Edge Computing is not completely new, and the clouds at the edge are sometimes called cloudlets. Cloudlets have been explored for enabling mobile cloud-based applications before, mostly for services that benefit from the higher bandwidth and lower latency that result from the reduced physical and hop distance.

Commercial products for edge computing are already available, among which are Akamai Cloud Computing, Nokia Liquid Applications and Intel Network Edge Virtualization. Apart from this, there is a standardization effort ongoing at ETSI that could enable more universal deployment of edge computing and cloudlets in the mobile network.

Edge computing is seen as a key technology for achieving low latency and high bandwidth in mobile and other wireless networks. Superfluidity therefore natively integrates edge computing into the 5G mobile network architecture.

Time independence

Other ongoing experiments focus on very minimalistic virtual machines. These minimalistic VMs are created by stripping the Linux kernel of all libraries and system components that the target application does not need in order to run on that particular VM.

Extremely small VMs running a specific function can result in sub-millisecond instantiation times.

Stripping the kernel results in extremely small VMs that run a single, specific function and can instantiate in under a millisecond, which in turn enables fast migration and orchestration. Another possibility is the use of tiny Linux distributions such as Alpine Linux and TinyX, which can be combined with container-based technologies like Docker.

In our experiments, we have successfully deployed the Unified Origin software on Alpine Linux, achieving images below 60 MB for video streaming. In addition, we have composed the edge caching and trans-muxing approach from multiple Docker containers. Deployments like these enable rapid instantiation of Unified Origin based video streaming functions in the Superfluidity network architecture.
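As a rough indication of how such instantiation times can be measured, the sketch below times the start of a small Alpine-based container with the Docker SDK for Python. It uses the public alpine image as a stand-in for the Unified Origin image, and it measures container start rather than unikernel boot.

```python
# A small sketch: time how long it takes to start a minimal Alpine-based
# container. The public "alpine" image is used as a stand-in; in practice
# the (non-public) Alpine-based Unified Origin image would be used instead.
import time
import docker

client = docker.from_env()
IMAGE = "alpine"   # stand-in for the <60 MB Alpine-based Unified Origin image

start = time.perf_counter()
# "sleep 30" keeps the container alive long enough to be observed.
container = client.containers.run(IMAGE, command="sleep 30", detach=True)
elapsed = time.perf_counter() - start

print(f"container started in {elapsed * 1000:.1f} ms")
container.stop()
container.remove()
```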

Hardware independence

The fourth and last kind of independence that is an important part of the Superfluidity framework is hardware independence. All technologies that are part of the framework run on commodity third-party hardware such as x86, ARM, FPGA or GPU. This increases flexibility and reduces the reliance on expensive proprietary closed-box systems in the telecommunications industry.

Combining pure virtualization with the use of hardware accelerated subparts offers superfluid flexibility as well as high performance.

The Superfluidity architecture envisions the integration of services that need hardware acceleration, which do not run well on current virtual machine infrastructures and need FPGAs and/or GPUs to achieve proper performance. One example of a function that needs hardware acceleration is transcoding, which is important for video streaming.

As hardware acceleration is limited in current virtualization and cloud software like OpenStack, it is challenging to support different types of hardware in a way that can be characterized as ‘superfluid’. To solve this problem, the Superfluidity project introduces the concept of reusable functional blocks, or RFBs. Instead of purely virtual machine based custom Virtual Network Functions (VNFs), VNFs will be composed of blocks that may also include hardware-accelerated subparts, possibly running on dedicated hardware.

The concept of reusable functional blocks enables the use of VNFs that previously could not easily be deployed, e.g. those with hardware-accelerated subparts. By combining ‘pure’ virtualized entities with pre-allocated services running on dedicated hardware, high performance can be achieved while maintaining some of the superfluid characteristics in the allocation of resources. The University of Rome Tor Vergata is one of the research groups doing groundbreaking work on deploying reusable functional blocks for VNFs in 5G networks.
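The sketch below illustrates the general idea of composing a VNF from reusable blocks, where the transcoding block picks a hardware-accelerated implementation when an FPGA or GPU is available and falls back to software otherwise. The class names and capability check are illustrative, not the project’s actual RFB framework.

```python
# A conceptual sketch of composing a VNF from reusable functional blocks
# (RFBs). The transcode block is the only one that can be backed by
# dedicated hardware; the rest of the chain stays purely virtualized.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RFB:
    name: str
    needs_accelerator: bool
    run: Callable[[bytes], bytes]


def sw_transcode(data: bytes) -> bytes:   # pure software path (stub)
    return data


def hw_transcode(data: bytes) -> bytes:   # FPGA/GPU-offloaded path (stub)
    return data


def build_vnf(accelerator_available: bool) -> List[RFB]:
    if accelerator_available:
        transcode = RFB("transcode", needs_accelerator=True, run=hw_transcode)
    else:
        transcode = RFB("transcode", needs_accelerator=False, run=sw_transcode)
    return [
        RFB("demux", needs_accelerator=False, run=lambda d: d),
        transcode,
        RFB("package", needs_accelerator=False, run=lambda d: d),
    ]


def process(segment: bytes, chain: List[RFB]) -> bytes:
    # Run the segment through each block of the composed VNF in order.
    for block in chain:
        segment = block.run(segment)
    return segment
```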

Unified Streaming and the benefits of Superfluidity

The Superfluidity project is fertile ground for experimentation, allowing the Unified Streaming Platform (USP) to find its place in the cloudified future of IT and telecom infrastructures. Experiments with technologies like Docker, OpenStack and ETSI Mobile Edge Computing for the deployment of USP have taught us that we are well positioned here.

In addition, the Superfluidity approach of scaling the underlying infrastructure on the fly, combined with running services anywhere in the network, enables resource-efficient deployments that adapt to user demand. The improved efficiency will result in reduced operating expenses (OPEX) for our clients, especially those that deploy large scale video streaming services.

The collaboration with industry and academic leaders has also offered us a close view of the contours of the next generation mobile network (5G). Please read our previous blogpost for more details on the technologies involved. Using the Superfluidity approach to make the Unified Streaming Platform work well with these technologies will not only benefit our customers in IT, but our customers in the telecom industry as well.

In a future blog, we will discuss the manageability of USP in such next generation mobile networks, via migration and orchestration, as well as the security and verification of continuous deployments.

The Superfluidity project is part of the European Union’s Horizon 2020 research and innovation program. In September 2016, as part of a first year review in Brussels, it was graded as excellent. The project is led by the University of Rome Tor Vergata and Nokia.
