
A superfluid 5G mobile network architecture

Video streaming on the Internet is an increasingly popular application that poses challenging bandwidth requirements, especially in mobile networks and as the number of users grows. The next generation 5G mobile network, currently under development, targets 10 – 100 times higher bit-rates and 1000 times more mobile traffic. This could enable much more cost-efficient Internet video streaming deployments. In Europe, 5G development is organized in research and development projects sponsored by the 5G Public Private Partnership (5G PPP) under the European Commission.

Unified Streaming has joined one of these large-scale projects, the Superfluidity H2020 project, working with industry leaders such as Nokia, British Telecom, Telefonica, Intel, Red Hat and many others to develop an architecture for the future 5G mobile network. The defining feature of this architecture is 'superfluidity': in this context, being highly adaptive and flexible, i.e. able to meet user expectations and demand instantly, without wasting time or resources. This raises two questions: how can a network be made superfluid? And how can this be used to reach the challenging targets of 5G?

In this blog post, we present some of the exciting answers to these questions. We first summarize the emerging network technologies that make the Superfluid network possible, and then dive into the envisioned network architecture and some of its implementation details.

Network Function Virtualization (NFV)

NFV is a technology that runs network functions on virtualized infrastructure. Examples of network functions are routing, switching, firewalls and proxy servers, but also larger functions such as the Evolved Packet Core, which is itself composed of many sub-functions. NFV relies heavily on virtualization techniques: hypervisors such as Xen and KVM, or container technologies such as Linux containers and Docker. These virtualization technologies have become popular for deploying services on the web, and have caught the attention of the telecom industry. The promise of NFV is that network elements can be deployed, scaled and migrated more easily, mainly because virtualized instances are much more flexible in deployment, resource allocation and other respects than hardware-based functions.
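As a minimal illustration of that flexibility, the sketch below starts and removes a containerized network function using the Docker SDK for Python. The image name and port mapping are hypothetical placeholders, not tooling from the project:

```python
# Minimal sketch: launching a containerized network function with the
# Docker SDK for Python (pip install docker). The image name and port
# are hypothetical placeholders for any VNF packaged as a container.
import docker

client = docker.from_env()

# Start a hypothetical containerized proxy/firewall in the background.
vnf = client.containers.run(
    "example/vnf-proxy:latest",   # hypothetical VNF image
    detach=True,
    ports={"8080/tcp": 8080},     # expose the function's service port
    name="vnf-proxy-1",
)
print(vnf.status)

# Scaling, migrating or retiring the function is a matter of
# starting and stopping instances like this one.
vnf.stop()
vnf.remove()
```

Compared to racking and cabling a hardware appliance, the same lifecycle here is a handful of API calls, which is exactly the flexibility NFV promises.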

NFV has received significant attention from industry and from standardization bodies such as the European Telecommunications Standards Institute (ETSI). ETSI NFV has defined a reference architecture for NFV and chosen the OpenStack cloud platform as a reference software implementation. Many network functions can already run in virtual machines on this platform, but some hit performance issues. For example, packet forwarding functions such as routing, switching and firewalls often reach throughput bottlenecks, due to the overhead of the virtualization layer and/or kernel-space traversal in the operating system. In addition, NFV places more demanding performance requirements on the virtualization technologies in terms of reliability and stability, triggering further research. On the other hand, many functions in the telecom industry can already run on virtualized infrastructure such as VMware or OpenStack without major performance setbacks. NFV is therefore a promising emerging technology for telecom operators, as it makes network deployment more granular and flexible. It can co-exist with existing physical network functions and is expected to be introduced gradually, replacing physical network functions with virtualized ones and making deployment more flexible and agile.

Cloud-RAN (C‑RAN)

C‑RAN is a technology that moves radio access network functions into a centralized cloud located near multiple radio access points. Functions needed for wireless transmission, such as modulation and error correction coding, are performed at this centralized location; the processed signals are sent over a fibre link to Remote Radio Heads (RRHs) that amplify them and transmit them over the air. The aim of C‑RAN is to reduce overall energy usage and to enable small RRHs that can be deployed in the space-constrained settings operators often face. C‑RAN also eases the deployment of small-cell configurations, thanks to the smaller RRHs and to inter-cell interference mitigation techniques. Lastly, C‑RAN can expose details of the radio link status to the mobile edge cloud, enabling improved transmission schemes in mobile edge computing. This makes C‑RAN an important technology for a high-performance mobile access network.

Edge computing

Edge computing enables services to run at the edge of the network, close to the access network (e.g. the mobile radio access network). These services often run on small edge clouds, called cloudlets, deployed on this infrastructure. Cloudlets have been explored as a way to enable mobile cloud-based applications with reduced response times: services running on them benefit from higher bandwidth and lower latency thanks to the reduced physical and hop distance. Commercial products for edge computing are already available, such as Akamai Cloud Computing, Nokia Liquid Applications and Intel Network Edge Virtualization. Furthermore, an ongoing standardization effort in ETSI on mobile edge computing could enable more universal deployment of cloudlets in the mobile network. Edge computing is a key enabling technology for the low latency and high bandwidth targeted in 5G wireless networks.
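A back-of-the-envelope calculation shows why physical distance matters; the distances below are assumptions chosen purely for illustration:

```python
# Round-trip propagation delay over fibre, comparing a distant core
# data center with a nearby edge cloudlet. Distances are illustrative.
SPEED_IN_FIBRE = 2.0e8  # metres/second, roughly 2/3 the speed of light

def rtt_ms(distance_m: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_m / SPEED_IN_FIBRE * 1000

print(f"core DC at 1500 km: {rtt_ms(1.5e6):.2f} ms")   # ~15 ms
print(f"edge cloudlet at 15 km: {rtt_ms(1.5e4):.2f} ms")  # ~0.15 ms
```

Propagation delay alone already dwarfs a sub-millisecond latency budget when the service sits in a remote data center, which is why moving computation to the edge is essential for the most latency-sensitive 5G applications.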

Software Defined Networking (SDN)

SDN is an emerging network technology for the routing and forwarding of packets. Traditional Internet routers make decisions based on topology information that is often local or pre-programmed. SDN separates packet forwarding from routing logic: (less intelligent) forwarding nodes called SDN switches carry the traffic, while centralized control logic, the SDN controller, keeps a more global view of the topology and instructs the forwarding behaviour of the switches. With this global view, switches along end-to-end paths can be configured to provide better end-to-end Quality of Service guarantees. SDN can also be used to virtualize networks, by allocating slices of switch queue/line bandwidth to specific streams. Communication between the SDN controller and the switches is enabled by the open OpenFlow protocol, and the larger OpenDaylight platform provides a full SDN platform including a controller. OpenFlow is already supported in switching hardware from major vendors such as Cisco. SDN is increasingly deployed in practice, although some issues remain to be resolved, such as scalability across many autonomous systems/regions.
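To make the controller/switch split concrete, here is a minimal OpenFlow 1.3 controller application written with the open source Ryu framework (chosen here purely for illustration). On connection, it installs a "table-miss" rule on each switch so that packets matching no other rule are sent up to the controller for a decision:

```python
# Minimal OpenFlow 1.3 controller app using the Ryu SDN framework.
# On switch connect, install a table-miss rule so unmatched packets
# are forwarded to the controller for a centralized decision.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class MinimalController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()  # wildcard: matches every packet
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        # Priority 0: fires only when no other flow entry matches.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Run with `ryu-manager` against OpenFlow-capable switches (or a Mininet emulation); the controller can then react to the packets it receives by installing more specific flow rules, which is exactly the centralized control SDN provides.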

The Superfluidity approach

The aim of the Superfluidity project is to combine these promising emerging technologies into a next generation network architecture that can be characterized as "superfluid". A superfluid network is highly adaptive, with extremely fine granularity and near-instant response times. The Superfluidity framework will partially expose these technologies via APIs, enabling applications to optimize their resource usage and deployment. This could reduce over-provisioning, so that the correct amount of resources is allocated at each instant. To break down the concept, we highlight five important aspects of network superfluidity and how the Superfluidity project achieves them.

First, superfluidity implies near-instant instantiation of services and resources. This can be achieved, for example, by running services or network functions on minimalistic virtual machines that start almost instantaneously. The Superfluidity project is experimenting with minimalistic virtual machines based on TinyX, a tiny Linux implementation (<5 MB), and with container technologies such as Docker. By deploying functions as tiny virtual machine images, rapid instantiation can be achieved: in experiments, virtual machines with a network function installed on them can be loaded within milliseconds. In addition, this enables more granular and resource-efficient Network Function Virtualization, since less overhead is wasted on keeping a full operating system running.
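The effect is easy to approximate on any machine with Docker installed. The sketch below times cold-starting a stock minimal container; the project's millisecond-scale results rely on far leaner TinyX/unikernel images, so expect higher numbers from this stand-in:

```python
# Rough measurement of container cold-start time, using a stock Alpine
# image as a stand-in for a minimal VNF image. Pull the image first
# (docker pull alpine:latest) so the timing excludes the download.
import time
import docker

client = docker.from_env()

start = time.perf_counter()
# Blocks until the container has started, run `true` and exited.
client.containers.run("alpine:latest", ["true"], remove=True)
elapsed = time.perf_counter() - start

print(f"container started, ran and exited in {elapsed * 1000:.0f} ms")
```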

Second, superfluidity implies scaling from one to many users, with fine granularity, near instantly. Services can initially be deployed in a minimalistic manner (such as on a tiny VM) and, when demand increases, scale on the fly to multiple or larger virtual machines. This ability to scale is one of the important characteristics of a superfluid network. To achieve it, telemetry (monitoring of service performance) and orchestration (allocation of new virtual resources) are integrated in a feedback loop. Telemetry solutions such as OpenStack's Ceilometer and/or Intel's Snap telemetry framework are used to monitor services running in the network.

Telemetry data contains information on virtual machine utilization, such as CPU utilization, network I/O and memory usage. This data is fed into an analytics pipeline that computes the most important performance indicators for the specific service. This is done by matching the telemetry data with key performance indicators (for example, client-side measurements known to affect user experience) and performing a principal component analysis. The telemetry can then be interpreted in terms of its most critical components, and "alarms" can be triggered online to upscale or downscale resources. The scaling itself is orchestrated with Nokia's CloudBand solution for OpenStack-based virtual infrastructure, which detects the alarm and spins up additional virtual machines (scaling out) or increases the memory/CPU of existing ones (scaling up). This feedback loop of telemetry and orchestration enables near-instant scaling of network functions.
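Sketched in Python, the loop looks roughly as follows. The `collect_metrics` and `orchestrator_scale` functions are hypothetical placeholders standing in for the Snap/Ceilometer collectors and the CloudBand orchestration API; the PCA step mirrors the principal component analysis described above:

```python
# Sketch of the telemetry/orchestration feedback loop. collect_metrics()
# and orchestrator_scale() are hypothetical placeholders for the real
# Snap/Ceilometer collectors and the CloudBand orchestration API.
import time
import numpy as np
from sklearn.decomposition import PCA

SCALE_OUT_THRESHOLD = 0.8
SCALE_IN_THRESHOLD = 0.2

def collect_metrics(window: int = 60) -> np.ndarray:
    """Hypothetical: a (samples x features) matrix of CPU utilization,
    network I/O, memory usage, ... over the last `window` samples."""
    raise NotImplementedError

def orchestrator_scale(delta: int) -> None:
    """Hypothetical: request `delta` extra (or fewer) instances."""
    raise NotImplementedError

def health_indicator(samples: np.ndarray) -> float:
    # Project raw telemetry onto its first principal component, i.e.
    # the direction that explains most of the variation in the load.
    scores = PCA(n_components=1).fit_transform(samples)
    lo, hi = scores.min(), scores.max()
    # Normalize the most recent score to [0, 1] within the window.
    return float((scores[-1, 0] - lo) / (hi - lo + 1e-9))

while True:
    load = health_indicator(collect_metrics())
    if load > SCALE_OUT_THRESHOLD:
        orchestrator_scale(+1)   # alarm: spin up an extra instance
    elif load < SCALE_IN_THRESHOLD:
        orchestrator_scale(-1)   # alarm: release an instance
    time.sleep(30)
```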

Third, superfluidity implies that services can run in the best possible location (the edge or the core) and shift seamlessly between these locations. This can be implemented by integrating mobile edge computing as a complement to running services in the core data center, so that services run on the edge node closest to the bulk of end users, improving performance. To implement this in practice, Superfluidity is exploring early implementations of the ETSI MEC framework and their integration with the OpenStack-based core data center cloud. The project also includes a state-of-the-art C‑RAN deployment with a high-performance radio link that can provide useful information on radio link status to enhance instances deployed at the edge. For future networks, it will be important that both services and network functions can be deployed, virtualized, in the best possible location.
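One simple placement policy consistent with this idea: among candidate sites (edge clouds and the core), pick the one that minimizes latency weighted by where the users are. The site names, user counts and latency figures below are made-up assumptions for the sake of the example:

```python
# Illustrative placement policy: choose the site (edge or core) with
# the lowest user-weighted latency. All numbers are hypothetical.
sites = {
    # site: {user region: one-way latency in ms}
    "edge-amsterdam": {"amsterdam": 2,  "rotterdam": 5,  "berlin": 18},
    "edge-berlin":    {"amsterdam": 18, "rotterdam": 20, "berlin": 2},
    "core-dc":        {"amsterdam": 12, "rotterdam": 13, "berlin": 14},
}
users = {"amsterdam": 700, "rotterdam": 200, "berlin": 100}

def weighted_latency(latencies: dict) -> float:
    """Average latency experienced across the user population."""
    total = sum(users.values())
    return sum(users[r] * latencies[r] for r in users) / total

best = min(sites, key=lambda s: weighted_latency(sites[s]))
print(best)  # 'edge-amsterdam' for this user distribution
```

As the user distribution shifts, re-evaluating the same policy would trigger a seamless migration of the service to another site.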

Fourth, superfluidity must extend to services that require hardware acceleration. Some services can run on virtualized (x86) infrastructure, but for others hardware acceleration is critical: many functions are best implemented on GPUs or FPGA-based hardware, which is not well supported in virtualized instances. To this end, the Superfluidity project introduced the concept of re-usable functional blocks. Instead of virtual network functions (VNFs) built purely from custom virtual machines, VNFs are composed of blocks that may also include hardware-accelerated subparts, possibly running on dedicated hardware. This enables VNFs that previously could not easily be deployed: by combining "pure" virtualized entities with pre-allocated services running on dedicated hardware, performance can be achieved while maintaining some of the superfluid characteristics in resource allocation.
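As a small data-structure sketch (with hypothetical names throughout, not the project's actual decomposition model), a VNF can be described as a chain of such blocks, each flagged by whether it needs dedicated hardware, with a placement step mapping accelerated blocks onto accelerator-equipped resources:

```python
# Sketch of a VNF composed of re-usable functional blocks, some of
# which require hardware acceleration. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    accelerated: bool = False  # True -> needs a GPU/FPGA-equipped node

# A hypothetical video-transcoding VNF as a chain of blocks.
vnf = [
    Block("packet-classifier"),
    Block("decrypt", accelerated=True),    # e.g. crypto offload on FPGA
    Block("transcode", accelerated=True),  # e.g. GPU-based transcoding
    Block("packager"),
]

def place(blocks):
    """Accelerated blocks go to pre-allocated dedicated hardware,
    the rest to plain virtualized x86 capacity."""
    return {b.name: ("accelerator-pool" if b.accelerated
                     else "x86-vm-pool")
            for b in blocks}

print(place(vnf))
```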

Lastly, an important aspect of running a superfluid network that deploys new services and network functions on the fly is security and verification. Deploying some functions could break the network or have serious security implications: new VNFs could break certain flows, or even cause network connectivity failures and security breaches. To this end, the Superfluidity project is exploring methods based on symbolic execution and formal verification to catch such failures before the actual deployment happens. With these methods it becomes possible to check and verify that an action will not break or corrupt the network before it is executed, an essential safeguard for realizing superfluidity with all these emerging technologies in realistic situations.
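As a toy illustration of the idea (a drastic simplification of the symbolic-execution and formal-verification tools the project studies), the check below verifies that a proposed flow-rule update does not silently blackhole traffic that the current rules forward:

```python
# Toy pre-deployment check: before installing a new rule set, verify
# that no previously reachable destination loses its forwarding entry.
# A drastic simplification of real network verification techniques.
import ipaddress

def reachable(rules, destinations):
    """Return the destinations covered by at least one forwarding rule."""
    return {
        dst for dst in destinations
        if any(ipaddress.ip_address(dst) in ipaddress.ip_network(prefix)
               for prefix, _port in rules)
    }

current  = [("10.0.0.0/8", "port1"), ("192.168.1.0/24", "port2")]
proposed = [("10.0.0.0/8", "port1")]  # update drops the /24 rule
destinations = ["10.1.2.3", "192.168.1.10"]

lost = reachable(current, destinations) - reachable(proposed, destinations)
if lost:
    # Prints: refusing deployment, would blackhole: {'192.168.1.10'}
    print(f"refusing deployment, would blackhole: {lost}")
```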

To summarize, the 5G Superfluidity effort constitutes a large architectural change to the mobile network that holds great promise of reaching some of the extremely ambitious performance goals of 5G (latency under 1 millisecond, 10 – 100 times higher data rates, 10 – 100 times more devices, 1000 times higher mobile data volume) for some applications. It envisions how technologies like SDN, NFV, edge computing and Cloud-RAN can be integrated in the future and provides the glue to make them work together. The performance targets are to be achieved through more efficient resource usage and location flexibility, complemented by improved mobile radio link bandwidth. Furthermore, we expect this architecture to considerably reduce over-provisioning and to enable more efficient, tailored network and resource usage in 5G-enabled applications.

For more information about the Superfluidity project, please go to superfluidity.eu.
