Bringing Knowledge to Networks

By Mika Skarp

Telecom engineers and architects have long worked within the established framework of what are known as network "planes". Though there are traditionally just three planes - 1) The User, Data, Carrier or Bearer Plane, 2) The Control Plane and 3) The Management Plane - we've arrived at the point where a fourth plane must be brought into the fold.

In the traditional framework, the primary division is between the User and Control planes, which are responsible for the network payload (as we used to call it) and signalling respectively, with Management as a support plane for Control. Together, they have provided an effective framework for engineers and operators, but today's networks have evolved to the point that a new framework must be developed. Requirements are getting tighter, and that means networks need to develop more "baked-in" intelligence. They need to serve an increasing number of mission-critical operations all at the same time.

Extended to the world of OTT, we're talking about assembly-line robotics and even hospital operating-theater surgery. This means that a new layer is needed on top of the existing layers to accommodate an enormously more complex set of bearers.

Well, it seems the new plane is already here.

According to ETSI GANA (Generic Autonomic Networking Architecture), the new plane is called the Knowledge Plane. Its role is to scale network resources up and down based on the requirements of the services in different traffic classes running in the network. At present, we can identify three traffic classes in any information network: Best Effort, Capacity Sensitive, and Delay Sensitive. The difference is that in Capacity Sensitive situations we can use significant buffers and re-transmission to deliver the payload (think one-way video streaming or large file transfers), while in Delay Sensitive situations (like those involving two-way, symmetrical audio or video communications, or live broadcasts) we can't, as these connections require precise timing.
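As a rough sketch, the three traffic classes could be modelled as follows. The field names and thresholds here are illustrative assumptions, not part of any ETSI specification:

```python
from dataclasses import dataclass
from enum import Enum, auto

class TrafficClass(Enum):
    BEST_EFFORT = auto()         # no guarantees; served when resources allow
    CAPACITY_SENSITIVE = auto()  # throughput matters; buffering/retransmission OK
    DELAY_SENSITIVE = auto()     # precise timing; buffering defeats the purpose

@dataclass
class ServiceRequirements:
    """Illustrative per-service requirements (names are assumptions)."""
    min_throughput_mbps: float  # sustained rate the service needs
    max_latency_ms: float       # delay the service can tolerate
    interactive: bool           # two-way, real-time traffic?

def classify(req: ServiceRequirements) -> TrafficClass:
    """Map a service's stated requirements onto the three traffic classes."""
    if req.interactive or req.max_latency_ms < 100:
        return TrafficClass.DELAY_SENSITIVE    # lost timing can't be recovered
    if req.min_throughput_mbps > 1.0:
        return TrafficClass.CAPACITY_SENSITIVE  # buffer and retransmit instead
    return TrafficClass.BEST_EFFORT

# One-way video streaming: high throughput but delay-tolerant
streaming = ServiceRequirements(min_throughput_mbps=5.0, max_latency_ms=5000, interactive=False)
# Two-way video call: timing is critical
video_call = ServiceRequirements(min_throughput_mbps=2.0, max_latency_ms=150, interactive=True)
```

The point of the sketch is that the class is derived from requirements, not declared by the user, which is exactly the knowledge the new plane needs to hold.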

To make this system work, there needs to be a network element that balances the supply and demand of resources. The goal should be that every application gets the service it needs to work as planned, and that there is no shortage of resources. Of course, the question will arise: "what will be adequate to deliver best effort?" This is because any one service, or a set of services, can use all the available resources. There is currently no way at the network-plane level to limit how often users are allowed to, say, synchronize their email, Facebook, Instagram, Slack or WhatsApp accounts. (Actually, there is, but please read on.)

In an increasingly dynamic user environment, a Knowledge layer is needed to enable autonomous control of the network. With that in mind, what kind of information should we be feeding into the knowledge plane? On the supply side, it is basically an exercise in load-balancing network slices, reading the actual capacity of the radio interface at the cell level, and ensuring the ability to offload requests where required and/or possible. This is fairly straightforward. Yes, it involves a lot of data, a lot of big data in fact, but it is well-structured, controlled data.
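The supply-side bookkeeping can be sketched as a simple per-cell headroom calculation. The function names, the flat-capacity model, and the 10% safety margin are all simplifying assumptions for illustration:

```python
def cell_headroom(cell_capacity_mbps: float, slice_loads_mbps: list[float]) -> float:
    """Capacity remaining on a cell after all current slice loads (illustrative)."""
    return cell_capacity_mbps - sum(slice_loads_mbps)

def should_offload(cell_capacity_mbps: float, slice_loads_mbps: list[float],
                   incoming_demand_mbps: float, safety_margin: float = 0.1) -> bool:
    """Offload a request when serving it locally would eat into the safety margin."""
    headroom = cell_headroom(cell_capacity_mbps, slice_loads_mbps)
    return incoming_demand_mbps > headroom - safety_margin * cell_capacity_mbps
```

In a real network, the capacity figure would itself be a moving target read from the radio interface, which is what makes this a big-data exercise even though the logic per cell is simple.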

The demand side of the equation is something quite different. Here we need to understand user needs in significant detail. Theoretically, we could ask the user a series of questions like "What bandwidth do you want, up and down, in Mbps?", "What is your jitter tolerance?", "How much delay can you handle?" and, in the future, "What kind of caching and mobile edge processing would you allow?". Perhaps needless to say, even if the average user knew the answers, this would be practically impossible and not necessarily helpful.

Another approach to solving this problem is to be "aware" of the application. Today, all information is accessed, delivered, shared or processed in some way by an application. The application in use can tell the network a great deal of important things in real time. It holds the key to knowing what the demand side of the network-services equation needs. And because applications aren't only about the content they deliver, but also about their location, the user, and the specific device being used to access them, we can also differentiate applications based on the situations or contexts in which they are used.

An easy example of this would be Netflix and Periscope, two applications that can both be used simultaneously on the same device, but that require very different treatment by the network. To deliver on the requirements of these two apps, possibly both at the same time, their unique application footprints would need to be translated into profiles accessible to the knowledge layer of the network, down to a given cell within it.
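One way to picture such profiles is as a small table the knowledge layer might consult. The parameter values below are invented for illustration; real application footprints would be measured, not hand-written:

```python
# Hypothetical application profiles held by a knowledge layer.
APP_PROFILES = {
    # One-way streaming: buffers absorb delay, downlink throughput matters
    "netflix":   {"downlink_mbps": 5.0, "uplink_mbps": 0.1,
                  "max_delay_ms": 5000, "max_jitter_ms": 500},
    # Live broadcast from the handset: uplink-heavy and timing-critical
    "periscope": {"downlink_mbps": 0.5, "uplink_mbps": 2.0,
                  "max_delay_ms": 200,  "max_jitter_ms": 30},
}

def aggregate_demand(active_apps: list[str]) -> dict:
    """Combine the requirements of apps running concurrently on one device."""
    profiles = [APP_PROFILES[app] for app in active_apps]
    return {
        "downlink_mbps": sum(p["downlink_mbps"] for p in profiles),
        "uplink_mbps":   sum(p["uplink_mbps"]   for p in profiles),
        # the strictest timing requirement governs the shared radio resources
        "max_delay_ms":  min(p["max_delay_ms"]  for p in profiles),
        "max_jitter_ms": min(p["max_jitter_ms"] for p in profiles),
    }
```

Note how running both apps at once sums the throughput demands but tightens the timing constraint to Periscope's, which is why the network must treat the combination differently from either app alone.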

A more challenging situation would be one in which two policemen are on duty in the same location, but one of them is conducting traffic and the other is negotiating a hostage situation. They may be using the same communications tool or application, but in wildly different contexts, requiring unique profiles. This example suggests that in certain use cases, the profile should be signaled from somewhere other than the user, perhaps most appropriately from a police headquarters' dispatcher.
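A minimal sketch of dispatcher-signalled context, assuming a hypothetical context table keyed by officer; the names and parameter values are invented for illustration:

```python
# Hypothetical context profiles: the same communications app is treated
# differently depending on a context set by the dispatcher, not the user.
# Lower priority numbers mean higher priority.
CONTEXT_PROFILES = {
    "routine":  {"uplink_mbps": 0.5, "max_delay_ms": 400, "priority": 2},
    "critical": {"uplink_mbps": 4.0, "max_delay_ms": 100, "priority": 0},
}

def profile_for(officer_id: str, dispatch_contexts: dict[str, str]) -> dict:
    """Look up an officer's context in the dispatcher's table; default to routine."""
    context = dispatch_contexts.get(officer_id, "routine")
    return CONTEXT_PROFILES[context]

# Two officers, same app: only the hostage negotiator is flagged critical
contexts = {"officer_2": "critical"}
```

The design point is that the knowledge layer consumes the profile regardless of who signalled it, so moving the signalling from the handset to the dispatcher changes nothing else in the chain.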

The desired end result in both of these cases (laying aside their radically different ranking on the public importance scale) is the ability to deliver extremely reliable services. Networks can only be fine-tuned, and deliver service quality, based on the information made available to them. This is the basis for the argument for a new network plane paradigm that adds a Knowledge Plane to the mix. Without the ability to deliver real-time resource-requirement information from the demand side, arguably via applications, reliability will continue to be an issue, and best effort will remain the service standard of the day.

Although the Knowledge Layer concept is still very much that - conceptual - Cloudstreet has already delivered the world's first (and, to our knowledge, only) knowledge-layer product: the Dynamic Profile Controller (DPC).

As described in our examples, it allows any data service, be it Periscope, Netflix or a metropolitan police dispatch system, to become a network-communicating profile, creating a dedicated bearer tailored to the application's specific needs. The profile can be called from any end device, and profile-based services triggered and delivered to the user, provided the cell's capacity is adequate. Note that in the case of public health and safety services, capacity issues do not come into play, as they will always be prioritized above Netflix. Go figure!
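That prioritization amounts to admission control with preemption. The sketch below is an illustrative model of the idea, with lower numbers meaning higher priority; it is not the DPC's actual algorithm:

```python
def admit(demand_mbps: float, priority: int,
          active: list[tuple[float, int]], capacity_mbps: float):
    """Try to admit a bearer, preempting lower-priority bearers if needed.

    `active` is a list of (demand_mbps, priority) pairs; lower priority
    numbers are more important. Returns the new bearer list on success,
    or None if the request cannot be admitted.
    """
    used = sum(demand for demand, _ in active)
    if used + demand_mbps <= capacity_mbps:
        return active + [(demand_mbps, priority)]
    # Not enough headroom: evict from the least important bearers upward.
    survivors = sorted(active, key=lambda bearer: bearer[1])  # most important first
    while survivors and used + demand_mbps > capacity_mbps:
        victim = survivors[-1]
        if victim[1] <= priority:  # cannot preempt equal or higher priority
            return None
        survivors.pop()
        used -= victim[0]
    if used + demand_mbps <= capacity_mbps:
        return survivors + [(demand_mbps, priority)]
    return None
```

In this model a public-safety bearer (priority 0) arriving at a full cell evicts a best-effort Netflix stream (priority 3), while the reverse request is simply refused.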

As we work to further develop the DPC, many interesting scenarios and possibilities emerge.

If nothing else, all of this should make the case for my strong belief that 5G is more about adding the knowledge layer to our networks than about adding new modulation schemes and frequencies, as has been the hallmark of earlier telecom generation shifts.