Edge computing offers a new approach to an ancient question and gives service providers immense freedom to solve problems and create value. Read on to learn about edge computing and see how two proof-of-concept systems in the Kyrio NFV Interoperability Lab put the power of edge computing to work.
One of the oldest questions in computing is, “Where do I put my computer?”
Mainframes and supercomputers need special buildings, but cell phones fit in the back pocket of a pair of jeans. A tablet or laptop can move from place to place in a satchel or backpack. A workstation sits under a desk, and a database server is almost always behind a locked door. The Internet of Things (i.e., tiny computers) can put powerful, connected devices on everything from your porch camera to your dog’s flea collar. Today’s computers are everywhere!
What Is Edge Computing?
That is the notion of edge computing—the placement of compute, storage and networking resources close to the user at the location where they can most efficiently, securely and economically accomplish their functions.
Your smartphone has a camera and earbuds, and you probably keep your recent pictures and favorite tunes on it. But your entire photo and music collection might well be stored somewhere else, perhaps in the “cloud” … and where exactly is that? You probably don’t think about it, instead leaving that to service providers to figure out.
That is precisely what the industry is up to right now: figuring out where to put the right resources in the right places so that you can text, talk, browse, buy, bank, store, work and play … anywhere … and without delay.
What Is Network Latency?
Network latency is the time it takes for a data packet to travel from one place to another across a local network or the whole Internet, and then for a response to come back. For example, suppose you want to watch a video clip. Here’s what happens:
1. You tap “play.”
2. Your request travels across the network to the video server.
3. The server sends the video data back to you.
4. Video plays! : )
5. Video stops! : (
6. Jerk, pause, stutter.
7. Video starts again (repeat steps 4–6).
That’s the problem with latency: It interferes with the user experience.
One way to reduce latency is to move resources closer to the user, shortening the time necessary for packets to go back and forth and affording less opportunity for them to get lost. An entire generation of equipment and applications is in production now to help service providers put exactly the right number and type of resources anywhere in their network.
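To see why moving resources closer helps so much, here is a rough back-of-envelope sketch. The distances, hop counts and per-hop overhead below are illustrative assumptions, not measurements from any real network:

```python
# Back-of-envelope: how much does moving a server closer cut round-trip time?
# All numbers below are illustrative assumptions.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels roughly 200 km per millisecond in fiber

def round_trip_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    """Propagation delay there and back, plus queuing/processing at each hop."""
    propagation = 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS
    return propagation + hops * per_hop_ms

# A distant central server vs. an edge cache a few kilometers away:
central = round_trip_ms(distance_km=2000, hops=12)
edge = round_trip_ms(distance_km=20, hops=3)

print(f"central: {central:.1f} ms, edge: {edge:.1f} ms")
# central: 26.0 ms, edge: 1.7 ms
```

Even with these made-up numbers, the point stands: shrinking both the distance and the number of hops cuts the round trip by an order of magnitude, which is exactly what keeps the video in step 4 from reaching steps 5 and 6.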
Very high-density compute systems with minimal space, power and cooling requirements are entering the market and enabling network operators to locate compute, network and storage resources anywhere. On a shelf at the local Starbucks? No problem! In the basement of an apartment building? Of course! In a closet at the mall or movie theater? Done! In one of those green boxes behind your backyard fence? Sure thing!
The network used to look like boxes strung together in a fixed grid. Today, it’s starting to look more like an amoeba with intelligent pseudopods extending dynamically to wherever they’re needed.
SDN/NFV Proofs of Concept at the Kyrio NFV Interoperability Lab
Now that you have an understanding of edge computing and network latency, here are two examples that demonstrate this powerful approach to networking and resource deployment. Both demos are hosted in the Kyrio NFV Interoperability Lab, and you can learn more about them at the SCTE Cable-Tec Expo in Atlanta, Georgia, next week at booth #713.
The first system is sponsored by Intel and shows Qwilt Corporation’s Edge Cloud video caching system, running on enterprise-class Intel servers.
The quality of video content improves substantially as the source of the content moves closer to the consumer. In Qwilt’s Edge Cloud cache system, content is available in a central server, or it can be stored in a cache server located at the edge of a service provider’s network, close to the subscriber.
Once the system determines the location of a customer and the desired content, that content can be transferred to the nearest edge cache server. User experience improves because latency, delay variation and packet retransmission are all reduced.
The Qwilt edge cache server is implemented as a software application running on standard, off-the-shelf enterprise-class servers, which enables rapid deployment, flexible placement and easy scaling.
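Qwilt’s actual implementation is proprietary, but the general pattern the paragraphs above describe can be sketched in a few lines: pick the cache nearest the subscriber, serve from it on a hit, and fill it from the central server on a miss. Everything here — the class names, the one-dimensional “distance” model, the content IDs — is a hypothetical toy, not Qwilt’s design:

```python
# Toy sketch of the edge-caching pattern described above (not Qwilt's code):
# serve content from the nearest edge cache, filling it from the origin on a miss.

class EdgeCache:
    def __init__(self, name: str, location_km: float):
        self.name = name
        self.location_km = location_km  # position along a simplified 1-D network axis
        self.store: dict[str, bytes] = {}

class EdgeCloud:
    def __init__(self, origin: dict[str, bytes], caches: list[EdgeCache]):
        self.origin = origin  # central server holding the full content library
        self.caches = caches

    def fetch(self, subscriber_km: float, content_id: str) -> tuple[bytes, str]:
        # Pick the cache closest to the subscriber.
        cache = min(self.caches, key=lambda c: abs(c.location_km - subscriber_km))
        if content_id not in cache.store:            # miss: fill from the origin
            cache.store[content_id] = self.origin[content_id]
        return cache.store[content_id], cache.name   # subsequent fetches are local hits

origin = {"clip-42": b"...video bytes..."}
cloud = EdgeCloud(origin, [EdgeCache("atlanta-edge", 10), EdgeCache("denver-edge", 2000)])
data, served_by = cloud.fetch(subscriber_km=15, content_id="clip-42")
print(served_by)  # atlanta-edge
```

The first fetch pays the trip to the origin once; every later request from that neighborhood is served from the nearby cache, which is where the latency win comes from.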
The second system is sponsored by Wind River and includes that company’s Titanium Cloud, Casa Systems’ vCCAP core and remote PHY node, and Intraway Corporation’s Symphonica Orchestrator. At the hardware level are two high-density server solutions from Aparna Systems and Kontron AG.
In this cable-critical use case, Intraway provides automated provisioning, deployment and management of the virtual CCAP core, Remote PHY nodes and cable modems.
Casa’s vCCAP core is interoperable and can be deployed on either Wind River’s supported Titanium Cloud platform or on CableLabs’ open-source SNAPS OpenStack cloud platform.
This demo shows both cloud platforms running on ultra-high-density server solutions, again enabling compute resources for the vCCAP core to be located anywhere in the network.
Combined with unified operation, management and fault handling at the orchestration layer, these systems show an elegant, highly flexible solution for extending and scaling the DOCSIS® access network.
Solution providers and network operators are laying the foundations for very intelligent distribution of resources and using the power of edge computing to solve problems and create new services. To learn more about the Kyrio NFV Interoperability Lab, visit me next week at SCTE at the CableLabs and Kyrio booth #713.
And check back here for future posts about new and interesting use cases. Virtual reality simulators, smart city data analytics, autonomous vehicle management—it’s all on the way to a cable network near you!