Most modern devices, applications, and services make extensive use of cloud computing resources such as data centers and CDNs. Nearly all of the hardware behind cloud computing is built from the same essential components: CPUs, GPUs, RAM, network interfaces, and non-volatile storage. But there is much more to computing, especially networking, that determines how a networked resource can actually be used.
Networks shape two properties that matter enormously to cloud computing: latency (the delay before transmitted data arrives, measured in milliseconds) and bandwidth (the amount of data transmitted per second, measured in Kb/s, Mb/s, or Gb/s). These two factors can determine whether an application or service is reliable on a given network, or even viable at all. Multi-access Edge Computing (MEC) is a term born from the ongoing rollout of 5G networks, which use mobile edge computing to meet ultra-low-latency needs.
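As a rough illustration of how these two factors interact, the delivery time for a payload can be modeled as propagation latency plus serialization time (payload size divided by bandwidth). The function and figures below are a simplified sketch with made-up numbers; real networks add jitter, protocol overhead, and round trips:

```python
def transfer_time_ms(payload_bytes, bandwidth_mbps, latency_ms):
    """Rough one-way delivery time: propagation latency plus serialization."""
    payload_bits = payload_bytes * 8
    serialization_ms = payload_bits / (bandwidth_mbps * 1_000_000) * 1000
    return latency_ms + serialization_ms

# A 1 MB payload over a 100 Mb/s link, comparing a distant datacenter
# (50 ms latency) with a hypothetical edge node (5 ms latency):
core = transfer_time_ms(1_000_000, 100, 50)  # ~130 ms
edge = transfer_time_ms(1_000_000, 100, 5)   # ~85 ms
```

Note that for a large payload the serialization time (80 ms here) dominates, which is why bandwidth and latency both matter: the edge only removes the propagation component.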
Multi-access edge computing was conceptualized because telecom network operators already have a massive amount of infrastructure in place.
Not all computers and processors are created equal. GPUs, for example, provide the most efficient engine for running the machine vision applications used for obstacle avoidance, prediction, and artificial intelligence, with the tradeoff that they consume the most power. CPUs can churn through almost any task and run routine processes for orchestration, control, data collection and archiving, compression, general reporting, and anything else you throw at them. Non-volatile storage is useful for storing and transmitting large videos and other files, while volatile memory like RAM is used for databases and file caches because it is thousands of times faster than SSDs or HDDs, albeit far more expensive per bit. Together, these components provide the raw power and capacity modern computers have today.
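The "thousands of times faster" gap between RAM and non-volatile storage can be sketched with rough order-of-magnitude access latencies. The figures below are illustrative assumptions, not measurements, and vary widely by hardware:

```python
# Rough, assumed access-latency figures (order of magnitude only).
ACCESS_LATENCY_NS = {
    "RAM": 100,           # ~100 ns for a DRAM access
    "NVMe SSD": 100_000,  # ~100 microseconds for a random read
    "HDD": 10_000_000,    # ~10 ms for a seek
}

def speedup(faster, slower):
    """How many times faster one tier is than another, by access latency."""
    return ACCESS_LATENCY_NS[slower] / ACCESS_LATENCY_NS[faster]

speedup("RAM", "NVMe SSD")  # ~1000x: the "thousands of times" gap
```

This hierarchy is why edge nodes serving latency-sensitive workloads keep hot data in RAM and relegate bulk video and archives to SSD or HDD.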
The benefits of low latency and high bandwidth at the network edge enable and empower crucial applications, internet services, and devices with next-generation network and compute capabilities. In this article we will look at four of the main reasons multi-access edge computing will pave the way toward the future of the Internet of Things.
1. UHD/4K+ Video Streaming and Surveillance
Video already makes up the majority of internet traffic (roughly 56%), and that share is only set to grow as more 4K and even 8K content becomes available. MEC and software-defined networks are among the only viable ways to handle such massive amounts of traffic for on-demand internet services. Netflix, the largest video-content provider on the web, accounts for about 15% of all traffic globally (and up to 40% of US traffic at peak), with YouTube following at 11.4% globally.
To handle this, Netflix already deploys its 240TB storage appliances directly inside ISP networks all over the world, essentially an edge CDN. With other video providers like Amazon Prime Video, Hulu, and Twitch gaining ground, opening up the network edge to video providers is the only way to sustain the growth 4K+ video will demand in the foreseeable future.
Video surveillance can also benefit greatly from edge computing: processing heavy video feeds into actionable snippets and reports at the edge is far cheaper in terms of both bandwidth and long-term datacenter storage.
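As a back-of-the-envelope sketch of those savings, consider a hypothetical 15 Mb/s 4K camera: streaming it raw around the clock versus having an edge node forward only motion-triggered clips changes the upstream volume by two orders of magnitude. All figures here are assumptions for illustration:

```python
def daily_upload_gb(bitrate_mbps, active_seconds):
    """Data sent upstream per day at a given bitrate (Mb -> GB)."""
    return bitrate_mbps * active_seconds / 8 / 1000

# Hypothetical 15 Mb/s 4K camera: raw 24/7 streaming vs. an edge node
# that forwards only ~10 minutes of motion-triggered clips per day.
raw_feed = daily_upload_gb(15, 24 * 3600)  # 162 GB/day to the datacenter
clipped  = daily_upload_gb(15, 10 * 60)    # ~1.1 GB/day of actual events
```

Multiplied across hundreds of cameras, that difference is the gap between an unworkable backhaul bill and a practical deployment.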
2. Machine Vision
On a similar note to video surveillance, machine vision takes things a step further in capability and actionability. GPUs (Graphics Processing Units) and VPUs (Vision Processing Units) are much faster at parallel tasks like video processing and neural networks. While VPUs for low-power devices exist, their cost, limited flexibility, and relative obscurity compared to GPUs give MEC the advantage in several respects.
Advanced drones, for example, use flexible onboard GPUs for their obstacle-avoidance software, often to the detriment of battery life: around 25% of a drone's battery is consumed by the relatively power-hungry GPU and its added weight. If this processing is moved to the network edge, where 5G can provide ultra-low latency and high bandwidth to all devices, the weight, cost, battery life, and capabilities of drones can be greatly improved.
3. Web Application Security and Performance
When it comes to web hosting, Amazon AWS is #1 for a reason: its extensive, widespread infrastructure locates your web content and services very close to the user, one might say almost at the "edge" of the network. It's safe to say, then, that multi-access edge computing can take web applications and hosting to the next level.
Common services and web content can benefit greatly from edge computing and caching. Complaints about bloated websites loading slowly and bogging down less powerful devices are a consequence of ever more complicated web applications and heavier payloads.
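One concrete way an edge node keeps common content close to users is a small least-recently-used (LRU) cache of hot assets. The class below is a minimal sketch; a production edge cache would also honor HTTP cache headers, TTLs, and object sizes:

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache, of the kind an edge node might keep for hot assets."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, url):
        if url not in self._store:
            return None  # cache miss: caller would fetch from origin
        self._store.move_to_end(url)  # mark as most recently used
        return self._store[url]

    def put(self, url, body):
        self._store[url] = body
        self._store.move_to_end(url)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

Serving a hit from a node milliseconds away, rather than a distant origin, is exactly the kind of win that makes heavy pages feel fast on modest devices.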
4. Edge Analytics & AI
We all know that when it comes to processing large amounts of data, data centers exist for a reason. But how cost-effective the total system is can be another matter entirely.
Over-provisioning and burdening datacenters with high volumes of unfiltered information quickly becomes unsustainable and unscalable. Edge analytics takes this to heart by completing most of the processing at the edge, before the data is transmitted to the datacenter. This enables optimal efficiency and empowers analytics with low-latency, high-volume data, all while minimizing the use of valuable resources like internet bandwidth and non-volatile storage.
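A minimal sketch of this filter-then-forward pattern: aggregate raw sensor samples locally and transmit only a compact summary plus any anomalous readings. The `summarize` helper and its threshold are hypothetical names chosen for illustration:

```python
def summarize(readings, threshold):
    """Collapse raw edge samples into a compact record for the datacenter:
    basic statistics plus only the readings that exceed a threshold."""
    anomalies = [r for r in readings if r > threshold]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "anomalies": anomalies,
    }

# Thousands of raw samples become one small record; only the anomalous
# values travel upstream in full.
report = summarize([1.0, 2.0, 3.0, 10.0], threshold=5.0)
```

The datacenter still sees every anomaly at full fidelity, but the routine bulk of the data never consumes backhaul bandwidth or long-term storage.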
Edge analytics already powers offshore oil rigs, deep exploration, manufacturing, next-generation public advertising, cyber-physical security, and much more.
Below is an infographic overview of the most effective applications of multi-access edge computing:
The post Why Multi-Access Edge Computing is the future appeared first on Lanner.