TOP TAKES is IoT Sources’ filtered content channel, bringing you the most important breaking news and notable events surrounding the Internet of Things. Today’s post originated from: www.barrons.com.


One of this week’s Top Takes is from Barron’s, which lays out its forecast for Apple and Tesla leading the use case for edge computing. The article details the current thin client + fat cloud model and why demand, specifically for IIoT architectures, is shifting toward a new model of edge computing with local AI/ML compute. Enjoy the Top Takes read below.


Driven by the rise of artificial intelligence, the fat clouds of Amazon (AMZN) AWS, Microsoft (MSFT) Azure, and Alphabet’s (GOOGL) Google Cloud have been leading the development of machine learning. That’s all serving existing thin smartphones, observes analyst Robert Cihra.

In fact, we’re in the cloud-phone era of AWS plus iPhone, he writes:

We believe the two most important developments in tech over the past decade were Apple’s launch of the iPhone in 2007 and Amazon’s launch of AWS a year earlier in 2006, which have headlined a THIN CLIENT + FAT CLOUD computing model where billions of mobile devices rely on the cloud for data, processing, storage and increasingly now machine learning (ML) for artificial intelligence (AI).

But, driven by a need for lower latency and more local processing, edge computing is poised to come in a big way, he writes:

But we think EDGE COMPUTING looks like the incremental growth opportunity, increasingly necessary to overcome cloud overhead in latency and bandwidth, to enable billions of new IoT end-points and real-time LOCAL AI/ML for autonomous systems. As applications push processing back out to the edge, we see devices bifurcating into either Cloud Conduits or Edge Computers that will require much more local horsepower.

Cihra outlines the ways in which smartphones and other client devices will become smarter, to handle more local processing, and that includes self-driving cars from Tesla and from Google’s self-driving company, Waymo:

- Make machines smarter via real-time on-board AI/ML;
- Make thin-client smartphones fatter as they need more processing+storage for on-device ML and virtual/augmented reality (VR/AR), pushing smartphone configurations/BOM costs and thereby ASPs even higher;
- Enable more frictionless user interfaces (UIs) headlined by Voice+Vision vs. Keyboard+Screen;
- Enable data input from devices that increasingly involve no human interaction at all, e.g., cameras, IoT sensors for location, vibration, temperature, etc.;
- Favor vertically-integrated vendors (hardware+software), particularly early on, e.g., Apple; Tesla; Google now building hardware; GM acquired Cruise.

Cihra sees the push for local power making the hardware evolve from today’s “conduits” to the cloud (Amazon’s Echo is the primary example) to full-blown “edge computers” and “gateways”:

To bridge the distance between Cloud and Edge, a number of parties are also looking at architectures that push computing out closer to the edge but not literally into end-points themselves, using distributed EDGE GATEWAYS, MICRO SERVERS and tiers of FOG COMPUTING, which we see helping improve the economics and scaling particularly of IIoT applications like smart factories, buildings, cities, etc.

From an architectural standpoint, we see a number of enterprise and telecom/networking vendors targeting IIoT and broader edge computing using EDGE GATEWAYS, distributed MICRO SERVERS and tiers of FOG COMPUTING that push processing out closer to the edge, albeit often not inside the ultimate end-point devices, themselves. Sitting between the cloud and network devices, distributed edge servers can connect and aggregate local end-points, but also provide a layer of intelligence and pipe back to cloud services that may still do much of the heavy lifting.
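To make that gateway pattern concrete, here is a minimal sketch (our illustration, not from Cihra’s note) of a hypothetical edge gateway that aggregates readings from local end-points, applies a thin layer of local intelligence, and forwards only compact summaries to a cloud service that still does much of the heavy lifting. All names (Reading, EdgeGateway, push_to_cloud) are made up for illustration:

```python
# Minimal edge-gateway sketch: aggregate local sensor readings, apply a thin
# layer of local intelligence, and forward only summaries to the cloud.
# All names here are illustrative, not taken from the analyst note.
import json
import statistics
import time
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Reading:
    sensor_id: str
    value: float
    timestamp: float


class EdgeGateway:
    def __init__(self, push_to_cloud: Callable[[dict], None], window: int = 60):
        self.push_to_cloud = push_to_cloud   # e.g., an HTTPS POST to a cloud ingest API
        self.window = window                 # seconds of data to aggregate locally
        self.buffer: Dict[str, List[float]] = {}

    def ingest(self, reading: Reading) -> None:
        """Collect raw readings from local end-points; nothing leaves the site yet."""
        self.buffer.setdefault(reading.sensor_id, []).append(reading.value)

    def flush(self) -> None:
        """Local intelligence layer: summarize and flag, then send only the summary upstream."""
        summary = {}
        for sensor_id, values in self.buffer.items():
            mean = statistics.fmean(values)
            summary[sensor_id] = {
                "count": len(values),
                "mean": round(mean, 3),
                "max": max(values),
                "anomaly": max(values) > 1.5 * mean,  # trivial stand-in for real local ML
            }
        self.buffer.clear()
        self.push_to_cloud({"ts": time.time(), "summary": summary})


if __name__ == "__main__":
    gw = EdgeGateway(push_to_cloud=lambda payload: print(json.dumps(payload)))
    for i in range(5):
        gw.ingest(Reading("vibration-01", value=0.2 + 0.1 * i, timestamp=time.time()))
    gw.flush()
```

The point of the pattern is that raw sensor streams stay on-site and only summaries cross the network, which is where the latency, bandwidth and scaling economics for IIoT deployments like smart factories and buildings come from.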

“Inferencing” is one function of machine learning that can migrate from cloud to edge, writes Cihra. Here Nvidia (NVDA) has an early lead, but other forms of compute could take share:

NVIDIA GPUs have emerged as the near de facto choice for accelerating cloud-based ML, now being joined by processors even further optimized for that task (e.g., Google’s TPU, Intel/Nervana). But new-gen processors are also being developed specifically for ON-DEVICE ML/AI, including Apple’s neural engine, Qualcomm’s Snapdragon (NPE), Huawei’s Kirin, Intel’s Movidius, Arm’s Project Trillium, NVIDIA’s DRIVE and Jetson.
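As a rough, hypothetical sketch of what that cloud-to-edge migration of inferencing looks like in practice (none of this code is from the note; the vendor runtimes named in comments are only examples), the same request can either run on a local model or fall back to a hosted service:

```python
# Hedged sketch of pushing "inferencing" from cloud to edge: the same request
# either runs on an on-device model or falls back to a cloud ML endpoint.
# LocalModel and call_cloud_inference are illustrative stand-ins for a vendor
# runtime (Core ML, SNPE, TensorRT, etc.) and a hosted ML API respectively.
import time
from typing import List, Optional


class LocalModel:
    """Stand-in for an on-device neural engine / NPU runtime."""
    def predict(self, features: List[float]) -> int:
        # Trivial threshold "model" so the example runs end to end.
        return int(sum(features) > 1.0)


def load_local_model() -> Optional[LocalModel]:
    # In practice this would probe for a local accelerator/runtime; here we assume one exists.
    return LocalModel()


def call_cloud_inference(features: List[float]) -> int:
    # Placeholder for an HTTPS call to a cloud ML service (latency + bandwidth cost).
    time.sleep(0.05)  # simulate the network round-trip
    return int(sum(features) > 1.0)


def classify(features: List[float]) -> int:
    model = load_local_model()
    if model is not None:
        # Real-time local path: no network hop, raw data never leaves the device.
        return model.predict(features)
    # Fallback: ship the features to the fat cloud and wait for the answer.
    return call_cloud_inference(features)


print(classify([0.4, 0.9]))  # -> 1, computed on-device
```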

Tesla is “ahead of the curve” in making autos an edge computing device, he writes:

- A self-driving car cannot be “programmed” to drive but rather needs to think and act for itself, and cannot rely on the cloud but rather needs to process streams of sensor data and complex neural net pipelines in real time. Cloud connectivity is used just periodically for data sharing and training using real-world driving data (what Tesla calls “fleet learning”).
- Horsepower takes on new meaning, as we estimate an autonomous car will require 50-100X the processing power and >10X the DRAM+NAND of an ADAS car today.
- We think Tesla has been ahead of the curve in using its connected fleet of customer cars for shared ML and building an in-house model (e.g., replaced Mobileye) that adds complexity, risk and cost, but also ultimate leverage. Powerful on-board compute processes data from cameras, radar and sonar, which is periodically (e.g., at home, at night via WiFi) uploaded to the cloud for training; by our estimate Tesla already has the capability to log ~9mil miles of real-world driving data per day.
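For readers who want the shape of that “fleet learning” loop spelled out, here is a hedged, illustrative sketch (our own, not Tesla’s or Cihra’s): the on-board model must act in real time, interesting sensor snippets are buffered locally, and uploads to the cloud for centralized training happen only periodically, e.g. when the car is parked on WiFi. Class and function names are hypothetical:

```python
# Illustrative "fleet learning" loop: real-time on-board inference, with sensor
# snippets buffered locally and uploaded for cloud training only when parked on
# WiFi. All names are hypothetical, for illustration only.
import random
from typing import List


class OnboardModel:
    """Stand-in for the car's local perception/driving network."""
    def act(self, frame: List[float]) -> str:
        return "brake" if max(frame) > 0.9 else "cruise"


class FleetLearner:
    def __init__(self) -> None:
        self.model = OnboardModel()
        self.pending_clips: List[List[float]] = []

    def drive_step(self, frame: List[float]) -> str:
        action = self.model.act(frame)      # must happen locally, in real time
        if action == "brake":               # keep only "interesting" data for training
            self.pending_clips.append(frame)
        return action

    def maybe_upload(self, parked_on_wifi: bool) -> None:
        """Periodic cloud connectivity: share logged data for centralized training."""
        if parked_on_wifi and self.pending_clips:
            print(f"uploading {len(self.pending_clips)} clips for cloud training")
            self.pending_clips.clear()


car = FleetLearner()
for _ in range(100):
    car.drive_step([random.random() for _ in range(4)])
car.maybe_upload(parked_on_wifi=True)
```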

For that reason, he sees Apple ultimately either making a car itself or getting out of the market, rather than just building “modules,” as some have forecast in the past:

Apple is investing in autonomous driving as “the mother of all AI projects” but has not yet committed to a car. Yet we see its entire business model based on vertically-integrated control, so think it unlikely Apple sells modular AI to third-parties. We rather expect Apple to get all-in or all-out over the next 2yrs, and are thinking all-in given the draw of technology disruption and sheer size of TAM.

