The End of Embedded Computing as We Know It…(Part 1)

by Zededa
April 16, 2018

In this series of blog posts, we will discuss the end of embedded computing as we know it. The impact of IoT’s growth is profound in its ability to change society and how humans experience the world. But for that growth to be realized and that impact to be felt, embedded computing as we know it must disappear. “The End of Embedded Computing as We Know It…” will examine how applying the current embedded computing mindset to IoT, even in its infancy, has already led to near-catastrophic cybersecurity issues; what technology and processes will replace traditional embedded computing; and, finally, how the contrast between traditional embedded computing and its future completely changes the dynamics of IoT, using a simple thing we all see every now and then – an automotive “software update” notice.

From Wikipedia’s contributors:
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints.[1][2] It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today.[3] Ninety-eight percent of all microprocessors are manufactured as components of embedded systems.[4]

“Real-time” computing and the “cyber-physical” edge have been around since before the Internet. The above is the commonly accepted definition of embedded computing. It’s been treated as a given when designing systems and deploying edge functions: factory robots, car computer systems, consumer electronics like thermostats and televisions, even many of the routers and switches from the core of the Internet down to your home gateway.

Embedded computing is thought of as performing a “dedicated function” within a larger system. The definition mentions mechanical and electrical systems, but now that IoT has burst onto the scene we should add “networked systems” to that list. And therein lies the problem.

The idea of a “dedicated function” has led embedded developers to optimize lightweight code TIGHTLY integrated with the hardware it’s running on – specific application capabilities invoked based on hardware capabilities. Even updating the “firmware” means issuing a command, having the system “flash” itself, then waiting for the reboot and praying nothing goes wrong – because if it does, you will need to send a truck out to have someone unplug and re-plug the device, or “roll back” the firmware from a copy using the dreaded “console cable”.

But this tightly integrated, lightweight code did its job very efficiently, without failure, for long periods of time. So before the Internet and IoT, these optimizations were ideal for hardened devices that could be deployed once and almost never dealt with again (or in some cases deployed in places where local support wasn’t available, like consumers’ homes). Why do you think you occasionally run across an ATM with a screen that looks like a Windows CE handheld from 1999? Because it is from that era – as recently as 2016, between 10% and 15% of the world’s installed ATM base was running a Windows CE operating system! It does its job very well: it still dials into a RAS (remember those?) to authenticate its transactions and spits out money. For the most part, these ATMs are fortunately NOT connected to the Internet (or at least I hope not).
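The fragility of that monolithic flash-and-reboot cycle can be sketched in a few lines. This is a toy simulation, not real firmware code – the `Device` class, its `flash`/`reboot` methods, and the use of a SHA-256 checksum are all illustrative stand-ins for whatever a given vendor actually ships:

```python
import hashlib

class Device:
    """Toy model of a traditional embedded device: one firmware slot, no fallback."""

    def __init__(self, firmware):
        self.firmware = firmware
        self.bricked = False

    def flash(self, image, expected_sha256=None):
        """Monolithic in-place flash. With no checksum, a corrupt image is written anyway."""
        if expected_sha256 is not None:
            if hashlib.sha256(image).hexdigest() != expected_sha256:
                # Refuse to flash a corrupt image; keep running the old firmware.
                return False
        self.firmware = image
        return True

    def reboot(self):
        """'Pray nothing goes wrong': an empty image here stands in for a bad flash."""
        self.bricked = len(self.firmware) == 0


good = b"firmware-v2"
corrupt = b""  # simulates a truncated or garbled download

# Careless path: flash whatever arrived, reboot, and hope.
d1 = Device(b"firmware-v1")
d1.flash(corrupt)
d1.reboot()
print(d1.bricked)  # True -- the device no longer boots; send the truck

# Careful path: verify the image before committing it.
d2 = Device(b"firmware-v1")
ok = d2.flash(corrupt, expected_sha256=hashlib.sha256(good).hexdigest())
d2.reboot()
print(ok, d2.bricked)  # False False -- update rejected, device still running
```

The point of the sketch is that the single firmware slot is the real hazard: even with a checksum, there is no second copy on the device to fall back to, so recovery always requires a human and a cable.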

Connecting the embedded system to the Internet pulls it out of the isolated, dedicated function optimized for decades of operation (a power system, a car’s computer, an ATM, a meter in an oil field, etc.) and pushes it into the Cloud era – a world where agile software is updated every two weeks, hackers work endlessly to exploit vulnerabilities over the network (which drives update rates even higher), new services and revenue streams are spun up in minutes, and constant monitoring and “touches” are required from the operations staff. You can’t just take “embedded computing”, bolt on networking capabilities, connect it to the Internet, and call it “good to go” as an IoT platform. It requires a completely different mindset and software design philosophy.
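One concrete example of that different design philosophy is how cloud-era edge systems apply updates: dual-slot (A/B) images with automatic rollback, so the running image is never overwritten in place. The sketch below continues the toy model from above (the `ABDevice` class and its slot names are invented for illustration, not any particular vendor’s scheme):

```python
class ABDevice:
    """Dual-slot (A/B) update model: the old image survives until the new one proves itself."""

    def __init__(self, firmware):
        self.slots = {"A": firmware, "B": b""}
        self.active = "A"

    def stage(self, image):
        # Write the new image to the inactive slot; the running system is untouched.
        inactive = "B" if self.active == "A" else "A"
        self.slots[inactive] = image
        return inactive

    def boot_ok(self, slot):
        # A boot "succeeds" in this toy model if the slot holds a non-empty image.
        return len(self.slots[slot]) > 0

    def update(self, image):
        candidate = self.stage(image)
        if self.boot_ok(candidate):
            self.active = candidate  # commit: the new image becomes active
        # else: fall back automatically -- no truck roll, no console cable
        return self.active


d = ABDevice(b"firmware-v1")
print(d.update(b""))             # A -- corrupt image rejected, stays on slot A
print(d.update(b"firmware-v2"))  # B -- good image committed, switches to slot B
```

Frequent remote updates stop being a gamble once a failed update simply means booting the previous slot – which is exactly the property the two-week cloud cadence depends on.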

In fact, the approach of bolting networking onto traditional embedded computers and connecting them to the Internet has enabled some of the most disastrous distributed attacks history has known – Stuxnet, Flame, and Mirai, the botnet behind the 2016 Dyn attack that turned early IoT devices (built with embedded computing) into a massive denial-of-service weapon against the global DNS, impacting major sites like Amazon, Airbnb, PayPal, CNN, and Fox News.

TechRepublic reported:

In Q3 2017, organizations experienced an average of 237 DDoS attack attempts per month—or eight per day, the report found. These numbers represent a 35% increase in monthly attack attempts from Q2, and a whopping 91% increase from Q1.

Why the massive rise? Researchers believe that the reason is twofold: The growing availability in DDoS-for-hire services, and the implementation of many unsecured Internet of Things (IoT) devices.

Embedded computing as you know it MUST disappear if the predictions about IoT’s growth come true at even 10% of their forecast.

But what is the answer? In reality, the answer lies in technologies that already exist…