NXP – Computing at the edge: a new model for connected embedded systems

Cloud computing provides a new model for the operation of embedded systems, drawing on the advantages that virtualisation, massive scale and platform service models can offer.

While cloud systems are flexible, they also impose a limitation: the workload they carry must not be too sensitive to network latency or restricted bandwidth. For applications in which people are directly involved, such as mobile phones, tablets or personal computers, the delays and bandwidth limitations of cloud computing are not noticeable. But some types of applications cannot rely solely on cloud-based computing, including those which:

• require real-time millisecond responses, such as autonomous driving
• transfer massive amounts of data, such as real-time video processing
• are subject to regulatory requirements governing the location of data storage or processing

Edge computing, developed to support Internet of Things (IoT) deployments, provides an alternative approach for applications that cloud computing cannot serve well. It works by selectively moving processing from cloud data centres to edge and end-node processing platforms.

When implementing edge computing designs, multi-core application processors from NXP Semiconductors offer various advantages, including very low power consumption, support for open-source machine-learning algorithms and cloud computing platforms, and built-in security capabilities. This Design Note describes the approach that system designers are taking to edge computing today, and outlines the key requirements of the processor component of such systems.

New agile software model
Edge computing differs from traditional embedded systems because it draws on the agile software models of cloud systems. Edge computing makes use of virtualisation and containerisation to support the deployment of software that can be continually updated.

Edge computing systems often operate over public networks, so they require the integration of both wired and wireless network connectivity. Crucially, they also require the implementation of a hardware root of trust: an approach to computing in which security algorithms are executed, and keys and certificates stored, in trusted hardware. Hardware-based trust systems are much more robust than software-based systems, offering enhanced protection against cyber attacks that seek to compromise them.
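The idea behind a hardware root of trust can be sketched in a few lines. The example below is purely illustrative, not an NXP API: a real root of trust performs the check in silicon at boot, typically with asymmetric signatures, and the key name and image bytes here are hypothetical. It shows only the principle that an image must verify against a key held by trusted hardware before it is allowed to run.

```python
import hashlib
import hmac

# Hypothetical per-device key; on real silicon this lives in on-chip
# secure storage and never leaves the trusted hardware.
DEVICE_KEY = b"per-device-secret-provisioned-at-factory"

def sign_image(firmware: bytes) -> bytes:
    """Tag a firmware image, as the trusted signing party would."""
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

def verify_before_boot(firmware: bytes, tag: bytes) -> bool:
    """Boot-time check: refuse any image whose tag does not verify."""
    expected = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"\x7fELF...firmware..."   # placeholder image bytes
tag = sign_image(image)
assert verify_before_boot(image, tag)              # genuine image boots
assert not verify_before_boot(image + b"X", tag)   # tampered image refused
```

Because the comparison happens before any untrusted code runs, a compromised software stack cannot substitute its own image without also possessing the hardware-held key.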

Edge computing also helps to protect the user’s privacy and security by sanitising traffic before it is transferred to the cloud, filtering out personally identifiable information when necessary.
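Such filtering can happen on the edge device itself, before any data leaves the local network. The sketch below is a hypothetical, minimal example, redacting one kind of personally identifiable information (email addresses) from a telemetry record; a production filter would cover many more data classes.

```python
import re

# Minimal sketch: scrub email addresses from telemetry before upload.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def sanitise_record(record: dict) -> dict:
    """Return a copy of the record with string fields scrubbed of emails."""
    clean = {}
    for key, value in record.items():
        if isinstance(value, str):
            clean[key] = EMAIL_RE.sub("[REDACTED]", value)
        else:
            clean[key] = value
    return clean

record = {"temp_c": 21.5, "note": "reported by alice@example.com"}
print(sanitise_record(record))
# → {'temp_c': 21.5, 'note': 'reported by [REDACTED]'}
```

Only the sanitised copy is forwarded to the cloud; the raw record never leaves the edge device.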

Edge computing can be applied to all levels of the network between the cloud and end nodes, including on gateways, access points and metro edges. The concepts of edge computing may also be applied to end nodes.

Edge computing is revolutionising the way embedded systems are designed, prompting a move from a system of loosely coupled fixed-function appliances to truly distributed systems.

In addition, by using the Amazon Web Services (AWS) Greengrass software, AWS Lambda functions developed for the AWS cloud can be run locally on edge computing devices. This approach enables device data to be synchronised with cloud data stores even when the device is not continuously connected to the Internet.
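A Lambda function running at the edge keeps the standard handler shape it would have in the cloud. The sketch below follows that shape but stands alone: the local buffer and `publish` helper are hypothetical stand-ins for the offline queueing and cloud synchronisation that Greengrass itself provides via its SDK.

```python
import json
import time

# Hypothetical local buffer standing in for Greengrass's offline queue;
# a real deployment would publish via the Greengrass SDK to AWS IoT.
pending = []
cloud_connected = False  # toggled by the device's network state

def publish(topic: str, payload: dict) -> None:
    message = json.dumps(payload)
    if cloud_connected:
        pass  # would forward to the cloud data store here
    else:
        pending.append((topic, message))  # hold for later synchronisation

def handler(event, context):
    """Standard Lambda entry point, runnable locally on the edge device."""
    reading = {"device": event["device"], "temp_c": event["temp_c"],
               "ts": time.time()}
    publish("sensors/temperature", reading)
    return {"buffered": len(pending)}

result = handler({"device": "ls1012a-01", "temp_c": 22.1}, None)
print(result)  # → {'buffered': 1} while offline
```

When connectivity returns, the buffered readings are drained to the cloud, which is the synchronisation behaviour described above.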


Fig. 1: Typical architecture of a cloud computing system

Supporting artificial intelligence through machine learning
The past several years have seen a rapid increase in the use of machine learning as a technology to support Artificial Intelligence (AI) applications such as Amazon’s Alexa voice service. Machine-learning training is compute-intensive and is often, rightly, assumed to require a Graphics Processing Unit (GPU); the cloud will remain the best place to train machine-learning algorithms. Edge devices, however, are increasingly becoming the best places to execute the inference component of machine-learning operations, and to enable AI for whole new classes of systems.
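The reason inference fits on an edge device while training does not can be seen in a toy example. Inference is a single fixed forward pass through weights that were already learned in the cloud; there is no iterative optimisation loop. The network and weights below are made up purely for illustration.

```python
# Toy inference: one forward pass through already-trained weights
# (invented here for illustration), cheap enough for an edge CPU.
WEIGHTS = [[0.9, -0.4],    # 2 input features -> 2 class scores
           [-0.7, 0.8]]
BIAS = [0.1, -0.1]

def infer(features):
    """One matrix-vector product plus bias, then argmax: no training loop."""
    scores = []
    for row, b in zip(WEIGHTS, BIAS):
        scores.append(sum(w * x for w, x in zip(row, features)) + b)
    return scores.index(max(scores))  # index of the predicted class

print(infer([1.0, 0.0]))  # → 0 (score 1.0 for class 0 beats -0.8)
print(infer([0.0, 1.0]))  # → 1 (score 0.7 for class 1 beats -0.3)
```

Training would adjust `WEIGHTS` and `BIAS` over many passes through large datasets, which is the GPU-hungry step that stays in the cloud; the deployed device only ever runs the forward pass.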

The power-efficient performance of multi-core ARM Cortex®-A53 and A72-based Systems-on-Chip (SoCs) assisted by local GPUs or vector processing engines makes NXP’s SoC offerings the best choice to run the most popular open-source software libraries for machine learning.

NXP’s role in edge computing systems
NXP features a broad line-up of processors for edge computing, from the single-core sub-1W LS1012A to the recently announced 16-core LX2160A.

NXP offers AWS Greengrass running on all of its Layerscape® processors, which are all also equipped with NXP’s trusted architecture platform and cloud commissioning software to support the security of IoT devices throughout their life.

Each processor features a unique ID and on-chip secure storage, along with secure boot and over-the-air software updates. These capabilities are critical to the security of the IoT, supporting certificate-based just-in-time registration to the AWS IoT service.

Orderable Part Number: FRDM-LS1012A-PA