Practical Edge AI Terminology for Businesses

by Shelly Tzoumas | Posted Aug 08, 2024


Decode Edge AI with a business glossary crafted by experts dedicated to unlocking the power of efficient and practical AI for real-world applications.

Artificial Intelligence
Artificial intelligence, or AI, refers to technology that enables computers and machines to mimic human intelligence and problem-solving skills. Current examples of this technology can generate content, classify data, and automate a wide range of tasks.

Adaptive AI

Edge AI is adaptive in multiple ways. Because edge devices collect data locally, models can become more accurate over time, adapting and learning as they collect and record more data. With tools like Latent’s Ruggedized Toolkit, a model can be retrained and updated in the field, providing a faster turnaround time. The more data you collect and record, the more the model learns; and because retraining is quick and cost-effective, the models can adapt rapidly.

Dataset, or training data

Computer vision models are trained on a dataset: a collection of images with corresponding labels, which are manually assigned to teach the model to recognize specific objects or features.
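A minimal sketch of what such a labeled dataset looks like in practice. The file paths and labels below are purely illustrative, not from any specific toolkit:

```python
# A labeled image dataset: each sample pairs an image (here a file
# path) with a manually assigned class label. All names are
# hypothetical examples.
dataset = [
    ("images/cat_001.jpg", "cat"),
    ("images/dog_001.jpg", "dog"),
    ("images/cat_002.jpg", "cat"),
]

def class_counts(samples):
    """Count how many training examples exist per label."""
    counts = {}
    for _path, label in samples:
        counts[label] = counts.get(label, 0) + 1
    return counts
```

Checking the label balance this way is a common first step before training, since a model sees a class only as often as it appears in the data.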

Edge AI

The deployment of artificial intelligence (AI) applications directly on local devices that sense, monitor, and record their environment, using the devices' own processing power and connectivity to run AI tasks on the spot.

Learn more about edge AI.

Edge or Edge Computing

Gartner defines the Edge as “the physical location where things and people connect with the networked digital world.” ABI Research says that “it’s everything not in the cloud.” Cisco says that the “edge is anywhere that data is processed before it crosses the wide area network (WAN).”

Edge computing allows data to be processed on the device near the data source or consumer. Edge devices can process data locally in near real-time. They can filter and prioritize data that is sent to central servers, thus moving some portion of the storage and compute resources out of the central data center and closer to the source of the data itself. 

Are IoT devices the same as Edge Devices?

IoT devices collect and transmit data. To do so, they tend to run single-purpose, single-process software and connect to the cloud or data center to process the data they collect. Edge devices perform computations locally and provide additional processing power and storage capabilities. This allows them to act on recognized patterns and provide immediate business insights or make real-time decisions.

Cloud Computing

Cloud computing is the on-demand delivery of all types of computing services, such as software, storage, databases, networking, analytics, and artificial intelligence, over the internet.

How is Cloud AI different from Edge AI?

While the goal of having computers simulate human intelligence by collecting and processing data is the same for both cloud and edge AI, edge models run on devices at the network edge that may or may not be connected. Cloud AI models run in the always-connected cloud computing infrastructure. For this reason, cloud AI models tend to be large and require substantial computing resources to run effectively.

CPU

Central processing units (CPUs) are the primary processing units in any computing system, essentially the brain of the computer. They are general-purpose and can be used for ML inference, neural networks, and deep learning.

What is computer vision?

Computer vision is the field of AI that teaches computers to understand images and videos, enabling them to identify and process objects like humans do. Computer vision tasks include object detection, image classification, and semantic segmentation.

FPU

A floating-point unit (FPU) performs mathematical operations on non-integer numbers. FPUs are specialized circuits in a computer processor and play a crucial role in the training and inference of neural networks. The choice of floating-point format (such as 32-bit single precision or 16-bit half precision) can impact the accuracy, speed, and memory usage of ML models.
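A small sketch of the accuracy/memory trade-off between floating-point formats, using Python's standard `struct` module to store the same value in IEEE 754 single (32-bit) and double (64-bit) precision:

```python
import struct

# The same value stored in 32-bit vs 64-bit floating point.
# Lower-precision formats use less memory (important on edge
# devices) but can lose accuracy.
value = 0.1
f32_bytes = struct.pack("f", value)   # single precision: 4 bytes
f64_bytes = struct.pack("d", value)   # double precision: 8 bytes

# Round-tripping through 32-bit precision introduces a small error.
f32_value = struct.unpack("f", f32_bytes)[0]
error = abs(f32_value - value)
```

The error here is tiny for a single value, but across millions of model weights the format choice meaningfully shifts both model size and numerical behavior.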

GPU

Graphics processing units (GPUs) are designed to efficiently process large blocks of data simultaneously, making them ideal for graphics rendering, video processing, and accelerating ML.

Inference

In machine learning, inference is the application of new data to a trained model to generate predictions or classifications.
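A toy sketch of the idea: inference applies fixed, already-trained parameters to new input. The "model" below is just a hypothetical linear scorer with made-up learned weights, not a real trained network:

```python
# Pretend these parameters were produced by an earlier training run.
weights = {"brightness": 0.8, "noise": -0.3}
bias = 0.1

def predict(features):
    """Inference: score a new input using the fixed, trained
    parameters, then map the score to a class label."""
    score = bias + sum(weights[name] * value
                       for name, value in features.items())
    return "positive" if score > 0 else "negative"
```

The key point is that nothing is learned at this stage; the same parameters are reused for every new input, which is what makes inference cheap enough to run on edge hardware.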

Image Classification

A computer vision AI task that assigns a label or class to an entire image.

Latency 

Latency is the time between the camera, sensor, or other device capturing sensory data and the AI model analyzing the data and generating a result.
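A minimal sketch of how that interval can be measured in software, timing a stand-in model function with Python's `time.perf_counter` (the model function here is a placeholder, not a real network):

```python
import time

def measure_latency(model_fn, sample):
    """Time from receiving a captured sample to producing the
    model's result, in milliseconds."""
    start = time.perf_counter()
    result = model_fn(sample)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms
```

In a full pipeline, latency would also include capture and pre-processing time, but wrapping the model call like this is a common way to isolate inference latency on a target device.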

Machine Learning

Machine learning (ML), a subset of AI (though the two terms are often used interchangeably), uses statistical algorithms to recognize patterns in data and then applies that learning to make better decisions. ML models are “trained” by applying their mathematical frameworks to a sample dataset that serves as the basis for the model’s future real-world predictions.

ML Model

A machine learning (ML) model is a program that has been trained on a set of data to autonomously recognize patterns or make decisions or predictions without human intervention.

MLOps

MLOps stands for Machine Learning Operations. It is a set of practices performed by machine learning engineers that streamlines the process of deploying machine learning models to production and then maintaining and monitoring them.

Object Detection

A computer vision technique that deals with detecting instances of semantic objects of a certain class (e.g., humans, buildings, or cars) in digital images and videos. Object detection is typically concerned with identifying multiple classes within a single image.
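A sketch of what an object detector's output typically looks like: a list of detections, each with a class label, a confidence score, and a bounding box. The values, box format (x, y, width, height), and threshold below are illustrative:

```python
# Hypothetical detector output for one image.
detections = [
    {"label": "car",    "confidence": 0.92, "box": (34, 50, 120, 80)},
    {"label": "person", "confidence": 0.85, "box": (200, 40, 45, 110)},
    {"label": "car",    "confidence": 0.30, "box": (10, 10, 60, 40)},
]

def filter_detections(dets, threshold=0.5):
    """Keep only detections the model is reasonably confident
    about; low-confidence boxes are usually discarded."""
    return [d for d in dets if d["confidence"] >= threshold]
```

Confidence thresholding like this is typically the first post-processing step applied to raw detector output before the results are acted on.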

Recipe

LEIP Recipes are benchmarked and ready-to-execute configurations that combine model and device optimization into a repeatable process. Recipes coordinate machine learning models, data formats, optimization schemes, and deployment targets, allowing the user to select a model to use with their data and providing pre-configured steps and settings to deliver excellent performance of that model on a desired target platform.

Runtime Engine

The code you need to run a compiled model. It is a precompiled collection of features, or libraries, that can be called natively and that contains all the necessary dependencies required to run inference on compiled models. A software application interfaces with the runtime engine to run inference or services such as metrics and monitoring, security services such as encryption, and model updates. 

Semantic Segmentation

Semantic segmentation is a computer vision technique that assigns each pixel in an image to its appropriate class or object. In the context of autonomous driving, for example, semantic segmentation might be used to separate pixels belonging to a road class from those belonging to pedestrians, sidewalks, and other cars.
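A tiny sketch of what a segmentation result looks like: a grid the same shape as the image where every pixel holds a class id. The 4x4 "image" and the id-to-class mapping (0 = road, 1 = pedestrian, 2 = sidewalk) are purely illustrative:

```python
# Hypothetical per-pixel class assignments for a 4x4 image.
mask = [
    [0, 0, 2, 2],
    [0, 0, 2, 2],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def pixels_per_class(seg_mask):
    """Count how many pixels were assigned to each class id."""
    counts = {}
    for row in seg_mask:
        for class_id in row:
            counts[class_id] = counts.get(class_id, 0) + 1
    return counts
```

Unlike image classification (one label per image) or object detection (boxes around objects), every single pixel gets a label, which is why segmentation is the most fine-grained of the three tasks.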
