

LEIP, the Latent AI Efficient Inference Platform, is a single tool that takes your AI model from development to device simply, reliably, and securely. Its dedicated edge MLOps workflow is designed to produce ultra-efficient, compressed, and secured models for compute-constrained edge devices.

Features

Our flagship product, the Latent AI Efficient Inference Platform™ (LEIP), is a machine learning operations software development kit (MLOps SDK) that optimizes and secures neural network runtimes. LEIP is a unified SDK built specifically for machine learning engineers and AI software developers. It brings AI to the edge by optimizing for compute, energy, and memory without requiring changes to existing AI/ML infrastructure and frameworks. LEIP comes with customizable templates called Recipes that are pre-qualified for your hardware, with all the maintenance and configuration reduced to a single command line call. Recipes answer questions like: “How do I train a YOLO model on the MS COCO dataset and run it on a Raspberry Pi?” Because the target platform is already accounted for, iteration and model delivery are much faster.
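As a rough illustration of the Recipe concept (the class and function names below are hypothetical and are not the actual LEIP API), a recipe bundles the model, dataset, target hardware, and precision into one declarative object, and a single call drives the whole pipeline:

```python
# Hypothetical sketch of what an edge MLOps "recipe" bundles; these names are
# illustrative only and do not reflect the real LEIP SDK.
from dataclasses import dataclass

@dataclass
class Recipe:
    model: str        # e.g. a YOLO variant
    dataset: str      # e.g. MS COCO
    target: str       # e.g. a Raspberry Pi 4
    precision: str    # e.g. "int8"

def run_recipe(recipe: Recipe) -> str:
    """Stand-in for the single pipeline call: train, optimize, compile, package."""
    for step in ("train", "evaluate", "quantize", "compile", "package"):
        print(f"[{recipe.target}] {step}: {recipe.model} on {recipe.dataset} ({recipe.precision})")
    return f"{recipe.model}-{recipe.target}.artifact"

# One call answers "train a YOLO model on MS COCO and run it on a Raspberry Pi".
artifact = run_recipe(Recipe("yolov5s", "mscoco", "raspberry-pi-4", "int8"))
```

In the real SDK the heavy lifting behind that single call is hidden from the user; the sketch only shows the shape of the workflow, not the implementation.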


Ultra-Efficient AI Inference for Edge Devices

LEIP compresses conventional AI models below 8-bit integer precision without losing accuracy, yielding up to a 10x reduction in model size and a 3x improvement in inference speed. And with Latent AI’s Adaptive AI technology, the model can self-regulate its computational needs, firing only the parts of the neural network necessary to get the job done.
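Latent AI’s quantizer is proprietary, but the general idea behind integer quantization can be sketched with plain NumPy: float32 weights are mapped onto a small signed integer range with a scale factor, which cuts storage roughly 4x at 8 bits and more at lower bit widths, while keeping the dequantized values close to the originals.

```python
# Generic illustration of symmetric integer quantization; this is not Latent
# AI's implementation, which also preserves accuracy at sub-8-bit precision.
import numpy as np

def quantize(weights: np.ndarray, bits: int = 8):
    """Map float32 weights to signed integers plus a reconstruction scale."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for int8
    scale = np.abs(weights).max() / qmax       # largest weight maps to qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize(w, bits=8)
print("storage reduction: 4x (float32 -> int8)")
print("max reconstruction error:", np.abs(w - dequantize(q, scale)).max())
```

A production pipeline would go further, for example per-channel scales, calibration data, and fine-tuning, to hold accuracy at lower bit widths.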

Secure Model Optimization Pipeline Automation

With LEIP, ML engineers develop optimized neural network runtimes for heterogeneous low-power hardware (CPU/GPU/DSP). Our Latent AI Runtime Engine (LRE) offers a modular micro-service software stack for secured inference processing, with support for CI/CD and asset tracking across the entire model lifecycle.
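The LRE itself is closed source, but the pattern it describes, a compiled model wrapped behind a small, versioned service that verifies its artifact before serving, can be sketched with hypothetical names:

```python
# Hypothetical sketch of a minimal secured-inference microservice; the class
# and method names are illustrative and are not the LRE API.
import hashlib

class InferenceService:
    def __init__(self, artifact: bytes, expected_sha256: str):
        digest = hashlib.sha256(artifact).hexdigest()
        if digest != expected_sha256:          # integrity check before loading
            raise ValueError("model artifact failed integrity verification")
        self.version = digest[:12]             # asset tracking via content hash

    def predict(self, request: dict) -> dict:
        # A real runtime engine would execute the compiled model here.
        return {"model_version": self.version, "detections": []}

blob = b"compiled-model-bytes"                 # stand-in for a packaged model
service = InferenceService(blob, hashlib.sha256(blob).hexdigest())
print(service.predict({"image": "frame-0001"}))
```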

Fast Deployment with Development Consistency

Latent AI takes the hard work out of design and deployment by exploring thousands of candidate recipes that are pre-configured for your hardware. With hundreds of target models and hardware platforms already supported, LEIP Recipes makes it easy to find the optimized configuration that best meets your design requirements.
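Conceptually, finding that configuration is a constrained search over a catalog of pre-qualified candidates; the sketch below uses made-up catalog entries and metric values purely to illustrate the selection logic:

```python
# Illustrative only: pick the best pre-qualified recipe for a target device
# under latency and size budgets. The catalog entries and numbers are made up.
candidates = [
    {"recipe": "yolov5s-int8", "target": "raspberry-pi-4", "mAP": 0.34, "latency_ms": 95,  "size_mb": 7},
    {"recipe": "yolov5m-int8", "target": "raspberry-pi-4", "mAP": 0.41, "latency_ms": 210, "size_mb": 21},
    {"recipe": "yolov5s-fp16", "target": "jetson-nano",    "mAP": 0.36, "latency_ms": 40,  "size_mb": 14},
]

def best_recipe(target: str, max_latency_ms: float, max_size_mb: float):
    feasible = [c for c in candidates
                if c["target"] == target
                and c["latency_ms"] <= max_latency_ms
                and c["size_mb"] <= max_size_mb]
    # Among recipes that satisfy the design constraints, keep the most accurate.
    return max(feasible, key=lambda c: c["mAP"]) if feasible else None

print(best_recipe("raspberry-pi-4", max_latency_ms=120, max_size_mb=10))
```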

Benefits

Scaling

LEIP builds into edge model creation the repeatability, reproducibility, and rapidity necessary to scale the number of edge devices that can be supported.

And because development is consistent across models and hardware targets, projects that used to take weeks can now be completed in just hours.

Trust

LEIP builds trust in managing machine learning across models and datasets by enabling a repeatable process through automation, testing, and validation.

ML developers can use the same test/validation software at all stages of development, which greatly enhances the reliability and credibility of the AI model.
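For example (hypothetical interfaces, not LEIP’s test framework), the same accuracy gate can be run against the full-precision model during training, against the compressed model after optimization, and against the packaged runtime on the device:

```python
# Hypothetical sketch: one validation gate reused at every stage of development.
def top1_accuracy(predict, dataset) -> float:
    """dataset is a list of (input, expected_label) pairs."""
    correct = sum(1 for x, label in dataset if predict(x) == label)
    return correct / len(dataset)

def validate(predict, dataset, threshold: float = 0.70) -> None:
    acc = top1_accuracy(predict, dataset)
    assert acc >= threshold, f"accuracy {acc:.2%} is below the {threshold:.0%} gate"

# The identical check gates training, post-quantization, and on-device runs.
dummy_dataset = [(i, i % 2) for i in range(100)]
validate(lambda x: x % 2, dummy_dataset)   # stand-in for any stage's predict()
```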

Seamless Integration

LEIP provides standardized and well-defined APIs to support integration into CI/CD software development flows.

By defining clear handoff points between the two, this approach allows data science and software deployment teams to collaborate far more easily and leverage their individual strengths.
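As a sketch only (the stage names and metrics dictionary are placeholders, not a documented LEIP interface), a CI job at that handoff point can run each pipeline stage and fail the build if accuracy or latency regresses:

```python
# Illustrative CI gate at the handoff between data science and deployment.
from typing import Callable

def ci_gate(stages: list[tuple[str, Callable[[], dict]]],
            min_accuracy: float = 0.70, max_latency_ms: float = 100.0) -> None:
    for name, run in stages:
        metrics = run()                        # each stage reports its metrics
        print(f"{name}: {metrics}")
        if metrics.get("accuracy", 1.0) < min_accuracy:
            raise SystemExit(f"{name} failed: accuracy regression")
        if metrics.get("latency_ms", 0.0) > max_latency_ms:
            raise SystemExit(f"{name} failed: latency budget exceeded")

# Dummy stages standing in for optimize / evaluate / package calls.
ci_gate([
    ("optimize", lambda: {"accuracy": 0.74}),
    ("evaluate", lambda: {"accuracy": 0.73, "latency_ms": 88.0}),
    ("package",  lambda: {}),
])
```

The point of the sketch is that the gate, not a handoff meeting, decides whether an artifact moves from the data science side to deployment.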

Better Use of Data

Because of shortened production cycles, LEIP enables enterprise stakeholders to capitalize on new data quickly and drive new iterations of the AI model.

This rapid development pace helps drive reliable AI analytics and actionable insights.

See what Latent AI can do for you
