Why Latent AI was selected as the first Booz Allen Ventures partnership

Latent AI and Booz Allen Hamilton both understand the importance of delivering ML models faster. That shared priority is one of the driving reasons Booz Allen selected Latent AI as the first recipient of a Booz Allen Ventures investment. Booz Allen Ventures is designed to close the gap between opportunity and capability by building operational readiness into AI development and ...

Latent AI Adaptive AI research featured in Machine Vision and Applications Journal

Latent AI research on dynamically throttleable neural networks was recently accepted for publication in the prestigious journal Machine Vision and Applications (MVA). Part of the problem with AI has been its inflexibility, especially regarding computing power: at runtime, neural networks either fire fully or not at all. Latent AI research shows how Adaptive AI can respond ...
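
For readers unfamiliar with the idea, the sketch below shows one way a throttleable layer could work: a runtime "utilization" knob selects how many convolution filters actually execute, trading accuracy for compute. This is a minimal, hypothetical PyTorch sketch in the spirit of width-throttling approaches; the `ThrottleableBlock` class and `utilization` parameter are illustrative assumptions, not the architecture from the MVA paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThrottleableBlock(nn.Module):
    """Conv block whose active filter count scales with a runtime knob.

    Hypothetical illustration of a throttleable layer; not Latent AI's
    published method.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.out_ch = out_ch

    def forward(self, x: torch.Tensor, utilization: float = 1.0) -> torch.Tensor:
        # Run only the first k filters; fewer filters means fewer FLOPs.
        k = max(1, int(self.out_ch * utilization))
        y = F.relu(F.conv2d(x, self.conv.weight[:k], self.conv.bias[:k], padding=1))
        # Zero-pad the channel dimension so downstream layers always see
        # a fixed shape, regardless of the throttle setting.
        pad = y.new_zeros(y.size(0), self.out_ch - k, y.size(2), y.size(3))
        return torch.cat([y, pad], dim=1)

block = ThrottleableBlock(3, 64)
x = torch.randn(1, 3, 32, 32)
full = block(x, utilization=1.0)   # all 64 filters run
half = block(x, utilization=0.5)   # only 32 filters actually computed
```

Because the throttle here is a forward-pass argument rather than a fixed architectural choice, one set of weights can serve both high-accuracy and low-power operating points, which is the kind of runtime flexibility the excerpt alludes to.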

Latent AI named IoT Emerging Company of the Year for the Enterprise Market 

Latent AI was recently honored as the IoT Emerging Company of the Year for the Enterprise Market during the 10th Annual Compass Intelligence Awards. Other IoT category winners included technology stalwarts like Verizon, Samsung, and Palo Alto Networks, so we are doubly pleased to be included on such a distinguished list. Current AI is far ...

Latent AI Named Exploding Topics Top Edge AI Startup

Latent AI is included in the most recent Exploding Topics newsletter as a top Edge AI startup. Also included was their prediction that the Edge AI market will grow to $1.15B by 2023 (representing a CAGR of 27%). We don’t disagree. When incremental improvements in Edge AI inference can yield exponential returns in production, it’s ...

Join Latent AI at TinyML Summit San Francisco

Wednesday, March 30th, 2022
Hyatt Regency San Francisco Airport
1333 Bayshore Highway, Burlingame, CA 94010
https://www.tinyml.org/event/summit-2022/

Optimizing for tinyML is not easy. ML engineers go through a frustrating process that is difficult to iterate and requires constant trade-offs between model accuracy, inference speed, size, and memory. However, there is a better way. Please join Latent ...
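
As a concrete, hypothetical example of the trade-off loop described above, the sketch below uses PyTorch's post-training dynamic quantization to compare the on-disk size of an fp32 model against its int8 counterpart. The toy model and the `size_on_disk` helper are illustrative assumptions, not the workflow presented at the summit.

```python
import os
import torch
import torch.nn as nn

# Toy stand-in for a model being shrunk toward a tinyML budget.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

def size_on_disk(m: nn.Module, path: str = "model_tmp.pt") -> float:
    """Serialized state_dict size in KiB, a rough proxy for flash footprint."""
    torch.save(m.state_dict(), path)
    kb = os.path.getsize(path) / 1024
    os.remove(path)
    return kb

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(f"fp32 model: {size_on_disk(model):.1f} KiB")
print(f"int8 model: {size_on_disk(quantized):.1f} KiB")
```

Size is only one axis; a full iteration would also measure accuracy on a validation set and latency on the target device, which is exactly the multi-way trade-off the announcement describes.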