Most of us are familiar with this scenario: “Hey Siri (or insert your favorite ‘wake’ word for your digital voice assistant), turn on the hallway light.” Instantly and magically, that hallway light turns on, and life is good.
But then reality kicks in, and all of a sudden your internet service provider is experiencing an outage in your area! Now you're cursing your way to the darn switch to turn the light on yourself. Do you wonder why such a simple thing can't happen the way it's supposed to without an internet connection?
As with most artificial intelligence tasks, the majority of the workload is performed in the cloud, that is, in massive data centers. In the above example, everything you say after the 'wake' word for your digital voice assistant is transmitted to Amazon, Google, or Apple's data centers. There, your voice goes through inference, in other words, it is processed into something the computer can understand. In this case, the AI recognizes a command for your home automation system and sends it back to your home controller, which fulfills the action by sending a "turn it on" command to that hallway light. Typically, there is a delay of a few milliseconds to a few seconds before the command completes.
Did you know your voice command is stored in these data centers for training and retraining? While I want my lights turned on the moment I finish my command, I'm also expecting privacy. I don't want this information stored anywhere, much less for anyone to know that I turned my lights on, or how silly or smart I sounded giving that command!
Now, imagine the same requirements if you are running an automated manufacturing plant, driving your car through one of the national parks, or operating an oil and gas platform offshore. You want to get the maximum benefit that AI has to offer, but not be held hostage by bad network connectivity, data privacy concerns, or constrained environments.
Here's another predicament we're facing: data storage scarcity. It's predicted that by 2020 we will be generating 44 trillion gigabytes of data annually. However, the world's digital storage can only accommodate a small fraction (15%) of that data! Do you remember looking at your fridge after your Costco run? Do we really need to collect all this data? Can't we process this data where it originates and derive the inferences there?
At Latent AI, these issues are fundamental to our existence; our mission is “Enabling Adaptive AI for a Smarter Edge.” In this context, we continually ask ourselves, how do we help solve these issues to enable a better quality of service for a consumer or an enterprise?
Founded on years of research done at SRI International (we are thrilled to follow in the footsteps of Nuance Communications, Siri, and Abundant Robotics, among other successful SRI Ventures), Latent AI is backed by leading investors. We just closed our seed round, led by Steve Jurvetson at Future Ventures, followed by Perot Jain, Gravity Ranch, and super angels such as Frank Blake (Chairman of Delta Airlines, Board Member at Macy's, P&G), Dave Rosenberg (Co-Founder, Mulesoft), and Bruce Graham (Chairman of Cellink Circuits).
At Latent AI, we provide tools that integrate with your existing workflow and toolchain to help AI engineers deliver neural network models that are optimized and compressed. We enable those models to run efficiently on any chosen platform and edge device.
If you or your organization identify with these problems, we would love to talk with you and explore the opportunity to solve your current issues! Please feel free to reach out to me directly at Jags@latentai.com.
Also, as any startup CEO would emphatically say, “Yes! We’re Hiring!” Please check out our open positions on our Careers page.
By Jags Kandasamy, Co-Founder, CEO, Latent AI, Inc.
Photo Credits: Adobe Stock