
Xnor’s saltine-sized, solar-powered AI hardware redefines the edge

“If AI is so easy, why isn’t there any in this room?” asks Ali Farhadi, founder and CEO of Xnor, gesturing around the conference room overlooking Lake Union in Seattle. And it’s true — despite a handful of displays, phones, and other gadgets, the only things really capable of doing any kind of AI-type work are the phones each of us has set on the table. Yet we are always hearing about how AI is so accessible now, so flexible, so ubiquitous.

And in many cases, even the devices that can aren’t employing machine learning themselves, but rather sending data off to the cloud, where it can be done more efficiently. That’s because the processes that make up “AI” are often resource-intensive, sucking up CPU time and battery power.

That’s the problem Xnor aimed to solve, or at least mitigate, when it spun off from the Allen Institute for Artificial Intelligence in 2017. Its breakthrough was to make the execution of deep learning models on edge devices so efficient that a $5 Raspberry Pi Zero could perform state-of-the-art computer vision processes nearly as well as a supercomputer.
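The company’s name hints at how that efficiency was achieved: the XNOR-Net research it grew out of binarizes a network’s weights and activations, so the multiply-accumulate operations that dominate a convolution collapse into bitwise XNOR and popcount, operations even the cheapest chips handle quickly. Here’s a minimal sketch of that core trick; the vector sizes and bit-packing scheme are illustrative, not Xnor’s actual implementation.

```python
# Sketch of the binarization trick behind XNOR-style networks: weights and
# activations are constrained to {-1, +1} and packed into bit words, so a
# multiply-accumulate becomes XNOR + popcount. Sizes here are illustrative.

def pack_bits(values):
    """Pack a list of +1/-1 values into an int, one bit each (+1 -> 1, -1 -> 0)."""
    word = 0
    for i, v in enumerate(values):
        if v > 0:
            word |= 1 << i
    return word

def binary_dot(a_bits, w_bits, n):
    """Dot product of two length-n {-1,+1} vectors given as packed bit words."""
    agree = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR: positions where the vectors match
    matches = bin(agree).count("1")              # popcount
    return 2 * matches - n                       # agreements minus disagreements

activations = [1, -1, 1, 1, -1, -1, 1, -1]
weights     = [1, 1, -1, 1, -1, 1, 1, -1]
print(binary_dot(pack_bits(activations), pack_bits(weights), len(activations)))  # 2
# Same result as sum(a * w for a, w in zip(activations, weights)), but computed with
# bitwise operations a tiny FPGA or microcontroller can execute very cheaply.
```

Applied across an entire convolutional network, that substitution is what shrinks a model that would normally want a GPU into something a Pi Zero can run.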

The team achieved that, and Xnor’s hyper-efficient ML models are now integrated into a variety of devices and businesses. As a follow-up, the team set their sights higher — or lower, depending on your perspective.

Answering his own question about the dearth of AI-enabled devices, Farhadi pointed to the battery pack in the demo gadget the team made to show off the Pi Zero platform: “This thing right here. Power.”

Power was the bottleneck they overcame to get AI onto CPU- and power-limited devices like phones and the Pi Zero. So the team came up with a crazy goal: Why not make an AI platform that doesn’t need a battery at all? Less than a year later, they’d done it.

That thing right there performs a serious computer vision task in real time: It can detect in a fraction of a second whether and where a person, or car, or bird, or whatever, is in its field of view, and relay that information wirelessly. And it does this using the kind of power usually associated with solar-powered calculators.

The device Farhadi and hardware engineering head Saman Naderiparizi showed me is very simple — and necessarily so. A tiny camera with a 320×240 resolution, an FPGA loaded with the object recognition model, a bit of memory to handle the image and camera software, and a small solar cell. A very simple wireless setup lets it send and receive data at a very modest rate.
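Functionally, those parts add up to a very small loop: wake up, grab a frame, run the detector, and radio out only the result, never the image, which is what keeps both the bandwidth and the power draw so modest. A rough sketch of that pattern is below; the sensor, model, and radio calls are hypothetical stand-ins, not Xnor’s firmware.

```python
import time

# Rough sketch of a capture -> detect -> report loop for a device like this.
# The three helpers below are hypothetical stand-ins for real drivers.

FRAME_W, FRAME_H = 320, 240          # resolution quoted for the demo camera

def grab_frame(w, h):
    """Stand-in for reading the image sensor; returns a dummy grayscale frame."""
    return [[0] * w for _ in range(h)]

def detect_objects(frame):
    """Stand-in for the on-device model (the FPGA's job); returns (label, box) pairs."""
    return [("person", (80, 60, 40, 90))]

def radio_send(results):
    """Stand-in for the low-rate wireless link; only detections ever leave the device."""
    print("tx:", results)

def run(frames=3, interval_s=1.0):
    for _ in range(frames):
        detections = detect_objects(grab_frame(FRAME_W, FRAME_H))
        if detections:                # transmit results, never raw images
            radio_send(detections)
        time.sleep(interval_s)        # duty-cycle the loop to match available power

run()
```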

“This thing has no power. It’s a two dollar computer with an uber-crappy camera, and it can run state of the art object recognition,” enthused Farhadi, clearly more than pleased with what the Xnor team has created.

For reference, a video from the company’s debut shows the kind of work it’s doing inside.

As long as the cell is in any kind of significant light, it will power the image processor and object recognition algorithm. It needs about a hundred millivolts coming in to work, though at lower levels it could just snap images less often.

It can run on that current alone, but of course it’s impractical not to have some kind of energy storage; to that end, this demo device has a supercapacitor that stores enough energy to keep it going all night, or whenever its light source is obscured.

As a demonstration of its efficiency, suppose you decided to equip it with, say, a watch battery. Naderiparizi said it could probably run on that at one frame per second for more than 30 years.
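That claim is easy to sanity-check with rough numbers. Assuming a CR2032-class watch battery (about 225 mAh at 3 V; those figures are my assumption, not Xnor’s), 30 years at one frame per second leaves an average budget of only a few microjoules per frame for the sensor, the inference, the radio, and idle draw combined:

```python
# Back-of-the-envelope check of the watch-battery claim.
# The battery figures are assumptions (a typical CR2032 cell), not Xnor's specs.

battery_mah = 225                                             # typical CR2032 capacity
battery_volts = 3.0
battery_joules = battery_mah / 1000 * 3600 * battery_volts    # ~2,430 J stored

frames = 30 * 365 * 24 * 3600                                 # 30 years at one frame per second

print(f"{battery_joules / frames * 1e6:.1f} microjoules per frame")   # ~2.6 µJ
```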

Not a product

Of course the breakthrough isn’t really that there’s now a solar-powered smart camera. That could be useful, sure, but it’s not really what’s worth crowing about here. It’s the fact that a sophisticated deep learning model can run on a computer that costs pennies and uses less power than your phone does when it’s asleep.

“This isn’t a product,” Farhadi said of the tiny hardware platform. “It’s an enabler.”

The energy necessary for performing inference processes such as facial recognition, natural language processing, and so on puts hard limits on what can be done with them. A smart light bulb that turns on when you ask it to isn’t really a smart light bulb. It’s a board in a light bulb enclosure that relays your voice to a hub and probably a datacenter somewhere, which analyzes what you say and returns a result, turning the light on.

That’s not only convoluted, but it introduces latency and a whole spectrum of places where the process could break or be attacked. And meanwhile it requires a constant source of power or a battery!

On the other hand, imagine a camera you stick into a house plant’s pot, or stick to a wall, or set on top of the bookcase, or anything. This camera requires no more power than some light shining on it; it can recognize voice commands and analyze imagery without touching the cloud at all; it can’t really be hacked because it barely has an input at all; and its components cost maybe $10.

Only one of these things can be truly ubiquitous. Only the latter can scale to billions of devices without requiring immense investment in infrastructure.

And honestly, the latter sounds like a better bet for a ton of applications where there’s a question of privacy or latency. Would you rather have a baby monitor that streams its images to a cloud server, where it’s monitored for movement? Or a baby monitor that, absent an internet connection, can still tell you if the kid is up and about? If they both work pretty well, the latter seems like the obvious choice. And that’s the case for numerous consumer applications.

Amazingly, the power cost of the platform isn’t anywhere near bottoming out. The FPGA used to do the computing on this demo unit isn’t particularly efficient for the processing power it provides. If they had a custom chip baked, they could get another order of magnitude or two out of it, lowering the work cost for inference to the level of microjoules. The size is more limited by the optics of the camera and the size of the antenna, which must have certain dimensions to transmit and receive radio signals.

And again, this isn’t about selling a million of these particular little widgets. As Xnor has done already with its clients, the platform and software that runs on it can be customized for individual projects or hardware. One even wanted a model to run on MIPS — so now it does.

By drastically lowering the power and space required to run a self-contained inference engine, entirely new product categories can be created. Will they be creepy? Probably. But at least they won’t have to phone home.



from TechCrunch https://tcrn.ch/2SMEKkD
