Solving the Challenges of Adding AI to Home Appliances


For all the interesting things artificial intelligence (AI) can do today, the vast majority of it is stuck being served from a datacenter due to the extreme complexity and high cost of AI-capable chips. But if those capabilities could be run outside of a datacenter, they could enable any number of new products and features, just like when centralized mainframe computers were brought to the masses in the form of PCs, laptops, and ultimately smartphones.

For example, the home appliance market is ideal for AI. To remain competitive, appliance manufacturers must innovate, and with AI, that means finding new ways to add compelling AI features like voice control and alerts for televisions, HVAC units, refrigerators, stoves, washing machines, and dryers—all while meeting energy-efficiency standards like Energy Star and Ecodesign, and at price points consumers can afford.

Sam Fok, CEO, Femtosense

If AI were more efficient, easier, and less expensive to deploy, it would add the next level of convenience and capability directly on our appliances and devices. Residents could change the TV channel, turn the lights on and off, or heat up the room without hunting for often-misplaced remotes.

Not sure which setting to use on the washer or dryer? Let AI figure it out. Not sure what to prepare with the ingredients in the fridge? Again, let AI figure it out. AI can also improve safety within the home, alerting residents that their toast is burning, their pot is boiling over, or even that their cabbage is about to spoil, so they can take action before anyone gets sick.


Unfortunately, adding these AI features on-device to most mass market products incurs costs for the manufacturer that are passed along to the consumer. Adding $5 more in cost to build the product would likely result in an additional $25 for the consumer, pricing many out of the market.

Making AI efficient

With the rising consumer demand for added convenience and smart functionality from home appliances, manufacturers recognize the need for more cost-effective AI chips that are easier to deploy. A new on-device AI inference processor combined with a high-performing, energy-efficient microcontroller (MCU) targeted at home appliances is one such solution. The inference processor enables voice control and other AI functions in energy- and cost-sensitive appliances and devices by leveraging sparse mathematics to strip away the unnecessary work in AI and significantly improve efficiency. 

Sparse processing means incentivizing and exploiting sparsity—zeros in an AI algorithm. Prune away unnecessary connections and only strengthen connections that matter. Also only generate activations when something interesting is happening. Don’t store zeros. Don’t pull them out of memory. Don’t operate on them. Save your silicon, money, and energy. This makes AI efficient. This is what we are doing with our algorithms and what we help our customers do with theirs. What’s been lacking is hardware to exploit that sparsity.    
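As a rough illustration of the principle (not Femtosense's implementation), a compressed sparse row (CSR) matrix-vector product shows how skipping zeros cuts both storage and arithmetic: only the nonzero weights are kept, pulled from memory, and multiplied, so the work scales with the number of nonzeros rather than the full layer size.

```python
# Sketch: sparse matrix-vector multiply in CSR form. Only nonzero
# weights are stored and operated on, so a 90%-sparse layer keeps
# and multiplies roughly 10% of its weights.

def dense_to_csr(matrix):
    """Compress a dense row-major matrix, dropping zeros."""
    values, col_idx, row_ptr = [], [], [0]
    for row in matrix:
        for j, w in enumerate(row):
            if w != 0:
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    """y = W @ x, touching only the stored nonzeros."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

W = [[0, 2, 0, 0],
     [0, 0, 0, 0],
     [1, 0, 0, 3]]
vals, cols, ptrs = dense_to_csr(W)
print(csr_matvec(vals, cols, ptrs, [1.0, 1.0, 1.0, 1.0]))  # [2.0, 0.0, 4.0]
```

On a general-purpose CPU the inner loop above still pays for indexing overhead; the point of dedicated sparse hardware is that skipped zeros cost nothing in silicon, memory traffic, or energy.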

The tiny Sparse Processing Unit 001 (SPU-001) compresses AI workloads for real-time applications on devices at the edge so they fit on a small piece of silicon. This saves space, time and energy—and presents margins that grow as AI models scale. (Source: Femtosense)

There is a chicken-and-egg problem between siloed pure-hardware and pure-software worlds. Few algorithm developers use sparsity because, until now, there has not been hardware to exploit it to its fullest. And if you’re a pure hardware developer, it doesn’t make sense to build hardware for workloads that don’t exist. We make the chicken and the egg at the same time by enabling customers with sparse algorithms and providing hardware that exploits sparsity to its fullest.

In short, the processor compresses AI workloads for real-time applications on devices at the edge so they fit on a small piece of silicon, saving space, time, and energy, with margins that grow as AI models scale. And they are definitely scaling.

To provide this real-time, ultra-low-power AI efficiency at a reasonable cost, the AI inference processor must work with high-performing, energy-efficient MCUs. Last year, Femtosense partnered with ABOV Semiconductor, a supplier of motor-control, sensor, and remote-control MCUs for the home appliance and industrial markets. Combined with ABOV's low-power MCU, the AI inference processor now offers home appliance manufacturers always-on, leading-edge AI without compromising on energy efficiency, and lets them add the voice-interface or AI-helper features specific to each type of application.

As device and appliance manufacturers add compelling AI features, they can now do so at price points consumers can afford while meeting efficiency standards. Solving this problem at scale opens a large market for sparse processing AI technology as it brings AI out of the datacenter and into the real world.