Edge AI Enhances Safety: Smarter Video Cameras



What you’ll learn:

  • How the rise of smarter camera systems brings a need for greater automation.
  • Why advanced AI for smart cameras requires a solution that doesn’t depend on the cloud.


Video cameras are now omnipresent in our surroundings, whether in our doorbells, in elevators, or scattered throughout public spaces like airports, stadiums, and city streets. These cameras are becoming increasingly intelligent, with broadened capabilities ranging from bolstering home security and public safety to monitoring and optimizing traffic flow.

The demand for such camera vision systems, which increasingly “understand” what they see, continues to grow. According to ABI Research, shipments will reach close to 200 million by 2027, generating $35 billion in sales.

As smarter camera systems ramp up, so too does the need for greater automation — the ability to monitor video streams and generate insights more quickly while making streaming and storage more efficient and cost-effective. This is where artificial intelligence (AI) steps in.

However, even AI-supported camera systems have their limits. Traditional AI models rely on cloud-based infrastructure and often suffer from latency issues and other challenges. They're incapable of delivering real-time insights and alerts, their dependence on network connectivity jeopardizes reliability, and sending video to the cloud raises data-privacy concerns.

Therefore, advanced AI for smart cameras requires a solution that operates independently of the cloud. What’s needed is AI at the network edge. And to truly unlock the potential of edge AI cameras, which must handle many disparate, essential video functions on their own, it isn’t enough for them to be capable of some AI processing. They need to be able to handle substantial AI workloads.

Why Is Edge AI Essential for Smart Cameras?

Edge AI, in which AI processing takes place directly within cameras, makes it possible to offer real-time video analytics, insights, and alerts, thereby delivering a higher level of security. Furthermore, the implementation of AI at the edge allows for the streaming of metadata and analysis only, as opposed to transmitting entire video streams. This reduces the cost of transferring, processing, and storing video in the cloud. On top of that, edge AI can enhance privacy and reduce reliance on network connections by keeping data localized.
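To illustrate the metadata-only approach, here is a minimal sketch of a hypothetical on-camera loop that runs a local detector and transmits compact JSON events instead of raw video. The `capture_frame`, `run_detector`, and `publish_event` functions are illustrative placeholders, not a specific camera or vendor API; a real deployment would wire them to the camera's inference runtime and an uplink such as MQTT or HTTPS.

```python
import json
import time

# Hypothetical stand-ins for the camera's sensor, local inference runtime,
# and uplink transport. Real deployments would use a vendor SDK here.
def capture_frame():
    """Grab the next frame from the camera sensor (placeholder)."""
    return object()

def run_detector(frame):
    """Run an on-camera object detector and return a list of detections."""
    # Placeholder result: label, confidence, bounding box in pixels.
    return [{"label": "person", "confidence": 0.91, "bbox": [120, 80, 220, 310]}]

def publish_event(payload: str):
    """Send a small metadata payload upstream instead of the full video frame."""
    print(payload)  # stand-in for an MQTT publish or HTTPS POST

# Main loop: analyze locally, transmit only metadata for frames with activity.
while True:
    frame = capture_frame()
    detections = run_detector(frame)
    if detections:  # stream insights, not pixels
        event = {
            "timestamp": time.time(),
            "camera_id": "cam-entrance-01",
            "detections": detections,
        }
        publish_event(json.dumps(event))
    time.sleep(1 / 30)  # roughly 30-fps processing cadence
```

Because each event is a few hundred bytes rather than a full frame, the bandwidth and cloud-storage savings follow directly from this design.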

Until now, most smart cameras have been constrained by limited computing power for handling AI processing. They’re also largely incapable of enhancing video on the fly, a crucial component for accurate analytics.

What distinguishes the next generation of smart camera systems is the integration of robust compute power and AI processing capacity directly in the cameras. This not only enables the processing of advanced video analytics, but also applies AI for video enhancement to achieve high-quality video. Given that both functions—enabling advanced video analytics and enhancing video quality—demand their own AI capacity, today’s smart cameras must be equipped with an optimal level of AI power.

High-Quality Video Enhancements Boost Analytical Precision

Though AI is commonly associated with analytics, it can also be used in smart cameras to improve image quality and provide crisp, clear visuals. In public safety situations, the quality of the video image can be paramount in assessing potential risks.

AI is able to effectively manage a variety of image enhancement tasks, including mitigating noise in low-light conditions, performing high-dynamic-range (HDR) processing, and even addressing some aspects of the classic 3A (auto exposure, auto focus, and auto white balance).
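As a rough illustration of how those stages could be chained on-camera, the sketch below uses NumPy only. The simple averaging, gain, and tone-curve math stands in for the learned denoising and HDR models a real edge camera would run, and the function names are assumptions made for this example, not a product API.

```python
import numpy as np

def denoise(frame: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Crude noise suppression: blend each pixel toward its 3x3 local mean.
    A real camera would run a learned denoising network here."""
    padded = np.pad(frame.astype(np.float32), 1, mode="edge")
    local_mean = sum(
        padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return (1 - strength) * frame + strength * local_mean

def tone_map_hdr(frame: np.ndarray) -> np.ndarray:
    """Simple global tone curve as a stand-in for HDR processing."""
    normalized = frame / 255.0
    return 255.0 * normalized / (1.0 + normalized)

def auto_exposure_gain(frame: np.ndarray, target_mean: float = 110.0) -> np.ndarray:
    """One piece of the classic 3A: scale brightness toward a target mean."""
    gain = target_mean / max(float(frame.mean()), 1.0)
    return np.clip(frame * gain, 0, 255)

# Enhancement pipeline applied before analytics and compression.
frame = np.random.randint(0, 60, size=(1080, 1920), dtype=np.uint8)  # dark, noisy frame
enhanced = denoise(frame)
enhanced = auto_exposure_gain(enhanced)
enhanced = tone_map_hdr(enhanced)
print(enhanced.min(), enhanced.max())
```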

Low-light conditions, for instance, can severely limit viewing distance and reduce image quality. The resulting video “noise” makes it challenging to differentiate detail while increasing data size during compression. This could lead to poor system efficiency when transmitting and storing video data in the cloud.

While AI can remove noise and simultaneously preserve essential image details, it also demands significant processing. For example, eliminating noise from a 4K video image captured in low-light conditions requires approximately 100 giga (billion) operations (GOPs) per frame, which works out to 3 tera (trillion) operations per second (TOPS) for real-time video streaming at 30 frames per second (Fig. 1).
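The arithmetic behind that estimate is straightforward; the short calculation below simply reproduces it using the 100-GOP-per-frame figure from the example above.

```python
# Back-of-the-envelope compute budget for AI denoising of a 4K stream.
ops_per_frame = 100e9   # ~100 giga operations per 4K frame (from the example)
frames_per_second = 30  # real-time video rate

ops_per_second = ops_per_frame * frames_per_second
print(f"{ops_per_second / 1e12:.1f} TOPS required")  # -> 3.0 TOPS required
```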