The term image processing encompasses many different tasks, including computational photography, computer vision algorithms, and even basics like image compression. One aspect all of these processes have in common is that performance and results improve as the quality of their input data improves.
But what if you don’t have high-quality images? Picture and video capture don’t always provide the highest-quality data for processing in the real world. For example, image frames can be noisy due to a lack of light or incorrect shutter speeds. Important image information can be lost when the ISP (the processor that manages camera data) performs tasks like converting HDR levels or simply compressing the data. This means that downstream image processing algorithms in the camera pipeline may be forced to work with less-than-ideal data.
Here’s where NexOptic comes in. NexOptic, a machine-learning (ML) startup, is a member of the Qualcomm Advantage Network (QAN), specifically the Platform Solutions Ecosystem (PSE). NexOptic recognized the need to improve image data right where it’s captured: at the edge, on mobile and IoT devices. As a result, they’ve developed ALIIS (All Light Intelligent Imaging Solutions), a suite of ML-based image enhancement algorithms. ALIIS is used to enhance and correct flaws in images provided by a device’s camera and ISP, on a pixel-by-pixel basis, making high-quality image data available for downstream camera pipeline processes.
Built on convolutional neural networks (CNNs) inspired by AlexNet and U-Net, NexOptic’s algorithms operate in real time at the device edge. Their models use the image-processing strengths of CNNs to reduce noise in low-light images, which is a primary challenge in image and video capture today. In particular, NexOptic makes use of a CNN’s ability to extract high-level image information like edges, contours, and objects from noisy signals to reconstruct the image.
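The core denoising idea can be illustrated with a tiny convolutional pass. This is a minimal, conceptual sketch, not NexOptic’s actual model: the kernel below is hand-set rather than learned, and the function names (`conv2d`, etc.) are my own.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same' 2-D convolution with edge padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Hand-set smoothing kernel -- a stand-in for one learned CNN filter.
kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float64) / 16.0

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                    # flat gray patch
noisy = clean + rng.normal(0, 0.1, clean.shape)   # low-light-style sensor noise
denoised = conv2d(noisy, kernel)

# The filtered patch is closer to the clean signal than the noisy input.
print(np.std(denoised - clean) < np.std(noisy - clean))  # True
```

A real low-light model stacks many such learned filters (plus downsampling/upsampling paths in a U-Net), which lets it suppress noise while preserving the edges and contours a single smoothing kernel would blur away.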
ALIIS is effectively a software ISP between the camera and subsequent downstream elements of the camera pipeline on mobile and IoT devices. Depending on the platform, ALIIS can benefit from hardware acceleration by running on specialized processors. Figure 1 provides a general illustration showing where ALIIS sits in a typical mobile or IoT device architecture:
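The “software ISP” placement described above can be sketched as a chain of pipeline stages, with the enhancement step sitting between the ISP output and any downstream consumers. All stage names here are hypothetical placeholders, not NexOptic APIs:

```python
from typing import Callable, List
import numpy as np

Frame = np.ndarray
Stage = Callable[[Frame], Frame]

def run_pipeline(frame: Frame, stages: List[Stage]) -> Frame:
    """Pass a frame through each camera-pipeline stage in order."""
    for stage in stages:
        frame = stage(frame)
    return frame

def isp_output(frame: Frame) -> Frame:
    return frame                       # data as handed off by the hardware ISP

def enhance(frame: Frame) -> Frame:
    return np.clip(frame, 0.0, 1.0)    # placeholder for ALIIS-style cleanup

def downstream_consumer(frame: Frame) -> Frame:
    return frame                       # e.g. a computer-vision model or encoder

frame = np.random.default_rng(1).uniform(-0.2, 1.2, (4, 4))
result = run_pipeline(frame, [isp_output, enhance, downstream_consumer])
print(result.min() >= 0.0 and result.max() <= 1.0)  # True
```

The ordering is the point: because enhancement runs immediately after the ISP, every later stage receives cleaned-up data rather than raw noise.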
NexOptic says that putting ALIIS right at the device edge where data is collected provides numerous benefits. Most importantly, the raw, uncompressed data from the ISP provides the most image information possible to ALIIS. And as with other AI-at-the-device-edge solutions, keeping the data on the device rather than sending it to the cloud for processing increases privacy, reduces latency, and provides data locality, the latter two being critical for real-time image processing.
In practice, the results are impressive. In one use case, ALIIS helped improve image classification performed by a commercial image classifier by 400%.
Of course, seeing is believing. Figure 2 shows an example of the type of image cleanup that ALIIS is capable of:
NexOptic has effectively done the hard work of designing and training its models so that developers can reap the benefits. As a result, the company often describes ALIIS as AI for AI, since its ML-based algorithms can be used to clean up data for computer vision models that may run downstream in the camera pipeline. The company also constantly optimizes and retrains its models, and has specific versions trained for various classes of cameras.
As a QAN member, NexOptic has built an implementation of ALIIS optimized for devices built around Snapdragon mobile platforms, with the ability to process 2K video at 30 FPS on the Snapdragon 855 Mobile Platform.
NexOptic takes advantage of the Qualcomm Spectra ISP, which provides the camera data, and complements it by running ALIIS on the Qualcomm Hexagon DSP. They build their models with TensorFlow, and then use the Qualcomm Neural Processing SDK for artificial intelligence (AI) to quantize and convert the exported model into the Deep Learning Container (DLC) format that is optimized to run on the Hexagon DSP. They employ additional optimization techniques, including architecture search, model distillation, mixed-precision networks, and filter- and weight-based pruning.
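Quantization, one of the steps performed when producing a DSP-ready model, can be illustrated with a minimal symmetric int8 scheme. This is a conceptual sketch under my own assumptions, not the Neural Processing SDK’s actual algorithm:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 values plus a scale."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).normal(0, 0.05, size=(3, 3)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and per-weight rounding error
# is bounded by half a quantization step.
print(q.dtype, float(np.max(np.abs(w - w_hat))) <= scale / 2 + 1e-7)
```

Shrinking weights from 32-bit floats to 8-bit integers cuts model size and lets fixed-point accelerators like a DSP execute the network far more efficiently, at the cost of small, bounded rounding error.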
NexOptic is solving a unique problem. By using ML to enhance image capture in real-time at the device edge, downstream camera processes can work with significantly higher-quality image data. The company also says its technology can be applied to other sectors, including smart security, mobile, automotive, AR & VR, medical imaging, and industrial automation.
To use NexOptic’s technology, developers and OEMs work directly with NexOptic, who provides an SDK with a C++ API, evaluation kits, guidance and support, and the ability to integrate ALIIS into camera firmware.
Developers can get started building devices powered by Snapdragon that can run NexOptic’s technology using our Snapdragon hardware development kits (HDKs). The choice of which one to use depends on the application (e.g., video vs. pictures), resolution requirements, and other performance factors. For example, NexOptic recommends using mid-to-premium tier Snapdragon mobile platforms like the Snapdragon 778G 5G Mobile Platform, Snapdragon 865 Mobile Platform, and Snapdragon 888 Mobile Platform for processing high-resolution video.
For additional information and resources, contact NexOptic. Also, be sure to check out their recent webinar on YouTube that provides an overview of their technology and company.