All Roads Lead to Vision for AI-Powered Industrial Processes

Kleiner identifies two trends facilitating the use of machine vision in industrial digital transformation: more powerful CPUs and improvements in model compression. “We’re seeing a real increase in the AI capabilities of what’s within the CPU package,” Kleiner says.

“It addresses many of the issues that we have with conventional sensors in an AI system,” says Powell.

The push for better compression is driven by the sheer amount of memory needed to run these models: They must be compressed down to a size manageable for the CPU’s compute power. This is particularly beneficial for CPUs at the edge, which face significant resource constraints when running AI programs.

Sachdeva says devices that capitalize on that convergence can help industry. His company, Invisible.AI, has developed an intelligent camera for manufacturing, and a software platform that monitors and learns from the camera’s recordings to deliver insights for process optimization, safety, and continuous improvement.

Echoing Kleiner’s observation of compute limitations of devices on the edge, Mitchell says that these innovations are beneficial for edge processing, where power and memory are more restrictive.

‘All roads lead to vision’

“Increasing computing power within the CPU package, and model compression so we can do more with a given architecture, really help enable a growing number of AI use cases that can handle the inference tasks with the computational power of what’s in the CPU,” says Kleiner.

This is particularly true of x86 CPU architectures, which are commonly used in industry. This growth is taking place thanks largely to the inclusion of AI-optimized GPU hardware within the physical CPU package. More integrated GPUs (iGPUs) are being built in, and their architectures are better suited to AI workloads than conventional CPU cores. More recently, neural processing units (NPUs) have been added as well.
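
As a concrete illustration of how software targets these on-package accelerators, here is a minimal sketch in Python using Intel’s OpenVINO runtime. The model file, input shape, and device-preference order are illustrative assumptions rather than anything discussed in the webinar; the sketch simply picks the NPU if the runtime exposes one, then falls back to the integrated GPU, then to the CPU cores.

# Minimal sketch: dispatch an inference workload to the accelerator
# inside the CPU package (NPU or integrated GPU) via OpenVINO.
# "defect_classifier.xml" and the input shape are hypothetical placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("defect_classifier.xml")

# Prefer the NPU if present, then the integrated GPU, then the CPU cores.
device = next((d for d in ("NPU", "GPU", "CPU") if d in core.available_devices), "CPU")
compiled = core.compile_model(model, device_name=device)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera frame
result = compiled([frame])[compiled.output(0)]
print(f"Ran inference on {device}; output shape: {result.shape}")

The point of the fallback chain is that the same workload can land on whichever accelerator a given x86 system exposes, which is the flexibility Kleiner describes.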


Published Jan. 28, 2024, on engineering.com.

Source: https://www.qualitydigest.com/inside/innovation-article/all-roads-lead-vision-ai-powered-industrial-processes-030524.html

Many companies have tried to streamline their operations with Industry 4.0 technologies such as IoT (internet of things) sensors and other text and number-based technologies, but it hasn’t been enough. Thousands of digital signals exist, but they only tell a partial story. Manufacturers have lacked complete operational visibility. According to Invisible.AI, “Video is the only way to digitize and understand the physical world at scale. AI is essential for making sense of the vast amounts of data generated on the production floor. Video-native software like computer vision, coupled with AI, represents the next generation of manufacturing technology.”

Collecting the right data for more efficient computing

Michael Kleiner, VP of Edge AI Solutions at OnLogic; Prateek Sachdeva, co-founder and CPO at Invisible.AI; and Gareth Powell, product marketing director at Prophesee, discuss the potential of AI-powered machine vision to help industries optimize their operations and compete in a global economy.

Converging trends lower the bar for deploying AI in industrial settings

While Sachdeva talked about sensor innovations, Powell presented details on the kind of model compression that enables those sensors to deploy AI at the edge.


For example, an automated forklift might drive 2–3 mph at most, limited because it can’t see what’s ahead. If it knew the entire route it needed to drive, based on the entire plant being digitized, it could drive 10 mph and deliver the product much faster because it could see when there’s nothing in its path. “Building toward that requires you to blanket your facilities in these vision solutions and get an understanding of what’s happening,” Sachdeva says.



Powerful CPUs and better model compression are bringing AI within reach of more manufacturers

“All roads lead to vision,” says Sachdeva. “To be able to get to the digital twin, to be able to run your whole factory in an automated fashion—that future requires digitizing the real world, and understanding through video what’s happening.”

The next frontier for industrial digitization and automation is the convergence of artificial intelligence (AI) and machine vision.

Along with more powerful CPUs, techniques to compress machine learning models have also been improving, enabling CPU architectures to do more with the data than ever before. These compression techniques include model choice, quantization and reduced-precision data types, pruning and sparsity optimization, knowledge distillation, and low-rank factorization.
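
As an illustration of two techniques from that list, the short Python sketch below applies L1 magnitude pruning and post-training dynamic quantization to a toy PyTorch model. The layer sizes, the 50% sparsity level, and the model itself are illustrative stand-ins, not a recipe from the webinar.

# Toy sketch of two compression techniques: magnitude pruning (sparsity)
# and post-training dynamic quantization (int8 weights for Linear layers).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(       # stand-in for the head of a vision model
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# 1) Pruning: zero out the 50% smallest-magnitude weights in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the sparsity into the weight tensor

# 2) Dynamic quantization: store Linear weights as int8 and quantize
#    activations on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])

Both steps shrink the model’s memory footprint and, on supporting hardware, speed up inference, which is part of what makes edge-class CPUs viable targets.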

Compared with conventional sensors, Prophesee’s event sensors are complete imaging systems in their own right. A conventional sensor might have two transistors linked to a particular pixel; Prophesee’s sensors have 80 to 100, sometimes more. The sensors work on light-based contrast detection, continuously tracking changes in light levels at each pixel.

AI-powered machine vision devices like Invisible.AI’s sensor help operators achieve an accurate, real-time understanding of the company’s entire processes—enabling companies to implement the Toyota concept of genchi genbutsu: going to and directly observing an operation to understand and solve problems faster and more efficiently.

Published: Tuesday, March 5, 2024 – 12:03

Powell’s company, Prophesee, powers its sensors with an innovative approach to data collection. Its sensors use an event-based approach rather than conventional image-detection technology. By selectively focusing on the changes, or events, in a series of images, and ignoring static background objects, Prophesee claims its sensors can produce up to 1,000 times less data than a conventional sensor while achieving a temporal resolution equivalent to more than 10,000 frames per second.


“Manufacturing changes every single day, every week, and you’re conducting optimization on the line,” says Sachdeva. “To be able to do data collection for every scenario is just not practical. Your solution with AI needs to work quickly, needs to be able to deploy Day One, Week One, not weeks or months from now, and not depend on a lot of data collection.”

Should that change exceed a certain threshold, it triggers an “event” that directs the sensor to pay closer attention to only the elements in the image that are moving or changing—ignoring the rest of the image. As a result, the data the sensor generates per event are far more efficient to process than the output of a conventional RGB camera.
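
Prophesee implements this logic per pixel in the sensor’s circuitry, but the idea can be illustrated in a few lines of Python. The sketch below is not Prophesee’s pipeline; it is a simplified, frame-based simulation of thresholded change detection, and the contrast threshold and scene values are arbitrary.

# Simplified simulation of event generation: emit sparse (x, y, polarity, t)
# events only where the log intensity changes by more than a threshold.
import numpy as np

THRESHOLD = 0.2  # illustrative log-intensity contrast threshold

def frame_to_events(frame, reference, timestamp, threshold=THRESHOLD):
    """Return a list of (x, y, polarity, t) events and the updated reference."""
    log_frame = np.log1p(frame.astype(np.float32))
    delta = log_frame - reference
    fired = np.abs(delta) > threshold                 # only changing pixels fire
    ys, xs = np.nonzero(fired)
    polarity = np.sign(delta[fired]).astype(np.int8)  # +1 brighter, -1 darker
    reference = reference.copy()
    reference[fired] = log_frame[fired]               # reset reference where events fired
    events = [(x, y, p, timestamp) for x, y, p in zip(xs, ys, polarity)]
    return events, reference

# Toy usage: a mostly static scene with one small bright object appearing.
h, w = 120, 160
ref = np.log1p(np.full((h, w), 50.0, dtype=np.float32))
frame = np.full((h, w), 50.0, dtype=np.float32)
frame[40:44, 60:64] = 200.0
events, ref = frame_to_events(frame, ref, timestamp=0.001)
print(f"{len(events)} events from {h * w} pixels")    # sparse output

Because only the handful of changed pixels generate events, the downstream model sees a far smaller, sparser stream than a full RGB frame would provide.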


AI-powered machine vision devices help operators achieve an accurate, real-time understanding of a company’s entire processes. Photo by Arseny Togulev on Unsplash.

AI-powered machine vision promises to transform the way industrial manufacturers conduct their business, according to experts at a recent webinar hosted by the Association for Advancing Automation (A3). The webinar, “Harnessing AI-Powered Machine Vision for Industrial Success,” brought together industry leaders to discuss how these two technologies open up many possibilities for industrial companies to maximize their competitiveness, from improving quality control to enhancing safety and optimizing production processes.

Powell claimed event-based processing has several benefits. Ultralow latency and high temporal resolution mean that inference at virtually any rate is possible, limited only by computation time. That computation time is also reduced: Models working with event data only have to learn simple patterns and features, and don’t need to learn invariance in relation to a static background. If invariance is incorporated, it can be minimal, enabling greater and easier generalization.

With more powerful sensors, and more efficient processing capabilities, machine vision is increasingly within reach for industrial businesses looking to bring AI into their factories.

The convergence of these trends means a reduction in the physical complexity required by edge systems, and more options in compute hardware—because specialized high-end systems won’t be needed—as well as efficiencies in power use and emissions. “All of this helps to simplify processing data in real time at the edge and lowers the barrier of entry for many AI deployments, including machine vision,” says Kleiner.

Deploying AI-powered sensors at the edge