Deep Learning with Time of Flight Sensors
Deep Learning is becoming increasingly prevalent in image processing. In conjunction with Time-of-Flight (ToF) 3D sensor technology, higher recognition performance can be achieved compared to pure RGB images. The multi-ToF platform, which connects various sensors to a powerful hub, is the foundation for future AI camera solutions.
Industrial applications today rely solely on "weak artificial intelligence". Weak AI systems are limited to solving specific problems and obtain all of their knowledge from the training data. Their lifecycle has two phases: a training phase and an application phase. Because the two phases are separate, a weak AI does not learn during the application phase. On the one hand, this means systems with guaranteed quality can be operated securely; on the other, they have to be retrained if new objects are added or if existing objects change significantly over time.
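The two-phase lifecycle described above can be sketched in a few lines. The following is a minimal illustration only, using a hypothetical nearest-centroid classifier rather than any actual BECOM pipeline: the model is built once in the training phase and then used unchanged in the application phase, so recognising a new object class would require retraining.

```python
import math

def train(samples):
    """Training phase: compute one centroid per class from labelled data."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(model, features):
    """Application phase: the frozen model classifies but never updates itself."""
    def dist(centroid):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(features, centroid)))
    return min(model, key=lambda label: dist(model[label]))

# Training phase (done once, offline, on curated data):
model = train([((0.0, 0.0), "background"), ((1.0, 1.0), "object")])

# Application phase (the model is fixed; unseen classes require retraining):
print(predict(model, (0.9, 1.1)))  # → object
```

The strict split is what makes the quality guarantees mentioned above possible: since `predict` never modifies `model`, the system's behaviour after deployment is exactly what was validated after training.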
In contrast to classic image-processing methods, Deep Learning is very robust to variation. Even with a large number of different classes, a well-trained system recognises objects despite low contrast, strong brightness fluctuations, occlusion of up to 60-70%, or sometimes extreme deformation.
Deep Learning is therefore suitable even in applications where there is neither a controlled environment nor standardised objects.
Teaching machines to "see"
BECOM not only offers 2D/3D sensor technology for image data collection but also possesses extensive processing know-how with NVIDIA processors for sophisticated systems, from OEM modules to complete deep learning cameras.