Sensor fusion: The key to achieving optimal autonomy in robotics
Imagine walking blindfolded through an area unknown to you. The chances that you will hit something are very high, and that is exactly what robots would face if they were not equipped with sensors. Sensor fusion combines the data collected by different sensors so that robots can navigate safely through any environment.
Robots must navigate through environments unknown to them, reacting and adapting by detecting objects, classifying those objects, and ultimately avoiding them. In addition, the robot needs to map the area it is traversing, identifying gates, cliffs, or aisles, for example, and coexist safely with humans and other robots.
Looking specifically at autonomous mobile robots (AMRs), there are three challenges that designers need to resolve to achieve effective, autonomous navigation:
- Safe human presence detection. Detecting the presence of humans begins with defining a safety area around the robot; the robot must stop or steer away when a person enters that area.
- Mapping and localization. The robot must have a map of the environment where it’s going to operate, while also knowing where it is located at any point in time, so that it can return to a specified starting point or reach a pre-defined destination.
- Collision avoidance. The AMR needs to avoid objects in its path and navigate through aisles or gates without colliding with any kind of obstacle.
It is possible to address these challenges by using different types of sensing technologies that, when fused together, give the robot a level of autonomy that enables it to work independently and safely in any environment.
LiDAR and radar are the most popular technologies used to create a safety bubble around AMRs so that they can detect the presence of humans.
But what’s the best sensor technology to address indoor mapping and localization? Considering an unknown environment, the AMR would need to create a dynamic map by measuring the position of the nearest objects or structures with ultrasonic, LiDAR, 3D time-of-flight, and radar sensors. Then gyroscopes, accelerometers, and encoders could measure the distance that the AMR has traveled from its original location.
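To make the localization piece concrete, here is a minimal dead-reckoning sketch in Python. It assumes a hypothetical differential-drive AMR with wheel encoders and a gyroscope; the wheel radius, encoder resolution, and sample period are placeholder values, not figures from any particular robot.

```python
import math

# Placeholder parameters for a hypothetical differential-drive AMR.
WHEEL_RADIUS_M = 0.05        # wheel radius in meters (assumed)
TICKS_PER_REV = 1024         # encoder resolution (assumed)
DT_S = 0.01                  # sample period in seconds (assumed)

def update_pose(x, y, heading, left_ticks, right_ticks, gyro_yaw_rate):
    """Advance the estimated pose by one sample using encoder + gyro data."""
    # Convert encoder ticks accumulated over DT_S into wheel travel distances.
    left_dist = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
    right_dist = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
    forward = (left_dist + right_dist) / 2.0

    # Trust the gyroscope for heading change and the encoders for distance.
    heading += gyro_yaw_rate * DT_S
    x += forward * math.cos(heading)
    y += forward * math.sin(heading)
    return x, y, heading

# Example: integrate a short stream of (left_ticks, right_ticks, yaw_rate) samples.
pose = (0.0, 0.0, 0.0)
for sample in [(10, 10, 0.0), (10, 12, 0.05), (11, 13, 0.05)]:
    pose = update_pose(*pose, *sample)
print("Estimated pose (x, y, heading):", pose)
```

In practice, this drift-prone estimate would be corrected against the map built from the LiDAR, time-of-flight, or radar measurements, which is exactly why the two data sources are fused.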
To design for collision avoidance, it’s important to know the characteristics of the object (material, shape, color) and whether it is a static object (for example, a wall or a shelf) or a dynamic object (such as another robot).
If it’s a static object, the robot must acquire that information while mapping the area so that it can plan to avoid the object in advance. If it’s a dynamic object, fusing data from different sensors in real time can yield the object’s distance, dimensions, and speed, enabling active collision avoidance.
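To make the dynamic-object case concrete, the sketch below shows one simple way an AMR could act on fused distance and speed estimates: compute a time to collision and decide whether to keep going, slow down, or stop. The thresholds are illustrative assumptions, not values taken from any safety standard.

```python
def collision_action(distance_m, closing_speed_mps,
                     slow_ttc_s=4.0, stop_ttc_s=1.5):
    """Decide an avoidance action from fused range and relative-speed data.

    distance_m        -- estimated gap to the object (e.g., radar/LiDAR fusion)
    closing_speed_mps -- positive when the object and the AMR are approaching
    Thresholds are placeholder values for illustration only.
    """
    if closing_speed_mps <= 0:
        return "continue"            # object is static or moving away
    ttc = distance_m / closing_speed_mps
    if ttc < stop_ttc_s:
        return "stop"
    if ttc < slow_ttc_s:
        return "slow_down"
    return "continue"

print(collision_action(distance_m=3.0, closing_speed_mps=1.0))  # slow_down
print(collision_action(distance_m=1.0, closing_speed_mps=1.0))  # stop
```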
Sensor fusion addresses these challenges by overcoming the limitations of individual sensor technologies. For example, radar and LiDAR are good at measuring distances with high precision and accuracy, work in poor lighting conditions, and are unaffected by weather conditions.
Cameras, on the other hand, like our eyes, are good at classifying objects and distinguishing shapes and colors, but their performance degrades in environmental conditions that reduce visibility. Only by using different sensor technologies together can an AMR achieve a level of autonomy that enables it to navigate safely through any environment, even environments unknown to it.
Texas Instruments (TI) can provide everything from the sensors to the processors needed for AMR sensor fusion with a portfolio that spans the entire robotic signal chain.
The company’s sensor portfolio ranges from discrete to highly integrated solutions for implementing sensor fusion. For example, Safety Integrity Level-2 60- and 77-GHz single-chip TI millimeter-wave (mmWave) sensors integrate a digital signal processor, a microcontroller, and a high-frequency front end and generate measured values for the distance, speed, and angle of incidence of the objects in an AMR’s field of view.
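As an illustration of how such distance and angle measurements might be consumed downstream, the snippet below converts a list of hypothetical (range, azimuth, elevation) detections into Cartesian points in the robot frame. The detection format and axis convention are assumptions for this sketch, not the output format of any specific TI device or driver.

```python
import math

def detections_to_points(detections):
    """Convert (range_m, azimuth_rad, elevation_rad) tuples to (x, y, z).

    Convention assumed here: x forward, y left, z up; azimuth is measured
    from the x-axis toward y, elevation from the x-y plane toward z.
    """
    points = []
    for rng, az, el in detections:
        x = rng * math.cos(el) * math.cos(az)
        y = rng * math.cos(el) * math.sin(az)
        z = rng * math.sin(el)
        points.append((x, y, z))
    return points

# Example: one detection straight ahead and one 30 degrees to the left.
print(detections_to_points([(2.0, 0.0, 0.0), (3.5, math.radians(30), 0.0)]))
```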
Implementing a safety area scanner with the sensor enables the robot to detect and localize objects in three dimensions. Beyond the detection of a human, the AMR can also determine that human’s direction of travel and speed. With this information, the robot can define a safety area around itself and dynamically adjust its speed, depending on how close the human is.
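One hedged way to express that dynamic speed adjustment in code is shown below: the robot stops when a person is inside a protective zone, scales its speed linearly inside a warning zone, and otherwise runs at full speed. The zone radii and speed limit are illustrative assumptions, not values from a safety assessment.

```python
def allowed_speed(nearest_human_dist_m,
                  protective_radius_m=0.5,
                  warning_radius_m=2.0,
                  max_speed_mps=1.5):
    """Scale the AMR's commanded speed based on the closest detected person.

    All radii and the maximum speed are placeholder values for illustration;
    real limits would come from a safety assessment of the actual robot.
    """
    if nearest_human_dist_m <= protective_radius_m:
        return 0.0                      # person inside the protective zone: stop
    if nearest_human_dist_m >= warning_radius_m:
        return max_speed_mps            # nobody close: full speed
    # Linear ramp between the protective and warning radii.
    span = warning_radius_m - protective_radius_m
    return max_speed_mps * (nearest_human_dist_m - protective_radius_m) / span

for d in (0.3, 1.0, 3.0):
    print(d, "->", round(allowed_speed(d), 2), "m/s")
```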
Combining a camera with a TI mmWave sensor, as an example, enables the AMR to obtain additional information about an object’s characteristics that would not be possible with only a single sensor. Of course, the fusion of radar and camera brings additional challenges into the system, such as:
- Sensor calibration. Cameras can be calibrated with OpenCV calibration routines and the chessboard capture algorithm, while radars can be calibrated with corner reflectors.
- Data synchronization. Achieving parallel processing of an incoming data stream requires time synchronization between the radar and the camera.
- Spatial data alignment. Data coming from sensors with different mounting positions and points of view must be transformed into a common frame before it can be overlaid (a minimal alignment sketch follows this list).
- Object recognition and classification. After spatial alignment, the fused data can be clustered and classified to recognize objects.
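The sketch below illustrates the spatial-alignment step under simple assumptions: radar detections already expressed as 3D points in camera-like axes, a camera intrinsic matrix of the kind produced by a chessboard calibration, and a radar-to-camera rotation and translation. All of the calibration values here are placeholders. Matching detections to the nearest camera frame by timestamp, as the synchronization bullet describes, would happen before this step.

```python
import numpy as np

# Placeholder calibration results; real values would come from OpenCV
# chessboard calibration (intrinsics) and a radar-camera extrinsic calibration.
K = np.array([[900.0,   0.0, 640.0],      # camera intrinsic matrix (assumed)
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                             # radar-to-camera rotation (assumed)
t = np.array([0.0, -0.1, 0.05])           # radar-to-camera translation, meters (assumed)

def project_radar_points(points_radar):
    """Project Nx3 radar points (meters, camera-like axes: x right, y down, z forward)
    into camera pixel coordinates."""
    pts_cam = points_radar @ R.T + t      # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]  # keep only points in front of the camera
    pix = pts_cam @ K.T                   # apply the pinhole projection
    return pix[:, :2] / pix[:, 2:3]       # normalize by depth to get (u, v)

radar_points = np.array([[0.5, 0.0, 3.0],   # hypothetical detections
                         [-0.8, 0.2, 5.0]])
print(project_radar_points(radar_points))
```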
The TI Jacinto processor accelerates sensor fusion by capturing all sensor data, processing the data, and interpreting the measurements to control the AMR. The Jacinto software development kit has tools that include deep-learning open-source frameworks such as TensorFlow Lite, as well as a computer-vision and perception tool kit that addresses some of the challenges mentioned previously, including image classification, object detection, and semantic segmentation.
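As a generic illustration of the deep-learning side (not the Jacinto SDK's own API), here is a minimal TensorFlow Lite inference sketch using the tflite_runtime package; the model path, input handling, and output interpretation are assumptions for this example.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter

# Load an image-classification model (placeholder path).
interpreter = Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy camera frame shaped and typed to the model's expected input.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Top class index:", int(np.argmax(scores)))
```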
Sensor fusion should be implemented within a robot operating system (ROS) environment that treats each sensor device as a node of the system, so that the sensors can be fused and tested to address the challenges discussed in this article.
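A minimal sketch of that node-based structure, assuming ROS 2 with rclpy and hypothetical topic names, might look like the following; the approximate-time synchronizer also addresses the data-synchronization challenge noted earlier.

```python
import rclpy
from rclpy.node import Node
import message_filters
from sensor_msgs.msg import Image, PointCloud2

class FusionNode(Node):
    """Subscribes to camera and radar topics and fuses time-aligned pairs."""

    def __init__(self):
        super().__init__("sensor_fusion_node")
        # Topic names are assumptions for this sketch.
        cam_sub = message_filters.Subscriber(self, Image, "/camera/image_raw")
        radar_sub = message_filters.Subscriber(self, PointCloud2, "/radar/points")
        # Pair messages whose timestamps differ by less than 50 ms.
        sync = message_filters.ApproximateTimeSynchronizer(
            [cam_sub, radar_sub], queue_size=10, slop=0.05)
        sync.registerCallback(self.fuse)

    def fuse(self, image_msg, radar_msg):
        # Real fusion (projection, clustering, classification) would go here.
        self.get_logger().info("Fusing synchronized camera + radar frame")

def main():
    rclpy.init()
    node = FusionNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```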
Sensor fusion must happen in real time, especially if the AMR is collaborating with humans, because the robot relies on sensor data to ensure that it does not harm anyone nearby while still performing its tasks. A high-speed communication interface between the sensor nodes and the brain of the robot is a must for real-time response, so that the AMR can react quickly to sensor inputs.
The data acquired from the sensors can also help build better machine-learning and artificial-intelligence models that robots rely on to become autonomous, making real-time decisions and navigating in dynamic real-world environments.
Despite the many challenges that come with sensor fusion, this technology represents an important step in robotics, moving from guided vehicles to fully autonomous ones that can finally coexist with humans and make work more efficient and profitable.
About the author
Giovanni Campanella is a sector general manager at Texas Instruments, focusing on building automation, retail automation, and payment and appliances. Giovanni leads a team that helps solve customer design challenges on a worldwide basis. He has broad experience in analog and sensing technologies as well as strong analytical and interpersonal skills combined with a hands-on attitude. Giovanni holds a bachelor’s degree in electronics and telecommunication engineering from the University of Bologna and a master’s degree in electronic engineering from the Polytechnic University of Turin.