Start Date
7-5-2024 10:10 AM
End Date
7-5-2024 9:55 AM
Document Type
Full Paper
Description
Perception and decision making are contextual. Landmarks are components of the environment associated with high perception and localization accuracy, and their presence can significantly affect an agent's beliefs and decisions. Our research focuses on integrating state-of-the-art sensing technologies to enhance human decision-making. The perception model incorporates multi-sensor fusion, utilizing LiDAR, cameras, and inertial sensors to build a dynamic representation of the environment. Object recognition and tracking algorithms further enable the robot to interpret the scene, providing valuable insights for informed decision-making. This effort presents a novel perception model tailored for mobile robots, emphasizing its role in assisting humans during decision-making. Our preliminary model includes multi-sensor fusion and semantic scene analysis and understanding, evaluated on existing SLAM datasets. Toward that objective, a mobile robot serves as a valuable companion that aids navigation by providing timely and relevant information. This paper details an initial stage, results, and evaluation of our perception model, in which we validate the contextual state through object detection. To achieve GPS-free localization on a road network, we use stratified sequential importance sampling, with stratification levels based on semantic object spaces in the map and on running time; in this setting, we quantify the impact of landmark presence and frequency on the success of localization and, thereby, of decisions.
DOI
https://doi.org/10.5038/JJMU2648
Quantifying Impact of Robot Perception Accuracy at Landmarks in Decision-Making during Complex Situations
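The stratified sequential importance sampling mentioned in the abstract can be sketched minimally as follows. This is an illustrative sketch only, assuming strata are semantic landmark classes associated with each particle; the function name, data layout, and weights are hypothetical and do not reproduce the paper's actual model or datasets.

```python
import random

def stratified_resample(particles, weights, strata):
    """Resample particles within semantic strata (e.g., landmark classes)
    so each stratum keeps a particle share proportional to its total weight.
    This keeps rare but informative landmark strata represented."""
    total = sum(weights)
    # Group particles and their importance weights by semantic stratum.
    groups = {}
    for p, w, s in zip(particles, weights, strata):
        g = groups.setdefault(s, {"particles": [], "weights": []})
        g["particles"].append(p)
        g["weights"].append(w)

    n = len(particles)
    resampled = []
    for g in groups.values():
        stratum_weight = sum(g["weights"])
        # Survivors allotted to this stratum, at least one to avoid collapse.
        k = max(1, round(n * stratum_weight / total))
        resampled.extend(
            random.choices(g["particles"], weights=g["weights"], k=k)
        )
    return resampled[:n]
```

In a road-network setting, each particle would be a pose hypothesis on the map, and its stratum the class of the nearest semantic object (e.g., intersection, sign, building); resampling per stratum prevents high-weight regions from starving particles near sparse landmarks.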