Trilateration-Based Robot Localization with Learned Visual Landmarks

Author Information

Valeria Salas
Alfredo Weitzenfeld

Location

USF

Document Type

Event

Keywords

Localization, Trilateration, Vision, Learning

Description

Many strategies for robot localization exist, such as trilateration and triangulation algorithms, which compute the relative distance and orientation of a robot to multiple landmarks. These algorithms require predefined knowledge of the landmarks. In this paper we describe a trilateration algorithm for robot localization in which new landmarks may be defined using deep learning. We use a single camera to learn new custom objects. This information is passed as input to the system, together with the distances to the objects computed by a previously calibrated distance detection algorithm. An object detection model is trained with deep learning, using TensorFlow’s Object Detection API, to identify the custom objects in the environment. Information from the detected objects in the camera image is used to calibrate the distance detection algorithm. The relative positions of the objects are then used as input to the trilateration-based localization algorithm. Examples of new objects used as landmarks in our system include a chair, a sofa, a fridge, a shelf, and a coffee table. To train the model on the custom objects, a dataset of 100 images was collected with a laptop camera. These images were randomly separated into two partitions: a training partition with 90 images and a test partition with 10 images. The system was then tested with these objects in experimental work. The paper presents results and discusses shortcomings and future work.
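The abstract describes two geometric steps: estimating the range to each detected object from its appearance in a single camera image, and trilaterating the robot's position from those ranges. The Python sketch below illustrates one way these steps could be realized; it assumes a pinhole-camera distance model and a 2D map of landmark coordinates, neither of which is detailed in the abstract, and all names and values are hypothetical.

import numpy as np

def calibrate_focal_length(known_distance_m, object_height_m, box_height_px):
    # Pinhole-model calibration (assumed scheme): f = d * h_px / H.
    # One reference photo at a known distance fixes the focal length in pixels.
    return known_distance_m * box_height_px / object_height_m

def estimate_distance(focal_px, object_height_m, box_height_px):
    # Range to a detected object from its bounding-box height (pinhole model).
    return focal_px * object_height_m / box_height_px

def trilaterate(landmarks_xy, distances):
    # Least-squares trilateration from >= 3 landmarks with known positions.
    # Linearizes (x - xi)^2 + (y - yi)^2 = di^2 by subtracting the first
    # equation from the others, then solves A p = b for p = (x, y).
    L = np.asarray(landmarks_xy, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (L[1:] - L[0])
    b = d[0]**2 - d[1:]**2 + np.sum(L[1:]**2, axis=1) - np.sum(L[0]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    # Hypothetical landmark map, e.g. chair, fridge, sofa positions in meters.
    landmarks = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
    true_pos = np.array([1.5, 1.0])
    ranges = [np.hypot(*(true_pos - np.array(p))) for p in landmarks]
    print(trilaterate(landmarks, ranges))  # approximately [1.5, 1.0]

The linearized solve requires at least three non-collinear landmarks; with exactly three the system is square, and any additional detected landmarks simply over-determine it, which the least-squares step absorbs.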

DOI

https://doi.org/10.5038/VUJQ7843

