Enabling immersive indoor navigation and control through augmented reality with computer vision

Date

2024


Publisher

IEEE

Abstract

This project integrates computer vision, Augmented Reality (AR), and cloud-based communication to create a real-time 3D map of a robot's surroundings, providing an engaging and immersive control experience for users monitoring and operating indoor robots. Furthermore, combining an image semantic segmentation model with the Microsoft Kinect V2 depth camera introduces a novel method for estimating the distance to a particular object with 1 cm accuracy. This precise depth estimation is a significant benefit for tasks that require accurate object identification, segmentation, and depth information to generate a virtual representation of the robot's real-world environment. We have enabled remote communication with the robot's environment by incorporating AWS cloud services while maintaining minimal delay. The commercial viability of this project can be identified in the wide range of applications that benefit from its easy-to-use interface, remote access capability, and precise object rendering. Examples of such applications include surgical robotics, instructional robotics platforms in classrooms, and automated guided vehicles in warehouses.
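The abstract describes fusing a semantic segmentation mask with the Kinect V2 depth frame to estimate the distance to a segmented object. The paper's actual pipeline is not given here, so the following is only a minimal sketch of one plausible fusion step, assuming a registered 424×512 Kinect V2 depth frame in millimetres (0 marking invalid pixels) and a boolean per-class mask from the segmentation model; the function name `object_distance_m` is illustrative, not from the source.

```python
import numpy as np

def object_distance_m(depth_mm: np.ndarray, seg_mask: np.ndarray) -> float:
    """Estimate the distance (metres) to one segmented object.

    depth_mm: depth frame in millimetres, 0 = invalid pixel (Kinect V2 convention).
    seg_mask: boolean mask of the target class, same shape as depth_mm.
    """
    # Keep only pixels that belong to the object AND carry a valid depth reading.
    valid = seg_mask & (depth_mm > 0)
    if not valid.any():
        raise ValueError("no valid depth pixels inside the object mask")
    # Median is robust to depth noise and mask bleed at object boundaries.
    return float(np.median(depth_mm[valid])) / 1000.0

# Toy example: a 4x4 depth frame with the object occupying a 2x2 patch.
depth = np.zeros((4, 4), dtype=np.uint16)
depth[1:3, 1:3] = [[1500, 1510], [1490, 1500]]
mask = depth > 0
print(object_distance_m(depth, mask))  # 1.5
```

Taking the median over the masked depth pixels is one common way to suppress the edge artefacts that otherwise dominate per-pixel depth at object silhouettes; the project's reported 1 cm accuracy would depend on its own calibration and filtering choices.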
