A Deep Learning Voice-Assisted System for Intelligent Indoor Navigation of the Visually Impaired

Authors

  • Joy Sangeeth Raj.G
  • Pulugari Maheshwari
  • Sakinala Sandhya
  • Nagarthi Shashank Reddy
  • Vuyuru Srilakshmi Priya

DOI:

https://doi.org/10.62652/

Keywords:

Object recognition, computer vision, room categorization, deep learning, assistive technology

Abstract

For the visually impaired, navigating indoor spaces remains a major challenge, limiting their freedom of
movement and autonomy. Guide dogs and white canes are helpful, but they cannot convey complex spatial
information. Navigational aids have come a long way in the last ten years, thanks to developments in computer
vision and artificial intelligence. This research presents an artificial intelligence (AI) system for room and object
detection that may help blind and visually impaired people navigate indoor spaces more easily. Our system
categorizes rooms according to the objects they contain, using a deep learning model, specifically YOLO, for object
recognition. The system also includes real-time audio output and depth estimation, helping users navigate by
measuring the distances of nearby objects. Additionally, it incorporates voice narration to provide verbal
descriptions, warn users of obstacles, and suggest alternative routes. A well-structured dataset optimized for
high-accuracy indoor scene categorization is also presented in this research. The goal of the solution is to
provide visually impaired persons with a practical and easy-to-use device that bridges the gap between existing
navigation aids and fully autonomous support systems.
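The abstract does not describe the implementation, but the core idea, categorizing a room from the objects a YOLO-style detector finds in it, can be illustrated with a minimal sketch. The label-to-room mapping and the voting scheme below are hypothetical placeholders, not the authors' method; a real system would use the paper's trained model and dataset.

```python
from collections import Counter

# Hypothetical mapping from detected object labels to room categories.
# A real deployment would derive this from the paper's indoor-scene dataset.
ROOM_HINTS = {
    "bed": "bedroom", "wardrobe": "bedroom",
    "oven": "kitchen", "refrigerator": "kitchen", "microwave": "kitchen",
    "sofa": "living room", "tv": "living room",
    "toilet": "bathroom", "bathtub": "bathroom",
}

def classify_room(detected_labels):
    """Vote for a room category based on object labels from one camera frame."""
    votes = Counter(ROOM_HINTS[label] for label in detected_labels
                    if label in ROOM_HINTS)
    if not votes:
        return "unknown"
    # The most frequently hinted room wins the vote.
    return votes.most_common(1)[0][0]

# Example: labels a detector might return for a single frame.
print(classify_room(["sofa", "tv", "person"]))  # -> living room
```

In a full pipeline, the returned room label would be passed to a text-to-speech engine for the voice narration the abstract describes, alongside per-object distance estimates from the depth module.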


Published

03-04-2026

How to Cite

A Deep Learning Voice-Assisted System for Intelligent Indoor Navigation of the Visually Impaired. (2026). International Journal of Marketing Management, 14(2), 69-76. https://doi.org/10.62652/