Object avoidance for the visually impaired

Visual impairment entails significant challenges in performing daily activities and can ultimately reduce quality of life. A realistic perception of the environment is important for persons with visual impairments: it provides an intuitive awareness of the physical surroundings, protects against hazards, and facilitates social inclusion. These users require continuous, real-time assistance while moving about, and researchers have long worked on automatic navigation systems to support them in the activities of daily living [1].

This project proposes an indoor navigation system for persons with visual impairment based on computer vision and machine learning. An intelligent decision support system (DSS) built on machine learning will reliably sense the environment through cameras, interpret the information, and suggest personalized navigation decisions to the user. The goal is a standalone navigation system running on an FPGA or Raspberry Pi board together with artificial intelligence (AI) algorithms. The solution is designed for indoor use only (house, office, etc.) and gives the visually impaired person the ability to navigate without any other hardware assistance. A single camera streams the surrounding environment, and the FPGA/Raspberry Pi system assists with navigation through it.

Several approaches exist for this kind of task, such as GPS-based and image-based navigation [2, 3]. This project aims at an image-processing-based navigation system whose commands can be delivered as text, voice, or another format, for example "there is an obstacle 5 meters ahead" or "there is a cat 3 meters to the left". The system should incorporate one or more AI algorithms.
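
As a rough illustration of the intended pipeline (not part of the current code base), the sketch below shows a minimal ROS Kinetic node that consumes camera frames and publishes a spoken-style navigation hint. The topic names (camera/image_raw, navigation_hint), the node name, and the detect_obstacle placeholder are assumptions for illustration only, not this package's actual API.

#!/usr/bin/env python
# Minimal sketch of a navigation-hint node for ROS Kinetic (rospy).
# Topic names and the detection step are illustrative placeholders.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

def detect_obstacle(image_msg):
    """Placeholder for the vision/AI step; a real implementation would run
    an object detector on the frame and estimate distance and direction."""
    return None  # e.g. ("obstacle", 5.0, "ahead")

def on_image(image_msg, hint_pub):
    detection = detect_obstacle(image_msg)
    if detection is not None:
        label, distance, direction = detection
        hint_pub.publish(String("there is a %s %.0f meters %s" % (label, distance, direction)))

if __name__ == "__main__":
    rospy.init_node("blind_navi_hint")  # node name is illustrative
    hint_pub = rospy.Publisher("navigation_hint", String, queue_size=10)
    rospy.Subscriber("camera/image_raw", Image, on_image, callback_args=hint_pub)
    rospy.spin()

In practice the String hint could be fed to a text-to-speech node or any other output format, as described above.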
This package contains submodules, which were added using: $ git submodule add [URL] src/[NAME]
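If the repository was cloned without the --recursive flag, the submodules can usually be fetched afterwards with the standard git command:
$ git submodule update --init --recursive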

Installation

This is meant to be used with ROS Kinetic, so first install that.
Then get this repo by:
$ git clone --recursive https://github.com/byteofsoren/blind_navi.git ~/catkin_ws
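
After cloning, the usual ROS Kinetic workflow is to build and source the catkin workspace; assuming ~/catkin_ws is laid out as a standard catkin workspace, that looks roughly like:
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash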
