Navigation System for Blind People Using Artificial Intelligence

Did you know that, according to the WHO, about 39 million people in the world are blind? Artificial intelligence is one of our key research areas for addressing that challenge. Here, we explain how AI-based grid cell, place cell, and path integration strategies can be used to solve these problems.

Dr. Amit Ray explains how grid cell, place cell, and path integration strategies, combined with artificial intelligence, can be used to design navigation systems for blind people. Here, we discuss the use of AI techniques for automatic navigation.

Navigation for blind people in a complex environment is a challenging task. Visual impairment makes a person dependent on others for daily tasks and chores. This project aims to help blind people and reduce that dependence.

Designing an automated navigation system for blind people is one of the core projects of our Compassionate AI Lab. The project focuses on how the image recognition, voice recognition, and path navigation methods of artificial intelligence can be used to build an automated navigation system for blind people. Artificial intelligence can help blind people in many ways.

The discovery of place cells and grid cells in the hippocampal formation opened a new perspective for understanding the mammalian navigation system. The main aim of this project is to apply grid cell, place cell, and path integration concepts with artificial intelligence to develop automated navigation systems for blind people.

AI Based Navigation System for Blind People

Our objective is to develop an automated navigation system for blind people. Our long-term goal is to create a portable, self-contained system that allows visually impaired individuals to travel through familiar and unfamiliar environments without the assistance of a guide. Currently, the most widespread aids used by visually impaired people are the white cane, a human helper, or a guide dog; however, all of these have limitations.

The goal of the project is to give blind persons the ability to move around in unfamiliar environments, whether indoor or outdoor, through a user-friendly interface. The term blindness covers both people who have no vision at all and people who have low vision. With advances in AI image recognition, voice recognition, and path navigation methods, it becomes easier to deliver directional guidance to blind users.

One aspect of navigation that our brains seem to perform without conscious effort is path integration. Mammals use this process to recalculate their position after every step they take, by accounting for the distance they have traveled and the direction they are facing. Path integration is thought to be the key to the brain's ability to produce a map of its surroundings. With the advancement of artificial intelligence, especially deep reinforcement learning algorithms, I think this is possible.

What is Path Integration?

Path integration is a navigation strategy used by sailors hundreds of years ago to estimate their ship's position from its start location, speed of movement, travel time, and changes of direction whenever visible landmarks were unavailable. Darwin (1873) was the first to propose that animals also use a path integration strategy. Path integration keeps track of the spatial relationship between oneself and the surroundings during movement, and it relies on accurate perception of self-motion: velocity, acceleration, starting point, and so on. Modern neuroscience research has revealed that many species can perform path integration, including bats, rats, bees, ants, spiders, and dogs.
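The idea can be sketched as simple dead reckoning: accumulate each movement step as a displacement vector, and the running sum gives both the current position and the "home vector" pointing back to the start. This is a minimal illustration of the principle, not the lab's implementation; the function name and step format are hypothetical.

```python
import math

def path_integrate(steps):
    """Dead reckoning: accumulate (distance, heading) steps into a position.

    Each step is (distance, heading_in_radians). Returns the current
    position (x, y) and the home vector (distance, heading) pointing
    back to the start -- the quantity an ant uses to run straight home.
    """
    x = y = 0.0
    for dist, heading in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    home_distance = math.hypot(x, y)
    home_heading = math.atan2(-y, -x)  # direction back toward the origin
    return (x, y), (home_distance, home_heading)

# An outbound journey: 3 m east, then 4 m north.
position, home = path_integrate([(3.0, 0.0), (4.0, math.pi / 2)])
# position is approximately (3, 4); the home vector has length 5
```

Note that real path integration accumulates error with every step, which is why animals also rely on landmark cues to correct the estimate.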

The path integration strategy is also a general way to enhance the intelligence of machines. The networks we use are inspired by the neuroscience of the mammalian brain. Navigating to a destination, whether you are a human, a rat, or a bat, requires a complex set of calculations and interactions among brain cells.

The 2014 Nobel Prize in Physiology or Medicine was awarded for the discovery of place cells and grid cells in the brain. These two types of cells, along with head direction cells, are used for navigation. To navigate, the brain needs to integrate information about location, direction, and distance into a coherent map-like representation. Scientists have observed that the medial entorhinal cortex plays the central role in navigation.

Navigation Systems of Ants and Honeybees

Desert ants and honeybees travel large distances in search of food. Their navigation systems are capable of keeping track of the distance and direction of travel throughout the outbound journey, so that they can return home quickly and without losing their way. They meet this challenge despite their minuscule brains and limited computational capacity. At any moment during the journey, they compute a path integration (global) vector and a landmark guidance (local) vector, where the length of each vector is proportional to the certainty of that estimate. The sum of the global and local vectors indicates the navigator's optimal directional estimate. They use these global coordinates to remember the position of food sites with respect to their nests.
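The certainty-weighted combination described above can be sketched directly: scale each direction vector by the confidence of its estimate and sum them. The function and weighting scheme here are illustrative assumptions, not a model taken from the insect literature.

```python
import math

def combine_estimates(global_vec, global_certainty, local_vec, local_certainty):
    """Sum of certainty-weighted direction vectors.

    Each vector's length is scaled by the certainty of that estimate
    (path integration = global, landmark guidance = local); the sum
    gives the combined heading estimate.
    """
    gx, gy = global_vec[0] * global_certainty, global_vec[1] * global_certainty
    lx, ly = local_vec[0] * local_certainty, local_vec[1] * local_certainty
    combined = (gx + lx, gy + ly)
    heading = math.atan2(combined[1], combined[0])
    return combined, heading

# Path integration says "east" with high certainty; a landmark weakly says "north".
combined, heading = combine_estimates((1.0, 0.0), 0.8, (0.0, 1.0), 0.2)
# combined is (0.8, 0.2): the heading leans mostly east, slightly north
```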

What is a Place Cell?

Many species of mammals can keep track of spatial location even in the absence of visual, auditory, olfactory, or tactile cues, by integrating their movements. Place cells are neurons found in the hippocampus of rats and mice. A place cell fires when the animal is near a familiar location in its environment, for example a particular corner of a room. When a rat runs faster, its place cells fire more rapidly. Place cells work with other types of neurons in the hippocampus and surrounding regions to perform spatial processing. In effect, place cells signal "you are here".
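A common way to model a place cell computationally is as a Gaussian "place field": the firing rate peaks at the cell's preferred location and falls off with distance. This is a textbook-style sketch with illustrative parameter values, not our system's actual tuning.

```python
import math

def place_cell_rate(position, field_center, field_width=0.2, max_rate=20.0):
    """Gaussian place-field model of a place cell's firing rate (Hz).

    The rate is maximal when the animal is at the field centre and
    decays smoothly with squared distance from it.
    """
    d2 = sum((p - c) ** 2 for p, c in zip(position, field_center))
    return max_rate * math.exp(-d2 / (2.0 * field_width ** 2))

at_center = place_cell_rate([0.5, 0.5], [0.5, 0.5])  # peak rate: 20.0
far_away = place_cell_rate([0.0, 0.0], [0.5, 0.5])   # nearly silent
```

A population of such cells with different field centres tiles the environment, so the pattern of activity across the population encodes the animal's position.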

What is a Grid Cell?

Grid cells are found in the medial entorhinal cortex of the brain. They are neurons that fire when a freely moving animal traverses a set of small regions (firing fields) that are roughly equal in size and arranged in a periodic triangular array. Grid cells have attracted attention because the crystal-like structure underlying their firing fields is not imported from the outside world, as in sensory systems, but is created within the brain itself. Each grid cell fires at fixed points, producing a repeating triangular activity pattern, and this regular pattern is what distinguishes grid cells from other spatially tuned cells. Grid cells fire in repeated discrete locations as the animal moves around its environment, forming the vertices of a hexagonal lattice that covers the environment. The orientations of grid firing patterns are clustered within and between modules. The periodic firing patterns of grid cells, together with place cells, provide the framework for the path integration (global) vector and the landmark guidance (local) vector.
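The triangular firing lattice is often modelled as the sum of three cosine plane waves whose directions differ by 60 degrees; their interference produces the hexagonal pattern of firing fields. The sketch below uses this standard three-cosine model with illustrative parameters (grid spacing, phase), not measured cell properties.

```python
import math

def grid_cell_rate(x, y, spacing=0.5, phase=(0.0, 0.0)):
    """Three-cosine model of a grid cell's normalised firing rate.

    Three plane waves at 0, 60, and 120 degrees sum to a periodic
    triangular (hexagonal) lattice of firing fields; the result is
    rescaled to lie in [0, 1].
    """
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for the given spacing
    dx, dy = x - phase[0], y - phase[1]
    total = 0.0
    for theta in (0.0, math.pi / 3, 2 * math.pi / 3):
        total += math.cos(k * (dx * math.cos(theta) + dy * math.sin(theta)))
    return (total + 1.5) / 4.5  # sum ranges over [-1.5, 3]; map to [0, 1]

peak = grid_cell_rate(0.0, 0.0)        # 1.0 at a lattice vertex
between = grid_cell_rate(0.25, 0.05)   # lower rate between fields
```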

What is a Head Direction Cell?

Head direction cells give the sense of direction at any moment; they are known as the neuronal "compass needle". Head direction cells in the animal's limbic system carry information about the direction the head is pointing in the horizontal plane. They form an internal compass signaling head orientation even without visual landmarks. These HD cells are also rigidly coupled with the hippocampal place cells.

Artificial Intelligence and Grid Cells

We are experimenting with various artificial neural network models for designing the navigation system. Currently, for the automated navigation system for blind people, we have focused on a Deep Recurrent Q-Network (DRQN) architecture, which combines deep Q-learning with recurrent neural networks. Recurrent architectures have been used in tasks with longer-term dependencies between data points. We examined several architectures for the DRQN; one of them places an RNN on top of a DQN, and it is showing promising results. The principles and algorithms are discussed in the book Compassionate Artificial Superintelligence AI 5.0.
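The "RNN on top of a DQN" idea can be sketched as a forward pass in plain NumPy: a recurrent hidden state is carried across the observation sequence, and Q-values for each action are read out from the final hidden state. The weights, sizes, and function name below are illustrative placeholders, not the lab's trained model.

```python
import numpy as np

def drqn_q_values(obs_sequence, params):
    """Minimal DRQN forward pass: an RNN over observations feeding a Q head.

    obs_sequence has shape (time, obs_dim). The hidden state integrates
    the history step by step, so the Q-values can depend on where the
    agent has been, not just on the latest observation.
    """
    W_in, W_rec, W_out = params
    h = np.zeros(W_rec.shape[0])
    for obs in obs_sequence:
        h = np.tanh(W_in @ obs + W_rec @ h)  # recurrent state update
    return W_out @ h                         # one Q-value per action

rng = np.random.default_rng(0)
obs_dim, hidden, actions = 16, 32, 4
params = (rng.standard_normal((hidden, obs_dim)) * 0.1,
          rng.standard_normal((hidden, hidden)) * 0.1,
          rng.standard_normal((actions, hidden)) * 0.1)
q = drqn_q_values(rng.standard_normal((8, obs_dim)), params)
# q has shape (4,); the agent picks the action with the largest Q-value
```

In practice the same computation would be expressed with an LSTM layer in TensorFlow and trained with the Q-learning loss, but the structure (encoder, recurrent memory, per-action Q head) is the same.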

Reinforcement learning

Reinforcement learning (RL) is a machine learning paradigm in which an "agent" interacts with an "environment". The agent's "policy", i.e. its choice of actions in response to the environment's states and rewards, is updated to increase the reward it collects. Neural-network-based reinforcement learning techniques have been successfully applied across a vast variety of applications.

A reinforcement learning agent interacts with its environment in discrete time steps. At each time t, the agent receives an observation o_t, which typically includes the reward r_t. It then chooses an action a_t from the set of available actions, which is sent to the environment. The environment moves to a new state s_{t+1}, and the reward r_{t+1} associated with the transition (s_t, a_t, s_{t+1}) is determined. The goal of a reinforcement learning agent is to maximize the cumulative reward it collects.
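The observe-act-reward-update cycle described above can be demonstrated with tabular Q-learning on a toy environment: a five-cell corridor where the agent starts at cell 0 and is rewarded for reaching cell 4. The environment and parameters are invented for illustration only.

```python
import random

def q_learning_corridor(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning on a 5-cell corridor (goal at cell 4).

    Each step follows the RL cycle: observe state s, pick action a
    (epsilon-greedy), receive reward r and next state s', then update
    Q(s, a) toward r + gamma * max_a' Q(s', a').
    """
    n_states, actions = 5, (-1, +1)  # move left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != 4:
            if random.random() < epsilon:
                a = random.choice(actions)            # explore
            else:
                a = max(actions, key=lambda b: Q[(s, b)])  # exploit
            s_next = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s_next == 4 else 0.0           # reward on reaching the goal
            best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s_next
    return Q

random.seed(0)
Q = q_learning_corridor()
# After training, moving right has a higher Q-value than moving left
# in every non-goal cell, so the learned policy walks straight to the goal.
```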

Deep Recurrent Learning

A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed graph along a sequence. This allows it to exhibit dynamic temporal behavior for a time sequence. Unlike feedforward neural networks, RNNs can use their internal state (memory) to process sequences. 

Implementing DRQN for the Navigation of Blind People

We have experimented with different algorithms for place cells, grid cells, head direction cells, and path integration by implementing them in a Deep Recurrent Q-Network (DRQN) architecture in TensorFlow and Caffe. By allowing temporal integration of information, the agent learns a sense of spatial location that augments its observation at any moment and allows it to receive a larger reward in each episode.


The project is currently in the development stage. Along with the AI-based navigation system for blind people, we want to use temperature, IR, ultrasonic, touch, and proximity sensors to collect data and refine the model for better guidance. Voice recognition and image recognition are the other two parts of our future integrated system. There are several open issues; one of the main ones is obtaining a sufficient amount of realistic training data.

I am really happy that many people have shown interest in this project, and I hope we will overcome its technical and financial obstacles, make it successful, and bring benefit to the people who need it.