Brain-Computer Interface and Compassionate Artificial Intelligence to Serve Humanity


Dr. Amit Ray

The purpose of Compassionate AI is to reduce suffering in society and help humanity. We focus on developing low-cost, AI-based BCI interfaces to help disabled people.

Artificial Intelligence combined with the Brain-Computer Interface (BCI), also called the Brain-Machine Interface (BMI), is a fast-growing emerging technology for reducing suffering in society. Here, Dr. Amit Ray explains how, with the advancement of artificial intelligence and the exploration of new mobile bio-monitoring devices, earphones, neuroprosthetics, and wireless wearable sensors, it is becoming possible to monitor the thoughts and activities of brain neurons and serve humanity.

This research will be immensely beneficial for physically and mentally challenged people, as well as for people suffering from post-traumatic stress disorder (PTSD) and other mental disorders or brain conditions. Over the last five years, technologies for non-invasive transmission of information from brains to computers have developed considerably.

Brain-Computer Interface and Compassionate AI

Here, researchers focus on building a direct communication link between the human brain and smartphones, earphones, computers, or other devices. With BCI, the mind can speak silently to a smartphone or another device. Recent advances in neuroprosthetics link the human nervous system to computers, providing unprecedented control of artificial limbs and restoring lost sensory function.

BCI establishes two-way communication between the brain and the machine: one direction is the brain-computer interface proper, and the other is the computer-brain interface (CBI). BCI aims to create new communication channels for disabled or elderly persons using their brain signals.

BCI research has also created hope of relieving fear, disturbing thoughts, negative feelings, and bad dreams in ordinary people. With AI, a BCI could send signals to the brain that combine the rapport-building skills of a human caregiver with feelings of love, care, protection, forgiveness, and safety. By conveying a welcoming expression and posture, and by being attentive, loving, and responsive, BCI can help many patients and people suffering from mental disorders. AI-based BCI has tremendous scope to relieve human suffering.

However, the integration of these emerging technologies with brain-computer interfaces raises many ethical concerns. In particular, research on mind upload or brain upload (often called "mind copying" or "mind transfer") raises serious privacy, safety, and ethical questions.

Link Between Compassionate AI and BCI

With the advancement and exploration of new mobile bio-monitoring devices, earphones, neuroprosthetics, and wireless wearable intelligent sensors, it is becoming possible to monitor the activity of neurons in the human brain. Small motion sensors detect brain activity and sync with your mobile devices. Non-invasive devices are used to measure brain activity, people's emotions, their movements, their interactions with others, and their biometric changes under different conditions. The role of compassionate AI is to help the sick and the physically and mentally challenged people who are suffering. With BCI, intelligent sensors capture the brain signals, and compassionate AI interprets them and takes a suitable, kind action to relieve the pain.

Types of Brain Computer Interface

Brain-computer interfaces can be classified into three main groups: non-invasive, semi-invasive, and invasive.

In invasive BCI techniques, special devices are used to capture brain signals. Invasive BCI devices are implanted directly into the human brain through critical surgery, and invasive methods in humans remain severely limited in their practical usefulness. In semi-invasive BCI, devices are placed inside the skull but on top of the brain. Non-invasive BCI devices are considered the safest and lowest-cost type; however, they pick up weaker brain signals than other BCI devices because of the obstruction of the skull. Signal detection is done by electrodes placed on the scalp.

Brain Signal Acquisition Instruments:

Functional magnetic resonance imaging (fMRI), positron emission tomography (PET), magnetoencephalography (MEG), and scalp electroencephalography (EEG) are commonly used for non-invasive BCI studies. However, they suffer from bulkiness, high cost, high sensitivity to head movements, limited spatial or temporal resolution, and low signal quality.

Near-infrared spectroscopy (NIRS), on the other hand, is a novel optical imaging modality for noninvasive, continuous monitoring of tissue oxygenation and regional blood flow in the brain. NIRS is not only non-invasive and safe but also portable and affordable, and it requires only a short setup time, which makes it more user-friendly. Wireless NIRS systems are also available, enabling the monitoring of brain activity in moving subjects, such as people walking or running.

Functional NIRS (fNIRS) is a relatively new variant that uses near-infrared light (usually of 650-1000 nm wavelength) to measure concentration changes of oxygenated hemoglobin (HbO) and deoxygenated hemoglobin (HbR). Compared to fMRI, fNIRS offers lower spatial resolution but higher temporal resolution.
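To illustrate how fNIRS turns optical readings into HbO/HbR concentration changes, here is a minimal sketch of the modified Beer-Lambert law. The wavelengths (760/850 nm), extinction coefficients, pathlength, and differential pathlength factor below are illustrative placeholders, not calibrated values from any instrument.

```python
import numpy as np

# Illustrative extinction coefficients [eps_HbO, eps_HbR] (1/(mM*cm))
# at two wavelengths in the 650-1000 nm fNIRS window; real values
# come from published absorption spectra.
E = np.array([[1.49, 3.84],   # ~760 nm
              [2.53, 1.80]])  # ~850 nm

def mbll(delta_od, path_length_cm=6.0, dpf=6.0):
    """Modified Beer-Lambert law: optical-density changes measured at
    two wavelengths -> concentration changes of HbO and HbR (mM)."""
    effective_path = path_length_cm * dpf  # differential pathlength
    # Solve E @ [d_hbo, d_hbr] = delta_od / effective_path
    return np.linalg.solve(E, delta_od / effective_path)

delta_od = np.array([0.012, 0.020])  # toy OD changes at the two wavelengths
d_hbo, d_hbr = mbll(delta_od)
```

In practice a third wavelength or regularized least squares is often used, but the two-wavelength 2x2 solve above is the simplest well-posed form.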

Brain Signals

The human brain contains, on average, about 86 billion nerve cells called neurons, each individually linked to other neurons through connectors called axons and dendrites. Signals at the junctions (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. Every time we think, move, feel, or remember something, our neurons are at work, and the brain generates a huge amount of neural activity. That work is carried out by small electric signals that move from neuron to neuron at speeds of up to about 250 mph. By "mind" we mean "the thinking machine": the set of processes carried out by the neurons of the brain.

These signals are generated by differences in electric potential carried by ions across the membrane of each neuron. There is a plethora of signals that can be used for BCI; they are divided into two classes: spikes and field potentials. Although the paths the signals take are insulated by a sheath called myelin, some of the electric signal escapes. Scientists can detect those signals, interpret what they mean, and use them to direct a device of some kind.
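The split between the two classes is usually made by frequency: field potentials occupy the slow band (roughly below 300 Hz), while spike waveforms occupy roughly 300-3000 Hz. A minimal sketch of that separation, using a crude FFT band-pass on a synthetic recording (the band edges, 10 kHz sampling rate, and toy signal are illustrative assumptions):

```python
import numpy as np

def fft_bandpass(signal, fs, lo, hi):
    """Zero out FFT bins outside [lo, hi] Hz -- a crude band-pass used
    only to illustrate splitting one recording into a field-potential
    band and a spike band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.fft.rfft(signal)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 10_000                 # 10 kHz sampling of a 1-second toy trace
t = np.arange(fs) / fs
# slow "field potential" component + fast "spike band" component
raw = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

lfp    = fft_bandpass(raw, fs, 1, 300)     # field-potential band
spikes = fft_bandpass(raw, fs, 300, 3000)  # spike band
```

Real pipelines use causal IIR/FIR filters rather than FFT masking, but the frequency logic is the same.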


Convolutional Neural Networks and BCI:

A Convolutional Neural Network (CNN) is a type of artificial neural network inspired by the visual cortex. It can learn the appropriate features from the input data automatically by optimizing the weight parameters of each filter through forward and backward propagation so as to minimize the classification error.

The human auditory cortex is arranged in a hierarchical organization, much like the visual cortex. In a hierarchical system, a series of brain regions performs different types of computation on sensory information as it flows through the system. It is well documented that the visual cortex has this type of organization: earlier regions, known as the primary visual cortex, respond to simple features such as color or orientation, while later stages enable more complex tasks such as object recognition.

One advantage of deep learning techniques is that they require minimal pre-processing, because optimal settings are learned automatically. A major advantage of CNNs is that feature extraction and classification are integrated into a single structure and optimized jointly. fNIRS time-series data from human subjects are input to the CNN. Because the convolution is performed in a sliding-window manner, the CNN's feature extraction retains the temporal information of the fNIRS time series. The typical structure of a CNN is given below.
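The sliding-window feature extraction just described can be sketched in a few lines of NumPy. This toy version uses random data and untrained random filters (illustrative assumptions, not a trained model) to show how 1-D convolution, ReLU, and pooling preserve the temporal axis of an fNIRS channel:

```python
import numpy as np

def conv1d(x, kernels, stride=1):
    """Valid 1-D convolution in the sliding-window manner: each kernel
    slides along the time axis, so the feature maps keep the temporal
    order of the input series."""
    k = kernels.shape[1]
    steps = (len(x) - k) // stride + 1
    windows = np.stack([x[i * stride:i * stride + k] for i in range(steps)])
    return windows @ kernels.T  # shape: (steps, n_kernels)

def relu(z):
    return np.maximum(z, 0.0)

def max_pool(feature_maps, size=2):
    """Downsample each feature map by taking the max over windows."""
    steps = feature_maps.shape[0] // size
    return feature_maps[:steps * size].reshape(steps, size, -1).max(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)           # one toy fNIRS channel
kernels = rng.standard_normal((4, 5))  # 4 filters of width 5
features = max_pool(relu(conv1d(x, kernels)))  # shape (48, 4)
```

In a full classifier these pooled features would feed one or more dense layers, and the kernel weights would be learned by backpropagation rather than drawn at random.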

Reinforcement Learning and BCI:

Reinforcement learning is a framework for learning directly from the interaction between an agent and its environment, and thereby achieving goals. An agent is an entity that has cognitive skills, the ability to solve problems, and the ability to communicate with the outside environment. In the reinforcement learning framework, the agent is a learner and decision-maker interacting with the environment, which is everything outside the agent. The agent chooses an action; the environment responds to the action, presents a new situation to the agent, and returns a reward. The ultimate goal of the agent is to maximize the sum of rewards over the long term.

Reinforcement learning frameworks use either table-based algorithms (Q-learning, SARSA, SARSA-λ) or neural-network-based algorithms (Q-NN, NFQ, DQN, DDQN). The advantage of neural networks over table-based methods is that they give reinforcement learning superior generalization and convergence in real-world applications.
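The table-based side of this family can be illustrated with classic Q-learning on a toy five-state chain; the environment, rewards, and hyperparameters below are invented purely for illustration:

```python
import random

# Toy 5-state chain: the agent starts at state 0 and earns reward 1
# only on reaching the terminal state 4.
N_STATES, ACTIONS = 5, (-1, +1)   # move left / move right
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                       # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # Q-learning update: move Q(s,a) toward r + gamma * max_b Q(s',b)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy after training: always move right toward the reward
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
```

SARSA differs only in bootstrapping from the action actually taken next rather than the greedy maximum; the neural-network variants (DQN and relatives) replace the Q table with a function approximator.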

Neuroscience of Reinforcement Learning:

The basal ganglia have been identified as the primary brain area in which reinforcement learning (RL) might occur, and the reward signal of RL is mostly rooted in dopamine signaling. The primary motor cortex is indirectly influenced by the major dopamine reward pathways. Anatomically, the prefrontal cortex (PFC) can be divided into four areas: dorsolateral (DLPFC), ventrolateral (VLPFC), dorsomedial (DMPFC), and ventromedial (VMPFC). The DLPFC has the largest number of connections with the sensory cortex, while the largest share of DMPFC connections is with motor areas. The DLPFC receives abundant dopaminergic input from different areas of the brain and acts as a working-memory buffer. Here, dopamine provides a motivating signal that increases access for reward-related stimuli.
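This dopamine account is commonly formalized as a temporal-difference (TD) reward-prediction error, delta = r + gamma*V(s') - V(s): phasic dopamine firing is modeled as large when a reward is unexpected and shrinking as the prediction improves. A minimal sketch (state names and learning rate are illustrative):

```python
# TD error for a single cue repeatedly followed by reward 1.0.
gamma = 0.9
V = {"cue": 0.0}   # learned value of the cue state

def td_update(V, s, r, s_next, alpha=0.1):
    """One TD(0) step; returns the prediction error delta."""
    delta = r + gamma * V.get(s_next, 0.0) - V[s]
    V[s] += alpha * delta
    return delta

# The first unexpected reward produces a full-sized error of 1.0;
# as V("cue") grows, the error at reward time decays toward zero.
errors = [td_update(V, "cue", 1.0, None) for _ in range(50)]
```

The decaying `errors` sequence mirrors the classic finding that dopamine neurons respond strongly to unpredicted rewards and progressively less as the reward becomes predicted.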

Actor-critic (AC) methods are among the most popular in reinforcement learning, and the AC framework has many links with neuroscience and learning. Reinforcement learning is deeply connected with neuroscience: in particular, research has shown that the basal ganglia are involved in Pavlovian learning, the domain of unconscious memories such as skills and habits.
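A minimal actor-critic loop can be sketched on a toy two-armed bandit; the arms, reward probabilities, and learning rates below are illustrative assumptions. The critic maintains a value estimate, the actor maintains action preferences, and the same TD-style error trains both, mirroring the basal-ganglia account above:

```python
import math, random

random.seed(1)
H = [0.0, 0.0]            # actor: action preferences
V = 0.0                   # critic: value estimate of the single state
alpha_actor, alpha_critic = 0.1, 0.1
true_reward = [0.2, 0.8]  # arm 1 pays off more often (toy setup)

def softmax(h):
    z = [math.exp(x) for x in h]
    s = sum(z)
    return [x / s for x in z]

for _ in range(2000):
    p = softmax(H)
    a = 0 if random.random() < p[0] else 1
    r = 1.0 if random.random() < true_reward[a] else 0.0
    delta = r - V                      # prediction error (single state)
    V += alpha_critic * delta          # critic update
    # actor update: softmax policy-gradient step scaled by the error
    H[a] += alpha_actor * delta * (1 - p[a])
    for b in range(2):
        if b != a:
            H[b] -= alpha_actor * delta * p[b]
```

After training, the preference for the better arm dominates and the critic's value settles near the expected reward of the learned policy. In the neuroscience analogy, the critic is often mapped to ventral and the actor to dorsal striatum, with `delta` playing the role of the dopamine signal.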


The very aim of BCI is to translate brain activity into a command for a computer. One of the biggest problems in BCI research is the non-stationarity of brain signals, which makes it difficult for a classifier to find reliable patterns and results in poor classification performance. Combining an RL agent with a CNN or other feedback approaches could improve the learning performance of BCI control and reduce training time.

BCI technology is improving very fast, giving us ways and means to record and distinguish the signals of brain neurons. BCI with compassionate AI can provide disabled people with communication, environmental control, and movement restoration. It can allow paralyzed people to control prosthetic limbs with their minds, and it can transmit auditory data to the mind of a deaf person, allowing them to hear. It also has many non-medical applications. However, despite its initial success, brain-computer interfacing still needs to overcome many challenges.

