Top 10 Limitations of Artificial Intelligence and Deep Learning

Artificial Intelligence (AI) has provided remarkable capabilities and advances in image understanding, voice recognition, face recognition, pattern recognition, natural language processing, game playing, military applications, financial modeling, language translation, and search engine optimization. In medicine, deep learning is now one of the most powerful and promising tools of AI, one that can enhance every stage of patient care, from research, omics data integration, combating antibiotic-resistant bacteria, and drug design and discovery, to diagnosis and selection of appropriate therapy. It is also the key technology behind self-driving cars.

However, the deep learning algorithms of AI have several built-in limitations. To utilize the full power of artificial intelligence, we need to know its strengths and weaknesses, and the ways to overcome those limitations in the near future.

Now, AI-supported messaging apps and voice-controlled chatbots are helping people with deep-space communications and customer care, and are taking some of the burden off medical professionals for easily diagnosable health concerns or quickly solvable health-management issues, among many other applications. However, many obstacles remain, and a number of issues are still unsolved.

Even with so many successes and promising results, its full application is limited, mainly because present-day AI has no common sense about the world or about human psychology. At present, in complex application areas, one part of the problem is solved by the AI system and the other part is solved by a human, an arrangement often called a human-assisted AI system. The challenges are mostly in large-scale application areas such as drug discovery, multi-omics data integration, assisting elderly people, new material design and modeling, computational chemistry, quantum simulation, and aerospace physics.

This article explains the power and challenges of current AI technologies and learning algorithms. It also provides directions for overcoming the limits of AI technologies to achieve higher-level learning capabilities.

Top 10 Limitations of Artificial Intelligence and Deep Learning

The key limitations and challenges of present-day Artificial Intelligence systems are: 1) lack of common sense, 2) lack of explanation capability, 3) lack of feeling for human emotions, pains, and sufferings, 4) inability to do complex future planning, 5) inability to handle unexpected circumstances and boundary situations, 6) lack of context-dependent learning, that is, inability to choose its own learning algorithms based on the situation, 7) lack of self-planning about the best topology structure to use, 8) lack of self-adjustment of the learning hyperparameters, 9) lack of multi-domain integrated learning, and 10) lack of subjective thinking.

There are two types of challenges in present-day Artificial Intelligence: soft challenges and hard challenges. In this article I will mainly discuss the soft challenges. The details of the soft challenges are as follows:

1. Lack of High Quality Datasets

A model can only be as good as the relevant information in its dataset. Both the breadth and the depth of the training data for a particular application are essential, but it is frequently difficult to get real-life data due to privacy concerns, record identification concerns, and non-availability of data. Moreover, most of the experimental datasets that are available are good for concept validation, prototype development, academic research, and experimentation, but they are often far from meeting the requirements of exhaustive data analysis and deep discovery in real-life situations. Even a perfect model is limited by the quality and magnitude of the signal in the dataset on which it is trained.
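
As a minimal illustration, a basic data-quality audit can reveal how far a dataset falls short before any model is trained. The sketch below uses pandas on a tiny hypothetical patient table; the column names and values are purely illustrative.

```python
import pandas as pd

# Hypothetical patient dataset; column names and values are illustrative only.
df = pd.DataFrame({
    "age": [34, 51, None, 29, 51],
    "blood_pressure": [120, 140, 135, None, 140],
    "diagnosis": ["healthy", "sick", "sick", "healthy", "sick"],
})

# Fraction of missing values per column: gaps the model cannot learn from.
print(df.isna().mean())

# Exact duplicate records can silently inflate the apparent dataset size.
print("duplicates:", df.duplicated().sum())

# Class balance: a heavily skewed label column limits what any model can learn.
print(df["diagnosis"].value_counts(normalize=True))
```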

2. Lack of Depth in Training Dataset

The training data for deep learning must cover both the vertical and the horizontal depth of real-life conditions. Although it is difficult to determine exactly how much data and training a deep neural network requires, it is generally agreed that large datasets are needed to reach acceptable levels of accuracy and performance. Even then, how the system will behave in boundary conditions is not predictable. Moreover, the newer techniques of extrapolated, synthetically modified data often do not serve the purpose, and algorithms must be trained separately for each domain, condition, and situation.
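
To see why synthetic augmentation often falls short, consider that it typically perturbs existing samples rather than adding genuinely new coverage. The NumPy sketch below, on hypothetical data, shows that jittered points stay close to the original distribution and leave boundary conditions uncovered.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training samples: 100 points drawn from one narrow region.
X = rng.normal(loc=0.0, scale=1.0, size=(100, 5))

# A common augmentation: jitter existing samples with small Gaussian noise.
X_aug = X + rng.normal(scale=0.05, size=X.shape)

# The augmented data barely extends the range of the original data,
# so rare and boundary conditions remain uncovered.
print("original range: ", X.min(), X.max())
print("augmented range:", X_aug.min(), X_aug.max())
```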

3. Lack of Explanation and Interpretation Ability

Even though AI methods show excellent results, on numerous occasions it is difficult, or mostly impossible, to explain the technical and logical basis of the system's decisions. Because of the black-box nature of current deep learning techniques, such systems are unable to explain the logic behind the conclusions or results they have reached. Current AI is also not capable of abstracting concepts from limited experience or transferring knowledge between domains.
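
One partial remedy, sketched below, is post-hoc attribution such as permutation feature importance, here via scikit-learn on synthetic data. It ranks input features by their influence on accuracy, but it still does not expose the model's internal logic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy: a rough,
# model-agnostic signal of which inputs the black box relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```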

The General Data Protection Regulation (GDPR) states that data subjects have the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects. Individuals have the right to know meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

4. Problems of Over-fitting and Under-fitting

One of the problems with machine learning, including deep learning, is overfitting. Overfitting occurs when the trained model fits the training data well but does not generalize to unseen cases. This becomes more apparent when the training sample size is small.

Assessment of the training curve can be used to check for overfitting. If the loss is similar for both the validation and the training datasets, the model is well fitted. If there were overfitting, the loss on the validation data would be much greater than the loss on the training data. Dropout, model regularization, and other strategies help to overcome this issue, but in real-life, large-scale applications it remains a crucial issue.
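
As a minimal sketch of these strategies (assuming Keras is available; the layer sizes and synthetic data are arbitrary), dropout plus early stopping on the validation loss is a standard first line of defense:

```python
import numpy as np
from tensorflow import keras

# Synthetic data standing in for a real training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),  # randomly silence units to regularize
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stop as soon as validation loss starts diverging from training loss,
# the signature of overfitting on the training curve.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
model.fit(X, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```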

5. Model Complexity Issues

Machine learning needs more and more powerful computers to perform more and more operations in fractions of a second. In artificial intelligence, the computer is normally not programmed with explicit logic but works mostly on the principle of trial and error: learning is achieved through an incremental process of huge numbers of trial-and-error calculations. In AI, the computational problems fall into many complexity classes. As data grows at a double-exponential rate, present-day AI systems face many limitations in terms of processing speed, memory, and communication, particularly in the areas of drug discovery, multi-omics data integration, new material design and modeling, computational chemistry, quantum simulation, and aerospace physics.
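
As a rough illustration of how model cost scales, the parameter count of a fully connected network grows with the product of adjacent layer widths. The small sketch below just counts weights for two hypothetical architectures; the widths are arbitrary.

```python
# Parameter count of a fully connected network: for each pair of adjacent
# layers, weights = fan_in * fan_out, plus fan_out bias terms.
def dense_param_count(layer_widths):
    total = 0
    for fan_in, fan_out in zip(layer_widths, layer_widths[1:]):
        total += fan_in * fan_out + fan_out
    return total

# Doubling the hidden width roughly quadruples the hidden-layer weights.
print(dense_param_count([1024, 512, 512, 10]))    # ~0.8M parameters
print(dense_param_count([1024, 1024, 1024, 10]))  # ~2.1M parameters
```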

6. Lack of Reproducibility of Results

In medical and other scientific research, credibility is of utmost importance, and transparency and reproducibility of results are vital. Reproducibility should also hold under changed conditions of measurement. With more and more complex studies, advanced statistics, and constant pressure on researchers to publish, there is growing concern about scientific transparency and the challenge of irreproducible research.

There is a lack of standardization, particularly regarding data interchange, data manipulation, and reproducibility of results. One major stumbling block to the advancement of result validation has been the lack of a standard test set agreed upon and used by the entire community. Researchers often significantly process and manipulate the data before using it as input for learning programs; there have even been recent reports of hundreds of peer reviewers manipulating citations. There is a need for standardization of the workflow, and for validation and testing of AI systems that can be divided into a series of standard protocols.
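
A small but concrete step toward reproducibility is pinning every source of randomness in the pipeline and reporting it. A minimal sketch follows, using NumPy and scikit-learn on synthetic data; the seed value itself is arbitrary, the point is that it is fixed and disclosed.

```python
import random
import numpy as np
from sklearn.model_selection import train_test_split

SEED = 2024  # arbitrary value; what matters is that it is fixed and reported

# Pin every source of randomness the pipeline touches.
random.seed(SEED)
np.random.seed(SEED)

X = np.random.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)

# A seeded split makes the train/test partition itself reproducible,
# so reported results can be re-derived by other researchers.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=SEED)
print(X_tr.shape, X_te.shape)
```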

7. Legal and Ethical Issues

Who is to blame if a smart algorithm makes a mistake and does not spot a cancerous nodule on a lung X-ray? To whom can someone turn when the system comes up with a false prediction? Who will build in the safety features so that machines will not turn on humans? What will be the rules and regulations for deciding on safety?

There could be legal and ethical issues regarding the use and commercial development of deep learning based systems, since the performance of a system is highly dependent on the quality of its data. For example, the use of AI in health care raises many ethical issues, including: the potential for machines to make erroneous decisions; the question of who is responsible when algorithms are used to support decision-making; difficulties in validating the outputs of machine learning systems; inherent biases in the data used to train the systems; ensuring the protection of potentially sensitive data; securing public trust in the development and use of AI technologies; effects on people's sense of dignity and social isolation in care situations; effects on the roles and skill requirements of healthcare professionals; and the potential for AI to be used for malicious purposes.

8. Lack of Integration: A Synergistic Approach

Integration of AI techniques at various levels of decision-making is key to success. For example, in healthcare, the dynamics of multiple diseases, the effects of multiple drugs, and the impact of environments and cultures must be studied and included in the machine learning algorithms. Different types of medical data have their own predictive value, representative sensitivity, prediction rates, and weights. Each type of data (basic blood test, basic urine test, MRI, electroencephalogram, electrocardiogram, genome, transcriptome, microbiome, etc.) and their combinations have relevant value, depending on the quality of the medical records and their biological significance for a given disease condition.

Various patterns reflecting changes in a patient's condition are easier to read when the doctor works with complex information that presents the patient's health state on different levels for the current period of time. Normally, changes in patient condition should be incorporated in AI system testing and validation. Some data types, such as images, audio, and video, can also have substantial predictive value for medical conditions. The evaluation methods for testing the performance of each individual technique, as well as of the integrated techniques, require much further development. One simple integration scheme is sketched below.
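
As a hedged sketch of one such scheme, late fusion combines per-modality predictions with weights reflecting each data type's predictive value. The modalities, probabilities, and weights below are purely illustrative, not calibrated clinical values.

```python
# Late-fusion sketch: each modality contributes a risk probability, weighted
# by its (hypothetical) predictive value for the condition in question.
modality_predictions = {
    "blood_test": 0.62,
    "mri": 0.80,
    "genome": 0.55,
}
modality_weights = {  # illustrative weights; should sum to 1.0
    "blood_test": 0.2,
    "mri": 0.5,
    "genome": 0.3,
}

# Weighted average of the per-modality probabilities:
# 0.62*0.2 + 0.80*0.5 + 0.55*0.3 = 0.69
fused = sum(modality_predictions[m] * modality_weights[m]
            for m in modality_predictions)
print(f"integrated risk estimate: {fused:.2f}")
```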

Finally, incorporating and integrating the political intelligence and dynamics of pharmaceutical companies, health insurance companies, hospital administrators, government bodies, research institutes, and doctors, and lastly incorporating the emotions, financial conditions, and disease conditions of the patients into the machine learning models, is a real challenge.

9. Lack of Empathy and Compassion

Concerns have been raised about a loss of human contact and increased social isolation if machine learning technologies are used to replace staff or family time with patients. Machine learning may cover the whole process of treatment; however, empathy, proper communication, and the human touch are still equally important and essential.

10. Stopping the Malicious Uses

AI is becoming an essential technology for healthcare and can benefit us in numerous ways. However, we are increasingly aware that it can be used for nefarious purposes by malicious actors. This is something we should all remain vigilant about, as these technologies also have the potential to undermine our conventional defenses. It was always inevitable that, sooner or later, such a powerful technology would be used for malicious purposes. Malicious actors can take advantage of the machine learning process and taint the data pool from which these systems learn. The challenge is to design and develop self-learning and self-protective systems that can identify malicious code or data at an early stage.
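
As one hedged illustration of such a self-protective step (a partial measure, not a complete defense against data poisoning), anomaly detection can flag suspicious samples before they enter the training pool. Here scikit-learn's IsolationForest scores synthetic data with a few injected outliers.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Clean training pool plus a handful of injected (poisoned) samples.
clean = rng.normal(loc=0.0, scale=1.0, size=(300, 4))
poisoned = rng.normal(loc=8.0, scale=0.5, size=(5, 4))
pool = np.vstack([clean, poisoned])

# Flag statistical outliers before they reach the learning algorithm.
detector = IsolationForest(contamination=0.02, random_state=7)
labels = detector.fit_predict(pool)  # -1 marks suspected outliers

print("suspected poisoned samples:", np.where(labels == -1)[0])
```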

Summary:

Deep learning and other AI techniques are increasingly showing superior performance in many application areas. Compared to human experts, AI can predict and diagnose from complex data at much higher speed. However, many obstacles remain, and a number of issues are still unsolved for implementing large-scale integrated AI systems.

There are two types of challenges in present-day Artificial Intelligence systems: soft challenges and hard challenges. We have mainly discussed ten immediate soft challenges for implementing large-scale AI applications. Best practices should be identified in research areas with more mature methods for addressing these issues. High-quality training datasets are key to success. Experts-in-the-loop at various levels of system development, testing, and validation may improve not only the way input datasets are selected and predictive performance is evaluated; they could also guide the learning process in better ways.

Source Books:

    1. Compassionate Artificial Intelligence: Frameworks and Algorithms by Dr. Amit Ray
    2. Compassionate Superintelligence, AI 5.0 by Dr. Amit Ray
    3. Artificial Intelligence for Precision Medicine by Dr. Amit Ray
    4. Quantum Computing Algorithms for Artificial Intelligence by Dr. Amit Ray