GPUs and Deep Learning for Compassionate Artificial Intelligence
Dr. Amit Ray discusses how GPUs and deep learning can be used to develop the complex modules of compassionate artificial intelligence. GPUs and cloud TPUs have made the complex deep learning behind artificial intelligence accessible from a laptop, PC, or smartphone, and have made it practical for research labs and smaller companies across the world. They provide far more computing power and efficiency for complex matrix operations and parallel processing.
One of AI’s biggest potential benefits is to keep humanity healthy, happy, and free from inequality and slavery. The role of AI is gradually changing: it is shifting from a typical object recognition or diagnosis tool to complex, human-like, integrated compassionate caregiving systems. Compassionate AI is one area where AI is beginning to take strong hold. Here, Dr. Ray explains how to implement the deep learning modules of integrated compassionate caregiving systems with GPUs and TPUs.
Training the complex modules of compassionate artificial intelligence requires deep learning neural networks. Training compassionate modules is more time-consuming than training shallow learning models of AI; it may take months, weeks, days, or hours depending on the size of the training set and the model architecture. This is because the number of computational steps increases rapidly with the number of elements in the matrices: every doubling of the matrix size increases the length of the calculation eight-fold. Training on GPUs can reduce this training time considerably, often by a factor of ten or more. Hence, GPUs are also crucial for evaluating multiple deep learning models efficiently.
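To see where the eight-fold figure comes from: multiplying two n × n matrices with the standard algorithm takes about 2n³ floating point operations, so doubling n multiplies the work by 2³ = 8. A minimal NumPy sketch of this cubic scaling (timings are machine-dependent, and optimized BLAS libraries may deviate from the ideal ratio):

```python
import time
import numpy as np

# Multiplying two n x n matrices takes about 2 * n**3 floating point
# operations, so doubling n multiplies the work by roughly eight.
for n in (1024, 2048):
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    start = time.perf_counter()
    _ = a @ b
    elapsed = time.perf_counter() - start
    print(f"n={n}: {elapsed:.3f} s (~{2 * n**3 / 1e9:.0f} GFLOP)")
```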
How GPUs Help Deep Learning
The core of many machine learning applications is the repeated computation of a complex mathematical expression. Neural networks store their weights in matrices, and deep learning consists mostly of matrix operations such as matrix multiplication. The emotional intelligence modules of compassionate AI require matrix operations over feelings such as anger, fear, depression, pain, and pleasure; this is what people want a machine to exhibit before they can trust it. Linear algebra makes these matrix operations on feelings and emotions fast and easy, especially when training on Graphics Processing Units (GPUs). GPUs were developed to handle many parallel computations using thousands of cores: instead of processing pixels one by one, they manipulate entire matrices of pixels in parallel. A GPU can have hundreds of times more processing cores per chip than a CPU. CPUs are optimized for serial tasks, whereas GPUs are optimized for parallel tasks. For complex problems, training a deep learning network that would take months or days on a CPU can take hours or minutes with the help of a GPU. If you want to train deep learning models quickly, a GPU is a must.
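As a concrete illustration, here is a minimal PyTorch sketch that runs the same matrix multiplication on the CPU and, when one is available, on a CUDA GPU; on typical hardware the GPU path is dramatically faster for large matrices:

```python
import torch

n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

# The same multiplication, first on the CPU...
c_cpu = a @ b

# ...then on the GPU, where thousands of CUDA cores compute the
# output elements in parallel.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()  # GPU kernels launch asynchronously
```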
Deep learning is an emerging computer science field that seeks to mimic the human brain with hardware and software. The use of generalized algorithms was crucial to the development of general-purpose GPU computing, and the popularity of graphics cards for neural network training exploded after the advent of general-purpose GPUs. Initially, GPUs were developed mainly for processing arrays of pixels in 3D video games. Their large memory bandwidth, combined with their parallel processing power, makes them ideal for implementing deep learning in compassionate AI.
Modules and Algorithms of Compassionate AI
The core of developing compassionate AI algorithms is learning, growing, adding value, and embracing challenges. These algorithms deal with possibilities, creativity, innovation, awe, and deep service to humanity. Models of emotions, beliefs, values, imagination, long-term memory, protective habits, critical thinking, and caring man-machine behavior need huge amounts of data processing and training. The large parallel processing power and memory bandwidth of GPUs, and eventually of quantum computing, are ideal for that.
Presently, machine learning tools integrated with advances in humanoid design are enabling machines to hold compassionate conversations with humans and to have social interactions with people that keep aging minds sharp and happy. More details about the modules and the algorithms are explained in the book Compassionate Artificial Superintelligence AI 5.0.
Modern Trends in GPUs
GPU performance has been growing at an exponential rate. In 2007, NVIDIA launched the CUDA programming platform, which opened up the general-purpose parallel processing capabilities of the GPU. The CUDA toolkit is now deeply entrenched: it works with all major deep learning frameworks, such as TensorFlow, PyTorch, Caffe, and CNTK, and deep learning with NVIDIA CUDA is quite efficient.
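Because the frameworks sit on top of CUDA, no kernel programming is needed to use the GPU. A minimal sketch with the TensorFlow 2 API, assuming at least one CUDA-capable GPU is installed:

```python
import tensorflow as tf

# List the GPUs TensorFlow can see through CUDA.
print(tf.config.list_physical_devices('GPU'))

# Place an operation explicitly on the first GPU (assumes one exists).
with tf.device('/GPU:0'):
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)
```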
Google’s Cloud TPU v2 accelerators are also powerful: they are designed for machine learning and tailored for TensorFlow. Google additionally offers a free TPU cluster for researchers and a lightweight TensorFlow version for mobile devices. TPUs are provided as network-addressable devices.
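A hedged sketch of connecting to such a network-addressable TPU, using TensorFlow 2's TPU APIs; the name 'my-tpu' below is a placeholder for your own TPU node name or grpc:// address:

```python
import tensorflow as tf

# 'my-tpu' is a placeholder for an actual TPU node name or address.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Models built inside the strategy scope are replicated across TPU cores.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
```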
The most important feature for deep learning performance is memory bandwidth, and GPUs are optimized for it. Memory bandwidth is the rate at which data can be read from or stored into a semiconductor memory by a processor. Operations like matrix multiplication require high memory bandwidth.
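A back-of-the-envelope sketch of why bandwidth matters: even ignoring cache reuse, a single large matrix multiplication must stream hundreds of megabytes through memory. The bandwidth figures below are illustrative assumptions, not measurements of any particular chip:

```python
# Bytes moved for one n x n float32 matrix multiplication
# (read A, read B, write C), ignoring cache reuse.
n = 4096
bytes_moved = 3 * n * n * 4  # three float32 matrices
for name, bw in [("CPU (~50 GB/s)", 50e9), ("GPU (~900 GB/s)", 900e9)]:
    ms = bytes_moved / bw * 1e3
    print(f"{name}: {bytes_moved / 1e6:.0f} MB moved in ~{ms:.2f} ms")
```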
Summary:
Training deep learning neural networks for compassionate artificial intelligence requires large numbers of matrix multiplications, which can be parallelized efficiently on GPUs. All state-of-the-art deep learning frameworks provide support for training models on either CPUs or GPUs without requiring any knowledge of GPU programming. On desktop machines, the local GPU card can often be used if the framework supports that specific brand. Alternatively, the GPU and TPU cloud compute clusters offered by commercial providers are an easy way to implement complex deep learning models of compassionate AI.
References:
- Compassionate Artificial Superintelligence AI 5.0, Amit Ray, 2018
- Large-scale Deep Unsupervised Learning using Graphics Processors, Rajat Raina et al., 2009
- Deep Learning, Yann LeCun et al., Nature, 2015