By: Hp Creative Space On: February 17, 2017

DeepMind’s PathNet

Google’s Artificial Intelligence (AI) division, DeepMind, has published a paper describing a novel algorithm called PathNet, a direction for Modular Deep Learning (DL) that melds it with Meta-Learning and Reinforcement Learning, forming a basis for more capable DL systems. The study drew the spotlight as a bold attempt by the division at constructing a first step toward Artificial General Intelligence (AGI). The paper, “PathNet: Evolution Channels Gradient Descent in Super Neural Networks,” was authored by Fernando et al. and published in early 2017 on arXiv.

Transfer Learning

Research hitherto has focused on individual pathways in the building and training of neural networks: a unique network is assigned to an AI to perform or solve a specific problem. The bottleneck, however, lies in Transfer Learning, the ability of an AI to store the knowledge obtained from solving one problem and apply it to a different but related problem. PathNet can be considered a network of neural networks, interconnecting the networks responsible for solving different problems to ultimately form an AGI construct.

Artificial General Intelligence (AGI)

The approach PathNet uses to construct AGI revolves around multiple users collaboratively training one massive neural network, constantly reusing parameters along the way. PathNet embeds agents (pathways) in the network to identify which parts of the network are fit for reuse. During learning, a genetic algorithm selects pathways through the neural network for mutation and replication; the performance of a particular pathway (its fitness) is measured according to a cost function. PathNet is designed to prevent catastrophic forgetting and loss of functionality.
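The evolutionary loop above can be sketched in a few lines. This is a minimal illustration, not DeepMind's implementation: the genotype encoding (a list of module indices per layer), the mutation scheme, and the binary-tournament helper are simplified assumptions for clarity, and the fitness values would in practice come from training each pathway on the task.

```python
import random

# Illustrative sketch of PathNet-style pathway evolution (assumed encoding):
# a genotype lists, per layer, the indices of the modules active in that layer.
L, M, N = 3, 10, 3  # layers, modules per layer, max active modules per layer

def random_genotype():
    """A pathway: for each of the L layers, N distinct module indices."""
    return [random.sample(range(M), N) for _ in range(L)]

def mutate(genotype, rate=1.0 / (L * N)):
    """Independently perturb each module index with small probability."""
    child = []
    for layer in genotype:
        new_layer = []
        for idx in layer:
            if random.random() < rate:
                idx = (idx + random.randint(-2, 2)) % M  # small local shift
            new_layer.append(idx)
        child.append(new_layer)
    return child

def tournament(population, fitness):
    """Binary tournament: the loser's pathway is overwritten by a
    mutated copy of the winner's, so fit pathways replicate."""
    a, b = random.sample(range(len(population)), 2)
    winner, loser = (a, b) if fitness[a] >= fitness[b] else (b, a)
    population[loser] = mutate(population[winner])
    return population
```

Repeating the tournament step while measuring each pathway's fitness on the task is what gradually concentrates the population on reusable parts of the super network.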

PathNet Architecture

The modular deep neural network consists of L layers, each comprising M modules. Each module is itself a small neural network followed by a transfer function. The outputs of a layer's active modules are summed before being passed into the active modules of the subsequent layer; a module is considered active if it is present in the currently evaluated path genotype. In each pathway, no more than N (typically 3 or 4) distinct modules are permitted per layer. The final layer, however, is linear, unique and unshared for each current task.
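The forward pass this describes can be sketched as follows. The shapes, the choice of a single ReLU dense layer per module, and the scalar task head are assumptions made for illustration; the point is only to show active modules being summed layer by layer before feeding the unshared final layer.

```python
import numpy as np

# Assumed dimensions for the sketch: L layers, M modules per layer,
# N active modules per pathway layer, D-dimensional features.
rng = np.random.default_rng(0)
L, M, N, D = 3, 10, 3, 16

# modules[l][m] is the weight matrix of module m in layer l
# (each module here is one dense layer followed by a ReLU).
modules = [[rng.standard_normal((D, D)) * 0.1 for _ in range(M)]
           for _ in range(L)]
final_layer = rng.standard_normal((D, 1)) * 0.1  # task-specific, unshared

def forward(x, pathway):
    """pathway[l] lists the active module indices in layer l; the active
    modules' outputs are summed before entering the next layer."""
    h = x
    for l, active in enumerate(pathway):
        h = sum(np.maximum(modules[l][m] @ h, 0.0) for m in active)
    return final_layer.T @ h  # linear, per-task output head

pathway = [[0, 3, 7], [1, 2, 5], [4, 6, 8]]  # N = 3 modules per layer
y = forward(rng.standard_normal(D), pathway)
```

Only the modules on the pathway contribute to (and would receive gradients in) a given evaluation, which is what lets different tasks train disjoint or overlapping subsets of one shared super network.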

Task Study

PathNet was studied on several tasks: supervised tasks trained by stochastic gradient descent in a parallel implementation, and reinforcement learning tasks trained by the Asynchronous Advantage Actor-Critic (A3C) algorithm. In all four investigated domains (binary MNIST classification, CIFAR and SVHN classification, and Atari and Labyrinth games), positive transfer from a source task (learning) to a target task (applying) was demonstrated in comparison with single fixed-path controls, both those trained de novo and those fine-tuned after the source task.

Neural Analogue

Poetically, DeepMind has likened the way PathNet operates to the basal ganglia, which are strongly interconnected with various brain areas and are associated with functions such as voluntary motor movements, routine behaviors, procedural learning, and cognition. The analogy stems from the basal ganglia's proposed role in determining which subsets of the cortex are active and trainable, as a function of signals from the prefrontal cortex.

