This package provides an implementation of the Differentiable Neural Computer, as published in Nature.
Any publication that discloses findings arising from using this source code must cite "Hybrid computing using a neural network with dynamic external memory", Nature 538, 471–476 (October 2016) doi:10.1038/nature20101.
The Differentiable Neural Computer is a recurrent neural network. At each timestep, it has state consisting of the current memory contents (and auxiliary information such as memory usage), and maps input at time `t` to output at time `t`. It is implemented as a collection of `RNNCore` modules, which allow plugging together the different modules to experiment with variations on the architecture.
* The access module is where the main DNC logic happens, as this is where memory is written to and read from. At every timestep, the input to the access module is a vector passed from the controller, and its output is the contents read from memory. It uses two further modules: `addressing.TemporalLinkage`, which tracks the order of memory writes, and `addressing.Freeness`, which tracks which memory locations have been written to and not yet subsequently "freed". These are both defined in `addressing.py`.
* The controller module "controls" memory access. Typically, it is just a feedforward or (possibly deep) LSTM network, whose inputs are the inputs to the overall recurrent network at that time, concatenated with the read memory output from the access module from the previous timestep.
* The `DNC` simply wraps the access module and the controller module, and forms the `RNNCore` unit of the overall architecture. This is defined in `dnc.py`; see the construction sketch below for how these pieces fit together.
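
As an illustration of how these pieces fit together, here is a minimal construction sketch in the style of the example training script; it assumes the `dnc.DNC(access_config, controller_config, output_size)` constructor used there, and all numeric values are placeholders:

```python
import dnc

# Configuration for the memory access module (sizes are placeholders).
access_config = {
    "memory_size": 16,  # number of memory slots
    "word_size": 16,    # width of each memory slot
    "num_reads": 4,     # number of read heads
    "num_writes": 1,    # number of write heads
}

# Configuration for the controller (a single LSTM layer here).
controller_config = {"hidden_size": 64}

# The DNC core wraps the controller and access module into one RNNCore.
dnc_core = dnc.DNC(access_config, controller_config, output_size=10)
```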
The `DNC` requires an installation of TensorFlow and Sonnet. An example training script is provided for the algorithmic task of repeatedly copying a given input string. This can be executed from a Python interpreter:

```shell
$ ipython train.py
```
You can specify training options, including parameters to the model and optimizer, via flags:
```shell
$ python train.py --memory_size=64 --num_bits=8 --max_length=3

# Or with ipython:
$ ipython train.py -- --memory_size=64 --num_bits=8 --max_length=3
```
Periodically saving, or 'checkpointing', the model is disabled by default. To enable it, use the `checkpoint_interval` flag. For example, `--checkpoint_interval=10000` will ensure a checkpoint is created every 10,000 steps. The model will be checkpointed to `/tmp/tf/dnc/` by default, and training can be resumed from there. To specify an alternate checkpoint directory, use the `checkpoint_dir` flag. Note: ensure that `/tmp/tf/dnc/` is deleted before training is resumed with different model parameters, to avoid shape inconsistency errors.
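
For instance, combining both flags (the directory below is just an illustrative placeholder):

```shell
$ python train.py --checkpoint_interval=10000 --checkpoint_dir=/tmp/dnc_run1
```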
More generally, the `DNC` class found within `dnc.py` can be used as a standard TensorFlow RNN core and unrolled with TensorFlow RNN ops, such as `tf.nn.dynamic_rnn`, on any sequential task.
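
As a sketch of such an unrolling (assuming the TF1-style graph API used by the example training script; all shapes and sizes are placeholders):

```python
import tensorflow as tf
import dnc

batch_size, seq_length, input_size = 16, 20, 8

# Time-major input sequence: [time, batch, features].
inputs = tf.placeholder(tf.float32, [seq_length, batch_size, input_size])

dnc_core = dnc.DNC(
    access_config={"memory_size": 16, "word_size": 16,
                   "num_reads": 4, "num_writes": 1},
    controller_config={"hidden_size": 64},
    output_size=10)

# Initial state for the memory, usage, linkage, and controller.
initial_state = dnc_core.initial_state(batch_size)

# Unroll the DNC over the sequence like any other RNN core.
output_sequence, final_state = tf.nn.dynamic_rnn(
    cell=dnc_core,
    inputs=inputs,
    initial_state=initial_state,
    time_major=True)
```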
Disclaimer: This is not an official Google product