Autoencoder

From EyeWire
Revision as of 03:02, 24 June 2016 by Pilnpat (Talk | contribs)


An autoencoder, also known as an autoassociative encoder,[1] is a neural network that attempts to find features of the input set that can then be used to reconstruct the input. For example, if the input consists of a 10x10 square array of binary pixels -- so, 100-dimensional vectors -- an autoencoder might attempt to reduce the input set to 25 features -- so, 25-dimensional vectors.

A deep autoencoder is an autoencoder with more than one feature layer.[2] For example, the set of 25-dimensional feature vectors in the example above might be further reduced to a 10-dimensional feature vector, thus providing two layers of dimensionality reduction.
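As a sketch of the shapes involved, the two-stage reduction above (100 features, then 25, then 10) can be written as a pair of chained encoder layers. The sigmoid units, random weights, and tied-weight decoder here are illustrative assumptions, not part of the cited architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100)  # a 100-dimensional input vector

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two encoder layers: 100 -> 25 -> 10, with mirrored (tied-weight) decoders.
W1 = rng.normal(0, 0.1, (100, 25))
W2 = rng.normal(0, 0.1, (25, 10))

h1 = sigmoid(x @ W1)                          # first feature layer (25 features)
h2 = sigmoid(h1 @ W2)                         # second feature layer (10 features)
x_hat = sigmoid(sigmoid(h2 @ W2.T) @ W1.T)    # reconstruction, decoders in reverse
```

Here h2 is the 10-dimensional feature vector, and x_hat is the 100-dimensional reconstruction obtained by running the decoder layers in reverse order.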

A sparse autoencoder is an autoencoder where the dimensionality of the input is not necessarily reduced, and may in fact be increased; instead, most of the neurons in the feature layer have zero (or near-zero) output for any given input.[3] In the example above, perhaps the input is better represented by a sparse autoencoder using a set of 200 features, only some of which are activated at any one time.

Methods

An autoencoder can be trained using supervised learning techniques. In this case, a neural network is constructed where the input layer is fully connected to a hidden layer of reduced (or enlarged, for sparse autoencoders) dimensionality, and the hidden layer is then fully connected to an output layer of dimensionality equal to the input layer. Each input pattern is then presented to the network which must generate an output pattern identical to the input pattern.
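A minimal sketch of this setup, assuming sigmoid units, squared reconstruction error, and plain gradient descent; the layer sizes and random training data are hypothetical, matching the 100-input/25-feature example above:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 100, 25
X = (rng.random((500, n_in)) < 0.3).astype(float)  # 500 binary training patterns

# Encoder (input -> hidden) and decoder (hidden -> output) weights.
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))
b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(200):
    # Forward pass: the target output is the input itself.
    H = sigmoid(X @ W1 + b1)   # hidden features
    Y = sigmoid(H @ W2 + b2)   # reconstruction
    # Backward pass for squared reconstruction error.
    dY = (Y - X) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY / len(X)
    b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X)
    b1 -= lr * dH.mean(axis=0)

mse = np.mean((Y - X) ** 2)  # reconstruction error after training
```

Note that the "supervision" here requires no labels: each input pattern serves as its own target output.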

For sparse autoencoders, additional constraints must be placed on the hidden layer during learning. For example, the hidden layer could be constrained to have a very low average activation over all hidden neurons.[4]
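One common form of this constraint, following Ng's notes,[3] penalizes the divergence between each hidden unit's average activation and a small target value. The target value, layer size, and activations below are illustrative assumptions:

```python
import numpy as np

# Hypothetical sparsity penalty: push the average activation of each hidden
# unit toward a small target rho.
rho = 0.05                                        # desired average activation
H = np.random.default_rng(1).random((500, 200))   # hidden activations in (0, 1)
rho_hat = H.mean(axis=0)                          # actual average per hidden unit

# KL divergence between Bernoulli(rho) and Bernoulli(rho_hat), summed over units;
# this is added to the reconstruction loss, weighted by a sparsity coefficient.
kl = np.sum(rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

# Gradient of the penalty with respect to each unit's average activation;
# during learning this term is added to the hidden layer's backpropagated error.
grad = -rho / rho_hat + (1 - rho) / (1 - rho_hat)
```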

Autoencoders may also be trained using a combination of supervised and unsupervised learning techniques, which is especially useful for deep autoencoders, where a poor choice of initial weights may lead to poor autoencoding. For example, a deep autoencoder may first be pretrained as a restricted Boltzmann machine, and the resulting weights then used as the starting point for feedforward backpropagation.[2]
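A rough sketch of one such unsupervised pretraining step -- a contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine -- whose learned weights could then initialize the encoder. The sizes, learning rate, and random data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 100, 25
V = (rng.random((500, n_vis)) < 0.3).astype(float)  # binary training data

W = rng.normal(0, 0.01, (n_vis, n_hid))  # visible-hidden weights
a = np.zeros(n_vis)                      # visible biases
b = np.zeros(n_hid)                      # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(50):
    # Positive phase: hidden probabilities and samples given the data.
    ph = sigmoid(V @ W + b)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Negative phase: one Gibbs step back to the visible layer (CD-1).
    pv = sigmoid(h @ W.T + a)
    nh = sigmoid(pv @ W + b)
    # Update toward the data statistics and away from the model statistics.
    W += lr * (V.T @ ph - pv.T @ nh) / len(V)
    a += lr * (V - pv).mean(axis=0)
    b += lr * (ph - nh).mean(axis=0)

# After pretraining, W (and b) can initialize the autoencoder's encoder layer
# before fine-tuning with backpropagation.
```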

References

  1. Thompson, Benjamin B.; Marks, Robert J.; Choi, Jai J.; El-Sharkawi, Mohamed A.; Huang, Ming-Yuh; Bunje, Carl (2002). "Implicit Learning in Autoencoder Novelty Assessment". IEEE Proceedings of the 2002 International Joint Conference on Neural Networks. Volume 3, pp. 2878–2883.
  2. 2.0 2.1 Hinton, Geoffrey E.; Salakhutdinov, R. R. (July 28, 2006). "Reducing the dimensionality of data with neural networks". Science 313: 504–507.
  3. Ng, Andrew. "CS294A Lecture Notes, Sparse autoencoder"
  4. "Autoencoders and sparsity"