This is a straightforward implementation of an autoencoder for MNIST digits. Instead of elaborating on the whole deep-learning universe in lengthy sentences, I'll summarize things in telegram style:

  • MNIST is a collection of images representing handwritten digits
  • autoencoder means that you feed in the images and expect them to come out as close to identical as possible, with a reduction of the information in between. Compare it to zipping and unzipping in one go and judging the quality of the round trip. A perfect autoencoder would return the input unchanged.
  • Keras is a popular deep-learning framework which internally uses TensorFlow or Theano for the actual computation on CPU/GPU.
  • adadelta: an adaptive learning-rate optimization algorithm, one of the many optimizer choices you have when using Keras
  • binary cross-entropy: a way to measure how well the output matches the target
  • input/output are vectors: each square image is flattened into a vector of pixel values scaled between 0 and 1
  • loss: in general a measure of how large the (collective) error is between input and output
  • accuracy refers to mistakes as well: accuracy says how often the prediction was correct, while loss tells you how big the mistakes are.
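The loss bullet above can be made concrete with a small numeric sketch. This is not code from the implementation; it is an illustrative NumPy calculation of binary cross-entropy between a target pixel vector and two candidate reconstructions (all values are made up):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred):
    # Mean of -[y*log(p) + (1-y)*log(1-p)] over all pixels.
    eps = 1e-7  # clip to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0, 0.0])  # target pixels
good   = np.array([0.9, 0.1, 0.8, 0.2])  # close reconstruction -> small loss
bad    = np.array([0.6, 0.4, 0.5, 0.5])  # vague reconstruction -> larger loss

print(binary_cross_entropy(y_true, good))
print(binary_cross_entropy(y_true, bad))
```

The closer the reconstruction sits to the target, the smaller the loss; a reconstruction hovering around 0.5 everywhere is heavily penalized.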

Note that everything in this list is as much hard science as it is an art or craft. It's OK if you discover that another optimization algorithm suits your needs better. It's fine to use any other deep-learning framework as well.
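Putting the pieces from the list together, a minimal version could look as follows. This is a sketch, not the original implementation: the layer sizes (784 → 32 → 784) are illustrative choices, and it assumes the `tensorflow.keras` API. Swapping the optimizer, as suggested above, is a one-string change in `compile`:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# 784 = 28x28 pixels per flattened MNIST image; 32 is an illustrative bottleneck size.
input_img = keras.Input(shape=(784,))
encoded = layers.Dense(32, activation="relu")(input_img)      # compress
decoded = layers.Dense(784, activation="sigmoid")(encoded)    # reconstruct

autoencoder = keras.Model(input_img, decoded)
autoencoder.compile(optimizer="adadelta", loss="binary_crossentropy",
                    metrics=["accuracy"])

# Load MNIST, scale pixel values to [0, 1] and flatten each image.
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Input and target are the same images: the network learns to reconstruct.
# (One epoch on a small subset here just to show the call; train longer in practice.)
autoencoder.fit(x_train[:1024], x_train[:1024], epochs=1, batch_size=256, verbose=0)
```

Afterwards, `autoencoder.predict(x_test)` returns reconstructed 784-vectors you can reshape back to 28×28 and compare visually with the originals.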