Autoencoders

Linear Autoencoder

We first implement a linear autoencoder. The LinearAE class constructor receives two values:

The encoder stores two inner members:
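The class can be sketched as follows. This is a minimal sketch, assuming the two constructor values are the input dimension and the latent dimension, and that the two inner members are a linear encoder and a linear decoder; the actual names and arguments in the assignment may differ.

```python
import torch
import torch.nn as nn

class LinearAE(nn.Module):
    """A linear autoencoder: no activation functions anywhere."""

    def __init__(self, input_dim, latent_dim):
        super().__init__()
        # The two inner members (names are an assumption):
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        # Encode to the latent space, then reconstruct.
        return self.decoder(self.encoder(x))
```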

Network Autoencoder

Next, we construct an NN-based autoencoder, using the following architecture:



All layers are fully connected; note the dimensions of each layer, as well as the activation functions (ReLU/Tanh).

The NetworkAE class constructor receives four values:

The encoder stores two inner objects:
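A hedged sketch of such a class is below. The four constructor values are assumed here to be (input_dim, hidden_dim, latent_dim, out_dim), and the two inner objects are assumed to be an encoder and a decoder built as nn.Sequential stacks; the exact architecture dimensions and activation placement in the assignment's figure may differ.

```python
import torch
import torch.nn as nn

class NetworkAE(nn.Module):
    """An NN autoencoder with ReLU/Tanh activations (layout is an assumption)."""

    def __init__(self, input_dim, hidden_dim, latent_dim, out_dim):
        super().__init__()
        # The two inner objects: an encoder and a decoder.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim), nn.Tanh(),  # bounds the latent code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```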

Training

Next, we implement a function that trains a given network.

The input parameters are:
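A training loop for an autoencoder can be sketched as below. The parameter list (model, X, n_epochs, lr) is an assumption rather than the assignment's exact signature; the loss used here is the standard MSE reconstruction loss with the Adam optimizer.

```python
import torch
import torch.nn as nn

def train(model, X, n_epochs=200, lr=1e-2):
    """Train `model` to reconstruct X; returns the per-epoch loss history."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    losses = []
    for _ in range(n_epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), X)  # reconstruction error against the input itself
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses
```

The returned loss history is convenient for plotting the training curve.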

Testing the Autoencoders

To test these encoders, we use the two-moon dataset.

Our goal is to find a suitable 1-dimensional representation of this 2-dimensional data.

For comparison, we also implement dimensionality reduction and reconstruction using principal component analysis (PCA).
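The PCA baseline amounts to projecting the 2-dimensional data onto its first principal component and mapping the 1-dimensional codes back. A minimal sketch using scikit-learn (the sample size and noise level are illustrative choices):

```python
from sklearn.datasets import make_moons
from sklearn.decomposition import PCA

# Two-moon data: 2-D points forming two interleaving half-circles.
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

pca = PCA(n_components=1)
Z = pca.fit_transform(X)          # 1-D codes along the first principal component
X_rec = pca.inverse_transform(Z)  # 2-D reconstruction, constrained to a line
```

Because the reconstruction is an affine map of a 1-D code, PCA can only place points on a straight line, which is why it struggles with the curved moons.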

The plots below demonstrate the effectiveness of the three methods.

We test the performance of the NN autoencoder on new data from make_moons.
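Evaluating on fresh data can be sketched as follows; a different random_state gives a new sample from the same distribution, and the helper below computes the mean squared reconstruction error (the helper name and the `model` argument are assumptions, standing in for the trained NN autoencoder):

```python
import torch
from sklearn.datasets import make_moons

# Fresh two-moon data, unseen during training.
X_new, _ = make_moons(n_samples=100, noise=0.05, random_state=1)
X_new = torch.tensor(X_new, dtype=torch.float32)

def reconstruction_mse(model, X):
    """Mean squared reconstruction error of `model` on X (no gradients needed)."""
    with torch.no_grad():
        return torch.mean((model(X) - X) ** 2).item()
```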

The cell below shows the 1-dimensional encoded data on the left, and the decoded reconstruction on the right.

We can see what the decoder is doing by applying it to a sweep of points across the latent space.
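One way to do this is to decode an evenly spaced grid of latent values and plot the resulting 2-D curve. The sketch below assumes the trained model exposes a `decoder` member that maps 1-dimensional codes to 2-dimensional points; the grid bounds and resolution are illustrative.

```python
import torch

def decode_latent_grid(model, lo=-1.0, hi=1.0, n=100):
    """Decode n evenly spaced latent codes in [lo, hi]; returns an (n, 2) tensor."""
    z = torch.linspace(lo, hi, n).unsqueeze(1)  # (n, 1) column of latent codes
    with torch.no_grad():
        return model.decoder(z)                 # the curve the decoder traces in 2-D
```

Plotting the output shows the 1-D manifold the decoder has learned to draw through the data.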