Friday, 3 January 2025

Hour 23 Autoencoders

#### Concept

Autoencoders are neural networks used for unsupervised learning tasks, particularly dimensionality reduction and data compression. They learn to encode input data into a lower-dimensional representation (the latent space) and then decode it back into an approximation of the original data. Training aims to make the reconstruction as close to the original input as possible.
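Before the full example, here is a minimal sketch of that objective (illustrative NumPy of my own, not part of the Keras implementation below). Reconstruction quality is often scored with mean squared error; the example later in this post uses binary cross-entropy instead, but the idea is the same.

import numpy as np

# Score how close a reconstruction x_hat is to the original x.
def reconstruction_error(x, x_hat):
    return np.mean((x - x_hat) ** 2)  # mean squared error

x = np.random.rand(784)                   # a flattened 28x28 image
x_hat = x + 0.01 * np.random.randn(784)   # a near-perfect reconstruction
print(reconstruction_error(x, x_hat))     # near 0 means a good reconstruction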

#### Key Components

1. Encoder: Maps the input data to a lower-dimensional space.

2. Latent Space: The compressed representation of the input data (see the dimension sketch after this list).

3. Decoder: Reconstructs the data from the lower-dimensional representation.
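To make the sizes concrete, here is a quick arithmetic sketch (my own illustration, using the dimensions from the MNIST example later in this post): a 28x28 image flattens to 784 values, while the latent space keeps only 32.

# Dimension flow for the MNIST example below (illustrative only)
input_dim = 28 * 28              # encoder input: 784 pixel values
encoding_dim = 32                # latent space: 32 values
print(input_dim / encoding_dim)  # 24.5, i.e. roughly 24.5x compression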

#### Key Steps

1. Encoding: Compress the input data into a latent space.

2. Decoding: Reconstruct the input data from the latent space.

3. Optimization: Minimize the reconstruction error between the original and the reconstructed data (written as a formula below).
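Formally (in my own notation, not the post's), with encoder f_theta and decoder g_phi, training minimizes the expected reconstruction loss over the data:

\[
\min_{\theta, \phi} \; \mathbb{E}_{x}\left[ L\big(x,\, g_{\phi}(f_{\theta}(x))\big) \right]
\]

where L is a reconstruction loss such as mean squared error or binary cross-entropy.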

#### Implementation

Let's implement an autoencoder using Keras to compress and reconstruct images from the MNIST dataset.

##### Example

# Import necessary libraries

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.datasets import mnist

# Load the MNIST dataset
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
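# (Added note, not in the original post: after flattening, x_train has
# shape (60000, 784) and x_test has shape (10000, 784), with pixel
# values scaled to the [0, 1] range.)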

# Define the autoencoder architecture
input_dim = x_train.shape[1]
encoding_dim = 32

# Encoder
input_img = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation='relu')(input_img)

# Decoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)

# Autoencoder model
autoencoder = Model(input_img, decoded)

# Compile the model
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
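# (Added note: binary cross-entropy treats each pixel value in [0, 1] as a
# Bernoulli probability, which pairs naturally with the sigmoid output layer
# above; per pixel the loss is -(x*log(x_hat) + (1-x)*log(1-x_hat)),
# averaged over all pixels.)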

# Train the model
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

# Encoder model to extract the latent representation
encoder = Model(input_img, encoded)

# Decoder model to reconstruct the input from the latent representation
encoded_input = Input(shape=(encoding_dim,))
decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))
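# (Added note: taking autoencoder.layers[-1] works here because the decoder
# is a single Dense layer; with a deeper decoder you would chain each of its
# layers onto encoded_input in order.)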

# Encode and decode some digits
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
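# Sanity check (added, not in the original post): confirm the compression.
print(encoded_imgs.shape)  # (10000, 32)  -> 32-dimensional latent codes
print(decoded_imgs.shape)  # (10000, 784) -> reconstructed flat images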

# Plot the original and reconstructed images
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    # Display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

plt.show()
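Once trained, the standalone encoder can be reused for dimensionality reduction, for example to feed compact 32-dimensional features into another model. A minimal sketch (assumed usage; the file name is illustrative, and the .keras format requires a recent Keras version):

# Extract 32-dimensional features for downstream use.
features = encoder.predict(x_train)  # shape (60000, 32)

# Persist the encoder for later reuse.
encoder.save('mnist_encoder.keras')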

##### Result


Epoch 50/50
791/791 ━━━━━━━━━━━━━━━━━━━━ 4s 5ms/step - loss: 1.7999e-04  
Test Loss: 2.278068132000044e-05


