reshalfahsi/next-frame-prediction

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

6 Commits
 
 
 
 
 
 

Repository files navigation

Next-Frame Prediction Using Convolutional LSTM

In the next-frame prediction problem, we aim to generate the subsequent frame of a given video. Video inherently carries two kinds of information: spatial (image) and temporal. The Convolutional LSTM model can extract and process both, exploiting the inductive bias suited to each. In a Convolutional LSTM, the fully connected layers inside the LSTM cell are replaced with convolution operations. To evaluate the model, the Moving MNIST dataset is used.
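To make the idea concrete, here is a minimal, illustrative sketch of a single ConvLSTM cell in PyTorch. This is not the repository's implementation; the class name and layer sizes are assumptions for demonstration. The key point is that the four LSTM gates are computed with a convolution over the concatenated input and hidden state, so the hidden and cell states remain spatial feature maps.

```python
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Illustrative ConvLSTM cell: the LSTM gate pre-activations are
    produced by a convolution instead of fully connected layers, so
    spatial structure is preserved in the hidden and cell states."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # A single convolution emits all four gates (i, f, g, o) at once.
        self.conv = nn.Conv2d(
            in_channels + hidden_channels,
            4 * hidden_channels,
            kernel_size,
            padding=padding,
        )
        self.hidden_channels = hidden_channels

    def forward(self, x, state):
        h, c = state
        # Gates are functions of the current frame and previous hidden map.
        gates = self.conv(torch.cat([x, h], dim=1))
        i, f, g, o = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c = f * c + i * g          # standard LSTM cell-state update
        h = o * torch.tanh(c)      # hidden state stays a 2D feature map
        return h, c
```

Stacking such cells over time (and optionally in depth) yields the full Convolutional LSTM used for next-frame prediction.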

Experiment

Follow this link to explore the next-frame prediction implementation.

Result

Quantitative Result

The table below summarizes the model's performance on the test set.

| Test Metric | Score  |
| ----------- | ------ |
| Loss        | 0.006  |
| MAE         | 0.021  |
| PSNR        | 22.120 |
| SSIM        | 0.881  |
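For reference, MAE and PSNR are both simple functions of the pixel-wise error between the predicted and ground-truth frames. The sketch below (plain NumPy, not the repository's evaluation code) shows how they are computed; `max_val` is the maximum possible pixel intensity, 1.0 for frames normalized to [0, 1].

```python
import numpy as np


def mae(pred, target):
    # Mean absolute error over all pixels.
    return np.mean(np.abs(pred - target))


def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB, derived from the mean squared
    # error; higher values mean the predicted frame is closer to the
    # ground truth.
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM additionally compares local luminance, contrast, and structure between the two frames, so it rewards perceptual similarity rather than raw pixel agreement.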

Evaluation Metric Curve

loss_curve
The loss curve on the training and validation sets of the Convolutional LSTM model.

mae_curve
The MAE curve on the training and validation sets of the Convolutional LSTM model.

psnr_curve
The PSNR curve on the training and validation sets of the Convolutional LSTM model.

ssim_curve
The SSIM curve on the training and validation sets of the Convolutional LSTM model.

Qualitative Result

This GIF displays the qualitative, frame-by-frame predictions of the Convolutional LSTM model.

qualitative
The Convolutional LSTM model predicts the frames from t = 1 to t = 19, one step at a time.
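Frame-by-frame prediction over a horizon like this is typically done autoregressively: each predicted frame is appended to the context and fed back into the model to predict the next one. The sketch below illustrates that loop; the `model` callable, which maps a stack of frames to the next frame, is a hypothetical interface and not the repository's API.

```python
import numpy as np


def rollout(model, context, n_future):
    """Autoregressively predict n_future frames.

    Each prediction is appended to the growing frame sequence and fed
    back into the model, so prediction errors can accumulate over time.
    """
    frames = list(context)
    for _ in range(n_future):
        nxt = model(np.stack(frames))  # predict the next frame
        frames.append(nxt)
    return np.stack(frames[len(context):])  # only the predicted frames
```

Because each step consumes the model's own output, small errors compound as the horizon grows, which is why predicted frames tend to blur toward later timesteps.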

Credit