
Building an Image Captioning Model using VGG16 and LSTM on the Flickr30k dataset

Objective:

Image captioning is the task of generating a natural language description of an image. It involves two main components: image analysis and natural language processing. In this project, I built an image captioning model that generates captions for images by combining the VGG16 convolutional network (for image feature extraction) with an LSTM network (for language generation). The dataset used for this project is the Flickr30k dataset.

Import Modules:

To build my image captioning model, I imported various Python modules, including Keras, TensorFlow, Pandas, NumPy, NLTK, and Matplotlib. These modules were used for tasks such as image processing, text preprocessing, data loading, and visualization.

Import dataset from Kaggle:

I downloaded the Flickr30k dataset from Kaggle using the Kaggle API. The dataset consists of 31,783 images with five captions per image, providing a total of 158,915 captions.
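The download can be scripted with the official Kaggle API. A minimal sketch, assuming the dataset slug and output path (both are placeholders; substitute the slug of whichever Flickr30k mirror you use):

```python
import kaggle

kaggle.api.authenticate()  # reads credentials from ~/.kaggle/kaggle.json
kaggle.api.dataset_download_files(
    "hsankesara/flickr-image-dataset",  # placeholder slug for a Flickr30k mirror
    path="data/",
    unzip=False,  # we unzip explicitly in the next step
)
```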

Unzip Data files:

The dataset files were downloaded in zip format, so I extracted them using Python's built-in zipfile module.
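For example (archive name and target directory are assumptions):

```python
import zipfile

# Extract the downloaded archive into a working directory
with zipfile.ZipFile("data/flickr-image-dataset.zip", "r") as zf:
    zf.extractall("data/flickr30k/")
```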

Set the Directories:

I set up the directories for the image and caption files and saved them as variables for further processing.

VGG16 Model using ImageNet:

I used the pre-trained VGG16 model, which was trained on the ImageNet dataset, as a feature extractor. I loaded it with Keras and removed the final classification layer, so the network outputs image feature vectors rather than class predictions.
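A minimal sketch of this truncation in Keras, keeping the 4096-dimensional fc2 activations as the image representation:

```python
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Model

# Load VGG16 with ImageNet weights, then drop the final softmax layer;
# the second-to-last (fc2) layer yields a 4096-dimensional feature vector
vgg = VGG16(weights="imagenet")
feature_extractor = Model(inputs=vgg.inputs, outputs=vgg.layers[-2].output)
```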

Load Image data:

I loaded the image data using the Keras ImageDataGenerator class, which preprocesses the images and converts them into numeric arrays that can be fed into the VGG16 model.

Extract Image Features:

I extracted the image features using the VGG16 model and saved them to a file using the NumPy module. This step was performed to avoid recomputing the features each time the model was trained.
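A sketch of the extraction loop, assuming the directory layout above. For simplicity it uses Keras' load_img/img_to_array utilities rather than ImageDataGenerator, and caches the resulting dictionary with NumPy:

```python
import os
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

vgg = VGG16(weights="imagenet")
extractor = Model(inputs=vgg.inputs, outputs=vgg.layers[-2].output)

image_dir = "data/flickr30k/images"  # assumed directory layout
features = {}
for name in os.listdir(image_dir):
    img = load_img(os.path.join(image_dir, name), target_size=(224, 224))
    x = preprocess_input(img_to_array(img)[np.newaxis, ...])
    features[name] = extractor.predict(x, verbose=0)[0]

# Cache the features so training runs don't recompute them;
# reload later with np.load("features.npy", allow_pickle=True).item()
np.save("features.npy", features)
```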

Load the Captions Data:

The captions data was stored in a CSV file, and I used Pandas to read the file and extract the captions.
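For example (the pipe-delimited layout below matches common Kaggle mirrors of Flickr30k, but the exact file name and separator are assumptions):

```python
import pandas as pd

# One row per (image, caption) pair; column names often carry stray spaces
captions_df = pd.read_csv("data/flickr30k/results.csv", sep="|")
captions_df.columns = [c.strip() for c in captions_df.columns]
print(captions_df.head())
```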

Preprocess text data:

I preprocessed the text data by tokenizing the captions, creating a vocabulary of all unique words in the captions, and mapping the words to their corresponding integer values. This was done using the NLTK module in Python.
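A minimal sketch of this preprocessing with NLTK, using toy captions; startseq/endseq markers delimit each caption so the model knows where to begin and stop generating:

```python
import nltk
from nltk.tokenize import word_tokenize

nltk.download("punkt", quiet=True)

def clean_caption(text):
    # Lowercase, tokenize, and keep alphabetic tokens only
    tokens = word_tokenize(text.lower())
    return [t for t in tokens if t.isalpha()]

captions = ["A dog runs across the grass.", "Two children play outside."]  # toy data
tokenized = [["startseq"] + clean_caption(c) + ["endseq"] for c in captions]

# Vocabulary of unique words and the word -> integer mapping
vocab = sorted({w for caption in tokenized for w in caption})
word_to_index = {w: i + 1 for i, w in enumerate(vocab)}  # 0 is reserved for padding
```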

Load Saved Image Features and Text Features for Further Processing:

I loaded the saved image features and text features that were saved in the previous step. This step was performed to speed up the training process.

Vocabulary dictionary:

I created a vocabulary dictionary that maps integer values to their corresponding words. This was used to convert the predicted integer values back to their corresponding words.
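A sketch of the inverse mapping (a toy word_to_index is shown so the snippet stands alone):

```python
# Invert word -> index so predicted integer IDs map back to words
word_to_index = {"startseq": 1, "a": 2, "dog": 3, "endseq": 4}  # toy example
index_to_word = {index: word for word, index in word_to_index.items()}
assert index_to_word[3] == "dog"
```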

Train-Test split:

I split the data into training and testing sets with a 70-30 split, using scikit-learn's train_test_split() function.
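For example, splitting at the image level so that all five captions of an image land on the same side of the split (file names are illustrative):

```python
from sklearn.model_selection import train_test_split

image_ids = ["1000092795.jpg", "10002456.jpg", "1000268201.jpg", "1000344755.jpg"]
train_ids, test_ids = train_test_split(image_ids, test_size=0.30, random_state=42)
```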

Data generator:

I created a data generator that yields batches of image features and partial caption sequences to feed the model during training. This was implemented as a Python generator function using the yield keyword.
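A sketch of the expanding-window scheme commonly used here (names and batch handling are assumptions): each caption of length n yields n-1 training pairs, each mapping a caption prefix plus the image features to the next word:

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

def data_generator(image_ids, captions, features, word_to_index,
                   max_length, vocab_size, batch_size=32):
    """Yield ([image_features, padded_prefix], next_word) batches forever."""
    X1, X2, y = [], [], []
    while True:  # Keras training generators are expected to loop indefinitely
        for img_id in image_ids:
            for caption in captions[img_id]:  # list of token lists per image
                seq = [word_to_index[w] for w in caption if w in word_to_index]
                for i in range(1, len(seq)):
                    X1.append(features[img_id])
                    X2.append(pad_sequences([seq[:i]], maxlen=max_length)[0])
                    y.append(to_categorical([seq[i]], num_classes=vocab_size)[0])
                    if len(X1) == batch_size:
                        yield [np.array(X1), np.array(X2)], np.array(y)
                        X1, X2, y = [], [], []
```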

Model creation:

I created a model consisting of an LSTM layer and a dense layer. The LSTM layer encodes the partial caption sequence, while a final dense softmax layer predicts the next word of the caption.
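The exact layer configuration isn't listed here; a common Keras pattern matching this description is the merge-style decoder sketched below, where the image features and the LSTM-encoded caption prefix are combined before the softmax (all sizes are assumptions):

```python
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size = 8000   # assumption: taken from the tokenizer in practice
max_length = 35     # assumption: longest caption length
feature_dim = 4096  # VGG16 fc2 output size

# Image feature branch
inputs1 = Input(shape=(feature_dim,))
fe = Dense(256, activation="relu")(Dropout(0.5)(inputs1))

# Caption prefix branch
inputs2 = Input(shape=(max_length,))
se = Embedding(vocab_size, 256, mask_zero=True)(inputs2)
se = LSTM(256)(Dropout(0.5)(se))

# Merge both branches and predict the next word
decoder = Dense(256, activation="relu")(add([fe, se]))
outputs = Dense(vocab_size, activation="softmax")(decoder)

model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```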

Fit the model for training:

I trained the model using the fit_generator() function in Keras. The model was trained for 30 epochs, and the loss function used was categorical cross-entropy.
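A sketch of the training call, reusing names from the earlier snippets (note that fit_generator() is deprecated in recent Keras releases; model.fit() accepts generators directly):

```python
batch_size = 32
steps_per_epoch = 2000  # assumption: roughly total training pairs // batch_size

generator = data_generator(train_ids, train_captions, features,
                           word_to_index, max_length, vocab_size, batch_size)
model.fit(generator, epochs=30, steps_per_epoch=steps_per_epoch)
```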

Prediction:

I used the trained model to predict the captions for the test images. The predicted captions were then converted from integer values to their corresponding words using the vocabulary dictionary.
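A sketch of greedy decoding, assuming the model, mappings, and max_length from the earlier snippets: the model repeatedly predicts the most likely next word until it emits endseq or hits the length limit:

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate_caption(model, image_feature, word_to_index, index_to_word, max_length):
    caption = ["startseq"]
    for _ in range(max_length):
        seq = [word_to_index[w] for w in caption if w in word_to_index]
        seq = pad_sequences([seq], maxlen=max_length)
        probs = model.predict([image_feature[np.newaxis, :], seq], verbose=0)[0]
        word = index_to_word.get(int(np.argmax(probs)))
        if word is None or word == "endseq":
            break
        caption.append(word)
    return " ".join(caption[1:])  # strip the startseq marker
```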

BLEU score calculation:

I calculated the BLEU score, a measure of n-gram overlap between the predicted captions and the reference captions. This helped me evaluate the performance of the model.
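For example, with NLTK's corpus_bleu (toy tokenized captions shown; each prediction is scored against all reference captions for its image):

```python
from nltk.translate.bleu_score import corpus_bleu

# One entry per test image: a list of reference captions and one hypothesis
references = [[["a", "dog", "runs", "on", "grass"],
               ["a", "dog", "running", "outside"]]]
hypotheses = [["a", "dog", "runs", "outside"]]

# Weights select the n-gram orders; (1, 0, 0, 0) gives unigram BLEU-1
bleu1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0))
print(f"BLEU-1: {bleu1:.3f}")
```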

Model Performance:

In this project, I aimed to build a model that can generate descriptive and coherent captions for the test images, and the BLEU score of 0.6 that I achieved indicates that the model can generate captions reasonably similar to the reference captions. The Flickr30k dataset was a great choice for my image captioning project because it provided a large and diverse set of images with high-quality, descriptive captions. This allowed the model to learn from a wide variety of image-caption pairs and improve its performance. Additionally, the VGG16 model served as a powerful feature extractor, providing a rich set of visual features that could be used to generate descriptive and meaningful captions. The LSTM architecture was well-suited to caption generation because it can handle variable-length input sequences and can remember long-term dependencies.

Optimization:

One possible optimization would be to use a more advanced model architecture, such as a Transformer model, which has been shown to achieve state-of-the-art performance on image captioning tasks. Another optimization would be to use a larger dataset, which would allow the model to learn from an even wider variety of image-caption pairs and improve its performance further. Additionally, data augmentation techniques such as rotation, flipping, and zooming could be used to increase the variety of images in the dataset.
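As one concrete example, the augmentations mentioned above could be configured with Keras' ImageDataGenerator (parameter values are illustrative; augmented images would also need their VGG16 features re-extracted):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,     # random rotations up to 20 degrees
    horizontal_flip=True,  # random left-right flips
    zoom_range=0.15,       # random zoom in/out by up to 15%
)
```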

Conclusion:

Overall, this project was a valuable learning experience that provided practical skills and insights into the field of machine learning and data science. It was a great opportunity to work with a real-world dataset and apply machine-learning techniques to a practical problem. Building the image captioning model developed valuable skills in data preprocessing, model creation, and evaluation, all of which are essential in machine learning and data science and apply to a wide range of problems. Beyond the technical skills, the project also offered insights into the challenges and considerations involved in building a machine learning model for a practical application: the importance of data quality, model architecture, evaluation metrics, and more.
