[BSL logo]

Baby Sign Language Recognition


[Sample signal]

📖 Table of Contents

  1. ➤ About The Project
  2. ➤ Prerequisites
  3. ➤ Dataset
  4. ➤ Roadmap
  5. ➤ Contributors

-----------------------------------------------------

📝 About The Project

This project focuses on recognizing sign language gestures made by babies. We explore the gestures infants use to convey their needs and emotions, with the goal of improving parent-infant communication and understanding.

-----------------------------------------------------

🍴 Prerequisites

[Made with Python badge] [Made with Jupyter badge]

The following open source packages are used in this project:

  • NumPy
  • PyTorch (torch)
  • Matplotlib
  • scikit-learn
  • OpenCV (cv2)
  • cvzone
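
These can typically be installed with pip (package names as published on PyPI; exact versions are not pinned here):

```sh
pip install numpy torch matplotlib scikit-learn opencv-python cvzone
```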

-----------------------------------------------------

💾 Dataset

Our dataset features video clips of infants performing sign gestures, each paired with its corresponding interpretation. These clips serve as the foundation for training models to recognize the signs babies make and understand their intended meanings. Our mission is to enrich parent-infant communication through the interpretation of baby sign language.

Expressions Included in the Dataset:

  • "milk"
  • "eat"
  • "I don't know"
  • "down"
  • "drink"
  • "frustrated"
  • "I love you"
  • "mad/grumpy"
  • "mine"
  • "mom"
  • "potty"
  • "sorry"
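
For training, these twelve expressions can be mapped to integer class indices. A minimal sketch in Python (the ordering below is an illustrative assumption, not necessarily the one used in the project):

```python
# Label list for the 12 expressions; the ordering here is an assumption.
CLASS_NAMES = [
    "milk", "eat", "I don't know", "down", "drink", "frustrated",
    "I love you", "mad/grumpy", "mine", "mom", "potty", "sorry",
]

# Map between human-readable labels and the integer class indices used by a classifier.
LABEL_TO_INDEX = {name: idx for idx, name in enumerate(CLASS_NAMES)}
INDEX_TO_LABEL = {idx: name for idx, name in enumerate(CLASS_NAMES)}
```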

-----------------------------------------------------

🎯 Roadmap

This roadmap outlines the journey from collecting data to deploying the baby sign language recognition program:

  1. Data Gathering: We manually collected a diverse set of baby sign language videos from various sources, forming the foundation of our dataset.

  2. Data Augmentation: To enrich our dataset, we applied techniques such as rotation and noise addition, enhancing both its size and diversity (see the preprocessing sketch after this list).

  3. Data Preprocessing: This covered several essential tasks, chiefly extracting frames from the videos and focusing on the hand region of each gesture (also shown in the sketch after this list).

  4. Model Training: With our dataset prepared, we trained convolutional neural network (CNN) models to recognize and interpret baby sign gestures.

  5. Real-Time Testing: Finally, we tested the trained models on live webcam data, interpreting baby sign gestures in real time to support communication between parents and infants (a webcam inference sketch follows below).
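
A minimal sketch of the frame-extraction and augmentation steps from points 2-3, using OpenCV and NumPy. The file path, sampling step, rotation angle, and noise level are illustrative assumptions, not the project's actual settings:

```python
import cv2
import numpy as np

def extract_frames(video_path, every_n=5):
    """Read a video clip and keep every n-th frame (sampling step is an assumption)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def rotate(frame, angle=15):
    """Rotate a frame around its center by a small angle (augmentation)."""
    h, w = frame.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(frame, m, (w, h))

def add_noise(frame, sigma=10.0):
    """Add Gaussian noise to a frame (augmentation)."""
    noise = np.random.normal(0.0, sigma, frame.shape)
    return np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Hypothetical usage: sample frames from one clip and build augmented copies.
frames = extract_frames("videos/milk_001.mp4")
augmented = [rotate(f) for f in frames] + [add_noise(f) for f in frames]
```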

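A sketch of the real-time testing step (points 4-5), using cvzone's HandDetector to locate the hand and a trained PyTorch CNN to classify the cropped region. The model file name, input size, and normalization are assumptions; CLASS_NAMES is the label list sketched in the Dataset section:

```python
import cv2
import torch
from cvzone.HandTrackingModule import HandDetector

# Hypothetical checkpoint: a full model saved earlier with torch.save(model, "bsl_cnn.pt").
model = torch.load("bsl_cnn.pt", map_location="cpu")
model.eval()

detector = HandDetector(maxHands=1)   # cvzone wrapper around MediaPipe hand tracking
cap = cv2.VideoCapture(0)             # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hands, frame = detector.findHands(frame)   # detects hands and draws them on the frame
    if hands:
        x, y, w, h = hands[0]["bbox"]
        crop = frame[max(0, y):y + h, max(0, x):x + w]
        if crop.size:
            crop = cv2.resize(crop, (64, 64))   # input size is an assumption
            tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
            with torch.no_grad():
                pred = model(tensor).argmax(dim=1).item()
            cv2.putText(frame, CLASS_NAMES[pred], (x, max(0, y - 10)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Baby Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```
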
-----------------------------------------------------

📜 Contributors

🎓 All participants in this project are undergraduate students in Applied Computer Science and Artificial Intelligence at Sapienza University of Rome.

👩 Rokshana Ahmed
      Email: [email protected]
      GitHub: @RoxyDiya

👩 Elena Martellucci
      Email: [email protected]
      GitHub: @elena-martellucci

👩 Firdaous Hajjaji
      Email: [email protected]
      GitHub: @Firdaous2002


This was the final project for the course AI LAB - Computer Vision at Sapienza University of Rome.