This repository contains Python programs developed during iNTUition 5.0, a one-day hackathon held by the NTU Open Source Society and the IEEE Student Organization at Nanyang Technological University, Singapore.
https://devpost.com/software/blinkception
- Download the latest version of Anaconda from https://www.anaconda.com/download/
- While installing, choose the options to add Anaconda to the PATH environment variable AND to register Anaconda as the default Python.
- Open a command prompt and execute the following instruction:

      conda create --name opencv-env python=3.7

- After installing whatever it prompts you to install, activate the environment you just created with:

      activate opencv-env

- Execute the following commands:

      pip install numpy scipy matplotlib scikit-learn jupyter
      pip install opencv-contrib-python
      pip install cmake setuptools dlib

- Download the `shape_predictor_68_face_landmarks.dat` file from https://github.com/AKSHAYUBHAT/TensorFace/blob/master/openface/models/dlib/shape_predictor_68_face_landmarks.dat and place it in the same directory as your project.
- You are finally set! Run the program by executing the following command from your opencv-env environment:

      python eyeControl.py --shape-predictor shape_predictor_68_face_landmarks.dat
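For context, blink detection with the dlib 68-point landmark model is typically based on the eye aspect ratio (EAR), which drops sharply when the eye closes. The sketch below illustrates that calculation only; the function names and sample coordinates are hypothetical and not taken from `eyeControl.py`.

```python
import math


def _dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) landmarks.

    `eye` is the list of six points dlib returns for one eye
    (indices 36-41 for the left eye, 42-47 for the right in the
    68-landmark model).
    """
    # Vertical distances between the upper and lower eyelid landmarks.
    a = _dist(eye[1], eye[5])
    b = _dist(eye[2], eye[4])
    # Horizontal distance between the eye corners.
    c = _dist(eye[0], eye[3])
    return (a + b) / (2.0 * c)


# Illustrative landmark sets: a wide-open eye and a nearly closed one.
# A blink is registered when the EAR stays below a threshold for a few
# consecutive video frames.
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

The EAR is near-constant while the eye is open and collapses toward zero on a blink, which makes a simple threshold-plus-frame-count check sufficient.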
Libraries for speech-to-text conversion and face recognition:

    pip install pyaudio          # for Mac
    pip install SpeechRecognition
    pip install face_recognition
To implement Blinkception, you need to modify the classes of all the web elements you want the user to interact with. The class name rules are as follows:

- `bc-first` and `bc-1` for the first element of each page.
- `bc-last` for the last element of each page.
- `bc-1`, `bc-2`, `bc-3`, ... for each element, numbered in order.
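To illustrate the numbering convention above, here is a hypothetical helper for cycling through `bc-1`, `bc-2`, ... and wrapping from the last element back to the first; it is a sketch for illustration, not code from the repository.

```python
def next_class(current, total):
    """Return the `bc-N` class of the next element on the page.

    `current` is the current element's positional class (e.g. "bc-2")
    and `total` is how many `bc-N` elements the page has. Moving past
    the last element (the one also tagged `bc-last`) wraps around to
    the first (the one also tagged `bc-first`).
    """
    n = int(current.split("-")[1])   # extract N from "bc-N"
    return "bc-%d" % (n % total + 1)  # wrap total -> 1
```

A navigation loop would call this each time the user blinks to move focus to the next interactive element.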
How the Blinkception program interacts with each element also depends on the element's class. The following classes are supported:

- `bc-button` for any element that is meant to be clicked (button, link, etc.).
- `bc-slide` for any slider.
- `bc-input` for any element that takes a text input.
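The class-to-action mapping above can be sketched as a simple dispatch table; the function and the action names are hypothetical placeholders for whatever the program actually does on a blink.

```python
def action_for(classes):
    """Pick the interaction for an element based on its class list.

    `classes` is the element's list of CSS classes; the first
    supported Blinkception class found decides the action.
    """
    dispatch = {
        "bc-button": "click",  # buttons, links, anything clickable
        "bc-slide": "slide",   # sliders
        "bc-input": "type",    # text inputs
    }
    for cls in classes:
        if cls in dispatch:
            return dispatch[cls]
    return None  # element is not Blinkception-enabled
```

Positional classes (`bc-1`, `bc-first`, ...) and interaction classes are independent, so a single element typically carries one of each.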