This application counts the number of people in an environment and measures how long each person stays. It can be used in a range of settings, such as counting visitors in a mall or limiting the number of people allowed into a space.
- OpenVINO (you can run this script to automate the OpenVINO installation)
- Node.js and npm
First, download the ssd_mobilenet_v2_coco model (for example, from the TensorFlow Object Detection model zoo).
From a command line/terminal, navigate to the directory where the model was downloaded and extract it using:

```sh
tar -xvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz
```
Go to the newly extracted directory (ssd_mobilenet_v2_coco_2018_03_29) and generate the intermediate representation (IR) files (.xml and .bin) with the Model Optimizer by running the following:

```sh
python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py --input_model frozen_inference_graph.pb --tensorflow_object_detection_api_pipeline_config pipeline.config --tensorflow_use_custom_operations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --reverse_input_channels
```
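A note on the `--reverse_input_channels` flag: the TensorFlow model was trained on RGB images, while OpenCV (which, by assumption, is what `main.py` uses to read frames) delivers pixels in BGR order. The flag bakes the channel swap into the IR so no per-frame conversion is needed at inference time. A toy sketch of the swap, using nested lists in place of a real image array:

```python
# Reverse the channel order of every pixel (RGB <-> BGR).
# This mirrors what --reverse_input_channels bakes into the converted model.
def reverse_frame_channels(frame):
    """frame is rows x cols x channels; reverse only the channel axis."""
    return [[pixel[::-1] for pixel in row] for row in frame]

rgb_frame = [[[1, 2, 3], [4, 5, 6]]]  # a 1x2 "image", pixels as [R, G, B]
print(reverse_frame_channels(rgb_frame))  # -> [[[3, 2, 1], [6, 5, 4]]]
```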
Clone this repo and create a "model" directory.
Copy the generated .xml and .bin files into the "model" directory.
From a terminal, navigate to the root of the cloned repo and give the setup script permission to execute; it installs the required dependencies (i.e. Node.js modules, ffserver) on Mac and/or Linux:

```sh
chmod +x setup.sh
```

Run the script using:

```sh
./setup.sh
```
Four terminal windows will be needed to run the app.
In the first terminal, run:

```sh
cd <app_dir>/webservice/server/node-server
node ./server.js
```

If successful, the following messages should appear:

```
connected to ./db/data.db
Mosca server started.
```
Open a new terminal (i.e. the second terminal) and run the commands below:

```sh
cd <app_dir>/webservice/ui
npm run dev
```

If successful, the following message should appear:

```
webpack: Compiled successfully
```
Open the third terminal and run the commands below:

```sh
cd <app_dir>
sudo ffserver -f ./ffmpeg/server.conf
```
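The repo ships its own `ffmpeg/server.conf`, so there is nothing to write here; purely as an illustration, a minimal ffserver config consistent with the feed URL used later (port 3004, feed `fac.ffm`, 768x432 at 24 fps) might look like the sketch below. The actual file in the repo may differ:

```
HTTPPort 3004
HTTPBindAddress 0.0.0.0
MaxClients 10
MaxBandwidth 5000

<Feed fac.ffm>
  File /tmp/fac.ffm
  FileMaxSize 1G
  ACL allow 127.0.0.1
</Feed>

<Stream facstream.mjpeg>
  Feed fac.ffm
  Format mpjpeg
  VideoFrameRate 24
  VideoSize 768x432
  NoAudio
</Stream>
```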
In the fourth terminal, first initialize the OpenVINO environment by running the command below:

```sh
source /opt/intel/openvino/bin/setupvars.sh -pyver 3.5
```
To run the app on a video file, run the following on the same terminal (i.e. the fourth terminal):

```sh
python3 main.py -i <location of video> -m model/frozen_inference_graph.xml -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - https://0.0.0.0:3004/fac.ffm
```
To run the app on a webcam stream instead, pass `-i 0` (the index of the camera) and run the following on the same terminal:

```sh
python3 main.py -i 0 -m model/frozen_inference_graph.xml -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - https://0.0.0.0:3004/fac.ffm
```
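The pipe to ffmpeg works because `main.py` (by assumption) writes each processed frame to stdout as raw BGR bytes, and ffmpeg's `-f rawvideo -pixel_format bgr24 -video_size 768x432 -i -` options tell it how to slice that byte stream back into frames. The two sides must agree on the frame size; a minimal sketch of that contract, with `emit_frame` as a hypothetical helper name:

```python
import sys

WIDTH, HEIGHT, CHANNELS = 768, 432, 3    # must match -video_size and bgr24
FRAME_BYTES = WIDTH * HEIGHT * CHANNELS  # bytes ffmpeg reads per frame

def emit_frame(frame_bytes: bytes) -> None:
    """Write one raw BGR24 frame to stdout for ffmpeg to consume."""
    assert len(frame_bytes) == FRAME_BYTES, "frame does not match -video_size"
    sys.stdout.buffer.write(frame_bytes)

if __name__ == "__main__":
    emit_frame(bytes(FRAME_BYTES))  # a dummy all-black frame
```

If the resolution of the model's annotated output ever changes, the `-video_size` argument in the ffmpeg command has to change with it, or the stream will be scrambled.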
To see the output on a web-based interface, open the link https://0.0.0.0:3004 in a browser.