This project classifies different human activities into their respective actions using the `LibSVM` library.
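A LibSVM-based activity classifier like the one described can be sketched as follows. This is a minimal illustration, not the project's actual pipeline: it uses scikit-learn's `SVC` (which is built on libsvm) with synthetic two-feature data standing in for real activity features.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic feature vectors (mean acceleration magnitude, variance)
# for two hypothetical activities: "walking" (label 1) vs "sitting" (label 0).
walking = rng.normal(loc=[1.2, 0.8], scale=0.1, size=(50, 2))
sitting = rng.normal(loc=[0.1, 0.05], scale=0.05, size=(50, 2))
X = np.vstack([walking, sitting])
y = np.array([1] * 50 + [0] * 50)

# RBF-kernel SVM, solved by the libsvm backend.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X, y)

print(clf.predict([[1.2, 0.8], [0.1, 0.05]]))  # class centers, well separated
```

In practice the feature vectors would come from windowed sensor or video data rather than synthetic draws.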
Gaussian Latent Dirichlet Allocation
Human Robot Interaction
Data and code from a PhD thesis, for interactive exploration and reproducibility
The FMCW radar-based food intake monitoring-oriented dataset contains 70 meal sessions (4132 eating gestures and 893 drinking gestures) from 70 participants with a total duration of 1155 minutes. Four eating styles (fork & knife, chopsticks, spoon, hand) are included in this dataset.
ROS/ROS2 -- Navigation, Manipulation, Mimicking, Sensor Fusion, VR, Speech Recognition, Activity Recognition, Computer Vision
The purpose of the application is to build a feature vector for meal prediction from values recorded by air quality sensors. The features produced are automatically merged with statistical features already present in some files. For classification, use Weka's Knowledge Flow Environment configured in the fi…
Learning to learn for Personalised Human Activity Recognition
This Python OpenCV code segments the human subject from video frames
Human Pose Tracking-Skeleton Generation
Classifying the physical activities performed by a user based on accelerometer and gyroscope sensor data collected by a smartphone in the user’s pocket.
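A common first step for smartphone-sensor classification like this is sliding-window feature extraction. The sketch below is illustrative (the window and step sizes, and the mean/std features, are assumptions, not this project's actual settings):

```python
import numpy as np

def window_features(signal, window=50, step=25):
    """Split a (T, C) sensor signal into overlapping windows and
    compute mean and standard deviation per channel."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# 6 channels: 3-axis accelerometer + 3-axis gyroscope (synthetic data).
rng = np.random.default_rng(1)
signal = rng.normal(size=(500, 6))
X = window_features(signal)
print(X.shape)  # (19, 12): 19 windows, mean + std over 6 channels
```

Each row of `X` would then be fed to a classifier such as an SVM or random forest.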
This experiment classifies human activities using a 2D pose time series dataset and an LSTM RNN. The idea is to prove the concept that a series of 2D poses, rather than 3D poses or raw 2D images, can produce an accurate estimation of the behaviour of a person or animal.
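The recurrence such an LSTM applies to each pose frame can be sketched as a single NumPy cell. The weights are random and the shapes (18 joints with x/y coordinates, hidden size 32) are illustrative assumptions, not the repository's actual network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input x (d,), states h/c (n,); W (4n,d), U (4n,n), b (4n,)."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i, f, o, g = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated cell update
    h_new = sigmoid(o) * np.tanh(c_new)               # gated hidden output
    return h_new, c_new

rng = np.random.default_rng(2)
d, n = 36, 32  # 18 joints x (x, y) per frame; hidden size 32 (assumed)
W = rng.normal(scale=0.1, size=(4 * n, d))
U = rng.normal(scale=0.1, size=(4 * n, n))
b = np.zeros(4 * n)

h = c = np.zeros(n)
for frame in rng.normal(size=(30, d)):  # a 30-frame pose sequence
    h, c = lstm_step(frame, h, c, W, U, b)
print(h.shape)  # final hidden state, fed to a softmax classifier
```

In a real setup the weights are trained end-to-end and the final hidden state is projected onto the activity classes.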
TCN neural network architecture for the automatic detection of people drawing ellipses at multiple timescales
Report task for Neurocognitive Computing | Winter Semester 2022-23
LSTM-based architecture to classify human activity from sensor data.
This project implements the Spatio-Temporal Image Encoding used in the paper "Human Activity Recognition: A Spatio-temporal Image Encoding of 3D Skeleton Data for Online Action Detection", published in the "International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP) 2022".
This repository contains a deep learning model for walking activity recognition using data collected by wearable sensors.
SmartRelationship Android App