Welcome to the GitHub repository for Team Tongmotjahaotdog's submission to the 3rd ETRI Human Understanding AI Competition (2024). This repository contains the source code and instructions needed to reproduce our model and results. Our approach processes and synchronizes various sensor data to accurately predict human activities and conditions.
- Date: June 28, 2024
The overall folder structure is depicted in the diagram below:
ETRIHumanUnderstanding/
│
├── datasets/
│   ├── val_datasets/
│   ├── test_datasets/
│   ├── image_datasets/
│   ├── raw_datasets/
│   │   ├── valid_datasets/
│   │   └── test_datasets/
│   │       ├── acc/
│   │       ├── activity/
│   │       └── hr/
│   └── raw_datasets_all_sensor/
│       ├── valid_datasets/
│       │   ├── acc/
│       │   ├── activity/
│       │   ├── hr/
│       │   ├── step/
│       │   ├── light/
│       │   └── gps/
│       └── test_datasets/
│           ├── acc/
│           ├── activity/
│           ├── hr/
│           ├── step/
│           ├── light/
│           └── gps/
│
├── notebooks/
│   ├── preprocessing/
│   ├── training/
│   └── inference/
│
├── logs/
├── models/
└── result/
- Move `val_datasets` and `test_datasets` into the `./datasets` folder.
- Execute all four preprocessing notebooks (for the 5-channel and 11-channel datasets).
- Run the two training notebooks.
- Execute the two inference notebooks.
- Run `submit.ipynb` to generate `submit.csv`, the final output file.
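The steps above can also be run headlessly with `jupyter nbconvert --execute` instead of opening each notebook by hand. The sketch below mirrors the pipeline order; it is a convenience suggestion, not part of the submitted code, and it prints the commands by default (set `DRY_RUN = False` to actually execute them).

```python
# Sketch: run the pipeline notebooks in order without the Jupyter UI.
# The notebook names mirror the repository listing; DRY_RUN only prints
# the commands so the order can be inspected safely.
import subprocess

NOTEBOOKS = [
    "preprocessing_valid.ipynb",
    "preprocessing_test.ipynb",
    "preprocessing_valid_all_sensor.ipynb",
    "preprocessing_test_all_sensor.ipynb",
    "train_11channel(resnext101).ipynb",
    "train_5channel(seresnext101).ipynb",
    "inference_5channel(seresnext101).ipynb",
    "inference_11channel(resnext101).ipynb",
    "submit.ipynb",
]
DRY_RUN = True  # flip to False to actually execute the notebooks

commands = [["jupyter", "nbconvert", "--to", "notebook",
             "--execute", "--inplace", nb] for nb in NOTEBOOKS]
for cmd in commands:
    if DRY_RUN:
        print(" ".join(cmd))
    else:
        subprocess.run(cmd, check=True)
```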
- Clone the repository: `git clone https://github.com/Tongmotjahaotdog/ETRI2024.git`
- Install the required libraries: `pip install -r requirements.txt`
The datasets provided by ETRI are processed without including the year in the training and validation data, as per the competition rules. Note that the datasets are pre-split according to the provided instructions, with separate folders for the validation and test datasets.
- Preprocessing:
  - `jupyter notebook preprocessing_valid.ipynb`
  - `jupyter notebook preprocessing_test.ipynb`
  - `jupyter notebook preprocessing_valid_all_sensor.ipynb`
  - `jupyter notebook preprocessing_test_all_sensor.ipynb`
- Training:
  - `jupyter notebook train_11channel(resnext101).ipynb`
  - `jupyter notebook train_5channel(seresnext101).ipynb`
- Inference:
  - `jupyter notebook inference_5channel(seresnext101).ipynb`
  - `jupyter notebook inference_11channel(resnext101).ipynb`
- Submission: finally, run the `submit.ipynb` notebook to compile the results into `submit.csv`.
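As a rough illustration of what the submission step does, the sketch below merges two per-model prediction tables and averages their scores before writing the final CSV. The file names, column names, and the averaging rule are illustrative assumptions, not the actual logic of `submit.ipynb`.

```python
# Sketch: combining two inference outputs into submit.csv.
# Column names ("id", "score") and simple score averaging are
# assumptions for illustration only.
import pandas as pd

pred_5ch = pd.DataFrame({"id": [0, 1], "score": [0.2, 0.8]})
pred_11ch = pd.DataFrame({"id": [0, 1], "score": [0.4, 0.6]})

# Align the two models' predictions on the sample id.
merged = pred_5ch.merge(pred_11ch, on="id", suffixes=("_5ch", "_11ch"))

# Average the scores and threshold to a hard label.
merged["label"] = ((merged["score_5ch"] + merged["score_11ch"]) / 2
                   ).round().astype(int)
submission = merged[["id", "label"]]
submission.to_csv("submit.csv", index=False)
```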
Our methodology synchronizes sensor data recorded at different frequencies, converts it into uniform 1 Hz time series, and then renders the series in image format. This allows us to leverage convolutional neural networks (CNNs) for feature extraction and subsequent activity prediction.
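The synchronization step can be sketched as follows: each sensor stream is resampled to a common 1 Hz grid and the aligned channels are stacked into an image-like 2-D array. The column names, sampling rates, and one-minute window below are illustrative assumptions, not the competition's actual data format.

```python
# Sketch: align sensors sampled at different rates onto a 1 Hz grid,
# then stack the channels into an image-like array for a CNN.
# Rates, columns, and window length are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx_acc = pd.date_range("2024-06-28", periods=600, freq="100ms")  # ~10 Hz
idx_hr = pd.date_range("2024-06-28", periods=60, freq="1s")       # 1 Hz

acc = pd.DataFrame(rng.normal(size=(600, 3)), index=idx_acc,
                   columns=["acc_x", "acc_y", "acc_z"])
hr = pd.DataFrame(rng.normal(70, 5, size=(60, 1)), index=idx_hr,
                  columns=["hr"])

# Downsample everything to 1 Hz by mean, then join on the shared grid.
merged = acc.resample("1s").mean().join(hr.resample("1s").mean())

# One minute of 1 Hz data -> a (channels, 60) "image" for the CNN.
image = merged.to_numpy().T
```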
Our models are based on variants of ResNeXt and SEResNeXt, which have shown robust performance on time-series image data derived from synchronized sensor readings. A detailed analysis and comparison of model performance is available in the `models/` directory.
We aim to explore more sophisticated data augmentation techniques and potentially leverage multi-view learning frameworks to enhance model generalizability and performance.
- Yonghoon Na
- Seunghoon Oh
- Seongji Ko
For any additional information or queries, please open an issue in this repository.