MOSAD - Mobile Sensing Human Activity Data Set

This is the supporting website for the paper "Time Series Segmentation Applied to a New Data Set for Mobile Sensing of Human Activities". It contains the source code used in the paper, the MOSAD data set, the raw results, and the analysis notebooks. It reflects the state of the paper for reproducibility and is purposely not updated further.

Human activity recognition (HAR) systems are machine learning (ML) workflows that automatically detect activities from motion data, captured for example by wearable devices such as smartphones. These devices contain multiple sensors that record human motion as acceleration, rotation, and orientation in long time series (TS). As a first step, HAR methods typically partition such recordings into smaller subsequences before applying more advanced feature extraction and classification techniques. In this study, we evaluate the performance of six classical and recently published state-of-the-art TS segmentation (TSS) algorithms on MOSAD, a new large HAR benchmark of 126 TS recorded from six participants with recent smartphone sensors. Our results show that the ClaSP algorithm produces significantly more accurate segmentations than the other methods, scoring the best result in 57 out of 126 TS. This indicates that ClaSP can be a viable solution for TSS in HAR systems. The FLOSS algorithm also shows promising results, particularly for long TS with many segments. However, there is still room for improvement and further research is needed.
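To illustrate what such a TSS step looks like in code, here is a minimal sketch that applies binary segmentation (BinSeg, one of the six competitors) to a synthetic one-dimensional signal using the third-party ruptures library. This is only an illustration under the assumption that ruptures is installed; it is not the benchmark code from this repository.

import numpy as np
import ruptures as rpt  # third-party change point detection library

# synthetic 1-d "acceleration" signal with two activity changes
rng = np.random.default_rng(0)
signal = np.concatenate([
    rng.normal(0.0, 1.0, 500),  # e.g. standing
    rng.normal(5.0, 1.0, 500),  # e.g. walking
    rng.normal(1.0, 1.0, 500),  # e.g. jogging
]).reshape(-1, 1)

# binary segmentation with a least-squares cost model
algo = rpt.Binseg(model="l2").fit(signal)
change_points = algo.predict(n_bkps=2)  # offsets; the last entry equals len(signal)
print(change_points)  # roughly [500, 1000, 1500]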

You can easily load MOSAD as a pandas data frame and explore the annotated TS with our data loader. For an in-depth data exploration, see our Jupyter notebook.

>>> from src.utils import load_mosad_dataset
>>> df_mosad = load_mosad_dataset()
>>> df_mosad.head()
...
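For a first look without knowing the exact column layout (which is documented in the data loader and the exploration notebook), a few schema-agnostic pandas calls suffice:

>>> df_mosad.shape    # presumably one row per annotated TS
>>> df_mosad.columns  # available annotation and sensor columns
>>> df_mosad.iloc[0]  # inspect the first recording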

Benchmark Results

We have evaluated six TSS algorithms on MOSAD. The following table summarises the average Covering performance (higher is better; a short sketch of how Covering is computed follows the table) and the corresponding average ranks. More details can be found in the paper. The raw measurements and an analysis Jupyter notebook are included in this repository (see the experiments and notebooks folders).

Segmentation Algorithm   Average Covering   Average Rank   Wins & Ties
ClaSP                    74.0%              2.2            57/126
FLOSS                    66.3%              2.9            36/126
ESPRESSO                 61.8%              3.4            17/126
BinSeg                   62.9%              3.5             8/126
Window                   56.1%              4.1            10/126
BOCD                     50.2%              4.9             0/126
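The Covering score compares a predicted segmentation to the ground-truth annotation by matching each annotated segment to its best-overlapping predicted segment. Below is a minimal, self-contained sketch of the metric as it is commonly defined for TSS benchmarks (we assume the paper follows this standard definition); the evaluation code actually used for the table above lives in the benchmark and src folders.

def covering_score(true_cps, pred_cps, ts_len):
    """Covering between two segmentations given as sorted change point offsets."""
    def to_segments(cps):
        bounds = [0] + list(cps) + [ts_len]
        return [set(range(bounds[i], bounds[i + 1])) for i in range(len(bounds) - 1)]

    true_segs, pred_segs = to_segments(true_cps), to_segments(pred_cps)

    score = 0.0
    for t_seg in true_segs:
        # best Jaccard overlap of this ground-truth segment with any predicted segment
        best = max(len(t_seg & p_seg) / len(t_seg | p_seg) for p_seg in pred_segs)
        score += len(t_seg) * best
    return score / ts_len

# toy example: one true change at offset 500, prediction off by 20 points
print(covering_score([500], [520], ts_len=1000))  # ~0.96

A perfect prediction yields 1.0; the 74.0% for ClaSP in the table corresponds to an average Covering of 0.74 over the 126 TS.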

Organisation

This repository is structured in the following way:

  • benchmark contains the source code used to run the six TSS competitors on MOSAD.
  • datasets contains the raw recordings and annotations of MOSAD as published in the paper.
  • experiments contains the raw measurement results of the six competitors.
  • figures includes the paper plots, generated by the Jupyter notebooks.
  • notebooks consists of the Jupyter notebooks used to explore MOSAD and evaluate the results.
  • src contains the source code of the six competitors and utility methods.

Installation

You can download this repository by clicking the download button in the upper right corner. As this repository is a supporting website and not a maintained library, we do not recommend installing it. Instead, download and use MOSAD, and extract or adapt the code snippets of interest.

Citation

If you want to reference our work in your scientific publication, we would appreciate the following citation:

@inproceedings{mosad2023,
  title={Time Series Segmentation Applied to a New Data Set for Mobile Sensing of Human Activities},
  author={Ermshaus, Arik and Singh, Sunita and Leser, Ulf},
  booktitle={DARLI-AP@EDBT/ICDT},
  year={2023}
}

Resources

The source code for the competitors in the benchmark comes from multiple authors and projects. We list here the resources we used (and adapted) for our study:

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.