
Quantified Self (QS) Ledger

A Personal Data Aggregator and Dashboard for Self-Trackers and Quantified Self Enthusiasts

Quantified Self (QS) Ledger aggregates and visualizes your personal data.

The project has two primary goals:

  1. download all of your personal data from various tracking services (see below for the list of integrated services) and store it locally.
  2. provide a starting point for personal data analysis, data visualization, and a personal data dashboard.

At present, the main objective is to provide working data downloaders and simple data analysis for each of the integrated services.

Some initial work has started on using these data streams for predictive analytics and forecasting with machine learning, and the intention is to focus increasingly on modeling in future iterations.

Code / Dependencies:

  • The code is written in Python 3.
  • Shared and distributed via Jupyter Notebooks.
  • Most services depend on Pandas and NumPy for data manipulation and Matplotlib and Seaborn for data analysis and visualization (a minimal sketch of this pattern follows the list).
  • To get started, we recommend downloading and using the Anaconda Distribution.
  • For initial installation and setup help, see documentation below.
  • For setup and usage of individual services, see documentation provided by each integration.
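
Most analysis notebooks follow the same basic pattern: load a locally downloaded export with Pandas, then chart it with Matplotlib/Seaborn. A minimal sketch of that pattern (the file name and column names below are hypothetical placeholders, not an actual export from any integration):

    # Minimal sketch: load a downloaded CSV and plot a daily total.
    # "data/steps.csv", "date", and "steps" are hypothetical placeholders.
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns

    df = pd.read_csv("data/steps.csv", parse_dates=["date"])
    daily = df.groupby(df["date"].dt.date)["steps"].sum()

    sns.set_style("whitegrid")
    daily.plot(kind="line", figsize=(10, 4), title="Daily Steps")
    plt.tight_layout()
    plt.show()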

Current Integrations:

  • Apple Health: fitness and health tracking, data analysis and dashboard from iPhone or Apple Watch (includes an example Elasticsearch integration and Kibana health dashboard).
  • AutoSleep: iOS sleep tracking data analysis of sleep per night and rolling averages.
  • Fitbit: fitness and health tracking and analysis of Steps, Sleep, and Heart Rate from a Fitbit wearable.
  • GoodReads: book reading tracking and data analysis for GoodReads.
  • Google Calendar: past events, meetings and times for Google Calendar.
  • Google Sheets: get data from any Google Sheet which can be useful for pulling data from IFTTT integrations that add data.
  • Habitica: habit and task tracking with Habitica's gamified approach to task management.
  • Instapaper: articles read and highlighted passages from Instapaper.
  • Kindle Highlights: parser and highlight extraction from Kindle clippings, along with a sample data analysis and a tool to export highlights to separate Markdown files.
  • Last.fm: music tracking and analysis of music listening history from Last.fm.
  • Oura: Oura Ring activity, sleep and wellness data.
  • RescueTime: track computer usage and analysis of computer activities and time with RescueTime.
  • Pocket: articles read and read count from Pocket.
  • Strava: activities downloader (runs, cycling, swimming, etc.) and analysis from Strava.
  • Todoist: task tracking and analysis of to-dos and task completion history from the Todoist app.
  • Toggl: time tracking and analysis of manual timelog entries from Toggl.
  • WordCounter: extract WordCounter app history and visualize word counts over recent periods.


How to use this project: Installation and Setup Locally

Until we provide a working version for Google Colab or other online Jupyter notebook setups, we recommend getting started by downloading and using the Anaconda Distribution, which is free and open source. This will give you a local working installation of NumPy, Pandas, Jupyter Notebook, and other Python data science tools.

After installation, we recommend creating and activating a virtual environment, either with Anaconda or manually:

python3 -m venv ~/.virtualenvs/qs_ledger

source ~/.virtualenvs/qs_ledger/bin/activate

Then clone the GitHub repo:

git clone https://github.com/markwk/qs_ledger.git

With your virtual environment activated, install the dependencies:

pip install -r requirements.txt

Then navigate into the project directory and launch an individual notebook or the full project with jupyter notebook or jupyter lab:

jupyter lab

Code Organization

Best practices and organization are still a work in progress, but in general:

  • Each project has a NAME_downloader and a NAME_data_analysis notebook (a hypothetical sketch of this split follows the list).
  • Some projects include a helper function for data pulling.
  • Optionally, some projects have useful notebooks for specific use cases, like weekly reviews.
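
As a rough illustration of that split, a downloader typically pulls records from the service's API and writes them to a local CSV, which the companion data-analysis notebook then reads. The sketch below is hypothetical: the endpoint, function name, and file names are placeholders, not the actual helpers shipped with any integration.

    # Hypothetical downloader / analysis split; the endpoint, credential
    # handling, and file names are placeholders only.
    import requests
    import pandas as pd

    def download_service_data(api_key, start_date, end_date):
        """Pull raw records from a tracking service's API into a DataFrame."""
        response = requests.get(
            "https://api.example-service.com/v1/records",  # placeholder endpoint
            params={"start": start_date, "end": end_date},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=30,
        )
        response.raise_for_status()
        return pd.DataFrame(response.json()["records"])

    # NAME_downloader notebook: fetch and store locally
    records = download_service_data("YOUR_API_KEY", "2019-08-14", "2019-10-14")
    records.to_csv("data/service_records.csv", index=False)

    # NAME_data_analysis notebook: read the stored CSV and analyze
    records = pd.read_csv("data/service_records.csv")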

Useful Shortcuts

You can use the command line to run Jupyter notebooks directly and, in the case of Papermill, pass parameters:

With nbconvert:

  • pip install nbconvert
  • jupyter nbconvert --to notebook --execute --inplace rescuetime/rescuetime_downloader.ipynb

With Papermill:

  • pip install papermill

  • papermill rescuetime_downloader.ipynb data/output.ipynb -p start_date '2019-08-14' -p end_date '2019-10-14'

  • NOTE: You first need to parameterize your notebook in order to pass parameters from the command line (see the sketch below).
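
For example, Papermill looks for an ordinary code cell tagged parameters that holds the default values (in Jupyter, tags can be added via the cell toolbar or the JupyterLab property inspector); the -p values passed on the command line override these defaults. The parameter names below match the Papermill call above:

    # Cell tagged "parameters" in rescuetime_downloader.ipynb.
    # Papermill injects the -p values after this cell, overriding the defaults.
    start_date = '2019-08-14'
    end_date = '2019-10-14'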

Creators and Contributors:

Want to help? Fork the project and provide your own data analysis, integration, etc.

Questions? Bugs? Feature Requests? Need Support?

Post a ticket in the QS Ledger Issue Queue
