This connector transforms raw Medicare SAF claims data into the Tuva Claims Input Layer, which enables you to run most of the other components of the Tuva Project with very little effort. For a detailed overview of what the project does and how it works, check out our Knowledge Base. For information on data models and to view the entire DAG, check out our dbt Docs.
- BigQuery
- Redshift
- Snowflake
- You have Medicare SAF claims data loaded into a data warehouse.
- You have dbt installed and configured (i.e. connected to your data warehouse).
Here are instructions for installing dbt.
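As a quick sketch of the dbt prerequisite, dbt Core and a warehouse adapter can be installed with pip. The adapter package below assumes Snowflake; swap in `dbt-bigquery` or `dbt-redshift` for your platform:

```shell
# Install dbt Core plus the adapter for your warehouse
# (Snowflake shown here as an example).
pip install dbt-core dbt-snowflake

# Confirm the installation, then verify your profile can
# reach the warehouse (run from inside a dbt project).
dbt --version
dbt debug
```

`dbt debug` checks that the profile in `~/.dbt/profiles.yml` can actually connect, which is worth doing before running the package.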
Complete the following steps to configure the package to run in your environment.
- Clone this repo to your local machine or environment.
- Update the dbt_project.yml file to use the dbt profile connected to your data warehouse.
- Run the `dbt build` command, specifying the database and schema locations you want to read data from and write data to:

```
dbt build --vars '{input_database: medicare, input_schema: saf, output_database: tuva, output_schema: claims_input}'
```
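The configuration steps can be sketched end to end as follows. The repository URL is an assumption for illustration; substitute the actual location of this repo:

```shell
# Hypothetical repo URL -- replace with the actual repository location.
git clone https://github.com/tuva-health/medicare_saf_connector.git
cd medicare_saf_connector

# Install any package dependencies declared in packages.yml.
dbt deps

# Build, pointing the package at your source and target locations.
dbt build --vars '{input_database: medicare, input_schema: saf, output_database: tuva, output_schema: claims_input}'
```

The `--vars` values shown are examples; set them to the database and schema names in your own warehouse.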
Note: The source data table names need to match the table names in sources.yml, which follow the Medicare SAF data dictionary. If you rename any tables, make sure you:
- Update the table names in sources.yml
- Update the table names in the medical_claim and eligibility jinja functions
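For illustration, a sources.yml entry after a rename might look like the following sketch. The source name, database, schema, and table names here are assumptions; match them to your warehouse:

```yaml
# sources.yml (sketch): each `name` under `tables` must match the
# physical table name in your warehouse after any rename.
version: 2

sources:
  - name: medicare_saf          # hypothetical source name
    database: medicare
    schema: saf
    tables:
      - name: inpatient_claims_renamed   # hypothetical renamed table
      - name: beneficiary_summary
```

Any model or jinja function that references a renamed table must point at the updated `name`, or dbt will fail to resolve the source.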
The Tuva Project team only maintains the latest version of this project. We highly recommend you stay current with the latest version.
Have an opinion on the mappings? Notice any bugs when installing and running the project? If so, we encourage and welcome your feedback! While we work on a formal process in GitHub, we can be easily reached on our Slack community.
Join our growing community of healthcare data practitioners on Slack!