Does your pipeline have over 10 unique upstream sources? Do you experience painful backfills? Do you want cleaner lines of responsibility?
Try out the following pattern:
- Group the sources together.
- Come up with the shared schema. Try to optimize for space here, since you are trading some extra storage for much easier backfills. Partition the table by ds and group (a DDL sketch follows this list).
- Build a Scala enum that keeps track of all the groups and which items are in each group. This enum stores all the data quality (DQ) information and anything else that is a constant value for a given group and the items within it, so it becomes a form of self-documenting data quality code. There should be an obvious and clean mapping: one group entry -> one Spark job (see the enum sketch below).
- Create an abstract class that takes in a source function and an entry in the Scala enum (see the abstract class sketch below).
- Transform the Scala enum into a "little book" Hive table that keeps track of all the groups and items. This table can also be used for data quality checks and dashboards (see the final sketch below).
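
As a rough sketch of the shared table, the DDL might look like the following in Spark SQL. The table and column names are hypothetical, and the `properties` map is one way to absorb source-specific fields without widening the schema:

```scala
import org.apache.spark.sql.SparkSession

object CreateSharedTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("create-shared-table")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical shared schema: every source lands in the same table,
    // with source-specific fields packed into a map to save columns.
    // Partitioning by ds and group makes per-group backfills cheap.
    spark.sql("""
      CREATE TABLE IF NOT EXISTS shared_events (
        item        STRING,
        event_time  TIMESTAMP,
        properties  MAP<STRING, STRING>
      )
      PARTITIONED BY (ds STRING, `group` STRING)
      STORED AS PARQUET
    """)
  }
}
```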
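One way to express the "enum" in Scala 2 is a sealed trait with case objects (Scala 3's `enum` keyword works similarly). All names, fields, and thresholds below are illustrative assumptions, not a prescribed schema:

```scala
// Constant, self-documenting metadata for one item within a group,
// e.g. a minimum daily row count that a DQ check can assert on.
final case class SourceItem(
  name: String,
  minRowCount: Long,
  ownerTeam: String
)

sealed trait SourceGroup {
  def name: String
  def items: Seq[SourceItem]
}

object SourceGroup {
  // One entry per group; each entry should map to exactly one Spark job.
  case object WebLogs extends SourceGroup {
    val name  = "web_logs"
    val items = Seq(
      SourceItem("page_views", minRowCount = 1000000L, ownerTeam = "web"),
      SourceItem("clicks",     minRowCount = 500000L,  ownerTeam = "web")
    )
  }

  case object MobileLogs extends SourceGroup {
    val name  = "mobile_logs"
    val items = Seq(
      SourceItem("app_opens", minRowCount = 200000L, ownerTeam = "mobile")
    )
  }

  val all: Seq[SourceGroup] = Seq(WebLogs, MobileLogs)
}
```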
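The abstract class then ties a group entry to a source function, so each concrete subclass is one Spark job. This is a minimal sketch building on the `SourceGroup` and `shared_events` assumptions above; the write strategy shown (positional insert into the partitioned table) is one option among several:

```scala
import org.apache.spark.sql.functions.lit
import org.apache.spark.sql.{DataFrame, SparkSession}

// Each concrete subclass supplies its group entry plus a function
// that reads that group's raw upstream sources.
abstract class GroupedSourceJob(
    group: SourceGroup,
    source: SparkSession => DataFrame
) {
  // Group-specific logic that reshapes the raw data into the shared schema.
  def transform(raw: DataFrame): DataFrame

  def run(spark: SparkSession, ds: String): Unit = {
    transform(source(spark))
      .withColumn("ds", lit(ds))
      .withColumn("group", lit(group.name))
      .write
      .mode("append") // a backfill would instead overwrite the ds/group partition
      // insertInto matches columns positionally, so transform must emit
      // columns in table order, with the partition columns appended last.
      .insertInto("shared_events")
  }
}
```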
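Finally, flattening the enum into the "little book" table might look like this; the output table name and column names are again hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object PublishLittleBook {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("publish-little-book")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Flatten the enum into one row per (group, item) pair so DQ checks
    // and dashboards can join against it.
    val rows = for {
      group <- SourceGroup.all
      item  <- group.items
    } yield (group.name, item.name, item.minRowCount, item.ownerTeam)

    rows
      .toDF("group_name", "item_name", "min_row_count", "owner_team")
      .write
      .mode("overwrite")
      .saveAsTable("little_book_of_sources")
  }
}
```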
From the root of the project, execute the commands below:

- To clear all compiled classes, build, and log directories: `./gradlew clean`
- To run tests: `./gradlew test`
- To build the jar: `./gradlew build`
- All combined: `./gradlew clean test build`