Always know what to expect from your data.
Great Expectations helps data teams eliminate pipeline debt through data testing, documentation, and profiling.
Software developers have long known that testing and documentation are essential for managing complex codebases. Great Expectations brings the same confidence, integrity, and acceleration to data science and data engineering teams.
See Down with Pipeline Debt! for an introduction to the philosophy of pipeline testing.
Expectations are assertions for data. They are the workhorse abstraction in Great Expectations, covering all kinds of common data issues, including:
- expect_column_values_to_not_be_null
- expect_column_values_to_match_regex
- expect_column_values_to_be_unique
- expect_column_values_to_match_strftime_format
- expect_table_row_count_to_be_between
- expect_column_median_to_be_between
- ...and many more
Expectations are declarative, flexible, and extensible.
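For example, here is a minimal sketch of Expectations in action against a pandas DataFrame, using the classic `great_expectations` Pandas API (the file and column names here are hypothetical):

```python
import great_expectations as ge

# Load a batch of data as a Great Expectations dataset
df = ge.read_csv("my_data.csv")

# Each Expectation runs immediately and returns a validation result
result = df.expect_column_values_to_not_be_null("user_id")
print(result.success)  # True if no nulls were found

result = df.expect_column_values_to_be_unique("user_id")
print(result.success)
```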
Expectations are a great start, but it takes more to get to production-ready data validation. Where are Expectations stored? How do they get updated? How do you securely connect to production data systems? How do you notify team members and triage when data validation fails?
Great Expectations supports all of these use cases out of the box. Instead of building these components yourself over weeks or months, you can add production-ready validation to your pipeline in a day. This "Expectations on rails" framework plays nicely with other data engineering tools, respects your existing namespaces, and is designed for extensibility.
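As a sketch of what a validation step in a pipeline can look like, here's the classic Pandas-backed API (the file and column names are hypothetical; in a real deployment you would load a stored Expectation Suite from your project rather than declaring Expectations inline):

```python
import great_expectations as ge

# Load the incoming batch and attach Expectations to it
batch = ge.read_csv("new_arrivals.csv")
batch.expect_table_row_count_to_be_between(min_value=1, max_value=100000)
batch.expect_column_values_to_not_be_null("id")

# Run every Expectation attached to this batch and gate the pipeline
results = batch.validate()
if not results.success:
    raise ValueError("Data validation failed; halting the pipeline")
```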
Many data teams struggle to maintain up-to-date data documentation. Great Expectations solves this problem by rendering Expectations directly into clean, human-readable documentation.
Since docs are rendered from tests, and tests are run against new data as it arrives, your documentation is guaranteed to never go stale. Additional renderers allow Great Expectations to generate other kinds of "documentation," including Slack notifications, data dictionaries, customized notebooks, etc.
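Rebuilding the docs site is a one-liner. Here's a minimal sketch assuming a project created with `great_expectations init` and the classic `DataContext` API:

```python
from great_expectations.data_context import DataContext

# Resolve the current project (created by `great_expectations init`)
context = DataContext()

# Re-render the HTML Data Docs site from the project's Expectation
# Suites and validation results, then open it in a browser
context.build_data_docs()
context.open_data_docs()
```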
Wouldn't it be great if your tests could write themselves? Run your data through one of Great Expectations' data profilers and it will automatically generate Expectations and data documentation. Profiling, a beta feature of Great Expectations, provides the double benefit of helping you explore data faster and capturing knowledge for future documentation and testing.
Automated profiling doesn't replace domain expertise—you will almost certainly tune and augment your auto-generated Expectations over time—but it's a great way to jump start the process of capturing and sharing domain knowledge across your team.
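Here's a minimal sketch of profiling, assuming the classic `BasicDatasetProfiler` and a hypothetical CSV file:

```python
import great_expectations as ge
from great_expectations.profile.basic_dataset_profiler import BasicDatasetProfiler

# Load a batch to profile (hypothetical file)
df = ge.read_csv("my_data.csv")

# Profiling returns a generated Expectation Suite along with the
# validation results the profiler observed while scanning the data
suite, validation_result = BasicDatasetProfiler.profile(df)
print(suite)
```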
Every component of the framework is designed to be extensible: Expectations, storage, profilers, renderers for documentation, actions taken after validation, etc. This design choice gives a lot of creative freedom to developers working with Great Expectations.
Recent extensions include new deployment patterns such as:
- How to Use Great Expectations with Google Cloud Platform and BigQuery
- How to Use Great Expectations in Databricks
- How to Use Great Expectations in Flyte
We're very excited to see what other plugins the data community comes up with!
To see Great Expectations in action on your own data:
You can install it using pip:

```bash
pip install great_expectations
```

or conda:

```bash
conda install -c conda-forge great-expectations
```

and then run:

```bash
great_expectations init
```
(We recommend deploying within a virtual environment. If you’re not familiar with pip, virtual environments, notebooks, or git, you may want to check out the Supporting Resources, which will teach you how to get up and running in minutes.)
For full documentation, visit Great Expectations on readthedocs.io.
If you need help, hop into our Slack channel—there are always contributors and other users there.
Great Expectations works with the tools and systems that you're already using with your data, including pandas, Spark, and SQL databases (via SQLAlchemy), as well as orchestration tools like Airflow, dbt, Prefect, Dagster, Kedro, and Flyte.
Great Expectations is not a pipeline execution framework.
We aim to integrate seamlessly with DAG execution tools like Spark, Airflow, dbt, Prefect, Dagster, Kedro, Flyte, etc. We DON'T execute your pipelines for you.
Great Expectations is not a data versioning tool.
Great Expectations does not store data itself. Instead, it deals in metadata about data: Expectations, validation results, etc. If you want to bring your data itself under version control, check out tools like DVC and Quilt.
Great Expectations currently works best in a Python/Bash environment.
Following the philosophy of "take the compute to the data," Great Expectations currently supports native execution of Expectations in three environments: pandas, SQL (through SQLAlchemy Core), and Spark. That said, all orchestration in Great Expectations is Python-based. You can invoke it from the command line without using a Python programming environment, but if you're working in another ecosystem, other tools might be a better choice. If you're running in a pure R environment, you might consider assertR as an alternative. Within the TensorFlow ecosystem, TFDV fulfills a similar function to Great Expectations.
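To illustrate "take the compute to the data," here's a minimal sketch using the classic `SqlAlchemyDataset` interface; the connection string and table name are hypothetical:

```python
from sqlalchemy import create_engine
from great_expectations.dataset import SqlAlchemyDataset

# Hypothetical database connection and table; the Expectation compiles
# to SQL and executes inside the database rather than pulling data out
engine = create_engine("postgresql://user:pass@localhost/analytics")
batch = SqlAlchemyDataset(table_name="events", engine=engine)

result = batch.expect_column_values_to_not_be_null("event_id")
print(result.success)
```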
Great Expectations is under active development by James Campbell, Abe Gong, Eugene Mandel, Rob Lim, and Taylor Miller, with help from many others.
If you have questions, comments, or just want to have a good old-fashioned chat about data pipelines, please hop on our public Slack channel.
If you'd like hands-on assistance setting up Great Expectations, establishing a healthy practice of data testing, or adding functionality to Great Expectations, please see options for consulting help here.
Absolutely. Yes, please. Start here and please don't be shy with questions.