change: Changed run tag filtering and added additional run tag filter for better performance. (dagster-io#22833)

## Summary & Motivation
While running jobs on frequent schedules, we have noticed that as the number of runs grows, some UI operations become very slow. Looking at AWS monitoring, we see that one query in particular is very slow:

![Screenshot_20240703_151316](https://github.com/dagster-io/dagster/assets/4254771/23040062-030b-4161-a43a-8667cbef4d56)

By analyzing the query we found that it is related to run tag filtering. Here is an example of such a query with the parameters filled in:
```sql
EXPLAIN (ANALYSE, BUFFERS) SELECT runs.id,
       runs.run_body,
       runs.status,
       runs.create_timestamp,
       runs.update_timestamp,
       runs.start_time,
       runs.end_time
FROM runs
WHERE runs.run_id IN (SELECT run_tags.run_id
                      FROM run_tags
                      WHERE
                          run_tags.key = 'dagster/schedule_name' AND run_tags.value = 'quick_partitioned_job_schedule'
                         OR run_tags.key = '.dagster/repository' AND
                            run_tags.value = '__repository__@example-code'
                      GROUP BY run_tags.run_id
                      HAVING count(DISTINCT run_tags.key) = 2)
ORDER BY runs.id DESC
LIMIT 1;
```

I believe there are multiple issues discussing this:
* dagster-io#18269
* dagster-io#19003

Looking at the query plan:
https://explain.dalibo.com/plan/2c1bga585e8ca45f

![image](https://github.com/dagster-io/dagster/assets/4254771/0d8fc125-ac67-4a1b-8f37-7ff69cbfa81f)

We can see that the subquery scans a lot of rows (which is expected, as many runs share the same tags), but the subsequent filter on runs is very slow and discards most of them. A lot of work is done to retrieve just the single row with the highest matching run id, which feels like it could be much more efficient.

To improve the performance of this type of query, I would like to propose two changes:

1. Replace the subquery with multiple joins. I would expect this to produce a much flatter execution plan and thus allow filtering to happen earlier.

The example query would result in something like this:
```sql
EXPLAIN (ANALYSE, BUFFERS)
SELECT runs.id,
       runs.run_body,
       runs.status,
       runs.create_timestamp,
       runs.update_timestamp,
       runs.start_time,
       runs.end_time
FROM runs
JOIN public.run_tags r ON runs.run_id = r.run_id AND r.key = 'dagster/schedule_name' AND r.value = 'quick_partitioned_job_schedule'
JOIN public.run_tags r2 ON runs.run_id = r2.run_id AND r2.key = '.dagster/repository' AND r2.value = '__repository__@example-code'
ORDER BY runs.id DESC
LIMIT 1;
```

2. As mentioned in one of the referenced threads, add an index on `run_id` for run tags. This would make the joins in (1) much faster.

```sql
CREATE UNIQUE INDEX run_tags_run_idx ON public.run_tags USING btree (run_id, id);
```

This PR implements both changes in Dagster.
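
For illustration, a minimal sketch of the "one join per tag filter" approach built with SQLAlchemy Core could look like the following. The table and column definitions below are simplified assumptions for the sketch, not the actual Dagster schema code:

```python
# Sketch only: build one aliased join per requested tag instead of a grouped
# subquery. Table/column definitions are simplified assumptions, not the
# actual Dagster schema module.
import sqlalchemy as db

metadata = db.MetaData()

runs = db.Table(
    "runs",
    metadata,
    db.Column("id", db.Integer, primary_key=True),
    db.Column("run_id", db.String(255)),
    db.Column("run_body", db.Text),
)

run_tags = db.Table(
    "run_tags",
    metadata,
    db.Column("id", db.Integer, primary_key=True),
    db.Column("run_id", db.String(255)),
    db.Column("key", db.Text),
    db.Column("value", db.Text),
)


def runs_matching_tags(tags):
    """Return a SELECT with one run_tags join per (key, value) filter."""
    stmt = db.select(runs)
    for i, (key, value) in enumerate(tags.items()):
        alias = run_tags.alias(f"rt_{i}")  # fresh alias per tag filter
        stmt = stmt.join(
            alias,
            db.and_(
                runs.c.run_id == alias.c.run_id,
                alias.c.key == key,
                alias.c.value == value,
            ),
        )
    return stmt.order_by(runs.c.id.desc()).limit(1)
```

Each requested tag gets its own aliased join against `run_tags`, so the planner can filter runs while scanning instead of materializing the grouped subquery first.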

## How I Tested These Changes

I tested these changes by first running the test suite to make sure they don't break Dagster. Then I set up a local benchmark: I populated the Dagster instance with 5.4 million runs and 10.7 million related run tags.

Afterwards I applied the proposed changes and measured their performance. Each query was run five times and the timing of the fifth run was used, to make sure all the data was in shared buffers and the comparison is fair.
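
For reference, the warm-cache timing loop can be sketched roughly like this (the connection string is a placeholder, and this is a sketch rather than the exact harness used):

```python
# Sketch of the warm-cache timing approach: run the query five times and
# report the fifth timing so all pages are already in shared buffers.
# The connection string below is a placeholder.
import time

import psycopg2

QUERY = """
SELECT runs.id
FROM runs
JOIN run_tags r  ON runs.run_id = r.run_id  AND r.key = 'dagster/schedule_name' AND r.value = 'quick_partitioned_job_schedule'
JOIN run_tags r2 ON runs.run_id = r2.run_id AND r2.key = '.dagster/repository'  AND r2.value = '__repository__@example-code'
ORDER BY runs.id DESC
LIMIT 1;
"""

with psycopg2.connect("dbname=dagster user=dagster") as conn:  # placeholder DSN
    with conn.cursor() as cur:
        timings = []
        for _ in range(5):
            start = time.perf_counter()
            cur.execute(QUERY)
            cur.fetchall()
            timings.append(time.perf_counter() - start)

print(f"fifth (warm) run: {timings[-1]:.3f}s")
```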

Results:

| Experiment | Query Plan Analysis | Runtime |
| --- | --- | --- |
| Baseline | [10.7M run tags, unoptimized - explain.dalibo.com](https://explain.dalibo.com/plan/2c1bga585e8ca45f) | 15.579s |
| Query Optimization | [10.7M run tags, optimized query, no index - explain.dalibo.com](https://explain.dalibo.com/plan/agf3fabc77b5ce58) | 7.560s |
| Query Optimization + Custom Index | [10.7M run tags, optimized query, with index - explain.dalibo.com](https://explain.dalibo.com/plan/dgf924f51g585ab5) | 0.076s |

Interestingly, the query optimization alone results in a more complex but faster query plan. The query optimization combined with the custom index yields the desired, much simpler plan that does far fewer reads.

Overall the changes improve performance by almost 200x, and the added index shouldn't introduce much overhead.

I also found that if run_tags used `runs.id` as the foreign key instead of `runs.run_id`, the query would be even faster (0.018s). However, that change would be too invasive and could break things.

---------

Signed-off-by: Egor Dmitriev <[email protected]>
egordm committed Jul 24, 2024
1 parent aeb53a2 commit 072f762
Showing 2 changed files with 46 additions and 0 deletions.
@@ -0,0 +1,43 @@
"""add_run_tags_run_id_index
Revision ID: 284a732df317
Revises: 46b412388816
Create Date: 2024-07-10 09:35:20.215174
"""

from alembic import op
from dagster._core.storage.migration.utils import has_index, has_table

# revision identifiers, used by Alembic.
revision = "284a732df317"
down_revision = "46b412388816"
branch_labels = None
depends_on = None


def upgrade():
    if not has_table("run_tags"):
        return

    if not has_index("run_tags", "idx_run_tags_run_idx"):
        op.create_index(
            "idx_run_tags_run_idx",
            "run_tags",
            ["run_id", "id"],
            unique=False,
            postgresql_concurrently=True,
            mysql_length={"run_id": 255},
        )


def downgrade():
    if not has_table("run_tags"):
        return

    if has_index("run_tags", "idx_run_tags_run_idx"):
        op.drop_index(
            "idx_run_tags_run_idx",
            "run_tags",
            postgresql_concurrently=True,
        )
3 changes: 3 additions & 0 deletions python_modules/dagster/dagster/_core/storage/runs/schema.py
@@ -148,6 +148,9 @@
)

db.Index("idx_run_tags", RunTagsTable.c.key, RunTagsTable.c.value, mysql_length=64)
db.Index(
    "idx_run_tags_run_idx", RunTagsTable.c.run_id, RunTagsTable.c.id, mysql_length={"run_id": 255}
)
db.Index("idx_run_partitions", RunsTable.c.partition_set, RunsTable.c.partition, mysql_length=64)
db.Index(
    "idx_runs_by_job",