
Do not run benchmark prewarm in parallel with the benchmark #1584

Merged
t0yv0 merged 1 commit into master from t0yv0/do-not-run-prewarm-in-parallel on Feb 12, 2024

Conversation

t0yv0 (Member) commented Feb 12, 2024

Looks like ProgramTest defaults to enabling t.Parallel(), and this defeats the purpose of the prewarm phase in provider benchmarks. The intent of prewarm was to do a no-op run to make sure dependencies are downloaded, so that overhead is not measured as part of the benchmark. That was not happening, however, because prewarm was running in parallel with the actual benchmark.

After this change, each individual benchmark first does an unmeasured prewarm and then does the measurement. In the future we may want to measure prewarm too, as it is interesting for the user experience as well.
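For illustration, here is a minimal Go sketch of the prewarm-then-measure pattern described above. It is not the code from this PR; `runProgram` and `BenchmarkProgram` are hypothetical placeholders standing in for whatever actually deploys the benchmark program.

```go
package bench

import "testing"

// runProgram stands in for the ProgramTest-style run that deploys the
// benchmark program; hypothetical, for illustration only.
func runProgram(dir string) error { return nil }

func BenchmarkProgram(b *testing.B) {
	dir := "testdata/program"

	// Prewarm: a no-op run whose only purpose is to download dependencies,
	// so that cost is not attributed to the measured iterations.
	if err := runProgram(dir); err != nil {
		b.Fatalf("prewarm failed: %v", err)
	}

	// Only start timing once the prewarm has completed.
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if err := runProgram(dir); err != nil {
			b.Fatalf("benchmark run failed: %v", err)
		}
	}
}
```

The key point is that the prewarm runs to completion before any measurement begins, rather than in parallel with it.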


🍹 The Update for pulumi/k8s-ci-cluster/dcb98b0ba0ac5801fe1b47555bcddeadccc49832-1687 was successful.

Resource Changes

    Name                                                          Type                                        Operation
+   multicloud                                                    pulumi-kubernetes:ci:GkeCluster             create
+   password                                                      random:index/randomPassword:RandomPassword  create
+   ephemeral-ci-cluster                                          gcp:container/cluster:Cluster               create
+   primary-node-pool                                             gcp:container/nodePool:NodePool             create
+   gke                                                           pulumi:providers:kubernetes                 create
+   k8s-ci-cluster-dcb98b0ba0ac5801fe1b47555bcddeadccc49832-1687  pulumi:pulumi:Stack                         create

iwahbe (Member) commented Feb 12, 2024

Looks like the test failure is an instance of pulumi/pulumi-gcp#1387. I closed the original issue due to a missing repro, but I am re-opening it.

🍹 The Destroy for pulumi/k8s-ci-cluster/dcb98b0ba0ac5801fe1b47555bcddeadccc49832-1687 was successful.

Resource Changes

    Name                                                          Type                                        Operation
-   gke                                                           pulumi:providers:kubernetes                 delete
-   primary-node-pool                                             gcp:container/nodePool:NodePool             delete
-   ephemeral-ci-cluster                                          gcp:container/cluster:Cluster               delete
-   multicloud                                                    pulumi-kubernetes:ci:GkeCluster             delete
-   password                                                      random:index/randomPassword:RandomPassword  delete
-   k8s-ci-cluster-dcb98b0ba0ac5801fe1b47555bcddeadccc49832-1687  pulumi:pulumi:Stack                         delete

t0yv0 merged commit e3cde53 into master on Feb 12, 2024
49 of 51 checks passed
t0yv0 deleted the t0yv0/do-not-run-prewarm-in-parallel branch on February 12, 2024 at 21:21