
Energy measurements for Wagtail Bakery Demo #487

Open
ArneTR wants to merge 9 commits into main

Conversation

ArneTR commented May 25, 2024

This pull request is a bit out of the ordinary ... I hope :)

I work for an open source company called Green Coding Solutions in Germany and we started with the mission to increase awareness and actionability around digital CO2 emissions.

We have recently created an open source CI/CD tool called Green Metrics Tool that can measure the energy consumption of software.

It works by orchestrating the needed infrastructure (in this case containers) and running a usage_scenario against it.

Some prior work on this has been done with @thibaudcolas on the Gold Standard, which resulted in an initial implementation by @thibaudcolas in this fork.

Since then the fork has become quite outdated, and this PR is the update to the newest Wagtail and Bakery Demo.

The tool creates awareness of the energy cost and carbon emissions of software and empowers developers to take action for more sustainability. It also helps to understand how the energy cost of certain features of the software changes over time, and to find energy regressions or even optimization potential.

I hope this PR and the information that the Green Metrics Tool provides is interesting for you and I am super interested in your feedback on it.

Changelog

  • Added usage_scenario.yml, which allows our tooling to pick up the repository and do periodic measurements on it every time a change occurs.
  • Added wget to the Dockerfile, as we use it to warm up a cache (see the sketch after this list). If that is not viable it can also be moved to the Green Metrics Tool initialization part. I hope the added baggage is small, though.
  • Removed the superfluous volume binding in the docker-compose.yml. To my understanding this has no functional effect, as the data is already copied into the container during the build ...? If it is needed it can also be moved to the GMT volume mapping instead.
  • Added usage_scenarios testing different flows inside of the bakery, like sending a contact form, going to the admin, etc.
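For illustration, the warm-up is conceptually just fetching a few pages once before the measurement starts. A rough Python equivalent of what the wget call does (the base URL and paths here are examples, not the exact values used by our tooling):

import requests

# Rough sketch of the cache warm-up, which is normally done with wget in the container.
# BASE and the paths are illustrative, not the exact ones from usage_scenario.yml.
BASE = "http://localhost:8000"
for path in ["/", "/breads/", "/locations/"]:
    r = requests.get(BASE + path, timeout=10)
    print(path, r.status_code)  # a 200 here means the page has been rendered once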

Demo

[Screenshot, 2024-05-25 12:36 PM]

@ribalba @MichelleGruene @mrchrisadams

ArneTR (Author) commented May 25, 2024

Tagging also @syyong, who has helped with technical support for migrating to the new Bakery.

mrchrisadams commented May 26, 2024

Hey @ArneTR, can I check if I understand this?

I'm gonna refer to Green Metrics Tool as GMT to save typing. This PR seems to:

  1. introduce a usage_scenario.yml file used by GMT; this re-uses the docker compose definition and adds the same flows through the app that were defined in .greenframe.yml, but drives a green-coding-puppeteer-container docker container containing a headless browser.
  2. take the earlier scripts that were used with greenframe and introduce more generalised equivalents that work with puppeteer and are used by the green-coding-puppeteer-container above. So, where greenframe has some javascript to drive puppeteer under the hood, there are now corresponding javascript files in benchmarks/puppeteer/ that broadly follow the same steps (i.e. submitting via the contact form, exercising search, and so on; a rough stand-in is sketched below). I'm able to see the steps run in my own browser on my host machine if I edit the script to point to the binary for that browser on the host.
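For anyone following along, the kind of flow those scripts run looks roughly like this. Note this is a Python stand-in using pyppeteer rather than the repo's actual Node puppeteer scripts, and the URL and selectors are hypothetical:

import asyncio
from pyppeteer import launch  # Python port of puppeteer, used here only as a stand-in

async def contact_form_flow():
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto("http://localhost:8000/contact-us/")   # hypothetical URL
    await page.type('input[name="subject"]', "Hello")      # hypothetical selectors
    await page.type('textarea[name="message"]', "Testing the contact form")
    await page.click('button[type="submit"]')
    await page.waitForNavigation()                         # wait for the form submission
    await browser.close()

asyncio.run(contact_form_flow())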

I think I have one main question about how to test the scenarios, though. I have a copy of GMT running locally, and I can 'drive' the containers listed in this PR when I pass the path to the local checkout of this repo.

So far so good. However, I'm on a Mac, so the local readouts from running my local GMT instance do not give the same breakdown you would see here, because Apple MacBook M1 laptops don't have the same sensors as the testing machines.

So the output from running GMT locally on my machine looks like this. It looks like you can see some rendering being offloaded to the GPU (!), but you don't get the same breakdown as in the screenshot in the original PR:

[Screenshot, 2024-05-26 21:58]

Qn1. Is there a way to manually trigger a run for a given project to see how it's working, outside of pushing new code to a branch?

I'm thinking of something along the lines of the use case in the original README for @thibaudcolas's fork, where he demonstrated how to run a specific scenario on the remote dedicated testing hardware. A bit like this:

greenframe analyze https://localhost:8000/ homepage-landing.js

The closest I can think of is running the entire set of scenarios (something a bit like calling greenframe analyze) by triggering a run via the GMT API.

I'm aware there is a cool-down between each run, so it's not like you would trigger runs every minute, but it would be helpful for testing the behaviour of this PR while working on it. Maybe something like this snippet?

import requests

gmt_run_data = {
    "name": "One-off-test",
    "email": "[email protected]",
    "url": "https://github.com/org-name/project-name",
    "branch": "pr-number",
     "filename": "usage_scenario.yml",
    "machine_id": 7,
    "schedule_mode": "one-off",
}

requests.post("https://api.green-coding.io/v1/software/add", json=gmt_run_data)

There was a bit of setup involved to get a running local instance of GMT for carrying out test runs like this. I've linked below to the docs that helped me:

https://docs.green-coding.io/docs/installation/installation-macos/
https://docs.green-coding.io/docs/measuring/measuring-locally/

I have a few other questions about the state of the existing greenframe tests, as I couldn't seem to run them either, but I reckon that's likely better addressed outside the scope of this PR.

ArneTR (Author) commented May 29, 2024

Hey @mrchrisadams,

thank you so much for this detailed investigation and the additional info! To answer your questions:

  1. The way to manually trigger runs on the measurement cluster at the moment is either to use the web form at https://metrics.green-coding.io/request.html or to submit via the REST API.
    Your python code is correct. To make it more illustrative I changed the parameters to the exact ones that would be needed for the bakerydemo:
import requests

gmt_run_data = {
    "name": "BakeryDemo Test",
    "email": "[email protected]",
    "url": "https://github.com/wagtail/bakerydemo",
    "branch": "main",
     "filename": "usage_scenario.yml",
    "machine_id": 7,
    "schedule_mode": "one-off",
}

requests.post("https://api.green-coding.io/v1/software/add", json=gmt_run_data)
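If you want quick feedback that the submission was accepted, checking the HTTP response is enough. A minimal variant of the last line of the snippet above (the exact response body format is not documented here, so I just print it):

import requests

# gmt_run_data as defined in the snippet above
resp = requests.post("https://api.green-coding.io/v1/software/add", json=gmt_run_data)
resp.raise_for_status()  # raises if the API rejected the submission
print(resp.status_code, resp.text)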
  2. If this PR gets merged, the way it would be integrated is that we set up a cronjob on our side; the project would then be listed here: https://metrics.green-coding.io/energy-timeline.html

Our cluster picks up the repository once a day and does the measurement.

The changes over time are then also graphed and statistically evaluated. Here is a demo from the old bakery gold benchmark: https://metrics.green-coding.io/timeline.html?uri=https://github.com/green-coding-solutions/bakerydemo-gold-benchmark&filename=usage_scenario_warm.yml&branch=main&machine_id=7&start_date=2023-01-01
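The query parameters visible in that link (uri, filename, branch, machine_id, start_date) are all it takes to address a timeline view, so such links can also be assembled programmatically. A small sketch using only the parameters shown in the demo link above:

from urllib.parse import urlencode

# Parameters copied from the demo timeline link above
params = {
    "uri": "https://github.com/green-coding-solutions/bakerydemo-gold-benchmark",
    "filename": "usage_scenario_warm.yml",
    "branch": "main",
    "machine_id": 7,
    "start_date": "2023-01-01",
}
print("https://metrics.green-coding.io/timeline.html?" + urlencode(params))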

  3. I added the greenframe tests just for reference, because I assumed that at some point @thibaudcolas will give this PR a look and it helps in putting it into context, and they might also be interesting if energy tests are generally added to this project.
    But now, seeing your confusion, I think it might be better to remove them from the PR. Will wait for a maintainer call here for now ...

  4. The idea of having a console command to directly add a job to the cluster is very interesting. Can you open an issue on the GMT repo with some details? I would love to discuss the idea there.

ty!

ArneTR (Author) commented Jun 24, 2024

I removed the legacy greenframe benchmarks to slim down the PR and not bring in benchmarks that might not be run.

  1. I fixed the linting errors by rebasing with the changes from main. This makes the python lint pass.
    I then added the benchmarks to the ignore directory: since they are not a core part of the project and the linter mostly complained about the import statements, I thought ignoring them was the way to go here.

Let me know what you think.

"author": "",
"license": "ISC",
"dependencies": {
"puppeteer": "^19.4.0"
mrchrisadams commented Jun 25, 2024

hey @ArneTR, I think I had to explicitly npm install microtime when I used this outside of the Green Metrics Tool docker container before, to manually confirm what puppeteer was doing against a local environment version of the Wagtail Bakery Demo. Is it implicitly included in the greencoding/puppeteer-chrome docker image?

ArneTR (Author) replied

That might be correct. We tie this test to a standardized browser container. The Dockerfile source for this is: https://github.com/green-coding-solutions/example-applications/tree/main/puppeteer-firefox-chrome

I think it makes more sense to provide this container than to document dependencies for setting up puppeteer locally, as you have to use it in a container context later anyway. What do you think?
