Set up Continuous Benchmarking workflow with pytest-codspeed (#2908)
Measure the execution speed of tests to track the performance of PyGMT functions
over time, using pytest-codspeed; see
https://docs.codspeed.io/benchmarks/python#running-the-benchmarks-in-your-ci.
A unit test is decorated with @pytest.mark.benchmark to verify that the benchmarking works.

* Pin to Python 3.12
* Add shields.io badge for CodSpeed
* Document benchmarks.yml workflow in docs/maintenance.md
* Run benchmarks when a release is published
* Add benchmarks.yml to bump_gmt_checklist.md
* Only benchmark test_basemap for now
* Only run when non-test PyGMT source files and benchmarks.yml is modified

Trigger the benchmark run when files in `pygmt/clib`, `pygmt/datasets`, `pygmt/helpers`,
`pygmt/src` and `pygmt/*.py` are modified (i.e. except `pygmt/tests/**`), and also when
`.github/workflows/benchmarks.yml` is modified.
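The `@pytest.mark.benchmark` marker mentioned above can be illustrated with a minimal, PyGMT-independent sketch (the test name and body here are hypothetical); pytest-codspeed measures the runtime of any test carrying this decorator when pytest runs with `--codspeed`:

```python
import pytest


@pytest.mark.benchmark
def test_sum_of_range():
    """A trivial benchmarked test; CodSpeed times its execution in CI."""
    # Without pytest-codspeed installed, the marker is inert and the
    # test simply runs as a normal assertion.
    assert sum(range(100)) == 4950
```

With pytest-codspeed installed, `pytest --codspeed` collects timing data only for tests carrying this marker; all other tests run unmeasured.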

---------

Co-authored-by: Dongdong Tian <[email protected]>
weiji14 and seisman committed Dec 25, 2023
1 parent 06ae818 commit 013014b
Showing 5 changed files with 86 additions and 0 deletions.
1 change: 1 addition & 0 deletions .github/ISSUE_TEMPLATE/bump_gmt_checklist.md
@@ -19,6 +19,7 @@ assignees: ''
- [ ] Bump the GMT version in CI (1 PR)
- [ ] Update `environment.yml`
- [ ] Update `ci/requirements/docs.yml`
- [ ] Update `.github/workflows/benchmarks.yml`
- [ ] Update `.github/workflows/cache_data.yaml`
- [ ] Update `.github/workflows/ci_doctests.yaml`
- [ ] Update `.github/workflows/ci_docs.yml`
79 changes: 79 additions & 0 deletions .github/workflows/benchmarks.yml
@@ -0,0 +1,79 @@
# Run performance benchmarks
#
# Continuous benchmarking using pytest-codspeed. Measures the execution speed
# of tests marked with @pytest.mark.benchmark decorator.

name: Benchmarks

on:
# Run on pushes to the main branch
push:
branches: [ main ]
paths:
- 'pygmt/**/*.py'
- '!pygmt/tests/**'
- '.github/workflows/benchmarks.yml'
pull_request:
paths:
- 'pygmt/**/*.py'
- '!pygmt/tests/**'
- '.github/workflows/benchmarks.yml'
# `workflow_dispatch` allows CodSpeed to trigger backtest
# performance analysis in order to generate initial data.
workflow_dispatch:
release:
types:
- published

concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: ${{ github.ref != 'refs/heads/main' }}

jobs:
benchmarks:
runs-on: ubuntu-22.04
defaults:
run:
shell: bash -l {0}

steps:
# Checkout current git repository
- name: Checkout
uses: actions/[email protected]
with:
# fetch all history so that setuptools-scm works
fetch-depth: 0

# Install Miniconda with conda-forge dependencies
- name: Setup Miniconda
uses: conda-incubator/[email protected]
with:
auto-activate-base: true
activate-environment: "" # base environment
channels: conda-forge,nodefaults
channel-priority: strict

# Install GMT and dependencies from conda-forge
- name: Install dependencies
run: |
# $CONDA is an environment variable pointing to the root of the miniconda directory
# Prepend $CONDA/bin to $PATH so that conda's python is used over the system python
echo $CONDA/bin >> $GITHUB_PATH
conda install --solver=libmamba gmt=6.4.0 python=3.12 \
numpy pandas xarray netCDF4 packaging \
pytest pytest-benchmark pytest-mpl
python -m pip install -U pytest-codspeed setuptools
# Install the package that we want to test
- name: Install the package
run: make install

# Run the benchmark tests
- name: Run benchmarks
uses: CodSpeedHQ/[email protected]
with:
run: |
python -c "import pygmt; pygmt.show_versions()"
PYGMT_USE_EXTERNAL_DISPLAY="false" python -m pytest -r P --pyargs pygmt --codspeed
env:
GMT_LIBRARY_PATH: /usr/share/miniconda/lib/
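The two `run` commands in the benchmark step above can be reproduced locally to check the setup before pushing (a sketch; it assumes a conda environment with GMT, PyGMT's dependencies, and pytest-codspeed already installed):

```shell
# Confirm that PyGMT imports cleanly and finds the GMT shared library
python -c "import pygmt; pygmt.show_versions()"

# Run the test suite with CodSpeed instrumentation enabled;
# only tests marked with @pytest.mark.benchmark are measured
PYGMT_USE_EXTERNAL_DISPLAY="false" python -m pytest -r P --pyargs pygmt --codspeed
```

Setting `PYGMT_USE_EXTERNAL_DISPLAY="false"` prevents figures from opening in external viewers during the run, which would otherwise stall a headless session.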
3 changes: 3 additions & 0 deletions README.rst
@@ -22,6 +22,9 @@ PyGMT
.. image:: https://codecov.io/gh/GenericMappingTools/pygmt/branch/main/graph/badge.svg?token=78Fu4EWstx
:alt: Test coverage status
:target: https://app.codecov.io/gh/GenericMappingTools/pygmt
.. image:: https://img.shields.io/endpoint?url=https://codspeed.io/badge.json
:alt: CodSpeed Performance Benchmarks
:target: https://codspeed.io/GenericMappingTools/pygmt
.. image:: https://img.shields.io/pypi/pyversions/pygmt.svg?style=flat-square
:alt: Compatible Python versions.
:target: https://pypi.python.org/pypi/pygmt
2 changes: 2 additions & 0 deletions doc/maintenance.md
@@ -104,6 +104,8 @@ workflow files for more details.
12. `format-command.yml`: Format the code using slash commands
13. `dvc-diff.yml`: Report changes in test images
14. `slash-command-dispatch.yml`: Support slash commands in pull requests
15. `benchmarks.yml`: Benchmark the execution speed of tests to track the performance of PyGMT functions


## Continuous Documentation

1 change: 1 addition & 0 deletions pygmt/tests/test_basemap.py
@@ -5,6 +5,7 @@
from pygmt import Figure


@pytest.mark.benchmark
@pytest.mark.mpl_image_compare
def test_basemap():
"""
