Setup Continuous Benchmarking workflow with pytest-codspeed (#2908)
Measure the execution speed of tests to track the performance of PyGMT functions over time, using pytest-codspeed (see https://docs.codspeed.io/benchmarks/python#running-the-benchmarks-in-your-ci). A unit test is decorated with `@pytest.mark.benchmark` to check that the benchmarking works.

* Pin to Python 3.12
* Add shields.io badge for CodSpeed
* Document benchmarks.yml workflow in docs/maintenance.md
* Run benchmarks when a release is published
* Add benchmarks.yml to bump_gmt_checklist.md
* Only benchmark test_basemap for now
* Only run when non-test PyGMT source files and benchmarks.yml are modified

Trigger the benchmark run when files in `pygmt/clib`, `pygmt/datasets`, `pygmt/helpers`, `pygmt/src`, and `pygmt/*.py` are modified (i.e. everything except `pygmt/tests/**`), and also when `.github/workflows/benchmarks.yml` is modified.

---------

Co-authored-by: Dongdong Tian <[email protected]>
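As a rough sketch of the pattern this commit introduces: pytest-codspeed measures any test decorated with `@pytest.mark.benchmark` when the plugin is active in CI. The test name and workload below are hypothetical placeholders, not the actual PyGMT test.

```python
import pytest


# Hypothetical example of a benchmarked test. With pytest-codspeed
# installed and enabled (e.g. `pytest --codspeed`), tests carrying this
# marker have their execution speed measured; without the plugin they
# run as ordinary tests.
@pytest.mark.benchmark
def test_example_workload():
    # Placeholder workload standing in for a PyGMT call such as
    # Figure.basemap(); any deterministic computation works here.
    total = sum(i * i for i in range(1000))
    assert total == 332833500
```

Locally, the marker is harmless: the test runs normally under plain pytest, and CodSpeed only collects timings inside the CI workflow.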