This code lets you set up your own GitHub repository to run pyperformance benchmarks on your own self-hosted GitHub Actions runners.
For an example of the output, see the Faster CPython team's benchmarking results.
Create a new empty repository on GitHub and clone it locally.
Add bench_runner to your `requirements.txt`. Since there are no PyPI releases (yet), you can install it from a tag in the git repo:

```
git+https://github.com/faster-cpython/bench_runner@{VERSION}#egg=bench_runner
```

Replace `{VERSION}` above with the latest version tag of bench_runner.
Create a virtual environment and install your requirements to it, for example:

```shell
python -m venv venv
source venv/bin/activate
python -m pip install -r requirements.txt
```
Provision the machine with the build requirements for CPython and the base requirements for GitHub Actions according to the provisioning instructions.

Then add it to the pool of runners by following GitHub's instructions under Settings -> Actions -> Runners -> Add New Runner.

The default responses to all of the configuration questions should be fine, but pay careful attention to set the labels correctly. Each runner must have the following labels:
- One of `linux`, `macos`, or `windows`.
- `bare-metal` (to distinguish it from VMs in the cloud).
- `$os-$arch-$nickname`, where:
  - `$os` is one of `linux`, `macos`, `windows`
  - `$arch` is one of `x86_64` or `arm64` (others may be supported in the future)
  - `$nickname` is a unique nickname for the runner.
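As a sketch of how the three labels above combine, here is the comma-separated label set you would enter when the runner configuration script prompts for labels. The nickname `pyperf` is a hypothetical example, not a requirement:

```shell
# Assemble the required label set for a hypothetical
# Linux x86_64 runner nicknamed "pyperf".
os=linux
arch=x86_64
nickname=pyperf

# One OS label, the bare-metal label, and the combined identifier.
labels="${os},bare-metal,${os}-${arch}-${nickname}"
echo "$labels"
```

For the values above this prints `linux,bare-metal,linux-x86_64-pyperf`, which is what you would paste at the labels prompt.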
Once the runner is set up, enable it as a service so it will start automatically on boot.
In addition, the metadata about the runner must be added to `runners.ini`, for example:

```ini
[linux]
os = linux
arch = x86_64
hostname = pyperf
```
TODO: Describe the special pystats runner
Run the install script to generate the files that make the GitHub Actions work (from the root of your repo):

```shell
python -m bench_runner install
```

This will create some files in `.github/workflows`, as well as some configuration files at the root of your repo.
Commit them to your repository, and push up to GitHub:

```shell
git commit -a -m "Initial commit"
git push origin main
```
Instructions for running a benchmarking action are already in the `README.md` of your repo.
Look there and give it a try!
By default, all of the benchmarks in pyperformance and python-macrobenchmarks are run. To configure the set of benchmarks, or to add more, edit the `benchmarks.manifest` file.
The format of this file is documented with pyperformance.
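As a rough sketch (the authoritative format is in the pyperformance documentation), a minimal manifest that simply pulls in pyperformance's default benchmark set uses an `[includes]` section:

```
[includes]
<default>
```

Additional benchmarks are then listed under a `[benchmarks]` section; see the pyperformance documentation on custom benchmarks for the exact column layout.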
All benchmarked commits are automatically compared to key "reference" versions, as well as to their merge base, if available.
The reference versions are defined in the `bases.txt` file.
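For example, a `bases.txt` naming two reference versions, one per line (the specific version tags here are only illustrative):

```
3.11.0
3.12.0
```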
Don't forget to actually collect benchmark data for those tags -- it doesn't happen automatically.
TODO: The longitudinal plot isn't currently configurable.
To learn how to hack on this project, see the full developer documentation.