Optimise anything (but mainly large-scale biophysical models) using a Gaussian Processes surrogate.
pyGPSO
is a Python package for Gaussian-Processes Surrogate Optimisation. GPSO is a Bayesian optimisation method designed to cope with costly, high-dimensional, non-convex problems. It alternates between exploration of the parameter space (using a partition tree) and exploitation of the gathered knowledge (by training a surrogate function using Gaussian Processes regression). The motivation for this method stems from the optimisation of large-scale biophysical models in neuroscience, where the modelled data should match the experimental data. This package leverages GPFlow
for training and predicting with the Gaussian Processes surrogate.
This is a port of the original MATLAB implementation by the paper's author.
Reference: Hadida, J., Sotiropoulos, S. N., Abeysuriya, R. G., Woolrich, M. W., & Jbabdi, S. (2018). Bayesian Optimisation of Large-Scale Biophysical Networks. NeuroImage, 174, 219-236.
Comparison of the GPR surrogate and the true objective function after optimisation.
Example of ternary partition tree after optimisation.
GPSO
package is tested and should run without any problems on Python versions 3.6 -- 3.9.
Installing pytables
might give you HDF5 errors. If this is the case, please do
brew install hdf5 c-blosc
and all should work like a charm afterwards.
For those who want to optimise right away just
pip install pygpso
and go ahead! Make sure to check example notebooks in the examples directory to see how it works and what it can do. Or, alternatively, you can run interactive notebooks in binder:
If you prefer to install packages properly, start by cloning (or forking) this repository, then install all the dependencies, and finally install the package itself:
git clone https://github.com/jajcayn/pygpso
cd pygpso/
pip install -r requirements.txt
# optionally, but recommended
pip install -r requirements_optional.txt
pip install .
Don't forget to test!
pytest
A guide on how to optimise and what can be done using this package is given as jupyter notebooks in the examples directory. You can also try them out live thanks to binder: .
The basic idea is to initialise the parameter space in which the optimisation is to be run and then iteratively dig deeper and evaluate the objective function when necessary:
from gpso import ParameterSpace, GPSOptimiser

def objective_function(params):
    # params is passed as a list or tuple of parameter values
    x, y = params
    # ... some hardcore computation; here just a toy example ...
    score = -(x ** 2 + y ** 2)
    # must return a single float
    return score

# bounds of the parameters we will optimise
x_bounds = [-3, 5]
y_bounds = [-3, 3]
space = ParameterSpace(parameter_names=["x", "y"], parameter_bounds=[x_bounds, y_bounds])

opt = GPSOptimiser(parameter_space=space, n_workers=4)
best_point = opt.run(objective_function)
The package also offers plotting functions for visualising the results. Again, those are documented and showcased in the examples directory.
Gaussian Processes regression uses normalised coordinates within the bounds [0, 1]. All normalisation and de-normalisation is done automatically; however, when you want to call predict_y
on the GPR model, do not forget to pass normalised coordinates. The normalisation is handled by sklearn.MinMaxScaler
, and the ParameterSpace
instance offers convenience functions for this: ParameterSpace.normalise_coords(orig_coords)
and ParameterSpace.denormalise_coords(normed_coords)
.
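As a rough illustration of what that normalisation does (a minimal sketch; the real package delegates to sklearn.MinMaxScaler, and the two helpers below are hypothetical stand-ins for the ParameterSpace methods):

```python
def normalise_coords(coords, bounds):
    # Map each coordinate from its [lo, hi] bound to [0, 1],
    # mirroring what sklearn's MinMaxScaler does per dimension.
    return [(c - lo) / (hi - lo) for c, (lo, hi) in zip(coords, bounds)]

def denormalise_coords(normed, bounds):
    # Inverse mapping: from [0, 1] back to the original parameter bounds.
    return [lo + n * (hi - lo) for n, (lo, hi) in zip(normed, bounds)]

bounds = [(-3, 5), (-3, 3)]  # x and y bounds from the example above
print(normalise_coords([1.0, 0.0], bounds))    # [0.5, 0.5]
print(denormalise_coords([0.5, 0.5], bounds))  # [1.0, 0.0]
```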
Plotting of the ternary tree (gpso.plotting.plot_ternary_tree()
) requires the igraph
package, whose layout function is used. If you want to see the resulting beautiful tree, please install python-igraph
.
Support for the saver (for saving model runs, e.g. timeseries along with the optimisation) is provided by PyTables
(and pandas
if you're saving results to DataFrame
s).
- saving of the GP surrogate is currently hacky, as
GPFlow
supports saving a model only for future prediction; as far as I know, saved models cannot be trained further, since the information on kernels and mean functions is not saved (only the trained weights in the computational graph). Thus, pyGPSO
still relies on hacky saving to pkl
files and recreating the kernels and mean functions on the go when loading from a saved state.
When you encounter a bug or have any idea for an improvement, please open an issue and/or contact me.
When using this package in publications, please cite the original paper by Jonathan Hadida and colleagues for the methodology as
@article{hadida2018bayesian,
title={Bayesian Optimisation of Large-Scale Biophysical Networks},
author={Hadida, Jonathan and Sotiropoulos, Stamatios N and Abeysuriya, Romesh G and Woolrich, Mark W and Jbabdi, Saad},
journal={Neuroimage},
volume={174},
pages={219--236},
year={2018},
publisher={Elsevier}
}
and acknowledge the use of this software via its DOI: . After clicking, you will see the citation data.