
Metamodelling sampler for different experiments #93

Open
joergfunger opened this issue Sep 20, 2022 · 4 comments

Comments

@joergfunger (Member) commented Sep 20, 2022

I was going through the code that generates the LatinHyperCube samples. I realized that apart from the forward model and the model parameters, the experiment is also an input to the forward model (providing e.g. the environmental temperature or the load position). IMO, there are two options for creating a metamodel: either one metamodel per combination of forward model and experiment (so that e.g. the load position belonging to the experiment is assumed to be constant), or a single response surface that covers both the forward model parameters and the experimental conditions (thus increasing the dimension of the metamodel). I guess the first approach is computationally a little more efficient, and for the second approach it might also be difficult to determine which parameters actually vary (the environmental temperature stored in the experiment and passed to the forward model might in fact be identical in all experiments). However, this would mean that the surrogate requires both the forward model and the experiment as input. @JanKoune and @danielandresarcones, what do you think?
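
To make the two options concrete, here is a minimal, self-contained sketch (plain numpy/scipy/scikit-learn with made-up names, not this library's actual API; the forward model and the experiment dict are purely illustrative):

  # Illustrative sketch only (hypothetical names, not this library's API):
  # option 1 trains one surrogate per experiment with the experimental
  # conditions held fixed; option 2 trains a single surrogate over the
  # joint space of model parameters and experimental conditions.
  import numpy as np
  from scipy.stats import qmc
  from sklearn.gaussian_process import GaussianProcessRegressor

  def forward_model(theta, load_position):
      # hypothetical forward model: the response depends on the model
      # parameters theta and on an experimental condition (load position)
      return np.sin(theta[0] * load_position) + theta[1]

  experiments = {"Exp1": {"load_position": 0.3}, "Exp2": {"load_position": 0.7}}
  n_train, n_params = 50, 2

  # option 1: one surrogate per (forward model, experiment) combination
  lhs = qmc.LatinHypercube(d=n_params)
  surrogates = {}
  for name, exp in experiments.items():
      X = lhs.random(n_train)  # LHC samples of the model parameters only
      y = np.array([forward_model(x, exp["load_position"]) for x in X])
      surrogates[name] = GaussianProcessRegressor().fit(X, y)

  # option 2: a single surrogate whose input space also contains the
  # experimental condition, i.e. one extra dimension per varying condition
  lhs_joint = qmc.LatinHypercube(d=n_params + 1)
  X_joint = lhs_joint.random(n_train)  # columns: theta_0, theta_1, load_position
  y_joint = np.array([forward_model(x[:n_params], x[-1]) for x in X_joint])
  joint_surrogate = GaussianProcessRegressor().fit(X_joint, y_joint)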

@danielandresarcones (Collaborator)

I would opt for the first approach. If the surrogate model already receives the forward model, in the end one only needs to specify for which experiment (or set of experiments) it has to be built. An extra method could be implemented to add the set of experimental conditions as new dimensions, but I don't think this should be the default behaviour.

@joergfunger (Member Author)

Where do we then place the loop over the experiments (resulting in a list/dict of surrogates)? In the inference problem, the experiments are already stored. Should we leave that to the user (who would manually loop over the experiments), or should the adaptive sampling class return a dict of samples to create the different forward models, based on the inference problem passed as input?

@danielandresarcones (Collaborator)

Leaving it to the user would probably be the best. Looping automatically over the experiments to create a dict of surrogates may be too inefficient, especially if each surrogate takes a long time to train or if the experiments don't significantly affect the surrogate.
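
A user-side loop could then be as simple as the following sketch (again with assumed names: build_surrogate stands in for whatever factory the library ends up providing, and problem.experiments for the dict of experiments stored in the inference problem):

  # hypothetical user-side loop: the user decides which experiments get
  # their own surrogate, instead of the library looping automatically
  surrogates = {
      name: build_surrogate(linear_model, experiment=exp)  # assumed factory
      for name, exp in problem.experiments.items()         # assumed dict
  }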

@joergfunger (Member Author)

Just looking at the examples, e.g. here, the forward models are actually added for a specific experiment (the experiment is the relevant entity that is also looped over in the definition of the likelihoods, which then know the corresponding forward model). Thus, it might actually be a good idea to attach the surrogate to the experiment, which means that even for a single forward model with multiple experiments we end up with multiple surrogate models (one per experiment). So instead of

  problem.add_forward_model(linear_model, experiments="TestSeries_linear")

we would first create a surrogate

  surrogate = metamodel.gaussian_process(linear_model, experiments="TestSeries_linear", samples=samples_of_model_parameters, **other_hyper_parameters)
  surrogate.fit(...)
  problem.add_forward_model(surrogate, experiments="TestSeries_linear")
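
For illustration, a wrapper along these lines could make the surrogate a drop-in replacement for the forward model (purely a sketch with assumed names; in particular, the response() interface is only what the examples suggest the forward models expose, not a confirmed signature):

  # hypothetical sketch: a surrogate that mimics the forward model's
  # interface so it can be registered via problem.add_forward_model
  import numpy as np
  from sklearn.gaussian_process import GaussianProcessRegressor

  class GaussianProcessSurrogate:
      def __init__(self, forward_model, experiments):
          self.forward_model = forward_model
          self.experiments = experiments  # the experiment(s) it is tied to
          self.gp = GaussianProcessRegressor()

      def fit(self, parameter_samples, responses):
          # train on (parameter sample, forward model response) pairs
          self.gp.fit(parameter_samples, responses)

      def response(self, parameter_values):
          # assumed to match whatever interface the forward model exposes
          return self.gp.predict(np.atleast_2d(parameter_values))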
