
This project is a collection of different examples for model calibration and a discussion of how to connect this calibration process with real data and parameterized simulation problems. This also requires structuring the information that is written back into the database and defining metadata so that the information can be retrieved later.

Model calibration competency questions

In order to decide what metadata we would have to store, we should think about competency questions and what potential answers would look like.

  • Return all data sets with more than 10 model parameters and 5 hyperparameters that were calibrated with a variational Bayesian approach implemented in TensorFlow, using a FEniCS forward model, in 2020, by KIT. (A possible query is sketched in the first code example after this list.)

  • Return all data in the area of constitutive models for concrete that are based on at least 20 different experimental data sets from different labs.

  • Point me to all specific data that was used in the calibration of the following model parameters.

    • Unger: We should clarify what we mean by data. What exactly do we expect here?

  • How was the forward model chosen, and what guided the decision?

    • Unger: I would argue that this is not part of the database, because I don't see a general way of searching for this kind of reasoning via metadata.
  • Who performed the model calibration and what is the underlying experimental process (e.g. also raw data, owner/agent, instances of that)?

  • (And, of course, for the evaluation of models and impacts/insights beyond them:) What were the resulting parameters, the resulting function values, data ranges, and the respective uncertainties?

  • What model (type) did you use for identifying the parameters? What are the attributes of such models?

  • What is the applicability range of the model (with the model parameters), including material class and temperature ranges? This should allow users to select the best model for their application.

  • Uncertainty revisited: If a model reaches a 1 % uncertainty for one particular aluminium alloy, this doesn't mean it will work equally well for a different alloy, or for the same alloy after a heat treatment. How do we ensure that users don't misinterpret any given uncertainty values?

    • Unger: I would say this boils down to the question of how we define our material (in a potentially hierarchical way, inorganic/metal/aluminium/...) and what data is used for the calibration, so this has to be part of the calibration information (the type of material used for the calibration, which could be metals in general or a specific alloy).
  • [hof] What models are available for a certain topic (e.g. diffusion, mechanical, fluid transport, ...)?

    • [hof] What data (obtained from these models) are available (including further selections like time, organisation, ...)?
    • [hof] What method was used for the model calibration?
    • [hof] What experimental/synthetic data was used for calibration?
    • [hof] Are these experimental/synthetic data sets available (e.g. in the public domain)?
    • [hof] How good is the calibration (metrics)?
  • How was the model set up (link to a specific WORKFLOW), including the procedure (deterministic opt, sampling method, Bayes, etc.)?

  • [rfalkenb] How exactly is the residual that goes into the optimiser defined? (A possible form is sketched in the second code example after this list.)

  • [rfalkenb] How were specific data points flagged to be neglected or overweighted?

  • [rfalkenb] What are the stopping criteria (residual tolerance / maximum number of iterations)?
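
A minimal sketch of how the retrieval-style questions above could be answered, assuming a hypothetical flat metadata record per calibration run. The field names (`n_model_parameters`, `calibration_software`, ...) and the query helper are illustrative assumptions, not an agreed schema; in the actual project a graph or ontology representation would likely replace this flat record, the sketch only shows which attributes the first competency question requires:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CalibrationRecord:
    """Hypothetical flat metadata record describing a single calibration run."""
    n_model_parameters: int      # number of calibrated model parameters
    n_hyperparameters: int       # number of hyperparameters of the inference method
    calibration_method: str      # e.g. "variational Bayes", "MCMC", "deterministic optimization"
    calibration_software: str    # e.g. "tensorflow"
    forward_model_software: str  # e.g. "fenics"
    year: int                    # year the calibration was performed
    institution: str             # e.g. "KIT"
    experimental_datasets: List[str] = field(default_factory=list)  # IDs/DOIs of the experimental data used

def first_competency_question(records):
    """All records with more than 10 model parameters and 5 hyperparameters, calibrated
    with variational Bayes in TensorFlow against a FEniCS forward model, in 2020, by KIT."""
    return [
        r for r in records
        if r.n_model_parameters > 10
        and r.n_hyperparameters == 5
        and r.calibration_method == "variational Bayes"
        and r.calibration_software == "tensorflow"
        and r.forward_model_software == "fenics"
        and r.year == 2020
        and r.institution == "KIT"
    ]

# usage with a single made-up record
records = [
    CalibrationRecord(n_model_parameters=12, n_hyperparameters=5,
                      calibration_method="variational Bayes",
                      calibration_software="tensorflow",
                      forward_model_software="fenics",
                      year=2020, institution="KIT"),
]
print(first_competency_question(records))  # -> [the record above]
```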
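
The last three questions concern the optimisation setup itself. As an illustration only (the page does not prescribe an optimiser or residual definition), the sketch below uses SciPy's `least_squares` with a hypothetical weighted residual r_i = w_i * (f(x_i, theta) - y_i): a weight of 0 neglects a data point, a weight larger than 1 overweights it, and the stopping criteria are made explicit through the tolerance and maximum-evaluation arguments:

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(x, theta):
    """Placeholder forward model: a straight line y = theta[0] * x + theta[1]."""
    return theta[0] * x + theta[1]

def residual(theta, x_data, y_data, weights):
    """Residual vector handed to the optimiser: r_i = w_i * (f(x_i, theta) - y_i)."""
    return weights * (forward_model(x_data, theta) - y_data)

# synthetic data for the usage example
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + 0.01 * rng.standard_normal(x.size)

w = np.ones_like(x)
w[0] = 0.0     # neglect the first data point
w[-1] = 2.0    # overweight the last data point

# stopping criteria: ftol/xtol control the residual/step tolerances,
# max_nfev bounds the number of function evaluations (a maxIter-type criterion)
result = least_squares(residual, x0=[0.0, 0.0], args=(x, y, w),
                       ftol=1e-10, xtol=1e-10, max_nfev=200)
print(result.x, result.cost, result.status)
```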
