Tags: JuliaAI/MLJTuning.jl
## MLJTuning v0.8.8

[Diff since v0.8.7](v0.8.7...v0.8.8)

- Change the default `logger` from `nothing` to `MLJBase.default_logger()` (which can be reset with `MLJBase.default_logger(new_logger)`) (#221)

**Merged pull requests:**

- Make the global `default_logger()` the default `logger` in `TunedModel(logger=...)` (#221) (@ablaom)
- For a 0.8.8 release (#222) (@ablaom)

**Closed issues:**

- Use measures that are not of the form `f(y, yhat)` but `f(fitresult)` (#202)
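A minimal sketch of the new default-logger behavior, assuming MLJFlow.jl is installed and an MLflow tracking server is running at the URL shown (the model, range, and URL are illustrative placeholders, not part of this release):

```julia
using MLJ, MLJFlow

# Set the global default logger once; `TunedModel` now picks it up
# automatically whenever `logger` is left unspecified.
MLJBase.default_logger(MLJFlow.Logger("http://localhost:5000/api"))

Tree = @load DecisionTreeClassifier pkg=DecisionTree
r = range(Tree(), :max_depth, lower=1, upper=5)

# No `logger=...` keyword needed: evaluations go to the default logger.
tuned = TunedModel(model=Tree(), tuning=Grid(), range=r, measure=log_loss)
```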
## MLJTuning v0.8.6

[Diff since v0.8.5](v0.8.5...v0.8.6)

- (**new feature**) Add a `logger` option to the `TunedModel` wrapper, for logging internal model evaluations to an ML tracking platform, such as MLflow via [MLJFlow.jl](https://github.com/JuliaAI/MLJFlow.jl). The default is `nothing`, for no logging. The logger must support asynchronous messaging if `TunedModel(model, ...)` is specified with the option `acceleration=CPUThreads()` or `acceleration=CPUProcesses()`. Currently, `CPU1()` (the default) is supported by MLJFlow.jl's loggers, while asynchronous support is a work in progress; see JuliaAI/MLJFlow.jl#41 (#193)

**Merged pull requests:**

- Adding loggers into TunedModels (#193) (@pebeto)
- For a 0.8.6 release (#218) (@ablaom)

**Closed issues:**

- Broken link (404) for each dependent link of the site https://alan-turing-institute.github.io (#217)
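The `logger` option introduced above can be sketched as follows; the tracking-server URL, model, and data are illustrative placeholders, and `MLJFlow.Logger` is assumed to be available from MLJFlow.jl:

```julia
using MLJ, MLJFlow

logger = MLJFlow.Logger("http://localhost:5000/api")

Tree = @load DecisionTreeClassifier pkg=DecisionTree
r = range(Tree(), :max_depth, lower=1, upper=5)

# `acceleration=CPU1()` is used because MLJFlow's loggers do not yet
# support the asynchronous messaging CPUThreads()/CPUProcesses() require.
tuned = TunedModel(
    model=Tree(),
    tuning=Grid(),
    range=r,
    measure=log_loss,
    logger=logger,
    acceleration=CPU1(),
)

X, y = @load_iris
mach = machine(tuned, X, y) |> fit!
```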
## MLJTuning v0.8.5

[Diff since v0.8.4](v0.8.4...v0.8.5)

- Write the `PerformanceEvaluation` objects computed for each model (hyper-parameter set) to the history, or write compact versions of the same (`CompactPerformanceEvaluation` objects) by passing `TunedModel(...)` the new option `compact_history=true`. The evaluation objects are accessed like this: `evaluation = report(mach).history[index].evaluation`, where `mach` is a machine associated with the `TunedModel` instance. For more on the differences between `PerformanceEvaluation` and `CompactPerformanceEvaluation` objects, refer to their document strings. (In MLJTuning 0.5.3 and 0.5.4 an experimental feature already introduced `PerformanceEvaluation` objects to the history, but with no option to write the compact form. In the current release, compact objects are written *by default*.)

**Merged pull requests:**

- Create option to write `CompactPerformanceEvaluation` objects to history (#215) (@ablaom)
- For a 0.8.5 release (#216) (@ablaom)
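Accessing the stored evaluation objects might look like this; the model, range, and data are illustrative placeholders:

```julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
r = range(Tree(), :max_depth, lower=1, upper=5)

# `compact_history=false` requests full `PerformanceEvaluation` objects;
# with the default (`true`), compact versions are written instead.
tuned = TunedModel(model=Tree(), tuning=Grid(), range=r,
                   measure=log_loss, compact_history=false)

X, y = @load_iris
mach = machine(tuned, X, y) |> fit!

# Per-model evaluations live in the history:
evaluation = report(mach).history[1].evaluation
```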
## MLJTuning v0.8.4

[Diff since v0.8.3](v0.8.3...v0.8.4)

- (**enhancement**) Implement feature importances for tuned models by exposing the feature importances of the optimal atomic model (#213)

**Merged pull requests:**

- add feature importances support for tuned models (#213) (@OkonSamuel)
- For a 0.8.4 release (#214) (@ablaom)
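A sketch of the enhancement above, assuming the atomic model itself supports feature importances (a decision tree is used here purely as an example):

```julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
r = range(Tree(), :max_depth, lower=1, upper=5)
tuned = TunedModel(model=Tree(), tuning=Grid(), range=r, measure=log_loss)

X, y = @load_iris
mach = machine(tuned, X, y) |> fit!

# Returns the feature importances of the optimal atomic model:
feature_importances(mach)
```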
## MLJTuning v0.8.0

[Diff since v0.7.4](v0.7.4...v0.8.0)

- (**breaking**) Bump MLJBase compatibility to version 1. When using MLJTuning without MLJ, users may need to explicitly import StatisticalMeasures.jl. See also the [MLJBase 1.0 migration guide](https://github.com/alan-turing-institute/MLJ.jl/blob/measure/docs/src/performance_measures.md#migration-guide-for-changes-to-measures-in-mljbase-10) (#194)

**Merged pull requests:**

- Get rid of test/Project.toml (#190) (@ablaom)
- Fix some tests that use deprecated MLJBase code (#191) (@ablaom)
- Update code and tests to address migration of measures MLJBase -> StatisticalMeasures (#194) (@ablaom)
- For a 0.8 release (#195) (@ablaom)
- add compat for julia (#196) (@ablaom)

**Closed issues:**

- Are GridSearch using the update! method? (#82)
- Improper loss functions silently accepted in training a `TunedModel` (#184)
- Typo in error message for `TunedModel` missing arguments (#188)
- Skipping parts of search space? (#189)
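For users of MLJTuning without the MLJ umbrella package, the measure migration above means measures such as `log_loss` must now be brought in explicitly; a minimal sketch:

```julia
using MLJBase, MLJTuning
using StatisticalMeasures  # measures (e.g. `log_loss`, `rms`) now live here,
                           # not in MLJBase, as of MLJBase 1.0
```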