
Add parameter to select model name #563

Closed

Conversation

@Kshitij68 Kshitij68 commented Sep 2, 2022

#423
This introductory PR introduces the basic structure of the extension I propose. I think further testing can be done as well.

@pplonski (Contributor) commented Sep 2, 2022

Thanks @Kshitij68. I would leave _best_model in base_automl.py — the information about the best model will be needed anyway, and we shouldn't lose it.

The basic structure of the extension that you propose looks good. I'm very interested in the full structure of the extension. Can't wait for it!

@Kshitij68 Kshitij68 closed this Sep 9, 2022
@Kshitij68 (Author) commented
I'm trying to add more test cases.
The only way I see to assert that the correct model is used is to assert the log loss or accuracy and compare it with the persisted values.
However, this approach doesn't work, as the log_loss doesn't match between the training and prediction datasets.
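(A minimal scikit-learn sketch, not this project's test code, illustrating why that comparison fails: log_loss computed on the data a model was fit on generally differs from log_loss on held-out prediction data, so an exact-match assertion between the two is brittle. All names here are illustrative.)

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Synthetic binary classification data, split into fit and held-out sets
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# log_loss on the training set vs. on held-out prediction data
train_loss = log_loss(y_train, model.predict_proba(X_train))
test_loss = log_loss(y_test, model.predict_proba(X_test))

# The two losses differ, so persisting one and asserting equality
# against the other will fail; asserting the selected model's
# identity (e.g. its name) would be a more robust check.
print(train_loss, test_loss)
```

Comparing with a tolerance, or asserting on the selected model's name rather than a recomputed metric, avoids this mismatch.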
