This project collects resources for evaluating and interpreting machine learning models.
5 Classification Evaluation Metrics Every Data Scientist Must Know (article) --- https://www.kdnuggets.com/2019/10/5-classification-evaluation-metrics-every-data-scientist-must-know.html
Evaluating Machine Learning Models (book) by Alice Zheng --- https://www.pindex.com/uploads/post_docs/evaluating-machine-learning-models(PINDEX-DOC-6950).pdf
Interpretable Machine Learning (book) by Christoph Molnar --- https://christophm.github.io/interpretable-ml-book/
AI Explainability 360 Open Source Toolkit --- https://aix360.mybluemix.net/
Using ML for Operational Decisions (article) --- https://towardsdatascience.com/solving-machine-learnings-last-mile-problem-for-operational-decisions-65e9f44d82b
LIME: explaining the predictions of any machine learning classifier --- https://github.com/marcotcr/lime
ELI5 is a Python library for visualizing and debugging various machine learning models --- https://eli5.readthedocs.io/en/latest/
Example article using ELI5 --- https://towardsdatascience.com/adding-interpretability-to-multiclass-text-classification-models-c44864e8a13b
InterpretML is an open-source Python package for training interpretable models and explaining black-box systems --- https://github.com/microsoft/interpret
SHAP: a unified approach to explaining the output of any machine learning model --- https://github.com/slundberg/shap
Alibi is an open-source Python library for machine learning model inspection and interpretation --- https://docs.seldon.io/projects/alibi/en/stable/