---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, echo = FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>",
fig.path = "man/figures/README-"
)
```
# lime <img src="man/figures/logo.png" width="131px" height="140px" align="right" style="padding-left:10px;background-color:white;" />
<!-- badges: start -->
[![R-CMD-check](https://github.com/thomasp85/lime/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/thomasp85/lime/actions/workflows/R-CMD-check.yaml)
[![Codecov test coverage](https://codecov.io/gh/thomasp85/lime/branch/main/graph/badge.svg)](https://app.codecov.io/gh/thomasp85/lime?branch=main)
[![CRAN_Release_Badge](https://www.r-pkg.org/badges/version-ago/lime)](https://CRAN.R-project.org/package=lime)
[![CRAN_Download_Badge](https://cranlogs.r-pkg.org/badges/lime)](https://CRAN.R-project.org/package=lime)
<!-- badges: end -->
> There once was a package called lime,
>
> Whose models were simply sublime,
>
> It gave explanations for their variations,
>
> one observation at a time.

*lime-rick by Mara Averick*
* * *
*This is an R port of the Python lime package (https://github.com/marcotcr/lime)
developed by the authors of the lime (Local Interpretable Model-agnostic
Explanations) approach for black-box model explanations. All credit for the
invention of the approach goes to the original developers.*
The purpose of `lime` is to explain the predictions of black-box classifiers:
for any given prediction and any given classifier, it determines a small set of
features in the original data that drove the outcome of the prediction. To
learn more about the methodology of `lime`, read the
[paper](https://arxiv.org/abs/1602.04938) and visit the repository of the
[original implementation](https://github.com/marcotcr/lime).
The `lime` package for R does not aim to be a line-by-line port of its Python
counterpart. Instead, it takes the ideas laid out in the original code and
implements them in an API that is idiomatic to R.
## An example
Out of the box, `lime` supports a wide range of models, e.g. those created with
caret, parsnip, and mlr. Support for other models is easy to add by defining a
`predict_model()` and a `model_type()` method for the model class in question,
as sketched below.
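
As a hedged sketch of what that looks like (the class `my_model` and its
`predict()` behaviour are hypothetical placeholders, not part of `lime`):

```{r, eval=FALSE}
# Tell lime whether objects of class 'my_model' do classification or regression
model_type.my_model <- function(x, ...) {
  'classification'
}

# Return predictions in the format lime expects: for type = 'prob' a
# data.frame with one column of probabilities per class, and for
# type = 'raw' a data.frame with a single 'Response' column
predict_model.my_model <- function(x, newdata, type, ...) {
  res <- predict(x, newdata = newdata, type = 'prob')
  switch(
    type,
    raw = data.frame(
      Response = colnames(res)[apply(res, 1, which.max)],
      stringsAsFactors = FALSE
    ),
    prob = as.data.frame(res, check.names = FALSE)
  )
}
```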
The following shows how a random forest model is trained on the iris data set
and how `lime` is then used to explain a set of new observations:
```{r, message=FALSE, fig.asp=1}
library(caret)
library(lime)
# Split up the data set
iris_test <- iris[1:5, 1:4]
iris_train <- iris[-(1:5), 1:4]
iris_lab <- iris[[5]][-(1:5)]
# Create Random Forest model on iris data
model <- train(iris_train, iris_lab, method = 'rf')
# Create an explainer object
explainer <- lime(iris_train, model)
# Explain new observation
explanation <- explain(iris_test, explainer, n_labels = 1, n_features = 2)
# The output is provided in a consistent tabular format and includes the
# output from the model.
explanation
# And can be visualised directly
plot_features(explanation)
```
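
For an overview across many observations at once, the facetted heatmap produced
by `plot_explanations()` can be more compact than `plot_features()`. Reusing
the `explanation` object from above:

```{r, eval=FALSE}
# Heatmap overview of all explanations in a single plot
plot_explanations(explanation)
```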
`lime` also supports explaining image and text models. For image explanations,
the relevant areas in an image can be highlighted:
```{r, fig.asp=0.5}
explanation <- .load_image_example()
plot_image_explanation(explanation)
```
Here we see that the second most probable class is hardly plausible, but it
stems from the model picking up waxy areas of the produce and interpreting them
as a wax-like surface.
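
The rendering of image explanations can be tweaked. As a hedged sketch (the
argument values below are illustrative; see `?plot_image_explanation` for the
authoritative list):

```{r, eval=FALSE}
# Fill the superpixels as blocks instead of outlines, show areas that
# speak against the label, and lower the inclusion threshold
plot_image_explanation(explanation, display = 'block', threshold = 0.01,
                       show_negative = TRUE)
```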
For text, the explanation can be shown by highlighting the important words. The
package even includes a `shiny` application for interactively exploring text
models:
![interactive text explainer](man/figures/shine_text_explanations.gif)
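
As a hedged sketch of the text workflow (`text_model`, `train_text`,
`new_text`, and the `to_matrix()` preprocessing function are hypothetical
stand-ins for your own classifier, data, and document-term representation):

```{r, eval=FALSE}
# The explainer needs the raw training text plus a function that converts
# character vectors into the representation the model was trained on
explainer <- lime(train_text, text_model, preprocess = to_matrix)
explanation <- explain(new_text, explainer, n_labels = 1, n_features = 3)

# Static highlighting of the influential words
plot_text_explanations(explanation)

# Or launch the bundled shiny app for interactive exploration
interactive_text_explanations(explainer)
```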
## Installation
`lime` is available on CRAN and can be installed using the standard approach:
```{r, eval=FALSE}
install.packages('lime')
```
To get the development version, install from GitHub instead:
```{r, eval=FALSE}
# install.packages('devtools')
devtools::install_github('thomasp85/lime')
```
## Code of Conduct
Please note that the 'lime' project is released with a
[Contributor Code of Conduct](https://lime.data-imaginist.com/CODE_OF_CONDUCT.html).
By contributing to this project, you agree to abide by its terms.