Related to #2114: reformat docstrings in the 'metrics' folder #2221

Merged: 5 commits, Sep 28, 2021
41 changes: 20 additions & 21 deletions ignite/metrics/accumulation.py
@@ -99,18 +99,6 @@ class Average(VariableAccumulation):
For input `x` being an ND `torch.Tensor` with N > 1, the first dimension is seen as the number of samples and
is summed up and added to the accumulator: `accumulator += x.sum(dim=0)`

Examples:

.. code-block:: python

evaluator = ...

custom_var_mean = Average(output_transform=lambda output: output['custom_var'])
custom_var_mean.attach(evaluator, 'mean_custom_var')

state = evaluator.run(dataset)
# state.metrics['mean_custom_var'] -> average of output['custom_var']

Args:
output_transform: a callable that is used to transform the
:class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
@@ -119,6 +107,17 @@ class Average(VariableAccumulation):
device: specifies which device updates are accumulated on. Setting the metric's
device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
default, CPU.

Examples:
.. code-block:: python

evaluator = ...

custom_var_mean = Average(output_transform=lambda output: output['custom_var'])
custom_var_mean.attach(evaluator, 'mean_custom_var')

state = evaluator.run(dataset)
# state.metrics['mean_custom_var'] -> average of output['custom_var']
"""

def __init__(
@@ -147,6 +146,15 @@ class GeometricAverage(VariableAccumulation):
- ``update`` must receive output of the form `x`.
- `x` can be a positive number or a positive `torch.Tensor`, such that ``torch.log(x)`` is not `nan`.

Args:
output_transform: a callable that is used to transform the
:class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
device: specifies which device updates are accumulated on. Setting the metric's
device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
default, CPU.

Note:

Number of samples is updated following the rule:
@@ -158,15 +166,6 @@ class GeometricAverage(VariableAccumulation):
For input `x` being an ND `torch.Tensor` with N > 1, the first dimension is seen as the number of samples and
is aggregated and added to the accumulator: `accumulator *= prod(x, dim=0)`

Args:
output_transform: a callable that is used to transform the
:class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
device: specifies which device updates are accumulated on. Setting the metric's
device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
default, CPU.

"""

def __init__(
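For reference, the GeometricAverage hunk above documents the accumulation rule but its example block falls outside the diff context. A minimal usage sketch, assuming an Engine whose process_function emits a positive value under a hypothetical ``custom_var`` key:

.. code-block:: python

    import torch
    from ignite.engine import Engine
    from ignite.metrics import GeometricAverage

    # Hypothetical evaluator; the output value must stay positive so torch.log(x) is not nan
    def process_function(engine, batch):
        return {"custom_var": torch.rand(1).item() + 0.1}

    evaluator = Engine(process_function)

    geom_mean = GeometricAverage(output_transform=lambda output: output["custom_var"])
    geom_mean.attach(evaluator, "geom_custom_var")

    state = evaluator.run([0] * 10)  # dummy data, 10 iterations
    # state.metrics['geom_custom_var'] -> geometric average of output['custom_var']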
2 changes: 0 additions & 2 deletions ignite/metrics/accuracy.py
@@ -117,7 +117,6 @@ def thresholded_output_transform(output):

binary_accuracy = Accuracy(thresholded_output_transform)


Args:
output_transform: a callable that is used to transform the
:class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
@@ -127,7 +126,6 @@ def thresholded_output_transform(output):
device: specifies which device updates are accumulated on. Setting the metric's
device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
default, CPU.

"""

def __init__(
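The ``thresholded_output_transform`` referenced in the Accuracy hunk above is only named in the diff context. A sketch of what such a transform typically looks like, rounding probabilities to hard 0/1 predictions before Accuracy compares them with the targets:

.. code-block:: python

    import torch
    from ignite.metrics import Accuracy

    def thresholded_output_transform(output):
        # binarize probabilities so Accuracy receives hard predictions
        y_pred, y = output
        y_pred = torch.round(y_pred)
        return y_pred, y

    binary_accuracy = Accuracy(output_transform=thresholded_output_transform)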
56 changes: 28 additions & 28 deletions ignite/metrics/classification_report.py
@@ -36,36 +36,36 @@ def ClassificationReport(
device: optional device specification for internal storage.
labels: Optional list of label indices to include in the report

.. code-block:: python
Examples:
.. code-block:: python

def process_function(engine, batch):
# ...
return y_pred, y

def process_function(engine, batch):
# ...
return y_pred, y

engine = Engine(process_function)
metric = ClassificationReport()
metric.attach(engine, "cr")
engine.run...
res = engine.state.metrics["cr"]
# result should be like
{
"0": {
"precision": 0.4891304347826087,
"recall": 0.5056179775280899,
"f1-score": 0.497237569060773
},
"1": {
"precision": 0.5157232704402516,
"recall": 0.4992389649923896,
"f1-score": 0.507347254447022
},
"macro avg": {
"precision": 0.5024268526114302,
"recall": 0.5024284712602398,
"f1-score": 0.5022924117538975
}
}
engine = Engine(process_function)
metric = ClassificationReport()
metric.attach(engine, "cr")
engine.run...
res = engine.state.metrics["cr"]
# result should be like
{
"0": {
"precision": 0.4891304347826087,
"recall": 0.5056179775280899,
"f1-score": 0.497237569060773
},
"1": {
"precision": 0.5157232704402516,
"recall": 0.4992389649923896,
"f1-score": 0.507347254447022
},
"macro avg": {
"precision": 0.5024268526114302,
"recall": 0.5024284712602398,
"f1-score": 0.5022924117538975
}
}
"""

# setup all the underlying metrics
54 changes: 24 additions & 30 deletions ignite/metrics/confusion_matrix.py
@@ -35,11 +35,17 @@ class ConfusionMatrix(Metric):
device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
default, CPU.

Note:
The confusion matrix is formatted such that columns are predictions and rows are targets.
For example, if you were to plot the matrix, you could correctly assign to the horizontal axis
the label "predicted values" and to the vertical axis the label "actual values".

Note:
In case of the targets `y` in `(batch_size, ...)` format, target indices between 0 and `num_classes` only
contribute to the confusion matrix and others are neglected. For example, if `num_classes=20` and target index
equal 255 is encountered, then it is filtered out.

Examples:
If you are doing binary classification with a single output unit, you may have to transform your network output,
so that you have one value for each class. E.g. you can transform your network output into a one-hot vector
with:
@@ -60,12 +66,6 @@ def binary_one_hot_output_transform(output):
evaluator = create_supervised_evaluator(
model, metrics=metrics, output_transform=lambda x, y, y_pred: (y_pred, y)
)

Note:
The confusion matrix is formatted such that columns are predictions and rows are targets.
For example, if you were to plot the matrix, you could correctly assign to the horizontal axis
the label "predicted values" and to the vertical axis the label "actual values".

"""

def __init__(
@@ -174,16 +174,15 @@ def IoU(cm: ConfusionMatrix, ignore_index: Optional[int] = None) -> MetricsLambda
MetricsLambda

Examples:
.. code-block:: python

.. code-block:: python

train_evaluator = ...
train_evaluator = ...

cm = ConfusionMatrix(num_classes=num_classes)
IoU(cm, ignore_index=0).attach(train_evaluator, 'IoU')
cm = ConfusionMatrix(num_classes=num_classes)
IoU(cm, ignore_index=0).attach(train_evaluator, 'IoU')

state = train_evaluator.run(train_dataset)
# state.metrics['IoU'] -> tensor of shape (num_classes - 1, )
state = train_evaluator.run(train_dataset)
# state.metrics['IoU'] -> tensor of shape (num_classes - 1, )

"""
if not isinstance(cm, ConfusionMatrix):
@@ -225,18 +224,15 @@ def mIoU(cm: ConfusionMatrix, ignore_index: Optional[int] = None) -> MetricsLambda
MetricsLambda

Examples:
.. code-block:: python

.. code-block:: python

train_evaluator = ...

cm = ConfusionMatrix(num_classes=num_classes)
mIoU(cm, ignore_index=0).attach(train_evaluator, 'mean IoU')

state = train_evaluator.run(train_dataset)
# state.metrics['mean IoU'] -> scalar
train_evaluator = ...

cm = ConfusionMatrix(num_classes=num_classes)
mIoU(cm, ignore_index=0).attach(train_evaluator, 'mean IoU')

state = train_evaluator.run(train_dataset)
# state.metrics['mean IoU'] -> scalar
"""
iou = IoU(cm=cm, ignore_index=ignore_index).mean() # type: MetricsLambda
return iou
@@ -345,16 +341,14 @@ def JaccardIndex(cm: ConfusionMatrix, ignore_index: Optional[int] = None) -> MetricsLambda
MetricsLambda

Examples:
.. code-block:: python

.. code-block:: python

train_evaluator = ...

cm = ConfusionMatrix(num_classes=num_classes)
JaccardIndex(cm, ignore_index=0).attach(train_evaluator, 'JaccardIndex')
train_evaluator = ...

state = train_evaluator.run(train_dataset)
# state.metrics['JaccardIndex'] -> tensor of shape (num_classes - 1, )
cm = ConfusionMatrix(num_classes=num_classes)
JaccardIndex(cm, ignore_index=0).attach(train_evaluator, 'JaccardIndex')

state = train_evaluator.run(train_dataset)
# state.metrics['JaccardIndex'] -> tensor of shape (num_classes - 1, )
"""
return IoU(cm, ignore_index)
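The ``binary_one_hot_output_transform`` mentioned in the ConfusionMatrix hunk above is truncated by the diff. A sketch of how such a transform can be written, assuming single-unit logits of shape ``(batch_size,)``:

.. code-block:: python

    import torch
    from ignite.metrics import ConfusionMatrix
    from ignite.utils import to_onehot

    def binary_one_hot_output_transform(output):
        # turn single-unit binary logits into two-class one-hot predictions
        y_pred, y = output
        y_pred = torch.sigmoid(y_pred).round().long()
        y_pred = to_onehot(y_pred, 2)
        y = y.long()
        return y_pred, y

    cm = ConfusionMatrix(num_classes=2, output_transform=binary_one_hot_output_transform)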
1 change: 0 additions & 1 deletion ignite/metrics/frequency.py
@@ -12,7 +12,6 @@ class Frequency(Metric):
"""Provides metrics for the number of examples processed per second.

Examples:

.. code-block:: python

# Compute number of tokens processed
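The Frequency example above is cut off by the diff context. A minimal sketch, assuming a trainer whose process_function reports a hypothetical ``ntokens`` count per batch:

.. code-block:: python

    from ignite.engine import Engine
    from ignite.metrics import Frequency

    def process_function(engine, batch):
        # hypothetical: report how many tokens were processed in this batch
        return {"ntokens": len(batch)}

    trainer = Engine(process_function)

    # after the run, trainer.state.metrics['wps'] holds tokens processed per second
    wps_metric = Frequency(output_transform=lambda output: output["ntokens"])
    wps_metric.attach(trainer, "wps")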
3 changes: 1 addition & 2 deletions ignite/metrics/gan/fid.py
@@ -94,8 +94,7 @@ class FID(_BaseInceptionMetric):
metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
non-blocking. By default, CPU.

Example:

Examples:
.. code-block:: python

import torch
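The FID example above is likewise truncated. A call-pattern sketch, assuming torchvision is installed (the default InceptionV3 feature extractor needs it) and that random 3x299x299 tensors stand in for real images:

.. code-block:: python

    import torch
    from ignite.metrics import FID

    metric = FID()

    # random stand-ins for generated and real images; a real evaluation needs many more samples
    y_pred = torch.rand(10, 3, 299, 299)
    y_true = torch.rand(10, 3, 299, 299)

    metric.update((y_pred, y_true))
    print(metric.compute())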
7 changes: 3 additions & 4 deletions ignite/metrics/gan/inception_score.py
@@ -25,9 +25,6 @@ class InceptionScore(_BaseInceptionMetric):

__ https://arxiv.org/pdf/1801.01973.pdf

.. note::
The default Inception model requires the `torchvision` module to be installed.

Args:
num_features: number of features predicted by the model or number of classes of the model. Default
value is 1000.
@@ -47,8 +44,10 @@ class InceptionScore(_BaseInceptionMetric):
metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
non-blocking. By default, CPU.

Example:
.. note::
The default Inception model requires the `torchvision` module to be installed.

Examples:
.. code-block:: python

from ignite.metric.gan import InceptionScore
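Likewise for InceptionScore, a call-pattern sketch assuming torchvision is installed and that ``update`` receives the batch of generated images directly:

.. code-block:: python

    import torch
    from ignite.metrics import InceptionScore

    metric = InceptionScore()

    # random stand-ins for generated images (3x299x299, as the default Inception model expects)
    y_pred = torch.rand(10, 3, 299, 299)

    metric.update(y_pred)
    print(metric.compute())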
5 changes: 2 additions & 3 deletions ignite/metrics/loss.py
@@ -34,10 +34,9 @@ class Loss(Metric):
required_output_keys: dictionary defines required keys to be found in ``engine.state.output`` if the
latter is a dictionary. Default, ``("y_pred", "y", "criterion_kwargs")``. This is useful when the
criterion function requires additional arguments, which can be passed using ``criterion_kwargs``.
See notes below for an example.

Note:
See an example below.

Examples:
Let's implement a Loss metric that requires ``x``, ``y_pred``, ``y`` and ``criterion_kwargs`` as input
for ``criterion`` function. In the example below we show how to setup standard metric like Accuracy
and the Loss metric using an ``evaluator`` created with
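The Loss example referenced above falls outside the diff context. A minimal sketch of the dictionary-output pattern it describes, assuming the default ``("y_pred", "y", "criterion_kwargs")`` keys and a hypothetical linear model:

.. code-block:: python

    import torch
    import torch.nn as nn
    from ignite.engine import Engine
    from ignite.metrics import Loss

    model = nn.Linear(10, 2)          # hypothetical model
    criterion = nn.CrossEntropyLoss()

    def process_function(engine, batch):
        x, y = batch
        y_pred = model(x)
        # returning a dict: Loss extracts y_pred, y and criterion_kwargs by default
        return {"y_pred": y_pred, "y": y, "criterion_kwargs": {}}

    evaluator = Engine(process_function)
    Loss(criterion).attach(evaluator, "loss")

    data = [(torch.rand(4, 10), torch.randint(0, 2, (4,)))]
    state = evaluator.run(data)
    # state.metrics['loss'] -> average criterion value over the run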