[Features] Support dump segment prediction (#2712)
## Motivation

1. Save the segmentation predictions as files so that these files can be
uploaded to a test server

## Modification

1. Add `output_dir` and `format_only` only in `IoUMetric`
 
## BC-breaking (Optional)

No

## Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases
here, and update the documentation.

## Checklist

1. Pre-commit or other linting tools are used to fix the potential lint
issues.
2. The modification is covered by complete unit tests. If not, please
add more unit tests to ensure the correctness.
3. If the modification has potential influence on downstream projects,
this PR should be tested with downstream projects, like MMDet or
MMDet3D.
4. The documentation has been modified accordingly, like docstring or
example tutorials.
MeowZheng committed Mar 17, 2023
1 parent f6de1aa commit ff95416
Showing 10 changed files with 347 additions and 33 deletions.
29 changes: 29 additions & 0 deletions docs/en/migration/interface.md
@@ -65,6 +65,35 @@ Compared with MMSeg0.x, MMSeg1.x provides fewer command line arguments in `tools
<td>--cfg-options randomness.deterministic=True</td>
</table>

## Test launch

Similar to the training launch, `tools/test.py` of MMSegmentation 1.x provides only the common arguments.
The differences in the test scripts are listed below;
please refer to [this documentation](../user_guides/4_train_test.md) for more details about the test launch.

<table class="docutils">
<tr>
<td>Function</td>
<td>0.x</td>
<td>1.x</td>
</tr>
<tr>
<td>Evaluation metrics</td>
<td>--eval mIoU</td>
<td>--cfg-options test_evaluator.type=IoUMetric</td>
</tr>
<tr>
<td>Whether to use test time augmentation</td>
<td>--aug-test</td>
<td>--tta</td>
</tr>
<tr>
<td>Whether to save the output results without performing evaluation</td>
<td>--format-only</td>
<td>--cfg-options test_evaluator.format_only=True</td>
</tr>
</table>
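
For example, the metric setting in the first row maps between versions as below; this is a sketch in which `${CONFIG_FILE}` and `${CHECKPOINT_FILE}` are placeholders:

```shell
# MMSegmentation 0.x: pick the metric on the command line
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --eval mIoU

# MMSegmentation 1.x: the metric lives in the evaluator config,
# so it is overridden through --cfg-options
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
    --cfg-options test_evaluator.type=IoUMetric
```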

## Configuration file

### Model settings
97 changes: 96 additions & 1 deletion docs/en/user_guides/4_train_test.md
@@ -70,7 +70,7 @@ This tool accepts several optional arguments, including:
export CUDA_VISIBLE_DEVICES=-1
```

Then run the script [above](#testing-on-a-single-gpu).

## Training and testing on multiple GPUs and multiple machines

@@ -218,3 +218,98 @@ You can check [the source code](../../../tools/slurm_test.sh) to review full arguments
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 MASTER_PORT=29500 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 MASTER_PORT=29501 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}
```

## Testing and saving segmentation files

### Basic Usage

When you want to save the results, you can use `--out` to specify the output directory.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --out ${OUTPUT_DIR}
```

Here is an example of saving the predicted results of the model `fcn_r50-d8_4xb4-80k_ade20k-512x512` on the ADE20K validation dataset.

```shell
python tools/test.py configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py ckpt/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth --out work_dirs/format_results
```

You can also define `output_dir` in the config file. Taking
`fcn_r50-d8_4xb4-80k_ade20k-512x512` as an example again, add the following
`test_evaluator` to `configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py`:

```python
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'], output_dir='work_dirs/format_results')
```

then run the command without `--out`:

```shell
python tools/test.py configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py ckpt/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth
```

If you would like to save only the predicted results without evaluation, because the official dataset does not release the test annotations, you can set `format_only=True` and modify `test_dataloader`.
As there is no annotation in the dataset, we remove `dict(type='LoadAnnotations')` from the test pipeline. Here is an example configuration:

```python
test_evaluator = dict(
    type='IoUMetric',
    iou_metrics=['mIoU'],
    format_only=True,
    output_dir='work_dirs/format_results')
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='ADE20KDataset',
        data_root='data/ade/release_test',
        data_prefix=dict(img_path='testing'),
        # we don't load annotations in the test transform pipeline.
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 512), keep_ratio=True),
            dict(type='PackSegInputs')
        ]))
```

then run the test command:

```shell
python tools/test.py configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py ckpt/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth
```

### Testing the Cityscapes dataset and saving predicted segmentation files

We recommend `CityscapesMetric`, a wrapper of the Cityscapes SDK, when you want to
save the predicted results on the Cityscapes test dataset and submit them to the [Cityscapes test server](https://www.cityscapes-dataset.com/submit/). Here is an example configuration:

```python
test_evaluator = dict(
    type='CityscapesMetric',
    format_only=True,
    keep_results=True,
    output_dir='work_dirs/format_results')
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='CityscapesDataset',
        data_root='data/cityscapes/',
        data_prefix=dict(img_path='leftImg8bit/test'),
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
            dict(type='PackSegInputs')
        ]))
```

then run the test command, for example:

```shell
python tools/test.py configs/fcn/fcn_r18-d8_4xb2-80k_cityscapes-512x1024.py ckpt/fcn_r18-d8_512x1024_80k_cityscapes_20201225_021327-6c50f8b4.pth
```
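
The [Cityscapes test server](https://www.cityscapes-dataset.com/submit/) expects an archive of the prediction files. A minimal, hypothetical packaging step might look like the following; check the server's submission page for the exact format it requires:

```shell
# Bundle the prediction PNGs written to output_dir for upload.
# The directory name matches the output_dir used in the config above.
cd work_dirs/format_results
zip -r ../cityscapes_submission.zip .
```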
31 changes: 29 additions & 2 deletions docs/zh_cn/migration/interface.md
@@ -65,6 +65,33 @@ The main improvement of OpenMMLab 2.0 is the release of MMEngine
<td>--cfg-options randomness.deterministic=True</td>
</table>

## Test launch

Similar to the training launch, the test launch script `tools/test.py` of MMSegmentation 1.x provides only the key command line arguments. The differences between the test launch scripts are listed below; please refer to [this documentation](../user_guides/4_train_test.md) for more details about the test launch.

<table class="docutils">
<tr>
<td>Function</td>
<td>0.x</td>
<td>1.x</td>
</tr>
<tr>
<td>Specifying evaluation metrics</td>
<td>--eval mIoU</td>
<td>--cfg-options test_evaluator.type=IoUMetric</td>
</tr>
<tr>
<td>Test-time augmentation</td>
<td>--aug-test</td>
<td>--tta</td>
</tr>
<tr>
<td>Whether to save only the prediction results without computing metrics during testing</td>
<td>--format-only</td>
<td>--cfg-options test_evaluator.format_only=True</td>
</tr>
</table>
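
For instance, enabling test-time augmentation moves from the dedicated 0.x flag to the unified 1.x `--tta` flag; a sketch with placeholder paths:

```shell
# MMSegmentation 0.x
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --aug-test

# MMSegmentation 1.x
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --tta
```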

## Configuration file

### Model settings
@@ -98,7 +125,7 @@ The main improvement of OpenMMLab 2.0 is the release of MMEngine

Changes to **data**:

The original `data` field is split into `train_dataloader`, `val_dataloader`, and `test_dataloader`, which allows us to configure them at a fine-grained level. For example, you can specify different samplers and batch sizes during training and testing.
`samples_per_gpu` is renamed to `batch_size`.
`workers_per_gpu` is renamed to `num_workers`.
@@ -144,7 +171,7 @@ test_dataloader = val_dataloader
</tr>
</table>

Changes to the **data augmentation transform pipeline**:

- The original format conversion transforms **`ToTensor`**, **`ImageToTensor`** and **`Collect`** are combined into [`PackSegInputs`](mmseg.datasets.transforms.PackSegInputs).
- We do not recommend performing **`Normalize`** and **`Pad`** in the dataset pipeline. Please remove them from the pipeline and set them in the `data_preprocessor` field instead; a sketch follows this list.
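
As a sketch of the second point, normalization moves out of the pipeline into the model's data preprocessor. The mean/std values below are the common ImageNet statistics, shown purely as an illustration:

```python
# 1.x: Normalize/Pad are handled here instead of in the dataset pipeline
data_preprocessor = dict(
    type='SegDataPreProcessor',
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True,
    pad_val=0,
    seg_pad_val=255)
```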
92 changes: 92 additions & 0 deletions docs/zh_cn/user_guides/4_train_test.md
@@ -223,3 +223,95 @@ GPUS=4 sh tools/slurm_train.sh dev pspnet configs/pspnet/pspnet_r50-d8_512x1024_
CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 MASTER_PORT=29500 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py ${WORK_DIR}
CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 MASTER_PORT=29501 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py ${WORK_DIR}
```

## Testing and saving segmentation results

### Basic usage

When you need to save the segmentation results produced by testing, use `--out` to specify the output directory:

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --out ${OUTPUT_DIR}
```

Take saving the results of the model `fcn_r50-d8_4xb4-80k_ade20k-512x512` on the ADE20K validation dataset as an example:

```shell
python tools/test.py configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py ckpt/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth --out work_dirs/format_results
```

Alternatively, define `output_dir` in the config file. For example, add the following `test_evaluator` to `configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py`:

```python
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'], output_dir='work_dirs/format_results')
```

Then the command performs the same function without needing `--out`:

```shell
python tools/test.py configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py ckpt/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth
```

When the test dataset provides no annotations, there is no ground truth to compute metrics against, so you need to set `format_only=True`
and also modify `test_dataloader`. Since there are no annotations, we need to remove `dict(type='LoadAnnotations')` from the data transform pipeline. Here is an example configuration:

```python
test_evaluator = dict(
    type='IoUMetric',
    iou_metrics=['mIoU'],
    format_only=True,
    output_dir='work_dirs/format_results')
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='ADE20KDataset',
        data_root='data/ade/release_test',
        data_prefix=dict(img_path='testing'),
        # no annotations are loaded in the test transform pipeline.
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 512), keep_ratio=True),
            dict(type='PackSegInputs')
        ]))
```

Then run the test command:

```shell
python tools/test.py configs/fcn/fcn_r50-d8_4xb4-80k_ade20k-512x512.py ckpt/fcn_r50-d8_512x512_80k_ade20k_20200614_144016-f8ac5082.pth
```

### Testing the Cityscapes dataset and saving the output segmentation results

We recommend using `CityscapesMetric` to save the model's test results on the Cityscapes dataset. Here is an example configuration:

```python
test_evaluator = dict(
    type='CityscapesMetric',
    format_only=True,
    keep_results=True,
    output_dir='work_dirs/format_results')
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='CityscapesDataset',
        data_root='data/cityscapes/',
        data_prefix=dict(img_path='leftImg8bit/test'),
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
            dict(type='PackSegInputs')
        ]))
```

Then run the same kind of command, for example:

```shell
python tools/test.py configs/fcn/fcn_r18-d8_4xb2-80k_cityscapes-512x1024.py ckpt/fcn_r18-d8_512x1024_80k_cityscapes_20201225_021327-6c50f8b4.pth
```
2 changes: 1 addition & 1 deletion mmseg/datasets/transforms/formatting.py
@@ -44,7 +44,7 @@ class PackSegInputs(BaseTransform):
    def __init__(self,
                 meta_keys=('img_path', 'seg_map_path', 'ori_shape',
                            'img_shape', 'pad_shape', 'scale_factor', 'flip',
                            'flip_direction', 'reduce_zero_label')):
        self.meta_keys = meta_keys

    def transform(self, results: dict) -> dict:
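In practice a standard pipeline needs no change, since the new key is part of the default `meta_keys`. A sketch (reading `reduce_zero_label` downstream in the evaluator is our interpretation of the change, not stated in the diff):

```python
# 'reduce_zero_label' now travels with the packed meta information,
# so components that consume the packed data sample can look it up there.
pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 512), keep_ratio=True),
    dict(type='PackSegInputs')  # default meta_keys already include it
]
```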
15 changes: 10 additions & 5 deletions mmseg/evaluation/metrics/citys_metric.py
@@ -51,7 +51,8 @@ def __init__(self,
                 format_only: bool = False,
                 keep_results: bool = False,
                 collect_device: str = 'cpu',
                 prefix: Optional[str] = None,
                 **kwargs) -> None:
        super().__init__(collect_device=collect_device, prefix=prefix)
        if CSEval is None:
            raise ImportError('Please run "pip install cityscapesscripts" to '
@@ -97,10 +98,14 @@ def process(self, data_batch: dict, data_samples: Sequence[dict]) -> None:
                osp.join(self.output_dir, f'{basename}.png'))
            output = Image.fromarray(pred_label.astype(np.uint8)).convert('P')
            output.save(png_filename)
            if self.format_only:
                # format_only is always used for test datasets
                # without ground truth
                gt_filename = ''
            else:
                # when evaluating with the official cityscapesscripts,
                # **_gtFine_labelIds.png is used
                gt_filename = data_sample['seg_map_path'].replace(
                    'labelTrainIds.png', 'labelIds.png')
            self.results.append((png_filename, gt_filename))

    def compute_metrics(self, results: list) -> Dict[str, float]:
Expand Down