homura.vision.transforms package¶
Submodules¶
homura.vision.transforms.mixup module¶
homura.vision.transforms.transform module¶
- class homura.vision.transforms.transform.CenterCrop(size, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.GeometricTransformBase
- class homura.vision.transforms.transform.ColorJitter(brightness=0, contrast=0, saturation=0, hue=0, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.NonGeometricTransformBase
- Parameters
target_type (TargetType) –
- class homura.vision.transforms.transform.ConcatTransform(*transforms, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.TransformBase
- Parameters
transforms (TransformBase) –
target_type (TargetType) –
- apply_bbox(bbox, params, original_wh)[source]¶
- Parameters
bbox (torch.Tensor) –
original_wh (tuple[int, int]) –
- Return type
torch.Tensor
- class homura.vision.transforms.transform.GeometricTransformBase(target_type)[source]¶
Bases: homura.vision.transforms.transform.TransformBase, abc.ABC
- Parameters
target_type (TargetType) –
- class homura.vision.transforms.transform.NonGeometricTransformBase(target_type)[source]¶
Bases: homura.vision.transforms.transform.TransformBase, abc.ABC
- Parameters
target_type (TargetType) –
- apply_bbox(bbox, params, original_wh)[source]¶
- Parameters
bbox (torch.Tensor) –
original_wh (tuple[int, int]) –
- Return type
torch.Tensor
- class homura.vision.transforms.transform.Normalize(mean, std, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.NonGeometricTransformBase
- Parameters
mean (list[float]) –
std (list[float]) –
target_type (TargetType) –
- class homura.vision.transforms.transform.RandomCrop(size, padding=None, pad_if_needed=False, fill=0, padding_mode='constant', mask_fill=255, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.GeometricTransformBase
- Parameters
target_type (TargetType) –
- class homura.vision.transforms.transform.RandomGrayScale(p=0.5, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.NonGeometricTransformBase
- Parameters
p (float) –
target_type (TargetType) –
- class homura.vision.transforms.transform.RandomHorizontalFlip(p=0.5, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.GeometricTransformBase
- Parameters
p (float) –
target_type (TargetType) –
- class homura.vision.transforms.transform.RandomResize(min_size, max_size=None, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.GeometricTransformBase
- Parameters
min_size (int) –
max_size (int) –
target_type (TargetType) –
- class homura.vision.transforms.transform.RandomResizedCrop(size, scale=(0.08, 1.0), ratio=(0.75, 1.3333333333333333), target_type=None)[source]¶
Bases: homura.vision.transforms.transform.GeometricTransformBase
- class homura.vision.transforms.transform.RandomRotation(degrees, fill=None, mask_fill=255, target_type=None)[source]¶
Bases: homura.vision.transforms.transform.GeometricTransformBase
- class homura.vision.transforms.transform.TransformBase(target_type)[source]¶
Bases: abc.ABC
Base class of data augmentation transformations. Transforms are expected to be used as drop-in replacements for torchvision's transforms.
train_da = CenterCrop(224, target_type="mask") * ColorJitter(target_type="mask") + …
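The same kind of pipeline can be written explicitly with ConcatTransform. The snippet below is a minimal sketch, not library-verified usage: it assumes ConcatTransform applies the wrapped transforms in order and that a transform is called on an image together with its target, as a drop-in replacement for torchvision; the tensor shapes and values are placeholders.

    import torch

    from homura.vision.transforms.transform import (
        ConcatTransform,
        Normalize,
        RandomCrop,
        RandomHorizontalFlip,
    )

    # target_type="mask" asks each transform to keep the segmentation mask
    # consistent with the image (supported_target_types is {'bbox', 'mask'}).
    train_da = ConcatTransform(
        RandomCrop(32, padding=4, target_type="mask"),
        RandomHorizontalFlip(p=0.5, target_type="mask"),
        Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], target_type="mask"),
        target_type="mask",
    )

    image = torch.rand(3, 32, 32)           # placeholder image tensor
    mask = torch.randint(0, 10, (32, 32))   # placeholder segmentation mask
    # Assumed call convention: image and target are transformed together.
    image, mask = train_da(image, mask)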
- Parameters
target_type (TargetType) –
- apply_bbox(bbox, params, original_wh)[source]¶
- Parameters
bbox (torch.Tensor) –
original_wh (tuple[int, int]) –
- Return type
torch.Tensor
- abstract apply_coords(coords, original_wh, params)[source]¶
- Parameters
coords (torch.Tensor) –
original_wh (tuple[int, int]) –
- Return type
torch.Tensor
- abstract apply_image(image, params)[source]¶
- Parameters
image (torch.Tensor) –
- Return type
torch.Tensor
- abstract apply_mask(mask, params)[source]¶
- Parameters
mask (torch.Tensor) –
- Return type
torch.Tensor
- supported_target_types = {'bbox', 'mask'}¶
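The abstract methods apply_image, apply_mask and apply_coords are the hooks a concrete transform has to provide. The class below is an illustrative sketch of a custom photometric transform, assuming that the abstract members listed on this page are the only ones that must be implemented and treating params as an opaque value; it is not part of the library.

    import torch

    from homura.vision.transforms.transform import NonGeometricTransformBase


    class Invert(NonGeometricTransformBase):
        # Hypothetical transform that inverts pixel intensities in [0, 1].

        def apply_image(self, image: torch.Tensor, params) -> torch.Tensor:
            # Only the image content changes for a photometric transform.
            return 1.0 - image

        def apply_mask(self, mask: torch.Tensor, params) -> torch.Tensor:
            # Segmentation masks are left untouched.
            return mask

        def apply_coords(self, coords: torch.Tensor, original_wh, params) -> torch.Tensor:
            # Coordinates (and hence bounding boxes) are unaffected.
            return coords


    # Assumed construction: target_type follows the base-class signature.
    invert = Invert(target_type="mask")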