homura package¶
Subpackages¶
- homura.metrics package
- homura.modules package
- homura.utils package
- homura.vision package
Submodules¶
homura.liblog module¶
Logging tools, largely borrowed from Optuna and Transformers.
-
homura.liblog.get_logger(name=None)[source]¶
- Parameters
name (Optional[str]) –
- Return type
logging.Logger
-
homura.liblog.log_once(logger, message, key=None)[source]¶
Log a message only once per key.
- Parameters
- Parameters
logger – a callable that emits the message, e.g., print or logger.info
message (str) –
key (Optional[str]) – deduplication key; if key=None, message itself is used as the key.
- Return type
None
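The once-only behavior can be sketched with a module-level set of seen keys. This is a stdlib sketch of the documented semantics, not homura's actual implementation:

```python
_seen_keys = set()

def log_once(logger, message, key=None):
    # Emit message via the given callable only the first time key is seen;
    # with key=None, the message itself is used as the deduplication key.
    if key is None:
        key = message
    if key not in _seen_keys:
        _seen_keys.add(key)
        logger(message)

outputs = []
log_once(outputs.append, "deprecated flag used")        # emitted
log_once(outputs.append, "deprecated flag used")        # suppressed (same key)
log_once(outputs.append, "another warning", key="w2")   # emitted
```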
-
homura.liblog.print_once(message, key=None)[source]¶
print version of log_once.
- Parameters
- Parameters
message (str) –
- Return type
None
-
homura.liblog.set_file_handler(log_file, level=10, formatter=None)[source]¶
- Parameters
log_file (str) –
level (int) – logging level; the default 10 equals logging.DEBUG
formatter (Optional[logging.Formatter]) –
- Return type
None
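A rough stdlib equivalent of what this does: attach a logging.FileHandler at the given level. Which logger homura targets is an assumption here; the root logger stands in for it:

```python
import logging
import os
import tempfile

def set_file_handler(log_file, level=logging.DEBUG, formatter=None):
    # Attach a FileHandler at the given level; level=10 equals logging.DEBUG.
    handler = logging.FileHandler(log_file)
    handler.setLevel(level)
    if formatter is not None:
        handler.setFormatter(formatter)
    logging.getLogger().addHandler(handler)
    return handler

path = os.path.join(tempfile.mkdtemp(), "run.log")
handler = set_file_handler(path)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
root.debug("training started")
handler.flush()
with open(path) as f:
    content = f.read()
```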
homura.lr_scheduler module¶
-
homura.lr_scheduler.CosineAnnealingWithWarmup(total_epochs, multiplier, warmup_epochs, min_lr=0, last_epoch=-1)[source]¶
- Parameters
total_epochs (int) –
multiplier (float) –
warmup_epochs (int) –
min_lr (float) –
last_epoch (int) –
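The exact schedule formula is not documented here. A common formulation (an assumption, not homura's verified implementation) warms the learning rate up linearly to base_lr * multiplier over warmup_epochs, then follows a cosine decay down to min_lr:

```python
import math

def cosine_with_warmup_lr(epoch, base_lr, total_epochs, multiplier,
                          warmup_epochs, min_lr=0.0):
    # Linear warmup to the peak, then cosine annealing to min_lr.
    peak = base_lr * multiplier
    if epoch < warmup_epochs:
        return peak * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return min_lr + 0.5 * (peak - min_lr) * (1.0 + math.cos(math.pi * progress))

schedule = [cosine_with_warmup_lr(e, 0.1, total_epochs=100, multiplier=1.0,
                                  warmup_epochs=5) for e in range(100)]
```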
homura.optim module¶
-
homura.optim.Adam(lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False, multi_tensor=False)[source]¶
- Parameters
multi_tensor (bool) –
-
homura.optim.AdamW(lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False, multi_tensor=False)[source]¶
- Parameters
multi_tensor (bool) –
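These mirror torch.optim.Adam and torch.optim.AdamW; multi_tensor presumably toggles torch's multi-tensor implementation. The substantive difference between the two optimizers is where weight_decay enters the update: Adam folds an L2 penalty into the gradient, so it passes through the adaptive scaling, while AdamW applies decoupled decay directly to the parameter. A simplified scalar sketch (first moment and bias correction omitted, illustrative only):

```python
import math

def adaptive_step(p, grad, v=0.0, lr=1e-3, wd=0.01, eps=1e-8, decoupled=False):
    # One simplified adaptive step (second moment only, no bias correction).
    if not decoupled:
        grad = grad + wd * p              # Adam: L2 term goes through the scaling
    v = 0.999 * v + 0.001 * grad * grad   # running second-moment estimate
    p = p - lr * grad / (math.sqrt(v) + eps)
    if decoupled:
        p = p - lr * wd * p               # AdamW: decay bypasses the scaling
    return p

p_adam = adaptive_step(1.0, 0.5, decoupled=False)
p_adamw = adaptive_step(1.0, 0.5, decoupled=True)
```

With plain SGD the two forms coincide; only under the adaptive denominator do they diverge, which is why the distinction matters.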
homura.register module¶
-
class homura.register.Registry(name, type=None)[source]¶
Bases: object
Registry of models, datasets, and anything you like.
model_registry = Registry('model')

@model_registry.register
def your_model(*args, **kwargs):
    return ...

your_model_instance = model_registry('your_model')(...)

# Registries are singletons by name:
model_registry2 = Registry('model')
model_registry is model_registry2  # True
- Parameters
name – name of the registry. If name is already in use, the existing registry is returned.
type – expected type of registered objects. If type is not None, objects are type-checked at registration.
homura.reporters module¶
-
class homura.reporters.ReporterList(reporters)[source]¶
Bases: object
ReporterList is expected to be used in TrainerBase.
-
add(key, value, *, is_averaged=True, reduction='average', no_sync=False)¶
Add value(s) to the reporter.

def iteration(self: TrainerBase, data: Tuple[Tensor, ...]):
    self.reporter.add_value('loss', loss.detach())
    self.reporter.add_value('miou', confusion_matrix(output, target), reduction=cm_to_miou)
- Parameters
key (str) – Unique key to track value
value (torch.Tensor) – Value
is_averaged (bool) – Whether value is already averaged over the batch
reduction (str) – Reduction applied at the end of the epoch: 'average', 'sum', or a function of List[Value] -> Value
no_sync (bool) – If True, skip synchronization across processes in distributed settings
- Return type
None
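The reduction argument decides how the list of per-iteration values recorded under one key is collapsed at the end of an epoch. A minimal sketch of that semantics (homura's real implementation works on tensors and handles distributed synchronization):

```python
def reduce_epoch(values, reduction="average"):
    # Collapse per-iteration values recorded under one key.
    if reduction == "average":
        return sum(values) / len(values)
    if reduction == "sum":
        return sum(values)
    return reduction(values)   # callable: List[Value] -> Value

losses = [0.9, 0.7, 0.5]
avg = reduce_epoch(losses)                       # mean over the epoch
total = reduce_epoch(losses, "sum")
final = reduce_epoch(losses, lambda vs: vs[-1])  # custom reduction
```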
-
add_figure(key, figure, step=None)[source]¶
Report a matplotlib.pyplot Figure.
- Parameters
key (str) –
figure (matplotlib.pyplot.figure) –
step (Optional[int]) –
- Return type
None
-
add_histogram(key, value, step=None, bins='tensorflow')[source]¶
Report a histogram of a given tensor.
- Parameters
key (str) –
value (torch.Tensor) –
step (Optional[int]) –
bins (str) –
- Return type
None
-
add_image(key, image, step=None, normalize=False)[source]¶
Report a single image or a batch of images.
- Parameters
key (str) –
image (torch.Tensor) –
step (Optional[int]) –
normalize (bool) –
- Return type
None
-
add_text(key, text, step=None)[source]¶
Report text.
- Parameters
key (str) –
text (str) –
step (Optional[int]) –
- Return type
None
-
add_value(key, value, *, is_averaged=True, reduction='average', no_sync=False)[source]¶
Add value(s) to the reporter.

def iteration(self: TrainerBase, data: Tuple[Tensor, ...]):
    self.reporter.add_value('loss', loss.detach())
    self.reporter.add_value('miou', confusion_matrix(output, target), reduction=cm_to_miou)
- Parameters
key (str) – Unique key to track value
value (torch.Tensor) – Value
is_averaged (bool) – Whether value is already averaged over the batch
reduction (str) – Reduction applied at the end of the epoch: 'average', 'sum', or a function of List[Value] -> Value
no_sync (bool) – If True, skip synchronization across processes in distributed settings
- Return type
None
-
property history¶
-
class homura.reporters.TQDMReporter(ncols=80)[source]¶
Bases: homura.reporters._ReporterBase
-
add_scalar(key, value, step=None)[source]¶
- Parameters
key (str) –
value (numbers.Number) –
step (Optional[int]) –
- Return type
None
-
add_scalars(key, value, step=None)[source]¶
- Parameters
key (str) –
value (Dict[str, numbers.Number]) –
step (Optional[int]) –
- Return type
None
-
class homura.reporters.TensorboardReporter(save_dir=None)[source]¶
Bases: homura.reporters._ReporterBase
-
add_audio(key, audio, step=None)[source]¶
- Parameters
key (str) –
audio (torch.Tensor) –
step (Optional[int]) –
- Return type
None
-
add_figure(key, figure, step=None)[source]¶
- Parameters
key (str) –
figure (matplotlib.pyplot.figure) –
step (Optional[int]) –
- Return type
None
-
add_histogram(key, values, step, bins='tensorflow')[source]¶
- Parameters
key (str) –
values (torch.Tensor) –
step (Optional[int]) –
bins (str) –
- Return type
None
-
add_image(key, image, step=None)[source]¶
- Parameters
key (str) –
image (torch.Tensor) –
step (Optional[int]) –
- Return type
None
-
add_scalar(key, value, step=None)[source]¶
- Parameters
key (str) –
value (Any) –
step (Optional[int]) –
- Return type
None
-
homura.trainers module¶
-
class homura.trainers.SupervisedTrainer(model, optimizer, loss_f, *, reporters=None, scheduler=None, quiet=False, disable_cudnn_benchmark=False, data_parallel=False, use_amp=False, use_channel_last=False, report_accuracy_topk=None, **kwargs)[source]¶
Bases: homura.trainers.TrainerBase
A simple trainer for supervised image classification. It accepts only a single model and is AMP-ready.
-
class homura.trainers.TrainerBase(model, optimizer, loss_f=None, *, reporters=None, scheduler=None, device=None, quiet=False, disable_cudnn_benchmark=False, disable_cuda_nonblocking=False, logger=None, use_sync_bn=False, tqdm_ncols=120, debug=False, **kwargs)[source]¶
Bases: homura.utils._mixin.StateDictMixIn
Base class for trainers.
- Parameters
model – model to be trained
optimizer – optimizer for the model
loss_f – loss function for training
reporters – list of reporters
scheduler – learning rate scheduler
device – device to be used
quiet – True to disable tqdm
disable_cudnn_benchmark – True to disable cudnn benchmark mode
disable_cuda_nonblocking – True to disable cuda nonblocking
logger – optional logger
use_sync_bn – True to convert BatchNorm layers to SyncBatchNorm
tqdm_ncols – number of columns of tqdm
kwargs –
-
data_preprocess(data)[source]¶
Preprocess data and return (TensorTuple, batch_size).
- Parameters
data (Tuple[torch.Tensor, ...]) –
- Return type
Tuple[Tuple[torch.Tensor, ...], int]
-
property epoch¶
-
property history¶
-
property is_train¶
-
override_iteration(new_iteration)[source]¶
Override the iteration method.

def new_iteration(trainer, data):
    input, target = data
    ...
    results.loss = loss
    return results

trainer.override_iteration(new_iteration)
- Parameters
new_iteration (Callable[[Tuple], None]) –
- Return type
None
-
run(train_loader, val_loaders, total_iterations, val_intervals)[source]¶
Train the model for the given number of iterations. This method is roughly equivalent to:

for ep in range(total_iterations):
    trainer.train(train_loader)
    for k, v in val_loaders.items():
        trainer.test(v, k)
- Parameters
train_loader (Iterable) –
val_loaders (Dict[str, Iterable]) – mapping from a name to a validation loader
total_iterations (int) –
val_intervals (int) –
- Return type
None
-
set_optimizer()[source]¶
Set optimizer(s) for model(s). You can override it as:

class YourTrainer(TrainerBase):
    def set_optimizer(self):
        self.optimizer = torch.optim.SGD(self.model.parameters(), lr=0.1)
- Return type
None
-
set_scheduler()[source]¶
Set scheduler(s) for optimizer(s). You can override it as:

class YourTrainer(TrainerBase):
    def set_scheduler(self):
        self.scheduler = torch.optim.lr_scheduler.Foo(self.optimizer)
- Return type
None
-
property step¶