homura package¶
Subpackages¶
- homura.metrics package
- homura.modules package
- homura.utils package
- homura.vision package
Submodules¶
homura.liblog module¶
Logging tools that borrow heavily from Optuna and Transformers.
- homura.liblog.get_logger(name=None)[source]¶
- Parameters
name (Optional[str]) –
- Return type
logging.Logger
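A minimal usage sketch (the logger name here is illustrative):

    from homura.liblog import get_logger

    logger = get_logger(__name__)  # a standard logging.Logger managed by homura
    logger.info("training started")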
- homura.liblog.log_once(logger, message, key=str)[source]¶
Log a message only once.
- Parameters
logger – e.g., print, logger.info
message (str) –
key – deduplication key; if key=None, the message itself is used as the key.
- Returns
- Return type
None
- homura.liblog.print_once(message, key=str)[source]¶
The print version of log_once.
- Parameters
message (str) –
- Return type
None
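A sketch of both helpers; per the docstrings above, the message itself serves as the deduplication key by default:

    from homura.liblog import get_logger, log_once, print_once

    logger = get_logger(__name__)
    for _ in range(10):
        # each of these is emitted once, not ten times
        log_once(logger.warning, "gradient clipping is disabled")
        print_once("data loader workers: 4")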
- homura.liblog.set_file_handler(log_file, level=10, formatter=None)[source]¶
- Parameters
log_file (str) –
level (int) – logging level; the default 10 is logging.DEBUG
formatter (Optional[logging.Formatter]) –
- Return type
None
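A sketch of mirroring log output to a file; note that the default level 10 corresponds to logging.DEBUG:

    import logging
    from homura.liblog import get_logger, set_file_handler

    set_file_handler("train.log", level=logging.DEBUG)
    get_logger(__name__).debug("this line also goes to train.log")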
homura.lr_scheduler module¶
- homura.lr_scheduler.CosineAnnealingWithWarmup(total_epochs, warmup_epochs, min_lr=0, last_epoch=-1)[source]¶
- Parameters
total_epochs (int) –
warmup_epochs (int) –
min_lr (float) –
last_epoch (int) –
- homura.lr_scheduler.InverseSquareRootWithWarmup(warmup_epochs, last_epoch=-1)[source]¶
Inverse square root schedule with warmup: $\sqrt{w}\,\min(1/\sqrt{e},\; e/\sqrt{w}^3)$, where $w$ is warmup_epochs and $e$ is the current epoch. During warmup this grows linearly to 1 at $e = w$, then decays as $\sqrt{w/e}$.
- Parameters
warmup_epochs (int) –
last_epoch (int) –
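The factor can be written out directly; this is a standalone sketch of the schedule's multiplier, not homura's internal code (epochs are assumed 1-indexed here to avoid division by zero):

    import math

    def inverse_sqrt_factor(epoch: int, warmup_epochs: int) -> float:
        # sqrt(w) * min(1/sqrt(e), e/sqrt(w)^3):
        # linear warmup to 1.0 at e == w, then decays as sqrt(w/e)
        w, e = warmup_epochs, max(epoch, 1)
        return math.sqrt(w) * min(1.0 / math.sqrt(e), e / math.sqrt(w) ** 3)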
- homura.lr_scheduler.MultiStepWithWarmup(warmup, milestones, gamma=0.1, last_epoch=-1)[source]¶
- Parameters
warmup (int) –
milestones (list[int]) –
gamma (float) –
last_epoch (int) –
- homura.lr_scheduler.ReduceLROnPlateau(mode='min', factor=0.1, patience=10, verbose=False, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]¶
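Per the TrainerBase signature below, a trainer's scheduler argument accepts a Partial; these helpers build such lazily-instantiated schedulers, which the trainer later binds to its optimizer. A sketch with a placeholder model, assuming the SupervisedTrainer and optim APIs documented below:

    import torch.nn as nn
    import torch.nn.functional as F
    from homura import lr_scheduler, optim, trainers

    model = nn.Linear(32, 10)  # placeholder model
    trainer = trainers.SupervisedTrainer(
        model, optim.Adam(lr=1e-3), F.cross_entropy,
        scheduler=lr_scheduler.CosineAnnealingWithWarmup(total_epochs=90, warmup_epochs=5))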
homura.optim module¶
- homura.optim.Adam(lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False, multi_tensor=False)[source]¶
- Parameters
multi_tensor (bool) –
- homura.optim.AdamW(lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False, multi_tensor=False)[source]¶
- Parameters
multi_tensor (bool) –
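Like the schedulers above, Adam and AdamW are configured without model parameters; the trainer attaches them later. A short sketch, where multi_tensor presumably selects PyTorch's multi-tensor ("foreach") implementation:

    from homura import optim

    # no parameters passed here; the trainer binds them to the model
    opt = optim.AdamW(lr=3e-4, weight_decay=0.05, multi_tensor=True)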
- class homura.optim.LARC(optimizer, trust_coefficient=0.02, no_clip=False, eps=1e-08)[source]¶
Bases:
object
LARC (Layer-wise Adaptive Rate Scaling), based on NVIDIA's Apex implementation. LARC is designed to wrap a given optimizer; the optimizer should be wrapped only after the scheduler has been initialized on it.
- Parameters
optimizer (torch.optim.optimizer.Optimizer) –
trust_coefficient (float) –
no_clip (bool) –
eps (float) –
- property param_groups¶
- property state¶
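A sketch of the wrapping order described above: create the base optimizer, initialize the scheduler on it, then wrap with LARC. This assumes the wrapper proxies step() and zero_grad() as Apex's LARC does; the model is a placeholder:

    import torch
    from torch import nn
    from homura.optim import LARC

    model = nn.Linear(10, 10)  # placeholder model
    base = torch.optim.SGD(model.parameters(), lr=0.1)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(base, T_max=90)
    optimizer = LARC(base, trust_coefficient=0.02)  # use the wrapper from here on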
homura.register module¶
- class homura.register.Registry(name, type=None)[source]¶
Bases:
object
Registry of models, datasets and anything you like.
    model_registry = Registry('model')

    @model_registry.register
    def your_model(*args, **kwargs):
        return ...

    your_model_instance = model_registry('your_model')(...)

    # registries are singletons per name
    model_registry2 = Registry('model')
    model_registry is model_registry2  # True
- Parameters
name (str) – name of registry. If name is already used, return that registry.
type (Type[T]) – type of registees. If type is not None, registees are type checked when registered.
homura.reporters module¶
- class homura.reporters.ReporterList(reporters)[source]¶
Bases:
object
ReporterList is expected to be used in TrainerBase
- Parameters
reporters (list[_ReporterBase]) –
- Return type
None
- add(key, value, *, is_averaged=True, reduction='average', no_sync=False)¶
Add value(s) to reporter
    def iteration(self: TrainerBase, data: Tuple[Tensor, ...]):
        ...
        self.reporter.add_value('loss', loss.detach())
        self.reporter.add_value('miou', confusion_matrix(output, target), reduction=cm_to_miou)
- Parameters
key (str) – Unique key to track value
value (torch.Tensor) – Value
is_averaged (bool) – Whether the given value is already averaged (e.g., a batch mean)
reduction (str) – Reduction applied at the end of the epoch: 'average', 'sum', or a function from list[Value] to Value
no_sync (bool) – If True, do not synchronize values across processes in distributed settings
- Returns
- Return type
None
- add_figure(key, figure, step=None)[source]¶
Report a matplotlib.pyplot Figure
- Parameters
key (str) –
figure (matplotlib.pyplot.figure) –
step (int) –
- Return type
None
- add_histogram(key, value, step=None, bins='tensorflow')[source]¶
Report a histogram of the given tensor
- Parameters
key (str) –
value (torch.Tensor) –
step (int) –
bins (str) –
- Return type
None
- add_image(key, image, step=None, normalize=False)[source]¶
Report a single image or a batch of images
- Parameters
key (str) –
image (torch.Tensor) –
step (int) –
normalize (bool) –
- Return type
None
- add_text(key, text, step=None)[source]¶
Report text
- Parameters
key (str) –
text (str) –
step (int) –
- Return type
None
- add_value(key, value, *, is_averaged=True, reduction='average', no_sync=False)[source]¶
Add value(s) to reporter
    def iteration(self: TrainerBase, data: Tuple[Tensor, ...]):
        ...
        self.reporter.add_value('loss', loss.detach())
        self.reporter.add_value('miou', confusion_matrix(output, target), reduction=cm_to_miou)
- Parameters
key (str) – Unique key to track value
value (torch.Tensor) – Value
is_averaged (bool) – Whether the given value is already averaged (e.g., a batch mean)
reduction (str) – Reduction applied at the end of the epoch: 'average', 'sum', or a function from list[Value] to Value
no_sync (bool) – If True, do not synchronize values across processes in distributed settings
- Returns
- Return type
None
- property history: homura.reporters._History¶
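A sketch of reporting non-scalar artifacts; the ReporterList is constructed directly here for illustration, though in practice it is created by TrainerBase and exposed as self.reporter:

    import torch
    from homura.reporters import ReporterList, TensorboardReporter

    reporter = ReporterList([TensorboardReporter('runs')])
    reporter.add_image('samples', torch.rand(16, 3, 32, 32), normalize=True)
    reporter.add_histogram('weights', torch.randn(1000))
    reporter.add_text('note', 'epoch finished')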
- class homura.reporters.TQDMReporter(ncols=80)[source]¶
Bases:
homura.reporters._ReporterBase
- Parameters
ncols (int) –
- Return type
None
- add_scalar(key, value, step=None)[source]¶
- Parameters
key (str) –
value (numbers.Number) –
step (int) –
- Return type
None
- add_scalars(key, value, step=None)[source]¶
- Parameters
key (str) –
value (dict[str, numbers.Number]) –
step (int) –
- Return type
None
- class homura.reporters.TensorboardReporter(save_dir=None)[source]¶
Bases:
homura.reporters._ReporterBase
- Parameters
save_dir (str) –
- Return type
None
- add_audio(key, audio, step=None)[source]¶
- Parameters
key (str) –
audio (torch.Tensor) –
step (int) –
- Return type
None
- add_figure(key, figure, step=None)[source]¶
- Parameters
key (str) –
figure (matplotlib.pyplot.figure) –
step (int) –
- Return type
None
- add_histogram(key, values, step, bins='tensorflow')[source]¶
- Parameters
key (str) –
values (torch.Tensor) –
step (int) –
bins (str) –
- Return type
None
- add_image(key, image, step=None)[source]¶
- Parameters
key (str) –
image (torch.Tensor) –
step (int) –
- Return type
None
- add_scalar(key, value, step=None)[source]¶
- Parameters
key (str) –
value (Any) –
step (int) –
- Return type
None
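Reporters are usually handed to a trainer rather than driven by hand; a sketch combining both reporter types, assuming the SupervisedTrainer API documented below and a placeholder model:

    import torch.nn as nn
    import torch.nn.functional as F
    from homura import optim, reporters, trainers

    model = nn.Linear(32, 10)  # placeholder model
    trainer = trainers.SupervisedTrainer(
        model, optim.Adam(lr=1e-3), F.cross_entropy,
        reporters=[reporters.TQDMReporter(), reporters.TensorboardReporter('runs')])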
homura.trainers module¶
- class homura.trainers.SupervisedTrainer(model, optimizer, loss_f, *, reporters=None, scheduler=None, quiet=False, disable_cudnn_benchmark=False, data_parallel=False, use_amp=False, use_channel_last=False, report_accuracy_topk=None, update_scheduler_iter=False, use_larc=False, grad_accum_steps=None, **kwargs)[source]¶
Bases:
homura.trainers.TrainerBase
A simple trainer for supervised image classification. It only accepts a single model. AMP-ready.
- Parameters
model (nn.Module) –
optimizer (Optimizer) –
loss_f (Callable) –
reporters (_ReporterBase or list[_ReporterBase]) –
scheduler (Scheduler) –
report_accuracy_topk (int or list[int]) –
update_scheduler_iter (bool) –
use_larc (bool) –
grad_accum_steps (int) –
- data_preprocess(data)[source]¶
- Parameters
data (tuple[torch.Tensor, torch.Tensor]) –
- Return type
tuple[torch.Tensor, torch.Tensor]
- load_state_dict(state_dict)[source]¶
- Parameters
state_dict (dict[str, typing.Any]) –
- Return type
None
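An end-to-end sketch with synthetic data; the dict of validation loaders matches the run() example under TrainerBase below:

    import torch
    import torch.nn.functional as F
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from homura import lr_scheduler, optim, trainers

    dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))
    train_loader = DataLoader(dataset, batch_size=32)
    val_loader = DataLoader(dataset, batch_size=32)

    model = nn.Linear(32, 10)
    trainer = trainers.SupervisedTrainer(
        model, optim.Adam(lr=1e-3), F.cross_entropy,
        scheduler=lr_scheduler.CosineAnnealingWithWarmup(total_epochs=10, warmup_epochs=1))
    trainer.run(train_loader, {'val': val_loader}, total_iterations=10, val_intervals=1)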
- class homura.trainers.TrainerBase(model, optimizer, loss_f=None, *, reporters=None, scheduler=None, device=None, quiet=False, disable_cudnn_benchmark=False, disable_cuda_nonblocking=False, logger=None, use_sync_bn=False, tqdm_ncols=120, debug=False, profile=False, dist_kwargs=None, prof_kwargs=None, disable_auto_ddp=False, **kwargs)[source]¶
Bases:
homura.utils._mixin.StateDictMixIn
Base class for trainers.
- Parameters
model (nn.Module or dict[str, nn.Module]) – model to be trained
optimizer (Partial or Optimizer or dict[str, Optimizer]) – optimizer for the model
loss_f (Callable or dict[str, Callable]) – loss function for training
reporters (_ReporterBase or list[_ReporterBase]) – list of reporters
scheduler (Partial or Scheduler or dict[str, Scheduler]) – learning rate scheduler
device (torch.device or str) – device to be used
quiet (bool) – True to disable tqdm
disable_cudnn_benchmark (bool) – True to disable cudnn benchmark mode
disable_cuda_nonblocking (bool) – True to disable cuda nonblocking
logger – optional logger
use_sync_bn (bool) – True to convert BN to sync BN
tqdm_ncols (int) – number of columns of tqdm
kwargs –
debug (bool) –
profile (bool) –
dist_kwargs (dict) –
prof_kwargs (dict) –
disable_auto_ddp (bool) –
- data_preprocess(data)[source]¶
- Parameters
data (homura.trainers.DataType) –
- Return type
homura.trainers.DataType
- property epoch: int¶
- property history: dict[str, list[float]]¶
- property is_train: bool¶
- override_iteration(new_iteration)[source]¶
Override the iteration method:

    def new_iteration(trainer, data):
        input, target = data
        ...
        results.loss = loss
        return results

    trainer.override_iteration(new_iteration)
- Parameters
new_iteration (Callable[[homura.trainers.DataType], None]) –
- Returns
- Return type
None
- run(train_loader, val_loaders, total_iterations, val_intervals)[source]¶
Train the model for the given number of iterations. This method is roughly equivalent to:

    for ep in range(total_iterations):
        trainer.train(train_loader)
        for k, v in val_loaders.items():
            trainer.test(v, k)
- Parameters
train_loader (Iterable) –
val_loaders (dict[str, Iterable]) – mapping from a name to a validation loader, as in the loop above
total_iterations (int) –
val_intervals (int) –
- Returns
- Return type
None
- set_optimizer()[source]¶
Set optimizer(s) for model(s). You can override it as:

    class YourTrainer(TrainerBase):
        def set_optimizer(self):
            self.optimizer = torch.optim.SGD(self.model.parameters(), lr=0.1)
- Returns
- Return type
None
- set_scheduler()[source]¶
Set scheduler(s) for optimizer(s). You can override it as:

    class YourTrainer(TrainerBase):
        def set_scheduler(self):
            self.scheduler = torch.optim.lr_scheduler.Foo(self.optimizer)
- Returns
- Return type
None
- property step: int¶
- test(data_loader, mode='test')[source]¶
Evaluate the model.
- Parameters
data_loader (Iterable) –
mode (str) – Name of this loop. Default is test. Passed to callbacks.
- Returns
- Return type
None
- train(data_loader, mode='train')[source]¶
Train the model for one epoch.
- Parameters
data_loader (Iterable) –
mode (str) – Name of this loop. Default is train. Passed to callbacks.
- Return type
None
- property verbose: bool¶