homura package

Submodules

homura.liblog module

Logging utilities, heavily inspired by Optuna and Transformers.
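
For example, a minimal usage sketch (assuming these helpers configure homura's root logger and accept standard logging levels, per the entries below):

import logging

from homura import liblog

logger = liblog.get_logger(__name__)      # a logger managed by homura
liblog.set_verb_level(logging.DEBUG)      # lower the verbosity threshold
liblog.set_file_handler('train.log')      # additionally write logs to a file

logger.info('training started')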

homura.liblog.disable_default_handler()[source]
Return type

None

homura.liblog.disable_propagation()[source]
Return type

None

homura.liblog.enable_default_handler()[source]
Return type

None

homura.liblog.enable_propagation()[source]
Return type

None

homura.liblog.get_logger(name=None)[source]
Parameters

name (Optional[str]) –

Return type

logging.Logger

homura.liblog.get_verb_level()[source]
Return type

int

homura.liblog.log_once(logger, message, key=None)[source]

Log a message only once.

Parameters
  • logger – a logging callable, e.g., print or logger.info

  • message (str) –

  • key (Optional[str]) – if key is None, message is used as the key.

Return type

None
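
For instance, suppressing a repeated warning inside a loop (a sketch; per the description above, logger is any callable such as print or logger.info):

from homura.liblog import get_logger, log_once

logger = get_logger(__name__)

for step in range(100):
    # Emitted only on the first call; later calls with the same key
    # (here the message itself, since key is None) are suppressed.
    log_once(logger.info, 'dataset has no validation split')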

homura.liblog.print_once(message, key=None)[source]

The print version of log_once.

Parameters
  • message (str) –

  • key (Optional[str]) – if key is None, message is used as the key.

Return type

None

homura.liblog.set_file_handler(log_file, level=10, formatter=None)[source]
Parameters
  • log_file (str) –

  • level (int) – logging level; the default 10 corresponds to logging.DEBUG

  • formatter (Optional[logging.Formatter]) –

Return type

None

homura.liblog.set_tqdm_handler(level=20, formatter=None)[source]

An alternative handler that avoids disturbing tqdm progress bars.

Parameters
  • level (int) – logging level; the default 20 corresponds to logging.INFO

  • formatter (Optional[logging.Formatter]) –

Return type

None

homura.liblog.set_tqdm_stdout_stderr()[source]
homura.liblog.set_verb_level(level)[source]
Parameters

level (int) –

Return type

None

homura.liblog.tqdm(*args, **kwargs)[source]
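
A sketch combining the two (assuming homura.liblog.tqdm wraps tqdm.tqdm and set_tqdm_handler routes log records through tqdm so the progress bar is not broken):

import logging

from homura.liblog import get_logger, set_tqdm_handler, tqdm

set_tqdm_handler(level=logging.INFO)
logger = get_logger(__name__)

for epoch in tqdm(range(10)):
    logger.info(f'epoch {epoch} finished')  # printed without breaking the bar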

homura.lr_scheduler module

homura.lr_scheduler.CosineAnnealingWithWarmup(total_epochs, multiplier, warmup_epochs, min_lr=0, last_epoch=-1)[source]
Parameters
  • total_epochs (int) –

  • multiplier (float) –

  • warmup_epochs (int) –

  • min_lr (float) –

  • last_epoch (int) –

homura.lr_scheduler.ExponentialLR(T_max, eta_min=0, last_epoch=-1)[source]
homura.lr_scheduler.LambdaLR(lr_lambda, last_epoch=-1)[source]
homura.lr_scheduler.MultiStepLR(milestones, gamma=0.1, last_epoch=-1)[source]
homura.lr_scheduler.ReduceLROnPlateau(mode='min', factor=0.1, patience=10, verbose=False, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08)[source]
homura.lr_scheduler.StepLR(step_size, gamma=0.1, last_epoch=-1)[source]
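
Unlike their torch.optim.lr_scheduler counterparts, these signatures take no optimizer, which suggests they are deferred constructors whose optimizer is bound later (for example by a trainer; see homura.trainers below). A minimal sketch under that assumption, with illustrative argument values:

from homura import lr_scheduler

# Configured without an optimizer; the consumer (e.g., a trainer) is
# assumed to attach its optimizer before stepping.
scheduler = lr_scheduler.CosineAnnealingWithWarmup(total_epochs=90,
                                                   multiplier=10,
                                                   warmup_epochs=5)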

homura.optim module

homura.optim.Adam(lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False, multi_tensor=False)[source]
Parameters

multi_tensor (bool) –

homura.optim.AdamW(lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False, multi_tensor=False)[source]
Parameters

multi_tensor (bool) –

homura.optim.RMSprop(lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False, multi_tensor=False)[source]
Parameters

multi_tensor (bool) –

homura.optim.SGD(lr=0.1, momentum=0, dampening=0, weight_decay=0, nesterov=False, multi_tensor=False)[source]
Parameters

multi_tensor (bool) –
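
As with homura.lr_scheduler, these factories take no model parameters up front, so they appear to be deferred constructors; a sketch under that assumption:

from homura import optim

# No parameters are passed here; the consumer (e.g., SupervisedTrainer)
# is assumed to call this factory with model.parameters() later.
optimizer = optim.SGD(lr=0.1, momentum=0.9, weight_decay=1e-4)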

homura.register module

class homura.register.Registry(name, type=None)[source]

Bases: object

Registry of models, datasets and anything you like.

model_registry = Registry('model')

@model_registry.register
def your_model(*args, **kwargs):
    return ...

# Retrieve the registered function by name, then call it.
your_model_instance = model_registry('your_model')(...)

# Registries are shared by name: constructing a Registry with an
# existing name returns the same instance.
model_registry2 = Registry('model')
assert model_registry is model_registry2
Parameters
  • name – name of registry. If name is already used, return that registry.

  • type – type of registered objects. If type is not None, objects are type-checked when they are registered.

classmethod available_registries(detailed=False)[source]
Parameters

detailed (bool) –

choices()[source]
help()[source]
static import_modules(package_name)[source]
Parameters

package_name (str) –

Return type

None
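
Presumably this imports every module under the given package so that their registration decorators run as a side effect; the package name below is a placeholder:

from homura.register import Registry

# Importing the package's modules triggers their @register decorators.
Registry.import_modules('your_project.models')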

register(func=None, *, name=None)[source]
Parameters
  • func (Optional[T]) –

  • name (Optional[str]) –

Return type

T

register_from_dict(name_to_func)[source]
Parameters

name_to_func (Dict[str, T]) –
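
For example, registering several existing callables at once (the names and factory functions below are illustrative):

from homura.register import Registry

def resnet20(num_classes=10):
    ...

def wrn28_2(num_classes=10):
    ...

model_registry = Registry('model')
# Each key becomes the registered name of the corresponding callable.
model_registry.register_from_dict({'resnet20': resnet20,
                                   'wrn28_2': wrn28_2})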

homura.reporters module

class homura.reporters.ReporterList(reporters)[source]

Bases: object

ReporterList is expected to be used in TrainerBase

add(key, value, *, is_averaged=True, reduction='average', no_sync=False)

Add value(s) to the reporter.

def iteration(self: TrainerBase, data: Tuple[Tensor, ...]):
    self.reporter.add_value('loss', loss.detach())
    self.reporter.add_value('miou', confusion_matrix(output, target), reduction=cm_to_miou)

Parameters
  • key (str) – Unique key used to track the value

  • value (torch.Tensor) – Value to be reported

  • is_averaged (bool) – Whether the value is already averaged over the batch

  • reduction (str) – Reduction applied at the end of an epoch: 'average', 'sum', or a function of List[Value] -> Value

  • no_sync (bool) – If True, skip synchronization across processes in distributed settings

Return type

None

add_figure(key, figure, step=None)[source]

Report a matplotlib figure.

Parameters
  • key (str) –

  • figure (matplotlib.pyplot.figure) –

  • step (Optional[int]) –

Return type

None
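
For instance, reporting a loss curve as a figure (a sketch; reporter stands for the trainer's ReporterList and losses for values collected elsewhere):

import matplotlib.pyplot as plt

def report_loss_curve(reporter, losses):
    # losses: a list of floats gathered during training (illustrative)
    fig = plt.figure()
    plt.plot(losses)
    reporter.add_figure('loss_curve', fig)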

add_histogram(key, value, step=None, bins='tensorflow')[source]

Report a histogram of the given tensor.

Parameters
  • key (str) –

  • value (torch.Tensor) –

  • step (Optional[int]) –

  • bins (str) –

Return type

None

add_image(key, image, step=None, normalize=False)[source]

Report a single image or a batch of images

Parameters
  • key (str) –

  • image (torch.Tensor) –

  • step (Optional[int]) –

  • normalize (bool) –

Return type

None

add_text(key, text, step=None)[source]

Report text

Parameters
  • key (str) –

  • text (str) –

  • step (Optional[int]) –

Return type

None

add_value(key, value, *, is_averaged=True, reduction='average', no_sync=False)[source]

Add value(s) to the reporter.

def iteration(self: TrainerBase, data: Tuple[Tensor, ...]):
    self.reporter.add_value('loss', loss.detach())
    self.reporter.add_value('miou', confusion_matrix(output, target), reduction=cm_to_miou)

Parameters
  • key (str) – Unique key used to track the value

  • value (torch.Tensor) – Value to be reported

  • is_averaged (bool) – Whether the value is already averaged over the batch

  • reduction (str) – Reduction applied at the end of an epoch: 'average', 'sum', or a function of List[Value] -> Value

  • no_sync (bool) – If True, skip synchronization across processes in distributed settings

Return type

None

exit()[source]
Return type

None

property history
report(step=None, mode='')[source]
Parameters
  • step (Optional[int]) –

  • mode (str) –

Return type

None

set_batch_size(batch_size)[source]
Parameters

batch_size (int) –

Return type

None

class homura.reporters.TQDMReporter(ncols=80)[source]

Bases: homura.reporters._ReporterBase

add_scalar(key, value, step=None)[source]
Parameters
  • key (str) –

  • value (numbers.Number) –

  • step (Optional[int]) –

Return type

None

add_scalars(key, value, step=None)[source]
Parameters
  • key (str) –

  • value (Dict[str, numbers.Number]) –

  • step (Optional[int]) –

Return type

None

add_text(key, value, step=None)[source]
Parameters
  • key (str) –

  • value (str) –

  • step (Optional[int]) –

Return type

None

flush()[source]
set_iterator(iterator)[source]
Parameters

iterator (Iterator) –

Return type

None

class homura.reporters.TensorboardReporter(save_dir=None)[source]

Bases: homura.reporters._ReporterBase

add_audio(key, audio, step=None)[source]
Parameters
  • key (str) –

  • audio (torch.Tensor) –

  • step (Optional[int]) –

Return type

None

add_figure(key, figure, step=None)[source]
Parameters
  • key (str) –

  • figure (matplotlib.pyplot.figure) –

  • step (Optional[int]) –

Return type

None

add_histogram(key, values, step, bins='tensorflow')[source]
Parameters
  • key (str) –

  • values (torch.Tensor) –

  • step (Optional[int]) –

  • bins (str) –

Return type

None

add_image(key, image, step=None)[source]
Parameters
  • key (str) –

  • image (torch.Tensor) –

  • step (Optional[int]) –

Return type

None

add_scalar(key, value, step=None)[source]
Parameters
  • key (str) –

  • value (Any) –

  • step (Optional[int]) –

Return type

None

add_scalars(key, value, step=None)[source]
Parameters
  • key (str) –

  • value (Dict[str, Any]) –

  • step (Optional[int]) –

Return type

None

add_text(key, value, step=None)[source]
Parameters
  • key (str) –

  • value (str) –

  • step (Optional[int]) –

Return type

None

homura.trainers module

class homura.trainers.SupervisedTrainer(model, optimizer, loss_f, *, reporters=None, scheduler=None, quiet=False, disable_cudnn_benchmark=False, data_parallel=False, use_amp=False, use_channel_last=False, report_accuracy_topk=None, **kwargs)[source]

Bases: homura.trainers.TrainerBase

A simple trainer for supervised image classification. It accepts only a single model and is AMP-ready.

data_preprocess(data)[source]

Preprocess data and return (TensorTuple, batch_size).

Parameters

data (Tuple[torch.Tensor, torch.Tensor]) –

Return type

Tuple[Tuple[torch.Tensor, torch.Tensor], int]

iteration(data)[source]
Parameters

data (Tuple[torch.Tensor, torch.Tensor]) –

Return type

None

load_state_dict(state_dict)[source]
Parameters

state_dict (Mapping[str, Any]) –

Return type

None

state_dict()[source]
Return type

Mapping[str, Any]
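
Putting these pieces together, a typical training script might look like the following sketch. It is illustrative only: it assumes the factory-style objects from homura.optim and homura.lr_scheduler are accepted by the trainer, as their optimizer-less signatures suggest, and uses a tiny synthetic dataset in place of real data.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

from homura import lr_scheduler, optim, reporters, trainers

# Tiny synthetic classification dataset (illustrative only).
x, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=32)
test_loader = DataLoader(TensorDataset(x, y), batch_size=32)

model = nn.Linear(10, 2)
optimizer = optim.SGD(lr=0.1, momentum=0.9)          # deferred: no parameters yet
scheduler = lr_scheduler.MultiStepLR(milestones=[5, 8])
trainer = trainers.SupervisedTrainer(model, optimizer, F.cross_entropy,
                                     reporters=[reporters.TQDMReporter()],
                                     scheduler=scheduler)

for _ in trainer.epoch_range(10):   # iterate epochs with a tqdm progress bar
    trainer.train(train_loader)
    trainer.test(test_loader)

trainer.exit()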

class homura.trainers.TrainerBase(model, optimizer, loss_f=None, *, reporters=None, scheduler=None, device=None, quiet=False, disable_cudnn_benchmark=False, disable_cuda_nonblocking=False, logger=None, use_sync_bn=False, tqdm_ncols=120, debug=False, **kwargs)[source]

Bases: homura.utils._mixin.StateDictMixIn

Base class for trainers.

Parameters
  • model – model to be trained

  • optimizer – optimizer for the model

  • loss_f – loss function for training

  • reporters – list of reporters

  • scheduler – learning rate scheduler

  • device – device to be used

  • quiet – True to disable tqdm

  • disable_cudnn_benchmark – True to disable cudnn benchmark mode

  • disable_cuda_nonblocking – True to disable cuda nonblocking

  • logger – optional logger

  • use_sync_bn – True to convert BN to sync BN

  • tqdm_ncols – number of columns of tqdm

  • kwargs

data_preprocess(data)[source]

Preprocess data and return (TensorTuple, batch_size).

Parameters

data (Tuple[torch.Tensor, ...]) –

Return type

Tuple[Tuple[torch.Tensor, ...], int]

property epoch
epoch_range(epoch)[source]
Parameters

epoch (int) –

Return type

homura.reporters.TQDMReporter

exit()[source]
property history
property is_train
abstract iteration(data)[source]
Parameters

data (Tuple[torch.Tensor, ...]) –

Return type

None

override_iteration(new_iteration)[source]

Override the iteration method.

def new_iteration(trainer, data):
    input, target = data
    ...
    results.loss = loss
    return results

trainer.override_iteration(new_iteration)
Parameters

new_iteration (Callable[[Tuple], None]) –

Return type

None

run(train_loader, val_loaders, total_iterations, val_intervals)[source]

Train the model for a given number of iterations. This method is roughly equivalent to:

for ep in range(total_iterations):
    trainer.train(train_loader)
    for k, v in val_loaders.items():
        trainer.test(v, k)
Parameters
  • train_loader (Iterable) –

  • val_loaders (Iterable or Mapping[str, Iterable]) – validation loader(s); the example above assumes a mapping from loop name to loader

  • total_iterations (int) –

  • val_intervals (int) –

Return type

None

set_optimizer()[source]

Set optimizer(s) for the model(s). You can override it as:

class YourTrainer(TrainerBase):
    def set_optimizer(self):
        self.optimizer = torch.optim.SGD(self.model.parameters(), lr=0.1)
Return type

None

set_scheduler()[source]

Set scheduler(s) for the optimizer(s). You can override it as:

class YourTrainer(TrainerBase):
    def set_scheduler(self):
        self.scheduler = torch.optim.lr_scheduler.StepLR(self.optimizer, step_size=30)
Return type

None

property step
test(data_loader, mode='test')[source]

Evaluate the model.

Parameters
  • data_loader (Iterable) –

  • mode (str) – Name of this loop. Default is test. Passed to callbacks.

Return type

None

train(data_loader, mode='train')[source]

Train the model for an epoch.

Parameters
  • data_loader (Iterable) –

  • mode (str) – Name of this loop. Default is train. Passed to callbacks.

Return type

None

Module contents