homura.utils package

Submodules

homura.utils.backends module

Helper functions to convert PyTorch Tensors <-> CuPy/NumPy arrays. These functions are useful for writing device-agnostic extensions.

homura.utils.backends.einsum(expr, *xs)[source]
Parameters

expr (str) –
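
Only the signature is documented here, so the following usage sketch is an assumption that the expression syntax follows torch.einsum:

import torch
from homura.utils.backends import einsum

a, b = torch.randn(2, 3), torch.randn(3, 4)
c = einsum('ij,jk->ik', a, b)  # assumed to behave like torch.einsum, i.e. a @ b here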

homura.utils.backends.torch_to_xp(input)[source]

Convert a PyTorch tensor to a CuPy/NumPy array.

Parameters

input (torch.Tensor) –

Return type

numpy.ndarray

homura.utils.backends.xp_to_torch(input)[source]

Convert a CuPy/NumPy array to a PyTorch tensor.

Parameters

input (numpy.ndarray) –

Return type

torch.Tensor
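
A hedged round-trip sketch: a CPU tensor yields a NumPy array and, per the module description, a CUDA tensor would yield a CuPy array instead:

import torch
from homura.utils.backends import torch_to_xp, xp_to_torch

t = torch.arange(4, dtype=torch.float32)
arr = torch_to_xp(t)   # numpy.ndarray (cupy.ndarray for CUDA tensors)
arr = arr * 2          # compute with the array library
t2 = xp_to_torch(arr)  # back to a torch.Tensor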

homura.utils.benchmarks module

homura.utils.benchmarks.timeit(func=None, num_iters=100, warmup_iters=None)[source]

A simple timeit for GPU operations.

>>> import torch
>>> from homura.utils.benchmarks import timeit
>>> a, b = torch.randn(128, 128), torch.randn(128, 128)  # example inputs
>>> @timeit(num_iters=100, warmup_iters=100)
... def mm(a, b):
...     return a @ b
>>> mm(a, b)
[homura.utils.benchmarks|2019-11-24 06:40:46|INFO] f requires 0.000021us per iteration

Parameters
  • func (Optional[Callable]) –

  • num_iters (int) –

  • warmup_iters (Optional[int]) –

homura.utils.containers module

Useful containers for PyTorch tensors and other objects.

class homura.utils.containers.StepDict(_type, **kwargs)[source]

Bases: dict

Dictionary whose step, state_dict, load_state_dict and zero_grad methods are forwarded to all of its values. Intended to be used with Optimizer and lr_scheduler instances:

sd = StepDict(Optimizer, generator=Adam(...), discriminator=Adam(...))
sd.step()
# equivalent to calling generator.step() and discriminator.step()
Parameters
  • _type (Type) –

  • kwargs

load_state_dict(state_dicts)[source]
Parameters

state_dicts (dict) –

state_dict()[source]
Return type

dict[str, typing.Any]

step()[source]
zero_grad()[source]
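
A more complete, hedged sketch of the same pattern; module shapes, learning rates and the comment on the state_dict layout are illustrative assumptions:

import torch
from torch import nn
from torch.optim import Adam, Optimizer
from homura.utils.containers import StepDict

generator = nn.Linear(10, 10)
discriminator = nn.Linear(10, 1)
sd = StepDict(Optimizer,
              generator=Adam(generator.parameters(), lr=1e-3),
              discriminator=Adam(discriminator.parameters(), lr=1e-3))

loss = discriminator(generator(torch.randn(4, 10))).mean()
loss.backward()
sd.step()                # calls .step() on every stored optimizer
sd.zero_grad()           # calls .zero_grad() on every stored optimizer
state = sd.state_dict()  # collects each optimizer's state_dict (assumed layout)
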
class homura.utils.containers.TensorDataClass[source]

Bases: object

TensorDataClass is an extension of dataclass that can handle tensors easily.

Return type

None

to(*args, **kwargs)[source]
class homura.utils.containers.TensorTuple(iterable=(), /)[source]

Bases: tuple

Tuple for tensors.

to(*args, **kwargs)[source]

Move the stored tensors to a given device.
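
A minimal sketch, assuming to() returns a new TensorTuple with torch.Tensor.to applied to every element:

import torch
from homura.utils.containers import TensorTuple

ts = TensorTuple((torch.randn(3), torch.randn(3)))
ts_half = ts.to(dtype=torch.float16)  # assumed: forwards the arguments to each tensor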

homura.utils.containers.tensor_dataclass(cls=None, **kwargs)[source]

Helper function to create a TensorDataClass, expected to be used as a decorator:

@tensor_dataclass
class YourTensorClass(TensorDataClass):
    __slots__ = ('pred', 'loss')
    pred: torch.Tensor
    loss: torch.Tensor

x = YourTensorClass(prediction, loss)
x_cuda = x.to('cuda')
x_int = x.to(dtype=torch.int32)
pred, loss = x
loss = x.loss
loss = x['loss']
Parameters
  • cls – wrapped class

  • kwargs – kwargs to dataclasses.dataclass

Return type

homura.utils.containers.TensorDataClass

homura.utils.distributed module

Helper functions to make distributed training easy.

homura.utils.distributed.distributed_print(self, *args, sep=' ', end='\n', file=None)[source]

Print something on any node.

Return type

None

homura.utils.distributed.distributed_ready_main(func=None, backend=None, init_method=None, disable_distributed_print=False)[source]

Wrap a main function to make it distributed-ready.

Parameters
  • func (Optional[Callable]) –

  • backend (Optional[str]) –

  • init_method (Optional[str]) –

  • disable_distributed_print (bool) –

Return type

Callable
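
A hedged sketch of a distributed-ready entry point; leaving backend and init_method as None presumably falls back to the library defaults, and launching via torchrun or a similar launcher is assumed:

from homura.utils.distributed import distributed_ready_main, get_world_size, is_master

@distributed_ready_main
def main():
    if is_master():
        print(f"world size: {get_world_size()}")

if __name__ == '__main__':
    main()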

homura.utils.distributed.get_global_rank()[source]

Get the global rank of the process. 0 if the process is the master.

Return type

int

homura.utils.distributed.get_local_rank()[source]

Get the local rank of the process, i.e., the process index within its node.

Return type

int

homura.utils.distributed.get_num_nodes()[source]

Get the number of nodes. Note that this function assumes all nodes have the same number of processes.

Return type

int

homura.utils.distributed.get_world_size()[source]

Get the world size, i.e., the total number of processes.

Return type

int
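
These helpers can be combined to shard work across processes; a hedged sketch, assuming the process group (or the relevant environment variables) has already been set up:

from homura.utils.distributed import get_global_rank, get_world_size

data = list(range(1_000))
rank, world_size = get_global_rank(), get_world_size()
shard = data[rank::world_size]  # each process works on a disjoint slice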

homura.utils.distributed.if_is_master(func)[source]

Wrap a void function so that it runs only on the master process:

@if_is_master
def print_master(message):
    print(message)

Parameters

func (Callable) – Any function

Return type

Callable

homura.utils.distributed.init_distributed(backend=None, init_method=None, disable_distributed_print=False)[source]

Simple initializer for distributed training. This function replaces the built-in print with _print_if_master (unless disable_distributed_print is set).

Parameters
  • backend (Optional[str]) – backend of torch.distributed.init_process_group

  • init_method (Optional[str]) – init_method of torch.distributed.init_process_group

  • disable_distributed_print (bool) –

Returns

None

Return type

None
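
A minimal sketch of manual initialization, as opposed to the distributed_ready_main decorator above; the backend and init_method values are illustrative:

from homura.utils.distributed import get_global_rank, init_distributed

init_distributed(backend='nccl', init_method='env://')
print(f"initialized rank {get_global_rank()}")  # printed on the master only, per the note above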

homura.utils.distributed.is_distributed()[source]

Check whether the process is running in distributed mode, i.e., whether the world size is larger than 1.

Return type

bool

homura.utils.distributed.is_distributed_available()[source]
Return type

bool

homura.utils.distributed.is_master()[source]
Return type

bool

homura.utils.environment module

Helper functions to get information about the environment.

homura.utils.environment.disable_tf32()[source]

Globally disable TF32.

Return type

None

class homura.utils.environment.disable_tf32_locally[source]

Bases: object

Locally disable TF32

>>> with disable_tf32_locally():
...     ...

or

>>> @disable_tf32_locally()
... def function():
...     ...

homura.utils.environment.enable_accimage()[source]
Return type

None

homura.utils.environment.get_args()[source]
Return type

list

homura.utils.environment.get_environ(name, default=None)[source]
Parameters
  • name (str) –

  • default (Optional[Any]) –

Return type

str
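
A small hedged sketch: read an environment variable with a fallback; since the return type is documented as str, numeric values need explicit conversion ('NUM_WORKERS' is an illustrative variable name):

from homura.utils.environment import get_environ

num_workers = int(get_environ('NUM_WORKERS', default='4'))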

homura.utils.environment.get_git_hash()[source]
Return type

str

homura.utils.environment.is_accimage_available()[source]
Return type

bool

homura.utils.environment.is_cupy_available()[source]
Return type

bool

homura.utils.environment.is_faiss_available()[source]
Return type

bool

homura.utils.environment.is_opteinsum_available()[source]
Return type

bool

homura.utils.grad_tools module

homura.utils.grad_tools.ggnvp(loss, f, p, v)[source]

Generalized Gauss-Newton vector product. When loss = F.cross_entropy(output, target), the GGN matrix is equivalent to the Fisher information matrix.

Parameters
  • loss (torch.Tensor) –

  • f (torch.Tensor) –

  • p (nn.Parameter | list[nn.Parameter]) –

  • v (torch.Tensor) –

Return type

torch.Tensor

homura.utils.grad_tools.hvp(loss, f, p, v)[source]

Hessian-vector product

Parameters
  • loss (torch.Tensor) –

  • f (torch.Tensor) –

  • p (torch.nn.parameter.Parameter) –

  • v (torch.Tensor) –

Return type

torch.Tensor

homura.utils.grad_tools.jvp(f, p, v)[source]

Jacobian-vector product

Parameters
  • f (torch.Tensor) –

  • p (nn.Parameter | list[nn.Parameter]) –

  • v (torch.Tensor) –

Return type

torch.Tensor

homura.utils.grad_tools.param_to_vector(parameters)[source]
Parameters

parameters (collections.abc.Iterable[torch.Tensor]) –

Return type

torch.Tensor

homura.utils.grad_tools.vjp(f, p, v, *, only_retain_graph=False)[source]

Vector-Jacobian product

Parameters
  • f (torch.Tensor) –

  • p (nn.Parameter | list[nn.Parameter]) –

  • v (torch.Tensor) –

  • only_retain_graph (bool) –

Return type

torch.Tensor
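
The signatures above only hint at the calling conventions, so the following is a hedged sketch: it assumes vjp(f, p, v) returns the vector-Jacobian product of an output tensor f with respect to parameters p, weighted by a vector v shaped like f, and that param_to_vector flattens parameters into a single vector:

import torch
from torch import nn
from homura.utils.grad_tools import param_to_vector, vjp

model = nn.Linear(3, 2)
x = torch.randn(5, 3)
f = model(x)                     # output tensor that depends on the parameters
v = torch.ones_like(f)           # vector with the same shape as f (assumed requirement)
params = list(model.parameters())
g = vjp(f, params, v)            # assumed: v^T (df/dparams), returned as a tensor
theta = param_to_vector(params)  # assumed: parameters flattened into one vector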

homura.utils.reproducibility module

homura.utils.reproducibility.set_deterministic(seed=None, by_rank=False)[source]

Set the seeds of torch, random and numpy to seed to make computation deterministic. Because of CUDA's limitations, this may not make everything deterministic, however.

Parameters
  • seed (int) –

  • by_rank (bool) –
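
A minimal sketch: call it once at the top of a training script (the seed value is illustrative):

from homura.utils.reproducibility import set_deterministic

set_deterministic(seed=0)
# ... build models, data loaders, and train as usual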

homura.utils.reproducibility.set_seed(seed=None, by_rank=False)[source]

Fix the seed of the random generators within the given context.

>>> with set_seed(0):
...     do_some_random_thing()
Parameters
  • seed (int) –

  • by_rank (bool) –

Module contents