homura.utils package¶
Submodules¶
homura.utils.backends module¶
Helper functions to convert PyTorch tensors to and from CuPy/NumPy arrays. These functions are useful for writing device-agnostic extensions.
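As an illustration of the idea only (not homura's actual API; the helper name to_xp is hypothetical), such a device-agnostic conversion can be built on DLPack:

import torch

def to_xp(tensor: torch.Tensor):
    # Hypothetical helper: returns a CuPy array for CUDA tensors and a
    # NumPy array for CPU tensors, so callers stay device-agnostic.
    if tensor.is_cuda:
        import cupy
        return cupy.from_dlpack(tensor)  # zero-copy via DLPack
    return tensor.detach().cpu().numpy()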
homura.utils.benchmarks module¶
- homura.utils.benchmarks.timeit(func=None, num_iters=100, warmup_iters=None)[source]¶
A simple timeit for GPU operations.
>>> @timeit(num_iters=100, warmup_iters=100)
>>> def mm(a, b):
>>>     return a @ b
>>> mm(a, b)
[homura.utils.benchmarks|2019-11-24 06:40:46|INFO] f requires 0.000021us per iteration
- Parameters
func (Optional[Callable]) –
num_iters (int) –
warmup_iters (Optional[int]) –
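A self-contained usage sketch (the tensors a and b are illustrative; assumes a CUDA device is available):

import torch
from homura.utils.benchmarks import timeit

a = torch.randn(512, 512, device='cuda')
b = torch.randn(512, 512, device='cuda')

@timeit(num_iters=100, warmup_iters=100)
def mm(a, b):
    return a @ b

mm(a, b)  # logs the measured time per iteration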
homura.utils.containers module¶
Useful containers for PyTorch tensors and others
- class homura.utils.containers.StepDict(_type, **kwargs)[source]¶
Bases:
dict
Dictionary with step, state_dict, load_state_dict, and zero_grad methods. Intended to be used with Optimizer or lr_scheduler:
sd = StepDict(Optimizer, generator=Adam(...), discriminator=Adam(...))
sd.step()  # equivalent to generator.step(); discriminator.step()
- Parameters
_type (Type) –
kwargs –
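A fuller sketch in a GAN-like setting (the models and loss are illustrative); step, zero_grad, and state_dict all fan out to every contained optimizer:

import torch
from torch.optim import Adam, Optimizer
from homura.utils.containers import StepDict

generator = torch.nn.Linear(8, 8)
discriminator = torch.nn.Linear(8, 1)
sd = StepDict(Optimizer,
              generator=Adam(generator.parameters()),
              discriminator=Adam(discriminator.parameters()))

loss = discriminator(generator(torch.randn(4, 8))).mean()
sd.zero_grad()
loss.backward()
sd.step()                # steps both optimizers
state = sd.state_dict()  # checkpoints both at once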
- class homura.utils.containers.TensorDataClass[source]¶
Bases:
object
TensorDataClass is an extension of dataclass that can handle tensors easily.
- Return type
None
- homura.utils.containers.tensor_dataclass(cls=None, **kwargs)[source]¶
Helper function to create a TensorDataClass, expected to be used as a decorator:
@tensor_dataclass
class YourTensorClass(TensorDataClass):
    __slots__ = ('pred', 'loss')
    pred: torch.Tensor
    loss: torch.Tensor

x = YourTensorClass(prediction, loss)
x_cuda = x.to('cuda')
x_int = x.to(dtype=torch.int32)
pred, loss = x
loss = x.loss
loss = x['loss']
- Parameters
cls – wrapped class
kwargs – kwargs to dataclasses.dataclass
- Returns
the decorated class, or a decorator if cls is None
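A runnable version of the pattern above (the class name and field values are illustrative):

import torch
from homura.utils.containers import TensorDataClass, tensor_dataclass

@tensor_dataclass
class Output(TensorDataClass):
    __slots__ = ('pred', 'loss')
    pred: torch.Tensor
    loss: torch.Tensor

out = Output(torch.randn(4, 10), torch.tensor(0.5))
out_half = out.to(dtype=torch.float16)  # .to is applied to every tensor field
pred, loss = out                        # tuple-style unpacking
loss = out['loss']                      # dict-style access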
homura.utils.distributed module¶
Helper functions to make distributed training easy
- homura.utils.distributed.distributed_print(self, *args, sep=' ', end='\n', file=None)[source]¶
Print something on any node.
- Return type
None
- homura.utils.distributed.distributed_ready_main(func=None, backend=None, init_method=None, disable_distributed_print=False)[source]¶
Wrap a main function to make it distributed-ready.
- Parameters
func (Optional[Callable]) –
backend (Optional[str]) –
init_method (Optional[str]) –
disable_distributed_print (bool) –
- Return type
Callable
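A sketch of the decorator in use (the main body is illustrative):

from homura.utils.distributed import distributed_ready_main, get_global_rank

@distributed_ready_main
def main():
    # Runs on every process after distributed initialization.
    print(f'hello from rank {get_global_rank()}')

if __name__ == '__main__':
    main()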
- homura.utils.distributed.get_global_rank()[source]¶
Get the global rank of the process. 0 if the process is the master.
- Return type
int
- homura.utils.distributed.get_local_rank()[source]¶
Get the local rank of the process, i.e., the rank of the process within its node.
- Return type
int
- homura.utils.distributed.get_num_nodes()[source]¶
Get the number of nodes. Note that this function assumes all nodes have the same number of processes.
- Return type
int
- homura.utils.distributed.get_world_size()[source]¶
Get the world size, i.e., the total number of processes.
- Return type
int
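The four getters relate as follows; the 2-node, 4-process-per-node job in the comments is an assumed example:

from homura.utils.distributed import (get_global_rank, get_local_rank,
                                      get_num_nodes, get_world_size)

# In a job with 2 nodes and 4 processes per node, the 6th process sees
# get_global_rank() == 5, get_local_rank() == 1,
# get_num_nodes() == 2, and get_world_size() == 8.
if get_global_rank() == 0:
    print(f'{get_world_size()} processes over {get_num_nodes()} node(s)')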
- homura.utils.distributed.if_is_master(func)[source]¶
Wrap a void function so that it is active only on the master process:

@if_is_master
def print_master(message):
    print(message)
- Parameters
func (Callable) – Any function
- Return type
Callable
- homura.utils.distributed.init_distributed(backend=None, init_method=None, disable_distributed_print=False)[source]¶
Simple initializer for distributed training. This function substitutes the built-in print function with _print_if_master.
- Parameters
backend (Optional[str]) – backend of torch.distributed.init_process_group
init_method (Optional[str]) – init_method of torch.distributed.init_process_group
disable_distributed_print (bool) –
- Returns
None
- Return type
None
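A minimal initialization sketch (assumes the process was launched with torchrun or similar so the rendezvous environment variables are set; the backend choice is an example):

from homura.utils.distributed import init_distributed, get_global_rank

init_distributed(backend='nccl', init_method='env://')
print(f'initialized rank {get_global_rank()}')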
homura.utils.environment module¶
Helper functions to get information about the environment.
- class homura.utils.environment.disable_tf32_locally[source]¶
Bases:
object
Locally disable TF32
>>> with disable_tf32_locally():
>>>     ...
or
>>> @disable_tf32_locally()
>>> def function():
>>>     ...
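For example, to force full-precision float32 matmuls on Ampere-or-newer GPUs (the tensors are illustrative):

import torch
from homura.utils.environment import disable_tf32_locally

a = torch.randn(1024, 1024, device='cuda')
with disable_tf32_locally():
    exact = a @ a  # runs in full FP32 precision instead of TF32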
homura.utils.grad_tools module¶
- homura.utils.grad_tools.ggnvp(loss, f, p, v)[source]¶
Generalized Gauss-Newton (GGN) vector product. When loss = F.cross_entropy(output, target), the GGN matrix is equivalent to the Fisher information matrix.
- Parameters
loss (torch.Tensor) –
f (torch.Tensor) –
p (nn.Parameter | list[nn.Parameter]) –
v (torch.Tensor) –
- Return type
torch.Tensor
- homura.utils.grad_tools.hvp(loss, f, p, v)[source]¶
Hessian vector product
- Parameters
loss (torch.Tensor) –
f (torch.Tensor) –
p (torch.nn.parameter.Parameter) –
v (torch.Tensor) –
- Return type
torch.Tensor
- homura.utils.grad_tools.jvp(f, p, v)[source]¶
Jacobian vector product
- Parameters
f (torch.Tensor) –
p (nn.Parameter | list[nn.Parameter]) –
v (torch.Tensor) –
- Return type
torch.Tensor
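A combined sketch of the three products on a small model; the shape of v (matching p) is an assumption about the expected interface:

import torch
import torch.nn.functional as F
from homura.utils.grad_tools import ggnvp, hvp, jvp

model = torch.nn.Linear(10, 3)
f = model(torch.randn(4, 10))           # network output
loss = F.cross_entropy(f, torch.randint(3, (4,)))
v = torch.randn_like(model.weight)      # vector shaped like p

Jv = jvp(f, model.weight, v)            # Jacobian-vector product of f w.r.t. p
Hv = hvp(loss, f, model.weight, v)      # Hessian-vector product of loss
Gv = ggnvp(loss, f, model.weight, v)    # GGN-vector product; equals Fisher here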