homura.utils package¶
Submodules¶
homura.utils.backends module¶
Helper functions to convert PyTorch tensors to and from CuPy/NumPy arrays. These functions are useful for writing device-agnostic extensions.
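As an illustration of the kind of conversion these helpers wrap, here is a minimal, hedged sketch built only on public PyTorch/NumPy calls; the names to_numpy and to_tensor are hypothetical stand-ins, not the actual homura.utils.backends API, and the CuPy path (e.g. via DLPack) is omitted.

import numpy as np
import torch


def to_numpy(x: torch.Tensor) -> np.ndarray:
    # Hypothetical helper: detach from autograd, move to host memory,
    # and view the data as a NumPy array.
    return x.detach().cpu().numpy()


def to_tensor(x: np.ndarray, device: str = 'cpu') -> torch.Tensor:
    # Hypothetical helper: wrap the array as a tensor and move it to the
    # requested device, e.g. 'cuda' when a GPU is available.
    return torch.from_numpy(x).to(device)


device = 'cuda' if torch.cuda.is_available() else 'cpu'
t = torch.randn(3, device=device)
a = to_numpy(t)                    # NumPy array in host memory
t2 = to_tensor(a, device=device)   # back on the original device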
homura.utils.benchmarks module¶
homura.utils.benchmarks.timeit(func=None, num_iters=100, warmup_iters=None)[source]¶
A simple timeit for GPU operations:

>>> @timeit(num_iters=100, warmup_iters=100)
>>> def mm(a, b):
>>>     return a @ b
>>> mm(a, b)
[homura.utils.benchmarks|2019-11-24 06:40:46|INFO] f requires 0.000021us per iteration
- Parameters
func (Optional[Callable]) –
num_iters (Optional[int]) –
warmup_iters (Optional[int]) –
homura.utils.containers module¶
Useful containers for PyTorch tensors and others
class homura.utils.containers.StepDict(_type, **kwargs)[source]¶
Bases: dict
Dictionary with step, state_dict, load_state_dict, and zero_grad methods. Intended to be used with Optimizer or lr_scheduler instances:

sd = StepDict(Optimizer, generator=Adam(...), discriminator=Adam(...))
sd.step()  # equivalent to generator.step(); discriminator.step()
- Parameters
_type –
kwargs –
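The docstring above also lists state_dict, load_state_dict, and zero_grad; the following hedged sketch shows how they might be used together (the toy modules and optimizer arguments are illustrative, not taken from homura):

import torch
from torch.optim import Adam, Optimizer
from homura.utils.containers import StepDict

# Toy modules standing in for a generator and a discriminator.
gen = torch.nn.Linear(8, 8)
dis = torch.nn.Linear(8, 1)

sd = StepDict(Optimizer,
              generator=Adam(gen.parameters()),
              discriminator=Adam(dis.parameters()))

sd.zero_grad()                             # zero_grad() on every optimizer
loss = dis(gen(torch.randn(2, 8))).mean()
loss.backward()
sd.step()                                  # step() on every optimizer

state = sd.state_dict()                    # collect state from both entries
sd.load_state_dict(state)                  # restore both entries at once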
class homura.utils.containers.TensorDataClass[source]¶
Bases: object
TensorDataClass is an extension of dataclass that can handle tensors easily.
homura.utils.containers.tensor_dataclass(cls=None, **kwargs)[source]¶
Helper function to create a TensorDataClass, expected to be used as a decorator:

@tensor_dataclass
class YourTensorClass(TensorDataClass):
    __slots__ = ('pred', 'loss')
    pred: torch.Tensor
    loss: torch.Tensor

x = YourTensorClass(prediction, loss)
x_cuda = x.to('cuda')
x_int = x.to(dtype=torch.int32)
pred, loss = x
loss = x.loss
loss = x['loss']
- Parameters
cls – wrapped class
kwargs – kwargs to dataclasses.dataclass
- Returns
- Return type
homura.utils.distributed module¶
Helper functions to make distributed training easy
homura.utils.distributed.distributed_print(self, *args, sep=' ', end='\n', file=None)[source]¶
Print something on any node.
- Return type
None
homura.utils.distributed.distributed_ready_main(func=None, backend=None, init_method=None, disable_distributed_print=False)[source]¶
Wrap a main function to make it distributed-ready.
- Parameters
func (Optional[Callable]) –
backend (Optional[str]) –
init_method (Optional[str]) –
disable_distributed_print (bool) –
- Return type
Callable
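A hedged usage sketch of the decorator form suggested by the optional func argument; the backend and init_method values are illustrative only:

from homura.utils.distributed import distributed_ready_main


@distributed_ready_main(backend='nccl', init_method='env://')
def main():
    # Training code placed here runs after the distributed backend is set up.
    ...


if __name__ == '__main__':
    main()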
homura.utils.distributed.get_global_rank()[source]¶
Get the global rank of the process. 0 if the process is the master.
- Return type
int
homura.utils.distributed.get_local_rank()[source]¶
Get the local rank of the process, i.e., the index of the process within its node.
- Return type
int
homura.utils.distributed.get_num_nodes()[source]¶
Get the number of nodes. Note that this function assumes all nodes have the same number of processes.
- Return type
int
homura.utils.distributed.get_world_size()[source]¶
Get the world size, i.e., the total number of processes.
- Return type
int
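A short usage sketch combining these getters; it relies only on the calls documented above:

from homura.utils.distributed import (get_global_rank, get_local_rank,
                                      get_num_nodes, get_world_size)

if get_global_rank() == 0:  # report once, from the master process
    print(f'{get_world_size()} processes across {get_num_nodes()} node(s)')
print(f'process {get_global_rank()} has local rank {get_local_rank()} on its node')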
homura.utils.distributed.if_is_master(func)[source]¶
Wrap a void function so that it is active only in the master process:

@if_is_master
def print_master(message):
    print(message)
- Parameters
func (Callable) – Any function
- Return type
Callable
homura.utils.distributed.init_distributed(use_horovod=False, backend=None, init_method=None, disable_distributed_print=False)[source]¶
Simple initializer for distributed training. This function substitutes the built-in print function with _print_if_master.
- Parameters
use_horovod (bool) – Whether to use Horovod as the distributed backend
backend (Optional[str]) – backend of torch.distributed.init_process_group
init_method (Optional[str]) – init_method of torch.distributed.init_process_group
disable_distributed_print (bool) –
- Returns
None
- Return type
None
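A minimal sketch of calling init_distributed directly rather than via the decorator above; the backend and init_method values are just examples of what torch.distributed.init_process_group accepts:

from homura.utils.distributed import get_global_rank, init_distributed


def main():
    init_distributed(backend='nccl', init_method='env://')
    # After initialization, print only emits output on the master process
    # (unless disable_distributed_print is set).
    print(f'running as global rank {get_global_rank()}')


if __name__ == '__main__':
    main()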
homura.utils.environment module¶
Helper functions to get information about the environment.