homura.modules package¶
Subpackages¶
Submodules¶
homura.modules.attention module¶
- class homura.modules.attention.AttentionPool2d(embed_dim, num_heads)[source]¶
Bases: torch.nn.modules.module.Module
- Parameters
embed_dim (int) –
num_heads (int) –
- forward(x)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters
x (torch.Tensor) –
- Return type
torch.Tensor
- training: bool¶
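A minimal usage sketch; the (batch, embed_dim, height, width) input layout and the pooled output are assumptions based on the class name, not something this page documents:
import torch
from homura.modules.attention import AttentionPool2d

pool = AttentionPool2d(embed_dim=64, num_heads=8)
x = torch.randn(2, 64, 7, 7)    # assumed layout: (batch, embed_dim, height, width)
out = pool(x)                   # a pooled torch.Tensor per sample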
- class homura.modules.attention.KeyValAttention(scaling=False, dropout_prob=0)[source]¶
Bases: torch.nn.modules.module.Module
Key-value attention.
- Parameters
scaling (bool) –
dropout_prob (float) –
- forward(query, key, value, mask=None, additive_mask=None)[source]¶
See functional.attention.kv_attention for details
- Parameters
query (torch.Tensor) –
key (torch.Tensor) –
value (torch.Tensor) –
mask (Optional[torch.Tensor]) –
additive_mask (Optional[torch.Tensor]) –
- Returns
- Return type
tuple[torch.Tensor, torch.Tensor]
- training: bool¶
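A minimal usage sketch, assuming the usual (batch, sequence, feature) layout; see functional.attention.kv_attention for the exact contract:
import torch
from homura.modules.attention import KeyValAttention

attention = KeyValAttention(scaling=True, dropout_prob=0.1)
query = torch.randn(2, 5, 32)    # assumed (batch, query length, feature)
key = torch.randn(2, 10, 32)     # assumed (batch, key length, feature)
value = torch.randn(2, 10, 32)
output, weights = attention(query, key, value)   # tuple[torch.Tensor, torch.Tensor]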
homura.modules.discretization module¶
- class homura.modules.discretization.GumbelSigmoid(temp=0.1, threshold=0.5)[source]¶
Bases: torch.nn.modules.module.Module
This module outputs gumbel_sigmoid(input) during training and input.sigmoid() >= threshold during evaluation
- Parameters
temp (float) –
threshold (float) –
- forward(input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters
input (torch.Tensor) –
- Return type
torch.Tensor
- training: bool¶
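A usage sketch of the train/eval behaviour described above:
import torch
from homura.modules.discretization import GumbelSigmoid

gate = GumbelSigmoid(temp=0.1, threshold=0.5)
logits = torch.randn(4, 8)

gate.train()
soft = gate(logits)   # stochastic gumbel_sigmoid output while training

gate.eval()
hard = gate(logits)   # deterministic: input.sigmoid() >= threshold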
- class homura.modules.discretization.SemanticHashing[source]¶
Bases: torch.nn.modules.module.Module
- forward(input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters
input (torch.Tensor) –
- Return type
torch.Tensor
- training: bool¶
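The page gives no behavioural description; a minimal call sketch assuming only the Tensor-in, Tensor-out contract documented above:
import torch
from homura.modules.discretization import SemanticHashing

hashing = SemanticHashing()
codes = hashing(torch.randn(4, 16))   # a torch.Tensor; semantics beyond that are not documented here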
- class homura.modules.discretization.StraightThroughEstimator[source]¶
Bases: torch.nn.modules.module.Module
- forward(input)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters
input (torch.Tensor) –
- training: bool¶
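The behaviour of this module is not described on this page. As background, a straight-through estimator generally forwards a discretized value while letting gradients flow through as if the discretization were the identity; a generic sketch of that trick in plain PyTorch (not necessarily homura's exact implementation):
import torch

x = torch.randn(4, requires_grad=True)
hard = (x > 0).float()              # non-differentiable discretization
y = (hard - x).detach() + x         # forward value is hard, gradient is identity
y.sum().backward()
print(x.grad)                       # all ones: gradients pass straight through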
homura.modules.ema module¶
- class homura.modules.ema.EMA(original_model, momentum=0.999, copy_buffer=False)[source]¶
Bases: torch.nn.modules.module.Module
Exponential moving average of a given model.
model = EMA(original_model, 0.99999)
- Parameters
original_model (torch.nn.modules.module.Module) – Original model
momentum (float) – Momentum value for EMA
copy_buffer (bool) – If True, copy float buffers directly instead of applying EMA to them
- property ema_model: torch.nn.modules.module.Module¶
- forward(*args, **kwargs)[source]¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- property original_model: torch.nn.modules.module.Module¶
- parameters(recurse=True)[source]¶
Returns an iterator over module parameters.
This is typically passed to an optimizer.
- Args:
recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter: module parameter
Example:
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- Parameters
recurse (bool) –
- Return type
Iterator[torch.nn.parameter.Parameter]
- requires_grad_(requires_grad=True)[source]¶
Change if autograd should record operations on parameters in this module.
This method sets the parameters’ requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See locally-disable-grad-doc for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Args:
requires_grad (bool): whether autograd should record operations on parameters in this module. Default: True.
- Returns:
Module: self
- Parameters
requires_grad (bool) –
- Return type
torch.nn.modules.module.Module
- training: bool¶
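A usage sketch based only on the members documented above; whether forward() runs the original or the averaged model, and when the average is refreshed, is not stated on this page, so those details are left as comments:
import torch
from torch import nn
from homura.modules.ema import EMA

original_model = nn.Linear(10, 2)
model = EMA(original_model, 0.999)

x = torch.randn(4, 10)
out = model(x)                     # forward(*args, **kwargs); exact delegation is an assumption
params = list(model.parameters())  # iterator over parameters, e.g. for an optimizer
averaged = model.ema_model         # the exponentially averaged copy
raw = model.original_model         # the wrapped original model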
- homura.modules.ema.exponential_moving_average_(base, update, momentum)[source]¶
In-place exponential moving average of the base tensor
- Parameters
base (torch.Tensor) – tensor to be updated
update (torch.Tensor) – tensor for updating
momentum (float) –
- Returns
exponential-moving-averaged base tensor
- Return type
torch.Tensor
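A minimal sketch of calling the in-place helper; the comment on the update rule assumes the conventional momentum * base + (1 - momentum) * update form, which should be verified against the source:
import torch
from homura.modules.ema import exponential_moving_average_

base = torch.zeros(3)
update = torch.ones(3)
exponential_moving_average_(base, update, 0.9)
# base is modified in place; under the conventional rule this is
# 0.9 * base + 0.1 * update, i.e. roughly 0.1 here (assumption, not documented).
print(base)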