learning.normalizer
Normalizer module.
Enables distributed normalizers to keep preprocessing consistent across all nodes. The normalizer is based on the implementation in https://github.com/openai/baselines.
Module Contents
- class learning.normalizer.Normalizer(size: int, eps: float = 0.01)
Normalizer to maintain an estimate of the current data mean and variance.
Used to normalize input data to zero mean and unit variance.
- Parameters:
size (int) – Dimension of the data to normalize.
eps (float) – Small constant for numerical stability.
- __call__(x: torch.FloatTensor) → torch.FloatTensor
Alias for self.normalize.
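A minimal usage sketch based only on the signatures documented on this page (the size value and all shapes are illustrative, not requirements):

```python
import torch

from learning.normalizer import Normalizer

# Track running statistics for 4-dimensional observations.
norm = Normalizer(size=4)

# Feed rollout data shaped (episodes, timesteps, data dimension).
norm.update(torch.randn(2, 5, 4))

# __call__ is an alias for normalize, so both forms are equivalent.
x = torch.randn(10, 4)
assert torch.allclose(norm(x), norm.normalize(x))
```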
- normalize(x: torch.FloatTensor) → torch.FloatTensor
Normalize the input data with the current mean and variance estimate.
- Parameters:
x (torch.FloatTensor) – Input data array.
- Returns:
The normalized data.
- Return type:
torch.FloatTensor
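The exact arithmetic is not spelled out on this page. As a rough sketch, a baselines-style normalizer computes (x - mean) / std, with eps bounding the standard deviation away from zero; the mean, var, and eps names below are assumptions for illustration, not documented attributes:

```python
import torch

def normalize_sketch(x: torch.FloatTensor,
                     mean: torch.FloatTensor,
                     var: torch.FloatTensor,
                     eps: float = 0.01) -> torch.FloatTensor:
    # eps ** 2 keeps the variance (and hence the std) away from zero,
    # mirroring the clipping used in the baselines implementation.
    std = torch.sqrt(torch.clamp(var, min=eps ** 2))
    return (x - mean) / std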
- update(x: torch.FloatTensor) → None
Update the mean and variance estimate with new data.
- Parameters:
x (torch.FloatTensor) – New input data. Expects a 3D array of shape (episodes, timesteps, data dimension).
- Raises:
AssertionError – Shape check failed.
- Return type:
None
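For example, rollout data must already be stacked into the documented 3D layout before the call (a sketch; sizes are illustrative):

```python
import torch

from learning.normalizer import Normalizer

norm = Normalizer(size=3)

# 8 episodes x 50 timesteps x 3 features, matching the expected layout.
rollouts = torch.randn(8, 50, 3)
norm.update(rollouts)

# A 2D tensor would fail the shape check:
# norm.update(torch.randn(50, 3))  # raises AssertionError
```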
- wrap_obs(states: torch.FloatTensor, goals: torch.FloatTensor) → torch.FloatTensor
Wrap states and goals into a contiguous input tensor.
- Parameters:
states (torch.FloatTensor) – Input states array.
goals (torch.FloatTensor) – Input goals array.
- Returns:
A fused state-goal tensor.
- Return type:
torch.FloatTensor
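A sketch of the call. Whether the fused tensor is simply the concatenation of states and goals, and whether size must match the fused dimension, is not stated on this page; the shapes below are illustrative:

```python
import torch

from learning.normalizer import Normalizer

norm = Normalizer(size=5)  # assumption: size covers the fused dimension

states = torch.randn(16, 3)  # batch of 16 states
goals = torch.randn(16, 2)   # matching batch of 16 goals
obs = norm.wrap_obs(states, goals)  # fused state-goal tensor
```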
- load(checkpoint: Any) → None
Load data for the state_norm.
- Parameters:
checkpoint (Any) – Dict containing the loaded data.
- Return type:
None
- save(f: io.BufferedWriter) → None
Save data for the state_norm.
- Parameters:
f (io.BufferedWriter) – Open binary buffer to write the normalizer data to.
- Return type:
None
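The on-disk format is not documented on this page; the round trip below assumes save writes a pickled dict that load accepts (a sketch, not the guaranteed format):

```python
import pickle

import torch

from learning.normalizer import Normalizer

norm = Normalizer(size=4)
norm.update(torch.randn(2, 10, 4))

# Persist the running statistics into an open binary buffer.
with open("state_norm.pkl", "wb") as f:
    norm.save(f)

# Restore into a fresh instance. Assumes the file holds a pickled
# dict in the layout load expects.
with open("state_norm.pkl", "rb") as f:
    checkpoint = pickle.load(f)

restored = Normalizer(size=4)
restored.load(checkpoint)
```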