RAdam: Rectified Adam¶
RAdam (Rectified Adam) is a variant of Adam which improves convergence by correcting the adaptive learning rate's large variance during the early stages of training. RAdam estimates the variance of the squared gradient moving average and rectifies the update with a term computed from this estimate. Training with RAdam is comparable to using a learning rate warmup schedule.
RAdam was introduced by Liu et al. in *On the Variance of the Adaptive Learning Rate and Beyond*.
Hyperparameters¶
optimi sets the default \(\beta\)s to `(0.9, 0.99)` and the default \(\epsilon\) to `1e-6`. These values reflect current best practices and usually outperform the PyTorch defaults of `(0.9, 0.999)` and `1e-8`.
If training with large batch sizes or observing training loss spikes, consider reducing \(\beta_2\) to a value in \([0.95, 0.99)\).
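For example, a minimal sketch of constructing the optimizer with these hyperparameters (the model and learning rate are placeholders, and the optimizer is assumed to be imported from `optimi`):

```python
import torch
from torch import nn
from optimi import RAdam

model = nn.Linear(128, 10)

# optimi defaults: betas=(0.9, 0.99), eps=1e-6
opt = RAdam(model.parameters(), lr=1e-3)

# large batch sizes or loss spikes: lower beta2 into [0.95, 0.99)
opt = RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.95))
```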
optimi’s implementation of RAdam supports both decoupled weight decay (`decouple_wd=True`) and fully decoupled weight decay (`decouple_lr=True`). Weight decay will likely need to be reduced when using fully decoupled weight decay, since the learning rate no longer scales the effective weight decay.
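As a rough sketch of how the two modes are configured (the weight decay values below are illustrative, not recommendations): with decoupled weight decay the applied decay is scaled by the learning rate, while with fully decoupled weight decay it is not, so the coefficient is typically set much smaller.

```python
from torch import nn
from optimi import RAdam

model = nn.Linear(128, 10)

# decoupled weight decay: effective decay per step is roughly lr * weight_decay
opt_decoupled = RAdam(model.parameters(), lr=1e-3, weight_decay=1e-2, decouple_wd=True)

# fully decoupled weight decay: decay is applied independently of lr,
# so a much smaller coefficient is usually appropriate
opt_fully_decoupled = RAdam(model.parameters(), lr=1e-3, weight_decay=1e-5, decouple_lr=True)
```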
RAdam¶
Rectified Adam optimizer. Optionally with decoupled weight decay.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `params` | `Iterable[Tensor]` \| `Iterable[dict]` | Iterable of parameters to optimize or dicts defining parameter groups | *required* |
| `lr` | `float` | Learning rate | *required* |
| `betas` | `tuple[float, float]` | Coefficients for gradient and squared gradient moving averages | `(0.9, 0.99)` |
| `weight_decay` | `float` | Weight decay coefficient. If `decouple_wd` and `decouple_lr` are `False`, applies L2 penalty | `0` |
| `eps` | `float` | Added to denominator to improve numerical stability | `1e-6` |
| `decouple_wd` | `bool` | Apply decoupled weight decay instead of L2 penalty | `False` |
| `decouple_lr` | `bool` | Apply fully decoupled weight decay instead of L2 penalty | `False` |
| `max_lr` | `float` \| `None` | Maximum scheduled learning rate. Set if `lr` is not the maximum scheduled learning rate | `None` |
| `kahan_sum` | `bool` \| `None` | Enables Kahan summation for more accurate parameter updates when training in low precision (float16 or bfloat16). If unspecified, automatically applies for low precision parameters | `None` |
| `foreach` | `bool` \| `None` | Enables the foreach implementation. If unspecified, tries to use foreach over the for-loop implementation since it is significantly faster | `None` |
| `gradient_release` | `bool` | Fuses the optimizer step and zero_grad into the parameter's backward pass. Requires model hooks created with `prepare_for_gradient_release` | `False` |
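Putting it together, a minimal training-step sketch (model, data, and hyperparameters are placeholders). With a bfloat16 model, Kahan summation is enabled automatically unless `kahan_sum=False` is passed:

```python
import torch
from torch import nn
from optimi import RAdam

# low precision model: Kahan summation applies automatically to its parameters
model = nn.Linear(20, 1, dtype=torch.bfloat16)
opt = RAdam(model.parameters(), lr=1e-3, weight_decay=1e-5, decouple_lr=True)

# forward, backward, and optimizer step
loss = model(torch.randn(20, dtype=torch.bfloat16))
loss.backward()
opt.step()
opt.zero_grad()
```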
Algorithm¶
RAdam: Rectified Adam.
optimi’s RAdam also supports decoupled weight decay and fully decoupled weight decay, which are not shown.
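A sketch of the rectified update from Liu et al., without weight decay (optimi's implementation may order operations differently). Here \(\alpha\) is the learning rate and \(t\) is the step count:

\[
\begin{aligned}
g_t &= \nabla_\theta f_t(\theta_{t-1}) \\
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t) \\
\rho_\infty &= \frac{2}{1 - \beta_2} - 1, \qquad \rho_t = \rho_\infty - \frac{2 t \beta_2^t}{1 - \beta_2^t} \\
\text{if } \rho_t > 4:&\quad \hat{v}_t = \sqrt{v_t / (1 - \beta_2^t)}, \quad r_t = \sqrt{\frac{(\rho_t - 4)(\rho_t - 2)\,\rho_\infty}{(\rho_\infty - 4)(\rho_\infty - 2)\,\rho_t}}, \quad \theta_t = \theta_{t-1} - \alpha\, r_t\, \hat{m}_t / (\hat{v}_t + \epsilon) \\
\text{else}:&\quad \theta_t = \theta_{t-1} - \alpha\, \hat{m}_t
\end{aligned}
\]

Early in training \(\rho_t\) is at or below the threshold (the exact cutoff varies by implementation) and the variance of the adaptive term is intractable, so the step falls back to un-adapted momentum. As \(\rho_t\) approaches \(\rho_\infty\), the rectification term \(r_t\) rises toward 1, which behaves like a built-in learning rate warmup.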