AdamW: Adam with Decoupled Weight Decay
AdamW improves upon Adam by decoupling weight decay from the gradient update and instead applying it directly to the model parameters. This modification allows AdamW to achieve better convergence and generalization than Adam.
AdamW was introduced by Ilya Loshchilov and Frank Hutter in Decoupled Weight Decay Regularization.
Hyperparameters
optimi sets the default \(\beta\)s to (0.9, 0.99) and the default \(\epsilon\) to 1e-6. These values reflect current best practices and usually outperform the PyTorch defaults.
If training with large batch sizes or observing training loss spikes, consider reducing \(\beta_2\) to a value in \([0.95, 0.99)\).
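For example, a minimal sketch of both settings (the model and the lower \(\beta_2\) value are illustrative assumptions, not recommendations for any specific workload):

```python
import torch
from optimi import AdamW

# hypothetical model used only for illustration
model = torch.nn.Linear(128, 10)

# optimi defaults: betas=(0.9, 0.99), eps=1e-6, weight_decay=0.01
opt = AdamW(model.parameters(), lr=1e-3)

# for large batch sizes or training loss spikes, lower beta2 into [0.95, 0.99)
opt_large_batch = AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.95))
```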
optimi's implementation of AdamW also supports fully decoupled weight decay via decouple_lr=True. The default weight decay of 0.01 will likely need to be reduced when using fully decoupled weight decay, as the learning rate no longer modifies the effective weight decay.
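A sketch of enabling fully decoupled weight decay with a smaller weight decay coefficient (the exact value is an assumption and should be tuned for your training run):

```python
from optimi import AdamW

# with decouple_lr=True the learning rate no longer scales the effective
# weight decay, so the coefficient is typically set much smaller than 0.01
optimizer = AdamW(
    model.parameters(),   # hypothetical model from the previous example
    lr=1e-3,
    weight_decay=1e-5,    # illustrative value, not a recommendation
    decouple_lr=True,
)
```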
AdamW
AdamW optimizer: Adam with decoupled weight decay.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| params | Iterable[Tensor] \| Iterable[dict] | Iterable of parameters to optimize or dicts defining parameter groups | required |
| lr | float | Learning rate | required |
| betas | tuple[float, float] | Coefficients for gradient and squared gradient moving averages (default: (0.9, 0.99)) | (0.9, 0.99) |
| weight_decay | float | Weight decay coefficient. If decouple_lr is False, applies decoupled weight decay; if True, applies fully decoupled weight decay (default: 0.01) | 0.01 |
| eps | float | Added to denominator to improve numerical stability (default: 1e-6) | 1e-6 |
| decouple_lr | bool | Apply fully decoupled weight decay instead of decoupled weight decay (default: False) | False |
| max_lr | float \| None | Maximum scheduled learning rate. Set if lr is not the maximum scheduled learning rate when using fully decoupled weight decay (default: None) | None |
| kahan_sum | bool \| None | Enables Kahan summation for more accurate parameter updates when training in low precision (float16 or bfloat16). If unspecified, automatically applies for low precision parameters (default: None) | None |
| foreach | bool \| None | Enables the foreach implementation. If unspecified, tries to use the foreach implementation over the for-loop implementation since it is significantly faster (default: None) | None |
| gradient_release | bool | Fuses the optimizer step and zero_grad as part of the parameter's backward pass. Requires model hooks created with optimi's gradient release utility (default: False) | False |
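A minimal usage sketch tying the parameters together (the model, batch, and hyperparameter values are hypothetical; kahan_sum is passed explicitly here even though optimi would enable it automatically for the bfloat16 parameters):

```python
import torch
import torch.nn.functional as F
from optimi import AdamW

# hypothetical low precision model used only for illustration
model = torch.nn.Linear(128, 10, dtype=torch.bfloat16)
optimizer = AdamW(model.parameters(), lr=1e-3, weight_decay=0.01, kahan_sum=True)

for _ in range(10):
    # random batch standing in for a real dataloader
    x = torch.randn(32, 128, dtype=torch.bfloat16)
    y = torch.randint(0, 10, (32,))
    loss = F.cross_entropy(model(x).float(), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```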
Algorithm
Adam with decoupled weight decay (AdamW).
optimi’s AdamW also supports fully decoupled weight decay, which is not shown.
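As a sketch, the standard decoupled weight decay update from the referenced Loshchilov & Hutter paper, with learning rate \(\gamma\), weight decay \(\lambda\), and bias-corrected moments (details of optimi's implementation, such as Kahan summation and fully decoupled weight decay, are not reflected here):

\[
\begin{aligned}
g_t &= \nabla_\theta f_t(\theta_{t-1}) \\
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t) \\
\theta_t &= \theta_{t-1} - \gamma \left( \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda\, \theta_{t-1} \right)
\end{aligned}
\]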