vis4d.engine.optim.scheduler

LR schedulers.

Classes

ConstantLR(optimizer, max_steps[, factor, ...])

Constant learning rate scheduler.

LRSchedulerDict

LR scheduler dictionary.

LRSchedulerWrapper(lr_schedulers_cfg, optimizer[, ...])

LR scheduler wrapper.

PolyLR(optimizer, max_steps[, power, ...])

Polynomial learning rate decay.

QuadraticLRWarmup(optimizer, max_steps[, ...])

Quadratic learning rate warmup.

class ConstantLR(optimizer, max_steps, factor=0.3333333333333333, last_epoch=-1)[source]

Constant learning rate scheduler.

Parameters:
  • optimizer (Optimizer) – Wrapped optimizer.

  • max_steps (int) – Maximum number of steps.

  • factor (float) – Scale factor. Default: 1.0 / 3.0.

  • last_epoch (int) – The index of last epoch. Default: -1.

Initialize ConstantLR.

get_lr()[source]

Compute current learning rate.

Return type:

list[float]
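As a rough illustration of what a constant-factor scheduler computes, the sketch below scales the base learning rate by `factor` for the first `max_steps` steps and then restores the full rate. This mirrors the behavior of PyTorch's `torch.optim.lr_scheduler.ConstantLR`; whether vis4d's `ConstantLR` matches it exactly is an assumption, and `constant_lr` is an illustrative helper, not part of the vis4d API.

```python
def constant_lr(base_lr, step, max_steps, factor=1.0 / 3.0):
    """Scale base_lr by `factor` until max_steps, then use the full base_lr."""
    return base_lr * factor if step < max_steps else base_lr

# With base_lr = 0.3, max_steps = 3, the rate jumps from 0.1 back to 0.3:
print([round(constant_lr(0.3, s, max_steps=3), 4) for s in range(5)])
```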

class LRSchedulerDict[source]

LR scheduler dictionary.

class LRSchedulerWrapper(lr_schedulers_cfg, optimizer, steps_per_epoch=-1)[source]

LR scheduler wrapper.

Initialize LRSchedulerWrapper.

get_lr()[source]

Get current learning rate.

Return type:

list[float]

load_state_dict(state_dict)[source]

Load state dict.

Return type:

None

state_dict()[source]

Get state dict.

Return type:

dict[int, Dict[str, Any]]

step(epoch=None)[source]

Step on training epoch end.

Return type:

None

step_on_batch(step)[source]

Step on training batch end.

Return type:

None
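The wrapper exposes two stepping entry points, `step` (epoch end) and `step_on_batch` (batch end), which suggests each configured scheduler advances on one cadence or the other. The sketch below shows that dispatch pattern in plain Python; the `epoch_based` flag per scheduler is an assumed configuration field and `SchedulerWrapperSketch` is illustrative, not vis4d's actual `LRSchedulerWrapper` implementation.

```python
class SchedulerWrapperSketch:
    """Illustrative wrapper routing epoch-end vs. batch-end steps to schedulers."""

    def __init__(self, schedulers):
        # schedulers: list of (scheduler, epoch_based: bool) pairs, where each
        # scheduler exposes a step() method (as torch LR schedulers do).
        self.schedulers = schedulers

    def step(self, epoch=None):
        """Step on training epoch end: advance only epoch-based schedulers."""
        for sched, epoch_based in self.schedulers:
            if epoch_based:
                sched.step()

    def step_on_batch(self, step):
        """Step on training batch end: advance only step-based schedulers."""
        for sched, epoch_based in self.schedulers:
            if not epoch_based:
                sched.step()
```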

class PolyLR(optimizer, max_steps, power=1.0, min_lr=0.0, last_epoch=-1)[source]

Polynomial learning rate decay.

Example

Assuming lr = 0.001, max_steps = 4, min_lr = 0.0, and power = 1.0, the learning rate will be:

lr = 0.001    if step == 0
lr = 0.00075  if step == 1
lr = 0.00050  if step == 2
lr = 0.00025  if step == 3
lr = 0.0      if step >= 4

Parameters:
  • optimizer (Optimizer) – Wrapped optimizer.

  • max_steps (int) – Maximum number of steps.

  • power (float, optional) – Power factor. Default: 1.0.

  • min_lr (float) – Minimum learning rate. Default: 0.0.

  • last_epoch (int) – The index of last epoch. Default: -1.

Initialize PolyLR.

get_lr()[source]

Compute current learning rate.

Return type:

list[float]
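The documented example implies the usual polynomial decay formula, lr = (base_lr - min_lr) * (1 - step / max_steps)^power + min_lr, clamped to min_lr once max_steps is reached. The sketch below reproduces the example values; `poly_lr` is an illustrative helper, not part of the vis4d API.

```python
def poly_lr(base_lr, step, max_steps, power=1.0, min_lr=0.0):
    """Polynomial decay from base_lr down to min_lr over max_steps steps."""
    if step >= max_steps:
        return min_lr
    coeff = (1.0 - step / max_steps) ** power
    return (base_lr - min_lr) * coeff + min_lr

# Reproduce the documented example (lr = 0.001, max_steps = 4, power = 1.0):
for step in range(5):
    print(step, poly_lr(0.001, step, max_steps=4))
```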

class QuadraticLRWarmup(optimizer, max_steps, last_epoch=-1)[source]

Quadratic learning rate warmup.

Parameters:
  • optimizer (Optimizer) – Wrapped optimizer.

  • max_steps (int) – Maximum number of steps.

  • last_epoch (int) – The index of last epoch. Default: -1.

Initialize QuadraticLRWarmup.

get_lr()[source]

Compute current learning rate.

Return type:

list[float]
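A quadratic warmup typically ramps the learning rate from near zero up to the base rate following a squared progress term. The exact expression vis4d uses is not stated here, so the formula below, base_lr * ((step + 1) / max_steps)^2, is an assumption for illustration, and `quadratic_warmup_lr` is a hypothetical helper, not the vis4d API.

```python
def quadratic_warmup_lr(base_lr, step, max_steps):
    """Ramp the learning rate quadratically from near 0 up to base_lr."""
    if step >= max_steps:
        return base_lr
    return base_lr * ((step + 1) / max_steps) ** 2

# Monotonically increasing warmup reaching base_lr at max_steps:
lrs = [quadratic_warmup_lr(0.01, s, max_steps=5) for s in range(6)]
```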