vis4d.pl.trainer

Trainer for PyTorch Lightning.

Classes

PLTrainer(*args, work_dir, exp_name, version)

Trainer for PyTorch Lightning.

class PLTrainer(*args, work_dir, exp_name, version, epoch_based=True, find_unused_parameters=False, save_top_k=1, checkpoint_period=1, checkpoint_callback=None, wandb=False, seed=-1, **kwargs)[source]

Trainer for PyTorch Lightning.

Perform some basic common setups at the beginning of a job.

Parameters:
  • work_dir (str) – Base directory for saving checkpoints, logs, etc. Combined with exp_name and version to form output_dir.

  • exp_name (str) – Name of current experiment.

  • version (str) – Version of current experiment.

  • epoch_based (bool) – Whether to use epoch-based training; if False, iteration-based training is used instead. Default: True.

  • find_unused_parameters (bool) – Enables PyTorch's detection of parameters that receive no gradients in the DDP setting. Default: False, for better performance.

  • save_top_k (int) – Save top k checkpoints. Default: 1 (save last).

  • checkpoint_period (int) – Save a checkpoint every N epochs / steps. Default: 1.

  • checkpoint_callback (Optional[ModelCheckpoint]) – Custom PL checkpoint callback. Default: None.

  • wandb (bool) – Use Weights & Biases logging instead of TensorBoard. Default: False.

  • seed (int, optional) – The integer value seed for global random state. Defaults to -1. If -1, a random seed will be generated. This will be set by TrainingModule.
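
A minimal usage sketch, assuming vis4d is installed; the keyword arguments follow the signature documented above, while the concrete values (directory, experiment name, version, max_epochs) are hypothetical. Extra keyword arguments such as max_epochs are forwarded to the underlying PyTorch Lightning Trainer via **kwargs.

```python
import os

# Hypothetical configuration; argument names match the documented signature.
trainer_kwargs = dict(
    work_dir="./workdir",          # base directory for checkpoints and logs
    exp_name="faster_rcnn_coco",   # hypothetical experiment name
    version="v1",                  # hypothetical experiment version
    epoch_based=True,              # epoch-based (not iteration-based) training
    save_top_k=3,                  # keep the 3 best checkpoints
    checkpoint_period=1,           # checkpoint every epoch
    wandb=False,                   # log to TensorBoard
    max_epochs=12,                 # forwarded to pl.Trainer via **kwargs
)

# Per the docs, work_dir, exp_name, and version combine into output_dir.
output_dir = os.path.join(
    trainer_kwargs["work_dir"],
    trainer_kwargs["exp_name"],
    trainer_kwargs["version"],
)

try:
    from vis4d.pl.trainer import PLTrainer

    trainer = PLTrainer(**trainer_kwargs)
except ImportError:
    # vis4d is not installed; the kwargs above still document the intended call.
    trainer = None
```
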