vis4d.model.seg.semantic_fpn

SemanticFPN Implementation.

Classes

MaskOut(masks)

Output mask predictions.

SemanticFPN(num_classes[, resize, weights, ...])

Semantic FPN.

class MaskOut(masks: list[torch.Tensor])[source]

Output mask predictions.

Create a new instance of MaskOut(masks).

masks: list[Tensor]

Alias for field number 0
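For illustration only, a minimal sketch of constructing and unpacking this container; it assumes MaskOut behaves like a standard NamedTuple, and the tensor shape is a placeholder:

    import torch
    from vis4d.model.seg.semantic_fpn import MaskOut

    # One predicted mask per image in the batch (shape chosen arbitrarily).
    out = MaskOut(masks=[torch.zeros(512, 512, dtype=torch.long)])
    assert out.masks is out[0]  # "masks" is an alias for field number 0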

class SemanticFPN(num_classes, resize=True, weights=None, basemodel=None)[source]

Semantic FPN.

Parameters:
  • num_classes (int) – Number of classes.

  • resize (bool) – Resize output to input size.

  • weights (None | str) – Pre-trained weights.

  • basemodel (None | BaseModel) – Base model to use. If None is passed, this defaults to ResNetV1c.

Init.
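A minimal construction sketch, assuming the class is imported from this module; the number of classes is a placeholder, and passing basemodel=None relies on the documented ResNetV1c default:

    from vis4d.model.seg.semantic_fpn import SemanticFPN

    # 21 classes is a placeholder (e.g. a PASCAL-VOC-like setup).
    model = SemanticFPN(num_classes=21, resize=True, weights=None, basemodel=None)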

forward(images, original_hw=None)[source]

Forward pass.

Parameters:
  • images (torch.Tensor) – Input images.

  • original_hw (None | list[tuple[int, int]], optional) – Original image resolutions (before padding and resizing). Required for testing. Defaults to None.

Returns:

Raw model predictions.

Return type:

MaskOut
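A hedged end-to-end sketch of the forward pass; the batch size, image size, and original resolutions below are assumptions for illustration:

    import torch
    from vis4d.model.seg.semantic_fpn import SemanticFPN

    model = SemanticFPN(num_classes=21)
    images = torch.randn(2, 3, 512, 512)    # padded/resized NCHW input batch
    original_hw = [(480, 640), (480, 640)]  # resolutions before padding/resizing

    model.eval()
    with torch.no_grad():
        preds = model(images, original_hw=original_hw)
    # preds.masks is expected to hold one mask prediction per input image.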

forward_test(images, original_hw)[source]

Forward pass for testing.

Parameters:
  • images (torch.Tensor) – Input images.

  • original_hw (list[tuple[int, int]]) – Original image resolutions (before padding and resizing). Required for testing.

Returns:

Raw model predictions.

Return type:

SemanticFPNOut
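Reusing the model, images, and original_hw from the forward sketch above, an explicit test-mode call could look like this (an illustrative assumption, not a prescribed workflow):

    with torch.no_grad():
        test_out = model.forward_test(images, original_hw)
    # With resize=True, predictions are mapped back to the original_hw resolutions.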

forward_train(images)[source]

Forward pass for training.

Parameters:

images (torch.Tensor) – Input images.

Returns:

Raw model predictions.

Return type:

SemanticFPNOut
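Again reusing the setup from the forward sketch, a training-mode sketch; per this page the method returns raw predictions rather than losses, so loss computation is assumed to happen outside the model:

    model.train()
    train_out = model.forward_train(images)  # raw predictions (SemanticFPNOut)
    # Compare train_out against ground-truth masks in an external loss function.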