vis4d.model.track.qdtrack

Quasi-dense instance similarity learning model.

Classes

  • FasterRCNNQDTrack(num_classes[, basemodel, ...]) – Wrap QDTrack with Faster R-CNN detector.

  • FasterRCNNQDTrackOut(detector_out, ...) – Output of QDTrack model.

  • YOLOXQDTrack(num_classes[, basemodel, fpn, ...]) – Wrap QDTrack with YOLOX detector.

  • YOLOXQDTrackOut(detector_out, key_images_hw, ...) – Output of QDTrack YOLOX model.

class FasterRCNNQDTrack(num_classes, basemodel=None, faster_rcnn_head=None, rcnn_box_decoder=None, qdtrack_head=None, track_graph=None, weights=None)[source]

Wrap QDTrack with Faster R-CNN detector.

Creates an instance of the class.

Parameters:
  • num_classes (int) – Number of object categories.

  • basemodel (BaseModel, optional) – Base model network. Defaults to None. If None, will use ResNet50.

  • faster_rcnn_head (FasterRCNNHead, optional) – Faster R-CNN head. Defaults to None. If None, will use the default FasterRCNNHead.

  • rcnn_box_decoder (DeltaXYWHBBoxDecoder, optional) – Decoder for RCNN bounding boxes. Defaults to None.

  • qdtrack_head (QDTrackHead, optional) – QDTrack head. Defaults to None. If None, will use the default QDTrackHead.

  • track_graph (QDTrackGraph, optional) – Track graph. Defaults to None. If None, will use default QDTrackGraph.

  • weights (str, optional) – Weights to load for the model. Defaults to None.
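
A minimal construction sketch, assuming only the defaults documented above (ResNet50 base model, default FasterRCNNHead, QDTrackHead, and QDTrackGraph); the class count and checkpoint path below are placeholders, not values from this page:

    from vis4d.model.track.qdtrack import FasterRCNNQDTrack

    # Optional components left as None fall back to the documented defaults:
    # ResNet50 base model, default FasterRCNNHead, QDTrackHead, QDTrackGraph.
    model = FasterRCNNQDTrack(num_classes=8)

    # A checkpoint can optionally be loaded via `weights` (hypothetical path):
    # model = FasterRCNNQDTrack(num_classes=8, weights="path/to/checkpoint.ckpt")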

__call__(images, images_hw, original_hw, frame_ids, boxes2d=None, boxes2d_classes=None, boxes2d_track_ids=None, keyframes=None)[source]

Type definition for call implementation.

Return type:

TrackOut | FasterRCNNQDTrackOut

forward(images, images_hw, original_hw, frame_ids, boxes2d=None, boxes2d_classes=None, boxes2d_track_ids=None, keyframes=None)[source]

Forward pass.

Return type:

TrackOut | FasterRCNNQDTrackOut
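
A rough inference sketch, reusing the `model` constructed above; the NCHW image layout, the placeholder sizes, and the expectation that a call without ground-truth boxes yields the TrackOut branch of the return type are assumptions, and preprocessing (normalization, padding) is omitted:

    import torch

    model.eval()
    images = torch.rand(1, 3, 512, 512)   # assumed NCHW layout, single frame
    images_hw = [(512, 512)]              # per-image (height, width) after preprocessing
    original_hw = [(512, 512)]            # per-image (height, width) of the raw frame
    frame_ids = [0]                       # frame index within the video sequence

    with torch.no_grad():
        tracks = model(images, images_hw, original_hw, frame_ids)
    # Without boxes2d / boxes2d_classes / boxes2d_track_ids, the TrackOut
    # branch of the documented return type is the expected result.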

class FasterRCNNQDTrackOut(detector_out: FRCNNOut, key_images_hw: list[tuple[int, int]], key_target_boxes: list[Tensor], key_embeddings: list[Tensor], ref_embeddings: list[list[Tensor]], key_track_ids: list[Tensor], ref_track_ids: list[list[Tensor]])[source]

Output of QDTrack model.

Create new instance of FasterRCNNQDTrackOut(detector_out, key_images_hw, key_target_boxes, key_embeddings, ref_embeddings, key_track_ids, ref_track_ids)

detector_out: FRCNNOut

Alias for field number 0

key_embeddings: list[Tensor]

Alias for field number 3

key_images_hw: list[tuple[int, int]]

Alias for field number 1

key_target_boxes: list[Tensor]

Alias for field number 2

key_track_ids: list[Tensor]

Alias for field number 5

ref_embeddings: list[list[Tensor]]

Alias for field number 4

ref_track_ids: list[list[Tensor]]

Alias for field number 6
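
A small sketch of reading fields from a training-mode output; `out` is assumed to be a FasterRCNNQDTrackOut, and the interpretation of the embedding tensor shapes is an assumption, not taken from this page:

    # The detector output and the instance embeddings used for quasi-dense
    # similarity learning are plain named-tuple fields.
    frcnn_out = out.detector_out      # FRCNNOut (field 0)
    key_embs = out.key_embeddings     # list[Tensor], one entry per key image (field 3)
    ref_embs = out.ref_embeddings     # list[list[Tensor]], per reference view (field 4)

    for i, emb in enumerate(key_embs):
        # Assumed shape: (num_sampled_instances, embedding_dim).
        print(f"key image {i}: embeddings of shape {tuple(emb.shape)}")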

class YOLOXQDTrack(num_classes, basemodel=None, fpn=None, yolox_head=None, train_postprocessor=None, test_postprocessor=None, qdtrack_head=None, track_graph=None, weights=None)[source]

Wrap QDTrack with YOLOX detector.

Creates an instance of the class.

Parameters:
  • num_classes (int) – Number of object categories.

  • basemodel (BaseModel, optional) – Base model. Defaults to None. If None, will use CSPDarknet.

  • fpn (FeaturePyramidProcessing, optional) – Feature Pyramid Processing. Defaults to None. If None, will use YOLOXPAFPN.

  • yolox_head (YOLOXHead, optional) – YOLOX head. Defaults to None. If None, will use YOLOXHead.

  • train_postprocessor (YOLOXPostprocess, optional) – Post processor for training. Defaults to None. If None, will use YOLOXPostprocess.

  • test_postprocessor (YOLOXPostprocess, optional) – Post processor for testing. Defaults to None. If None, will use YOLOXPostprocess.

  • qdtrack_head (QDTrackHead, optional) – QDTrack head. Defaults to None. If None, will use the default QDTrackHead.

  • track_graph (QDTrackGraph, optional) – Track graph. Defaults to None. If None, will use default QDTrackGraph.

  • weights (str, optional) – Weights to load for the model. Defaults to None.
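
As with the Faster R-CNN variant, a minimal construction sketch assuming the defaults documented above (CSPDarknet, YOLOXPAFPN, YOLOXHead, YOLOXPostprocess for train and test, QDTrackHead, QDTrackGraph); the class count is a placeholder:

    from vis4d.model.track.qdtrack import YOLOXQDTrack

    # Optional components left as None select the documented defaults;
    # only the number of object categories must be provided.
    model = YOLOXQDTrack(num_classes=8)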

__call__(images, images_hw, original_hw, frame_ids, boxes2d=None, boxes2d_classes=None, boxes2d_track_ids=None, keyframes=None)[source]

Type definition for call implementation.

Return type:

TrackOut | YOLOXQDTrackOut

forward(images, images_hw, original_hw, frame_ids, boxes2d=None, boxes2d_classes=None, boxes2d_track_ids=None, keyframes=None)[source]

Forward pass.

Return type:

TrackOut | YOLOXQDTrackOut
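
A hypothetical training-mode call against the documented signature, reusing the `model` constructed above; the two-view batching, tensor shapes, and ground-truth values are invented for illustration, and the expected format of `keyframes` is not documented on this page, so it is left at its default:

    import torch

    model.train()
    images = torch.rand(2, 3, 640, 640)        # key + reference view (assumed batching)
    images_hw = [(640, 640), (640, 640)]
    original_hw = [(640, 640), (640, 640)]
    frame_ids = [1, 0]

    # Hypothetical ground truth: one object per view sharing track id 0.
    boxes2d = [torch.tensor([[100.0, 120.0, 200.0, 220.0]]),
               torch.tensor([[104.0, 118.0, 205.0, 224.0]])]
    boxes2d_classes = [torch.tensor([3]), torch.tensor([3])]
    boxes2d_track_ids = [torch.tensor([0]), torch.tensor([0])]

    out = model(
        images, images_hw, original_hw, frame_ids,
        boxes2d=boxes2d,
        boxes2d_classes=boxes2d_classes,
        boxes2d_track_ids=boxes2d_track_ids,
        # keyframes=...,  # which samples are key views; format not documented here
    )
    # With ground truth provided, the YOLOXQDTrackOut branch of the
    # documented return type is the expected result for loss computation.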

class YOLOXQDTrackOut(detector_out: YOLOXOut, key_images_hw: list[tuple[int, int]], key_target_boxes: list[Tensor], key_target_classes: list[Tensor], key_embeddings: list[Tensor], ref_embeddings: list[list[Tensor]], key_track_ids: list[Tensor], ref_track_ids: list[list[Tensor]])[source]

Output of QDTrack YOLOX model.

Create new instance of YOLOXQDTrackOut(detector_out, key_images_hw, key_target_boxes, key_target_classes, key_embeddings, ref_embeddings, key_track_ids, ref_track_ids)

detector_out: YOLOXOut

Alias for field number 0

key_embeddings: list[Tensor]

Alias for field number 4

key_images_hw: list[tuple[int, int]]

Alias for field number 1

key_target_boxes: list[Tensor]

Alias for field number 2

key_target_classes: list[Tensor]

Alias for field number 3

key_track_ids: list[Tensor]

Alias for field number 6

ref_embeddings: list[list[Tensor]]

Alias for field number 5

ref_track_ids: list[list[Tensor]]

Alias for field number 7