vis4d.op.detect.dense_anchor¶
Dense anchor-based head.
Functions
- get_targets_per_batch – Get targets for all batch elements, all scales.
- get_targets_per_image – Get targets per batch element, all scales.
- Convert targets by image to targets by feature level.
Classes
- DenseAnchorHeadLoss – Loss of dense anchor heads.
- DenseAnchorHeadLosses – Dense anchor head loss container.
- DetectorTargets – Targets for first-stage detection.
- class DenseAnchorHeadLoss(anchor_generator, box_encoder, box_matcher, box_sampler, loss_cls, loss_bbox, allowed_border=0)[source]¶
Loss of dense anchor heads.
For a given set of multi-scale dense outputs, compute the desired target outputs and apply classification and regression losses. The targets are computed with the given target bounding boxes, the anchor grid defined by the anchor generator and the given box encoder.
Creates an instance of the class.
- Parameters:
anchor_generator (AnchorGenerator) – Generates anchor grid priors.
box_encoder (DeltaXYWHBBoxEncoder) – Encodes bounding boxes to the desired network output.
box_matcher (Matcher) – Box matcher.
box_sampler (Sampler) – Box sampler.
loss_cls (TorchLossFunc) – Classification loss.
loss_bbox (TorchLossFunc) – Bounding box regression loss.
allowed_border (int) – Allowed border for sub-sampling anchors that lie inside the input image. Defaults to 0.
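A minimal construction sketch follows. The import paths and constructor arguments of the component classes (anchor generator, box encoder, matcher, sampler) are assumptions and may differ across vis4d versions; only the DenseAnchorHeadLoss parameter names follow the list above:

    import torch.nn.functional as F

    # Assumed import paths for the component types named in the parameter list.
    from vis4d.op.box.anchor import AnchorGenerator
    from vis4d.op.box.encoder import DeltaXYWHBBoxEncoder
    from vis4d.op.box.matchers import MaxIoUMatcher
    from vis4d.op.box.samplers import RandomSampler
    from vis4d.op.detect.dense_anchor import DenseAnchorHeadLoss

    anchor_loss = DenseAnchorHeadLoss(
        anchor_generator=AnchorGenerator(
            scales=[8], ratios=[0.5, 1.0, 2.0], strides=[8, 16, 32, 64, 128]
        ),  # constructor arguments are assumptions
        box_encoder=DeltaXYWHBBoxEncoder(),  # assumes usable defaults
        box_matcher=MaxIoUMatcher(
            thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=True
        ),  # constructor arguments are assumptions
        box_sampler=RandomSampler(batch_size=256, positive_fraction=0.5),  # assumed args
        loss_cls=F.binary_cross_entropy_with_logits,  # any TorchLossFunc
        loss_bbox=F.l1_loss,  # any TorchLossFunc
        allowed_border=0,
    )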
- __call__(cls_outs, reg_outs, target_boxes, images_hw, target_class_ids=None)[source]¶
Type definition.
- Return type:
DenseAnchorHeadLosses
- forward(cls_outs, reg_outs, target_boxes, images_hw, target_class_ids=None)[source]¶
Compute classification and regression losses for dense anchor heads.
- Parameters:
cls_outs (list[Tensor]) – Network classification outputs at all scales.
reg_outs (list[Tensor]) – Network regression outputs at all scales.
target_boxes (list[Tensor]) – Target bounding boxes.
images_hw (list[tuple[int, int]]) – Image dimensions without padding.
target_class_ids (list[Tensor] | None, optional) – Target class labels.
- Returns:
Classification and regression losses.
- Return type:
DenseAnchorHeadLosses
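A call sketch using random feature maps, continuing the construction sketch above (anchor_loss). The channel layouts (num_anchors * num_classes for cls_outs, num_anchors * 4 for reg_outs) are assumptions based on typical dense anchor heads, and target_class_ids is omitted since it is optional:

    import torch

    batch_size, num_anchors, num_classes = 2, 3, 1  # illustrative sizes
    strides = [8, 16, 32, 64, 128]  # one feature map per assumed stride
    images_hw = [(512, 512)] * batch_size

    cls_outs = [
        torch.randn(batch_size, num_anchors * num_classes, 512 // s, 512 // s)
        for s in strides
    ]
    reg_outs = [
        torch.randn(batch_size, num_anchors * 4, 512 // s, 512 // s)
        for s in strides
    ]
    # One (N, 4) tensor of target boxes per image.
    target_boxes = [torch.tensor([[10.0, 20.0, 120.0, 200.0]])] * batch_size

    losses = anchor_loss(cls_outs, reg_outs, target_boxes, images_hw)
    total = losses.loss_cls + losses.loss_bbox  # see DenseAnchorHeadLosses below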
- class DenseAnchorHeadLosses(loss_cls: Tensor, loss_bbox: Tensor)[source]¶
Dense anchor head loss container.
Create new instance of DenseAnchorHeadLosses(loss_cls, loss_bbox)
- loss_bbox: Tensor¶
Alias for field number 1
- loss_cls: Tensor¶
Alias for field number 0
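Being a NamedTuple, the container supports positional unpacking as well as access by field name; a small illustrative snippet:

    import torch

    from vis4d.op.detect.dense_anchor import DenseAnchorHeadLosses

    losses = DenseAnchorHeadLosses(
        loss_cls=torch.tensor(0.7), loss_bbox=torch.tensor(0.3)
    )
    loss_cls, loss_bbox = losses  # positional unpacking: fields 0 and 1
    total = losses.loss_cls + losses.loss_bbox  # access by name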
- class DetectorTargets(labels: Tensor, label_weights: Tensor, bbox_targets: Tensor, bbox_weights: Tensor)[source]¶
Targets for first-stage detection.
Create new instance of DetectorTargets(labels, label_weights, bbox_targets, bbox_weights)
- bbox_targets: Tensor¶
Alias for field number 2
- bbox_weights: Tensor¶
Alias for field number 3
- label_weights: Tensor¶
Alias for field number 1
- labels: Tensor¶
Alias for field number 0
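The field order matches the documented indices; an illustrative instance with per-anchor shapes chosen only for the example:

    import torch

    from vis4d.op.detect.dense_anchor import DetectorTargets

    num_anchors = 6  # illustrative
    targets = DetectorTargets(
        labels=torch.zeros(num_anchors),           # field 0
        label_weights=torch.ones(num_anchors),     # field 1
        bbox_targets=torch.zeros(num_anchors, 4),  # field 2
        bbox_weights=torch.zeros(num_anchors, 4),  # field 3
    )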
- get_targets_per_batch(featmap_sizes, target_boxes, target_class_ids, images_hw, anchor_generator, box_encoder, box_matcher, box_sampler, allowed_border=0)[source]¶
Get targets for all batch elements, all scales.
- Return type:
tuple[list[list[Tensor]], int]
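A call sketch is given below; anchor_generator, box_encoder, box_matcher and box_sampler are assumed to be configured instances of the component types used by DenseAnchorHeadLoss above, and the output variable names are only an interpretation of the documented return type:

    import torch

    from vis4d.op.detect.dense_anchor import get_targets_per_batch

    featmap_sizes = [(64, 64), (32, 32), (16, 16), (8, 8), (4, 4)]  # assumed (h, w) per level
    images_hw = [(512, 512), (512, 512)]
    target_boxes = [
        torch.tensor([[10.0, 20.0, 120.0, 200.0]]),
        torch.tensor([[30.0, 40.0, 200.0, 300.0]]),
    ]
    target_class_ids = [torch.zeros(1, dtype=torch.long)] * 2

    targets_per_level, num_samples = get_targets_per_batch(
        featmap_sizes,
        target_boxes,
        target_class_ids,
        images_hw,
        anchor_generator,  # configured components, see DenseAnchorHeadLoss above
        box_encoder,
        box_matcher,
        box_sampler,
        allowed_border=0,
    )
    # targets_per_level: list[list[Tensor]]; num_samples: int (see return type above).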
- get_targets_per_image(target_boxes, anchors, matcher, sampler, box_encoder, image_hw, target_class=1.0, allowed_border=0)[source]¶
Get targets per batch element, all scales.
- Parameters:
target_boxes (Tensor) – (N, 4) Tensor of target boxes for a single image.
anchors (Tensor) – (M, 4) Tensor of box priors.
matcher (Matcher) – Box matcher matching anchors to targets.
sampler (Sampler) – Box sampler sub-sampling matches.
box_encoder (DeltaXYWHBBoxEncoder) – Encodes boxes into target regression parameters.
image_hw (tuple[int, int]) – Input image height and width.
target_class (Tensor | float, optional) – Class label(s) of target boxes. Defaults to 1.0.
allowed_border (int, optional) – Allowed border for sub-sampling anchors that lie inside the input image. Defaults to 0.
- Returns:
Targets, sum of positives, sum of negatives.
- Return type:
tuple[DetectorTargets, Tensor, Tensor]
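A minimal sketch with dummy boxes and anchors; matcher, sampler and box_encoder are assumed to be configured instances of the types named in the parameter list (for example, the ones passed to DenseAnchorHeadLoss above):

    import torch

    from vis4d.op.detect.dense_anchor import get_targets_per_image

    target_boxes = torch.tensor([[10.0, 20.0, 120.0, 200.0]])  # (N, 4)
    anchors = torch.tensor(
        [[0.0, 0.0, 128.0, 128.0], [64.0, 64.0, 256.0, 256.0]]
    )  # (M, 4)

    targets, num_pos, num_neg = get_targets_per_image(
        target_boxes,
        anchors,
        matcher,      # configured Matcher (assumed instance)
        sampler,      # configured Sampler (assumed instance)
        box_encoder,  # configured DeltaXYWHBBoxEncoder (assumed instance)
        image_hw=(512, 512),
        target_class=1.0,
        allowed_border=0,
    )
    # targets is a DetectorTargets; num_pos and num_neg are the sums of
    # positive and negative samples (Tensors), as documented above.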