mmrotate.apis

mmrotate.apis.inference_detector_by_patches(model, img, sizes, steps, ratios, merge_iou_thr, bs=1)[source]

Inference patches with the detector.

Split huge image(s) into patches, run inference on each patch with the detector, and finally merge the patch results on the huge image by NMS.

Parameters
  • model (nn.Module) – The loaded detector.

  • img (str | ndarray) – Either an image file or loaded image.

  • sizes (list) – The sizes of patches.

  • steps (list) – The steps between two patches.

  • ratios (list) – Image resizing ratios for multi-scale detecting.

  • merge_iou_thr (float) – IoU threshold for merging results.

  • bs (int) – Batch size, must be greater than or equal to 1.

Returns

Detection results.

Return type

list[np.ndarray]
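
Example

A minimal usage sketch (the config and checkpoint paths are placeholders); init_detector comes from mmdet.apis, as in mmrotate's huge image demo.

>>> from mmdet.apis import init_detector
>>> from mmrotate.apis import inference_detector_by_patches
>>> model = init_detector('rotated_detector_config.py',  # hypothetical paths
>>>                       'rotated_detector_checkpoint.pth', device='cuda:0')
>>> # split the huge image into 1024x1024 patches with a step of 824, run the
>>> # detector on each patch, then merge the per-patch results by NMS
>>> results = inference_detector_by_patches(
>>>     model, 'huge_image.jpg', sizes=[1024], steps=[824],
>>>     ratios=[1.0], merge_iou_thr=0.1)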

mmrotate.apis.train_detector(model, dataset, cfg, distributed=False, validate=False, timestamp=None, meta=None)[source]

Main function for training a detector.
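
Example

A hedged end-to-end training sketch; the config path follows the mmrotate repository layout, and build_dataset / build_detector are assumed to be exported as shown.

>>> from mmcv import Config
>>> from mmrotate.apis import train_detector
>>> from mmrotate.datasets import build_dataset
>>> from mmrotate.models import build_detector
>>> cfg = Config.fromfile(
>>>     'configs/rotated_retinanet/rotated_retinanet_obb_r50_fpn_1x_dota_le90.py')
>>> model = build_detector(cfg.model)
>>> datasets = [build_dataset(cfg.data.train)]
>>> train_detector(model, datasets, cfg, distributed=False, validate=True)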

mmrotate.core

anchor

class mmrotate.core.anchor.PseudoAnchorGenerator(strides)[source]

Non-standard pseudo anchor generator that is used to generate valid flags only.

property num_base_anchors

Total number of base anchors in a feature grid.

Type

list[int]

single_level_grid_anchors(featmap_sizes, device='cuda')[source]

Calling its grid_anchors() method will raise NotImplementedError!

class mmrotate.core.anchor.RotatedAnchorGenerator(strides, ratios, scales=None, base_sizes=None, scale_major=True, octave_base_scale=None, scales_per_octave=None, centers=None, center_offset=0.0)[source]

Fake rotate anchor generator for 2D anchor-based detectors.

Horizontal bounding boxes are represented by (x, y, w, h, theta).

single_level_grid_priors(featmap_size, level_idx, dtype=torch.float32, device='cuda')[source]

Generate grid anchors of a single level.

Note

This function is usually called by method self.grid_priors.

Parameters
  • featmap_size (tuple[int]) – Size of the feature maps.

  • level_idx (int) – The index of corresponding feature map level.

  • dtype (torch.dtype, optional) – Data type of points. Defaults to torch.float32.

  • device (str, optional) – The device the tensor will be put on. Defaults to 'cuda'.

Returns

Anchors in the overall feature maps.

Return type

torch.Tensor
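
Example

A small sketch of generating rotated anchors over two feature levels (all values are illustrative); grid_priors is the usual entry point and calls single_level_grid_priors once per level.

>>> from mmrotate.core.anchor import RotatedAnchorGenerator
>>> self = RotatedAnchorGenerator(
>>>     strides=[8, 16], ratios=[0.5, 1.0, 2.0], scales=[4])
>>> # one (H, W) size per feature level; anchors are (x, y, w, h, theta)
>>> anchors = self.grid_priors([(32, 32), (16, 16)], device='cpu')
>>> assert anchors[0].shape == (32 * 32 * 3, 5)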

mmrotate.core.anchor.rotated_anchor_inside_flags(flat_anchors, valid_flags, img_shape, allowed_border=0)[source]

Check whether the rotated anchors are inside the border.

Parameters
  • flat_anchors (torch.Tensor) – Flattened anchors, shape (n, 5).

  • valid_flags (torch.Tensor) – An existing valid flags of anchors.

  • img_shape (tuple(int)) – Shape of current image.

  • allowed_border (int, optional) – The border to allow the valid anchor. Defaults to 0.

Returns

Flags indicating whether the anchors are inside a valid range.

Return type

torch.Tensor
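
Example

A toy check, assuming the test is applied to anchor centers: one center falls inside the 100x100 image and one far outside.

>>> import torch
>>> from mmrotate.core.anchor import rotated_anchor_inside_flags
>>> flat_anchors = torch.tensor([[50., 50., 20., 10., 0.],
>>>                              [500., 500., 20., 10., 0.]])
>>> valid_flags = torch.ones(2, dtype=torch.bool)
>>> flags = rotated_anchor_inside_flags(
>>>     flat_anchors, valid_flags, img_shape=(100, 100), allowed_border=0)
>>> # expected: tensor([True, False])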

bbox

class mmrotate.core.bbox.ATSSKldAssigner(topk, use_reassign=False)[source]

Assign a corresponding gt bbox or background to each bbox.

Each proposal will be assigned 0 or a positive integer indicating the ground truth index.

  • 0: negative sample, no assigned gt

  • positive integer: positive sample, index (1-based) of assigned gt

Parameters
  • topk (int) – Number of bboxes selected in each level.

  • use_reassign (bool, optional) – If True, samples will be reassigned. Defaults to False.

AspectRatio(gt_rbboxes)[source]

Compute the aspect ratio of all gts.

Parameters

gt_rbboxes (torch.Tensor) – Groundtruth polygons, shape (k, 8).

Returns

The aspect ratio of gt_rbboxes, shape (k, 1).

Return type

ratios (torch.Tensor)

assign(bboxes, num_level_bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

The assignment is done in the following steps:

  1. compute the IoU between all bboxes (bboxes of all pyramid levels) and gts

  2. compute the center distance between all bboxes and gts

  3. on each pyramid level, for each gt, select k bboxes whose centers are closest to the gt center, so we select k*l bboxes in total as candidates for each gt

  4. get the corresponding IoU for these candidates, compute the mean and std, and set mean + std as the IoU threshold

  5. compute the mean aspect ratio of all gts, and set exp(-mean aspect ratio / 4) * (mean + std) as the IoU threshold

  6. select the candidates whose IoU is greater than or equal to the threshold as positive

  7. limit the positive samples' centers to be inside the gts

Parameters
  • bboxes (Tensor) – Bounding boxes to be assigned, shape(n, 4).

  • num_level_bboxes (list) – Number of bboxes in each level.

  • gt_bboxes (Tensor) – Groundtruth boxes, shape (k, 4).

  • gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.

  • gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).

Returns

The assign result.

Return type

AssignResult

get_horizontal_bboxes(gt_rbboxes)[source]

Get horizontal bboxes from polygons.

Parameters

gt_rbboxes (torch.Tensor) – Groundtruth polygons, shape (k, 8).

Returns

The horizontal bboxes, shape (k, 4).

Return type

gt_rect_bboxes (torch.Tensor)

kld_mixture2single(g1, g2)[source]

Compute the Kullback-Leibler Divergence between two Gaussian distributions.

Parameters
  • g1 (dict[str, torch.Tensor]) – Gaussian distribution 1.

  • g2 (torch.Tensor) – Gaussian distribution 2.

Returns

Kullback-Leibler Divergence.

Return type

torch.Tensor

kld_overlaps(gt_rbboxes, points, eps=1e-06)[source]

Compute overlaps between polygons and points by Kullback-Leibler Divergence loss.

Parameters
  • gt_rbboxes (torch.Tensor) – Ground truth polygons, shape (k, 8).

  • points (torch.Tensor) – Points to be assigned, shape(n, 18).

  • eps (float, optional) – Defaults to 1e-6.

Returns

Kullback-Leibler Divergence loss.

Return type

Tensor

class mmrotate.core.bbox.ConvexAssigner(scale=4, pos_num=3)[source]

Assign a corresponding gt bbox or background to each bbox. Each proposal will be assigned 0 or a positive integer indicating the ground truth index.

  • 0: negative sample, no assigned gt

  • positive integer: positive sample, index (1-based) of assigned gt

Parameters
  • scale (float) – IoU threshold for positive bboxes.

  • pos_num (int) – Find the nearest pos_num points to the gt center in this level.

assign(points, gt_rbboxes, gt_rbboxes_ignore=None, gt_labels=None, overlaps=None)[source]

Assign gt to bboxes.

The assignment is done in the following steps:

  1. compute the IoU between all bboxes (bboxes of all pyramid levels) and gts

  2. compute the center distance between all bboxes and gts

  3. on each pyramid level, for each gt, select k bboxes whose centers are closest to the gt center, so we select k*l bboxes in total as candidates for each gt

  4. get the corresponding IoU for these candidates, compute the mean and std, and set mean + std as the IoU threshold

  5. select the candidates whose IoU is greater than or equal to the threshold as positive

  6. limit the positive samples' centers to be inside the gts

Parameters
  • points (torch.Tensor) – Points to be assigned, shape(n, 18).

  • gt_rbboxes (torch.Tensor) – Groundtruth polygons, shape (k, 8).

  • gt_rbboxes_ignore (Tensor, optional) – Ground truth polygons that are labelled as ignored, e.g., crowd boxes in COCO.

  • gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).

Returns

The assign result.

Return type

AssignResult

get_horizontal_bboxes(gt_rbboxes)[source]

Get horizontal bboxes from polygons.

Parameters

gt_rbboxes (torch.Tensor) – Groundtruth polygons, shape (k, 8).

Returns

The horizontal bboxes, shape (k, 4).

Return type

gt_rect_bboxes (torch.Tensor)

class mmrotate.core.bbox.DeltaXYWHAHBBoxCoder(target_means=(0.0, 0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0, 1.0), angle_range='oc', norm_factor=None, edge_swap=False, clip_border=True, add_ctr_clamp=False, ctr_clamp=32)[source]

Delta XYWHA HBBox coder.

This coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh, da) and decodes delta (dx, dy, dw, dh, da) back to the original bbox (cx, cy, w, h, a).

Parameters
  • target_means (Sequence[float]) – Denormalizing means of target for delta coordinates

  • target_stds (Sequence[float]) – Denormalizing standard deviation of target for delta coordinates

  • angle_range (str, optional) – Angle representations. Defaults to ‘oc’.

  • norm_factor (None|float, optional) – Regularization factor of angle.

  • edge_swap (bool, optional) – Whether swap the edge if w < h. Defaults to False.

  • clip_border (bool, optional) – Whether clip the objects outside the border of the image. Defaults to True.

  • add_ctr_clamp (bool) – Whether to add center clamp. When added, the predicted box is clamped if its center is too far away from the original anchor's center. Only used by YOLOF. Default False.

  • ctr_clamp (int) – The maximum pixel shift to clamp. Only used by YOLOF. Default 32.

decode(bboxes, pred_bboxes, max_shape=None, wh_ratio_clip=0.016)[source]

Apply transformation pred_bboxes to boxes.

Parameters
  • bboxes (torch.Tensor) – Basic boxes. Shape (B, N, 4) or (N, 4)

  • pred_bboxes (torch.Tensor) – Encoded offsets with respect to each roi. Has shape (B, N, num_classes * 5) or (B, N, 5) or (N, num_classes * 5) or (N, 5). Note N = num_anchors * W * H when rois is a grid of anchors.

  • max_shape (Sequence[int] or torch.Tensor or Sequence[Sequence[int]], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If bboxes shape is (B, N, 5), then the max_shape should be a Sequence[Sequence[int]] and the length of max_shape should also be B.

  • wh_ratio_clip (float, optional) – The allowed ratio between width and height.

Returns

Decoded boxes.

Return type

torch.Tensor

encode(bboxes, gt_bboxes)[source]

Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes.

Parameters
  • bboxes (torch.Tensor) – Source boxes, e.g., object proposals.

  • gt_bboxes (torch.Tensor) – Target of the transformation, e.g., ground-truth boxes.

Returns

Box transformation deltas

Return type

torch.Tensor

class mmrotate.core.bbox.DeltaXYWHAOBBoxCoder(target_means=(0.0, 0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0, 1.0), angle_range='oc', norm_factor=None, edge_swap=False, proj_xy=False, add_ctr_clamp=False, ctr_clamp=32)[source]

Delta XYWHA OBBox coder. This coder is used for rotated object detection (for example, on task 1 of the DOTA dataset). It encodes bbox (xc, yc, w, h, a) into delta (dx, dy, dw, dh, da) and decodes delta (dx, dy, dw, dh, da) back to the original bbox (xc, yc, w, h, a).

Parameters
  • target_means (Sequence[float]) – Denormalizing means of target for delta coordinates

  • target_stds (Sequence[float]) – Denormalizing standard deviation of target for delta coordinates

  • angle_range (str, optional) – Angle representations. Defaults to ‘oc’.

  • norm_factor (None|float, optional) – Regularization factor of angle.

  • edge_swap (bool, optional) – Whether swap the edge if w < h. Defaults to False.

  • proj_xy (bool, optional) – Whether project x and y according to angle. Defaults to False.

  • add_ctr_clamp (bool) – Whether to add center clamp. When added, the predicted box is clamped if its center is too far away from the original anchor's center. Only used by YOLOF. Default False.

  • ctr_clamp (int) – The maximum pixel shift to clamp. Only used by YOLOF. Default 32.

decode(bboxes, pred_bboxes, max_shape=None, wh_ratio_clip=0.016)[source]

Apply transformation pred_bboxes to boxes.

Parameters
  • bboxes (torch.Tensor) – Basic boxes. Shape (B, N, 5) or (N, 5)

  • pred_bboxes (torch.Tensor) – Encoded offsets with respect to each roi. Has shape (B, N, num_classes * 5) or (B, N, 5) or (N, num_classes * 5) or (N, 5). Note N = num_anchors * W * H when rois is a grid of anchors.

  • max_shape (Sequence[int] or torch.Tensor or Sequence[ Sequence[int]],optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If bboxes shape is (B, N, 5), then the max_shape should be a Sequence[Sequence[int]] and the length of max_shape should also be B.

  • wh_ratio_clip (float, optional) – The allowed ratio between width and height.

Returns

Decoded boxes.

Return type

torch.Tensor

encode(bboxes, gt_bboxes)[source]

Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes.

Parameters
  • bboxes (torch.Tensor) – Source boxes, e.g., object proposals.

  • gt_bboxes (torch.Tensor) – Target of the transformation, e.g., ground-truth boxes.

Returns

Box transformation deltas

Return type

torch.Tensor
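
Example

A hedged encode/decode round trip with toy boxes in (cx, cy, w, h, a) format; with the default means and stds, decode should approximately invert encode.

>>> import torch
>>> from mmrotate.core.bbox import DeltaXYWHAOBBoxCoder
>>> coder = DeltaXYWHAOBBoxCoder(angle_range='oc')
>>> anchors = torch.tensor([[50., 50., 20., 10., 0.]])
>>> gts = torch.tensor([[52., 49., 24., 12., 0.3]])
>>> deltas = coder.encode(anchors, gts)      # (dx, dy, dw, dh, da)
>>> decoded = coder.decode(anchors, deltas)  # should be close to gts
>>> assert torch.allclose(decoded, gts, atol=1e-4)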

class mmrotate.core.bbox.GVFixCoder(angle_range='oc', **kwargs)[source]

Gliding vertex fix coder.

This coder encodes bbox (cx, cy, w, h, a) into delta (dt, dr, dd, dl) and decodes delta (dt, dr, dd, dl) back to the original bbox (cx, cy, w, h, a).

Parameters

angle_range (str, optional) – Angle representations. Defaults to ‘oc’.

decode(hbboxes, fix_deltas)[source]

Apply transformation fix_deltas to boxes.

Parameters
  • hbboxes (torch.Tensor) – Basic boxes. Shape (B, N, 4) or (N, 4)

  • fix_deltas (torch.Tensor) – Encoded offsets with respect to each roi. Has shape (B, N, num_classes * 4) or (B, N, 4) or (N, num_classes * 4) or (N, 4). Note N = num_anchors * W * H when rois is a grid of anchors.

Returns

Decoded boxes.

Return type

torch.Tensor

encode(rbboxes)[source]

Get box regression transformation deltas.

Parameters

rbboxes (torch.Tensor) – Source boxes, e.g., object proposals.

Returns

Box transformation deltas

Return type

torch.Tensor

class mmrotate.core.bbox.GVRatioCoder(angle_range='oc', **kwargs)[source]

Gliding vertex ratio coder.

This coder encodes bbox (cx, cy, w, h, a) into delta (ratios).

Parameters

angle_range (str, optional) – Angle representations. Defaults to ‘oc’.

decode(bboxes, bboxes_pred)[source]

Apply transformation fix_deltas to boxes.

Parameters
  • bboxes (torch.Tensor) –

  • bboxes_pred (torch.Tensor) –

Raises

NotImplementedError

encode(rbboxes)[source]

Get box regression transformation deltas.

Parameters

rbboxes (torch.Tensor) – Source boxes, e.g., object proposals.

Returns

Box transformation deltas

Return type

torch.Tensor

class mmrotate.core.bbox.GaussianMixture(n_components, n_features=2, mu_init=None, var_init=None, eps=1e-06, requires_grad=False)[source]

Initializes the Gaussian mixture model and brings all tensors into their required shape.

Parameters
  • n_components (int) – number of components.

  • n_features (int, optional) – number of features.

  • mu_init (torch.Tensor, optional) – (T, k, d)

  • var_init (torch.Tensor, optional) – (T, k, d) or (T, k, d, d)

  • eps (float, optional) – Defaults to 1e-6.

  • requires_grad (bool, optional) – Defaults to False.

EM_step(x, log_resp)[source]

From the log-probabilities, computes new parameters pi, mu, var (that maximize the log-likelihood). This is the maximization step of the EM-algorithm.

Parameters
  • x (torch.Tensor) – (T, n, d) or (T, n, 1, d)

  • log_resp (torch.Tensor) – (T, n, k, 1)

Returns

pi (torch.Tensor): (T, k, 1)

mu (torch.Tensor): (T, k, d)

var (torch.Tensor): (T, k, d) or (T, k, d, d)

Return type

tuple

check_size(x)[source]

Make sure that the shape of x is (T, n, 1, d).

Parameters

x (torch.Tensor) – input tensor.

Returns

output tensor.

Return type

torch.Tensor

em_runner(x)[source]

Performs one iteration of the expectation-maximization algorithm by calling the respective subroutines.

Parameters

x (torch.Tensor) – (n, 1, d)

estimate_log_prob(x)[source]

Estimate the log-likelihood probability that samples belong to the k-th Gaussian.

Parameters

x (torch.Tensor) – (T, n, d) or (T, n, 1, d)

Returns

log-likelihood probability that samples belong to the k-th Gaussian with dimensions (T, n, k, 1).

Return type

torch.Tensor

fit(x, delta=0.001, n_iter=10)[source]

Fits Gaussian mixture model to the data.

Parameters
  • x (torch.Tensor) – input tensor.

  • delta (float, optional) – threshold.

  • n_iter (int, optional) – number of iterations.

get_score(x, sum_data=True)[source]

Computes the log-likelihood of the data under the model.

Parameters
  • x (torch.Tensor) – (T, n, 1, d)

  • sum_data (bool,optional) – Flag of whether to sum scores.

Returns

score or per_sample_score.

Return type

torch.Tensor

log_resp_step(x)[source]

Computes log-responses that indicate the (logarithmic) posterior belief (sometimes called responsibilities) that a data point was generated by one of the k mixture components. Also returns the mean of the mean of the logarithms of the probabilities (as is done in sklearn). This is the so-called expectation step of the EM-algorithm.

Parameters

x (torch.Tensor) – (T, n, d) or (T, n, 1, d)

Returns

log_prob_norm (torch.Tensor): the mean of the mean of the logarithms of the probabilities. log_resp (torch.Tensor): log-responses that indicate the posterior belief.

Return type

tuple

update_mu(mu)[source]

Updates mean to the provided value.

Parameters

mu (torch.Tensor) –

update_pi(pi)[source]

Updates pi to the provided value.

Parameters

pi (torch.Tensor) – (T, k, 1)

update_var(var)[source]

Updates variance to the provided value.

Parameters

var (torch.Tensor) – (T, k, d) or (T, k, d, d)
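
Example

A toy fit following the (T, n, d) shape convention used above (here T=1 group, n=100 samples, d=2 features); the mu attribute name is inferred from the mu_init parameter.

>>> import torch
>>> from mmrotate.core.bbox import GaussianMixture
>>> x = torch.randn(1, 100, 2)                          # (T, n, d)
>>> gmm = GaussianMixture(n_components=3, n_features=2)
>>> gmm.fit(x, delta=0.001, n_iter=10)
>>> # after fitting, the means should have shape (T, k, d) = (1, 3, 2)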

class mmrotate.core.bbox.MaxConvexIoUAssigner(pos_iou_thr, neg_iou_thr, min_pos_iou=0.0, gt_max_assign_all=True, ignore_iof_thr=-1, ignore_wrt_candidates=True, gpu_assign_thr=-1)[source]

Assign a corresponding gt bbox or background to each bbox. Each proposal will be assigned -1 or a semi-positive integer indicating the ground truth index.

  • -1: negative sample, no assigned gt

  • semi-positive integer: positive sample, index (0-based) of assigned gt

Parameters
  • pos_iou_thr (float) – IoU threshold for positive bboxes.

  • neg_iou_thr (float or tuple) – IoU threshold for negative bboxes.

  • min_pos_iou (float) – Minimum IoU for a bbox to be considered as a positive bbox. Positive samples can have smaller IoU than pos_iou_thr due to the 4th step (assign max IoU sample to each gt).

  • gt_max_assign_all (bool) – Whether to assign all bboxes with the same highest overlap with some gt to that gt.

  • ignore_iof_thr (float) – IoF threshold for ignoring bboxes (if gt_bboxes_ignore is specified). Negative values mean not ignoring any bboxes.

  • ignore_wrt_candidates (bool) – Whether to compute the iof between bboxes and gt_bboxes_ignore, or the contrary.

  • gpu_assign_thr (int) – The upper bound of the number of GT for GPU assign. When the number of gt is above this threshold, will assign on CPU device. Negative values mean not assign on CPU.

assign(points, gt_rbboxes, overlaps, gt_rbboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

The assignment is done in the following steps:

  1. compute the IoU between all bboxes (bboxes of all pyramid levels) and gts

  2. compute the center distance between all bboxes and gts

  3. on each pyramid level, for each gt, select k bboxes whose centers are closest to the gt center, so we select k*l bboxes in total as candidates for each gt

  4. get the corresponding IoU for these candidates, compute the mean and std, and set mean + std as the IoU threshold

  5. select the candidates whose IoU is greater than or equal to the threshold as positive

  6. limit the positive samples' centers to be inside the gts

Parameters
  • points (torch.Tensor) – Points to be assigned, shape(n, 18).

  • gt_rbboxes (torch.Tensor) – Groundtruth polygons, shape (k, 8).

  • overlaps (torch.Tensor) – Overlaps between k gt_bboxes and n bboxes, shape(k, n).

  • gt_rbboxes_ignore (Tensor, optional) – Ground truth polygons that are labelled as ignored, e.g., crowd boxes in COCO.

  • gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).

Returns

The assign result.

Return type

AssignResult

assign_wrt_overlaps(overlaps, gt_labels=None)[source]

Assign w.r.t. the overlaps of bboxes with gts.

Parameters
  • overlaps (torch.Tensor) – Overlaps between k gt_bboxes and n bboxes, shape(k, n).

  • gt_labels (Tensor, optional) – Labels of k gt_bboxes, shape (k, ).

Returns

The assign result.

Return type

AssignResult

convex_overlaps(gt_rbboxes, points)[source]

Compute overlaps between polygons and points.

Parameters
  • gt_rbboxes (torch.Tensor) – Groundtruth polygons, shape (k, 8).

  • points (torch.Tensor) – Points to be assigned, shape(n, 18).

Returns

Overlaps between k gt_bboxes and n bboxes, shape(k, n).

Return type

overlaps (torch.Tensor)

class mmrotate.core.bbox.MidpointOffsetCoder(target_means=(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), target_stds=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0), angle_range='oc')[source]

Mid point offset coder. This coder encodes bbox (x1, y1, x2, y2) into delta (dx, dy, dw, dh, da, db) and decodes delta (dx, dy, dw, dh, da, db) back to original bbox (x1, y1, x2, y2).

Parameters
  • target_means (Sequence[float]) – Denormalizing means of target for delta coordinates

  • target_stds (Sequence[float]) – Denormalizing standard deviation of target for delta coordinates

  • angle_range (str, optional) – Angle representations. Defaults to ‘oc’.

decode(bboxes, pred_bboxes, max_shape=None, wh_ratio_clip=0.016)[source]

Apply transformation pred_bboxes to bboxes.

Parameters
  • bboxes (torch.Tensor) – Basic boxes. Shape (B, N, 4) or (N, 4)

  • pred_bboxes (torch.Tensor) – Encoded offsets with respect to each roi. Has shape (B, N, 5) or (N, 5). Note N = num_anchors * W * H when rois is a grid of anchors.

  • max_shape (Sequence[int] or torch.Tensor or Sequence[Sequence[int]], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W). If bboxes shape is (B, N, 6), then the max_shape should be a Sequence[Sequence[int]] and the length of max_shape should also be B.

  • wh_ratio_clip (float, optional) – The allowed ratio between width and height.

Returns

Decoded boxes.

Return type

torch.Tensor

encode(bboxes, gt_bboxes)[source]

Get box regression transformation deltas that can be used to transform the bboxes into the gt_bboxes.

Parameters
  • bboxes (torch.Tensor) – Source boxes, e.g., object proposals.

  • gt_bboxes (torch.Tensor) – Target of the transformation, e.g., ground-truth boxes.

Returns

Box transformation deltas

Return type

torch.Tensor

class mmrotate.core.bbox.RBboxOverlaps2D[source]

2D Overlaps (e.g. IoUs, GIoUs) Calculator.

class mmrotate.core.bbox.RRandomSampler(num, pos_fraction, neg_pos_ub=-1, add_gt_as_proposals=True, **kwargs)[source]

Random sampler.

Parameters
  • num (int) – Number of samples

  • pos_fraction (float) – Fraction of positive samples

  • neg_pos_ub (int, optional) – Upper bound number of negative and positive samples. Defaults to -1.

  • add_gt_as_proposals (bool, optional) – Whether to add ground truth boxes as proposals. Defaults to True.

random_choice(gallery, num)[source]

Randomly select some elements from the gallery.

If gallery is a Tensor, the returned indices will be a Tensor; if gallery is an ndarray or list, the returned indices will be an ndarray.

Parameters
  • gallery (Tensor | ndarray | list) – indices pool.

  • num (int) – expected sample num.

Returns

sampled indices.

Return type

Tensor or ndarray

sample(assign_result, bboxes, gt_bboxes, gt_labels=None, **kwargs)[source]

Sample positive and negative bboxes.

This is a simple implementation of bbox sampling given candidates, assigning results and ground truth bboxes.

Parameters
  • assign_result (AssignResult) – Bbox assigning results.

  • bboxes (torch.Tensor) – Boxes to be sampled from.

  • gt_bboxes (torch.Tensor) – Ground truth bboxes.

  • gt_labels (Tensor, optional) – Class labels of ground truth bboxes.

Returns

Sampling result.

Return type

SamplingResult

Example

>>> from mmdet.core.bbox import RandomSampler
>>> from mmdet.core.bbox import AssignResult
>>> from mmdet.core.bbox.demodata import ensure_rng, random_boxes
>>> rng = ensure_rng(None)
>>> assign_result = AssignResult.random(rng=rng)
>>> bboxes = random_boxes(assign_result.num_preds, rng=rng)
>>> gt_bboxes = random_boxes(assign_result.num_gts, rng=rng)
>>> gt_labels = None
>>> self = RandomSampler(num=32, pos_fraction=0.5, neg_pos_ub=-1,
>>>                      add_gt_as_proposals=False)
>>> self = self.sample(assign_result, bboxes, gt_bboxes, gt_labels)

class mmrotate.core.bbox.SASAssigner(topk)[source]

Assign a corresponding gt bbox or background to each bbox. Each proposal will be assigned 0 or a positive integer indicating the ground truth index.

  • 0: negative sample, no assigned gt

  • positive integer: positive sample, index (1-based) of assigned gt

Parameters

topk (int) – Number of bboxes selected in each level.

assign(bboxes, num_level_bboxes, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None)[source]

Assign gt to bboxes.

The assignment is done in the following steps:

  1. compute the IoU between all bboxes (bboxes of all pyramid levels) and gts

  2. compute the center distance between all bboxes and gts

  3. on each pyramid level, for each gt, select k bboxes whose centers are closest to the gt center, so we select k*l bboxes in total as candidates for each gt

  4. get the corresponding IoU for these candidates, compute the mean and std, and set mean + std as the IoU threshold

  5. select the candidates whose IoU is greater than or equal to the threshold as positive

  6. limit the positive samples' centers to be inside the gts

Parameters
  • bboxes (torch.Tensor) – Bounding boxes to be assigned, shape(n, 4).

  • num_level_bboxes (list) – Number of bboxes in each level.

  • gt_bboxes (torch.Tensor) – Groundtruth boxes, shape (k, 4).

  • gt_bboxes_ignore (Tensor, optional) – Ground truth bboxes that are labelled as ignored, e.g., crowd boxes in COCO.

  • gt_labels (Tensor, optional) – Label of gt_bboxes, shape (k, ).

Returns

The assign result.

Return type

AssignResult

mmrotate.core.bbox.bbox_mapping_back(bboxes, img_shape, scale_factor, flip, flip_direction='horizontal')[source]

Map bboxes from testing scale to original image scale.

mmrotate.core.bbox.build_assigner(cfg, **default_args)[source]

Builder of box assigner.

mmrotate.core.bbox.build_bbox_coder(cfg, **default_args)[source]

Builder of box coder.

mmrotate.core.bbox.build_sampler(cfg, **default_args)[source]

Builder of box sampler.

mmrotate.core.bbox.gaussian2bbox(gmm)[source]

Convert Gaussian distribution to polygons by SVD.

Parameters

gmm (dict[str, torch.Tensor]) – Dict of Gaussian distribution.

Returns

Polygons.

Return type

torch.Tensor

mmrotate.core.bbox.gt2gaussian(target)[source]

Convert polygons to Gaussian distributions.

Parameters

target (torch.Tensor) – Polygons with shape (N, 8).

Returns

Gaussian distributions.

Return type

dict[str, torch.Tensor]

mmrotate.core.bbox.hbb2obb(hbboxes, version='oc')[source]

Convert horizontal bounding boxes to oriented bounding boxes.

Parameters
  • hbboxes (torch.Tensor) – Horizontal bboxes [x_lt, y_lt, x_rb, y_rb].

  • version (str) – Angle representations.

Returns

[x_ctr,y_ctr,w,h,angle]

Return type

obbs (torch.Tensor)

mmrotate.core.bbox.norm_angle(angle, angle_range)[source]

Limit the range of angles.

Parameters
  • angle (ndarray) – shape(n, ).

  • angle_range (str) – Angle representations.

Returns

shape(n, ).

Return type

angle (ndarray)

mmrotate.core.bbox.obb2hbb(rbboxes, version='oc')[source]

Convert oriented bounding boxes to horizontal bounding boxes.

Parameters
  • rbboxes (torch.Tensor) – Oriented bboxes [x_ctr, y_ctr, w, h, angle].

  • version (str) – Angle representations.

Returns

[x_ctr,y_ctr,w,h,-pi/2]

Return type

hbbs (torch.Tensor)

mmrotate.core.bbox.obb2poly(rbboxes, version='oc')[source]

Convert oriented bounding boxes to polygons.

Parameters
  • rbboxes (torch.Tensor) – Oriented bboxes [x_ctr, y_ctr, w, h, angle].

  • version (str) – Angle representations.

Returns

[x0,y0,x1,y1,x2,y2,x3,y3]

Return type

polys (torch.Tensor)

mmrotate.core.bbox.obb2poly_np(rbboxes, version='oc')[source]

Convert oriented bounding boxes to polygons.

Parameters
  • rbboxes (ndarray) – Oriented bboxes [x_ctr, y_ctr, w, h, angle].

  • version (str) – Angle representations.

Returns

[x0,y0,x1,y1,x2,y2,x3,y3]

Return type

polys (ndarray)

mmrotate.core.bbox.obb2xyxy(rbboxes, version='oc')[source]

Convert oriented bounding boxes to horizontal bounding boxes.

Parameters
  • rbboxes (torch.Tensor) – Oriented bboxes [x_ctr, y_ctr, w, h, angle].

  • version (str) – Angle representations.

Returns

[x_lt,y_lt,x_rb,y_rb]

Return type

hbbs (torch.Tensor)

mmrotate.core.bbox.poly2obb(polys, version='oc')[source]

Convert polygons to oriented bounding boxes.

Parameters
  • polys (torch.Tensor) – [x0,y0,x1,y1,x2,y2,x3,y3]

  • version (str) – Angle representations.

Returns

[x_ctr,y_ctr,w,h,angle]

Return type

obbs (torch.Tensor)

mmrotate.core.bbox.poly2obb_np(polys, version='oc')[source]

Convert polygons to oriented bounding boxes.

Parameters
  • polys (ndarray) – [x0,y0,x1,y1,x2,y2,x3,y3]

  • version (str) – Angle representations.

Returns

[x_ctr,y_ctr,w,h,angle]

Return type

obbs (ndarray)
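
Example

A small round trip between the conversion helpers above; converting back should approximately recover the input box.

>>> import torch
>>> from mmrotate.core.bbox import obb2poly, poly2obb
>>> obbs = torch.tensor([[50., 50., 20., 10., 0.4]])  # (x_ctr, y_ctr, w, h, angle)
>>> polys = obb2poly(obbs, version='oc')              # shape (1, 8)
>>> back = poly2obb(polys, version='oc')              # should be close to obbs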

mmrotate.core.bbox.rbbox2result(bboxes, labels, num_classes)[source]

Convert detection results to a list of numpy arrays.

Parameters
  • bboxes (torch.Tensor) – shape (n, 6)

  • labels (torch.Tensor) – shape (n, )

  • num_classes (int) – Number of classes, including the background class.

Returns

bbox results of each class

Return type

list(ndarray)

mmrotate.core.bbox.rbbox2roi(bbox_list)[source]

Convert a list of bboxes to roi format.

Parameters

bbox_list (list[Tensor]) – a list of bboxes corresponding to a batch of images.

Returns

shape (n, 6), [batch_ind, cx, cy, w, h, a]

Return type

Tensor

mmrotate.core.bbox.rbbox_overlaps(bboxes1, bboxes2, mode='iou', is_aligned=False)[source]

Calculate overlap between two set of bboxes.

Parameters
  • bboxes1 (torch.Tensor) – shape (B, m, 5) in <cx, cy, w, h, a> format or empty.

  • bboxes2 (torch.Tensor) – shape (B, n, 5) in <cx, cy, w, h, a> format or empty.

  • mode (str) – “iou” (intersection over union), “iof” (intersection over foreground) or “giou” (generalized intersection over union). Default “iou”.

  • is_aligned (bool, optional) – If True, then m and n must be equal. Default False.

Returns

shape (m, n) if is_aligned is False else shape (m,)

Return type

Tensor
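
Example

A toy overlap computation; the second box in bboxes2 is the first box with w/h swapped and rotated by pi/2, so its IoU with bboxes1 should be close to 1.

>>> import torch
>>> from mmrotate.core.bbox import rbbox_overlaps
>>> bboxes1 = torch.tensor([[50., 50., 20., 10., 0.]])
>>> bboxes2 = torch.tensor([[50., 50., 20., 10., 0.],
>>>                         [50., 50., 10., 20., 1.5708]])
>>> ious = rbbox_overlaps(bboxes1, bboxes2)  # shape (1, 2)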

patch

mmrotate.core.patch.get_multiscale_patch(sizes, steps, ratios)[source]

Get multiscale patch sizes and steps.

Parameters
  • sizes (list) – A list of patch sizes.

  • steps (list) – A list of steps to slide patches.

  • ratios (list) – Multiscale ratios. Each size and step is divided by each ratio to generate patches at new scales.

Returns

new_sizes (list): A list of multiscale patch sizes. new_steps (list): A list of steps corresponding to new_sizes.

Return type

tuple (new_sizes, new_steps)

mmrotate.core.patch.merge_results(results, offsets, iou_thr=0.1, device='cpu')[source]

Merge patch results via nms.

Parameters
  • results (list[np.ndarray]) – A list of patches results.

  • offsets (np.ndarray) – Positions of the left top points of patches.

  • iou_thr (float) – The IoU threshold of NMS.

  • device (str) – The device to call nms.

Returns

Detection results after merging.

Return type

list[np.ndarray]

mmrotate.core.patch.slide_window(width, height, sizes, steps, img_rate_thr=0.6)[source]

Slide windows in images and get window position.

Parameters
  • width (int) – The width of the image.

  • height (int) – The height of the image.

  • sizes (list) – List of window sizes.

  • steps (list) – List of window steps.

  • img_rate_thr (float) – Threshold of window area divided by image area.

Returns

Information of valid windows.

Return type

np.ndarray
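
Example

A sketch combining the patch helpers above; each size and step is divided by each ratio, so this call produces windows of roughly 2048, 1024, and 512 pixels.

>>> from mmrotate.core.patch import get_multiscale_patch, slide_window
>>> sizes, steps = get_multiscale_patch([1024], [824], ratios=[0.5, 1.0, 2.0])
>>> windows = slide_window(width=4096, height=4096, sizes=sizes, steps=steps)
>>> # windows is an (n, 4) array of valid window positions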

evaluation

mmrotate.core.evaluation.eval_rbbox_map(det_results, annotations, scale_ranges=None, iou_thr=0.5, use_07_metric=True, dataset=None, logger=None, nproc=4)[source]

Evaluate mAP of a rotated dataset.

Parameters
  • det_results (list[list]) – [[cls1_det, cls2_det, …], …]. The outer list indicates images, and the inner list indicates per-class detected bboxes.

  • annotations (list[dict]) –

    Ground truth annotations where each item of the list indicates an image. Keys of annotations are:

    • bboxes: numpy array of shape (n, 5)

    • labels: numpy array of shape (n, )

    • bboxes_ignore (optional): numpy array of shape (k, 5)

    • labels_ignore (optional): numpy array of shape (k, )

  • scale_ranges (list[tuple] | None) – Range of scales to be evaluated, in the format [(min1, max1), (min2, max2), …]. A range of (32, 64) means the area range between (32**2, 64**2). Default: None.

  • iou_thr (float) – IoU threshold to be considered as matched. Default: 0.5.

  • use_07_metric (bool) – Whether to use the voc07 metric.

  • dataset (list[str] | str | None) – Dataset name or dataset classes, there are minor differences in metrics for different datasets, e.g. “voc07”, “imagenet_det”, etc. Default: None.

  • logger (logging.Logger | str | None) – The way to print the mAP summary. See mmcv.utils.print_log() for details. Default: None.

  • nproc (int) – Processes used for computing TP and FP. Default: 4.

Returns

(mAP, [dict, dict, …])

Return type

tuple
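
Example

A minimal evaluation sketch with one image and one class; per-class detections carry a trailing score column, i.e. shape (n, 6).

>>> import numpy as np
>>> from mmrotate.core.evaluation import eval_rbbox_map
>>> det_results = [[np.array([[50., 50., 20., 10., 0.1, 0.9]])]]
>>> annotations = [dict(bboxes=np.array([[50., 50., 20., 10., 0.1]]),
>>>                     labels=np.array([0]))]
>>> mean_ap, eval_results = eval_rbbox_map(
>>>     det_results, annotations, iou_thr=0.5, nproc=1)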

post_processing

mmrotate.core.post_processing.aug_multiclass_nms_rotated(merged_bboxes, merged_labels, score_thr, nms, max_num, classes)[source]

NMS for aug multi-class bboxes.

Parameters
  • merged_bboxes (torch.Tensor) – shape (n, #class*5) or (n, 5)

  • merged_labels (torch.Tensor) – Labels corresponding to merged_bboxes.

  • score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.

  • nms (dict) – Config of NMS.

  • max_num (int, optional) – if there are more than max_num bboxes after NMS, only top max_num will be kept. Default to -1.

  • classes (int) – number of classes.

Returns

tensors of shape (k, 6) and (k,). Dets are boxes with scores. Labels are 0-based.

Return type

tuple (dets, labels)

mmrotate.core.post_processing.multiclass_nms_rotated(multi_bboxes, multi_scores, score_thr, nms, max_num=- 1, score_factors=None, return_inds=False)[source]

NMS for multi-class bboxes.

Parameters
  • multi_bboxes (torch.Tensor) – shape (n, #class*5) or (n, 5)

  • multi_scores (torch.Tensor) – shape (n, #class), where the last column contains scores of the background class, but this will be ignored.

  • score_thr (float) – bbox threshold, bboxes with scores lower than it will not be considered.

  • nms (dict) – Config of NMS.

  • max_num (int, optional) – if there are more than max_num bboxes after NMS, only top max_num will be kept. Default to -1.

  • score_factors (Tensor, optional) – The factors multiplied to scores before applying NMS. Default to None.

  • return_inds (bool, optional) – Whether return the indices of kept bboxes. Default to False.

Returns

tensors of shape (k, 6), (k,), and (k,). Dets are boxes with scores. Labels are 0-based.

Return type

tuple (dets, labels, indices (optional))
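
Example

A toy run with two near-duplicate boxes of one foreground class. Since the implementation reads nms.iou_thr attribute-style, the NMS config is wrapped in mmcv's ConfigDict here (an assumption about the expected type).

>>> import torch
>>> from mmcv.utils import ConfigDict
>>> from mmrotate.core.post_processing import multiclass_nms_rotated
>>> multi_bboxes = torch.tensor([[50., 50., 20., 10., 0.1],
>>>                              [50., 51., 20., 10., 0.1]])
>>> multi_scores = torch.tensor([[0.9, 0.0],   # last column is background
>>>                              [0.8, 0.0]])
>>> dets, labels = multiclass_nms_rotated(
>>>     multi_bboxes, multi_scores, score_thr=0.05,
>>>     nms=ConfigDict(iou_thr=0.1), max_num=100)
>>> # expected: a single (1, 6) det and one label after suppression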

mmrotate.datasets

datasets

class mmrotate.datasets.DOTADataset(ann_file, pipeline, version='oc', difficulty=100, **kwargs)[source]

DOTA dataset for detection.

Parameters
  • ann_file (str) – Annotation file path.

  • pipeline (list[dict]) – Processing pipeline.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

  • difficulty (int, optional) – The difficulty threshold of GT. Defaults to 100.

evaluate(results, metric='mAP', logger=None, proposal_nums=(100, 300, 1000), iou_thr=0.5, scale_ranges=None, nproc=4)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | None | str) – Logger used for printing related information during evaluation. Default: None.

  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).

  • iou_thr (float | list[float]) – IoU threshold. It must be a float when evaluating mAP, and can be a list when evaluating recall. Default: 0.5.

  • scale_ranges (list[tuple] | None) – Scale ranges for evaluating mAP. Default: None.

  • nproc (int) – Processes used for computing TP and FP. Default: 4.

format_results(results, submission_dir=None, nproc=4, **kwargs)[source]

Format the results to submission text (standard format for DOTA evaluation).

Parameters
  • results (list) – Testing results of the dataset.

  • submission_dir (str, optional) – The folder that contains submission files. If not specified, a temp folder will be created. Default: None.

  • nproc (int, optional) – Number of processes. Defaults to 4.

Returns

  • result_files (dict): a dict containing the json filepaths

  • tmp_dir (str): the temporary directory created for saving json files when submission_dir is not specified.

Return type

tuple

load_annotations(ann_folder)[source]
Parameters

ann_folder (str) – Folder that contains DOTA v1 annotation txt files.

merge_det(results, nproc=4)[source]

Merge patch bboxes into full-image results.

Parameters
  • results (list) – Testing results of the dataset.

  • nproc (int) – Number of processes. Default: 4.
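
Example

A hedged construction sketch via the dataset builder (build_dataset is assumed to be exported from mmrotate.datasets; paths and pipeline are placeholders).

>>> from mmrotate.datasets import build_dataset
>>> dataset = build_dataset(dict(
>>>     type='DOTADataset',
>>>     ann_file='data/DOTA/train/annfiles/',   # hypothetical path
>>>     img_prefix='data/DOTA/train/images/',
>>>     version='le90',
>>>     pipeline=[dict(type='LoadImageFromFile')]))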

class mmrotate.datasets.HRSCDataset(ann_file, pipeline, img_subdir='JPEGImages', ann_subdir='Annotations', classwise=False, version='oc', **kwargs)[source]

HRSC dataset for detection.

Parameters
  • ann_file (str) – Annotation file path.

  • pipeline (list[dict]) – Processing pipeline.

  • img_subdir (str) – Subdir where images are stored. Default: JPEGImages.

  • ann_subdir (str) – Subdir where annotations are. Default: Annotations.

  • classwise (bool) – Whether to use all classes or only ship.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

evaluate(results, metric='mAP', logger=None, proposal_nums=(100, 300, 1000), iou_thr=0.5, scale_ranges=None, use_07_metric=True, nproc=4)[source]

Evaluate the dataset.

Parameters
  • results (list) – Testing results of the dataset.

  • metric (str | list[str]) – Metrics to be evaluated.

  • logger (logging.Logger | None | str) – Logger used for printing related information during evaluation. Default: None.

  • proposal_nums (Sequence[int]) – Proposal number used for evaluating recalls, such as recall@100, recall@1000. Default: (100, 300, 1000).

  • iou_thr (float | list[float]) – IoU threshold. It must be a float when evaluating mAP, and can be a list when evaluating recall. Default: 0.5.

  • scale_ranges (list[tuple] | None) – Scale ranges for evaluating mAP. Default: None.

  • use_07_metric (bool) – Whether to use the voc07 metric.

  • nproc (int) – Processes used for computing TP and FP. Default: 4.

load_annotations(ann_file)[source]

Load annotation from XML style ann_file.

Parameters

ann_file (str) – Path of Imageset file.

Returns

Annotation info from XML file.

Return type

list[dict]

class mmrotate.datasets.SARDataset(ann_file, pipeline, version='oc', difficulty=100, **kwargs)[source]

SAR ship dataset for detection (supports RSSDD and HRSID).

pipelines

class mmrotate.datasets.pipelines.LoadPatchFromImage(to_float32=False, color_type='color', channel_order='bgr', file_client_args={'backend': 'disk'})[source]

Load a patch from the huge image.

Similar to LoadImageFromFile, but only keeps a patch of results['img'] according to results['win'].

class mmrotate.datasets.pipelines.PolyRandomRotate(rotate_ratio=0.5, angles_range=180, auto_bound=False, rect_classes=None, version='le90')[source]

Rotate img & bbox. Reference: https://github.com/hukaixuan19970627/OrientedRepPoints_DOTA

Parameters
  • rotate_ratio (float, optional) – The rotating probability. Default: 0.5.

  • angles_range (int, optional) – The rotation angle is randomly sampled from (-angles_range, +angles_range). Default: 180.

  • auto_bound (bool, optional) – whether to find the new width and height bounds.

  • rect_classes (None|list, optional) – Specifies classes that need to be rotated by a multiple of 90 degrees.

  • version (str, optional) – Angle representations. Defaults to 'le90'.

apply_coords(coords)[source]

coords should be an N * 2 array-like, containing N couples of (x, y) points.

apply_image(img, bound_h, bound_w, interp=1)[source]

img should be a numpy array, formatted as Height * Width * Nchannels

create_rotation_matrix(center, angle, bound_h, bound_w, offset=0)[source]

Create rotation matrix.

filter_border(bboxes, h, w)[source]

Filter the box whose center point is outside or whose side length is less than 5.

property is_rotate

Randomly decide whether to rotate.

class mmrotate.datasets.pipelines.RRandomFlip(flip_ratio=None, direction='horizontal', version='oc')[source]
Parameters
  • flip_ratio (float | list[float], optional) – The flipping probability. Default: None.

  • direction (str | list[str], optional) – The flipping direction. Options are ‘horizontal’, ‘vertical’, ‘diagonal’.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

bbox_flip(bboxes, img_shape, direction)[source]

Flip bboxes horizontally or vertically.

Parameters
  • bboxes (ndarray) – shape (…, 5*k)

  • img_shape (tuple) – (height, width)

Returns

Flipped bounding boxes.

Return type

numpy.ndarray

class mmrotate.datasets.pipelines.RResize(img_scale=None, multiscale_mode='range', ratio_range=None)[source]

Resize images & rotated bboxes. Inherits the Resize pipeline class to handle rotated bboxes.

Parameters
  • img_scale (tuple or list[tuple]) – Images scales for resizing.

  • multiscale_mode (str) – Either “range” or “value”.

  • ratio_range (tuple[float]) – (min_ratio, max_ratio).
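
Example

A typical rotated-detection training pipeline fragment combining the transforms above (scale and ratio values are illustrative).

>>> train_pipeline = [
>>>     dict(type='RResize', img_scale=(1024, 1024)),
>>>     dict(type='RRandomFlip', flip_ratio=0.5, version='le90'),
>>>     dict(type='PolyRandomRotate', rotate_ratio=0.5, angles_range=180,
>>>          version='le90'),
>>> ]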

mmrotate.models

detectors

class mmrotate.models.detectors.GlidingVertex(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None, init_cfg=None)[source]

Implementation of Gliding Vertex on the Horizontal Bounding Box for Multi-Oriented Object Detection

class mmrotate.models.detectors.OrientedRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None, init_cfg=None)[source]

Implementation of Oriented R-CNN for Object Detection.

class mmrotate.models.detectors.R3Det(num_refine_stages, backbone, neck=None, bbox_head=None, frm_cfgs=None, refine_heads=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None)[source]

Rotated Refinement RetinaNet.

aug_test(imgs, img_metas, **kwargs)[source]

Test function with test time augmentation.

extract_feat(img)[source]

Directly extract features from the backbone+neck.

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/analysis_tools/get_flops.py.

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None)[source]

Forward function.

simple_test(img, img_meta, rescale=False)[source]

Test function without test time augmentation.

Parameters
  • imgs (list[torch.Tensor]) – List of multiple images

  • img_metas (list[dict]) – List of image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes. The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

class mmrotate.models.detectors.ReDet(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None, init_cfg=None)[source]

Implementation of ReDet: A Rotation-equivariant Detector for Aerial Object Detection.

class mmrotate.models.detectors.RoITransformer(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None, init_cfg=None)[source]

Implementation of Learning RoI Transformer for Oriented Object Detection in Aerial Images.

class mmrotate.models.detectors.RotatedBaseDetector(init_cfg=None)[source]

Base class for rotated detectors.

show_result(img, result, score_thr=0.3, bbox_color=(226, 43, 138), text_color='white', thickness=2, font_scale=0.25, win_name='', show=False, wait_time=0, out_file=None, **kwargs)[source]

Draw result over img.

Parameters
  • img (str or Tensor) – The image to be displayed.

  • result (Tensor or tuple) – The results to draw over img, either bbox_result or (bbox_result, segm_result).

  • score_thr (float, optional) – Minimum score of bboxes to be shown. Default: 0.3.

  • bbox_color (str or tuple or Color) – Color of bbox lines.

  • text_color (str or tuple or Color) – Color of texts.

  • thickness (int) – Thickness of lines.

  • font_scale (float) – Font scales of texts.

  • win_name (str) – The window name.

  • wait_time (int) – Value of waitKey param. Default: 0.

  • show (bool) – Whether to show the image. Default: False.

  • out_file (str or None) – The filename to write the image. Default: None.

Returns

The image with results drawn on it, returned only when show is False or out_file is not specified.

Return type

img (torch.Tensor)

class mmrotate.models.detectors.RotatedFasterRCNN(backbone, rpn_head, roi_head, train_cfg, test_cfg, neck=None, pretrained=None, init_cfg=None)[source]

Implementation of Rotated Faster R-CNN.

class mmrotate.models.detectors.RotatedRepPoints(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of Rotated RepPoints.

class mmrotate.models.detectors.RotatedRetinaNet(backbone, neck, bbox_head, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None)[source]

Implementation of Rotated RetinaNet.

class mmrotate.models.detectors.RotatedSingleStageDetector(backbone, neck=None, bbox_head=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None)[source]

Base class for rotated single-stage detectors.

Single-stage detectors directly and densely predict bounding boxes on the output features of the backbone+neck.

aug_test(imgs, img_metas, rescale=False)[source]

Test function with test time augmentation.

Parameters
  • imgs (list[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains all images in the batch.

  • img_metas (list[list[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch. each dict has image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes. The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

extract_feat(img)[source]

Directly extract features from the backbone+neck.

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/analysis_tools/get_flops.py

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None)[source]
Parameters
  • img (Tensor) – Input images of shape (N, C, H, W). Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – A List of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet.datasets.pipelines.Collect.

  • gt_bboxes (list[Tensor]) – Each item is the truth boxes for each image in [tl_x, tl_y, br_x, br_y] format.

  • gt_labels (list[Tensor]) – Class indices corresponding to each box

  • gt_bboxes_ignore (None | list[Tensor]) – Specify which bounding boxes can be ignored when computing the loss.

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

simple_test(img, img_metas, rescale=False)[source]

Test function without test time augmentation.

Parameters
  • imgs (list[torch.Tensor]) – List of multiple images

  • img_metas (list[dict]) – List of image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes. The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

class mmrotate.models.detectors.RotatedTwoStageDetector(backbone, neck=None, rpn_head=None, roi_head=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None)[source]

Base class for rotated two-stage detectors.

Two-stage detectors typically consist of a region proposal network and a task-specific regression head.

async async_simple_test(img, img_meta, proposals=None, rescale=False)[source]

Async test without augmentation.

aug_test(imgs, img_metas, rescale=False)[source]

Test with augmentations.

If rescale is False, then returned bboxes and masks will fit the scale of imgs[0].

extract_feat(img)[source]

Directly extract features from the backbone+neck.

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/analysis_tools/get_flops.py

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None, proposals=None, **kwargs)[source]
Parameters
  • img (Tensor) – of shape (N, C, H, W) encoding input images. Typically these should be mean centered and std scaled.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.

  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task.

  • proposals – override rpn proposals with custom proposals. Use when with_rpn is False.

Returns

a dictionary of loss components

Return type

dict[str, Tensor]

simple_test(img, img_metas, proposals=None, rescale=False)[source]

Test without augmentation.

property with_roi_head

whether the detector has a RoI head

Type

bool

property with_rpn

whether the detector has RPN

Type

bool

class mmrotate.models.detectors.S2ANet(backbone, neck=None, fam_head=None, align_cfgs=None, odm_head=None, train_cfg=None, test_cfg=None, pretrained=None)[source]

Implementation of Align Deep Features for Oriented Object Detection.

aug_test(imgs, img_metas, **kwargs)[source]

Test function with test time augmentation.

extract_feat(img)[source]

Directly extract features from the backbone+neck.

forward_dummy(img)[source]

Used for computing network flops.

See mmdetection/tools/analysis_tools/get_flops.py.

forward_train(img, img_metas, gt_bboxes, gt_labels, gt_bboxes_ignore=None)[source]

Forward function of S2ANet.

simple_test(img, img_meta, rescale=False)[source]

Test function without test time augmentation.

Parameters
  • imgs (list[torch.Tensor]) – List of multiple images

  • img_metas (list[dict]) – List of image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

BBox results of each image and classes. The outer list corresponds to each image. The inner list corresponds to each class.

Return type

list[list[np.ndarray]]

backbones

class mmrotate.models.backbones.ReResNet(depth, in_channels=3, stem_channels=64, base_channels=64, expansion=None, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(3, ), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=-1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=False, with_cp=False, zero_init_residual=True, pretrained=None, init_cfg=None)[source]

ReResNet backbone.

Please refer to the paper for details.

Parameters
  • depth (int) – Network depth, from {18, 34, 50, 101, 152}.

  • in_channels (int) – Number of input image channels. Default: 3.

  • stem_channels (int) – Output channels of the stem layer. Default: 64.

  • base_channels (int) – Middle channels of the first stage. Default: 64.

  • num_stages (int) – Stages of the network. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage. Default: (1, 2, 2, 2).

  • dilations (Sequence[int]) – Dilation of each stage. Default: (1, 1, 1, 1).

  • out_indices (Sequence[int]) – Output from which stages. If only one stage is specified, a single tensor (feature map) is returned; otherwise, if multiple stages are specified, a tuple of tensors will be returned. Default: (3, ).

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace 7x7 conv in input stem with 3 3x3 conv. Default: False.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters. Default: -1.

  • conv_cfg (dict | None) – The config dict for conv layers. Default: None.

  • norm_cfg (dict) – The config dict for norm layers.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only. Default: False.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed. Default: False.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity. Default: True.

forward(x)[source]

Forward function of ReResNet.

make_res_layer(**kwargs)[source]

Build Reslayer.

property norm1

Get the normalization layer's name.

train(mode=True)[source]

Train function of ReResNet.

necks

class mmrotate.models.necks.ReFPN(in_channels, out_channels, num_outs, start_level=0, end_level=-1, add_extra_convs=False, extra_convs_on_inputs=True, relu_before_extra_convs=False, no_norm_on_lateral=False, conv_cfg=None, norm_cfg=None, activation=None, init_cfg={'distribution': 'uniform', 'layer': 'Conv2d', 'type': 'Xavier'})[source]

ReFPN.

Parameters
  • in_channels (List[int]) – Number of input channels per scale.

  • out_channels (int) – Number of output channels (used at each scale)

  • num_outs (int) – Number of output scales.

  • start_level (int, optional) – Index of the start input backbone level used to build the feature pyramid. Default: 0.

  • end_level (int, optional) – Index of the end input backbone level (exclusive) to build the feature pyramid. Default: -1, which means the last level.

  • add_extra_convs (bool, optional) – It decides whether to add conv layers on top of the original feature maps. Default to False.

  • extra_convs_on_inputs (bool, optional) – It specifies whether the source feature map of the extra convs is the last feature map of the neck inputs.

  • relu_before_extra_convs (bool) – Whether to apply relu before the extra conv. Default: False.

  • no_norm_on_lateral (bool) – Whether to apply norm on lateral. Default: False.

  • conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.

  • activation (str, optional) – Activation layer in ConvModule. Default: None.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

forward(inputs)[source]

Forward function of ReFPN.

dense_heads

class mmrotate.models.dense_heads.CSLRRetinaHead(use_encoded_angle=True, shield_reg_angle=False, angle_coder={'angle_version': 'le90', 'omega': 1, 'radius': 6, 'type': 'CSLCoder', 'window': 'gaussian'}, loss_angle={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': True}, init_cfg={'layer': 'Conv2d', 'override': [{'type': 'Normal', 'name': 'retina_cls', 'std': 0.01, 'bias_prob': 0.01}, {'type': 'Normal', 'name': 'retina_angle_cls', 'std': 0.01, 'bias_prob': 0.01}], 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotational Anchor-based refine head.

Parameters
  • use_encoded_angle (bool) – Decide whether to use encoded angle or gt angle as target. Default: True.

  • shield_reg_angle (bool) – Decide whether to shield the angle loss from reg branch. Default: False.

  • angle_coder (dict) – Config of angle coder.

  • loss_angle (dict) – Config of angle classification loss.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

forward_single(x)[source]

Forward feature of a single scale level.

Parameters

x (torch.Tensor) – Features of a single scale level.

Returns

  • cls_score (torch.Tensor): Cls scores for a single scale level; the channel number is num_anchors * num_classes.

  • bbox_pred (torch.Tensor): Box energies / deltas for a single scale level; the channel number is num_anchors * 5.

  • angle_cls (torch.Tensor): Angle classification scores for a single scale level; the channel number is num_anchors * coding_len.

Return type

tuple (torch.Tensor)

get_bboxes(cls_scores, bbox_preds, angle_clses, img_metas, cfg=None, rescale=False, with_nms=True)[source]

Transform network output for a batch into bbox predictions.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • angle_clses (list[Tensor]) – Box angles for each scale level with shape (N, num_anchors * coding_len, H, W)

  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.

  • cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used

  • rescale (bool) – If True, return boxes in original image space. Default: False.

  • with_nms (bool) – If True, do nms before return boxes. Default: True.

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (cx, cy, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # Note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
loss(cls_scores, bbox_preds, angle_clses, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • angle_clses (list[Tensor]) – Box angles for each scale level with shape (N, num_anchors * coding_len, H, W)

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

loss_single(cls_score, bbox_pred, angle_cls, anchors, labels, label_weights, bbox_targets, bbox_weights, angle_targets, angle_weights, num_total_samples)[source]

Compute loss of a single scale level.

Parameters
  • cls_score (torch.Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).

  • bbox_pred (torch.Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W).

  • angle_cls (torch.Tensor) – Angle classification scores for each scale level with shape (N, num_anchors * coding_len, H, W).

  • anchors (torch.Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 5).

  • labels (torch.Tensor) – Labels of each anchors with shape (N, num_total_anchors).

  • label_weights (torch.Tensor) – Label weights of each anchor with shape (N, num_total_anchors)

  • bbox_targets (torch.Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 5).

  • bbox_weights (torch.Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 5).

  • angle_targets (torch.Tensor) – Angle classification targets of each anchor with shape (N, num_total_anchors, coding_len).

  • angle_weights (torch.Tensor) – Angle classification loss weights of each anchor with shape (N, num_total_anchors, 1).

  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.

Returns

  • loss_cls (torch.Tensor): cls. loss for each scale level.

  • loss_bbox (torch.Tensor): reg. loss for each scale level.

  • loss_angle (torch.Tensor): angle cls. loss for each scale level.

Return type

tuple (torch.Tensor)

class mmrotate.models.dense_heads.KFIoUODMRefineHead(num_classes, in_channels, stacked_convs=2, conv_cfg=None, norm_cfg=None, anchor_generator={'strides': [8, 16, 32, 64, 128], 'type': 'PseudoAnchorGenerator'}, init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'odm_cls', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotated Anchor-based refine head for KFIoU. It’s a part of the Oriented Detection Module (ODM), which produces orientation-sensitive features for classification and orientation-invariant features for localization. The difference from ODMRefineHead is that its loss_bbox requires bbox_pred, bbox_targets, pred_decode and targets_decode as inputs.

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • feat_channels (int) – Number of hidden channels. Used in child classes.

  • anchor_generator (dict) – Config dict for anchor generator

  • bbox_coder (dict) – Config of bounding box coder.

  • reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False

  • background_label (int | None) – Label ID of background, set as 0 for RPN and num_classes for other heads. It will be automatically set to num_classes if None is given.

  • loss_cls (dict) – Config of classification loss.

  • loss_bbox (dict) – Config of localization loss.

  • train_cfg (dict) – Training config of anchor head.

  • test_cfg (dict) – Testing config of anchor head.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

forward_single(x)[source]

Forward feature of a single scale level.

Parameters

x (torch.Tensor) – Features of a single scale level.

Returns

  • cls_score (torch.Tensor): Cls scores for a single scale level; the channel number is num_anchors * num_classes.

  • bbox_pred (torch.Tensor): Box energies / deltas for a single scale level; the channel number is num_anchors * 5.

Return type

tuple (torch.Tensor)

get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • bboxes_as_anchors (list[list[Tensor]]) – Bboxes of the previous stage, used as anchors before further regression.

  • device (torch.device | str) – Device for returned tensors

Returns

  • anchor_list (list[Tensor]): Anchors of each image

  • valid_flag_list (list[Tensor]): Valid flags of each image

Return type

tuple

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False, rois=None)[source]

Transform network output for a batch into labeled boxes.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • img_metas (list[dict]) – size / scale info for each image

  • cfg (mmcv.Config) – test / postprocessing configuration

  • rescale (bool) – if True, return boxes in original image space

  • rois (list[list[Tensor]]) – input rbboxes of each level of each image. rois output by former stages and are to be refined.

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (xc, yc, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, rois=None, gt_bboxes_ignore=None)[source]

Loss function of KFIoUODMRefineHead.

class mmrotate.models.dense_heads.KFIoURRetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'retina_cls', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotated Anchor-based head for KFIoU. The difference from RRetinaHead is that its loss_bbox requires bbox_pred, bbox_targets, pred_decode and targets_decode as inputs.

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.

  • anchor_generator (dict) – Config dict for anchor generator

  • init_cfg (dict or list[dict], optional) – Initialization config dict.
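
Example (a hedged config fragment pairing this head with the KFIoU regression loss; the KFLoss options shown follow the KFIoU configs shipped with mmrotate, but the exact values here are illustrative)

>>> # illustrative only; see the kfiou configs in mmrotate for
>>> # verified settings
>>> bbox_head = dict(
...     type='KFIoURRetinaHead',
...     num_classes=15,
...     in_channels=256,
...     loss_bbox=dict(type='KFLoss', fun='ln', loss_weight=5.0))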

loss_single(cls_score, bbox_pred, anchors, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]

Compute loss of a single scale level.

Parameters
  • cls_score (torch.Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).

  • bbox_pred (torch.Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W).

  • anchors (torch.Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 5).

  • labels (torch.Tensor) – Labels of each anchors with shape (N, num_total_anchors).

  • label_weights (torch.Tensor) – Label weights of each anchor with shape (N, num_total_anchors)

  • bbox_targets (torch.Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 5).

  • bbox_weights (torch.Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 5).

  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.

Returns

  • loss_cls (torch.Tensor): cls. loss for each scale level.

  • loss_bbox (torch.Tensor): reg. loss for each scale level.

Return type

tuple (torch.Tensor)

class mmrotate.models.dense_heads.KFIoURRetinaRefineHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'strides': [8, 16, 32, 64, 128], 'type': 'PseudoAnchorGenerator'}, bbox_coder={'target_means': (0.0, 0.0, 0.0, 0.0, 0.0), 'target_stds': (1.0, 1.0, 1.0, 1.0, 1.0), 'type': 'DeltaXYWHABBoxCoder'}, init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'retina_cls', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotational Anchor-based refine head. The difference from RRetinaRefineHead is that its loss_bbox requires bbox_pred, bbox_targets, pred_decode and targets_decode as inputs.

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.

  • anchor_generator (dict) – Config dict for anchor generator

  • bbox_coder (dict) – Config of bounding box coder.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • bboxes_as_anchors (list[list[Tensor]]) – Bboxes of the previous stage, used as anchors before further regression.

  • device (torch.device | str) – Device for returned tensors

Returns

  • anchor_list (list[Tensor]): Anchors of each image

  • valid_flag_list (list[Tensor]): Valid flags of each image

Return type

tuple (list[Tensor])

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False, rois=None)[source]

Transform network output for a batch into labeled boxes.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • img_metas (list[dict]) – size / scale info for each image

  • cfg (mmcv.Config) – test / postprocessing configuration

  • rois (list[list[Tensor]]) – input rbboxes of each level of each image. rois output by former stages and are to be refined

  • rescale (bool) – if True, return boxes in original image space

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (xc, yc, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, rois=None, gt_bboxes_ignore=None)[source]

Loss function of KFIoURRetinaRefineHead.

refine_bboxes(cls_scores, bbox_preds, rois)[source]

Refine predicted bounding boxes at each position of the feature maps. This method will be used in R3Det in refinement stages.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, 5, H, W)

  • rois (list[list[Tensor]]) – input rbboxes of each level of each image. rois output by former stages and are to be refined

Returns

best or refined rbboxes of each level of each image.

Return type

list[list[Tensor]]

class mmrotate.models.dense_heads.ODMRefineHead(num_classes, in_channels, stacked_convs=2, conv_cfg=None, norm_cfg=None, anchor_generator={'strides': [8, 16, 32, 64, 128], 'type': 'PseudoAnchorGenerator'}, init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'odm_cls', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotated Anchor-based refine head. It’s a part of the Oriented Detection Module (ODM), which produces orientation-sensitive features for classification and orientation-invariant features for localization.

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.

  • anchor_generator (dict) – Config dict for anchor generator

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

forward_single(x)[source]

Forward feature of a single scale level.

Parameters

x (torch.Tensor) – Features of a single scale level.

Returns

  • cls_score (torch.Tensor): Cls scores for a single scale level; the channel number is num_anchors * num_classes.

  • bbox_pred (torch.Tensor): Box energies / deltas for a single scale level; the channel number is num_anchors * 5.

Return type

tuple (torch.Tensor)

get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • bboxes_as_anchors (list[list[Tensor]]) – Bboxes of the previous stage, used as anchors before further regression.

  • device (torch.device | str) – Device for returned tensors

Returns

  • anchor_list (list[Tensor]): Anchors of each image

  • valid_flag_list (list[Tensor]): Valid flags of each image

Return type

tuple (list[Tensor])

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False, rois=None)[source]

Transform network output for a batch into labeled boxes.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • img_metas (list[dict]) – size / scale info for each image

  • cfg (mmcv.Config) – test / postprocessing configuration

  • rois (list[list[Tensor]]) – input rbboxes of each level of each image; rois output by former stages and are to be refined.

  • rescale (bool) – if True, return boxes in original image space

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (xc, yc, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, rois=None, gt_bboxes_ignore=None)[source]

Loss function of ODMRefineHead.

class mmrotate.models.dense_heads.OrientedRPNHead(in_channels, init_cfg={'layer': 'Conv2d', 'std': 0.01, 'type': 'Normal'}, version='oc', **kwargs)[source]

Oriented RPN head for Oriented R-CNN.

loss_single(cls_score, bbox_pred, anchors, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]

Compute loss of a single scale level.

Parameters
  • cls_score (torch.Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).

  • bbox_pred (torch.Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W).

  • anchors (torch.Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).

  • labels (torch.Tensor) – Labels of each anchors with shape (N, num_total_anchors).

  • label_weights (torch.Tensor) – Label weights of each anchor with shape (N, num_total_anchors)

  • bbox_targets (torch.Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).

  • bbox_weights (torch.Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 4).

  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.

Returns

  • loss_cls (torch.Tensor): cls. loss for each scale level.

  • loss_bbox (torch.Tensor): reg. loss for each scale level.

Return type

tuple (torch.Tensor)

class mmrotate.models.dense_heads.RotatedAnchorHead(num_classes, in_channels, feat_channels=256, anchor_generator={'octave_base_scale': 4, 'ratios': [1.0, 0.5, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'RotatedAnchorGenerator'}, bbox_coder={'target_means': (0.0, 0.0, 0.0, 0.0, 0.0), 'target_stds': (1.0, 1.0, 1.0, 1.0, 1.0), 'type': 'DeltaXYWHAOBBoxCoder'}, reg_decoded_bbox=False, assign_by_circumhbbox='oc', loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox={'loss_weight': 1.0, 'type': 'L1Loss'}, train_cfg=None, test_cfg=None, init_cfg={'layer': 'Conv2d', 'std': 0.01, 'type': 'Normal'})[source]

Rotated Anchor-based head (RotatedRPN, RotatedRetinaNet, etc.).

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • feat_channels (int) – Number of hidden channels. Used in child classes.

  • anchor_generator (dict) – Config dict for anchor generator

  • bbox_coder (dict) – Config of bounding box coder.

  • reg_decoded_bbox (bool) – If true, the regression loss would be applied on decoded bounding boxes. Default: False

  • assign_by_circumhbbox (str) – If None, the assigner will assign according to the IoU between anchor and GT (OBB), called RetinaNet-OBB. If set to an angle definition method (e.g. 'oc'), the assigner will assign according to the IoU between anchor and the GT's circumscribed horizontal box (HBB), called RetinaNet-HBB.

  • loss_cls (dict) – Config of classification loss.

  • loss_bbox (dict) – Config of localization loss.

  • train_cfg (dict) – Training config of anchor head.

  • test_cfg (dict) – Testing config of anchor head.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

aug_test(feats, img_metas, rescale=False)[source]

Test det bboxes with test-time augmentation. Can be applied to DenseHead except for RPNHead and its variants, e.g., GARPNHead, etc.

Parameters
  • feats (list[Tensor]) – the outer list indicates test-time augmentations and inner Tensor should have a shape NxCxHxW, which contains features for all images in the batch.

  • img_metas (list[list[dict]]) – the outer list indicates test-time augs (multiscale, flip, etc.) and the inner list indicates images in a batch. each dict has image information.

  • rescale (bool, optional) – Whether to rescale the results. Defaults to False.

Returns

Each item in result_list is a 2-tuple.

The first item is bboxes with shape (n, 6), where the 6 columns represent (x, y, w, h, a, score). The second item is labels with shape (n,). The length of the list should always be 1.

Return type

list[tuple[Tensor, Tensor]]

forward(feats)[source]

Forward features from the upstream network.

Parameters

feats (tuple[Tensor]) – Features from the upstream network, each is a 4D-tensor.

Returns

A tuple of classification scores and bbox prediction.

  • cls_scores (list[Tensor]): Classification scores for all scale levels, each is a 4D-tensor; the channel number is num_anchors * num_classes.

  • bbox_preds (list[Tensor]): Box energies / deltas for all scale levels, each is a 4D-tensor; the channel number is num_anchors * 5.

Return type

tuple
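
Example (an illustrative sketch; the five feature levels match the default strides [8, 16, 32, 64, 128], and num_classes/in_channels are assumed values)

>>> import torch
>>> from mmrotate.models.dense_heads import RotatedAnchorHead
>>> self = RotatedAnchorHead(num_classes=15, in_channels=256)
>>> feats = [torch.rand(1, 256, s, s) for s in (64, 32, 16, 8, 4)]
>>> cls_scores, bbox_preds = self.forward(feats)
>>> assert len(cls_scores) == len(bbox_preds) == 5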

forward_single(x)[source]

Forward feature of a single scale level.

Parameters

x (torch.Tensor) – Features of a single scale level.

Returns

  • cls_score (torch.Tensor): Cls scores for a single scale level; the channel number is num_anchors * num_classes.

  • bbox_pred (torch.Tensor): Box energies / deltas for a single scale level; the channel number is num_anchors * 5.

Return type

tuple (torch.Tensor)

get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • device (torch.device | str) – Device for returned tensors

Returns

  • anchor_list (list[Tensor]): Anchors of each image.

  • valid_flag_list (list[Tensor]): Valid flags of each image.

Return type

tuple (list[Tensor])
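
Example (an illustrative sketch; it assumes img_metas only needs the 'pad_shape' key for valid-flag computation)

>>> from mmrotate.models.dense_heads import RotatedAnchorHead
>>> self = RotatedAnchorHead(num_classes=15, in_channels=256)
>>> # one (H, W) size per stride level
>>> featmap_sizes = [(64, 64), (32, 32), (16, 16), (8, 8), (4, 4)]
>>> img_metas = [{'pad_shape': (512, 512, 3)}]
>>> anchor_list, valid_flag_list = self.get_anchors(
...     featmap_sizes, img_metas, device='cpu')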

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False, with_nms=True)[source]

Transform network output for a batch into bbox predictions.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.

  • cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used

  • rescale (bool) – If True, return boxes in original image space. Default: False.

  • with_nms (bool) – If True, do nms before return boxes. Default: True.

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (cx, cy, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

Example

>>> import mmcv
>>> import torch
>>> self = AnchorHead(
>>>     num_classes=9,
>>>     in_channels=1,
>>>     anchor_generator=dict(
>>>         type='AnchorGenerator',
>>>         scales=[8],
>>>         ratios=[0.5, 1.0, 2.0],
>>>         strides=[4,]))
>>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}]
>>> cfg = mmcv.Config(dict(
>>>     score_thr=0.00,
>>>     nms=dict(type='nms', iou_thr=1.0),
>>>     max_per_img=10))
>>> feat = torch.rand(1, 1, 3, 3)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # note the input lists are over different levels, not images
>>> cls_scores, bbox_preds = [cls_score], [bbox_pred]
>>> result_list = self.get_bboxes(cls_scores, bbox_preds,
>>>                               img_metas, cfg)
>>> det_bboxes, det_labels = result_list[0]
>>> assert len(result_list) == 1
>>> assert det_bboxes.shape[1] == 5
>>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img
get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True, return_sampling_results=False)[source]

Compute regression and classification targets for anchors in multiple images.

Parameters
  • anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, 5).

  • valid_flag_list (list[list[Tensor]]) – Multi level valid flags of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, )

  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.

  • img_metas (list[dict]) – Meta info of each image.

  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.

  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.

  • label_channels (int) – Channel of label.

  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.

Returns

Usually returns a tuple containing learning targets.

  • labels_list (list[Tensor]): Labels of each level.

  • label_weights_list (list[Tensor]): Label weights of each level.

  • bbox_targets_list (list[Tensor]): BBox targets of each level.

  • bbox_weights_list (list[Tensor]): BBox weights of each level.

  • num_total_pos (int): Number of positive samples in all images.

  • num_total_neg (int): Number of negative samples in all images.

additional_returns: This function enables user-defined returns from self._get_targets_single. These returns are currently refined to properties at each feature map (i.e. having HxW dimension). The results will be concatenated at the end.

Return type

tuple

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

loss_single(cls_score, bbox_pred, anchors, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]

Compute loss of a single scale level.

Parameters
  • cls_score (torch.Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).

  • bbox_pred (torch.Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W).

  • anchors (torch.Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 5).

  • labels (torch.Tensor) – Labels of each anchors with shape (N, num_total_anchors).

  • label_weights (torch.Tensor) – Label weights of each anchor with shape (N, num_total_anchors)

  • bbox_targets (torch.Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 5).

  • bbox_weights (torch.Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 5).

  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.

Returns

  • loss_cls (torch.Tensor): cls. loss for each scale level.

  • loss_bbox (torch.Tensor): reg. loss for each scale level.

Return type

tuple (torch.Tensor)

merge_aug_bboxes(aug_bboxes, aug_scores, img_metas)[source]

Merge augmented detection bboxes and scores.

Parameters
  • aug_bboxes (list[Tensor]) – shape (n, 4*#class)

  • aug_scores (list[Tensor] or None) – shape (n, #class)

  • img_metas (list[dict]) – Meta info of each image, e.g., image shape, scale factor, flip, etc.

Returns

bboxes with shape (n, 5), where the 5 columns represent (x, y, w, h, a), and scores with shape (n,).

Return type

tuple[Tensor]

class mmrotate.models.dense_heads.RotatedRPNHead(in_channels, init_cfg={'layer': 'Conv2d', 'std': 0.01, 'type': 'Normal'}, version='oc', **kwargs)[source]

Rotated RPN head for rotated bboxes.

Parameters
  • in_channels (int) – Number of channels in the input feature map.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.
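
Example (an illustrative sketch; in_channels and the feature shapes are assumed values)

>>> import torch
>>> from mmrotate.models.dense_heads import RotatedRPNHead
>>> self = RotatedRPNHead(in_channels=256)
>>> feats = [torch.rand(1, 256, s, s) for s in (64, 32, 16, 8, 4)]
>>> # class-agnostic objectness scores and box deltas per level
>>> rpn_cls_scores, rpn_bbox_preds = self.forward(feats)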

forward_single(x)[source]

Forward feature map of a single scale level.

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False, with_nms=True)[source]

Transform network output for a batch into bbox predictions.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.

  • cfg (mmcv.Config | None) – Test / postprocessing configuration, if None, test_cfg would be used

  • rescale (bool) – If True, return boxes in original image space. Default: False.

  • with_nms (bool) – If True, do nms before return boxes. Default: True.

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (cx, cy, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

get_targets(anchor_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, label_channels=1, unmap_outputs=True, return_sampling_results=False)[source]

Compute regression and classification targets for anchors in multiple images.

Parameters
  • anchor_list (list[list[Tensor]]) – Multi level anchors of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, 4).

  • valid_flag_list (list[list[Tensor]]) – Multi level valid flags of each image. The outer list indicates images, and the inner list corresponds to feature levels of the image. Each element of the inner list is a tensor of shape (num_anchors, )

  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.

  • img_metas (list[dict]) – Meta info of each image.

  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.

  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.

  • label_channels (int) – Channel of label.

  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.

Returns

Usually returns a tuple containing learning targets.

  • labels_list (list[Tensor]): Labels of each level.

  • label_weights_list (list[Tensor]): Label weights of each level.

  • bbox_targets_list (list[Tensor]): BBox targets of each level.

  • bbox_weights_list (list[Tensor]): BBox weights of each level.

  • num_total_pos (int): Number of positive samples in all images.

  • num_total_neg (int): Number of negative samples in all images.

additional_returns: This function enables user-defined returns from self._get_targets_single. These returns are currently refined to properties at each feature map (i.e. having HxW dimension). The results will be concatenated at the end.

Return type

tuple

loss(cls_scores, bbox_preds, gt_bboxes, img_metas, gt_bboxes_ignore=None)[source]

Compute losses of the head.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • img_metas (list[dict]) – Meta information of each image, e.g., image size, scaling factor, etc.

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss. Default: None

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

loss_single(cls_score, bbox_pred, anchors, labels, label_weights, bbox_targets, bbox_weights, num_total_samples)[source]

Compute loss of a single scale level.

Parameters
  • cls_score (torch.Tensor) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W).

  • bbox_pred (torch.Tensor) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W).

  • anchors (torch.Tensor) – Box reference for each scale level with shape (N, num_total_anchors, 4).

  • labels (torch.Tensor) – Labels of each anchors with shape (N, num_total_anchors).

  • label_weights (torch.Tensor) – Label weights of each anchor with shape (N, num_total_anchors)

  • bbox_targets (torch.Tensor) – BBox regression targets of each anchor with shape (N, num_total_anchors, 4).

  • bbox_weights (torch.Tensor) – BBox regression loss weights of each anchor with shape (N, num_total_anchors, 4).

  • num_total_samples (int) – If sampling, num total samples equal to the number of total anchors; Otherwise, it is the number of positive anchors.

Returns

A dictionary of loss components.

Return type

dict[str, Tensor]

class mmrotate.models.dense_heads.RotatedRepPointsHead(num_classes, in_channels, feat_channels, point_feat_channels=256, stacked_convs=3, num_points=9, gradient_mul=0.1, point_strides=[8, 16, 32, 64, 128], point_base_scale=4, conv_bias='auto', loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox_init={'beta': 0.1111111111111111, 'loss_weight': 0.5, 'type': 'SmoothL1Loss'}, loss_bbox_refine={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, conv_cfg=None, norm_cfg=None, train_cfg=None, test_cfg=None, center_init=True, transform_method='rotrect', use_reassign=False, topk=6, anti_factor=0.75, version='oc', init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'reppoints_cls_out', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotated RepPoints head.

Parameters
  • num_classes (int) – Number of classes.

  • in_channels (int) – Number of input channels.

  • feat_channels (int) – Number of feature channels.

  • point_feat_channels (int, optional) – Number of channels of points features.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • num_points (int, optional) – Number of points in points set.

  • gradient_mul (float, optional) – The multiplier to gradients from points refinement and recognition.

  • point_strides (Iterable, optional) – points strides.

  • point_base_scale (int, optional) – Bbox scale for assigning labels.

  • conv_bias (str, optional) – The bias of convolution.

  • loss_cls (dict, optional) – Config of classification loss.

  • loss_bbox_init (dict, optional) – Config of initial points loss.

  • loss_bbox_refine (dict, optional) – Config of points loss in refinement.

  • conv_cfg (dict, optional) – The config of convolution.

  • norm_cfg (dict, optional) – The config of normalization.

  • train_cfg (dict, optional) – The config of train.

  • test_cfg (dict, optional) – The config of test.

  • center_init (bool, optional) – Whether to use center point assignment.

  • transform_method (str, optional) – The methods to transform RepPoints to bbox.

  • use_reassign (bool, optional) – Whether to reassign samples.

  • topk (int, optional) – Number of the highest topk points. Defaults to 6.

  • anti_factor (float, optional) – Feature anti-aliasing coefficient.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

forward(feats)[source]

Forward function.

forward_single(x)[source]

Forward feature map of a single FPN level.

get_bboxes(cls_scores, pts_preds_init, pts_preds_refine, img_metas, cfg=None, rescale=False, with_nms=True, **kwargs)[source]

Transform network outputs of a batch into bbox results.

Parameters
  • cls_scores (list[Tensor]) – Classification scores for all scale levels, each is a 4D-tensor, has shape (batch_size, num_priors * num_classes, H, W).

  • pts_preds_init (list[Tensor]) – Box energies / deltas for all scale levels, each is a 4D-tensor with shape (batch_size, num_points * 2, H, W).

  • pts_preds_refine (list[Tensor]) – Box energies / deltas for all scale levels, each is a 4D-tensor with shape (batch_size, num_points * 2, H, W).

  • img_metas (list[dict], Optional) – Image meta info. Default None.

  • cfg (mmcv.Config, Optional) – Test / postprocessing configuration, if None, test_cfg would be used. Default None.

  • rescale (bool) – If True, return boxes in original image space. Default False.

  • with_nms (bool) – If True, do nms before return boxes. Default True.

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (cx, cy, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

get_cfa_targets(proposals_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, stage='init', label_channels=1, unmap_outputs=True)[source]

Compute corresponding GT box and classification targets for proposals.

Parameters
  • proposals_list (list[list]) – Multi level points/bboxes of each image.

  • valid_flag_list (list[list]) – Multi level valid flags of each image.

  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.

  • img_metas (list[dict]) – Meta info of each image.

  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.

  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.

  • stage (str) – 'init' or 'refine'. Generate targets for the init or refine stage.

  • label_channels (int) – Channel of label.

  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.

Returns

  • all_labels (list[Tensor]): Labels of each level.

  • all_label_weights (list[Tensor]): Label weights of each level.

  • all_bbox_gt (list[Tensor]): Ground truth bbox of each level.

  • all_proposals (list[Tensor]): Proposals(points/bboxes) of each level.

  • all_proposal_weights (list[Tensor]): Proposal weights of each level.

  • pos_inds (list[Tensor]): Index of positive samples in all images.

  • gt_inds (list[Tensor]): Index of ground truth bbox in all images.

Return type

tuple

get_points(featmap_sizes, img_metas, device)[source]

Get points according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • device (torch.device | str) – Device for returned tensors.

Returns

points of each image, valid flags of each image

Return type

tuple

get_pos_loss(cls_score, pts_pred, label, bbox_gt, label_weight, convex_weight, pos_inds)[source]

Calculate the loss of all potential positive samples obtained from the first matching process.

Parameters
  • cls_score (Tensor) – Box scores of single image with shape (num_anchors, num_classes)

  • pts_pred (Tensor) – Box energies / deltas of single image with shape (num_anchors, 4)

  • label (Tensor) – classification target of each anchor with shape (num_anchors,)

  • bbox_gt (Tensor) – Ground truth box.

  • label_weight (Tensor) – Classification loss weight of each anchor with shape (num_anchors).

  • convex_weight (Tensor) – Bbox weight of each anchor with shape (num_anchors, 4).

  • pos_inds (Tensor) – Indices of all positive samples obtained from the first assignment process.

Returns

Losses of all positive samples in single image.

Return type

Tensor

get_targets(proposals_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, stage='init', label_channels=1, unmap_outputs=True)[source]

Compute corresponding GT box and classification targets for proposals.

Parameters
  • proposals_list (list[list]) – Multi level points/bboxes of each image.

  • valid_flag_list (list[list]) – Multi level valid flags of each image.

  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.

  • img_metas (list[dict]) – Meta info of each image.

  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.

  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.

  • stage (str) – 'init' or 'refine'. Generate targets for the init or refine stage.

  • label_channels (int) – Channel of label.

  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.

Returns

  • labels_list (list[Tensor]): Labels of each level.

  • label_weights_list (list[Tensor]): Label weights of each level.

  • bbox_gt_list (list[Tensor]): Ground truth bbox of each level.

  • proposal_list (list[Tensor]): Proposals(points/bboxes) of each level.

  • proposal_weights_list (list[Tensor]): Proposal weights of each level.

  • num_total_pos (int): Number of positive samples in all images.

  • num_total_neg (int): Number of negative samples in all images.

Return type

tuple (list[Tensor])

loss(cls_scores, pts_preds_init, pts_preds_refine, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Loss function of RotatedRepPointsHead.

loss_single(cls_score, pts_pred_init, pts_pred_refine, labels, label_weights, rbbox_gt_init, convex_weights_init, rbbox_gt_refine, convex_weights_refine, stride, num_total_samples_refine)[source]

Single loss function.

offset_to_pts(center_list, pred_list)[source]

Change from point offset to point coordinate.

points2rotrect(pts, y_first=True)[source]

Convert points to oriented bboxes.
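
Example (an illustrative construction sketch; argument values are assumptions. The forward pass relies on deformable convolutions, so running it typically requires an mmcv build with DeformConv support)

>>> from mmrotate.models.dense_heads import RotatedRepPointsHead
>>> self = RotatedRepPointsHead(
...     num_classes=15, in_channels=256, feat_channels=256)
>>> # with the default num_points=9, each point-set prediction has
>>> # num_points * 2 = 18 offset channels per location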

reassign(pos_losses, label, label_weight, pts_pred_init, convex_weight, gt_bbox, pos_inds, pos_gt_inds, num_proposals_each_level=None, num_level=None)[source]

CFA reassign process.

Parameters
  • pos_losses (Tensor) – Losses of all positive samples in single image.

  • label (Tensor) – classification target of each anchor with shape (num_anchors,)

  • label_weight (Tensor) – Classification loss weight of each anchor with shape (num_anchors).

  • pts_pred_init (Tensor) –

  • convex_weight (Tensor) – Bbox weight of each anchor with shape (num_anchors, 4).

  • gt_bbox (Tensor) – Ground truth box.

  • pos_inds (Tensor) – Indices of all positive samples obtained from the first assignment process.

  • pos_gt_inds (Tensor) – Gt indices of all positive samples obtained from the first assignment process.

  • num_proposals_each_level (list, optional) – Number of proposals of each level.

  • num_level (int, optional) – Number of level.

Returns

Usually returns a tuple containing learning targets.

  • label (Tensor): classification target of each anchor after reassignment, with shape (num_anchors,).

  • label_weight (Tensor): Classification loss weight of each anchor after reassignment, with shape (num_anchors,).

  • convex_weight (Tensor): Bbox weight of each anchor with shape (num_anchors, 4).

  • pos_normalize_term (list): pos normalize term for refine points losses.

Return type

tuple

class mmrotate.models.dense_heads.RotatedRetinaHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'octave_base_scale': 4, 'ratios': [0.5, 1.0, 2.0], 'scales_per_octave': 3, 'strides': [8, 16, 32, 64, 128], 'type': 'AnchorGenerator'}, init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'retina_cls', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

An anchor-based head used in RotatedRetinaNet.

The head contains two subnetworks. The first classifies anchor boxes and the second regresses deltas for the anchors.

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.

  • anchor_generator (dict) – Config dict for anchor generator

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

filter_bboxes(cls_scores, bbox_preds)[source]

Filter predicted bounding boxes at each position of the feature maps. Only the bounding box with the highest score will be kept at each position. This filter is used in R3Det prior to the first feature refinement stage.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

Returns

best or refined rbboxes of each level of each image.

Return type

list[list[Tensor]]

forward_single(x)[source]

Forward feature of a single scale level.

Parameters

x (torch.Tensor) – Features of a single scale level.

Returns

  • cls_score (torch.Tensor): Cls scores for a single scale level; the channel number is num_anchors * num_classes.

  • bbox_pred (torch.Tensor): Box energies / deltas for a single scale level; the channel number is num_anchors * 5.

Return type

tuple (torch.Tensor)

refine_bboxes(cls_scores, bbox_preds)[source]

Refine predicted bounding boxes at each position of the feature maps. This method is used in S2ANet, where num_anchors=1.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, 5, H, W)

Returns

refined rbboxes of each level of each image.

Return type

list[list[Tensor]]
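
Example (an illustrative sketch; with the default anchor generator there are 3 scales x 3 ratios = 9 anchors per location, and num_classes/in_channels are assumed values)

>>> import torch
>>> from mmrotate.models.dense_heads import RotatedRetinaHead
>>> self = RotatedRetinaHead(num_classes=15, in_channels=256)
>>> feat = torch.rand(1, 256, 32, 32)
>>> cls_score, bbox_pred = self.forward_single(feat)
>>> # cls_score has 9 * 15 channels, bbox_pred has 9 * 5 channels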

class mmrotate.models.dense_heads.RotatedRetinaRefineHead(num_classes, in_channels, stacked_convs=4, conv_cfg=None, norm_cfg=None, anchor_generator={'strides': [8, 16, 32, 64, 128], 'type': 'PseudoAnchorGenerator'}, bbox_coder={'target_means': (0.0, 0.0, 0.0, 0.0, 0.0), 'target_stds': (1.0, 1.0, 1.0, 1.0, 1.0), 'type': 'DeltaXYWHABBoxCoder'}, init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'retina_cls', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotated Anchor-based refine head.

Parameters
  • num_classes (int) – Number of categories excluding the background category.

  • in_channels (int) – Number of channels in the input feature map.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • conv_cfg (dict, optional) – Config dict for convolution layer. Default: None.

  • norm_cfg (dict, optional) – Config dict for normalization layer. Default: None.

  • anchor_generator (dict) – Config dict for anchor generator

  • bbox_coder (dict) – Config of bounding box coder.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

get_anchors(featmap_sizes, img_metas, device='cuda')[source]

Get anchors according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • bboxes_as_anchors (list[list[Tensor]]) – Bboxes of the previous stage, used as anchors before further regression.

  • device (torch.device | str) – Device for returned tensors

Returns

  • anchor_list (list[Tensor]): Anchors of each image

  • valid_flag_list (list[Tensor]): Valid flags of each image

Return type

tuple (list[Tensor])

get_bboxes(cls_scores, bbox_preds, img_metas, cfg=None, rescale=False, rois=None)[source]

Transform network output for a batch into labeled boxes.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_anchors * num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, num_anchors * 5, H, W)

  • img_metas (list[dict]) – size / scale info for each image

  • cfg (mmcv.Config) – test / postprocessing configuration

  • rois (list[list[Tensor]]) – input rbboxes of each level of each image. rois output by former stages and are to be refined

  • rescale (bool) – if True, return boxes in original image space

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (xc, yc, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the class index of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

loss(cls_scores, bbox_preds, gt_bboxes, gt_labels, img_metas, rois=None, gt_bboxes_ignore=None)[source]

Loss function of RotatedRetinaRefineHead.

refine_bboxes(cls_scores, bbox_preds, rois)[source]

Refine predicted bounding boxes at each position of the feature maps. This method will be used in R3Det in refinement stages.

Parameters
  • cls_scores (list[Tensor]) – Box scores for each scale level Has shape (N, num_classes, H, W)

  • bbox_preds (list[Tensor]) – Box energies / deltas for each scale level with shape (N, 5, H, W)

  • rois (list[list[Tensor]]) – input rbboxes of each level of each image. rois output by former stages and are to be refined

Returns

best or refined rbboxes of each level of each image.

Return type

list[list[Tensor]]

class mmrotate.models.dense_heads.SAMRepPointsHead(num_classes, in_channels, feat_channels, point_feat_channels=256, stacked_convs=3, num_points=9, gradient_mul=0.1, point_strides=[8, 16, 32, 64, 128], point_base_scale=4, conv_bias='auto', loss_cls={'alpha': 0.25, 'gamma': 2.0, 'loss_weight': 1.0, 'type': 'FocalLoss', 'use_sigmoid': True}, loss_bbox_init={'beta': 0.1111111111111111, 'loss_weight': 0.5, 'type': 'SmoothL1Loss'}, loss_bbox_refine={'beta': 0.1111111111111111, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, conv_cfg=None, norm_cfg=None, train_cfg=None, test_cfg=None, center_init=True, transform_method='rotrect', topk=6, anti_factor=0.75, version='oc', init_cfg={'layer': 'Conv2d', 'override': {'bias_prob': 0.01, 'name': 'reppoints_cls_out', 'std': 0.01, 'type': 'Normal'}, 'std': 0.01, 'type': 'Normal'}, **kwargs)[source]

Rotated RepPoints head for SASM.

Parameters
  • num_classes (int) – Number of classes.

  • in_channels (int) – Number of input channels.

  • feat_channels (int) – Number of feature channels.

  • point_feat_channels (int, optional) – Number of channels of points features.

  • stacked_convs (int, optional) – Number of stacked convolutions.

  • num_points (int, optional) – Number of points in points set.

  • gradient_mul (float, optional) – The multiplier to gradients from points refinement and recognition.

  • point_strides (Iterable, optional) – points strides.

  • point_base_scale (int, optional) – Bbox scale for assigning labels.

  • conv_bias (str, optional) – The bias of convolution.

  • loss_cls (dict, optional) – Config of classification loss.

  • loss_bbox_init (dict, optional) – Config of initial points loss.

  • loss_bbox_refine (dict, optional) – Config of points loss in refinement.

  • conv_cfg (dict, optional) – The config of convolution.

  • norm_cfg (dict, optional) – The config of normalization.

  • train_cfg (dict, optional) – The config of train.

  • test_cfg (dict, optional) – The config of test.

  • center_init (bool, optional) – Whether to use center point assignment.

  • transform_method (str, optional) – The methods to transform RepPoints to bbox.

  • topk (int, optional) – Number of the highest topk points. Defaults to 6.

  • anti_factor (float, optional) – Feature anti-aliasing coefficient.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

  • init_cfg (dict or list[dict], optional) – Initialization config dict.

forward(feats)[source]

Forward function.

forward_single(x)[source]

Forward feature map of a single FPN level.

get_bboxes(cls_scores, pts_preds_init, pts_preds_refine, img_metas, cfg=None, rescale=False, with_nms=True, **kwargs)[source]

Transform network outputs of a batch into bbox results.

Parameters
  • cls_scores (list[Tensor]) – Classification scores for all scale levels, each is a 4D-tensor, has shape (batch_size, num_priors * num_classes, H, W).

  • pts_preds_init (list[Tensor]) – Box energies / deltas for all scale levels, each is a 4D-tensor with shape (batch_size, num_points * 2, H, W).

  • pts_preds_refine (list[Tensor]) – Box energies / deltas for all scale levels, each is a 4D-tensor with shape (batch_size, num_points * 2, H, W).

  • img_metas (list[dict], Optional) – Image meta info. Default None.

  • cfg (mmcv.Config, Optional) – Test / postprocessing configuration, if None, test_cfg would be used. Default None.

  • rescale (bool) – If True, return boxes in original image space. Default False.

  • with_nms (bool) – If True, do nms before return boxes. Default True.

Returns

Each item in result_list is a 2-tuple.

The first item is an (n, 6) tensor, where the first 5 columns are bounding box positions (cx, cy, w, h, a) and the 6-th column is a score between 0 and 1. The second item is a (n,) tensor where each item is the predicted class label of the corresponding box.

Return type

list[tuple[Tensor, Tensor]]

get_points(featmap_sizes, img_metas, device)[source]

Get points according to feature map sizes.

Parameters
  • featmap_sizes (list[tuple]) – Multi-level feature map sizes.

  • img_metas (list[dict]) – Image meta info.

  • device (torch.device | str) – Device for returned tensors.

Returns

points of each image, valid flags of each image

Return type

tuple

get_targets(proposals_list, valid_flag_list, gt_bboxes_list, img_metas, gt_bboxes_ignore_list=None, gt_labels_list=None, stage='init', label_channels=1, unmap_outputs=True)[source]

Compute corresponding GT box and classification targets for proposals.

Parameters
  • proposals_list (list[list]) – Multi level points/bboxes of each image.

  • valid_flag_list (list[list]) – Multi level valid flags of each image.

  • gt_bboxes_list (list[Tensor]) – Ground truth bboxes of each image.

  • img_metas (list[dict]) – Meta info of each image.

  • gt_bboxes_ignore_list (list[Tensor]) – Ground truth bboxes to be ignored.

  • gt_labels_list (list[Tensor]) – Ground truth labels of each box.

  • stage (str) – 'init' or 'refine'. Generate targets for the init or refine stage.

  • label_channels (int) – Channel of label.

  • unmap_outputs (bool) – Whether to map outputs back to the original set of anchors.

Returns

  • labels_list (list[Tensor]): Labels of each level.

  • label_weights_list (list[Tensor]): Label weights of each level.

  • bbox_gt_list (list[Tensor]): Ground truth bbox of each level.

  • proposal_list (list[Tensor]): Proposals(points/bboxes) of each level.

  • proposal_weights_list (list[Tensor]): Proposal weights of each level.

  • num_total_pos (int): Number of positive samples in all images.

  • num_total_neg (int): Number of negative samples in all images.

Return type

tuple (list[Tensor])

loss(cls_scores, pts_preds_init, pts_preds_refine, gt_bboxes, gt_labels, img_metas, gt_bboxes_ignore=None)[source]

Loss function of SAM RepPoints head.

loss_single(cls_score, pts_pred_init, pts_pred_refine, labels, label_weights, rbbox_gt_init, convex_weights_init, sam_weights_init, rbbox_gt_refine, convex_weights_refine, sam_weights_refine, stride, num_total_samples_refine)[source]

Single loss function.

offset_to_pts(center_list, pred_list)[source]

Change from point offset to point coordinate.

points2rotrect(pts, y_first=True)[source]

Convert points to oriented bboxes.

roi_heads

class mmrotate.models.roi_heads.GVRatioRoIHead(bbox_roi_extractor=None, bbox_head=None, shared_head=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None, version='oc')[source]

Gliding vertex roi head including one bbox head.

forward_dummy(x, proposals)[source]

Dummy forward function.

Parameters
  • x (list[Tensors]) – list of multi-level img features.

  • proposals (list[Tensors]) – list of region proposals.

Returns

list of regions of interest.

Return type

list[Tensors]

simple_test_bboxes(x, img_metas, proposals, rcnn_test_cfg, rescale=False)[source]

Test only det bboxes without augmentation.

Parameters
  • x (tuple[Tensor]) – Feature maps of all scale levels.

  • img_metas (list[dict]) – Image meta info.

  • proposals (List[Tensor]) – Region proposals.

  • rcnn_test_cfg (ConfigDict) – test_cfg of R-CNN.

  • rescale (bool) – If True, return boxes in original image space. Default: False.

Returns

The first list contains the boxes of the corresponding image in a batch, each tensor has the shape (num_boxes, 6) and the last dimension 6 represents (cx, cy, w, h, a, score). Each Tensor in the second list is the labels with shape (num_boxes, ). The length of both lists should be equal to batch_size.

Return type

tuple[list[Tensor], list[Tensor]]

class mmrotate.models.roi_heads.OrientedStandardRoIHead(bbox_roi_extractor=None, bbox_head=None, shared_head=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None, version='oc')[source]

Oriented RCNN roi head including one bbox head.

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]
Parameters
  • x (list[Tensor]) – list of multi-level img features.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.

  • proposal_list (list[Tensor]) – list of region proposals.

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.

  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task. Always set to None.

Returns

a dictionary of loss components

Return type

dict[str, Tensor]

simple_test_bboxes(x, img_metas, proposals, rcnn_test_cfg, rescale=False)[source]

Test only det bboxes without augmentation.

Parameters
  • x (tuple[Tensor]) – Feature maps of all scale levels.

  • img_metas (list[dict]) – Image meta info.

  • proposals (List[Tensor]) – Region proposals.

  • rcnn_test_cfg (ConfigDict) – test_cfg of R-CNN.

  • rescale (bool) – If True, return boxes in original image space. Default: False.

Returns

The first list contains the boxes of the corresponding image in a batch, each tensor has the shape (num_boxes, 6) and the last dimension 6 represents (cx, cy, w, h, a, score). Each Tensor in the second list is the labels with shape (num_boxes, ). The length of both lists should be equal to batch_size.

Return type

tuple[list[Tensor], list[Tensor]]

class mmrotate.models.roi_heads.RoITransRoIHead(num_stages, stage_loss_weights, bbox_roi_extractor=None, bbox_head=None, shared_head=None, train_cfg=None, test_cfg=None, pretrained=None, version='oc', init_cfg=None)[source]

RoI Trans cascade roi head including one bbox head.

Parameters
  • num_stages (int) – number of cascade stages.

  • stage_loss_weights (list[float]) – loss weights of cascade stages.

  • bbox_roi_extractor (dict, optional) – Config of bbox_roi_extractor.

  • bbox_head (dict, optional) – Config of bbox_head.

  • shared_head (dict, optional) – Config of shared_head.

  • train_cfg (dict, optional) – Config of train.

  • test_cfg (dict, optional) – Config of test.

  • pretrained (str, optional) – Path of pretrained weight.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

  • init_cfg (dict, optional) – Config of initialization.
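
A hedged config sketch for a two-stage cascade (the per-stage sub-configs are elided, and every value shown is illustrative rather than the shipped default):

    # Illustrative config dict for building the head through the registry.
    roi_head = dict(
        type='RoITransRoIHead',
        num_stages=2,
        stage_loss_weights=[1.0, 1.0],
        version='oc',
        bbox_roi_extractor=[...],  # one extractor config per stage (elided)
        bbox_head=[...])           # one bbox_head config per stage (elided)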

aug_test(features, proposal_list, img_metas, rescale=False)[source]

Test with augmentations.

forward_dummy(x, proposals)[source]

Dummy forward function.

Parameters
  • x (list[Tensors]) – list of multi-level img features.

  • proposals (list[Tensors]) – list of region proposals.

Returns

list of regions of interest.

Return type

list[Tensors]

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]
Parameters
  • x (list[Tensor]) – list of multi-level img features.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.

  • proposal_list (list[Tensor]) – list of region proposals.

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.

  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task. Always set to None.

Returns

a dictionary of loss components

Return type

dict[str, Tensor]

init_assigner_sampler()[source]

Initialize assigner and sampler for each stage.

init_bbox_head(bbox_roi_extractor, bbox_head)[source]

Initialize box head and box roi extractor.

Parameters
  • bbox_roi_extractor (dict) – Config of box roi extractor.

  • bbox_head (dict) – Config of the box head.

simple_test(x, proposal_list, img_metas, rescale=False)[source]

Test without augmentation.

Parameters
  • x (list[Tensor]) – list of multi-level img features.

  • proposal_list (list[Tensors]) – list of region proposals.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’.

  • rescale (bool) – If True, return boxes in original image space. Default: False.

Returns

a dictionary of bbox_results.

Return type

dict[str, Tensor]

class mmrotate.models.roi_heads.RotatedBBoxHead(with_avg_pool=False, with_cls=True, with_reg=True, roi_feat_size=7, in_channels=256, num_classes=80, bbox_coder={'clip_border': True, 'target_means': [0.0, 0.0, 0.0, 0.0], 'target_stds': [0.1, 0.1, 0.2, 0.2], 'type': 'DeltaXYWHBBoxCoder'}, reg_class_agnostic=False, reg_decoded_bbox=False, reg_predictor_cfg={'type': 'Linear'}, cls_predictor_cfg={'type': 'Linear'}, loss_cls={'loss_weight': 1.0, 'type': 'CrossEntropyLoss', 'use_sigmoid': False}, loss_bbox={'beta': 1.0, 'loss_weight': 1.0, 'type': 'SmoothL1Loss'}, init_cfg=None)[source]

Simplest RoI head, with only two fc layers for classification and regression respectively.

Parameters
  • with_avg_pool (bool, optional) – If True, use avg_pool.

  • with_cls (bool, optional) – If True, use classification branch.

  • with_reg (bool, optional) – If True, use regression branch.

  • roi_feat_size (int, optional) – Size of RoI features.

  • in_channels (int, optional) – Input channels.

  • num_classes (int, optional) – Number of classes.

  • bbox_coder (dict, optional) – Config of bbox coder.

  • reg_class_agnostic (bool, optional) – If True, regression branch are class agnostic.

  • reg_decoded_bbox (bool, optional) – If True, regression branch use decoded bbox to compute loss.

  • reg_predictor_cfg (dict, optional) – Config of regression predictor.

  • cls_predictor_cfg (dict, optional) – Config of classification predictor.

  • loss_cls (dict, optional) – Config of classification loss.

  • loss_bbox (dict, optional) – Config of regression loss.

  • init_cfg (dict, optional) – Config of initialization.

property custom_accuracy

The custom accuracy.

property custom_activation

The custom activation.

property custom_cls_channels

The custom cls channels.

forward(x)[source]

Forward function of Rotated BBoxHead.

get_bboxes(rois, cls_score, bbox_pred, img_shape, scale_factor, rescale=False, cfg=None)[source]

Transform network output for a batch into bbox predictions.

Parameters
  • rois (torch.Tensor) – Boxes to be transformed. Has shape (num_boxes, 6), with the last dimension arranged as (batch_index, cx, cy, w, h, a).

  • cls_score (torch.Tensor) – Box scores, has shape (num_boxes, num_classes + 1).

  • bbox_pred (Tensor, optional) – Box energies / deltas, has shape (num_boxes, num_classes * 5).

  • img_shape (Sequence[int], optional) – Maximum bounds for boxes, specifies (H, W, C) or (H, W).

  • scale_factor (ndarray) – Scale factor of the image arrange as (w_scale, h_scale, w_scale, h_scale).

  • rescale (bool) – If True, return boxes in original image space. Default: False.

  • cfg (ConfigDict) – test_cfg of Bbox Head. Default: None.

Returns

First tensor is det_bboxes, has the shape (num_boxes, 6) and the last dimension 6 represents (cx, cy, w, h, a, score). Second tensor is the labels with shape (num_boxes, ).

Return type

tuple[Tensor, Tensor]

get_targets(sampling_results, gt_bboxes, gt_labels, rcnn_train_cfg, concat=True)[source]

Calculate the ground truth for all samples in a batch according to the sampling_results.

Almost the same as the implementation in bbox_head; we pass the additional parameters pos_inds_list and neg_inds_list to the _get_target_single function.

Parameters
  • sampling_results (List[SamplingResults]) – Assign results of all images in a batch after sampling.

  • gt_bboxes (list[Tensor]) – Gt_bboxes of all images in a batch, each tensor has shape (num_gt, 5), the last dimension 5 represents [cx, cy, w, h, a].

  • gt_labels (list[Tensor]) – Gt_labels of all images in a batch, each tensor has shape (num_gt,).

  • rcnn_train_cfg (ConfigDict) – train_cfg of R-CNN.

  • concat (bool) – Whether to concatenate the results of all the images in a single batch.

Returns

Ground truth for proposals in a batch, containing the following list of Tensors:

  • labels (list[Tensor],Tensor): Gt_labels for all proposals in a batch, each tensor in list has shape (num_proposals,) when concat=False, otherwise just a single tensor has shape (num_all_proposals,).

  • label_weights (list[Tensor]): Labels_weights for all proposals in a batch, each tensor in list has shape (num_proposals,) when concat=False, otherwise just a single tensor has shape (num_all_proposals,).

  • bbox_targets (list[Tensor],Tensor): Regression target for all proposals in a batch, each tensor in list has shape (num_proposals, 5) when concat=False, otherwise just a single tensor has shape (num_all_proposals, 5), the last dimension 5 represents [cx, cy, w, h, a].

  • bbox_weights (list[tensor],Tensor): Regression weights for all proposals in a batch, each tensor in list has shape (num_proposals, 5) when concat=False, otherwise just a single tensor has shape (num_all_proposals, 5).

Return type

Tuple[Tensor]

loss(cls_score, bbox_pred, rois, labels, label_weights, bbox_targets, bbox_weights, reduction_override=None)[source]

Loss function.

Parameters
  • cls_score (torch.Tensor) – Box scores, has shape (num_boxes, num_classes + 1).

  • bbox_pred (Tensor, optional) – Box energies / deltas, has shape (num_boxes, num_classes * 5).

  • rois (torch.Tensor) – Boxes to be transformed. Has shape (num_boxes, 6), with the last dimension arranged as (batch_index, cx, cy, w, h, a).

  • labels (torch.Tensor) – Shape (n*bs, ).

  • label_weights (torch.Tensor) – Labels_weights for all proposals, has shape (num_proposals,).

  • bbox_targets (torch.Tensor) – Regression target for all proposals, has shape (num_proposals, 5), the last dimension 5 represents [cx, cy, w, h, a].

  • bbox_weights (torch.Tensor) – Regression weights for all proposals, has shape (num_proposals, 5).

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

refine_bboxes(rois, labels, bbox_preds, pos_is_gts, img_metas)[source]

Refine bboxes during training.

Parameters
  • rois (torch.Tensor) – Shape (n*bs, 6), where n is the number of images per GPU and bs is the number of sampled RoIs per image. The first column is the image id and the next 5 columns are cx, cy, w, h, a.

  • labels (torch.Tensor) – Shape (n*bs, ).

  • bbox_preds (torch.Tensor) – Shape (n*bs, 5) or (n*bs, 5*#class).

  • pos_is_gts (list[Tensor]) – Flags indicating if each positive bbox is a gt bbox.

  • img_metas (list[dict]) – Meta info of each image.

Returns

Refined bboxes of each image in a mini-batch.

Return type

list[Tensor]

regress_by_class(rois, label, bbox_pred, img_meta)[source]

Regress the bbox for the predicted class. Used in Cascade R-CNN.

Parameters
  • rois (torch.Tensor) – shape (n, 5) or (n, 6)

  • label (torch.Tensor) – shape (n, )

  • bbox_pred (torch.Tensor) – shape (n, 5*(#class)) or (n, 5)

  • img_meta (dict) – Image meta info.

Returns

Regressed bboxes, the same shape as input rois.

Return type

Tensor

class mmrotate.models.roi_heads.RotatedConvFCBBoxHead(num_shared_convs=0, num_shared_fcs=0, num_cls_convs=0, num_cls_fcs=0, num_reg_convs=0, num_reg_fcs=0, conv_out_channels=256, fc_out_channels=1024, conv_cfg=None, norm_cfg=None, init_cfg=None, *args, **kwargs)[source]

More general bbox head, with shared conv and fc layers and two optional separated branches.

                            /-> cls convs -> cls fcs -> cls
shared convs -> shared fcs
                            \-> reg convs -> reg fcs -> reg
Parameters
  • num_shared_convs (int, optional) – number of shared_convs.

  • num_shared_fcs (int, optional) – number of shared_fcs.

  • num_cls_convs (int, optional) – number of cls_convs.

  • num_cls_fcs (int, optional) – number of cls_fcs.

  • num_reg_convs (int, optional) – number of reg_convs.

  • num_reg_fcs (int, optional) – number of reg_fcs.

  • conv_out_channels (int, optional) – output channels of convolution.

  • fc_out_channels (int, optional) – output channels of fc.

  • conv_cfg (dict, optional) – Config of convolution.

  • norm_cfg (dict, optional) – Config of normalization.

  • init_cfg (dict, optional) – Config of initialization.

forward(x)[source]

Forward function.

class mmrotate.models.roi_heads.RotatedShared2FCBBoxHead(fc_out_channels=1024, *args, **kwargs)[source]

Shared2FC RBBox head.
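
A hedged construction sketch: judging from the class hierarchy (and mirroring mmdet's Shared2FCBBoxHead), the Shared2FC variant should correspond to a RotatedConvFCBBoxHead with no convs and two shared fc layers:

    from mmrotate.models.roi_heads import RotatedConvFCBBoxHead

    # Assumed-equivalent construction; num_classes=15 is illustrative (DOTA).
    head = RotatedConvFCBBoxHead(
        num_shared_convs=0,
        num_shared_fcs=2,
        fc_out_channels=1024,
        in_channels=256,
        roi_feat_size=7,
        num_classes=15)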

class mmrotate.models.roi_heads.RotatedSingleRoIExtractor(roi_layer, out_channels, featmap_strides, finest_scale=56, init_cfg=None)[source]

Extract RoI features from a single level feature map.

If there are multiple input feature levels, each RoI is mapped to a level according to its scale. The mapping rule is proposed in FPN.

Parameters
  • roi_layer (dict) – Specify RoI layer type and arguments.

  • out_channels (int) – Output channels of RoI layers.

  • featmap_strides (List[int]) – Strides of input feature maps.

  • finest_scale (int) – Scale threshold of mapping to level 0. Default: 56.

  • init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
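
A hedged construction sketch; the RoIAlignRotated arguments follow mmcv.ops and the stride list assumes a standard 4-level FPN:

    from mmrotate.models.roi_heads import RotatedSingleRoIExtractor

    extractor = RotatedSingleRoIExtractor(
        roi_layer=dict(type='RoIAlignRotated', out_size=7, sample_num=2),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32],  # assumes a 4-level FPN
        finest_scale=56)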

build_roi_layers(layer_cfg, featmap_strides)[source]

Build RoI operator to extract feature from each level feature map.

Parameters
  • layer_cfg (dict) – Dictionary to construct and config RoI layer operation. Options are modules under mmcv/ops such as RoIAlign.

  • featmap_strides (List[int]) – The stride of input feature map w.r.t to the original image size, which would be used to scale RoI coordinate (original image coordinate system) to feature coordinate system.

Returns

The RoI extractor modules for each level feature map.

Return type

nn.ModuleList

forward(feats, rois, roi_scale_factor=None)[source]

Forward function.

Parameters
  • feats (torch.Tensor) – Input features.

  • rois (torch.Tensor) – Input RoIs, shape (k, 6).

  • roi_scale_factor (float, optional) – Scale factor that RoI will be multiplied by. Default: None.

Returns

Scaled RoI features.

Return type

torch.Tensor

map_roi_levels(rois, num_levels)[source]

Map rois to corresponding feature levels by scales.

  • scale < finest_scale * 2: level 0

  • finest_scale * 2 <= scale < finest_scale * 4: level 1

  • finest_scale * 4 <= scale < finest_scale * 8: level 2

  • scale >= finest_scale * 8: level 3

Parameters
  • rois (torch.Tensor) – Input RoIs, shape (k, 6).

  • num_levels (int) – Total level number.

Returns

Level index (0-based) of each RoI, shape (k, )

Return type

Tensor
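
The rules above reduce to a clipped log2 of the RoI scale. A minimal stand-alone sketch of the rule (not the library implementation), assuming rois laid out as (batch_ind, cx, cy, w, h, a):

    import torch

    def map_levels(rois, num_levels, finest_scale=56):
        scale = torch.sqrt(rois[:, 3] * rois[:, 4])      # sqrt(w * h)
        lvls = torch.floor(torch.log2(scale / finest_scale + 1e-6))
        return lvls.clamp(min=0, max=num_levels - 1).long()

    rois = torch.tensor([[0., 100., 100., 40., 30., 0.5]])
    print(map_levels(rois, num_levels=4))  # tensor([0]): scale ~34.6 < 112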

roi_rescale(rois, scale_factor)[source]

Scale RoI coordinates by scale factor.

Parameters
  • rois (torch.Tensor) – RoI (Region of Interest), shape (n, 6)

  • scale_factor (float) – Scale factor that RoI will be multiplied by.

Returns

Scaled RoI.

Return type

torch.Tensor

class mmrotate.models.roi_heads.RotatedStandardRoIHead(bbox_roi_extractor=None, bbox_head=None, shared_head=None, train_cfg=None, test_cfg=None, pretrained=None, init_cfg=None, version='oc')[source]

Simplest base rotated roi head including one bbox head.

Parameters
  • bbox_roi_extractor (dict, optional) – Config of bbox_roi_extractor.

  • bbox_head (dict, optional) – Config of bbox_head.

  • shared_head (dict, optional) – Config of shared_head.

  • train_cfg (dict, optional) – Config of train.

  • test_cfg (dict, optional) – Config of test.

  • pretrained (str, optional) – Path of pretrained weight.

  • init_cfg (dict, optional) – Config of initialization.

  • version (str, optional) – Angle representations. Defaults to ‘oc’.

async async_simple_test(x, proposal_list, img_metas, rescale=False)[source]

Async test without augmentation.

Parameters
  • x (list[Tensor]) – list of multi-level img features.

  • proposal_list (list[Tensors]) – list of region proposals.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’.

  • rescale (bool) – If True, return boxes in original image space. Default: False.

Returns

a dictionary of bbox_results.

Return type

dict[str, Tensor]

aug_test(x, proposal_list, img_metas, rescale=False)[source]

Test with augmentations.

forward_dummy(x, proposals)[source]

Dummy forward function.

Parameters
  • x (list[Tensors]) – list of multi-level img features.

  • proposals (list[Tensors]) – list of region proposals.

Returns

list of regions of interest.

Return type

list[Tensors]

forward_train(x, img_metas, proposal_list, gt_bboxes, gt_labels, gt_bboxes_ignore=None, gt_masks=None)[source]
Parameters
  • x (list[Tensor]) – list of multi-level img features.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’. For details on the values of these keys see mmdet/datasets/pipelines/formatting.py:Collect.

  • proposal_list (list[Tensor]) – list of region proposals.

  • gt_bboxes (list[Tensor]) – Ground truth bboxes for each image with shape (num_gts, 5) in [cx, cy, w, h, a] format.

  • gt_labels (list[Tensor]) – class indices corresponding to each box

  • gt_bboxes_ignore (None | list[Tensor]) – specify which bounding boxes can be ignored when computing the loss.

  • gt_masks (None | Tensor) – true segmentation masks for each box used if the architecture supports a segmentation task. Always set to None.

Returns

a dictionary of loss components.

Return type

dict[str, Tensor]

init_assigner_sampler()[source]

Initialize assigner and sampler.

init_bbox_head(bbox_roi_extractor, bbox_head)[source]

Initialize bbox_head.

Parameters
  • bbox_roi_extractor (dict) – Config of bbox_roi_extractor.

  • bbox_head (dict) – Config of bbox_head.

simple_test(x, proposal_list, img_metas, rescale=False)[source]

Test without augmentation.

Parameters
  • x (list[Tensor]) – list of multi-level img features.

  • proposal_list (list[Tensors]) – list of region proposals.

  • img_metas (list[dict]) – list of image info dict where each dict has: ‘img_shape’, ‘scale_factor’, ‘flip’, and may also contain ‘filename’, ‘ori_shape’, ‘pad_shape’, and ‘img_norm_cfg’.

  • rescale (bool) – If True, return boxes in original image space. Default: False.

Returns

a dictionary of bbox_results.

Return type

dict[str, Tensor]

simple_test_bboxes(x, img_metas, proposals, rcnn_test_cfg, rescale=False)[source]

Test only det bboxes without augmentation.

Parameters
  • x (tuple[Tensor]) – Feature maps of all scale levels.

  • img_metas (list[dict]) – Image meta info.

  • proposals (List[Tensor]) – Region proposals.

  • rcnn_test_cfg (ConfigDict) – test_cfg of R-CNN.

  • rescale (bool) – If True, return boxes in original image space. Default: False.

Returns

The first list contains the boxes of the corresponding image in a batch, each tensor has the shape (num_boxes, 6) and the last dimension 6 represents (cx, cy, w, h, a, score). Each Tensor in the second list is the labels with shape (num_boxes, ). The length of both lists should be equal to batch_size.

Return type

tuple[list[Tensor], list[Tensor]]

losses

class mmrotate.models.losses.BCConvexGIoULoss(reduction='mean', loss_weight=1.0)[source]

BCConvex GIoU loss.

Computing the BCConvex GIoU loss between a set of predicted convexes and target convexes.

Parameters
  • reduction (str, optional) – The reduction method of the loss. Defaults to ‘mean’.

  • loss_weight (float, optional) – The weight of loss. Defaults to 1.0.

Returns

Loss tensor.

Return type

torch.Tensor

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – Predicted convexes.

  • target (torch.Tensor) – Corresponding gt convexes.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

class mmrotate.models.losses.ConvexGIoULoss(reduction='mean', loss_weight=1.0)[source]

Convex GIoU loss.

Computing the Convex GIoU loss between a set of predicted convexes and target convexes.

Parameters
  • reduction (str, optional) – The reduction method of the loss. Defaults to ‘mean’.

  • loss_weight (float, optional) – The weight of loss. Defaults to 1.0.

Returns

Loss tensor.

Return type

torch.Tensor

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – Predicted convexes.

  • target (torch.Tensor) – Corresponding gt convexes.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
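
A hedged call sketch; the (N, 18) layout (9 x-y pairs per convex) is an assumption inferred from the RepPoints-style heads above, and the underlying convex ops may require a CUDA build of mmcv:

    import torch
    from mmrotate.models.losses import ConvexGIoULoss

    loss_fn = ConvexGIoULoss(loss_weight=1.0)
    pred = torch.rand(4, 18, requires_grad=True)  # 9 points per convex (assumed)
    target = torch.rand(4, 18)                    # matching gt convexes
    loss = loss_fn(pred, target)
    loss.backward()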

class mmrotate.models.losses.GDLoss(loss_type, representation='xy_wh_r', fun='log1p', tau=0.0, alpha=1.0, reduction='mean', loss_weight=1.0, **kwargs)[source]

Gaussian based loss.

Parameters
  • loss_type (str) – Type of loss.

  • representation (str, optional) – Coordinate System.

  • fun (str, optional) – The function applied to distance. Defaults to ‘log1p’.

  • tau (float, optional) – Defaults to 0.0.

  • alpha (float, optional) – Defaults to 1.0.

  • reduction (str, optional) – The reduction method of the loss. Defaults to ‘mean’.

  • loss_weight (float, optional) – The weight of loss. Defaults to 1.0.

Returns

loss (torch.Tensor)

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – Predicted bboxes.

  • target (torch.Tensor) – Corresponding gt bboxes.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
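
A hedged config sketch for plugging GDLoss into a head's loss_bbox slot; loss_type='gwd' (Gaussian Wasserstein distance) is an assumed option:

    # Illustrative config dict; values other than the documented defaults
    # are assumptions.
    loss_bbox = dict(
        type='GDLoss',
        loss_type='gwd',
        representation='xy_wh_r',
        fun='log1p',
        tau=1.0,
        loss_weight=1.0)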

class mmrotate.models.losses.GDLoss_v1(loss_type, fun='sqrt', tau=1.0, reduction='mean', loss_weight=1.0, **kwargs)[source]

Gaussian based loss.

Parameters
  • loss_type (str) – Type of loss.

  • fun (str, optional) – The function applied to distance. Defaults to ‘sqrt’.

  • tau (float, optional) – Defaults to 1.0.

  • reduction (str, optional) – The reduction method of the loss. Defaults to ‘mean’.

  • loss_weight (float, optional) – The weight of loss. Defaults to 1.0.

Returns

loss (torch.Tensor)

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – Predicted bboxes.

  • target (torch.Tensor) – Corresponding gt bboxes.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

class mmrotate.models.losses.KFLoss(fun='none', reduction='mean', loss_weight=1.0, **kwargs)[source]

Kalman filter based loss.

Parameters
  • fun (str, optional) – The function applied to distance. Defaults to ‘none’.

  • reduction (str, optional) – The reduction method of the loss. Defaults to ‘mean’.

  • loss_weight (float, optional) – The weight of loss. Defaults to 1.0.

Returns

loss (torch.Tensor)

forward(pred, target, weight=None, avg_factor=None, pred_decode=None, targets_decode=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – Predicted bboxes.

  • target (torch.Tensor) – Corresponding gt bboxes.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • pred_decode (torch.Tensor) – Predicted decoded bboxes.

  • targets_decode (torch.Tensor) – Corresponding gt decoded bboxes.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

Returns

loss (torch.Tensor)

class mmrotate.models.losses.KLDRepPointsLoss(eps=1e-06, reduction='mean', loss_weight=1.0)[source]

Kullback-Leibler Divergence loss for RepPoints.

Parameters
  • eps (float) – Defaults to 1e-6.

  • reduction (str, optional) – The reduction method of the loss. Defaults to ‘mean’.

  • loss_weight (float, optional) – The weight of loss. Defaults to 1.0.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – Predicted convexes.

  • target (torch.Tensor) – Corresponding gt convexes.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

Returns

loss (torch.Tensor)

class mmrotate.models.losses.SmoothFocalLoss(gamma=2.0, alpha=0.25, reduction='mean', loss_weight=1.0)[source]

Smooth Focal Loss. Implementation of Circular Smooth Label (CSL).

Parameters
  • gamma (float, optional) – The gamma for calculating the modulating factor. Defaults to 2.0.

  • alpha (float, optional) – A balanced form for Focal Loss. Defaults to 0.25.

  • reduction (str, optional) – The method used to reduce the loss into a scalar. Defaults to ‘mean’. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – Weight of loss. Defaults to 1.0.

Returns

loss (torch.Tensor)

forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning label of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Options are “none”, “mean” and “sum”.

Returns

The calculated loss

Return type

torch.Tensor
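
A hedged call sketch; treating target as a CSL-smoothed soft-label map of the same shape as pred (e.g. one column per angle bin) is an assumption:

    import torch
    from mmrotate.models.losses import SmoothFocalLoss

    loss_fn = SmoothFocalLoss(gamma=2.0, alpha=0.25)
    pred = torch.randn(8, 180, requires_grad=True)  # 180 angle bins (assumed)
    target = torch.rand(8, 180)                     # CSL-smoothed soft labels
    loss = loss_fn(pred, target)
    loss.backward()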

utils

class mmrotate.models.utils.ORConv2d(in_channels, out_channels, kernel_size=3, arf_config=None, stride=1, padding=0, dilation=1, groups=1, bias=True)[source]

Oriented 2-D convolution.

Parameters
  • in_channels (int) – Number of input channels.

  • out_channels (int) – Number of output channels.

  • kernel_size (int, optional) – The size of kernel.

  • arf_config (tuple, optional) – A tuple consisting of nOrientation and nRotation.

  • stride (int, optional) – Stride of the convolution. Default: 1.

  • padding (int or tuple) – Zero-padding added to both sides of the input. Default: 0.

  • dilation (int or tuple) – Spacing between kernel elements. Default: 1.

  • groups (int) – Number of blocked connections from input channels to output channels. Default: 1.

  • bias (bool) – If True, adds a learnable bias to the output. Default: True.

forward(input)[source]

Forward function.

get_indices()[source]

Get the indices of ORConv2d.

reset_parameters()[source]

Reset the parameters of ORConv2d.

rotate_arf()[source]

Build active rotating filter module.

class mmrotate.models.utils.RotationInvariantPooling(nInputPlane, nOrientation=8)[source]

Rotation-invariant pooling module.

Parameters
  • nInputPlane (int) – The number of input planes.

  • nOrientation (int, optional) – The number of oriented channels.

forward(x)[source]

Forward function.

mmrotate.models.utils.build_enn_divide_feature(planes)[source]

Build an enn regular feature map whose number of channels is the given value divided by N.

mmrotate.models.utils.build_enn_feature(planes)[source]

Build an enn regular feature map with the specified number of channels.

mmrotate.models.utils.build_enn_norm_layer(num_features, postfix='')[source]

Build an enn normalization layer.

mmrotate.models.utils.build_enn_trivial_feature(planes)[source]

Build an enn trivial feature map with the specified number of channels.

mmrotate.models.utils.ennAvgPool(inplanes, kernel_size=1, stride=None, padding=0, ceil_mode=False)[source]

enn Average Pooling.

Parameters
  • inplanes (int) – The number of input channels.

  • kernel_size (int, optional) – The size of kernel.

  • stride (int, optional) – Stride of the pooling operation. Default: None.

  • padding (int or tuple) – Zero-padding added to both sides of the input. Default: 0.

  • ceil_mode (bool, optional) – If True, keeps information in the corners of the feature map by computing the output size with ceil instead of floor.

mmrotate.models.utils.ennConv(inplanes, outplanes, kernel_size=3, stride=1, padding=0, groups=1, bias=False, dilation=1)[source]

enn convolution.

Parameters
  • inplanes (int) – Number of input channels.

  • outplanes (int) – Number of output channels.

  • kernel_size (int, optional) – The size of kernel.

  • stride (int, optional) – Stride of the convolution. Default: 1.

  • padding (int or tuple) – Zero-padding added to both sides of the input. Default: 0.

  • groups (int) – Number of blocked connections from input channels to output channels. Default: 1.

  • bias (bool) – If True, adds a learnable bias to the output. Default: False.

  • dilation (int or tuple) – Spacing between kernel elements. Default: 1.

mmrotate.models.utils.ennInterpolate(inplanes, scale_factor, mode='nearest', align_corners=False)[source]

enn Interpolate.

mmrotate.models.utils.ennMaxPool(inplanes, kernel_size, stride=1, padding=0)[source]

enn Max Pooling.

mmrotate.models.utils.ennReLU(inplanes)[source]

enn ReLU.

mmrotate.models.utils.ennTrivialConv(inplanes, outplanes, kernel_size=3, stride=1, padding=0, groups=1, bias=False, dilation=1)[source]

enn convolution with trivial input feature.

Parameters
  • inplanes (int) – Number of input channels.

  • outplanes (int) – Number of output channels.

  • kernel_size (int, optional) – The size of kernel.

  • stride (int, optional) – Stride of the convolution. Default: 1.

  • padding (int or tuple) – Zero-padding added to both sides of the input. Default: 0.

  • groups (int) – Number of blocked connections from input channels to output channels. Default: 1.

  • bias (bool) – If True, adds a learnable bias to the output. Default: False.

  • dilation (int or tuple) – Spacing between kernel elements. Default: 1.
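
A hedged construction sketch of composing these helpers (requires the e2cnn package; only layer construction is shown, since running a forward pass requires wrapping inputs as e2cnn GeometricTensors):

    from mmrotate.models.utils import ennMaxPool, ennReLU, ennTrivialConv

    # Build a tiny equivariant stem; the channel counts are illustrative.
    conv = ennTrivialConv(3, 64, kernel_size=3, padding=1)
    relu = ennReLU(64)
    pool = ennMaxPool(64, kernel_size=2, stride=2)
    print(type(conv).__name__, type(relu).__name__, type(pool).__name__)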

mmrotate.utils

mmrotate.utils.collect_env()[source]

Collect environment information.

mmrotate.utils.find_latest_checkpoint(path, suffix='pth')[source]

Find the latest checkpoint from the working directory.

Parameters
  • path (str) – The path to find checkpoints.

  • suffix (str) – File extension. Defaults to pth.

Returns

File path of the latest checkpoint.

Return type

str | None

References

[1] https://github.com/microsoft/SoftTeacher/blob/main/ssod/utils/patch.py
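
A hedged usage sketch for resuming from the newest checkpoint (the work directory path is hypothetical):

    from mmrotate.utils import find_latest_checkpoint

    work_dir = './work_dirs/my_experiment'  # hypothetical path
    latest = find_latest_checkpoint(work_dir, suffix='pth')
    if latest is not None:
        print('resuming from', latest)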

mmrotate.utils.get_root_logger(log_file=None, log_level=20)[source]

Get root logger.

Parameters
  • log_file (str, optional) – File path of log. Defaults to None.

  • log_level (int, optional) – The level of logger. Defaults to logging.INFO.

Returns

The obtained logger

Return type

logging.Logger
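
A hedged usage sketch (the log file path is hypothetical):

    import logging

    from mmrotate.utils import get_root_logger

    logger = get_root_logger(log_file='work_dir/train.log',
                             log_level=logging.INFO)
    logger.info('mmrotate logger initialized')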
