Object Heading Detection Dataset (OHD-SJTU)
Data Description

OHD-SJTU is our new open-source dataset for rotation detection and object heading detection. It comprises two datasets of different scales, OHD-SJTU-S and OHD-SJTU-L. OHD-SJTU-S is collected from publicly available Google Earth imagery and contains 43 large scene images, sized 10,000x10,000 and 16,000x16,000 pixels. It covers two object categories (ship and plane) with 4,125 instances in total (3,343 ships and 782 planes). Each object is labeled by an arbitrary quadrilateral, and the first marked point indicates the head position of the object to facilitate head prediction. We randomly selected 30 original images as the training and validation set, and the remaining 13 images as the testing set. The scenes cover a broad variety of real-world conditions and typical challenges: cloud occlusion, dense seamless arrangement, strong changes in illumination and exposure, mixed sea-and-land scenes, and large numbers of interfering objects.

OHD-SJTU-L extends OHD-SJTU-S with additional categories, such as small vehicle, large vehicle, harbor, and helicopter. The additional data comes from DOTA, but we reprocessed the annotations and added head annotations for each object. In total, OHD-SJTU-L contains six object categories and 113,435 instances. Compared with the AP50 metric used by DOTA, OHD-SJTU adopts the more stringent AP50:95 to measure performance, which further challenges the accuracy of detectors.
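Because the head point is simply the first marked vertex of each quadrilateral, a head direction can be recovered directly from an annotation. The sketch below assumes a DOTA-like plain-text line layout (`x1 y1 x2 y2 x3 y3 x4 y4 category`); the exact file format of OHD-SJTU may differ, and `parse_annotation` / `head_angle` are hypothetical helper names.

```python
import math

def parse_annotation(line):
    """Parse one annotation line of the assumed form
    'x1 y1 x2 y2 x3 y3 x4 y4 category'.
    The first point (x1, y1) marks the object's head."""
    parts = line.split()
    coords = [float(v) for v in parts[:8]]
    quad = [(coords[i], coords[i + 1]) for i in range(0, 8, 2)]
    category = parts[8]
    return quad, category

def head_angle(quad):
    """Angle (degrees, image coordinates) from the quadrilateral
    center to the head point -- a simple proxy for heading."""
    cx = sum(x for x, _ in quad) / 4.0
    cy = sum(y for _, y in quad) / 4.0
    hx, hy = quad[0]
    return math.degrees(math.atan2(hy - cy, hx - cx))

# Toy square whose head vertex is (10, 0):
quad, cat = parse_annotation("10 0 10 10 0 10 0 0 ship")
angle = head_angle(quad)  # -45.0 degrees for this toy example
```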

Authors
  • Xue Yang, Shanghai Jiao Tong University, China
  • Junchi Yan (corresponding author), Shanghai Jiao Tong University, China
Download
Example Images
Example annotations: each object is labeled by an arbitrary quadrilateral, with the first marked point indicating the object's head.



Baseline Methods

We divide the training and validation images into 600x600 subimages with an overlap of 150 pixels and scale them to 800x800. When cropping an image with the sliding window, we keep only those objects whose center point falls inside the subimage. All experiments use the same settings, with ResNet101 as the backbone. Apart from data augmentation (random horizontal and vertical flipping, random graying, and random rotation) applied on OHD-SJTU-S, no other tricks are used.
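The cropping step above can be sketched as follows. `sliding_window_crops` and `keep_object` are hypothetical helper names, and clamping the last window to the image border is a common convention rather than a detail stated here.

```python
def sliding_window_crops(width, height, win=600, overlap=150):
    """Top-left corners of win x win crops with the given overlap;
    the last window along each axis is clamped to the image border."""
    stride = win - overlap
    xs = list(range(0, max(width - win, 0) + 1, stride))
    if xs[-1] + win < width:
        xs.append(width - win)
    ys = list(range(0, max(height - win, 0) + 1, stride))
    if ys[-1] + win < height:
        ys.append(height - win)
    return [(x, y) for y in ys for x in xs]

def keep_object(center_xy, crop_xy, win=600):
    """An object is assigned to a crop only if its center lies inside it."""
    cx, cy = center_xy
    x0, y0 = crop_xy
    return x0 <= cx < x0 + win and y0 <= cy < y0 + win

# A 1000x1000 image yields 4 crops: (0,0), (400,0), (0,400), (400,400).
crops = sliding_window_crops(1000, 1000)
```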

Performance on the OBB task of OHD-SJTU-L (PL: plane, SH: ship, SV: small vehicle, LV: large vehicle, HA: harbor, HC: helicopter):

Method      | PL    | SH    | SV    | LV    | HA    | HC    | AP50  | AP75  | AP50:95
------------|-------|-------|-------|-------|-------|-------|-------|-------|--------
R2CNN       | 89.99 | 71.93 | 54.00 | 65.46 | 66.36 | 55.94 | 67.28 | 32.69 | 34.78
RRPN        | 89.66 | 75.35 | 50.25 | 72.22 | 62.99 | 45.26 | 65.96 | 21.24 | 30.13
RetinaNet-H | 90.20 | 66.99 | 53.58 | 63.38 | 63.75 | 53.82 | 65.29 | 34.59 | 35.39
RetinaNet-R | 89.99 | 77.65 | 51.77 | 81.22 | 62.85 | 52.25 | 69.29 | 39.07 | 38.90
R3Det       | 89.89 | 78.36 | 55.23 | 78.35 | 57.06 | 53.50 | 68.73 | 35.36 | 37.10
OHDet       | 89.72 | 77.40 | 52.89 | 78.72 | 63.76 | 54.62 | 69.52 | 41.89 | 39.51


Performance on the OBB task of OHD-SJTU-S:

Method      | PL    | SH    | AP50  | AP75  | AP50:95
------------|-------|-------|-------|-------|--------
R2CNN       | 90.91 | 77.66 | 84.28 | 55.00 | 52.80
RRPN        | 90.14 | 76.13 | 83.13 | 27.87 | 40.74
RetinaNet-H | 90.86 | 66.32 | 78.59 | 58.45 | 53.07
RetinaNet-R | 90.82 | 88.14 | 89.48 | 74.62 | 61.86
R3Det       | 90.82 | 85.59 | 88.21 | 67.13 | 56.19
OHDet       | 90.74 | 87.59 | 89.06 | 78.55 | 63.94
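AP50:95 in the tables above follows the COCO convention: AP averaged over ten IoU thresholds from 0.50 to 0.95 in steps of 0.05. A minimal sketch, with the hypothetical `ap_fn` standing in for any routine that computes AP at a single IoU threshold:

```python
def ap_range(ap_fn, lo=0.50, hi=0.95, step=0.05):
    """AP50:95 as used by OHD-SJTU: the mean of AP evaluated at IoU
    thresholds 0.50, 0.55, ..., 0.95 (ten thresholds in total).
    `ap_fn` is a hypothetical callable: IoU threshold -> AP."""
    n = round((hi - lo) / step) + 1
    thresholds = [lo + step * i for i in range(n)]
    return sum(ap_fn(t) for t in thresholds) / len(thresholds)
```

A detector that holds a constant AP across all thresholds keeps that value under AP50:95, while one whose AP drops at high IoU (as with RRPN's low AP75 above) is penalized accordingly.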


The performance of object heading detection on OHD-SJTU-L:

Task          | PL    | SH    | SV    | LV    | HA    | HC    | IoU50 | IoU75 | IoU50:95
--------------|-------|-------|-------|-------|-------|-------|-------|-------|---------
OBB mAP       | 89.63 | 75.88 | 46.21 | 75.88 | 61.43 | 33.87 | 63.88 | 37.45 | 36.42
OHD mAP       | 59.88 | 41.90 | 26.21 | 35.34 | 41.24 | 17.53 | 37.02 | 24.10 | 22.46
Head Accuracy | 74.49 | 69.71 | 62.21 | 57.95 | 76.66 | 49.06 | 65.01 | 65.77 | 64.60


The performance of object heading detection on OHD-SJTU-S:

Task          | PL    | SH    | IoU50 | IoU75 | IoU50:95
--------------|-------|-------|-------|-------|---------
OBB mAP       | 90.73 | 88.59 | 89.66 | 75.62 | 61.49
OHD mAP       | 76.89 | 86.40 | 81.65 | 65.51 | 55.09
Head Accuracy | 90.91 | 94.87 | 92.89 | 93.81 | 94.25