Sentinel-1 is a satellite mission operated by the European Space Agency (ESA). Read about the use of Sentinel-1 imagery in Skylight here.
Model
The Sentinel-1 Model is based on the classic object detection model Faster R-CNN, a two-stage detector made up of a Region Proposal Network (RPN) and a classification head. The RPN generates proposals, regions where an object of interest is likely present, and the classification head takes the most confident of these proposals and predicts scores (such as is_vessel) for each one.
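To make the two-stage structure concrete, the sketch below builds a Faster R-CNN with torchvision and swaps in a classification head that scores each proposal as background or vessel. This is an illustrative sketch only; the backbone, number of classes, and input handling are assumptions, not the Skylight model's actual configuration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Backbone, Region Proposal Network, and classification head come packaged together.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)

# Replace the classification head so it predicts two classes:
# background and vessel (an is_vessel-style score per proposal).
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# At inference time the RPN proposes candidate regions and the
# classification head scores each of them.
model.eval()
dummy_scene = [torch.rand(3, 800, 800)]  # placeholder image, not real SAR data
with torch.no_grad():
    detections = model(dummy_scene)
# detections[0] holds "boxes", "labels", and "scores" for the surviving proposals.
```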
The model detects vessels and predicts vessel attributes from Sentinel-1 SAR images. In particular, it uses the dual-polarization (VV + VH) Interferometric Wide swath (IW) acquisition mode of Sentinel-1, and produces point detections of vessels, cropped outputs surrounding those detections, and attributes associated with each detected vessel (currently, estimated length is displayed in Skylight).
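The sketch below illustrates, under assumed shapes and a hypothetical crop size, how the two polarization bands could be stacked into a single input and how a box-level detection could be reduced to a point detection plus a surrounding crop. It is a reading of the description above, not the Skylight pipeline itself.

```python
import numpy as np

def stack_polarizations(vv: np.ndarray, vh: np.ndarray) -> np.ndarray:
    """Stack the VV and VH bands of an IW scene into one 2-channel image."""
    assert vv.shape == vh.shape
    return np.stack([vv, vh], axis=0)  # shape: (2, H, W)

def box_to_point_and_crop(scene: np.ndarray, box, crop_size: int = 128):
    """Reduce a box detection to its center point and a fixed-size crop."""
    x0, y0, x1, y1 = box
    cx, cy = int((x0 + x1) / 2), int((y0 + y1) / 2)
    half = crop_size // 2
    _, h, w = scene.shape
    top, left = max(0, cy - half), max(0, cx - half)
    crop = scene[:, top:min(h, cy + half), left:min(w, cx + half)]
    return (cx, cy), crop

# Example usage with random arrays standing in for a real SAR scene.
vv = np.random.rand(1024, 1024).astype(np.float32)
vh = np.random.rand(1024, 1024).astype(np.float32)
scene = stack_polarizations(vv, vh)
point, crop = box_to_point_and_crop(scene, box=(500, 600, 540, 660))
```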
Training Data
The current version of the model was trained on Sentinel-1 scenes from several geographic areas, mostly near coastlines, that were hand-annotated by subject matter experts. A total of 55,499 point labels were used. These areas are distributed globally and shown in the image below.
...
To distinguish moving ships from static objects such as islands, platforms, and other non-vessel structures, the model is trained on data that includes overlapping images captured at different times. The model compares these images and learns to disregard objects that appear in every pass.
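One common way to give a detector this kind of temporal context, and a plausible reading of the description above, is to stack a historical image of the same footprint alongside the current one as extra input channels. The sketch below shows that idea with assumed array shapes; it is not the Skylight implementation.

```python
import numpy as np

def add_temporal_context(current: np.ndarray, historical: np.ndarray) -> np.ndarray:
    """Concatenate current and historical (VV, VH) images along the channel axis.

    current, historical: arrays of shape (2, H, W) covering the same ground footprint.
    Returns an array of shape (4, H, W). Returns that appear in both time steps are
    likely static structures; returns present only in the current image are likely
    moving vessels, which is the cue the detector can learn from.
    """
    assert current.shape == historical.shape
    return np.concatenate([current, historical], axis=0)

current = np.random.rand(2, 1024, 1024).astype(np.float32)
historical = np.random.rand(2, 1024, 1024).astype(np.float32)
model_input = add_temporal_context(current, historical)  # shape: (4, 1024, 1024)
```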
Validation Data
The validation set is data held out from training specifically so it can be used to evaluate the model. For the Skylight Sentinel-1 model, a total of 6,156 point labels were used for validation. The geographic distribution of the validation set roughly matches that of the training set.
...