Deploying deep learning models in space missions is difficult due to the scarcity of real data from space.
In spaceborne computer vision applications in particular, training can rely on synthetic data from computer renderers,
but validating the trained neural networks remains a major challenge. Our SPEED+ dataset
addressed this challenge by introducing Hardware-In-the-Loop (HIL) images captured from the
Testbed for Rendezvous and Optical Navigation (TRON) facility
at the Space Rendezvous Laboratory (SLAB). Nearly 10,000 in number, these HIL images
are captured with a real camera and a mockup satellite model under high-fidelity spaceborne illumination conditions physically re-created on Earth,
making it possible to evaluate the robustness of trained models across the domain gap without access to space.
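For instance, a model trained on synthetic images can be scored on the HIL images using a combined translation and rotation error. The sketch below is a minimal Python illustration of such a cross-domain check; `model` and `lightbox_loader` are hypothetical placeholders, and the error definition follows a common convention rather than any official scoring script.

```python
import numpy as np

def pose_error(t_pred, q_pred, t_gt, q_gt):
    """Normalized translation error plus quaternion angular error (radians),
    in the spirit of commonly used pose scores for this task."""
    e_t = np.linalg.norm(t_pred - t_gt) / np.linalg.norm(t_gt)
    # Angular distance between unit quaternions (same convention for both).
    e_q = 2.0 * np.arccos(np.clip(abs(np.dot(q_pred, q_gt)), 0.0, 1.0))
    return e_t + e_q

# Hypothetical cross-domain check: train on synthetic images, then score the
# same model on the HIL (lightbox) split. `model` and `lightbox_loader` stand
# in for the user's own estimator and data pipeline.
# errors = [pose_error(*model(image), t_gt, q_gt)
#           for image, (t_gt, q_gt) in lightbox_loader]
# print(f"mean HIL pose error: {np.mean(errors):.4f}")
```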
The Satellite Hardware-In-the-loop Rendezvous Trajectories (SHIRT) dataset extends SPEED+ and includes sequential images
of the target mockup satellite along simulated rendezvous trajectories. Similar to SPEED+, the dataset contains both synthetic imagery from
OpenGL and HIL lightbox images from TRON that share identical pose labels, covering two representative rendezvous scenarios: ROE1 and ROE2.
In ROE1, the servicer maintains the along-track separation typical of a standard v-bar hold point while the target spins about one principal axis,
whereas in ROE2, the servicer slowly approaches the target tumbling about two principal axes.
As shown in the GIF above, the synthetic and lightbox images for the same trajectory
share geometric and illumination consistency while exhibiting stark differences in visual features. For more information, see
Section V of the paper below.
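As a rough illustration of how the sequential data could be consumed, the Python sketch below iterates over one trajectory's images and pose labels. The directory layout, file names, and label format here are assumptions made for illustration only and may not match the released SHIRT package.

```python
from pathlib import Path
import json

def load_trajectory(root, trajectory="roe1", domain="lightbox"):
    """Yield (image_path, pose) pairs for one SHIRT trajectory.

    Assumes a hypothetical layout in which each trajectory folder holds a
    `labels.json` file listing, in temporal order, the image filename and
    pose label (translation vector and attitude quaternion) of each frame,
    plus one sub-folder per image domain ("synthetic" or "lightbox").
    The actual SHIRT release may organize files differently.
    """
    traj_dir = Path(root) / trajectory
    with open(traj_dir / "labels.json") as f:
        labels = json.load(f)
    for entry in labels:
        yield traj_dir / domain / entry["filename"], entry["pose"]

# Example usage (paths are placeholders):
# for image_path, pose in load_trajectory("shirt", "roe2", domain="synthetic"):
#     print(image_path, pose)
```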