SHIRT: Satellite Hardware-In-the-loop Rendezvous Trajectories Dataset
Tae Ha Park
Simone D'Amico

[Stanford Digital Repository]
[BibTeX]

Example low-resolution images of the synthetic and lightbox domains of the ROE2 trajectory.


Introduction

Deploying deep learning models in space missions is difficult due to the scarcity of real-life data from space. In spaceborne computer vision applications in particular, training can rely on synthetic data from computer renderers, but validating the trained neural networks remains a significant challenge. Our SPEED+ dataset addressed this challenge by introducing Hardware-In-the-Loop (HIL) images captured at the Testbed for Rendezvous and Optical Navigation (TRON) facility of the Space Rendezvous Laboratory (SLAB). Numbering nearly 10,000, these HIL images are captured with a real camera and a mockup satellite model under high-fidelity spaceborne illumination conditions physically re-created on Earth, making it possible to evaluate the robustness of trained models across the domain gap without access to space.

The Satellite Hardware-In-the-loop Rendezvous Trajectories (SHIRT) dataset extends SPEED+ with sequential images of the target mockup satellite in simulated rendezvous trajectories. As in SPEED+, the dataset contains both synthetic imagery rendered with OpenGL and HIL lightbox images from TRON, corresponding to identical pose labels, for two representative rendezvous scenarios: ROE1 and ROE2. In ROE1, the servicer maintains the along-track separation typical of a standard v-bar hold point while the target spins about one principal axis, whereas in ROE2, the servicer slowly approaches the target as it tumbles about two principal axes. As shown in the GIF above, the synthetic and lightbox images for the same trajectory share geometric and illumination consistency while exhibiting stark differences in visual features. For more information, see Section V of the paper below.
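Because the two domains share pose labels frame for frame, a model trained on synthetic data can be scored on both image sequences against the same ground truth, isolating the effect of the domain gap. Below is a minimal evaluation sketch in Python; estimate_pose is a hypothetical stand-in for any trained pose estimator, and the error metrics are generic choices rather than the paper's exact figures of merit.

    import numpy as np

    def rotation_error_deg(q_est, q_true):
        # Angular distance between two scalar-first unit quaternions.
        dot = min(abs(float(np.dot(q_est, q_true))), 1.0)
        return np.degrees(2.0 * np.arccos(dot))

    def evaluate(frames, estimate_pose):
        # `frames` yields (image, q_true, t_true) tuples for one domain;
        # `estimate_pose` returns an estimated (quaternion, translation).
        rot_errs, pos_errs = [], []
        for image, q_true, t_true in frames:
            q_est, t_est = estimate_pose(image)
            rot_errs.append(rotation_error_deg(q_est, q_true))
            pos_errs.append(float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_true))))
        return np.mean(rot_errs), np.mean(pos_errs)

Running evaluate once on the synthetic sequence and once on the lightbox sequence of the same trajectory compares performance across the domain gap under identical ground-truth poses.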


Format

The SHIRT dataset is hosted on the Stanford Digital Repository (SDR) and is released under the CC BY-NC-SA 4.0 license. The single .zip file (3.01 GB) containing the dataset is organized as shown below; a minimal loading sketch in Python follows the listing:

  • roe1
    • synthetic
      • images: folder containing JPEG images of the ROE1 synthetic trajectory
    • lightbox
      • images: folder containing JPEG images of the ROE1 lightbox trajectory
    • roe1.json: list of all file names and associated pose labels
    • metadata.json: list of all metadata, such as absolute states and simulation parameters
  • roe2
    • synthetic
      • images: folder containing JPEG images of the ROE2 synthetic trajectory
    • lightbox
      • images: folder containing JPEG images of the ROE2 lightbox trajectory
    • roe2.json: list of all file names and associated pose labels
    • metadata.json: list of all metadata, such as absolute states and simulation parameters
  • camera.json: list of camera intrinsic parameters
  • LICENSE.md: dataset license file
  • METADATA.md: file explaining the content of the metadata.json files
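
For reference, here is a minimal Python sketch for parsing these files. It assumes the .zip has been extracted in place; the JSON field names (cameraMatrix, filename, q_vbs2tango_true, r_Vo2To_vbs_true) are assumptions modeled on the SPEED+ conventions and should be verified against the actual files and METADATA.md.

    import json
    from pathlib import Path

    import numpy as np
    from PIL import Image

    root = Path("shirt")  # hypothetical extraction directory

    # Camera intrinsics; the "cameraMatrix" key is an assumption.
    with open(root / "camera.json") as f:
        camera = json.load(f)
    K = np.array(camera["cameraMatrix"]).reshape(3, 3)

    # Pose labels for the ROE1 trajectory, one entry per frame.
    with open(root / "roe1" / "roe1.json") as f:
        labels = json.load(f)

    # Field names below follow the SPEED+ convention (assumed): a
    # scalar-first camera-to-target quaternion and the target position
    # in the camera frame, in meters.
    first = labels[0]
    q = np.array(first["q_vbs2tango_true"])
    t = np.array(first["r_Vo2To_vbs_true"])

    # The same filename indexes both domains of the trajectory.
    img_syn = Image.open(root / "roe1" / "synthetic" / "images" / first["filename"])
    img_lbx = Image.open(root / "roe1" / "lightbox" / "images" / first["filename"])

Since SHIRT extends SPEED+, existing SPEED+ loading utilities should adapt with little change; again, the exact key names above are unverified assumptions.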


Paper and Supplementary Material

Park, T. H., and D'Amico, S.
Adaptive Neural-Network-Based Unscented Kalman Filter for Robust Pose Tracking of Noncooperative Spacecraft.
Journal of Guidance, Control, and Dynamics (2023).


[arXiv]
[Official]
[BibTeX]


Acknowledgements

The authors would like to thank OHB Sweden for the 3D model of the Tango spacecraft used to create the images. This work is partially supported by Taqnia International through contract 1232617-1-GWNDV.

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.