In this project, I trained four deep learning segmentation models on an artificial lunar dataset to see how they would perform on real images from NASA.
The four segmentation models I trained and tested are:
UNet
LinkNet
PSPNet
FPN
All of them follow a very similar training procedure, so you can consult the notebook I used to train the FPN and extrapolate its main components to the other models.
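To give a concrete picture of that shared procedure, here is a minimal sketch of how the four models could be built and compiled with a common configuration. It assumes the qubvel `segmentation_models` Keras library, a ResNet34 backbone, and placeholder arrays (`X_train`, `y_train`, ...); the actual notebooks may use different settings.

```python
import segmentation_models as sm

BACKBONE = "resnet34"   # assumed encoder; the notebooks may use another
N_CLASSES = 4           # assumed number of terrain classes in the artificial dataset

# The four architectures share the same constructor signature in segmentation_models
builders = {
    "unet":    sm.Unet,
    "linknet": sm.Linknet,
    "pspnet":  sm.PSPNet,
    "fpn":     sm.FPN,
}

def build_model(name):
    """Build one of the four segmentation models with a common configuration."""
    model = builders[name](
        BACKBONE,
        classes=N_CLASSES,
        activation="softmax",
        encoder_weights="imagenet",
    )
    model.compile(
        optimizer="adam",
        loss=sm.losses.categorical_focal_dice_loss,
        metrics=[sm.metrics.iou_score],
    )
    return model

# Identical training call for every architecture (X_train / y_train are placeholders):
# model = build_model("fpn")
# model.fit(X_train, y_train, validation_data=(X_val, y_val), batch_size=8, epochs=30)
```

The only part that changes between the four runs is the constructor picked from `builders`; the loss, metric, and fit call stay the same, which is why the FPN notebook is representative of the others.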
The second notebook is where I tested my model on the test dataset and on real Moon images. The artificial dataset comes from Kaggle.
I worked with:
Around 7,000 images for the training set
Around 2,000 images for the validation set
Around 1,000 images for the test set
Around 40 real Moon pictures
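To illustrate the evaluation step carried out in the second notebook, here is a hypothetical sketch of running a trained model on a single test image or real Moon photograph; the preprocessing, input size, and file paths are assumptions rather than the exact notebook code.

```python
import numpy as np
import cv2
import segmentation_models as sm

IMG_SIZE = (480, 480)  # assumed model input size

def predict_mask(model, image_path):
    """Run a trained segmentation model on a single (artificial or real) Moon image."""
    preprocess = sm.get_preprocessing("resnet34")    # must match the training backbone
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, IMG_SIZE)
    x = preprocess(img.astype("float32"))
    pred = model.predict(x[np.newaxis, ...])[0]      # (H, W, n_classes) probabilities
    return np.argmax(pred, axis=-1)                  # per-pixel class labels

# Example with a hypothetical path:
# mask = predict_mask(model, "real_moon_images/example.jpg")
```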
I then tried my model on an Apollo video shot from the lunar rover driven during the Apollo 15 mission in 1971. All of these results can be seen in my presentation video.
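For the video, a frame-by-frame pipeline along these lines could be used. This is only a sketch built on OpenCV, not the exact code from the notebooks; the normalization and the class-to-gray mapping are assumptions.

```python
import cv2
import numpy as np

def segment_video(model, in_path, out_path, size=(480, 480)):
    """Read a rover video frame by frame, predict a mask, and write an overlay video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)
        # Assumed preprocessing: simple 0-1 scaling; adapt to the backbone actually used
        pred = model.predict(frame[np.newaxis, ...].astype("float32") / 255.0)[0]
        mask = (np.argmax(pred, axis=-1) * 60).astype("uint8")   # crude class-to-gray mapping
        overlay = cv2.addWeighted(frame, 0.6,
                                  cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR), 0.4, 0)
        writer.write(overlay)
    cap.release()
    writer.release()
```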
You can also dive into my code in these notebooks!
My research interests include deep learning, automatic feature extraction, and computer vision, all applied to remote sensing problems, more precisely to Synthetic Aperture Radar (SAR) acquisitions.