Training issue - multiple image pairs #38
Hi @quizz0n

Nice images! I suspect that this comes from the dynamics of the input HR images used to train the model. Indeed, when you train a model from several (LR, HR) pairs, the most important thing, and the most difficult to achieve, is to preserve the LR --> HR radiometric transformation: it must be the same for all images. Otherwise, the network may perform some "random" radiometric transform, because it has learnt multiple "plausible" mappings that do not transform the radiometry the same way. The dark spots impacting your images still look "plausible" locally, because the model might have learnt multiple such mappings.

Maybe you want your model to preserve the input image radiometry. In this case, one simple thing to do prior to training is to align the HR patches radiometrically over the LR patches. To do that, you can for instance normalize each HR patches image over the LR patches image (e.g. compute the
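A minimal sketch of one way to do such a radiometric alignment, assuming the patches are held as NumPy arrays; the function name and the per-band mean/std matching are my own choices here, not necessarily the exact normalization the comment refers to:

```python
import numpy as np

def match_mean_std(hr, lr):
    """Align the HR patch radiometry to the LR patch, per band.

    hr: (H, W, B) high-resolution patch, lr: (h, w, B) low-resolution patch.
    Returns the HR patch rescaled so that its per-band mean and standard
    deviation match those of the LR patch.
    """
    hr = hr.astype(np.float64)
    lr = lr.astype(np.float64)
    hr_mean, hr_std = hr.mean(axis=(0, 1)), hr.std(axis=(0, 1))
    lr_mean, lr_std = lr.mean(axis=(0, 1)), lr.std(axis=(0, 1))
    # Guard against a flat HR patch (zero std) to avoid division by zero
    return (hr - hr_mean) / np.maximum(hr_std, 1e-9) * lr_std + lr_mean
```

Applied over every (LR, HR) pair before patch extraction, this forces a single, consistent LR --> HR radiometric mapping across the whole training set.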
Hi @remicres, thank you for the fast reply! In order to normalize each HR patches image over the LR patches image, I'm not sure I understand what would the
For instance you can compute them with least squares regression for an entire patches-image, considering that one point is the average pixel value for a single patch.
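A sketch of that least-squares idea, assuming the per-patch average values are collected into NumPy arrays (the function and variable names are hypothetical, introduced only for illustration):

```python
import numpy as np

def fit_gain_offset(lr_patch_means, hr_patch_means):
    """Least squares fit of hr ≈ gain * lr + offset.

    Each point is the average pixel value of one patch, taken from the
    LR and HR patches-images respectively, as suggested in the thread.
    """
    A = np.stack([lr_patch_means, np.ones_like(lr_patch_means)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, hr_patch_means, rcond=None)
    return gain, offset
```

The fitted `(gain, offset)` can then be inverted, e.g. `hr_aligned = (hr - offset) / gain`, to bring the HR patches into the LR radiometry (one fit per band).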
I've seen that in GRASS there's the
This should compute the
Ah, nice if there is such a thing in another OSGeo software! 👍 You could compute the averaged HR image over the same pixel spacing as the LR. Maybe using
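One simple way to get the averaged HR image at the LR pixel spacing is block averaging; a sketch assuming a NumPy array and an integer resolution factor between the two images (not the specific tool referenced above):

```python
import numpy as np

def block_average(hr, factor):
    """Average the HR image over factor x factor blocks, producing an
    image at the LR pixel spacing.

    hr: (H, W, B) array whose H and W are multiples of `factor`.
    """
    h, w = hr.shape[:2]
    return hr.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))
```

For a Sentinel-2 (10 m) / Spot-6 (2.5 m) pair, `factor` would be 4; the averaged HR image and the LR image can then be compared pixel by pixel to estimate the radiometric mapping.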
Thanks @remicres!
Yes, you could compute them with least squares regression for an entire patches-image, considering that one point is the average pixel value for a single patch.
Hi @remicres, From left to right: pre-trained model, original Spot-6 image, own model (about 2k patches).
Hi @quizz0n, thanks for the feedback. Out of curiosity, which part of the world is that? (Our pre-trained model has only been trained on patches over mainland France.)
Thanks!
Hi @remicres,
After a successful run a while ago (only one pair of images, Sentinel-2/Spot-6, 1276 patches), I wanted to train a new model with 4 additional pairs of images over different cities. So 5 image pairs in total, for 2208 patches overall (1276+345+158+264+165).
I followed the same steps when preparing the patches images; however, the training with 5 image pairs is not successful (incomplete parts in the final image).
I also did a run with only 2 image pairs: the pair used in the first run a while ago plus one random pair, to check whether there is an issue with one of the pairs. The result looks the same, just not as widespread, probably because of the smaller number of additional patches.
Another test I did was to apply the initial successful model to all the new Sentinel-2 images used for the patches of the 5 image pairs. I thought this would let me check whether something is wrong with the dynamics of the new images, or anything else. All of the resulting images were good.
TensorBoard:
Applying the model to a Sentinel-2 image:
I'm thinking maybe it's the way the files are read in the train command?
Thanks!