images to sr.py #15
Hi @raialvaro, it looks like your model hasn't converged, or it has been applied to an image with the wrong dynamics. To answer your questions:
Hello Remi, what you are saying is very interesting. In my case I have the same problem. I would like to ask you how to set up TensorBoard so I can visualize the training curve as well as the validation on images. More specifically, how should I add it to the train.py code to make TensorBoard work correctly? Thank you very much for your attention. Best regards
You have to use summary logging in train.py (a sketch is shown below). In the meantime, you can start TensorBoard from the command line and point it at your log directory.
Then open your favorite web browser and connect to your TensorBoard instance. TensorBoard will help you find "good" loss weights for L1, L2, GAN, etc., since it lets you watch how the losses evolve during training. My advice is to start with losses that have the same order of magnitude. Do not change the GAN loss weight, but carefully adjust the L1 (or L2) and VGG loss weights (if you use a perceptual loss with a pretrained VGG).
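As a concrete illustration, here is a minimal sketch of the kind of summary logging that could be added to train.py, assuming a TF2-style eager training loop; the loss names, the log directory, and the log_step helper are hypothetical, and a graph-mode train.py would instead use the tf.compat.v1 summary writer and merged summary ops.

```python
# Minimal sketch (not the repo's actual train.py): log per-loss scalars and a
# few validation images to TensorBoard so the L1/VGG/GAN weights can be
# balanced by eye. Names (l1_loss, vgg_loss, gan_loss, LOG_DIR) are hypothetical.
import tensorflow as tf

LOG_DIR = "logs/sr_run"  # hypothetical log directory
writer = tf.summary.create_file_writer(LOG_DIR)

def log_step(step, l1_loss, vgg_loss, gan_loss, total_loss, val_sr_batch=None):
    """Write one scalar per loss term (and optionally a validation image batch)."""
    with writer.as_default():
        tf.summary.scalar("loss/l1", l1_loss, step=step)
        tf.summary.scalar("loss/vgg", vgg_loss, step=step)
        tf.summary.scalar("loss/gan", gan_loss, step=step)
        tf.summary.scalar("loss/total", total_loss, step=step)
        if val_sr_batch is not None:
            # val_sr_batch: [batch, height, width, channels], values scaled to [0, 1]
            tf.summary.image("validation/sr", val_sr_batch, step=step, max_outputs=3)
        writer.flush()

# Inside the training loop, after computing the losses:
#   log_step(step, l1, vgg, gan, total, val_sr_batch=sr_preview)
# Then launch TensorBoard against the same directory:
#   tensorboard --logdir logs/sr_run
# and open the printed URL (by default http://localhost:6006) in a browser.
```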
I will improve the documentation soon!
Hi Remi!
I have managed to train a model with Sentinel-2 images in float32 and I am about to test the weights.
I have trained with 696 patches and 120 epochs (the default).
I have two questions:
and the result of running sr.py is the following.
I don't know whether it needs more epochs or more patches to train, or whether this is not the optimal image format for prediction.
Traceback (most recent call last):
File "sr.py", line 74, in
infer.ExecuteAndWriteOutput()
File "/work/otb/superbuild_install/lib/otb/python/otbApplication.py", line 2801, in ExecuteAndWriteOutput
return _otbApplication.Application_ExecuteAndWriteOutput(self)
RuntimeError: Exception thrown in otbApplication Application_ExecuteAndWriteOutput: /work/otb/otb/Modules/Remote/otbtf/include/otbTensorflowMultisourceModelFilter.hxx:480:
itk::ERROR: TensorflowMultisourceModelFilter(0x1af6e760): Error occured during tensor to image conversion.
Context: Output image buffered region: ImageRegion (0x7ffd9ed9faa0)
Dimension: 2
Index: [0, 0]
Size: [512, 512]
Input #0:
Requested region: ImageRegion (0x7ffd9ed9fad0)
Dimension: 2
Index: [0, 0]
Size: [32, 32]
Tensor shape ("lr_input": {1, 32, 32, 3}
User placeholders:
Error:
itk::ExceptionObject (0xb89f53f0)
Location: "unknown"
File: /work/otb/otb/Modules/Remote/otbtf/include/otbTensorflowCopyUtils.cxx
Line: 191
Description: itk::ERROR: Number of elements in the tensor is 0 but image outputRegion has 786432 values to fill.
Buffer region:
ImageRegion (0x7ffd9ed9fcc0)
Dimension: 2
Index: [0, 0]
Size: [512, 512]
Number of components: 3
Tensor shape:
{1, 0, 0, 3}
Please check the input(s) field of view (FOV), the output field of expression (FOE), and the output spacing scale if you run the model in fully convolutional mode (how many strides in your model?)
What format should the input images have for sr.py to predict correctly?
Thanks so much!!
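For reference, the "field of view (FOV)", "field of expression (FOE)" and "spacing scale" mentioned in the error above correspond, in OTBTF's TensorflowModelServe application, to the receptive field of the input source, the expression field of the output tensor, and the output pixel spacing scale. Below is a hedged sketch of how these are typically set from Python through otbApplication; the concrete values (32x32 input patch, 4x super-resolution giving a 128x128 output, the "lr_input"/"sr_output" tensor names, and the file paths) are assumptions for illustration and may differ from what sr.py actually does in this repository.

```python
# Hedged sketch only: serving a SavedModel with OTBTF's TensorflowModelServe.
# The receptive field (FOV), expression field (FOE) and spacing scale must be
# consistent with the model; the values below assume a 4x super-resolution
# model taking 32x32 patches, which may not match this repository's sr.py.
import otbApplication

infer = otbApplication.Registry.CreateApplication("TensorflowModelServe")
infer.SetParameterStringList("source1.il", ["sentinel2_float32.tif"])
infer.SetParameterString("source1.placeholder", "lr_input")  # must match the model input tensor
infer.SetParameterInt("source1.rfieldx", 32)   # input field of view (FOV)
infer.SetParameterInt("source1.rfieldy", 32)
infer.SetParameterString("model.dir", "path/to/savedmodel")
infer.SetParameterStringList("output.names", ["sr_output"])  # must match the model output tensor
infer.SetParameterInt("output.efieldx", 128)   # output field of expression (FOE)
infer.SetParameterInt("output.efieldy", 128)
infer.SetParameterFloat("output.spcscale", 0.25)  # output pixel spacing = 1/4 of the input
# (fully convolutional mode, tiling options, etc. are left at the application defaults here)
infer.SetParameterString("out", "sr_result.tif")
infer.ExecuteAndWriteOutput()
```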