FCN for Semantic Segmentation via Keras #3354
Comments
Hi @rmkemker, what version of Keras were you using? In Keras 1.0.0 or above, I can't import Deconvolution2D.
Hey @McVilla, it is version 1.0.6, but I had to build it from source (I don't think it has been migrated to the pip wheel yet). I rebuilt it this morning.
@rmkemker Any progress here with dynamic shape inference? I'm running into exactly the same problem with a slightly different model.
Same here. I tried the example from https://keras.io/layers/convolutional/#deconvolution2d and also got:
I suppose you are using TF to solve this. Instead of None, you need to use the exact batch size.
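The advice above (use a concrete batch size instead of None) relates to how the old Deconvolution2D layer required the full output shape up front. As a sketch of the shape arithmetic involved, here is the common output-length formula for a transposed convolution; the exact result can vary with a backend's padding convention, so treat this as an assumption, not the definitive Keras behavior:

```python
def deconv_output_length(input_length, kernel_size, stride, padding):
    """Output length along one axis of a transposed (de)convolution,
    using the common formula: out = stride * (in - 1) + kernel - 2 * pad.
    Padding conventions differ between backends, so verify against yours."""
    return stride * (input_length - 1) + kernel_size - 2 * padding

# A 7x7 feature map upsampled with a 4x4 kernel, stride 2, padding 1 -> 14x14
print(deconv_output_length(7, 4, 2, 1))  # 14
```

Because every dimension in the output_shape tuple (including the batch dimension) had to be a concrete integer in that layer, passing None triggered shape-inference errors like the ones in this thread.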
The answer is yes, but it involved writing my own layer. It worked, but I am now working on newer models that don't use Deconvolution2D. The models I am using now infer shape without any issue.
Are you using UpSampling followed by Convolution?
Yes I am. It is faster.
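The UpSampling-then-Convolution alternative mentioned here avoids transposed convolution entirely: Keras's UpSampling2D repeats rows and columns (nearest-neighbor), and a regular convolution then learns to smooth the result. A minimal NumPy sketch of the upsampling step (the function name is illustrative, not a Keras API):

```python
import numpy as np

def upsample_nearest(x, factor=2):
    """Nearest-neighbor upsampling of an (H, W, C) array by repeating
    each row and column `factor` times, mimicking what Keras's
    UpSampling2D does by default."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

x = np.arange(4, dtype=float).reshape(2, 2, 1)
y = upsample_nearest(x)
print(y.shape)  # (4, 4, 1)
```

Since nearest-neighbor upsampling has no parameters and needs no output_shape argument, this pattern sidesteps the static-shape requirement that caused the Deconvolution2D errors above.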
@rmkemker would it be possible to share your code? I am running into a similar problem while trying to implement the semantic segmentation paper.
@shehzaadzd
Hey, thanks! 😃
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
Also, if your model is fully convolutional you can use images of any size, but within a batch the images should all be the same size, and training with batch_size=1 can be slower.
@rmkemker can you share your code? How did you solve it?
I have seen a few iterations of people trying to solve this problem, but I haven't seen a satisfactory solution. I am using the TensorFlow backend of Keras to recreate the work found here. I am new to this arena, so I am having trouble grasping what the authors did to train their network on VOC 2012. All of the training images are different sizes, and when I look at their paper/code, I am unable to see how they account for the varying input image size. Below is an attempt I made to build part of the desired network (inspired by the authors' Caffe code):
However, I get "TypeError: Expected binary or unicode string, got (Dimension(None), Dimension(None), Dimension(None), Dimension(3))" in the Deconvolution2D layer. Is there a better way to allow varying dimensions in the training data? Should I just pad the training images to 500x500? The authors say that they pass the entire image rather than patches of the image. Thank you in advance.
EDIT: I found that the new Deconvolution2D layer was recently added to Keras. I modified my code to use it instead and got the new error listed above.
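The padding idea floated in the question (pad every training image to 500x500, the figure mentioned above) can be sketched in NumPy; the helper name and the zero-padding choice are assumptions for illustration, not the authors' method:

```python
import numpy as np

def pad_to(img, target_h=500, target_w=500):
    """Zero-pad an (H, W, C) image on the bottom and right to a fixed
    target size. Assumes the image is no larger than the target."""
    h, w = img.shape[:2]
    return np.pad(img, ((0, target_h - h), (0, target_w - w), (0, 0)),
                  mode='constant')

img = np.ones((375, 500, 3))
print(pad_to(img).shape)  # (500, 500, 3)
```

If the segmentation labels are padded the same way, the padded region is usually masked out of the loss so the network is not trained to predict the padding.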