Thank you for your awesome work.
It should be greatly helpful to everyone interested in MTL.
I'm about to start studying MTL and have one question.
I think a dataset for MTL should have the form {Input: X(i), GT: Y_task1(i), Y_task2(i), ..., Y_taskT(i)}.
However, this condition seems difficult to satisfy in a real-world setting.
When we have to train on task-specific datasets D_task1 = {Input: X_task1, GT: Y_task1} and D_task2 = {Input: X_task2, GT: Y_task2} simultaneously, how do we do MTL?
For example, suppose we want to set up MTL for both salient object detection and depth estimation.
For the salient object detection task, we use saliency labels from the PASCAL VOC dataset.
For the depth estimation task, we use depth-map labels from the NYUD dataset.
(The two datasets consist of entirely different input images; PASCAL VOC has no depth-map labels and NYUD has no saliency labels.)
Under this condition, how do we construct the MTL setup?
Does anyone know of MTL methods for task-specific datasets, or related works?
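To make the two data regimes in the question concrete, here is a small hypothetical illustration (not from this repo; all names and tensor shapes are assumptions): a fully multi-labeled sample carries ground truth for every task, whereas the PASCAL/NYUD setting gives two disjoint single-task samples.

```python
# Hypothetical illustration of the two data regimes described above.
# Shapes and dict keys are assumptions, not the repo's actual format.
import torch

# Fully multi-labeled sample: one image X(i) with GT for every task.
multi_labeled_sample = {
    "image": torch.randn(3, 64, 64),
    "labels": {
        "saliency": torch.rand(1, 64, 64),  # Y_task1(i)
        "depth": torch.rand(1, 64, 64),     # Y_task2(i)
    },
}

# Disjoint single-task samples: each image has GT for only one task,
# e.g. saliency from PASCAL and depth from NYUD, with no image overlap.
pascal_sample = {"image": torch.randn(3, 64, 64),
                 "labels": {"saliency": torch.rand(1, 64, 64)}}
nyud_sample = {"image": torch.randn(3, 64, 64),
               "labels": {"depth": torch.rand(1, 64, 64)}}
```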
I also want to ask how you obtained the saliency labels for the PASCAL dataset. The PASCAL-S dataset only has 850 images, but your saliency dataset has 10,500 images. Thank you.
UberNet is not open source, and in its training strategy the ground truth of a missing task is not set to 0 in the loss. In practice, the different datasets are trained separately and sequentially, right?
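A minimal sketch of this kind of alternating, single-task-per-batch training (a hypothetical illustration, not the actual UberNet or repo code; the toy datasets, shapes, and layers are assumptions): batches from the two single-task datasets are interleaved, and only the loss of the task whose labels exist in a given batch is back-propagated through the shared encoder, so no ground truth is ever zeroed out for the missing task.

```python
import itertools
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for PASCAL saliency and NYUD depth data (random tensors).
sal_data = TensorDataset(torch.randn(64, 3, 64, 64), torch.rand(64, 1, 64, 64))    # image, saliency mask
depth_data = TensorDataset(torch.randn(64, 3, 64, 64), torch.rand(64, 1, 64, 64))  # image, depth map
sal_loader = DataLoader(sal_data, batch_size=8, shuffle=True)
depth_loader = DataLoader(depth_data, batch_size=8, shuffle=True)

# Shared encoder with one lightweight head per task.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
heads = nn.ModuleDict({
    "saliency": nn.Conv2d(16, 1, 1),
    "depth": nn.Conv2d(16, 1, 1),
})
losses = {"saliency": nn.BCEWithLogitsLoss(), "depth": nn.L1Loss()}

params = list(encoder.parameters()) + list(heads.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)

for epoch in range(2):
    # Interleave batches from the two single-task loaders.
    mixed = itertools.chain.from_iterable(
        zip(((x, y, "saliency") for x, y in sal_loader),
            ((x, y, "depth") for x, y in depth_loader)))
    for images, targets, task in mixed:
        optimizer.zero_grad()
        features = encoder(images)
        preds = heads[task](features)
        # Only the task whose labels exist in this batch contributes a loss;
        # the other task's ground truth is neither fabricated nor set to 0.
        loss = losses[task](preds, targets)
        loss.backward()
        optimizer.step()
```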