This is the code for my paper:
Dual Triplet Network for Image Zero-Shot Learning (https://www.sciencedirect.com/science/article/pii/S092523121931330X)
If you can't download the paper, you can contact me.
Email: [email protected]
Fig. 1. Diagram of the proposed framework. First, DTNet maps the attribute features into the visual space with the mapping network. Then, it employs two triplet networks to learn the visual-semantic alignment.
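The mapping network in Fig. 1 projects class attribute vectors into the visual feature space before any triplet comparison. Below is a minimal sketch of such a mapping network, assuming PyTorch; the layer sizes, hidden dimension, and ReLU activation are illustrative choices, not the paper's exact configuration.

```python
# Hypothetical sketch of the attribute-to-visual mapping network (dimensions are assumptions).
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Maps attribute (semantic) features into the visual feature space."""
    def __init__(self, attr_dim=85, vis_dim=2048, hidden_dim=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, vis_dim),
        )

    def forward(self, attr):
        # attr: (batch, attr_dim) class-attribute vectors
        # returns: (batch, vis_dim) features mapped into the visual space
        return self.net(attr)
```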
Fig. 2. Illustration of the proposed DTNet, showing in detail the composition and inputs of the AOTN and VOTN. Note that the two metric networks share parameters.
Fig. 3. Illustration of DTNet-WHTM and DTNet. Different colors represent different categories. The pentagram denotes the attribute features, and the triangle denotes the visual features. Features from different modalities but the same category are forced to be close, while those from different categories are forced to stay apart.
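The sketch below shows one way the two triplet objectives suggested by Figs. 2 and 3 could be written, assuming PyTorch. It anchors the attribute-oriented triplet (AOTN) on a mapped attribute feature and the visual-oriented triplet (VOTN) on a visual feature, and uses a plain Euclidean triplet margin loss in place of the shared learned metric network shown in Fig. 2; this pairing, the margin value, and the distance are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a dual triplet objective; the AOTN/VOTN pairing and margin are assumed.
import torch
import torch.nn.functional as F

def dual_triplet_loss(mapped_attr_anchor, vis_pos, vis_neg,
                      vis_anchor, mapped_attr_pos, mapped_attr_neg,
                      margin=1.0):
    # AOTN-style term: pull same-class visual features toward the mapped attribute,
    # push different-class visual features away from it.
    aotn = F.triplet_margin_loss(mapped_attr_anchor, vis_pos, vis_neg, margin=margin)
    # VOTN-style term: pull the same-class mapped attribute toward the visual feature,
    # push different-class mapped attributes away from it.
    votn = F.triplet_margin_loss(vis_anchor, mapped_attr_pos, mapped_attr_neg, margin=margin)
    return aotn + votn
```

All six inputs are (batch, vis_dim) tensors, with the attribute-side tensors produced by the mapping network sketched above, so both triplets are computed in the shared visual space.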