Abstract: Existing cross-modal retrieval methods are mainly constrained to the bimodal case. When applied to the multi-modal case, we need to train O(K^2) (K: number of modalities) separate models, which is inefficient and unable to exploit...
| Authors | Hongchang Wu, Wei Zhao, Ziyu Guan, Cai Xu, Tao Zhi, Hong Han, Yaming Yang |
|---|---|
| Author affiliations | |
| Proceedings title | 2019 IEEE International Conference on Big Knowledge |
| Publication year | 2019 |
| Publisher / Place | Institute of Electrical and Electronics Engineers / Piscataway |
| Conference name | IEEE International Conference on Big Knowledge |
| Start page / Total pages | 265 / 8 |
| Conference dates / Location | Nov. 10–11, 2019 / Beijing |
| Conference year / Edition | 2019 / 10th |
| CLC number | TP18-53 |
| Keywords | Cross-modal retrieval; Graph attention; Self attention; Generative adversarial network |
| Accession number | N2020070300133506 |