Residential College | false
Status | Published
Title | Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation
Author | Hanqing Wang [1,2]; Wei Liang [1]; Jianbing Shen [3]; Luc Van Gool [2]; Wenguan Wang [4]
Issue Date | 2022
Conference Name | IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) |
Source Publication | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
Volume | 2022-June |
Pages | 15450-15460 |
Conference Date | 18-24 June 2022 |
Conference Place | New Orleans, LA, USA |
Abstract | Since the rise of vision-language navigation (VLN), great progress has been made in instruction following - building a follower to navigate environments under the guidance of instructions. However, far less attention has been paid to the inverse task: instruction generation - learning a speaker to generate grounded descriptions for navigation routes. Existing VLN methods train a speaker independently and often treat it as a data augmentation tool to strengthen the follower, while ignoring rich cross-task relations. Here we describe an approach that learns the two tasks simultaneously and exploits their intrinsic correlations to boost the training of each: the follower judges whether the speaker-created instruction explains the original navigation route correctly, and vice versa. Without the need for aligned instruction-path pairs, such a cycle-consistent learning scheme is complementary to task-specific training targets defined on labeled data, and can also be applied over unlabeled paths (sampled without paired instructions). Another agent, called the creator, is added to generate counterfactual environments. It greatly changes current scenes yet leaves novel items - which are vital for the execution of the original instructions - unchanged. Thus, more informative training scenes are synthesized, and the three agents compose a powerful VLN learning system. Extensive experiments on a standard benchmark show that our approach improves the performance of various follower models and produces accurate navigation instructions.
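As a reading aid for the abstract's cycle-consistent scheme, below is a minimal, illustrative Python sketch of the training signal on unlabeled paths: the speaker describes a route, the follower re-executes the description, and their agreement supervises both agents. The Speaker, Follower, and agreement names are hypothetical stand-ins, not the paper's models; the counterfactual creator and the actual parameter updates are omitted.

# Minimal sketch of the cycle-consistency signal described in the abstract.
# All classes and functions below are placeholder stand-ins, not the authors' code.
import random
from typing import List

Path = List[int]          # a route as a sequence of discrete viewpoint ids
Instruction = List[str]   # a navigation instruction as a token sequence

class Speaker:
    """Stand-in instruction generator: path -> instruction."""
    def generate(self, path: Path) -> Instruction:
        return [f"go_{p}" for p in path]  # placeholder "language"

class Follower:
    """Stand-in instruction follower: instruction -> path."""
    def follow(self, instruction: Instruction) -> Path:
        # Placeholder policy: decode the viewpoint id back out of each token,
        # with occasional random errors to mimic an imperfect agent.
        return [int(tok.split("_")[1]) if random.random() > 0.1 else -1
                for tok in instruction]

def agreement(original: Path, reconstructed: Path) -> float:
    """Cycle-consistency score: fraction of the route the follower recovered."""
    matches = sum(a == b for a, b in zip(original, reconstructed))
    return matches / max(len(original), 1)

if __name__ == "__main__":
    random.seed(0)
    speaker, follower = Speaker(), Follower()
    # Unlabeled paths, i.e. routes sampled without paired instructions.
    unlabeled_paths = [[3, 7, 1, 9], [2, 2, 5], [8, 4, 6, 0, 1]]
    for path in unlabeled_paths:
        instr = speaker.generate(path)      # speaker explains the route
        replayed = follower.follow(instr)   # follower re-executes the instruction
        reward = agreement(path, replayed)  # mutual training signal for both agents
        print(f"path={path} cycle-agreement={reward:.2f}")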
Keyword | Vision + Language |
DOI | 10.1109/CVPR52688.2022.01503 |
Indexed By | CPCI-S |
Language | English
WOS Research Area | Computer Science ; Imaging Science & Photographic Technology |
WOS Subject | Computer Science, Artificial Intelligence ; Imaging Science & Photographic Technology |
WOS ID | WOS:000870783001026 |
Scopus ID | 2-s2.0-85135456456 |
Document Type | Conference paper |
Collection | THE STATE KEY LABORATORY OF INTERNET OF THINGS FOR SMART CITY (UNIVERSITY OF MACAU) |
Corresponding Author | Wenguan Wang |
Affiliation | 1. Beijing Institute of Technology; 2. ETH Zurich; 3. SKL-IOTSC, University of Macau; 4. ReLER, AAII, University of Technology Sydney
Recommended Citation GB/T 7714 | Hanqing Wang, Wei Liang, Jianbing Shen, et al. Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation[C], 2022, 15450-15460.
APA | Wang, H., Liang, W., Shen, J., Van Gool, L., & Wang, W. (2022). Counterfactual Cycle-Consistent Learning for Instruction Following and Generation in Vision-Language Navigation. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2022-June, 15450-15460.