Commit
Update conference name: NIPS-> NeurIPS
ronghanghu committed Nov 28, 2018
1 parent 389ee02 commit 7393982
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -2,12 +2,12 @@

This repository contains the code for the following paper:

-* D. Fried*, R. Hu*, V. Cirik*, A. Rohrbach, J. Andreas, L.-P. Morency, T. Berg-Kirkpatrick, K. Saenko, D. Klein**, T. Darrell**, *Speaker-Follower Models for Vision-and-Language Navigation*. in NIPS, 2018. ([PDF](https://arxiv.org/pdf/1806.02724.pdf))
+* D. Fried*, R. Hu*, V. Cirik*, A. Rohrbach, J. Andreas, L.-P. Morency, T. Berg-Kirkpatrick, K. Saenko, D. Klein**, T. Darrell**, *Speaker-Follower Models for Vision-and-Language Navigation*. in NeurIPS, 2018. ([PDF](https://arxiv.org/pdf/1806.02724.pdf))
```
@inproceedings{fried2018speaker,
title={Speaker-Follower Models for Vision-and-Language Navigation},
author={Fried, Daniel and Hu, Ronghang and Cirik, Volkan and Rohrbach, Anna and Andreas, Jacob and Morency, Louis-Philippe and Berg-Kirkpatrick, Taylor and Saenko, Kate and Klein, Dan and Darrell, Trevor},
-  booktitle={Advances in Neural Information Processing Systems (NIPS)},
+  booktitle={Neural Information Processing Systems (NeurIPS)},
year={2018}
}
```
@@ -19,7 +19,7 @@ Project Page: https://ronghanghu.com/speaker_follower

If you only want to use our data augmentation on the R2R dataset but don't need our models, you can directly download our augmented data on R2R (JSON file containing synthetic data generated by our speaker model) [here](https://people.eecs.berkeley.edu/~ronghang/projects/speaker_follower/data_augmentation/R2R_literal_speaker_data_augmentation_paths.json). This JSON file is in the same format as the original R2R dataset, with one synthetic instruction per sampled new trajectory.

-*Note that we first train on the combination of the original + augmented data, and then fine-tuned on the original training data.*
+*Note that we first trained on the combination of the original and the augmented data, and then fine-tuned on the original training data.*
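Since the augmented JSON shares the original R2R format, the two-phase setup above amounts to concatenating the two trajectory lists for phase 1 and reusing the original split alone for fine-tuning. A minimal sketch, assuming the augmented file name from the download link and a hypothetical `R2R_train.json` for the original training split:

```python
import json

# File names are assumptions: the augmented file comes from the download
# link above; "R2R_train.json" stands in for the original R2R train split.
AUGMENTED = "R2R_literal_speaker_data_augmentation_paths.json"
ORIGINAL = "R2R_train.json"

def load_r2r(path):
    """Load an R2R-format JSON file: a list of trajectory entries."""
    with open(path) as f:
        return json.load(f)

def combined_training_set(original, augmented):
    """Phase 1: train on original + speaker-augmented trajectories.
    Phase 2 (fine-tuning) reuses `original` alone."""
    return original + augmented
```

Usage would be `combined_training_set(load_r2r(ORIGINAL), load_r2r(AUGMENTED))` for the first phase, then `load_r2r(ORIGINAL)` for fine-tuning; this is a sketch of the data handling only, not the training loop.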

## Installation

