NLP systems are typically trained and evaluated in “clean” settings, on data without significant noise. However, systems deployed in the real world must cope with vast amounts of noise. At GMU NLP we work towards making NLP systems more robust to several types of noise, whether adversarial or naturally occurring.
- Fine-Tuning MT systems for Robustness to Second-Language Speaker Variations
- Neural Machine Translation of Text from Non-Native Speakers
- Improving Robustness of Neural Machine Translation with Multi-task Learning
- Findings of the First Shared Task on Machine Translation Robustness
- An Analysis of Source-Side Grammatical Errors in NMT
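To make the idea of “naturally occurring” noise concrete, here is a minimal sketch of synthetic character-level noising, a common way to simulate typo-like input for robustness training or evaluation. This is an illustrative example, not the method of any paper above; the function name and perturbation choices (drop, duplicate, swap) are our own assumptions.

```python
import random

def add_char_noise(text, p=0.1, seed=0):
    """Simulate typo-like noise: each alphabetic character is,
    with probability p, dropped, duplicated, or swapped with
    its right neighbor. A fixed seed keeps the output reproducible."""
    rng = random.Random(seed)
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        c = chars[i]
        if c.isalpha() and rng.random() < p:
            op = rng.choice(["drop", "dup", "swap"])
            if op == "drop":
                pass  # omit this character entirely
            elif op == "dup":
                out.extend([c, c])  # double the character
            elif i + 1 < len(chars):
                out.extend([chars[i + 1], c])  # swap with neighbor
                i += 1
            else:
                out.append(c)  # swap impossible at end of string
        else:
            out.append(c)
        i += 1
    return "".join(out)
```

A noiser like this can be applied to the source side of clean parallel data to create noisy training examples, or to a clean test set to probe how quickly a model's quality degrades as the noise rate `p` grows.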