Adversarial Evaluation for NLP Models

Hosted By
Greg B. and Lee B.

Details
Amber Wilcox-O'Hearn will lead a discussion of Noah Smith's white paper, "Adversarial Evaluation for Models of Natural Language": http://arxiv.org/abs/1207.0245
Additionally, Amber will give a demo of her work on Malaprop, which introduces errors into a corpus as a means of adversarial evaluation. She has a great summary on her blog, and the code is on GitHub (https://github.com/lamber/malaprop); a rough sketch of the general idea follows below.
Even better, take the opportunity to run the code beforehand and bring your questions.
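
To give a flavor of the corpus-corruption idea before the meetup, here is a minimal, hypothetical sketch in Python. It is not Malaprop's actual algorithm; the helpers corrupt_token and corrupt_corpus are made up for illustration, and they simply inject random character-level errors into a fraction of tokens so a model can be evaluated on clean versus corrupted text.

```python
# Minimal sketch of corpus corruption for adversarial evaluation.
# NOT Malaprop's actual implementation; corrupt_token/corrupt_corpus
# are hypothetical helpers written for illustration only.
import random
import string

def corrupt_token(token, rng):
    """Apply one random character-level edit (insert, delete, or substitute)."""
    if not token:
        return token
    i = rng.randrange(len(token))
    op = rng.choice(["insert", "delete", "substitute"])
    if op == "insert":
        return token[:i] + rng.choice(string.ascii_lowercase) + token[i:]
    if op == "delete" and len(token) > 1:
        return token[:i] + token[i + 1:]
    return token[:i] + rng.choice(string.ascii_lowercase) + token[i + 1:]

def corrupt_corpus(text, error_rate=0.05, seed=0):
    """Corrupt roughly error_rate of the whitespace-separated tokens."""
    rng = random.Random(seed)
    corrupted = [
        corrupt_token(t, rng) if rng.random() < error_rate else t
        for t in text.split()
    ]
    return " ".join(corrupted)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(corrupt_corpus(sample, error_rate=0.3, seed=42))
```

Comparing a model's scores on the original text and on the corrupted version gives a crude adversarial evaluation; Malaprop itself is more careful about the kinds of errors it introduces, which is exactly what the demo will cover.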

Front Range NLP (Natural Language Processing)