September: I've left Microsoft Research and am now joining Google Research.
Early September: Was offered the amazing opportunity of being a Senior Research Scientist in Google's Machine Intelligence group.
Early August: Our former intern Ishan Misra did a fantastic job explaining the reporting bias model on the Data Skeptic podcast!
Late July: Helped advance the clinical state of the art in a one-of-a-kind workshop on Detecting Risk and Protective Factors of Mental Health using Social Media Linked with Electronic Health Records, part of the JSALT Workshop series. Definitely worth plugging in for a couple of weeks.
CVPR 2016: Presented the first computational model of reporting bias! Use variations on this one handy trick to significantly improve captioning, detection, and more!

Gave an invited talk on vision-to-language generation at the VQA Challenge! [photo]
NAACL 2016: Presented the storytelling work to the masses!
June:My kinda silly "sea of dudes" phrase, which I use to feel more relaxed as I go up and present, is taking the world by storm! Thanks, Bloomberg!
April: People are interested in both the Seeing AI work and the Visual Storytelling work!
Here are some articles on Seeing AI from Fast Company, MIT Technology Review, and Microsoft.
Here are some articles on the storytelling work from Live Science, Venture Beat, and Microsoft. The Live Science journalist was great; he was deeply concerned with how these systems can be evaluated, and how you know when you're doing well.
March 30: Satya unveiled our project for assisting the visually impaired, Seeing AI. Check out the video here!
CVPR 2015: Our image captioning system won first place in the COCO Captioning Challenge, voted the most human-like! Let's hear it for thinking deeply about what kind of evaluation to focus on, and the power of logistic regression used well!

About Me

I was a founding researcher in Microsoft's Cognition Group, where I focused on advancing artificial intelligence towards positive goals.
I work on vision-to-language and grounded language generation, focusing on how to help computers communicate based on what they can process.
My work combines computer vision, natural language processing, social media, many statistical methods, and insights from cognitive science.

Before MSR, I was a postdoctoral researcher at the Johns Hopkins University Human Language Technology Center of Excellence, where I mainly focused on semantic role labeling and sentiment analysis using graphical models, working under Benjamin Van Durme.

Before that, I was a postgraduate (PhD) student in the natural language generation (NLG) group at the University of Aberdeen, where I focused on how to naturally refer to visible, everyday objects. I primarily worked with Kees van Deemter and Ehud Reiter.

I spent a good chunk of 2008 getting a Master's in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia.

In parallel (2005–2012), I worked on and off at the Center for Spoken Language Understanding, part of OHSU in Portland, Oregon. My title changed with time (research assistant/associate/visiting scholar), but throughout, I worked on technology that leverages syntactic and phonetic characteristics to aid those with neurological disorders.
Brian Roark was my boss/mentor/supervisor.
I would not be in NLP if it were not for him encouraging me as an intelligent individual with ideas worth pursuing.

I continue to balance my time between language generation, applications for clinical domains, and core AI research.