Updates · About Me


Updates

1.Sep.2017: The first workshop on Women and Underrepresented Minorities in NLP had over 90 submissions and over 150 participants! This has given rise to a new initiative, a Broad Interest Group on diversity and inclusion in the ACL, which I'm privileged to be helping with.
12.July.2017: Seeing AI released! Our CVPR 2015 first-place image captioning system started out on one slow, disorganized computer; it was streamlined and brought into Microsoft Cognitive Services so that the work could leverage others' work and others could leverage it, and carried forward to help people who are blind. Now in the hands of the users!!
6.May.2017: Pushed out an opinion piece for the first time -- on automating discrimination with machine learning. We focused on the task of 'physiognomy', interpreting people's character from their faces, which is relevant to emerging technology in computer vision.
The response has been great: endorsed by the ACLU, and already translated into other languages. The piece, Physiognomy's New Clothes, is with my manager Blaise Agüera y Arcas and Princeton professor Alexander Todorov.
21.April.2017: The Bill Nye Saves the World show is out!! I'm on Episode 3, "Machines Take Over The World", talking about my research in vision and language towards positive goals.
25.March.2017: Ethics in NLP has hit the big time with coverage in Süddeutsche Zeitung!
ACL 2017: Come check out the workshop I'm co-organizing, Women and Underrepresented Minorities in NLP!
EACL 2017: Paper accepted on predicting suicide risk using multitask deep learning! Also, come check out the workshop I'm co-organizing, Ethics in NLP!
6.Feb.2017: New article on using Storytelling to advance Artificial Intelligence.
NIPS 2016: At WiML, my former intern Spandana Gella is introducing our work from the summer: the first computational model for vision-to-language built specifically for people who are blind. This is one of the most important perspectives to consider if you're interested in vision-language research.
Early November 2016: Honored to begin a new position as a Senior Research Scientist at Google Research and Machine Intelligence.
Late October 2016: I have left Microsoft Research. Please invite me out for cookies and beers; I could use it.
October 2016: I had the great privilege of being an AI expert on the new Bill Nye show!
September 2016: The Visual Question Answering work is taking the world by storm.
Early September 2016: Was offered the amazing opportunity of being a Senior Research Scientist in Google's Machine Intelligence group.
Late August 2016: Our Seeing AI project has over 1.7 million views on Buzzfeed. =O
Microsoft has let me know that I am not making enough impact, and promoted the men around me who helped with the project.
Early August 2016: Our former intern Ishan Misra did a fantastic job explaining the reporting bias model on the Data Skeptic podcast!
Late July 2016: Helped advance clinical state-of-the-art in a one-of-a-kind workshop on Detecting Risk and Protective Factors of Mental Health using Social Media Linked with Electronic Health Records, part of the JSALT Workshop series. Definitely worth plugging in for a couple weeks.
CVPR 2016: Presented the first computational model for reporting bias! Use variations on this one handy trick to significantly improve captioning, detection, etc. etc.!

Gave an invited talk on vision-to-language generation at the VQA Challenge!
NAACL 2016: Presented the storytelling work to the masses!
June 2016: My kinda silly "sea of dudes" phrase, which I use to feel more relaxed as I go up and present, is taking the world by storm! Thanks, Bloomberg!
April 2016: People are interested in both the Seeing AI work and the Visual Storytelling work!
Here are some articles on Seeing AI from Fast Company, MIT Technology Review, and Microsoft.
Here are some articles on the storytelling work from Live Science, Venture Beat, and Microsoft. The Live Science journalist was great; he was deeply concerned with how these systems can be evaluated, and how you know when you're doing well.
30.March.2016: Satya unveiled our project for assisting the visually impaired, Seeing AI. Check out the video here!
CVPR 2015: Our image captioning system won first place in the COCO captioning challenge, voted to be the most human-like! Let's hear it for thinking deeply about what kind of evaluation to focus on, and the power of logistic regression used well!


About Me

I am a Senior Research Scientist in Google's Research & Machine Intelligence group, working on artificial intelligence.

My research generally involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. This includes research on helping computers to communicate based on what they can process, as well as projects to create assistive and clinical technology from the state of the art in AI.

My work combines computer vision, natural language processing, social media, many statistical methods, and insights from cognitive science.

In a nutshell, I've worked on:
  • deep learning, structured learning, shallow learning, and probabilistic systems (Math)
  • natural language generation, referring expression generation, reference to visible objects, conversation, image captioning, visual question answering, and storytelling (Grounded Language)
  • dialogue assistance for people who are non-verbal (Cerebral Palsy and Autism), visual descriptions for people who are blind, automatic diagnosis/monitoring of Mild Cognitive Impairment (a precursor to Alzheimer's), Parkinson's, Apraxia, Autism, Depression, Post-Traumatic Stress Disorder, Suicide Risk, and Schizophrenia (Assistive and Clinical Technology)



Before Google, I was a founding member of Microsoft Research's "Cognition" group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Before MSR, I was a postdoctoral researcher at The Johns Hopkins University Center of Excellence, where I focused on structured prediction, semantic role labeling, and sentiment analysis, working under Benjamin Van Durme.

Before that, I was a postgraduate (PhD) student in the natural language generation (NLG) group at the University of Aberdeen, where I focused on how to naturally refer to visible, everyday objects. I primarily worked with Kees van Deemter and Ehud Reiter.

I spent a good chunk of 2008 getting a Master's in Computational Linguistics at the University of Washington, studying under Emily Bender and Fei Xia.

Simultaneously (2005 - 2012), I worked on and off at the Center for Spoken Language Understanding, part of OHSU, in Portland, Oregon. My title changed with time (research assistant/associate/visiting scholar), but throughout, I worked on technology that leverages syntactic and phonetic characteristics to aid those with neurological disorders. Brian Roark was my boss/mentor/supervisor; I would not be in NLP if it were not for him encouraging me as an intelligent individual with ideas worth pursuing.



I continue to balance my time between language generation, applications for clinical domains, and core AI research.