What are the most interesting projects at Facebook AI Research? originally appeared on Quora - the knowledge sharing network where compelling questions are answered by people with unique insights.

Answer by Serkan Piantino, Director of Engineering, Facebook AI Research; Site Director, Facebook NY, on Quora:

There's a tremendous amount of stuff going on, but I will try to hit some highlights and point to some general themes that are exciting.

First, in perception we can build systems that understand photos, videos, voice, sound, etc. For example, here's a demo we put together of a system that can answer questions about a photo scene, and here are some sample images from our recent submission to the MSCOCO challenge for segmenting and labeling objects:

[Images: sample segmentation and labeling results from our MSCOCO submission]

This kind of thing, doing human perception tasks like labeling or answering questions on high-dimensional inputs like pixels, frames of video, or audio samples, has been blown open in the last few years, largely thanks to techniques like Convolutional Neural Nets, which my colleague Yann LeCun invented a long time ago. Progress has been accelerated by the ability to crunch huge amounts of data on GPUs (like our rigs). This is exciting because the field is moving so fast, and it opens up a whole universe of products that understand content and intent on Facebook that never could have existed before.
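To make the idea concrete, here's a minimal sketch of the convolution, nonlinearity, and pooling pattern at the heart of a ConvNet, on a 1-D signal for brevity. Real systems stack many such layers over 2-D images and learn the filters from data; the hand-picked filter below is just for illustration.

```python
def conv1d(signal, kernel):
    """Slide a filter over the input, producing a feature map."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Nonlinearity applied elementwise after each convolution."""
    return [max(0.0, x) for x in xs]

def max_pool(xs, size=2):
    """Downsample by keeping the strongest response in each window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

signal = [0.0, 1.0, 3.0, 1.0, 0.0, -1.0, -3.0, -1.0]
edge_detector = [1.0, -1.0]  # hand-picked filter; a real CNN learns these

features = max_pool(relu(conv1d(signal, edge_detector)))
print(features)  # -> [0.0, 2.0, 2.0]
```

Stacking layers like this lets the network build up from edges to textures to whole objects, which is what makes labeling and segmentation tractable.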

We've started to turn these networks around and have them imagine scenes or images from a description instead of the other way around. Here's some work by FAIRy Soumith Chintala, in collaboration with indico Research in Boston, that can generate imagined bedroom scenes (below):

[Image: generated bedroom scenes]

and has been adapted to generate faces, album covers, flowers, Chinese characters, or even manga.
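This kind of image generation pits a generator network against a discriminator in adversarial training: the generator tries to produce samples the discriminator can't tell from real ones. Here's a toy numeric sketch of that objective, using hypothetical one-parameter stand-ins for the deep networks involved:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy one-parameter "networks": the generator maps noise to a sample,
# the discriminator scores how real a sample looks (probability in (0, 1)).
def generator(z, w_g):
    return w_g * z

def discriminator(x, w_d):
    return sigmoid(w_d * x)

# Generator loss: push D(fake) toward 1, i.e. fool the discriminator.
def g_loss(z, w_g, w_d):
    return -math.log(discriminator(generator(z, w_g), w_d))

noise, w_d = 1.0, 1.0
poor = g_loss(noise, w_g=0.1, w_d=w_d)    # output looks fake, high loss
better = g_loss(noise, w_g=2.0, w_d=w_d)  # output fools D, lower loss
print(poor > better)  # -> True
```

Training alternates updates to both players; at equilibrium the generator's outputs, like the bedrooms above, become hard to distinguish from real images.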

The next thing I would highlight is that we're giving our networks abilities that look like human memory and reasoning. The automated pieces of the Facebook M project are powered by Memory Networks, a technique published by FAIRy Jason Weston. To process a longer sequence of dialogue, we have to understand what each piece of text means, but we also need to store and access what has happened so we can refer to it later in the conversation. Memory Networks are part of a class of techniques for building networks that can learn to store and reference facts over the course of time. Where our more traditional, stateless networks look like simple circuits that learn to wire themselves over time, these networks start to look like full, self-wiring processors. Here's a demo of such a network reading a story and answering questions about it afterward.
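The core memory-addressing step can be sketched in a few lines: score each stored fact against the question, normalize the scores with a softmax, and read out a weighted blend of the memories. A real Memory Network learns the embeddings; the one-hot vectors below are hand-made toy examples.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def read_memory(question, memories):
    """Attend over stored facts: softmax of relevance scores, then blend."""
    weights = softmax([dot(question, m) for m in memories])
    return [sum(w * m[i] for w, m in zip(weights, memories))
            for i in range(len(question))]

# Toy embeddings for two stored facts and one question.
memories = [
    [1.0, 0.0, 0.0],  # e.g. "John went to the kitchen."
    [0.0, 1.0, 0.0],  # e.g. "Mary picked up the ball."
]
question = [2.0, 0.0, 0.0]  # e.g. "Where is John?" (closest to fact 1)

answer_vector = read_memory(question, memories)
print(answer_vector)  # dominated by the first memory
```

Because the whole read-out is differentiable, the network can be trained end to end to decide which facts to attend to when answering a question.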

Finally, we are learning a lot from Facebook itself. Having a network of 1.55 billion people connecting to each other and discovering the world around them gives us a tremendous opportunity to learn about humanity. As an engineer, building systems that scale to, and learn from, the entire Facebook social graph is hugely exciting, and it's something that can happen in just one place in the world: here. We've got a long way to go, but we hope to learn an enormous amount about humanity through the work we do predicting and understanding the Facebook social graph.

This is by no means a comprehensive list of all the interesting research we're doing, but these are some highlights that I personally really love.
