We are building a unified algorithmic architecture to achieve human-level intelligence in vision, language, and motor control. Currently, we are focused on visual perception problems, like recognition, segmentation, and scene parsing. We are interested in general solutions that work well across multiple sensory domains and tasks.
Using inductive biases drawn from neuroscience, our system requires orders of magnitude less training data than traditional machine learning techniques do. Our underlying framework combines the advantages of deep architectures and generative probabilistic models. We use modern software engineering practices, and we strive to maintain a codebase and a culture that are a joy to work in.