Monthly Archives: November 2014

clclab @the NIPS Deep Learning Workshop

Our paper “Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition” was accepted for the NIPS 2014 Workshop on Deep Learning and Representation Learning.

Here is the abstract:

Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition

Phong Le & Willem Zuidema

The Recursive Neural Network (RNN) model and its extensions have been shown to be powerful tools for semantic composition, with successes in many natural language processing (NLP) tasks. However, in this paper we argue that the RNN model is restricted to a subset of NLP tasks where semantic compositionality plays a role. We propose an extension called Inside-Outside Semantics. In our framework, every node in a parse tree is associated with a pair of representations: an inner representation capturing the content under the node, and an outer representation capturing its surrounding context. We demonstrate how this allows us to develop neural models for a much broader class of NLP tasks, and for supervised as well as unsupervised learning. Our neural-net model, the Inside-Outside Recursive Neural Network, performs on par with or better than state-of-the-art (neural) models on word prediction, phrase-similarity judgments and semantic role labelling.
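
To make the inner/outer idea concrete, here is a minimal sketch (not the paper's implementation) of how the two representations could be computed over a binary parse tree: inner vectors are composed bottom-up from the children, and outer vectors top-down from the parent's context and the sibling's content. The weight matrices W_inner and W_outer, the tanh compositions, and the toy tree are all illustrative assumptions.

```python
# Minimal inside-outside sketch: every tree node gets an inner vector
# (content, bottom-up) and an outer vector (context, top-down).
# Weights and compositions are illustrative, not the paper's model.
import numpy as np

DIM = 4
rng = np.random.default_rng(0)

# Hypothetical composition weights: inner maps children -> parent,
# outer maps (parent context, sibling content) -> child context.
W_inner = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
W_outer = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
root_outer = np.zeros(DIM)  # empty context at the root

class Node:
    def __init__(self, word=None, left=None, right=None):
        self.word, self.left, self.right = word, left, right
        self.inner = None   # content representation
        self.outer = None   # context representation

def compute_inner(node, embeddings):
    """Bottom-up pass: a node's inner vector summarises the phrase it spans."""
    if node.word is not None:
        node.inner = embeddings[node.word]
    else:
        compute_inner(node.left, embeddings)
        compute_inner(node.right, embeddings)
        node.inner = np.tanh(W_inner @ np.concatenate([node.left.inner, node.right.inner]))
    return node.inner

def compute_outer(node, outer):
    """Top-down pass: a node's outer vector summarises everything around it."""
    node.outer = outer
    if node.word is None:
        # A child's context combines the parent's context with its sibling's content.
        compute_outer(node.left, np.tanh(W_outer @ np.concatenate([outer, node.right.inner])))
        compute_outer(node.right, np.tanh(W_outer @ np.concatenate([outer, node.left.inner])))

# Tiny example: ((the cat) sleeps)
embeddings = {w: rng.normal(size=DIM) for w in ["the", "cat", "sleeps"]}
tree = Node(left=Node(left=Node("the"), right=Node("cat")), right=Node("sleeps"))
compute_inner(tree, embeddings)
compute_outer(tree, root_outer)
print(tree.left.inner, tree.left.outer)  # content vs. context of "the cat"
```
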

clclab @EMNLP

Phong presented our paper “The Inside-Outside Recursive Neural Network model for Dependency Parsing” at EMNLP 2014 in Doha, Qatar. Here is the abstract:

The Inside-Outside Recursive Neural Network model for Dependency Parsing

(published pdf here)

Phong Le & Willem Zuidema

We propose the first implementation of an infinite-order generative dependency model. The model is based on a new recursive neural network architecture, the Inside-Outside Recursive Neural Network. This architecture allows information to flow not only bottom-up, as in traditional recursive neural networks, but also top-down. This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.
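
As a rough illustration of the reranking step, the sketch below scores each candidate in a k-best list with an interpolation of the baseline parser's score and a generative model's log-probability, and returns the highest-scoring parse. The function generative_log_prob, the mixing weight alpha, and the toy candidates are hypothetical stand-ins, not the paper's actual model or setup.

```python
# A minimal k-best reranking sketch, assuming a hypothetical
# generative_log_prob(parse) that returns a generative model's
# log-probability of a candidate dependency parse. The mixing
# weight alpha and the candidate format are illustrative assumptions.

def rerank(kbest, generative_log_prob, alpha=0.5):
    """Pick the candidate with the best interpolated score.

    kbest: list of (parse, base_log_score) pairs from a baseline parser.
    """
    def combined(item):
        parse, base_score = item
        return alpha * base_score + (1.0 - alpha) * generative_log_prob(parse)

    best_parse, _ = max(kbest, key=combined)
    return best_parse

# Toy usage with dummy scores (real candidates would be dependency trees):
candidates = [("parse_a", -12.3), ("parse_b", -11.8), ("parse_c", -13.0)]
dummy_model = {"parse_a": -9.1, "parse_b": -10.4, "parse_c": -8.7}.get
print(rerank(candidates, dummy_model))
```
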


Interesting paper in Science: The atoms of neural computation

Science, 31 October 2014, Vol. 346, No. 6209, pp. 551–552.
DOI: 10.1126/science.1261661

The atoms of neural computation

The human cerebral cortex is central to a wide array of cognitive functions, from vision to language, reasoning, decision-making, and motor control. Yet, nearly a century after the neuroanatomical organization of the cortex was first defined, its basic logic remains unknown. One hypothesis is that cortical neurons form a single, massively repeated “canonical” circuit, characterized as a kind of “nonlinear spatiotemporal filter with adaptive properties” (1). In this classic view, it was “assumed that these…properties are identical for all neocortical areas.” Nearly four decades later, there is still no consensus about whether such a canonical circuit exists, either in terms of its anatomical basis or its function. Likewise, there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals, let alone characteristically human processes such as language and abstract thinking (2). Analogous software implementations in artificial intelligence (e.g., deep learning networks) have proven effective in certain pattern-classification tasks, such as speech and image recognition, but have likewise made few inroads in areas such as reasoning and natural language understanding. Is the search for a single canonical cortical circuit misguided?