Our paper “Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition” was accepted for the NIPS 2014 Workshop on Deep Learning and Representation Learning.
Here is the abstract:
Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition
Phong Le & Willem Zuidema
The Recursive Neural Network (RNN) model and its extensions have been shown to be powerful tools for semantic composition, with successes in many natural language processing (NLP) tasks. However, in this paper we argue that the RNN model covers only a subset of the NLP tasks in which semantic compositionality plays a role. We propose an extension called Inside-Outside Semantics. In our framework, every node in a parse tree is associated with a pair of representations: an inner representation capturing the content under the node, and an outer representation capturing its surrounding context. We demonstrate how this allows us to develop neural models for a much broader class of NLP tasks, and for supervised as well as unsupervised learning. Our neural-net model, the Inside-Outside Recursive Neural Network, performs on par with or better than the state-of-the-art (neural) models on word prediction, phrase-similarity judgments and semantic role labelling.
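To make the core idea of the framework concrete, here is a minimal toy sketch of the inside-outside computation on a binary parse tree: inner representations are built bottom-up from word embeddings, and outer representations top-down from the parent's outer representation combined with the sibling's inner representation. The parameter names (W_i, W_o, o_root), the tanh compositions, and the tiny dimensionality are our illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

np.random.seed(0)
d = 4  # toy representation size (illustrative assumption)

# Illustrative parameters, not the paper's exact parameterization:
W_i = np.random.randn(d, 2 * d) * 0.1  # composes two children's inner reps
b_i = np.zeros(d)
W_o = np.random.randn(d, 2 * d) * 0.1  # combines parent outer + sibling inner
b_o = np.zeros(d)
o_root = np.random.randn(d) * 0.1      # learned outer rep for the root

def f(x):
    return np.tanh(x)

class Node:
    def __init__(self, word=None, children=()):
        self.word, self.children = word, list(children)
        self.inner = None  # content under this node
        self.outer = None  # context surrounding this node

def compute_inner(node, emb):
    """Bottom-up pass: leaves take word embeddings; internal nodes
    compose their children's inner representations."""
    if node.word is not None:
        node.inner = emb[node.word]
    else:
        left, right = node.children
        compute_inner(left, emb)
        compute_inner(right, emb)
        node.inner = f(W_i @ np.concatenate([left.inner, right.inner]) + b_i)

def compute_outer(node, outer):
    """Top-down pass: each child's outer rep combines its parent's
    outer rep with its sibling's inner rep."""
    node.outer = outer
    if node.children:
        left, right = node.children
        compute_outer(left, f(W_o @ np.concatenate([node.outer, right.inner]) + b_o))
        compute_outer(right, f(W_o @ np.concatenate([node.outer, left.inner]) + b_o))

# Toy example: the tree ((the cat) sat)
emb = {w: np.random.randn(d) * 0.1 for w in ["the", "cat", "sat"]}
tree = Node(children=[Node(children=[Node("the"), Node("cat")]), Node("sat")])
compute_inner(tree, emb)
compute_outer(tree, o_root)
```

After the two passes, every node carries both an inner and an outer representation, which is what lets the framework go beyond plain RNN composition: for instance, the outer representation of a leaf summarizes its context and can be used to score candidate words for that position, in the spirit of the word-prediction task mentioned in the abstract.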