Category Archives: News

Raquel Alhama wins best student poster award at ICCM

Computational linguist Raquel Alhama (ILLC) wins best student poster award at the International Conference on Cognitive Modeling (ICCM’15) with her work on:

How should we evaluate models of segmentation in artificial language learning?

(with Remko Scha and Jelle Zuidema).

One of the challenges that infants have to solve when learning their native language is to identify the words in a continuous speech stream. Some of the experiments in Artificial Grammar Learning (Saffran, Newport, and Aslin (1996); Saffran, Aslin, and Newport (1996); Aslin, Saffran, and Newport (1998), and many more) investigate this ability. In these experiments, subjects are exposed to an artificial speech stream that contains certain regularities. Adult participants are typically tested with 2-alternative forced choice tests (2AFC) in which they have to choose between a word and another sequence (typically a partword, a sequence resulting from misplacing word boundaries).

One of the key findings of AGL is that both infants and adults are sensitive to transitional probabilities and other statistical cues, and can use them to segment the input stream. Several computational models have been proposed to explain such findings. We will review how these models are evaluated and argue that we need a different type of experimental data for model evaluation than is typically used and reported. We present some preliminary results and a model consistent with the data.
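To make the transitional-probability cue concrete, here is a minimal sketch (not the model from the poster): it estimates forward transitional probabilities from a syllable stream and posits a word boundary wherever the probability dips below a threshold. The syllable inventory, toy lexicon, and threshold are all illustrative.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next | current) from bigram counts in the stream."""
    unigrams = Counter(syllables[:-1])
    bigrams = Counter(zip(syllables, syllables[1:]))
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

def segment(syllables, threshold=0.8):
    """Insert a boundary wherever the TP between adjacent syllables dips below threshold."""
    tp = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tp[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy Saffran-style stream: the hypothetical words "tupi", "gola", "buda"
# concatenated in varying order, so within-word TPs are 1.0 and
# between-word TPs are low.
stream = ["tu", "pi", "go", "la", "bu", "da", "go", "la", "tu",
          "pi", "bu", "da", "bu", "da", "tu", "pi", "go", "la"]
print(segment(stream))
# → ['tupi', 'gola', 'buda', 'gola', 'tupi', 'buda', 'buda', 'tupi', 'gola']
```

On this toy stream the TP dips line up exactly with the word boundaries; part of the point of the poster is that evaluating real models requires richer experimental data than such clean cases suggest.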

(Extended abstract here.)

CFP: Architectures and Mechanisms for Language Processing

First Call for Papers:
Architectures and Mechanisms for Language Processing 2015
AMLaP 2015
3-5 September, 2015
University of Malta
Valletta Campus
Triq San Pawl, Valletta
Invited Speakers:
Victor Ferreira (UC San Diego)
Padraic Monaghan (Lancaster University)
Stuart Rosen (University College London)
Conference website and abstract submission:
Important Dates:
1st March 2015: abstract submissions and registration open
4th May 2015: deadline for abstract submissions
5th June, 2015: notification of acceptance
1st July 2015: early registration deadline
We are delighted to announce the 21st AMLaP conference.  AMLaP was
first held in Edinburgh in 1995; in the intervening years, it has
established itself as the premier European venue for interdisciplinary
research into human language processing. After the anniversary edition in Edinburgh,
AMLaP is returning to the Mediterranean and will be held at the historic Valletta Campus
of the University of Malta.
With this picturesque venue and the high quality of contributions we await from old friends and new,
we hope that AMLaP 2015 will be a conference to remember.
As ever, we invite submissions on a broad range of topics relevant to
the study of how people understand and produce language. Topics of
interest include, but are not limited to:
    bilingual language processing
    computational models, symbolic and connectionist
    corpus-based studies and statistical mechanisms
    cross-linguistic studies
    dialogue processing
    language comprehension
    language production
    lexical processing
    learning mechanisms
    models of acquisition
    neurobiology of language processing
    parsing and interpretation
Malta is the southernmost state in the EU, so we are expecting warm and sunny weather at the time of the conference
(even though a thunderstorm cannot be ruled out in early September).
The water temperature is expected to be a pleasant 26 degrees Celsius (79 degrees Fahrenheit for our North American guests).
Valletta, destined to be an EU cultural capital in 2018, is an awe-inspiring fortified city full of history (and cathedrals),
and so we hope to see as many of you as possible. A further email announcing the opening of
registration will be sent out in the coming weeks.
Best wishes,
Albert Gatt & Holger Mitterer
local organizers

CfP: Workshop on Continuous Vector Space Models and their Compositionality

1st CfP: Workshop on Continuous Vector Space Models and their
Compositionality (3rd edition) (CVSC)


Workshop on Continuous Vector Space Models and their Compositionality (3rd edition)
Co-located with ACL 2015, Beijing, China
July 31, 2015
Submission deadline: May 14, 2015

First Call for Papers

(Apologies for multiple postings)

In recent years, there has been a growing interest in algorithms that learn
and use continuous representations for words, phrases, or documents in many
natural language processing applications. Among many others, influential
proposals that illustrate this trend include latent Dirichlet allocation,
neural network based language models and spectral methods. These approaches
are motivated by improving the generalization power of the discrete standard
models, by dealing with the data sparsity issue and by efficiently handling a
wide context. Despite the success of single-word vector space models, they
are limited in that they do not capture compositionality. This prevents them
from gaining a deeper understanding of the semantics of longer phrases,
sentences and documents.

Regarding this issue, some pertinent questions arise: should
word/phrase/sentence representations be of the same sort? Could different
linguistic levels require different modelling approaches? Is
compositionality determined by syntax, and if so, how do we learn/define it?
Should word representations be fixed and obtained distributionally, or should
the encoding be variable? Should word representations be task-specific, or
should they be general?
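For concreteness, the simplest compositional baseline builds a phrase vector by adding its word vectors. The toy vectors below are purely illustrative (not learned from any corpus), but they show both why the baseline works at all and where it breaks down:

```python
import math

def add(u, v):
    """Additive composition: the simplest phrase representation."""
    return [a + b for a, b in zip(u, v)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# Toy word vectors (illustrative only).
vec = {
    "black": [1.0, 0.1, 0.0],
    "dark":  [0.9, 0.2, 0.1],
    "cat":   [0.0, 1.0, 0.2],
    "dog":   [0.1, 0.9, 0.3],
}

# Related phrases come out similar even under this crude composition...
print(cosine(add(vec["black"], vec["cat"]), add(vec["dark"], vec["dog"])))
# ...but addition ignores word order entirely:
assert add(vec["black"], vec["cat"]) == add(vec["cat"], vec["black"])
```

The order-invariance of addition is exactly the kind of limitation that motivates the syntax-aware composition models this workshop solicits.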

In this workshop, we invite submissions of papers on continuous vector space
models for natural language processing. Topics of interest include, but are
not limited to:

* Neural networks
* Spectral methods
* Distributional semantic models
* Language modeling for automatic speech recognition, statistical machine
translation, and information retrieval
* Automatic annotation of texts
* Phrase and sentence-level distributional representations
* The role of syntax in compositional models
* Formal and distributional semantic models
* Language modeling for logical and natural reasoning
* Integration of distributional representations with other models
* Multi-modal learning for distributional representations
* Knowledge base embedding


The workshop will showcase presentations from 4 to 6 keynote speakers. The
confirmed speakers are:

* Yoav Goldberg (Bar Ilan University)
* Jason Weston (Facebook AI Research)
* Kyunghyun Cho (Université de Montréal)


Authors should submit a full paper of up to 8 pages in electronic, PDF
format, with up to 2 additional pages for references. The reported research
should be substantially original. The papers will be presented orally or as posters.

All submissions must be in PDF format and must follow the ACL 2015 formatting
requirements (see the ACL 2015 Call for Papers). Reviewing will be double-blind, and
thus no author information should be included in the papers; self-reference
should be avoided as well. Submissions must be made through the Softconf
website set up for this workshop:

Accepted papers will appear in the workshop proceedings, where no distinction
will be made between papers presented orally or as posters.


Important Dates:
14 May 2015: Submission deadline
4 June 2015: Notification of acceptance
21 June 2015: Camera-ready deadline
31 July 2015: Workshop


Organizers:
Alexandre Allauzen (LIMSI-CNRS/Université Paris-Sud, France)
Edward Grefenstette (University of Oxford, UK)
Karl Moritz Hermann (University of Oxford, UK)
Hugo Larochelle (Université de Sherbrooke, Canada)
Scott Wen-tau Yih (Microsoft Research, USA)


Program Committee:
Marco Baroni, University of Trento
Yoshua Bengio, Université de Montreal
Phil Blunsom, University of Oxford
Antoine Bordes, Facebook
Leon Bottou, Microsoft
Stephen Clark, University of Cambridge
Shay Cohen, University of Edinburgh
Georgiana Dinu, University of Trento
Kevin Duh, Nara Institute of Science and Technology
Yoav Goldberg, Bar Ilan University
Andriy Mnih, University College London
Mehrnoosh Sadrzadeh, University of London
Mark Steedman, University of Edinburgh
Peter Turney, NRC
Jason Weston, Facebook
Guillaume Wisniewski, LIMSI-CNRS


CFP – Cognitive Modeling and Computational Linguistics 2015 (CMCL-2015)

2015 Workshop on Cognitive Modeling and Computational Linguistics (CMCL)
(CMCL 2015)


Cognitive Modeling and Computational Linguistics 2015 (CMCL-2015)

This workshop provides a venue for work in computational
psycholinguistics: the computational and mathematical modeling of
linguistic generalization, development, and processing. We invite
contributions that apply methods from computational linguistics to
problems in the cognitive modeling of any and all natural
language-related abilities.

The 2015 workshop will be co-located with NAACL-HLT and follows in the
tradition of earlier CMCL meetings at ACL 2010, ACL 2011,
NAACL-HLT 2012, ACL 2013, ACL 2014.

Scope and Topics

The workshop invites a broad spectrum of work in the cognitive science
of language, at all levels of analysis from sounds to discourse and on
both learning and processing. Topics include, but are not limited to:

* incremental parsers for diverse grammar formalisms
* derivations of quantitative measures of comprehension difficulty, or
predictions regarding generalization in language learning
* stochastic models of factors encouraging one production or interpretation
over its competitors
* models of semantic/pragmatic interpretation, including psychologically
realistic notions of word meaning, phrase meaning, composition, and
pragmatic inference
* models and empirical analysis of the relationship between mechanistic
psycholinguistic principles and pragmatic or semantic adaptation
* models of human language acquisition and/or adaptation in a changing
linguistic environment
* models of linguistic information propagation and language change in
communication networks
* models of lexical acquisition, including phonology, morphology, and semantics
* psychologically motivated models of grammar induction or semantic learning

We especially welcome submissions that combine computational
modeling work with experimental or corpus data to test theoretical
questions about the nature of human language acquisition,
comprehension, and/or production.


This call solicits full papers reporting original and unpublished
research that combines cognitive modeling and computational
linguistics. Accepted papers are expected to be presented at the
workshop and will be published in the workshop proceedings. They
should emphasize obtained results rather than intended work, and
should indicate clearly the state of completion of the reported
results. A paper accepted for presentation at the workshop must not be
presented or have been presented at any other meeting with publicly
available proceedings. No submission should be longer than necessary, up
to a maximum of 8 pages plus two additional pages containing references.

If essentially identical papers are submitted to other conferences or
workshops as well, this fact must be indicated at submission time.

To facilitate double-blind reviewing, submitted manuscripts should not
include any identifying information about the authors.

Submissions must be formatted using ACL 2015 submission guidelines at

Submission style templates are available at:

Contributions should be submitted in PDF via the submission site:

The submission deadline is 11:59PM Pacific Time on March 6, 2015.

Best Student Paper

The best paper whose first author is a student will receive the Best
Student Paper award. All accepted CMCL papers will be published
in the workshop proceedings as is customary at ACL conferences.

Important Dates

Submission deadline: 6 March 2015
Notification of acceptance: 24 March 2015
Camera-ready versions due: 3 April 2015
Workshop: June 4, 2015

Workshop Chairs

Tim O’Donnell
Department of Brain and Cognitive Sciences, Massachusetts Institute of
Technology, USA

Marten van Schijndel
Department of Linguistics, The Ohio State University, USA

Program Committee

Omri Abend, University of Edinburgh
Steven Abney, University of Michigan
Afra Alishahi, Tilburg University
Libby Barak, University of Toronto
Marco Baroni, University of Trento
Robert Berwick, MIT
Klinton Bicknell, Northwestern University
Christos Christodoulopoulos, University of Illinois at Urbana-Champaign
Alexander Clark, King’s College
Moreno Cocco, University of Lisbon
Jennifer Culbertson, George Mason University
Vera Demberg, Saarland University
Brian Dillon, University of Massachusetts Amherst
Micha Elsner, The Ohio State University
Naomi Feldman, University of Maryland
Alex Fine, University of Illinois at Urbana-Champaign
Bob Frank, Yale University
Michael Frank, Stanford University
Stefan Frank, Radboud University Nijmegen
Stella Frank, Edinburgh University
Ted Gibson, MIT
Sharon Goldwater, Edinburgh University
Carlos Gomez Gallo, Northwestern University
Noah Goodman, Stanford University
Thomas Graf, Stony Brook University
John Hale, Cornell University
Jeffrey Heinz, University of Delaware
Tim Hunter, University of Minnesota
Mark Johnson, Macquarie University
Frank Keller, University of Edinburgh
Shalom Lappin, King’s College
Roger Levy, UCSD
Pavel Logacev, Potsdam University
Titus von der Malsburg, UCSD
Rebecca Morley, The Ohio State University
Aida Nematzadeh, University of Toronto
Ulrike Pado, Hochschule für Technik Stuttgart
Bozena Pajak, Northwestern University
Lisa Pearl, UC Irvine
Massimo Poesio, University of Essex
Ting Qian, Brown University
Roi Reichart, Technion
David Reitter, Penn State University
William Schuler, The Ohio State University
Nathaniel Smith, University of Edinburgh
Ed Stabler, UCLA
Mark Steedman, University of Edinburgh
Patrick Sturt, University of Edinburgh
Colin Wilson, Johns Hopkins University
Alessandra Zarcone, Saarland University
Jelle Zuidema, University of Amsterdam


CFP: CogSci 2015 (conference 23-25 July; deadline: 1 February)

CogSci 2015
37th Annual Meeting of the
Cognitive Science Society
Mind, Technology, and Society
 Pasadena, California, USA

July 23 – 25, 2015
(Tutorials & Workshops: July 22, 2015)



The online submission is now open.  You may review the criteria and make your submission at:



Highlights Include:

Plenary Speakers:

Martha Farah, University of Pennsylvania

Christof Koch, Allen Institute for Brain Science

Rosalind Picard, MIT Media Laboratory

14th Rumelhart Prize Presentation:

Michael Jordan, UC Berkeley

Heineken Prize for Cognitive Science Presentation:

Jay McClelland, Stanford University

Invited Symposia:

Philosophy of Mind

Technological Innovation

Cognition in Society

Cognitive scientists from around the world are invited to attend CogSci 2015! The Annual Meeting of the Cognitive Science Society is the world’s premier annual conference for the interdisciplinary study of cognition. Cognitive Science draws on a broad spectrum of disciplines, topics, and methodologies, and CogSci 2015 reflects this diversity in its theme: Mind, Technology, and Society.


In addition to the invited presentations, the program will be filled with competitive peer-reviewed submissions of several kinds: research papers, contributed symposia, publication-based talks, member abstracts, tutorials, and workshops. Submissions may report on work involving any approach to Cognitive Science, including, but not limited to, anthropology, artificial intelligence, computational cognitive systems, cognitive development, cognitive neuroscience, cognitive psychology, education, evolution of cognition, linguistics, logic, machine learning, network analysis, neural networks, philosophy, and robotics.


The deadline for submissions is February 1, 2015. All submissions must be made via the conference program website.  Information regarding the submission process may be found at:


We look forward to seeing you in Pasadena!


Conference Co-Organizers:
Rick Dale, Carolyn Jennings, Paul Maglio, Teenie Matlock, David Noelle, Anne Warlaumont, Jeff Yoshimi

Cognitive & Information Sciences; University of California, Merced

Cognitive Science Society

clclab @the NIPS Deep Learning Workshop

Our paper “Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition” was accepted for the NIPS 2014 Workshop on Deep Learning and Representation Learning.

Here is the abstract:

Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition

Phong Le & Willem Zuidema

The Recursive Neural Network (RNN) model and its extensions have been shown to be powerful tools for semantic composition with successes in many natural language processing (NLP) tasks. However, in this paper, we argue that the RNN model is restricted to a subset of NLP tasks where semantic compositionality plays a role. We propose an extension called Inside-Outside Semantics. In our framework every node in a parse tree is associated with a pair of representations, the inner representation for representing the content under the node, and the outer representation for representing its surrounding context. We demonstrate how this allows us to develop neural models for a much broader class of NLP tasks and for supervised as well as unsupervised learning. Our neural-net model, Inside-Outside Recursive Neural Network, performs on par with or better than the state-of-the-art (neural) models in word prediction, phrase-similarity judgments and semantic role labelling.
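A rough sketch of the central idea follows, assuming simple single-layer composition functions over a toy binary parse. All parameter names, dimensionalities, and the composition functions themselves are illustrative stand-ins, not the authors' actual implementation:

```python
import math
import random

random.seed(0)
D = 4  # representation dimensionality (illustrative)

def rand_matrix(rows, cols):
    return [[random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)]

def affine_tanh(W, x):
    """y = tanh(W x), with plain lists standing in for vectors."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W]

W_i = rand_matrix(D, 2 * D)  # composes the two children's inner vectors
W_o = rand_matrix(D, 2 * D)  # composes a parent's outer vector with a sibling's inner vector

def inner(tree, emb):
    """Bottom-up pass: the inner vector represents the content under a node."""
    if isinstance(tree, str):                 # leaf = word
        return emb[tree]
    left, right = inner(tree[0], emb), inner(tree[1], emb)
    return affine_tanh(W_i, left + right)     # list concatenation stacks the vectors

def outer(tree, emb, o, table):
    """Top-down pass: the outer vector represents the context around a node."""
    table.append((tree, o))
    if isinstance(tree, str):
        return
    left, right = inner(tree[0], emb), inner(tree[1], emb)  # recomputed for clarity; real code would cache
    outer(tree[0], emb, affine_tanh(W_o, o + right), table)  # left child's context includes its right sibling
    outer(tree[1], emb, affine_tanh(W_o, o + left), table)   # right child's context includes its left sibling

emb = {w: [random.gauss(0.0, 1.0) for _ in range(D)] for w in ["the", "cat", "sleeps"]}
tree = (("the", "cat"), "sleeps")   # toy binary parse
table = []
outer(tree, emb, [0.0] * D, table)  # the root's outer context is empty
# Every node now has an (inner, outer) pair; e.g. a word's outer vector can be
# trained to predict that word from its context, as in the word-prediction task.
```

The pairing is the key move: the inner vector summarizes what is inside a constituent, the outer vector what surrounds it, and tasks like word prediction or semantic role labelling can then condition on either or both.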

clclab @EMNLP

Phong presented our paper “The Inside-Outside Recursive Neural Network model for Dependency Parsing” at EMNLP in Qatar. Here is the abstract:

The Inside-Outside Recursive Neural Network model for Dependency Parsing

(published pdf here)

Phong Le & Willem Zuidema

We propose the first implementation of an infinite-order generative dependency
model. The model is based on a new recursive neural network architecture, the
Inside-Outside Recursive Neural Network. This architecture allows information to
flow not only bottom-up, as in traditional recursive neural networks, but also top-down.
This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity
seven times lower than the traditional counting-based third-order model, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.
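Schematically, reranking a k-best list amounts to rescoring each candidate parse with the neural model and interpolating with the base parser's score. The `model_score` function and interpolation weight below are hypothetical stand-ins, not the paper's actual scoring scheme:

```python
def rerank(kbest, model_score, alpha=0.5):
    """Return the candidate maximizing an interpolation of the base parser's
    score and the (hypothetical) neural model's score."""
    return max(
        kbest,
        key=lambda cand: alpha * cand["parser_score"]
                         + (1 - alpha) * model_score(cand["parse"]),
    )

# Toy k-best list: the base parser slightly prefers parse "A",
# but the model scores parse "B" much higher.
kbest = [
    {"parse": "A", "parser_score": -1.0},
    {"parse": "B", "parser_score": -1.2},
]
model_scores = {"A": -5.0, "B": -2.0}
print(rerank(kbest, model_scores.get)["parse"])  # the model can overturn the parser's 1-best
```

Setting `alpha=1.0` recovers the base parser's 1-best; lowering it lets the richer model correct the base parser's mistakes, which is where the reported attachment-score gains come from.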


Interesting paper in Science: The atoms of neural computation

Science 31 October 2014:
Vol. 346 no. 6209 pp. 551-552
DOI: 10.1126/science.1261661

The atoms of neural computation

The human cerebral cortex is central to a wide array of cognitive functions, from vision to language, reasoning, decision-making, and motor control. Yet, nearly a century after the neuroanatomical organization of the cortex was first defined, its basic logic remains unknown. One hypothesis is that cortical neurons form a single, massively repeated “canonical” circuit, characterized as a kind of a “nonlinear spatiotemporal filter with adaptive properties” (1). In this classic view, it was “assumed that these…properties are identical for all neocortical areas.” Nearly four decades later, there is still no consensus about whether such a canonical circuit exists, either in terms of its anatomical basis or its function. Likewise, there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals, let alone characteristically human processes such as language and abstract thinking (2). Analogous software implementations in artificial intelligence (e.g., deep learning networks) have proven effective in certain pattern classification tasks, such as speech and image recognition, but likewise have made little inroads in areas such as reasoning and natural language understanding. Is the search for a single canonical cortical circuit misguided?