Author Archives: admin

Raquel Alhama wins best student poster award at ICCM

Computational linguist Raquel Alhama (ILLC) wins the best student poster award at the International Conference on Cognitive Modeling (ICCM’15) for her work on:

How should we evaluate models of segmentation in artificial language learning?

(with Remko Scha and Jelle Zuidema).

One of the challenges that infants have to solve when learning their native language is to identify the words in a continuous speech stream. Some of the experiments in Artificial Grammar Learning (Saffran, Newport, and Aslin (1996); Saffran, Aslin, and Newport (1996); Aslin, Saffran, and Newport (1998), and many more) investigate this ability. In these experiments, subjects are exposed to an artificial speech stream that contains certain regularities. Adult participants are typically tested with two-alternative forced-choice (2AFC) tests in which they have to choose between a word and another sequence (typically a part-word, a sequence resulting from misplaced boundaries).

One of the key findings of AGL is that both infants and adults are sensitive to transitional probabilities and other statistical cues, and can use them to segment the input stream. Several computational models have been proposed to explain such findings. We will review how these models are evaluated and argue that we need a different type of experimental data for model evaluation than is typically used and reported. We present some preliminary results and a model consistent with the data.

(Extended abstract here: http://www.iccm2015.org/proceedings/papers/0040/paper0040.pdf)
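The transitional-probability cue discussed in the abstract can be sketched in a few lines of Python. This is an illustrative toy under our own assumptions (the function names, the threshold, and the artificial three-word lexicon are ours, not the authors' model): the forward transitional probability is TP(x→y) = freq(xy) / freq(x), and a word boundary is posited wherever TP dips below a threshold.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability: TP(x -> y) = freq(xy) / freq(x)."""
    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))
    return {(x, y): n / unigrams[x] for (x, y), n in bigrams.items()}

def segment(syllables, tps, threshold=0.9):
    """Posit a word boundary wherever the transitional probability dips
    below the threshold; return the resulting list of 'words'."""
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append(current)
            current = []
        current.append(y)
    words.append(current)
    return words

# A toy stream built from three bisyllabic "words" (go-la, tu-pi, da-ro)
# in varying order: within-word TPs are 1.0, between-word TPs are lower.
stream = "go la tu pi da ro tu pi go la da ro go la da ro tu pi".split()
print(segment(stream, transitional_probabilities(stream)))
```

On this stream every within-word transition (e.g. go→la) has TP 1.0 while every between-word transition has TP ≤ 2/3, so the thresholded segmenter recovers the three words exactly.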

CFP: Architectures and Mechanisms for Language Processing

First Call for Papers:
Architectures and Mechanisms for Language Processing 2015
 
=======================================================
 
AMLaP 2015
3-5 September, 2015
University of Malta
Valletta Campus
Triq San Pawl, Valletta
Malta
 
 
Invited Speakers:
Victor Ferreira (UC San Diego)
Padraic Monaghan (Lancaster University)
Stuart Rosen (University College London)
 
Conference website and abstract submission:
www.um.edu.mt/events/amlap2015
 
Important Dates:
1st March 2015: abstract submissions and registration open
4th May 2015: deadline for abstract submissions
5th June, 2015: notification of acceptance
1st July 2015: early registration deadline
 
 
=======================================================
 
We are delighted to announce the 21st AMLaP conference.  AMLaP was
first held in Edinburgh in 1995; in the intervening years, it has
established itself as the premier European venue for interdisciplinary
research into human language processing. After the anniversary edition in Edinburgh,
AMLaP is returning to the Mediterranean, to be held at the historic Valletta Campus
of the University of Malta.
With this picturesque venue, and the high-quality contributions we await from old friends and new,
we hope that AMLaP 2015 will be a conference to remember.
 
As ever, we invite submissions on a broad range of topics relevant to
the study of how people understand and produce language. Topics of
interest include, but are not limited to:
 
    bilingual language processing
    computational models, symbolic and connectionist
    corpus-based studies and statistical mechanisms
    cross-linguistic studies
    dialogue processing
    discourse
    language comprehension
    language production
    lexical processing
    learning mechanisms
    models of acquisition
    neurobiology of language processing
    parsing and interpretation
    prosody
 
Malta is the southernmost member state of the EU, so we are expecting warm and sunny weather at the time of the conference
(even though a thunderstorm cannot be ruled out in early September).
The water temperature is expected to be a pleasant 26 degrees Celsius (79 degrees Fahrenheit for our North American guests).
Valletta, destined to be a European Capital of Culture in 2018, is an awe-inspiring fortified city full of history (and cathedrals),
and so we hope to see as many of you as possible. A further email announcing the opening of
registration will be sent out in the coming weeks.
 
Best wishes,
Albert Gatt & Holger Mitterer
local organizers

CfP: Workshop on Continuous Vector Space Models and their Compositionality

1st CfP: Workshop on Continuous Vector Space Models and their
Compositionality (3rd edition) (CVSC)

CALL FOR PAPERS

*********************************************************************************************
Workshop on Continuous Vector Space Models and their Compositionality (3rd
edition)
Co-located with ACL 2015, Beijing, China
July 31, 2015
Submission deadline: May 14, 2015
https://sites.google.com/site/cvscworkshop2015
*********************************************************************************************

First Call for Papers

(Apologies for multiple postings)

In recent years, there has been a growing interest in algorithms that learn
and use continuous representations for words, phrases, or documents in many
natural language processing applications. Among many others, influential
proposals that illustrate this trend include latent Dirichlet allocation,
neural network based language models and spectral methods. These approaches
are motivated by improving the generalization power of standard discrete
models, by dealing with the data sparsity issue, and by efficiently handling
wide contexts. Despite their success, single-word vector space models are
limited in that they do not capture compositionality, which prevents them
from gaining a deeper understanding of the semantics of longer phrases,
sentences and documents.
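The compositionality limitation described above can be demonstrated in a few lines (our own toy example, not part of the call): if phrase vectors are built by simply summing word vectors, word order, and with it much of compositional meaning, is lost.

```python
import numpy as np

# Toy random word embeddings (illustrative only; real models learn these).
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(8) for w in ["man", "bites", "dog"]}

def additive(sentence):
    """Bag-of-words composition: just sum the word vectors."""
    return sum(emb[w] for w in sentence.split())

# The two sentences mean very different things, but their additive
# representations coincide -- additive composition ignores word order.
print(np.allclose(additive("man bites dog"), additive("dog bites man")))
```

This prints `True`, which is exactly why structured composition (syntax-driven, recursive, etc.) is one of the workshop's central questions.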

Regarding this issue, some pertinent questions arise: should
word/phrase/sentence representations be of the same sort? Could different
linguistic levels require different modelling approaches? Is
compositionality determined by syntax, and if so, how do we learn/define it?
Should word representations be fixed and obtained distributionally, or should
the encoding be variable? Should word representations be task-specific, or
should they be general?

In this workshop, we invite submissions of papers on continuous vector space
models for natural language processing. Topics of interest include, but are
not limited to:

* Neural networks
* Spectral methods
* Distributional semantic models
* Language modeling for automatic speech recognition, statistical machine
translation, and information retrieval
* Automatic annotation of texts
* Phrase and sentence-level distributional representations
* The role of syntax in compositional models
* Formal and distributional semantic models
* Language modeling for logical and natural reasoning
* Integration of distributional representations with other models
* Multi-modal learning for distributional representations
* Knowledge base embedding

INVITED SPEAKERS

The workshop will showcase presentations from 4 to 6 keynote speakers. The
confirmed speakers are:

* Yoav Goldberg (Bar Ilan University)
* Jason Weston (Facebook AI Research)
* Kyunghyun Cho (Université de Montréal)

SUBMISSION INFORMATION

Authors should submit a full paper of up to 8 pages in electronic, PDF
format, with up to 2 additional pages for references. The reported research
should be substantially original. The papers will be presented orally or as
posters.

All submissions must be in PDF format and must follow the ACL 2015 formatting
requirements (see the ACL 2015 Call For Papers
http://acl2015.org/call_for_papers.html). Reviewing will be double-blind, and
thus no author information should be included in the papers; self-reference
should be avoided as well. Submissions must be made through the Softconf
website set up for this workshop:

https://www.softconf.com/acl2015/CVSC/

Accepted papers will appear in the workshop proceedings, where no distinction
will be made between papers presented orally or as posters.

IMPORTANT DATES

14 May 2015 : Submission deadline
4 June 2015 : Notification of acceptance
21 June 2015 : Camera-ready deadline
31 July 2015 : Workshop

ORGANIZERS

Alexandre Allauzen (LIMSI-CNRS/Université Paris-Sud, France)
Edward Grefenstette (University of Oxford, UK)
Karl Moritz Hermann (University of Oxford, UK)
Hugo Larochelle (Université de Sherbrooke, Canada)
Scott Wen-tau Yih (Microsoft Research, USA)

PROGRAM COMMITTEE

Marco Baroni, University of Trento
Yoshua Bengio, Université de Montreal
Phil Blunsom, University of Oxford
Antoine Bordes, Facebook
Leon Bottou, Microsoft
Stephen Clark, University of Cambridge
Shay Cohen, University of Edinburgh
Georgiana Dinu, University of Trento
Kevin Duh, Nara Institute of Science and Technology
Yoav Goldberg, Bar Ilan University
Andriy Mnih, University College London
Mehrnoosh Sadrzadeh, University of London
Mark Steedman, University of Edinburgh
Peter Turney, NRC
Jason Weston, Facebook
Guillaume Wisniewski, LIMSI-CNRS

Read more:
http://portal.aclweb.org/content/1st-cfp-workshop-continuous-vector-space-models-and-their-compositionality-3rd-edition

Postdoctoral Fellowship in Toronto

Postdoctoral Fellowship in Computational Linguistics/Computational Cognitive Modeling of Language, Department of Computer Science, University of Toronto
Applications are invited for one or more postdoctoral fellowships in computational linguistics at the University of Toronto, in a research group that works on computational cognitive models of language acquisition and language processing, and on statistical methods for learning lexical semantic information from large text corpora.  We take a very multidisciplinary approach to building systems that learn about words, integrating machine learning approaches with theories and insights from the fields of linguistics and psycholinguistics.

Successful candidates will contribute substantially to ongoing research activities in computational lexical semantics and/or cognitive modeling of language acquisition and processing, and participate in developing new research directions in these areas.  In addition to pursuing independent and collaborative research activities, the position will involve engagement in undergraduate student supervision and co-supervision of graduate students, along with some administrative duties.

The postdoc will work with Professor Suzanne Stevenson and her students in the Department of Computer Science at the University of Toronto (UofT), as well as with collaborators within and outside the department.  See http://www.cs.toronto.edu/~suzanne/publications.html for examples of publications coming out of the research group.  The precise research focus for the postdoctoral fellowship will be determined in consultation with the successful candidate to maximize the potential for innovative and collaborative research with the PI and her students/collaborators and for professional development of the postdoctoral fellow.

Computer Science at UofT is a top-ten department that is well known for its strength in artificial intelligence, including a world-renowned computational linguistics group with 3 faculty members and over 20 graduate students and postdocs (http://www.cs.toronto.edu/compling).  Our research group engages with an active psycholinguistics community at UofT that draws participants from Computer Science, Linguistics, Philosophy, Psychology, Speech-Language Pathology, and other departments across the campus. The university is consistently ranked in the top twenty in the world, and the St. George campus (the location of the postdoctoral fellowship) is situated in the vibrant mid-town area of the world’s most multicultural city (http://www.sgs.utoronto.ca/postdoctoralfellows/Pages/Life-in-Toronto.aspx).

We’re looking for candidates who are committed to the multidisciplinary study of language from a computational perspective, and who have a PhD in computational linguistics or in a related field with experience in computational modeling.  The salary will be a minimum of $60,000/year, depending on the successful candidate’s qualifications and experience, plus $5,000/year in research funds.  The term is for a minimum of one year, to start as soon as a successful candidate is identified.  Course teaching is not required, but opportunities to engage in teaching in computer science or cognitive science may be available.  Any inquiries concerning further details of the position should be directed to the e-mail address below.

Applications should contain: (1) a cover letter clearly indicating the candidate’s research goals for the postdoc and possible start dates, (2) a full CV, (3) a statement of research interests, (4) two or three example publications, and (5) the names and e-mail addresses of three referees.  Application materials should be e-mailed as PDF files to Suzanne Stevenson, suzanne@cs.toronto.edu.  Priority will be given to applications received by January 16, 2014.

CFP – Cognitive Modeling and Computational Linguistics 2015 (CMCL-2015)

2015 Workshop on Cognitive Modeling and Computational Linguistics (CMCL)
(CMCL 2015)

CALL FOR PAPERS

Cognitive Modeling and Computational Linguistics 2015 (CMCL-2015)
——————————————————————————————–

This workshop provides a venue for work in computational
psycholinguistics: the computational and mathematical modeling of
linguistic generalization, development, and processing. We invite
contributions that apply methods from computational linguistics to
problems in the cognitive modeling of any and all natural
language-related abilities.

The 2015 workshop will be co-located with NAACL-HLT and follows in the
tradition of earlier CMCL meetings at ACL 2010, ACL 2011,
NAACL-HLT 2012, ACL 2013, ACL 2014.

Scope and Topics
————————

The workshop invites a broad spectrum of work in the cognitive science
of language, at all levels of analysis from sounds to discourse and on
both learning and processing. Topics include, but are not limited to:

* incremental parsers for diverse grammar formalisms
* derivations of quantitative measures of comprehension difficulty, or
predictions regarding generalization in language learning
* stochastic models of factors encouraging one production or interpretation
over its competitors
* models of semantic/pragmatic interpretation, including psychologically
realistic notions of word meaning, phrase meaning, composition, and
pragmatic inference
* models and empirical analysis of the relationship between mechanistic
psycholinguistic principles and pragmatic or semantic adaptation
* models of human language acquisition and/or adaptation in a changing
linguistic environment
* models of linguistic information propagation and language change in
communication networks
* models of lexical acquisition, including phonology, morphology, and
semantics
* psychologically motivated models of grammar induction or semantic learning

Submissions are especially welcomed that combine computational
modeling work with experimental or corpus data to test theoretical
questions about the nature of human language acquisition,
comprehension, and/or production.

Submissions
—————–

This call solicits full papers reporting original and unpublished
research that combines cognitive modeling and computational
linguistics. Accepted papers are expected to be presented at the
workshop and will be published in the workshop proceedings. They
should emphasize obtained results rather than intended work, and
should indicate clearly the state of completion of the reported
results. A paper accepted for presentation at the workshop must not be
presented or have been presented at any other meeting with publicly
available proceedings. No submission should be longer than necessary, up
to a maximum 8 pages plus two additional pages containing references.

If essentially identical papers are submitted to other conferences or
workshops as well, this fact must be indicated at submission time.

To facilitate double-blind reviewing, submitted manuscripts should not
include any identifying information about the authors.

Submissions must be formatted using ACL 2015 submission guidelines at

http://naacl.org/naacl-hlt-2015/call-for-papers.html

Submission style templates are available at:

http://naacl.org/naacl-pubs/

Contributions should be submitted in PDF via the submission site:

https://www.softconf.com/naacl2015/cmcl

The submission deadline is 11:59PM Pacific Time on March 6, 2015.

Best Student Paper
————————–

The best paper whose first author is a student will receive the Best
Student Paper award. All accepted CMCL papers will be published
in the workshop proceedings as is customary at ACL conferences.

Important Dates
———————

Submission deadline: 6 March 2015
Notification of acceptance: 24 March 2015
Camera-ready versions due: 3 April 2015
Workshop: June 4, 2015

Workshop Chairs
———————–

Tim O’Donnell
Department of Brain and Cognitive Sciences, Massachusetts Institute of
Technology, USA

Marten van Schijndel
Department of Linguistics, The Ohio State University, USA

Program Committee
—————————

Omri Abend, University of Edinburgh
Steven Abney, University of Michigan
Afra Alishahi, Tilburg University
Libby Barak, University of Toronto
Marco Baroni, University of Trento
Robert Berwick, MIT
Klinton Bicknell, Northwestern University
Christos Christodoulopoulos, University of Illinois at Urbana-Champaign
Alexander Clark, King’s College
Moreno Cocco, University of Lisbon
Jennifer Culbertson, George Mason University
Vera Demberg, Saarland University
Brian Dillon, University of Massachusetts Amherst
Micha Elsner, The Ohio State University
Naomi Feldman, University of Maryland
Alex Fine, University of Illinois at Urbana-Champaign
Bob Frank, Yale University
Michael Frank, Stanford University
Stefan Frank, Radboud University Nijmegen
Stella Frank, Edinburgh University
Ted Gibson, MIT
Sharon Goldwater, Edinburgh University
Carlos Gomez Gallo, Northwestern University
Noah Goodman, Stanford University
Thomas Graf, Stony Brook University
John Hale, Cornell University
Jeffrey Heinz, University of Delaware
Tim Hunter, University of Minnesota
Mark Johnson, Macquarie University
Frank Keller, University of Edinburgh
Shalom Lappin, King’s College
Roger Levy, UCSD
Pavel Logacev, Potsdam University
Titus von der Malsburg, UCSD
Rebecca Morley, The Ohio State University
Aida Nematzadeh, University of Toronto
Ulrike Pado, Hochschule fuer Technik, Stuttgart
Bozena Pajak, Northwestern University
Lisa Pearl, UC Irvine
Massimo Poesio, University of Essex
Ting Qian, Brown University
Roi Reichart, Technion University
David Reitter, Penn State University
William Schuler, The Ohio State University
Nathaniel Smith, University of Edinburgh
Ed Stabler, UCLA
Mark Steedman, University of Edinburgh
Patrick Sturt, University of Edinburgh
Colin Wilson, Johns Hopkins University
Alessandra Zarcone, Saarland University
Jelle Zuidema, University of Amsterdam

Read more:
http://portal.aclweb.org/content/2015-workshop-cognitive-modeling-and-computational-linguistics-cmcl

CFP: CogSci 2015 (conference 23-25 July; deadline: 1 February)

CogSci 2015
37th Annual Meeting of the
Cognitive Science Society
Mind, Technology, and Society
 Pasadena, California, USA

July 23 – 25, 2015
(Tutorials & Workshops: July 22, 2015)

The online submission is now open.  You may review the criteria and make your submission at:  http://cognitivesciencesociety.org/conference2015/submissions.html

 

____________________________________________________________

Highlights Include:

Plenary Speakers:

Martha Farah, University of Pennsylvania

Christof Koch, Allen Institute for Brain Science

Rosalind Picard, MIT Media Laboratory

14th Rumelhart Prize Presentation:

Michael Jordan, UC Berkeley

Heineken Prize for Cognitive Science Presentation:

Jay McClelland, Stanford University

Invited Symposia:

Philosophy of Mind

Technological Innovation

Cognition in Society

Cognitive scientists from around the world are invited to attend CogSci 2015! The Annual Meeting of the Cognitive Science Society is the world’s premier annual conference for the interdisciplinary study of cognition. Cognitive Science draws on a broad spectrum of disciplines, topics, and methodologies, and CogSci 2015 reflects this diversity in its theme: Mind, Technology, and Society.

 

In addition to the invited presentations, the program will be filled with competitive peer-reviewed submissions of several kinds: research papers, contributed symposia, publication-based talks, member abstracts, tutorials, and workshops. Submissions may report on work involving any approach to Cognitive Science, including, but not limited to, anthropology, artificial intelligence, computational cognitive systems, cognitive development, cognitive neuroscience, cognitive psychology, education, evolution of cognition, linguistics, logic, machine learning, network analysis, neural networks, philosophy, and robotics.

 

The deadline for submissions is February 1, 2015. All submissions must be made via the conference program website.  Information regarding the submission process may be found at:

http://cognitivesciencesociety.org/conference2015/submissions.html

 

We look forward to seeing you in Pasadena!

 

Conference Co-Organizers:
Rick Dale, Carolyn Jennings, Paul Maglio, Teenie Matlock, David Noelle, Anne Warlaumont, Jeff Yoshimi

Cognitive & Information Sciences; University of California, Merced

cogsci2015@cogsci.ucmerced.edu
 

Cognitive Science Society

www.cognitivesciencesociety.org

info@cognitivesciencesociety.org

clclab @the NIPS Deep Learning Workshop

Our paper “Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition” was accepted for the NIPS 2014 Workshop on Deep Learning and Representation Learning.

Here is the abstract:

Inside-Outside Semantics: A Framework for Neural Models of Semantic Composition

Phong Le & Willem Zuidema

The Recursive Neural Network (RNN) model and its extensions have been shown to be powerful tools for semantic composition with successes in many natural language processing (NLP) tasks. However, in this paper, we argue that the RNN model is restricted to a subset of NLP tasks where semantic compositionality plays a role. We propose an extension called Inside-Outside Semantics. In our framework every node in a parse tree is associated with a pair of representations, the inner representation for representing the content under the node, and the outer representation for representing its surrounding context. We demonstrate how this allows us to develop neural models for a much broader class of NLP tasks and for supervised as well as unsupervised learning. Our neural-net model, Inside-Outside Recursive Neural Network, performs on par with or better than the state-of-the-art (neural) models in word prediction, phrase-similarity judgments and semantic role labelling.
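The inner/outer pairing described in the abstract can be sketched with a toy recursive pass over a binary parse tree. This is a minimal sketch under our own assumptions (random untrained matrices, tanh nonlinearity, dimension 4, distinct leaf words); the paper's actual architecture and training procedure are richer.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy representation size (our assumption, not the paper's)

# Untrained toy composition matrices; in a real model these are learned.
W_in = rng.standard_normal((D, 2 * D)) * 0.1
W_out = rng.standard_normal((D, 2 * D)) * 0.1

def inner(tree, emb):
    """Bottom-up pass: the inner representation summarizes the content
    under a node (a leaf's inner representation is its word embedding)."""
    if isinstance(tree, str):
        return emb[tree]
    left, right = tree
    return np.tanh(W_in @ np.concatenate([inner(left, emb), inner(right, emb)]))

def leaf_outers(tree, emb, ctx=None):
    """Top-down pass: the outer representation summarizes a node's
    surrounding context (the root's context is empty, i.e. zero).
    A child's context combines the parent's context with its sibling's content."""
    if ctx is None:
        ctx = np.zeros(D)
    if isinstance(tree, str):
        return {tree: ctx}
    left, right = tree
    ctx_left = np.tanh(W_out @ np.concatenate([ctx, inner(right, emb)]))
    ctx_right = np.tanh(W_out @ np.concatenate([ctx, inner(left, emb)]))
    return {**leaf_outers(left, emb, ctx_left), **leaf_outers(right, emb, ctx_right)}

emb = {w: rng.standard_normal(D) for w in ["the", "cat", "sleeps"]}
tree = (("the", "cat"), "sleeps")   # the parse ((the cat) sleeps)
content = inner(tree, emb)          # content vector of the whole sentence
contexts = leaf_outers(tree, emb)   # one context vector per leaf
```

Pairing the bottom-up and top-down passes gives every position both a content and a context vector, which is what lets such a model score tasks like predicting a word from its surrounding context.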

clclab @EMNLP

Phong presented our paper “The Inside-Outside Recursive Neural Network model for Dependency Parsing” at EMNLP in Qatar. Here is the abstract:

The Inside-Outside Recursive Neural Network model for Dependency Parsing

(published pdf here)

Phong Le & Willem Zuidema

We propose the first implementation of an infinite-order generative dependency
model. The model is based on a new recursive neural network architecture, the
Inside-Outside Recursive Neural Network. This architecture allows information to
flow not only bottom-up, as in traditional recursive neural networks, but also top-down.
This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity
seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores.

 

Interesting paper in Science: The atoms of neural computation

Science 31 October 2014:
Vol. 346 no. 6209 pp. 551-552
DOI: 10.1126/science.1261661

The atoms of neural computation

The human cerebral cortex is central to a wide array of cognitive functions, from vision to language, reasoning, decision-making, and motor control. Yet, nearly a century after the neuroanatomical organization of the cortex was first defined, its basic logic remains unknown. One hypothesis is that cortical neurons form a single, massively repeated “canonical” circuit, characterized as a kind of “nonlinear spatiotemporal filter with adaptive properties” (1). In this classic view, it was “assumed that these…properties are identical for all neocortical areas.” Nearly four decades later, there is still no consensus about whether such a canonical circuit exists, either in terms of its anatomical basis or its function. Likewise, there is little evidence that such uniform architectures can capture the diversity of cortical function in simple mammals, let alone characteristically human processes such as language and abstract thinking (2). Analogous software implementations in artificial intelligence (e.g., deep learning networks) have proven effective in certain pattern-classification tasks, such as speech and image recognition, but have likewise made few inroads in areas such as reasoning and natural language understanding. Is the search for a single canonical cortical circuit misguided?