Symposia

Schedule

Thursday · Oct 8
9:00 - 11:00

Representation of language networks. A talk between the Brain and Artificial Intelligence

Chairs
Bruno Bianchi

Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, FCEyN-UBA, CONICET

Juan Esteban Kamienkowski

Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, FCEyN-UBA, CONICET

Neuroscience and Artificial Intelligence (AI) have a long and intertwined history. On one side, a better understanding of the brain could play a key role in building intelligent machines. On the other, AI models boost Neuroscience research, providing powerful techniques for analyzing the growing torrent of data available today. One example of this virtuous cycle is the study of language. Human language has a unique level of complexity, allowing us to generate abstractions of concepts and to communicate them to other people who can understand those abstractions. This capacity has historically been of great interest to linguists, who study how language is structured, and to neuroscientists, who try to understand how it is implemented in the brain using experimental techniques such as invasive and non-invasive electrophysiology and fMRI. More recently, Computer Science has joined these fields by studying how fluid communication is achieved between humans, and between humans and machines, providing methods and models to the other disciplines. In particular, the great advances in Natural Language Processing (NLP) algorithms in recent years have made it possible to solve highly complex linguistic tasks at near-human levels. In this symposium, we will hear different voices in the dialogue between Neuroscience and Artificial Intelligence on understanding how the brain represents language.
Speakers

Understanding naturalistic speech processing using invasive and noninvasive electrophysiology

Liberty Hamilton

The University of Texas at Austin

Understanding natural speech involves parsing complex acoustic cues from multiple sources in order to create meaningful percepts. Our work uses encoding models to understand how the brain extracts phonological and acoustic information from naturalistic speech stimuli, using a combination of intracranial electrophysiology in patients undergoing surgical treatment for epilepsy and scalp EEG in non-patient participants. This talk will describe efforts to extend encoding models to continuous speech from highly dynamic, noisy, audiovisual stimuli.
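
As a rough illustration of the encoding-model approach described in this abstract, the sketch below fits a time-lagged linear model (in the spirit of a temporal receptive field) that predicts a single electrode's response from stimulus features. The feature matrix, response, lag window, and regularization strength are all hypothetical stand-ins, not details of the speaker's actual pipeline.

```python
# A minimal encoding-model sketch: ridge regression from time-lagged stimulus
# features to one electrode's response. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_features, n_lags = 1000, 30, 10  # e.g. spectrogram bins, 10 lags

stim = rng.standard_normal((n_samples, n_features))  # acoustic/phonological features
resp = rng.standard_normal(n_samples)                # one electrode's activity

# The response at time t is modeled from the stimulus at t, t-1, ..., t-(n_lags-1).
X = np.hstack([np.roll(stim, lag, axis=0) for lag in range(n_lags)])
X[:n_lags] = 0.0  # drop rows contaminated by the wrap-around of np.roll

model = Ridge(alpha=1.0).fit(X, resp)
r = np.corrcoef(model.predict(X), resp)[0, 1]
print(f"in-sample prediction correlation: {r:.3f}")
```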

Computational models of language processing reveal concept representations in the human brain

Alexander Huth

Departments of Computer Science & Neuroscience, The University of Texas at Austin

Natural language evokes widespread BOLD responses in the human brain, and these responses are mostly selective for particular concepts. Here we use voxelwise encoding models combined with novel computational methods to probe several aspects of concept-specific responses. First, are concept representations grounded in sensory modalities, or are they purely amodal? Using visually-grounded word embedding spaces we find that not only are representations of concrete words (e.g. apple) grounded in visual properties, but so are representations of closely related abstract words (e.g. education). Second, is it sufficient to model how the brain responds to single words, or should we also consider more complete phrases? Using new machine learning techniques we find that phrase-based models are significantly and substantially better for predicting BOLD responses in nearly every area of the brain. We also use these new phrase-based models to try to understand what concepts are represented or processed in each brain area, with some surprising results.
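
To make the voxelwise-encoding idea concrete, here is a minimal sketch under strong simplifying assumptions: word or phrase embeddings are taken as already aligned to fMRI acquisition times, and all arrays are synthetic. It fits a single ridge model with a multi-output target and scores each voxel by held-out prediction correlation, one common way such models are evaluated.

```python
# A voxelwise encoding-model sketch: map stimulus embeddings to per-voxel BOLD
# responses with ridge regression. Embeddings and BOLD data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trs, emb_dim, n_voxels = 500, 300, 2000

X = rng.standard_normal((n_trs, emb_dim))   # stimulus embedding per fMRI volume
Y = rng.standard_normal((n_trs, n_voxels))  # BOLD signal per voxel

split = int(0.8 * n_trs)  # time-ordered split to avoid leakage across volumes
model = Ridge(alpha=10.0).fit(X[:split], Y[:split])
Y_hat = model.predict(X[split:])

# Score each voxel by correlating predicted and held-out responses.
r = np.array([np.corrcoef(Y_hat[:, v], Y[split:, v])[0, 1]
              for v in range(n_voxels)])
print("mean held-out correlation:", r.mean())
```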

Visual words for mental health characterization

Diego Fernandez Slezak

Laboratorio de Inteligencia Artificial Aplicada, Departamento de Computación, FCEyN, Universidad de Buenos Aires

Discourse analysis has been successfully used to identify mental state alterations caused by mental disorders or by the use of different types of drugs. Indeed, processing of speech may predict future mental health in a prodromal population. In this talk we will explore how this idea may be extrapolated to visual words, i.e., a common vocabulary of paintings and drawings that reveals the underlying mental state of their authors.
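
One way to operationalize "visual words" is the classic bag-of-visual-words representation from computer vision; the sketch below is offered only as one plausible reading of the term, not as the speaker's method. It clusters local image descriptors into a codebook and represents each painting as a histogram over those clusters; the descriptors are random placeholders for real features (e.g. SIFT).

```python
# A bag-of-visual-words sketch: cluster patch descriptors into a "vocabulary"
# and describe each image as a histogram of visual-word counts.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_images, patches_per_image, desc_dim, vocab_size = 20, 100, 64, 50

# Stand-ins for local descriptors extracted from each painting or drawing.
descriptors = [rng.standard_normal((patches_per_image, desc_dim))
               for _ in range(n_images)]

# Each cluster centroid acts as one "visual word" in the shared vocabulary.
codebook = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
codebook.fit(np.vstack(descriptors))

# An image becomes a histogram over visual words, analogous to a bag of words.
histograms = np.stack([np.bincount(codebook.predict(d), minlength=vocab_size)
                       for d in descriptors])
print(histograms.shape)  # (n_images, vocab_size)
```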

Natural language in real brains and artificial neural networks

Leila Wehbe

Carnegie Mellon University

This is an exciting time to study language in the brain. Newly proposed NLP methods that can represent the meaning of sequences of words allow us to relate representations of the meaning of a text to the brain activity recorded while participants read that text. What can this tell us about the brain? What can it tell us about those NLP models? Is there a benefit to combining both into a common model? In this talk I will set up the background behind this approach and discuss recent progress on these three questions.
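
A minimal sketch of the text-to-brain mapping this abstract describes, assuming a pretrained transformer from the Hugging Face transformers library (GPT-2 is an arbitrary choice here) and synthetic brain data; real analyses involve careful stimulus-to-scan alignment, cross-validation, and far larger stimulus sets.

```python
# Relate NLP representations to brain activity: extract sentence features from
# a pretrained transformer, then fit a linear map to (synthetic) brain data.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

sentences = [
    "The cat sat on the mat.",
    "Language evokes rich responses across the cortex.",
    "She opened the old book and began to read.",
    "Models of meaning can be compared with brain recordings.",
]
feats = []
with torch.no_grad():
    for s in sentences:
        hidden = model(**tokenizer(s, return_tensors="pt")).last_hidden_state
        feats.append(hidden.mean(dim=1).squeeze(0).numpy())  # mean-pool tokens
X = np.stack(feats)

# Hypothetical brain data: one response vector per sentence.
Y = np.random.default_rng(0).standard_normal((len(sentences), 100))

mapping = Ridge(alpha=1.0).fit(X, Y)  # linear map from text features to brain
print("fitted weights:", mapping.coef_.shape)  # (100, hidden_dim)
```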