The Spring 2019 Cognitive Science Colloquium Series schedule is shown below.  Details will be posted as soon as they are available.  As usual, the colloquium will be held on Fridays, from 12:00 - 1:30 p.m., in the Speech, Language, and Hearing Sciences Building, Room 205. Recordings of these talks are available on Panopto, which can be accessed with a UA NetID.

Since 2012, an annual feature of the colloquium series has been a special talk given by the Roger N. Shepard Distinguished Visiting Speaker. Please follow the link for a list of past speakers.

If you would like to receive email announcements about these and other events, please contact Program Coordinator Sandra Kimball to be added to the colloquium listserv.

Information about previous talks during this academic year can be found at the bottom of this list. Other past talks can be found at COGNITIVE SCIENCE COLLOQUIUMS ARCHIVE.




April 26

Cognitive Science Graduate Student Showcase

MINGLI LIANG, Psychology Department
TITLE: Human frontal delta-theta dynamics code distance and time inside teleporters

Past studies have suggested a critical role for hippocampal low-frequency oscillations in spatial navigation. Additionally, cortical and hippocampal theta oscillations often synchronize, suggesting the importance of cortical oscillations to movement-related coding as well. In a recent study, we found increased scalp frontal-midline delta-theta oscillations during movement involving free ambulation compared to standing still in healthy humans. One intriguing question, given these findings, regards the precise drivers of such low-frequency oscillations. While past studies have suggested that spatial distance (Vass et al. 2016) and movement speed (Watrous et al. 2011) may both contribute to low-frequency oscillations, temporal components may also be a significant driver. To address this issue, participants navigated a plus maze containing a target store at the end of each arm. Four teleporters were also dispersed across the arms, involving different spatial distances and temporal intervals. In a trial, participants first entered a teleporter and, upon exiting, were teleported back to the center of the plus maze, at which time they were instructed to find a target store. In the spatial distance condition, participants judged how far they travelled inside the teleporters; in the temporal interval condition, participants judged the temporal interval. On the basis of temporal interval or spatial distance (short vs. long), participants decided which target store to visit. As in the prior study, we used an omnidirectional treadmill to provide locomotion-based VR navigation experiences, simultaneously recording scalp EEG during teleportation and navigation epochs. Preliminary results showed that participants were able to discriminate between different spatial/temporal teleportation experiences at above-chance levels and were able to apply the cues to find the appropriate targets.
Analyses involving scalp EEG will test 1) whether frontal-midline theta oscillations persist during teleportation without visual, vestibular, and proprioceptive input, regardless of spatial or temporal condition, and 2) whether frontal-midline theta oscillations code space, time, or both. Our findings will help advance our understanding of the role of low-frequency oscillations in memory and navigation and deepen our understanding of the nature of the “cognitive map,” specifically whether a time code and a distance code co-exist within spatial knowledge.
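The abstract does not describe the analysis pipeline, but as a toy illustration of how a dominant frontal-midline delta-theta frequency might be identified in a recorded trace, here is a minimal DFT-based sketch; the signal, sampling rate, and band edges are all invented for the example:

```python
import math, cmath

def band_peak_hz(signal, fs, lo, hi):
    """Return the frequency (Hz) with the largest DFT magnitude in [lo, hi]."""
    n = len(signal)
    best_f, best_mag = None, -1.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            # Single DFT bin: X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n)
            x_k = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            if abs(x_k) > best_mag:
                best_f, best_mag = f, abs(x_k)
    return best_f

# Synthetic 2-second "EEG" trace: a 6 Hz theta rhythm plus a weaker 20 Hz component.
fs = 128
sig = [math.sin(2 * math.pi * 6 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(2 * fs)]
peak = band_peak_hz(sig, fs, 4, 8)  # dominant frequency in the delta-theta band
```

Real EEG analyses would use windowed spectral estimates over many trials, but the band-restricted peak search above captures the basic idea.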

ALYSSA SACHS, Speech, Language, and Hearing Sciences
TITLE: A retrospective study of long-term improvement on the Boston Naming Test

Purpose. Lexical retrieval impairment is a universal characteristic of aphasia and a common treatment focus. Although naming improvement is well documented, there is limited information to shape expectations regarding long term recovery. This was the motivation for a retrospective study of longitudinal data on the Boston Naming Test (BNT).

Methods. BNT scores were analyzed from a heterogeneous cohort of 42 individuals with anomia associated with a range of aphasia types. The data were collected over the course of 20 years from individuals who had participated in treatment and received at least two BNT administrations. A linear mixed model was implemented to evaluate effects of initial BNT score, time post onset, and demographic variables. For those over age 55, BNT change was evaluated relative to data from the Mayo Clinic’s Older American Normative Studies (MOANS).

Results. There was a significant average improvement of +7.67 points on the BNT in individuals followed for an average of two years. Overall, the average rate of improvement was +5.84 points per year, in contrast to a decline of 0.23 points per year in a healthy adult cohort from the MOANS. Naming recovery was approximately linear, with significant main effects of initial BNT score (i.e., initial severity) and time post onset; the greatest changes were noted in those whose initial severity was moderate.

Conclusions. These findings indicate a positive prognosis for naming improvement over time regardless of demographic factors and provide estimates for clinical predictions for those who seek rehabilitation during the chronic phase.






PAST 2018-19 TALKS


August 31, 2018

KOBUS BARNARD, Professor of Computer Science, UA Department of Computer Science

TITLE: Multiple-gaze geometry: Inferring novel 3D locations from gazes observed in monocular video

ABSTRACT:  I will briefly discuss the current success of black box classifiers and how they can be less suitable for explanatory and/or mechanistic models that need to use restricted (domain-specific) representations. I will then present work on inferring what is going on in videos of people, using strong natural representations within a Bayesian framework. More specifically, this framework treats observed image data as evidence for underlying models that explain it, and going from data to model is achieved using Bayesian inference executed with MCMC sampling. Using this approach, we are able to track the 3D location of people using a single, uncalibrated video camera (e.g., we do not know, in advance, things like the focal length of its lens, which we infer as part of the process).
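A minimal sketch of the kind of MCMC-based Bayesian inference the abstract describes: a random-walk Metropolis sampler recovering a single hypothetical scene parameter from noisy synthetic observations (not the actual tracking model, whose state space is far richer):

```python
import random, math

def metropolis(log_post, x0, steps, step_size, rng):
    """Random-walk Metropolis sampler over a 1-D parameter."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        prop = x + rng.gauss(0, step_size)
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

rng = random.Random(0)
# Invented "observations" of an unknown scene parameter, noise sd = 1.
data = [rng.gauss(5.0, 1.0) for _ in range(50)]

def log_post(mu):  # flat prior + Gaussian likelihood (up to a constant)
    return -0.5 * sum((d - mu) ** 2 for d in data)

chain = metropolis(log_post, x0=0.0, steps=5000, step_size=0.5, rng=rng)
estimate = sum(chain[1000:]) / len(chain[1000:])  # posterior mean after burn-in
```

The posterior mean lands near the value that generated the data; the real system applies the same data-to-model logic to 3D pose and camera parameters jointly.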

I will then discuss recently published work on including the gaze directions of the participants, and how our approach for explicitly representing the scene in 3D naturally provides for inferring who is looking at whom or what. As suggested by the title, I will also discuss how our approach can discover the 3D locations of what people tend to look at, including locations not visible to the camera. This emerges from our approach rather intuitively, as the intersection of gaze angles rooted in different points in space provides evidence for 3D locations. Finally, I will mention a few possible extensions that we are considering.
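The gaze-intersection idea can be sketched in two dimensions (the actual work operates in 3D with uncertainty over gaze estimates; the origins and angles below are invented):

```python
import math

def gaze_intersection(p1, a1, p2, a2):
    """Intersect two 2-D gaze rays with origins p1, p2 and gaze angles a1, a2 (radians)."""
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (Cramer's rule on the 2x2 system).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        return None  # parallel gazes provide no intersection evidence
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two people at (0, 0) and (4, 0), both looking at the point (2, 2).
target = gaze_intersection((0, 0), math.atan2(2, 2), (4, 0), math.atan2(2, -2))
```

With more than two gazes, a least-squares version of the same computation pools evidence from all rays, which is how attended locations outside the camera's view can still be localized.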

While the nuts and bolts of our approach are quite technical, I will attempt to provide a largely non-mathematical understanding of such models and the associated inference engines.

This work is in collaboration with former UA CS PhD students Ernesto Brau, Jinyan Guan, and Tanya Jeffries.


September 7, 2018

TERRY REGIER, Professor of Linguistics and Cognitive Science, Department of Linguistics, Cognitive Science Program, University of California at Berkeley

TITLE: Semantic typology and the Sapir-Whorf hypothesis in computational perspective

ABSTRACT: Why do languages have the semantic categories they do, and what do those categories reveal about cognition and communication? Word meanings vary widely across languages, but this variation is constrained. I will argue that this pattern reflects a range of language-specific solutions to a universal functional challenge: that of communicating precisely while using minimal cognitive resources. I will present a general computational framework that instantiates this idea, and will show how that framework accounts for cross-language variation in several semantic domains. I will then address the Sapir-Whorf hypothesis - the claim that such language-specific categories in turn shape cognition. I will argue that viewing this hypothesis through the lens of probabilistic inference has the potential to resolve two sources of controversy: the challenge the hypothesis apparently poses to the widespread assumption of a universal groundwork for cognition, and the fact that some findings supporting the hypothesis fail to replicate reliably.
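As a toy numerical illustration of the precision-versus-resources trade-off (not the speaker's actual framework), one can score hypothetical category systems by the expected surprisal of recovering a meaning from its word: finer systems communicate more precisely but need more words.

```python
import math

def comm_cost(partition, n_meanings):
    """Expected surprisal (bits) of recovering an equiprobable meaning given its category."""
    return sum(len(cat) / n_meanings * math.log2(len(cat)) for cat in partition)

# Toy meaning space of 4 equally likely meanings, carved up two ways.
fine   = [[0], [1], [2], [3]]   # one word per meaning: precise but costly (4 words)
coarse = [[0, 1], [2, 3]]       # two broad words: cheap but imprecise

cost_fine   = comm_cost(fine, 4)    # 0 bits of residual uncertainty
cost_coarse = comm_cost(coarse, 4)  # 1 bit of residual uncertainty
```

Languages in this view sit at different points on the frontier traded out between a cost measure like this and the complexity (here, the number of categories) of the system.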


September 14

STEPHEN COWEN, Assistant Professor, UA Department of Psychology

TITLE: How ketamine alters brain activity and potential mechanisms for its therapeutic and dissociative effects

ABSTRACT: Although ketamine was developed in the 1960s as an anesthetic, the potential therapeutic applications for the drug have expanded considerably in the last decade. For example, hours-to-days-long exposure can provide weeks-to-months-long reduction of treatment-resistant depression, post-traumatic stress disorder (PTSD), chronic pain, and L-DOPA-induced dyskinesias associated with the treatment of Parkinson’s disease. Ketamine is also a popular recreational drug due to its powerful dissociative and perceptual effects, which include feelings of disembodiment and vivid perceptual hallucinations. Despite its widespread use and its potential for abuse, little is understood about the neural mechanisms that underlie ketamine’s therapeutic or dissociative effects. In this talk, we will review our research investigating ketamine’s capacity to produce profound changes in neuronal synchrony throughout the brain. We will also discuss how changes in synchrony may contribute to ketamine's effects on perception and its use as a potential treatment for Parkinson’s disease and L-DOPA-induced dyskinesias.


September 21

KAREN SCHLOSS, Assistant Professor, Department of Psychology and Wisconsin Institute for Discovery, University of Wisconsin at Madison

TITLE: Color inference for visual communication

ABSTRACT: Visual reasoning allows people to translate visual input into conceptual understanding. The visual reasoning system presumably evolved so organisms could quickly and flexibly interpret visual input in their natural environment. Now, humans leverage this system for visual communication by creating synthetic environments, or visualizations, for others to interpret. These visualizations include the graphs, maps, and diagrams that are central to science communication. Interpreting visualizations is easier when the encoded mappings between concepts and visual features match people’s expectations, or inferred mappings. To harness this principle in visualization design, it is necessary to understand what determines people’s inferred mappings. In this talk, I will present the Color Inference Framework for how people make conceptual inferences from color, and how those inferences influence judgments about the world. I will then discuss studies on color-coding systems for recycling and for colormap data visualizations. The results of these studies demonstrate that inferred mappings are context dependent and flexible, influenced by perceptual relations among colors in visual displays and relative activation of concepts in people’s minds. The results have implications for designing effective and efficient media for visual communication.


September 28

LINDA RESTIFO, Professor of Neurology, Neuroscience, and Cellular & Molecular Medicine, UA Department of Neurology

TITLE: Size matters: Heads, brains, and neurons in genetic intellectual disabilities

ABSTRACT: Nearly one thousand human genes are known to be essential for the development of neurotypical cognitive function. Conversely, deleterious mutations in any one of these genes cause intellectual disability (ID), either in isolation or as part of a syndrome. For a substantial fraction of these disorders, small head size (microcephaly), due to impaired brain growth, is detectable during the first few years of life. Almost all of the genes involved are very highly conserved, meaning that they are present and control brain development in simpler organisms, such as the fruit fly, Drosophila melanogaster.  Members of my research team have investigated mutants of several ID-and-microcephaly genes in Drosophila. We discovered that their brain neurons extend small arbors of branches when cultured in vitro, but each one has a distinct abnormality.  We have also used cultured neurons to identify drugs that reverse defects caused by mutations. I will present the key data in support of a strategy to develop safe and effective drugs for improving brain development in children with microcephaly-associated ID.


October 5

JANE M CARRINGTON, Associate Professor, UA Nursing

TITLE:  Nurse-to-Nurse communication using the Electronic Health Record with implications for decision-making and patient outcomes

ABSTRACT:  Nearly 100,000 patients die each year in our nation’s hospitals due to miscommunication. Unfortunately, the implementation of the current electronic health record (EHR) has done little to improve this statistic. The EHR has increased the legibility of the health record, supports data entry, and has made the record available through the network to all members of the health care team. Nurses have also reported, however, that retrieval of patient information is difficult. Furthermore, the patient data collected in the current EHR are not considered valuable for the continuing care of patients. These attributes threaten the effectiveness of the current EHR as a communication system. Nurses have stated that their primary source of patient information is the change-of-shift hand-off. Unfortunately, the hand-off is also an ineffective communication system, often plagued with errors. Interestingly, for the same patient, the EHR and the hand-off rarely align, owing to missing and inconsistent patient information. In addition, the current EHR and hand-off pose a threat to effective nurse-to-nurse communication and decision-making for patients who experience a change in status. Of particular interest are patients who experience a clinical event (CE) of pain, fever, bleeding, or changes in output, respiratory status, or level of consciousness. The hypothesis of my research is that effective nurse-to-nurse communication can reduce unexpected deaths for patients who experience a clinical event. My research has focused on the language used by nurses to describe CEs in the EHR and hand-off, nurse-EHR interaction, and decision-making. My team is working toward solutions for improving the EHR using strategies that include natural language processing, machine learning, and artificial intelligence. Here I will present an overview of my research exploring nurse-to-nurse communication of CEs and decision-making and their impact on patient outcomes.


October 12

ARNE DAVID EKSTROM, Associate Professor, UA Department of Psychology

TITLE:  Decoding how we represent space when we navigate

ABSTRACT:  While the field has made significant progress in understanding how other species navigate, many fundamental questions remain regarding how humans accomplish this important everyday function.  Here, we present studies that attempt to understand our unique and flexible code for space.  We present experiments investigating the interface between cartographic maps and spatial representation, cued recall and spatial representation, and verbal/linguistic codes and spatial representation.  Together, these findings will explore key differences in our navigational code compared to other species, suggesting that cognitive representations for cartographic maps and language are dynamically integrated with space to a greater extent than suggested previously.



October 19

GARY LUPYAN, Associate Professor, Department of Psychology, University of Wisconsin at Madison

TITLE:  From perception to symbolic thought: How language augments human cognition

ABSTRACT:  Language is often held to be one of the defining traits of our species. Yet for all its claimed importance, most cognitive scientists work under the assumption that language, while useful for communicating pre-existing thoughts, plays a minor if any role in their construction. I will argue that this view is mistaken and that words play a much more central role in forming useful mental representations than is generally acknowledged. I will show how the use of language actively modulates performance on “nonverbal” tasks from low-level perception to higher-level reasoning. Taken together, the results suggest that some of the unique aspects of human cognition may stem from the use of words to flexibly transform mental representations into more categorical states. These findings have relevance for understanding the cognitive consequences of language impairments and for questions concerning linguistic relativity.


October 26

CHANGXU WU, Professor, UA Department of Systems & Industrial Engineering

TITLE:  Human performance modeling and its applications in systems engineering

ABSTRACT:  This research seminar introduces the major research activities in cognitive systems at the University of Arizona, focusing on human cognition/performance modeling and its applications in systems engineering (e.g., human-in-the-loop transportation systems and human-machine interaction). Human performance modeling is a growing and challenging area in human factors and cognitive systems engineering. It builds computational models based on the fundamental mechanisms of human cognition and human-system interaction, employs both mathematical and discrete-event simulation methods from industrial engineering, and predicts human performance and workload in real-world systems. It can be used to design, improve, and evaluate systems with humans in the loop. Current and future research topics will also be introduced.


November 2

CHARLES NOUSSAIR, Professor of Economics, UA Eller Department of Economics

TITLE:  Emotions and economic decision making

ABSTRACT:  This talk describes a number of studies relating emotional state and economic decision making. Two technologies that are new to economics are described: (1) the use of face-reading software to measure and track emotional state, and (2) the use of 360-degree videos shown in virtual reality to induce emotions. The talk describes some results regarding the relationship between emotions and risk taking, honesty, cooperation, charitable giving, and reciprocity.


November 9

ELIZABETH L GLISKY, Professor, UA Department of Psychology

TITLE:  Enhancing memory in normally-aging adults

ABSTRACT:  This talk will describe two sets of experiments looking at ways to enhance memory in older adults:  1) a cognitive strategy relying on self-referential processing, and 2) a social strategy relying on social interaction.


November 16

MASSIMO PIATTELLI-PALMARINI, Professor, UA Departments of Linguistics & Psychology

TITLE:  Normal language in abnormal brains

ABSTRACT:  There is little doubt that, in the adult, specific brain lesions cause specific language deficits. Yet, brain localizations of linguistic functions are made problematic by several reported cases of normal language in spite of major brain anomalies, mostly, but not exclusively, occurring early in life. The signal cases are hydrocephaly, spina bifida and hemispherectomy. Many patients have normal syntax and lexicon, but suffer from grave problems in the use of language (they are linguistically dyspraxic), showing that the interface is affected. These cases are discussed and possible solutions are suggested: namely a vast redundancy of neurons and/or the role of microtubules as neuron-internal processors and key factors in signaling and guiding the growth and reconfiguration of the brain.


November 30

LAURA WAGNER, Associate Professor, Department of Psychology; Director, Language Sciences Research Lab, The Ohio State University

TITLE:  Performance factors influencing competence with linguistic aspect

ABSTRACT:  It is frequently argued that children are competent with some dimension of language, but that their knowledge is being masked by performance limitations. However, in most cases the evidence for these performance factors is indirect, and the specific links between cognitive skills and linguistic forms are vague. The current work examines a well-documented under-extension in children’s language and the cognitive skills that predict children’s performance on it. The linguistic phenomenon involves aspect: children prefer to say (and better comprehend) predicates describing bounded events with perfective rather than imperfective morphology, and the reverse for unbounded events. That is, despite the fact that all four of the following sentences are grammatical, children prefer “The girl closed the door” over “The girl was closing the door” and “The girl was listening to music” over “The girl listened to music”. Children and adults were tested on their ability to understand a range of aspectual combinations (both preferred and non-preferred) and were also tested on a series of independent cognitive assessments. The results showed that inhibitory control and vocabulary size were specifically linked with different non-preferred combinations, in ways consistent with formal semantic accounts of those linguistic forms. More generally, the results show how it is possible to use performance to illuminate the nature of competence.


January 18, 2019

SUZANA HERCULANO-HOUZEL, Associate Professor, Psychological Sciences; Associate Director for Communications, Vanderbilt Brain Institute, Vanderbilt University

TITLE:  Life slows down when you have more neurons

ABSTRACT:  Sure, having more neurons in the cerebral cortex must make it capable of more complex and flexible cognition, so our sixteen billion cortical neurons place humans at a clear cognitive advantage over all other animals. But in this talk I'll argue that the most consequential effect of having so many neurons is something else: more time to mature and then to live once independence is reached. With more cortical neurons comes more time to gather information, build knowledge, and exchange it with past and future generations – hand in hand, of course, with the increased computational capacity that makes it all possible.


January 25

MELANIE SEKERES, Assistant Professor of Psychology and Neuroscience, Baylor University

TITLE:  Run for the cure: Using exercise to minimize cognitive impairment and neurotoxicity following cancer treatment

ABSTRACT:  Patients receiving radiotherapy and chemotherapy treatments for brain and non-brain cancers commonly report cognitive disturbances in memory and executive function. Treatment disproportionately impacts the ability to form new (anterograde) memories, while relatively sparing older (retrograde) memories. Growing evidence from pre-clinical studies in rodents confirms clinical reports in patients and suggests that such cognitive disturbances are mediated by the neurotoxic effects of radiation and chemotherapeutic drugs, which reduce hippocampal volume, neurogenesis, and white matter, and increase expression of pro-inflammatory cytokines. Exercise is a modifiable lifestyle factor with known therapeutic benefits. Considerable overlap exists between the cellular mechanisms supporting running-enhanced cognition and the cellular mechanisms altered by chemotherapy and radiation treatment, including opposing effects on neurogenesis and inflammatory cytokines. I will discuss findings in patients and rodents suggesting that exercise, and running in particular, may be an effective means of promoting functional recovery from radiotherapy- and chemotherapy-related cognitive impairment.


February 1

MANDY J. MAGUIRE, Associate Professor of Behavioral and Brain Sciences, University of Texas at Dallas

TITLE:  Using event related potentials and neural oscillations to study developmental changes in language comprehension and word learning

ABSTRACT:  EEG, primarily via ERPs (Event Related Potentials), has provided a window into complex and difficult-to-assess aspects of cognition and language processing for decades. Current advances in data collection and analysis have led to increased interest in expanding EEG analyses to include studies of event-related neural oscillations. The multidimensionality of these data (simultaneous changes in multiple frequency bands at each electrode site) and the fact that neural oscillations are less time-limited than ERPs have made them particularly interesting for studying language and language development. Here we review a series of studies using ERPs and neural oscillations to study language comprehension and word learning in children and adults. Overall, the findings indicate that ERPs and neural oscillations provide complementary but sometimes unique windows into language development. These studies provide new insights about developmental changes in neural engagement related to semantics, syntax, and word learning. We will discuss implications of this work as well as new applications for using ERPs and neural oscillations during word-learning tasks to study the vocabulary gap between children from low- and higher-SES homes in grade school.


February 15

ZOE DRAYSON, Assistant Professor of Philosophy, University of California at Davis

TITLE:  Inferential cognition

ABSTRACT:  What does it mean to describe a cognitive process as inferential? In cognitive science it is common to make a distinction between processes that are inferential and processes that are associative. There is, however, no consensus as to how this distinction should be drawn. I explore various options in the literature, and situate them in the context of broader questions about cognition. Some philosophers, for example, argue that inferential thought is necessarily conscious and thus deny that unconscious processes are genuinely inferential. There is a related concern that inference is tied to notions of rationality and reasoning in a way which renders us responsible for our inferential cognition, which is difficult to reconcile with unconscious processes over which we have no control. I discuss the way that some of these considerations play out with respect to the psychological and philosophical literature on implicit bias.    


February 22

GERRY ALTMANN, Professor, Department of Psychological Sciences, University of Connecticut; Director, Connecticut Institute for the Brain and Cognitive Sciences

TITLE:  The challenge of event cognition: Object recognition at the interface of episodic and semantic memory

ABSTRACT:  To understand the event corresponding to e.g. “the chef chopped the onion” requires understanding (i) that the things under consideration have properties shared with other similar things (i.e. inherited from their type), (ii) that they have specific properties that uniquely distinguish them from other things of the same type (i.e. they are specific tokens), and (iii) that these properties change over time; the chef and the onion have (intersecting) histories that started with them in one state and ended with them in another. These histories are in fact trajectories of changes in state across time and space, and their intersection defines the interactions between objects (in this case, the action of the chef on the onion). To comprehend events therefore requires that we access knowledge about types of objects and combine this with knowledge about the dynamic episodic properties of individual tokens – that is, it requires creating on-the-fly representations of object tokens and their changes in state. In this talk I shall outline an account of how this might be accomplished in a brain that is able to distinguish the systematic associations that define semantic memory for object types from the non-systematic accidental associations that define the episodic characteristics of object tokens. The talk will include some slime mould, fMRI, and EEG, but presented for the neuroscientific novice.


March 1

JAY NUNAMAKER, Regents' and Soldwedel Professor of Management Information Systems, Computer Science, and Communication; Director, UA BORDERS Center

TITLE:  AVATAR--Automated Virtual Agent for Truth Assessment in Real-Time

ABSTRACT:  The automated interviewing system called AVATAR is designed to screen people for credibility assessment and deception detection. This talk will focus on an overview description of the AVATAR technology and an AVATAR demonstration.

Who is better at distinguishing truth-tellers from liars—a person or an artificial agent?

Humans are notoriously poor at detecting lies and other tell-tale signs of malintent. Using artificial intelligence and sensor technologies, BORDERS researchers are developing an avatar-based screening system that may be able to identify suspicious behavior more accurately than any human.

   The AVATAR Kiosk is designed to flag suspicious behavior that should be investigated more closely by a human agent in the field. This “primary screening” technology is designed for use at ports of entry, including border crossings and airports. The kiosk also has many other security applications such as visa processing, asylum requests and personnel screening and interviewing.

Generation Four:  Technology and Sensors.

BORDERS researchers have investigated over 300 psychophysiological and behavioral cues including vocalics, linguistics, kinesics, cardiorespiratory, eye behavior, and facial skin temperature. Based on findings, the AVATAR is equipped with non-invasive and non-intrusive instruments that record an individual’s physiological and behavioral reactions during the interview process:

  • Kinesics and facial emotion—computer vision algorithms via video camera
  • Vocalics—computer aural perception algorithms via audio (microphone)
  • Saccades, gaze duration, pupillometry—eye tracking via near-infrared camera
  • Linguistic content—natural language processing via deception detection algorithms
  • Physiological—heart rate, respiratory rate, blood pressure, and heat signature around the eyes, nose, and mouth

The Many Faces of the AVATAR.

BORDERS research shows that people react differently to various types of avatars. For example, the avatar’s gender, ethnicity and demeanor may produce dissimilar effects on the person being screened. Other factors include the avatar’s perceived power, trustworthiness, composure, expertise, likability and attractiveness.

   This finding has important implications for future screening practices. For example, human agents may select different avatars based on the individual being screened. Cultural considerations and context are also significant and must be taken into account.

What is BORDERS?

BORDERS is a multi-university research center established in 2008 by the Department of Homeland Security (DHS) as a Center of Excellence in border security and immigration, funded through 2016 at $21 million.

The total funding for the AVATAR project is $31 million. Presently BORDERS is funded by the US Army and NSF. Partners include the University of California, Santa Barbara; Stanford University; Dartmouth College; the University of Maryland; Rutgers University; West Virginia University; Clarkson University; San Diego State University; and the University of Nebraska at Omaha.


March 15

JACQUELINE GOTTLIEB, Professor of Neuroscience; Principal Investigator, Zuckerman Institute, Columbia University

TITLE:  Mechanisms of curiosity and information sampling in humans and non-human primates

ABSTRACT:  The vast majority of neuroscience research focuses on tasks in which participants have extensive prior knowledge about the relevant features, usually via explicit instructions that strongly constrain what they should memorize, attend to or learn. In natural behavior however, we rarely have the benefit of such explicit instruction. Instead, our brains must endogenously decide which one, of the practically infinite set of available signs and cues, to use to guide our learning, perception and action. Because of our field's overwhelming reliance on the "instructed cognition" paradigm, the mechanisms of active sampling remain very poorly understood. I will review the significance of this lacuna for current theories of cognition and decision making. I will then discuss behavioral and neurophysiological evidence pertaining to this question from our laboratory, with a focus on single neuron responses in the parietal cortex during the sampling of instrumental (decision-relevant) cues and during sampling of non-instrumental information motivated by curiosity. Time permitting, I will also describe studies analyzing instrumental and curiosity-based sampling in humans using new behavioral tasks and electroencephalography (EEG). Together, these studies begin to reveal the distributed processes through which the brain estimates the benefits and costs of gathering information and implements active sampling policies.
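The idea that the brain weighs the benefits of gathering information can be illustrated with a standard expected-information-gain computation over a toy hypothesis space (the probabilities below are invented; this is not the speaker's model):

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def expected_info_gain(prior, likelihoods):
    """Expected reduction in uncertainty about a hypothesis from sampling one cue.

    prior: P(h) over hypotheses; likelihoods[h][o]: P(observation o | hypothesis h).
    """
    n_obs = len(likelihoods[0])
    gain = 0.0
    for o in range(n_obs):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o for h in range(len(prior))]
        gain += p_o * (entropy(prior) - entropy(posterior))
    return gain

# Two equiprobable hypotheses; a diagnostic cue versus an uninformative one.
diagnostic    = expected_info_gain([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])
uninformative = expected_info_gain([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]])
```

An active sampler that scores cues this way, against a cost of sampling, would preferentially attend to the diagnostic cue.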


March 22

CATHERINE HARTLEY, Assistant Professor of Psychology, New York University

TITLE:  Developmental tuning of action selection

ABSTRACT:  Computational reinforcement learning models provide a framework for understanding how individuals can evaluate which actions are beneficial and which are best avoided. To date, these models have primarily been leveraged to understand learning and decision-making in adults. In this talk, I will present studies characterizing developmental changes, from childhood to adulthood, in the cognitive representations and computations engaged to evaluate and select actions. I will discuss how these changes may optimize behavior for an individual’s developmental stage and unique life experiences.


March 29

NOAM CHOMSKY, Laureate Professor of Linguistics, Agnese Nelms Haury Chair, University of Arizona

TITLE:  Language architecture and evolution: Some current perspectives

ABSTRACT:  Language has been an object of fascination since classical antiquity, but it was not until the development of the theory of computation by the mid-20th century that it became possible to formulate and investigate effectively the Basic Property of human language: a language L determines an unbounded array of hierarchically structured expressions, each of which is interpreted semantically as an expression of thought, and each of which can be externalized in some sensory modality (typically speech).  The general faculty of language FL specifies the possible human languages.  The theory of FL (UG, "universal grammar") must be rich enough to account for the properties of each particular language L, but also simple enough to account for the acquisition of each L and the evolution of FL.  These goals have been approached in revealing ways in recent years, with some surprising results that challenge long-held ideas.  I will review progress in this direction, along with open problems and deeper mysteries.


April 5

NICK CHATER, Professor of Behavioral Science, Warwick Business School, University of Warwick

2018-2019 Roger N. Shepard Distinguished Visiting Scholar

TITLE:  Virtual bargaining: A microfoundation for the theory of social interaction

ABSTRACT:  How can people coordinate their actions or make joint decisions? One possibility is that each person attempts to predict the actions of the other(s), and best-responds accordingly. But this can lead to bad outcomes, and sometimes even vicious circularity. An alternative view is that each person attempts to work out what the two or more players would agree to do, if they were to bargain explicitly. If the result of such a "virtual" bargain is "obvious," then the players can simply play their respective roles in that bargain. I suggest that virtual bargaining is essential to genuinely social interaction (rather than viewing other people as instruments), and may even be uniquely human. This approach aims to respect methodological individualism, a key principle in many areas of social science, while explaining how human groups can, in a very real sense, be "greater" than the sum of their individual members. This viewpoint has implications for the nature of communication, the ‘moral emotions,’ and the emergence of norms, rules and institutions.


April 12

LOGAN T. TRUJILLO, Assistant Professor, Department of Psychology, Texas State University

TITLE:  Testing the free energy principle for the brain during visual categorization in humans

ABSTRACT:  According to the theory of active inference, the brain predicts sensations and infers their causes via a generative model of the world. Active inference is achieved when the brain minimizes its free energy, an information-theoretic upper bound on the difference between the brain’s current and predicted states; this minimization of brain free energy is termed the “Free Energy Principle (FEP)”. Perception and action correspond to two ways a brain can minimize free energy: i) changing its beliefs about the world (i.e. its generative model), or ii) acting on the world in order to change sensory input in accordance with its beliefs. The free energy principle may provide a general explanation for how the brain realizes perception and action; however, empirical confirmations of this principle are currently limited.
   This talk will report my efforts to empirically quantify the free energy of global states of the human brain by combining techniques from experimental psychology, electroencephalography, computational modeling, and machine learning. These efforts focus on global brain free energy states arising during the active inference of visual category structure (2-AFC categorization of Gabor stimuli, where the categories are defined by a combination of stimulus orientation and spatial frequency). I find that global brain free energy is lowest when the brain's discrimination of visual categories matches the reported perception of these categories, whereas global brain free energy is highest when the neural discrimination and the reported perception are mismatched. This finding is as expected if visual categorizations are based on a relatively accurate generative model of the true visual category structure. Moreover, total global brain free energy correlates with the free energy and choice precision parameters of a computational model of the categorization task (a partially observable Markov decision process implementing approximately Bayes-optimal decisions). These findings provide evidence for a relationship between visual categorization, active inference, and brain free energy minimization.


April 19

YEJIN CHOI, Assistant Professor, Department of Computer Science and Engineering, University of Washington

TITLE:  From naive physics to folk psychology: Modeling common sense in language

ABSTRACT:  Intelligent communication requires reading between the lines, which in turn requires rich background knowledge about how the world works. However, learning and reasoning about the obvious but unspoken facts about the world is nontrivial, as people rarely state the obvious, e.g., "my house is bigger than me." In this talk, I will discuss how we can reverse engineer aspects of commonsense knowledge—ranging from naive physics to more abstract social commonsense knowledge—from how people use language. A key insight is this: the implicit knowledge people share and assume systematically influences the way people use language, which provides indirect clues to reason about the world. For example, if "Jen entered her house," it must be that her house is bigger than her.
   I will present two complementary formalisms that can organize and represent various aspects of commonsense knowledge: commonsense frames and graphs. In particular, I will introduce ATOMIC, an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on causes and effects of everyday events (e.g., "if X pays Y a compliment, then Y will likely return the compliment"). I will then present two complementary approaches—probabilistic inference and deep neural networks—that can learn to reason about commonsense knowledge encoded in language. I will conclude the talk by discussing the challenges in current models and formalisms, pointing to avenues for future research.

