Burcu A. Urgen, PhD
BIO: Burcu A. Urgen is a postdoctoral researcher at the University of Parma, Italy, working with Professor Guy Orban. She received her PhD in Cognitive Science from UC San Diego under the supervision of Professor Ayse P. Saygin, and holds a BS in Computer Science from Bilkent University and an MS in Cognitive Science from Middle East Technical University. Her primary research interest is the neural mechanisms underlying visual perception of actions in the human brain. During her PhD, she investigated whether the human brain shows specificity for biological agents, using humanoid robots in collaboration with Professor Hiroshi Ishiguro’s lab, together with neuroimaging methods (fMRI and EEG) and machine learning techniques. This research has allowed her to address questions about robot design in social robotics, and her work has been published and presented in interdisciplinary journals and at interdisciplinary conferences. She was also one of three recipients of UC San Diego’s Interdisciplinary Scholar Award for her work bridging cognitive neuroscience and human-robot interaction.
Contact: burcu [dot] urgen [at] gmail [dot] com
TALK ABSTRACT: Cognitive Neuroscience and Artificial Agent Design: A Case Study from Uncanny Valley
Our social milieu has changed tremendously in recent years, introducing us to social partners that are dramatically different from those the human brain has evolved with over many generations. The introduction of such artificial forms into our lives has in turn allowed us to study the fundamentals of human social cognition. The uncanny valley is a phenomenon that refers to humans’ response to artificial human forms that possess almost, but not quite, human-like characteristics. Theoretical work proposes prediction violation theory as an explanation for the uncanny valley, but no empirical work has directly tested it. In this talk, we provide evidence supporting this theory using event-related potential recordings from the human scalp, which indicate that the uncanny valley could be explained by the violation of one’s expectations about human norms when encountering realistic artificial human forms.
Ayse P. Saygin, PhD
BIO: Prof. Ayse P. Saygin directs the Cognitive Neuroscience and Neuropsychology Lab (Saygin Lab) at the University of California, San Diego, where she is an Associate Professor of Cognitive Science and Neurosciences. She received a PhD in Cognitive Science from UC San Diego, followed by a European Commission Marie Curie fellowship at the Institute of Cognitive Neuroscience and the Wellcome Trust Centre for Neuroimaging at University College London. She holds an MSc in Computer Science from Bilkent University and a BSc in Mathematics from Middle East Technical University, both in Ankara, Turkey. Dr. Saygin and her lab study human perception and cognition using a range of experimental and computational methods, including psychophysics, EEG, MRI, fMRI, brain stimulation, neuropsychological patient studies, machine learning, and brain-computer interfaces. As an NSF CAREER awardee, Dr. Saygin has built upon her PhD and postdoctoral work to develop a research program exploring the perceptual and neural mechanisms supporting the processing of biologically and socially important objects and events, such as the body movements and actions of other agents. With additional support from DARPA, the Kavli Institute for Brain and Mind, the Qualcomm Institute, and the Hellman Foundation, the Saygin lab also aims to inform human-robot interaction by integrating methods and theory from cognitive neuroscience, neuroimaging, human perception, artificial intelligence, computational modeling, social robotics, and social cognition.
Contact: apsaygin [at] gmail [dot] com
TALK ABSTRACT: Human Neuroscience and Robotics
In recent years, we have become accustomed to seeing images of the human brain “lit up” during various types of tasks and domains, both in the scientific literature and in the news. It may thus seem surprising that neuroimaging is a field still in its early development. Human neuroscience has advanced rapidly in the past few decades: not long ago, researchers had to wait for patients who had suffered strokes and exhibited specific deficits (e.g., in speech production or object perception) to pass away before they could locate the lesion, in the hope of establishing brain-behaviour relationships. Today, we have multiple methods with which to explore the structure and function of the human brain. Advances in multiple disciplines, as well as in computation, have led to neuroimaging methods that allow us to explore the human brain non-invasively. In this workshop, we aim to demonstrate how current methods in human neuroscience can be used to address questions regarding human-robot interaction and collaboration. We will introduce different methods of studying human brain function, each with different strengths, and consider how these approaches can be used to inform robotics. We aim to demonstrate that interdisciplinary collaboration between human neuroscience and robotics can be a win-win for both sides, helping to answer questions in both disciplines while contributing to advances that are greater than the sum of the parts.
Emily S. Cross, PhD
BIO: Emily S. Cross is a social neuroscientist and senior lecturer based at Bangor University in Wales, where she directs the Social Brain in Action Laboratory. By combining intensive training procedures with brain imaging and research paradigms involving artificial agents, acrobatics, and dance, she explores questions concerning social influences on human-robot interaction, motor expertise, and observational learning throughout the lifespan. Following undergraduate studies in psychology and dance at Pomona College (USA), she completed an MSc in cognitive psychology at the University of Otago (NZ) as a Fulbright Fellow, followed by a PhD in cognitive neuroscience at Dartmouth College (USA). Dr. Cross’s research has been funded by a number of national and international organizations, including the National Institutes of Health (USA), the Humboldt Foundation (Germany), the Volkswagen Foundation (Germany), the Netherlands Organisation for Scientific Research, the Economic and Social Research Council (UK), and the UK Ministry of Defence. She was recently awarded a European Research Council starting grant for the project ‘Social Robots’, which runs from 2016 to 2021.
Contact: e.cross [at] bangor [dot] ac [dot] uk
TALK ABSTRACT: Robots on the Brain: Exploring Brain and Behavioural Correlates of Social Perception of Artificial Agents
Rapid developments in robotics technology are resulting in robotic agents becoming an ever-growing presence in human society, moving from our movie screens, television sets, and science fiction novels into our hospitals, schools, workplaces, and homes. As we become increasingly likely to encounter artificial agents in daily life, understanding how we perceive and interact with them will become increasingly important. For example, prior research (and common sense) suggests that we do not hold the same expectations for robots as we do for humans, nor do we treat them the same way. If we are going to engage with artificial agents in a social way, we must learn to negotiate “social” interactions with agents with whom we have had extremely limited experience (compared to the many millennia of experience we have interacting with other people and animals). In concert with rapid advances in robotics technology, equally impressive developments in methodological approaches now enable us to explore links between the human brain and behaviour. In this talk, I take the position that developments in robotics technology and human neuroscience techniques hold great potential for reciprocal advancement and benefit. I discuss several studies that combine functional neuroimaging and behavioural measures in adults and infants to explore how social perception of a variety of artificial agents engages overlapping or divergent neural processing, compared to perceiving and interacting with human agents. Moreover, I highlight how current research examining the relationship between stimulus and knowledge cues to human animacy can also significantly advance our understanding of social perception. Findings from this latter line of research demonstrate that self-other similarities are not only grounded in physical features, but are also shaped by prior knowledge.
More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to take stock of the latest findings emerging from social and cognitive neuroscience, and to pioneer ways to manage the impact of preconceived beliefs while optimising human-like design.