The neuroscience of relationships: toward naturalistic interactive neuroimaging
By Erin Yeagle
Most of us don’t think twice about the fact that experimental conditions in neuroscience and psychology rarely mirror the real world. However, a small (but growing!) group of neuroscientists have made careers out of investigating so-called “naturalistic neuroimaging,” the science of visualizing brain activity in an environment that actually resembles the one in which we move and interact every day.
Advocates of naturalistic neuroimaging contend that many discoveries in traditional, highly constrained experimental setups are unlikely to generalize to the real world — and that if neuroscientists want to make discoveries that do generalize outside of the lab, it’s necessary to study the brain in an environment with fewer limiting parameters.
Five researchers whose work delves into these questions presented at an SfN14 symposium titled "Toward Naturalistic Interactive Neuroimaging." They were:
Talma Hendler, Tel Aviv Sourasky Medical Center (chair)
Gadi Gilam, Tel Aviv Sourasky Medical Center (co-chair)
Leonhard Schilbach, University Hospital Cologne
Uri Hasson, Princeton University
Fabio Babiloni, Sapienza University of Rome
I reached out to two panelists, Leonhard Schilbach and Uri Hasson, with a few questions about their research interests and thoughts on neuroscience’s past and future.
Do we think about others differently when we interact with them, rather than just watching them?
Leonhard Schilbach, a psychiatrist and social neuroscientist at the University of Cologne, believes so, and has made a career out of understanding what he terms the "dark matter" of social neuroscience: social interaction.
What questions are you interested in answering with your research?
As a clinical psychiatrist and lecturer in social neuroscience, I am interested in how human beings understand and make sense of each other. Here, my research is based on the assumption that social cognition is fundamentally different when we are engaged with others in real-time social interaction than when we are merely observing them. In particular, I am interested in exploring the ways in which social interaction and interpersonal coordination can be motivating and rewarding, and how this interacts with other aspects of cognition and processes of self-regulation.
What do you think is the most important (or exciting) discovery to date in the history of neuroscience/psychology?
To me, the discovery and continued development of non-invasive neuroimaging techniques, in conjunction with real-time behavioral measurements such as interactive eye tracking, continues to be the most exciting aspect of neuroscience today. It allows us in the field of social neuroscience to investigate the neural mechanisms of human social interactions, and thereby to address issues that are at the very core of being human.
What do you think will be the most important (or exciting) discovery in neuroscience/psychology in the next ten years?
One important avenue for future research in social neuroscience, in my view, will be to relate new findings on the neural bases of social interaction to previous work investigating the neural bases of social observation, as well as to find ways to directly compare the two, e.g. by means of multivariate data analyses. By doing so, we may be able to empirically address the idea that neural networks established during social interaction may be "re-used" during observation.
In his talk in the “Toward Naturalistic Interactive Neuroimaging” symposium, Schilbach explained why social interaction has remained relatively unstudied compared to other forms of social cognition—in short, because it’s not accounted for by popular theories that explain how we make sense of other minds.
Traditionally, social neuroscience has been dominated by two approaches: what Schilbach termed the “first-person” approach, which accounts for our understanding of other minds through a projection of our understanding of ourselves, and the “third-person” approach, which assumes that we make use of a developed theory of other minds.
Schilbach claims that both theories “assume an epistemic gap between self and other” — and as a result, may not fully account for the reality of social cognition during interpersonal interaction. To overcome this, he argues that we need a “second-person” approach to social cognition: one that accounts for the possibility that social cognition in interactions is different from that involved in observation alone.
The case of joint attention
The Schilbach lab investigates social cognition in interactions using a novel experimental paradigm: subjects use their eye movements to interact with an on-screen virtual avatar while lying in the MRI scanner, allowing a simulation of social interactions without the technical challenges of jamming two people into one scanner.
They’ve used this paradigm to investigate the phenomenon of joint attention: as Schilbach put it, to “look at something together with someone, and know that we both are looking.”
Joint attention can be either initiated (when you look first at the object, then at the other person), or followed (when you look at the other person first). While this may not initially seem like a paradigm with particular clinical relevance, he pointed out that children with autism don't "initiate shared experience," suggesting an underlying abnormality in this process.
In an fMRI study, Schilbach and colleagues told subjects that a virtual character they saw on-screen represented a second person outside of the scanner, who would interact with the subject continually over the course of the experiment. The avatar would then either ignore the subject’s reciprocal gaze, or be guided by it in joint viewing, allowing the researchers to examine whether the neural correlates of joint attention are different when the shared experience has been initiated, rather than followed.
The researchers found a greater response in the ventral striatum, an area involved in reward and motivation, in the self-initiated condition—suggesting that the initiation of joint attention, and perhaps other forms of social interaction, may be its own reward.
In support of this hypothesis, subjects reported more enjoyment during the behavioral task if the avatar had shared their visual attention rather than ignoring it.
Schilbach argued that in psychiatric disorders, deficits more commonly lie in social interaction than in social observation, making a "second-person" approach all the more critical. In support of this claim, he offered the example of cocaine dependence: cocaine users rate joint attention as less pleasant than people who don't use cocaine do. They also show blunted activation in the medial orbitofrontal cortex, another region implicated in reward, during the social condition of the task.
Furthermore, the degree of blunted activation in the OFC is correlated with social effects of drug abuse: cocaine users with more blunted OFC activation tended to have smaller social networks than those with a more normal pattern of activation.
Together, these findings suggest that shedding light on social neuroscience’s “dark matter” may have implications beyond the lab. More research into social interaction could lead to a better understanding of — and possibly better treatment for — the many psychiatric disorders involving deficits in social cognition.
Uri Hasson, a professor in Princeton University's psychology department, draws on his background in visual neuroscience to study how the brain makes use of information acquired in real time, and to examine the synchrony between two brains during natural communication, the subject of his talk for the SfN14 symposium. To learn more about what drew Hasson to naturalistic neuroimaging, I met with him at SfN for a brief Q&A.
What questions are you interested in answering with your research?
Mainly we’re looking at how the brain processes real-life information. For example, right now: what’s happening to your brain when you listen to what I’m saying; what’s happening to my brain when I’m talking to you. We find that whenever you go to natural setups – real-life situations – many of the models that people work with in the field no longer apply. So it becomes interesting. We have some more complicated findings.
We have two lines of research in the lab. One is memory, or processing timescale. And this I’m not going to talk about at all tomorrow. But basically, we realize that most of the field is working in event-related designs. The problem is that, you know, there are many dimensions to our conversation. Too many. And as a scientist, you say, “No, no, no, I need to control everything.”
So what people do, [in the] first stage, is to remove 90% of the dimensions. And one of the dimensions everyone is removing is time. You’re working in event-related design, usually, and each event, whether two hundred milliseconds or nine seconds, is independent of the next one. But actually now, what I’m saying is related to what I was saying five minutes ago, what I was saying a minute ago, and maybe to what you said over email two days ago.
That makes data analysis complicated.
Yeah. It also makes it interesting, because suddenly you realize that memory is everywhere. If you think about memory, what comes to your mind? Working memory, long-term memory, short-term memory… but let’s think about working memory for a second, because we’re now doing online processing. It’s really unclear how to use this term, working memory, in the context of a conversation. What is the capacity limit of working memory? Is it five syllables? Five words? Five sentences? Five concepts?
And then also, memory is usually separated from the process, like in a computer. For a working memory task, [most researchers are] looking in the delay period, when you do nothing. Now, there is no delay. And you need to use the memory to understand what I’m saying. It’s very different from the way people think about memory currently.
[My lab has] the memory line of research, and we also have the communication line. We take two sides of communication, the speaker and the listener… You see what’s going on in the speaker’s brain, and in the listener’s brain, and [you] want to see how information now is transformed — from my brain, via this sound wave, we become coupled. I have a dynamical system in my brain, you have a dynamical system in your brain, and the sound waves couple them together.
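To make the coupling idea concrete, here is a minimal sketch of one way such an analysis can look: correlate a speaker's regional time course with a listener's at a range of temporal lags and find the peak. This is an illustration on synthetic data, not the Hasson lab's actual pipeline; the function name and parameters are my own assumptions.

```python
import numpy as np

def lagged_coupling(speaker_ts, listener_ts, max_lag=6):
    """Pearson correlation between two time courses at each lag (in TRs).
    Positive lags mean the listener's signal trails the speaker's."""
    couplings = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            s, l = speaker_ts[:-lag], listener_ts[lag:]
        elif lag < 0:
            s, l = speaker_ts[-lag:], listener_ts[:lag]
        else:
            s, l = speaker_ts, listener_ts
        couplings[lag] = np.corrcoef(s, l)[0, 1]
    return couplings

# Toy demo: a listener signal that echoes the speaker ~2 TRs later.
rng = np.random.default_rng(0)
speaker = rng.standard_normal(300)
listener = np.roll(speaker, 2) + 0.5 * rng.standard_normal(300)
coupling = lagged_coupling(speaker, listener)
print(max(coupling, key=coupling.get))  # prints 2: the peak lag
```

The lag of the correlation peak is what makes this interesting: it can show whether one brain's activity anticipates or follows the other's during communication.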
How did you get interested in studying real-life scenarios?
I used to be a vision scientist, and we always used these highly artificial stimuli. In my Ph.D., I wanted to know what happened in real-life vision – only focusing on the visual system, not communication. And I said okay, let’s run a movie, and see how the visual system responds to movies. And we were sure that this [was] probably, you know, a fun experiment, but that we’d never publish it.
And then something interesting happened: I looked at the data, and developed a new way of analysis, because we didn’t know how to analyze this. Ten minutes of a movie, it’s a lot of dimensions. So the first thing we saw – I took a brain area and decided to look at the time courses, thinking that I would go back to the movie and see what [was] driving the activation. A reverse correlation. And I went to the face area and saw that each time you view a face, the fusiform face area lights up. In the movie, it was really fun to see the brain tell you what it likes.
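To picture the kind of reverse correlation Hasson is describing, a minimal version can be as simple as thresholding an ROI time course and stepping back by a hemodynamic delay to find the movie moments that drove the peaks. In the sketch below the TR, delay, threshold, and toy "FFA" series are all illustrative assumptions, not values from the actual study.

```python
import numpy as np

def reverse_correlate(roi_ts, tr=1.5, hemo_delay=4.5, z_thresh=1.5):
    """Return movie times (in seconds) whose frames likely drove peaks
    in an ROI time course: 'let the brain tell you what it likes'."""
    z = (roi_ts - roi_ts.mean()) / roi_ts.std()
    peak_trs = np.flatnonzero(z > z_thresh)
    # Step back by the hemodynamic delay to index the driving stimulus.
    # (In real use, clip times that fall before the movie's start.)
    return peak_trs * tr - hemo_delay

# Toy demo with a made-up "FFA" time course:
rng = np.random.default_rng(1)
ffa_ts = rng.standard_normal(400)
print(reverse_correlate(ffa_ts)[:5])  # seconds into the movie to inspect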
But then I saw that if we go to my brain, and your brain, the fusiform face area will be very similar. But okay, we’re in the fusiform face area. So I started to go area by area, and I saw that about 60% of the cortex responded very similarly across people. And I said, “Wow, how can it be?” Because, you know, as scientists, we learn that if we’re not controlling all the parameters, we’re going to have variability.
So we expected to see huge variability, but we saw this huge convergence – not only in the auditory cortex or the visual cortex, but in the frontal areas, in many brain areas. High order, low order – and suddenly I started to ask, “Why are people similar?” That’s what drove me to this line of research.
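The across-subject similarity Hasson describes here is now commonly quantified as inter-subject correlation. Below is a minimal, leave-one-out sketch on synthetic data; the array shapes and function name are my own illustrative choices, not the lab's code.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation.
    data: array of shape (n_subjects, n_voxels, n_timepoints).
    For each voxel, correlates each subject's time course with the
    average of all other subjects, then averages across subjects."""
    n_subj = data.shape[0]
    iscs = []
    for s in range(n_subj):
        others = data[np.arange(n_subj) != s].mean(axis=0)
        a = data[s] - data[s].mean(axis=1, keepdims=True)
        b = others - others.mean(axis=1, keepdims=True)
        r = (a * b).sum(axis=1) / np.sqrt(
            (a**2).sum(axis=1) * (b**2).sum(axis=1))
        iscs.append(r)
    return np.mean(iscs, axis=0)

# Demo: 5 subjects, 1000 voxels, 300 timepoints of synthetic data.
rng = np.random.default_rng(2)
shared = rng.standard_normal((1000, 300))            # stimulus-driven signal
data = shared + rng.standard_normal((5, 1000, 300))  # plus subject noise
print(intersubject_correlation(data).mean())         # well above zero
```

A voxel scores high only if its time course is driven by the shared stimulus rather than by idiosyncratic noise, which is why naturalistic stimuli reveal the widespread convergence Hasson describes.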
Why do you think naturalistic neuroimaging matters?
We want to understand how the brain is working. There is a tension, right? If you’re working in this highly complex, real-life parameter space, it’s difficult to know which intervening parameters are important and which are not. Let’s say you’re working in a narrative parameter space, in real life. But in your experiment you have only four variables. You control three, and you vary one. What happens if you add parameters? Let’s say you add four more parameters, ten more parameters, twenty more parameters. Is [your result] going to stay?
Basically, is it possible to generalize from the controlled lab to real life, or not? If it’s not going to generalize, and adding three parameters is going to change the entire picture, then what did you learn? There is a tension in science between controlled settings and real life, and you need to do both.
This edited compilation originally appeared as three posts, published on the PLOS Neuro Collaborative SfN14 Blog site on Medium.com, Nov 15 – Dec 12, 2014.
The views expressed in this post belong to the individual blogger and interviewees, and do not necessarily represent the views of PLOS.
PLOS Neuro blogger Erin Yeagle received her B.A. in Neuroscience from Wellesley College. She is now a research assistant in Ashesh Mehta’s lab at the Feinstein Institute for Medical Research, using intracranial EEG and electrical brain stimulation to study the neural underpinnings of cognition and visual perception in patients with epilepsy.