Telepathy seems to be all the rage among neuroscientists these days. Here, Micah Allen, from University College London, who blogs and tweets as @neuroconscience, takes us through an exemplary tale of how open access ultimately improves scientific knowledge. — P.M.
By Micah Allen
A paper published recently in the journal F1000 Research raised more than a few eyebrows by claiming to support the existence of telepathy. Adding to my own confusion, PLOS ONE also recently published an article about brain-to-brain communication that was widely misreported as substantiating ‘telepathy’ even though it was nothing of the sort – participants in that study were linked by a brain-computer interface and transcranial magnetic stimulation. I was thus pretty surprised when I realized the F1000 article was about actual telepathy rather than the more benign brain-powered Morse code (read the PLOS Neuroscience Community’s coverage of that study here).
Mind you, there’s nothing wrong in principle with studying telepathy and other “psi” or paranormal phenomena, so long as you allow for the incredibly low prior probability of such effects and conduct appropriately rigorous research.
Laudably, Tressoldi and co-authors pre-registered the study protocol and provided full documentation of all relevant data from the study. This was fortunate, as it allowed my colleague Sam Schwarzkopf, from University College London, to conduct thorough control analyses on their data, which are now publicly available as citable reviews on the F1000 website. While these analyses point towards some critical flaws in the paper, they also form a nice demonstration of how open science can wrest value out of even the most dubious investigations.
The experimental protocol
Let’s take a look at exactly how Tressoldi and colleagues set about testing the curious hypothesis that “two brains, and consequently two minds, can be entangled in a quantum-like manner” to produce telepathic communication sans a physical connection. Reading the paper was an Alice-in-Wonderland experience for me; each odd methodological revelation seemed to outdo the last. To begin, the decoding algorithm, visual stimulator, and EEG headset were all proprietary commercial-use equipment, suggesting that this study might be a lead-in for an eventual do-it-yourself commercial telepathy toy. Of course there’s nothing fundamentally amiss about using proprietary “neurotoys” in an experiment, but one wonders whether slightly more sophisticated scientific equipment might not have made a better bet for detecting the (presumably) subtle quantum entanglements the authors were after. More curious still were inclusion criteria specifying that participants had to have held a friendship lasting more than five years and to have experience in martial arts or meditation. Apparently telepathy only works between ninja-yogini best friends, a relatively easy-to-replicate sampling criterion.
Moving on to the experimental protocol, we see where the real fun begins. The paper’s methods were preregistered almost verbatim (changing only the future tense to the present, and sometimes not even that) at the Open Science Framework. This ensured that all analyses were fully planned prior to publication, which should in theory allow us to sidestep the usual worries of statistical voodoo—although in this case the methods don’t provide much of the necessary detail. Participants were split into pairs between a laboratory at Padova University and a “private laboratory” 200 km away in Florence. Each participant took turns being a ‘sender’ or a ‘receiver’. The sender was hooked up to a Cyclopean (and proprietary) visual stimulator created by one of the authors, which delivered the image of the receiver’s face:
The sender thus sat listening to 30-second clips of a baby crying and was instructed: “when ready, you must concentrate in silence for one to three minutes to relax and prepare to receive the stimulation to send to your partner…”, while the receiver was told: “when ready, you must concentrate in silence for one to three minutes to relax and prepare to receive the stimulation sent by your partner…”. The sender was then stimulated in repeated cycles of 30 seconds of stimulation followed by 1 minute of silence. Runs of 3, 5, or 7 signal–silence repetitions were preceded by an interval of either 1 or 2 minutes of silence, in an attempt to make the stimulation protocol difficult to predict.
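For concreteness, the schedule as I read it can be sketched as follows; `build_protocol` and its exact parameters are my own toy reconstruction from the description above, not the authors’ code:

```python
import random

def build_protocol(runs=(3, 5, 7), stim_s=30, gap_s=60):
    """Toy sketch of the stimulation schedule described in the paper.

    Each run begins with 1 or 2 minutes of unpredictable silence, then
    repeats a fixed cycle of 30 s stimulation followed by 60 s silence.
    The run lengths and timings here are assumptions for illustration.
    """
    timeline = []  # list of (label, duration in seconds)
    for reps in runs:
        # the only unpredictable element: a 1- or 2-minute lead-in silence
        timeline.append(("silence", random.choice([60, 120])))
        for _ in range(reps):
            timeline.append(("stimulus", stim_s))
            timeline.append(("silence", gap_s))
    return timeline

schedule = build_protocol()
```

Laying it out this way makes one property obvious: once the initial silence of each run ends, the on/off cycle is perfectly periodic.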
Analyzing the data
Tressoldi et al. then applied their proprietary non-linear support vector machine classification approach to categorize silence and signal events from sender to receiver, using EEG data recorded with the (also proprietary) Emotiv consumer EEG headset. Here is an example of the sender–receiver traces they decoded:
For each pair, the top row shows the stimulus presented to the sender (in blue) and the bottom rows show the events decoded from the receiver’s EEG. Coincidences (i.e., overlap between sender stimulation and receiver-decoded signals) were defined as any instance in which the start or stop of a decoded event overlapped with actual stimulation. This procedure accurately decoded approximately 78.4% of trials in the receiver, leading Tressoldi et al. to conclude that they had found support for telepathic communication between the pairs. They also correlated the pairs’ EEG data and found significant correlations in the alpha and gamma bands between participants. Stop the presses, physics is dead! Long live the telepaths!
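To see how lenient that overlap criterion is, here is one possible reading of it as code; the function and the toy intervals are my own construction, not the paper’s implementation:

```python
def is_coincidence(event, stim_intervals):
    """One reading of the paper's criterion: a decoded event counts as a
    'coincidence' if its start *or* stop time falls inside any actual
    stimulation interval (all times in seconds)."""
    start, stop = event
    return any(s <= start <= e or s <= stop <= e for s, e in stim_intervals)

stim = [(60, 90), (150, 180)]                 # two real 30 s stimulation windows
decoded = [(55, 70), (100, 120), (170, 200)]  # hypothetical decoded events
hits = sum(is_coincidence(ev, stim) for ev in decoded)
rate = hits / len(decoded)
```

Under this reading, a decoded event need only clip the edge of a stimulation window to count, so long or luckily-timed events are generously rewarded.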
The public reviews: unmasking confounds
If you’re a physicist who just tossed their dissertation in the trash, you might want to hold off on the career change. Although Tressoldi et al. should be applauded for their excellent use of open publishing, several serious design and analysis flaws totally confound their results. Shortly after the article was published, my colleagues Ged Ridgway (from University of Oxford) and Sam Schwarzkopf submitted public comments and reviews asking for clarification on several points of the design and analysis, which are presented in extremely sparse detail (even considering the preregistration). These in-depth reviews point towards a variety of fundamental issues with the study and are well worth reading in their own right.
The crux of the issue is that decoding analyses use extremely sensitive, opportunistic algorithms to identify correlated features within a multidimensional data space, and as such are highly susceptible to being confounded by autocorrelation, the temporal structure of stimulation, the definition of chance level, improper experimental randomization, and other similar problems. The published reviews make clear that Tressoldi and colleagues fall into most of these pitfalls. In one bizarre example, Sam found that the first names of all 7 participants in the study, which are recorded in the raw data logs, exactly match those of the study authors. As the entire interpretation of the decoded traces depends upon receivers being unable to predict the sender’s stimulation, these problems make it likely that the receiver simply guessed the approximate pattern of stimulation. As Sam points out, aside from the initial period of random silence, the entirety of the remaining stimulation protocol follows a totally predictable 30-second stimulus / 60-second silence pattern. The ‘receiver’ could easily have guessed the pattern of stimulation, particularly since in many cases they had just participated as sender and knew the design of the study!
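To put a toy number on that predictability (the simplified 90-second period geometry below is mine, not the paper’s): even a receiver who knows only the rhythm of the protocol, and not its phase, overlaps the real stimulation windows surprisingly often.

```python
PERIOD, STIM = 90, 30  # a 30 s stimulus every 90 s, as in the fixed cycle

def guess_overlaps(shift):
    """Would a purely periodic guess, misaligned from the true schedule
    by `shift` seconds, overlap the real stimulation windows at all?
    (Toy geometry for illustration, not the paper's data.)"""
    off = shift % PERIOD
    return off < STIM or off > PERIOD - STIM

# Averaged over every possible whole-second misalignment, a blind
# periodic guess still overlaps the true windows about two thirds of the time.
rate = sum(guess_overlaps(s) for s in range(PERIOD)) / PERIOD
```

And a receiver who had just served as sender would know the phase as well as the rhythm, pushing the expected ‘coincidence’ rate far higher still.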
Re-analysing the data: the power of open access and data sharing
To drive this point home, Sam then reanalysed the data to demonstrate that the designation of sender/receiver or stimulus/silence has almost no impact on decoding accuracy:
Here you can see an example from Sam’s analysis in which the data from a single receiver have been split in half. Instead of using the stimulus–silence labels from the experimental design, labels were assigned arbitrarily as a sequence of on and off events (compare this to the actual signal traces shown above, where there was only one 30 s stimulus period in the middle of the recording session). The black boxcar shows whether an event was classified as silence or signal. The classifier clearly has no problem ‘decoding’ the temporal evolution of events with high accuracy even when the stimulus labels are totally made up. Coupled with the high inter-participant alpha-band EEG correlation and the poor randomization, these findings strongly suggest that the decoder was simply learning the temporal evolution and autocorrelation of the signal, potentially driven by slow, periodic oscillations in the sender’s and receiver’s attentional states (which are linked to alpha activity).
I am sure you are now enjoying great relief, thankful that the dominant physical paradigm of Science has prevailed. Yet there is also something deeper and more motivating to be gleaned from the entire exchange.
Although the paper’s topic is highly controversial and the methods are far from optimal, the use of preregistration and open publication here clearly generated a valuable (and citable) exchange that is now public record. Improper setting of chance level, trial randomization, and the interpretation of what actually drives nonlinear classification are critical issues not only in the world of psi research but in cognitive neuroscience in general. By using an open-science publishing approach that doesn’t hinge upon a paper’s supposed ‘goodness’ or ‘badness’, but instead focuses on the quality of documentation, the scientific process unfolds in full public view. In this case, within a few weeks of being published, qualified researchers were able to evaluate and reanalyse the data, in a publicly recorded exchange that can be referred to for future research. The fact that the paper itself was dubious only improved the final outcome, generating far more critical attention than the average decoding paper applying many of the same methods and principles. And this is really the most crucial point: in moving beyond the notion of publication as the final step of quality assurance, open science allows even the most dubious or difficult papers to generate useful knowledge. Now go read the reviews!
Note: At the time of publication, Tressoldi et al. had just uploaded a newly revised manuscript attempting to address these issues, the content of which is not directly covered by this post. A cursory examination of the revision suggests that the authors are unable to address the substantive criticisms without fundamentally redesigning their experiment, although it is worth keeping tabs on the ongoing review process to see the outcome. The reviews in particular are well worth reading, as they nicely illustrate general problems in decoding designs.
Any views expressed are those of the author, and do not necessarily reflect those of PLOS.
Grau, C., Ginhoux, R., Riera, A., et al. (2014). Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies. PLoS ONE, 9(8), e105225. doi: 10.1371/journal.pone.0105225
Mumford, J. A., Davis, T., & Poldrack, R. A. (2014). The impact of study design on pattern estimation for single-trial multivariate pattern analysis. NeuroImage, 103:130-138. doi: 10.1016/j.neuroimage.2014.09.026
Schwarzkopf, D. S. (2014). We should have seen this coming. Frontiers in Human Neuroscience, 8:332. doi: 10.3389/fnhum.2014.00332
Tressoldi, P., Pederzoli, L., Bilucaglia, M., et al. (2014). Brain-to-Brain (mind-to-mind) interaction at distance: a confirmatory study [v1; ref status: approved 1, not approved 1, http://f1000r.es/3ky]. F1000Research, 3:182. doi: 10.12688/f1000research.4336.1
Micah Allen is a post-doctoral fellow at the Wellcome Trust Centre for Neuroimaging, University College London. His current studies combine experimental psychology, cognitive neuroscience, and philosophy to develop models of consciousness and metacognition. He applies a mixture of psychophysical, computational, and connectivity-based methods with a particular focus on predictive coding. He maintains an active blog at neuroconscience.com where he explores a variety of related issues, as well as broader themes in open science and publishing reform. You can find him on Twitter @neuroconscience.