Andrew Reid PhD


Blog posts

Functional connectivity? But...
Published on 2017-07-26
by Andrew Reid
#11

The term "functional connectivity" derives from electrophysiology, where it was used to describe coherence relationships between relatively high-temporal-resolution time series. Friston and colleagues [1] differentiated between "structural" connectivity, referring to anatomical axonal projections observed by imaging white matter or via tract-tracing experiments performed on macaque monkeys; "functional" connectivity, referring explicitly to statistical dependence relationships between functional time series produced by methods such as EEG, PET, or fMRI; and "effective" connectivity, referring to inferences about (possibly directed) coupling derived through model comparison (are the observed data more probable given the presence of connection X, or its absence?). These terms have since become more or less dogma in brain connectivity research, and papers investigating these phenomena probably number in the thousands.

The term "functional connectivity", however, has been the source of more than a little confusion. This is largely because the word "connectivity" implies some sort of physical connection, which is clearly not inferable solely from the evidence of a correlative relationship. The lowliest undergrad is taught very early on in Statistics 101 that correlation does not imply causation. And yet, in brain connectivity research, we commonly employ a term that implies just that. In defense of this terminology, I commonly encounter the argument that, while functional connectivity is perhaps a misnomer, researchers in general are careful not to infer physical connectivity from correlative relationships. A reviewer once pointed out to me (whilst berating me for not being sufficiently optimistic for his/her taste) that "one of the most common refrains I observe across papers examining functional connectivity is that there is not a 1:1 correspondence between functional connectivity and measures of structural or anatomical connectivity". This is quite true. Buried somewhere in the fine print of most manuscripts is such a disclaimer, in much the same way that ads for internet gambling sites invariably include vague disclaimers indicating that they are not, in fact, gambling sites.

Except they are. And despite all the disclaimers, there are countless examples of peer-reviewed articles on functional connectivity whose methods and conclusions cannot be interpreted in any way other than that the authors are, actually, drawing the inference that functional correlations are equivalent to physical connectivity [2]. It is much sexier to be able to conclude that "Factor X is related to a decrease in connectivity in Network Y" than to conclude that "We found a structured covariance pattern in BOLD activity, some of which was significantly correlated with Factor X" — even if the latter is accurate and the former is simply a misleading overinterpretation of the data.

Functional covariance analysis is far from useless. I want to make that assertion from the start, in case this discussion is mistaken for a diatribe against connectivity research generally. Covariance patterns are extremely useful for dimensionality reduction, in that they allow us to segregate and cluster regions of brain tissue that co-activate (over a certain time window) in the presence (or absence) of task demands. It is when we try to use functional covariance to infer integrative relationships that things fall apart. To quote Friston [3]:

By definition, functional connectivity does not rest on any model of statistical dependencies among observed responses. This is because functional connectivity is essentially an information theoretic measure that is a function of, and only of, probability distributions over observed multivariate responses. This means that there is no inference about the coupling between two brain regions in functional connectivity analyses: the only model comparison is between statistical dependency and the null model (hypothesis) of no dependency. This is usually assessed with correlation coefficients (or coherence in the frequency domain). This may sound odd to those who have been looking for differences in functional connectivity between different experimental conditions or cohorts. However, as we will see later, this may not be the best way of looking for differences in coupling.

The importance of noise

But those are just words. Just how well can functional covariance capture physical connectivity in practice? Let's consider a simple abstraction: a three-node network. In the figure below, a sequential network is shown. If we make the A node our input node, with a 30 Hz sinusoid as a signal, we can propagate that signal along physical connections, specified by delays \(\delta_{ij}\) and connection strengths \(c_{ij}\) (for simplicity we'll set the delays to zero). Each node in the network can be specified by a Gaussian noise amplitude \(w_i\), which is added to its functional signal \(Y_i\) (we can think of this signal as representing an average firing rate, or as a BOLD response). This noise variable is crucial to understanding the behaviour of functional covariance. It should not be considered measurement noise (let's pretend we have perfect observations), but rather "neural" noise; it represents the sum of spontaneous neuronal activity intrinsic to the brain area represented by the node, and all afferent projections to node \(i\) that aren't explicitly modelled.

We can use this little simulation (Matlab code available here as "fc_models") to ask some specific questions. How well can correlation coefficients between functional signals from two regions be used to estimate the physical connectivity \(c_{ij}\) between them? Can thresholding correlation coefficients segregate existing connections from non-existing ones? To answer these we are going to keep the connection strengths \(c_{ij}\) constant. That is, in all simulations, all edges have equal connection strength. If our functional covariance is a valid estimate of these values, it should be equal for all edges regardless of how we manipulate noise in the model, right? Of course, this is not the case, because covariance is a function of noise.
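The linked code is Matlab; as a rough illustration, here is a minimal Python sketch of the same idea. The function names, noise levels, and connection strength below are my own assumptions for demonstration, not the original "fc_models" code:

```python
import numpy as np

def simulate_chain(w, c=0.8, fs=1000.0, duration=2.0, seed=0):
    """Simulate the sequential network A -> B -> C.

    w: (w_a, w_b, w_c), Gaussian noise amplitude added at each node
    c: connection strength c_ij (equal for all edges, as in the text)
    Delays delta_ij are set to zero, as in the text.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    drive = np.sin(2.0 * np.pi * 30.0 * t)              # 30 Hz input at A
    y_a = drive + w[0] * rng.standard_normal(t.size)
    y_b = c * y_a + w[1] * rng.standard_normal(t.size)  # propagate A -> B
    y_c = c * y_b + w[2] * rng.standard_normal(t.size)  # propagate B -> C
    return np.vstack([y_a, y_b, y_c])

def edge_correlations(Y):
    """Pearson correlations for each node pair."""
    r = np.corrcoef(Y)
    return {"ab": r[0, 1], "bc": r[1, 2], "ac": r[0, 2]}

# Equal noise everywhere: rho_ab and rho_bc come out nearly identical,
# and the unconnected pair (A, C) correlates less than either real edge.
print(edge_correlations(simulate_chain(w=(0.5, 0.5, 0.5))))

# Make node B noisy: rho_bc now exceeds rho_ab, even though the
# underlying connection strengths c_ij have not changed.
print(edge_correlations(simulate_chain(w=(0.5, 2.0, 0.5))))
```

Running it reproduces the qualitative pattern described below: the correlations track the noise configuration, not just the (constant) connection strengths.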

In the figure above, I'm showing what happens when we vary the relative baseline amplitude of noise for all nodes (image y-axis, line plot x-axis), and the relative amplitude for just one of the nodes, B (image x-axis). A few important conclusions can be drawn from this result:

  • As noise increases, correlations decrease (okay that's trivial).

  • When noise is equal across nodes (\(w_a = w_b = w_c\)), we find that correlations \(\rho_{ab} = \rho_{bc}\), indicating that they do indeed predict connectivity strength \(c\).

  • When noise varies across nodes, their correlations diverge; when \(w_b\) is less than the baseline, \(\rho_{ab} > \rho_{bc}\), whereas when \(w_b\) is greater than the baseline, \(\rho_{bc} > \rho_{ab}\). Thus, the resulting correlation coefficients depend on the relative noise level in B, which implies that, if noise is unequally distributed across nodes, correlation cannot be used to estimate connection strength.

  • In all cases, \(\rho_{ac}\) is less than that for the other two edges. Since there is no physical connection between A and C, this suggests that for our simple sequential network it may be possible to binarize a network to segregate existent from non-existent connections, given the appropriate threshold. But does this generalize to other more complex networks? And how do we know what the appropriate threshold is?

Parallel processing streams

We can test the generalizability of point 4 above with another simple network configuration, this time containing four nodes, two of which have a common afferent B. This is shown in the figure below. This parallel network is important to consider, because having a common input will lead to correlated time series, even in the absence of a direct connection. Moreover, we already know that the mammalian brain is highly symmetric, and composed of major parallel processing streams, leading to strong apparent homotopic connectivity.

As before, we'll make \(c_{ij}\) equal for all edges, and in order to make my point we're going to set \(w_b\) a bit higher to make B noisy, and then vary \(w_c\) and \(w_d\) together. Showing the same plots as before:

We see that:

  • Where C and D (which have no physical connection) have relatively low noise, they correlate more than A and B (which do have a physical connection). Thus, correlation cannot be used to threshold and binarize a network.
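As a rough illustration, this parallel configuration is easy to reproduce in a few lines of Python (a loose sketch of the linked Matlab simulation; the node model and parameter values are my assumptions, not the original code):

```python
import numpy as np

def simulate_parallel(w, c=0.8, fs=1000.0, duration=2.0, seed=0):
    """Simulate the parallel network A -> B -> {C, D}; no C-D edge exists.

    w: (w_a, w_b, w_c, w_d), per-node Gaussian noise amplitudes
    c: connection strength, equal for all edges
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, duration, 1.0 / fs)
    y_a = np.sin(2.0 * np.pi * 30.0 * t) + w[0] * rng.standard_normal(t.size)
    y_b = c * y_a + w[1] * rng.standard_normal(t.size)
    y_c = c * y_b + w[2] * rng.standard_normal(t.size)  # B -> C
    y_d = c * y_b + w[3] * rng.standard_normal(t.size)  # B -> D (parallel)
    return np.vstack([y_a, y_b, y_c, y_d])

# Noisy B, quiet C and D: the non-existent C-D "connection" correlates
# far more strongly than the real A-B connection.
r = np.corrcoef(simulate_parallel(w=(0.5, 2.0, 0.2, 0.2)))
print(f"rho_ab = {r[0, 1]:.2f}, rho_cd = {r[2, 3]:.2f}")
```

With these (hand-picked) noise levels, no single threshold can retain the A-B edge while discarding the C-D one.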

Of course, this example was gerrymandered by me to make my point. It is quite possible that such configurations are rare, and on the whole the relationship between functional covariance and physical connectivity is still fairly generalizable. On the other hand, it may not be. Numerous examples from the literature underscore this issue. For example, functional covariance is typically strongest between homotopic brain regions, and while physical connections clearly do exist between these regions, much of their covariance might also be explained by the interhemispheric symmetry of brain activation patterns. Patients who have had their corpus callosum completely resected [4] and individuals with agenesis of the corpus callosum [5] have still been found to have robust (if somewhat reduced) homotopic functional connectivity. Additionally, using a generative model to simulate functional signals, Honey and colleagues found an average correlation of roughly 0.5 between functional and structural connectivity [6]. This means that, even in a model where the ground truth is known a priori, only about 25% of functional covariance can be explained by the physical connections used to generate it.

Partial correlations to the rescue?

Partial correlations, which are obtained after controlling for the influence of all known covariates, can potentially allow us to deal with the noise conundrum highlighted above. Specifically, if the fraction of noise attributable to the influence of competing afferents can be removed from the signal, the residual should have a stronger dependence on the remaining afferent of interest. Indeed, partial correlation has been broadly employed in brain connectivity research, and one commonly gets the impression that the issue of noise has been solved.
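Concretely, the partial correlation between nodes \(i\) and \(j\), controlling for all other observed nodes, can be computed from the precision matrix \(P = \Sigma^{-1}\) of the signals:

\[ \rho_{ij \cdot \mathrm{rest}} = -\frac{P_{ij}}{\sqrt{P_{ii}\,P_{jj}}} \]

which is equivalent to correlating the residuals of \(Y_i\) and \(Y_j\) after regressing out every other node's signal.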

So let's put it to the test. The most pressing question is: Can partial correlations resolve the parallel network issue highlighted above?

Some interesting stuff happening here.

  • Firstly, and most strikingly, the non-existent CD connection has been completely eliminated by partial correlation. Great! Partial correlations may be useful for thresholding a network to eliminate non-existent edges.

  • On the other hand, some weird stuff is happening to edge AB. As C and D get noisier, \(\rho_{ab}\) increases. Why? Basically, as they get noisier, they share less and less variance with A, and thus less shared variance is removed when considering the partial correlation between A and B. This implies that partial correlation hasn't really solved the noise issue, it's simply shifted it; the relationship between \(\rho_{ab}\) and \(\rho_{bc}\) changes as a function of \(w_c\) and \(w_d\). Thus, partial correlations also cannot reliably be used to estimate physical connectivity.
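Both bullet points can be checked with a short Python sketch (again a loose toy reimplementation; the node model and parameter values are my assumptions, not the original Matlab code), computing partial correlations from the inverse covariance matrix:

```python
import numpy as np

def partial_corr(Y):
    """Partial correlation matrix: rho_ij|rest = -P_ij / sqrt(P_ii * P_jj),
    where P is the precision (inverse covariance) matrix."""
    P = np.linalg.inv(np.cov(Y))
    d = np.sqrt(np.diag(P))
    R = -P / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

def parallel_net(w_cd, w_b=2.0, c=0.8, n=20000, seed=0):
    """Toy parallel network A -> B -> {C, D}, with no C-D edge."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / 1000.0
    y_a = np.sin(2.0 * np.pi * 30.0 * t) + 0.5 * rng.standard_normal(n)
    y_b = c * y_a + w_b * rng.standard_normal(n)
    y_c = c * y_b + w_cd * rng.standard_normal(n)
    y_d = c * y_b + w_cd * rng.standard_normal(n)
    return np.vstack([y_a, y_b, y_c, y_d])

for w_cd in (0.2, 2.0):
    R = partial_corr(parallel_net(w_cd))
    print(f"w_cd = {w_cd}: pcorr_ab = {R[0, 1]:.2f}, pcorr_cd = {R[2, 3]:.2f}")
# pcorr_cd stays near zero in both cases (the non-existent edge is
# correctly eliminated), but pcorr_ab grows with w_cd even though the
# physical connection strength c_ab never changes.
```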

Another observation of note with respect to partial correlations is that they are generally quite low in magnitude, resulting in very sparse networks. This has led to it occasionally being lauded as a method which can be used to isolate the connections about whose existence we can be most confident, and this is in a way true. However, it is also in a way false, because as we've seen, there is no trivial relationship between the actual strength of physical connectivity between regions and the resulting partial correlations; connections with equal strength will be made more or less prominent purely as a function of relative noise levels, and the configuration of projections influencing their activity patterns. Moreover, we know from animal studies that the brain is not likely sparsely connected, but more likely quite densely connected (66% ipsilaterally in macaques; [7]), despite speculative claims to the contrary. Thus, considering only the very sparsely connected networks derived from partial correlations, even if they did not suffer from the aforementioned noise bias, severely limits one's ability to draw useful inferences about whole-brain connectivity.

What does this mean for graph theory?

Graph theory is a well-established, highly useful field of mathematics that has had an enormous influence on, for example, game, signal, and network theories. There's nothing wrong with graph theory. Any mathematical method, however, is only as good as the data that are fed into it. The dictum "garbage in, garbage out" applies. Constructing graphs from functional correlation coefficients is, in my opinion, almost always a matter of "garbage in". The toy examples above largely illustrate why. When constructing a graph, it is crucial to ask ourselves, what does such a structure represent? With internet graphs, transportation system graphs, or social graphs, this is obvious; and consequently, analyzing the topology of those graphs can allow us to derive useful, interpretable conclusions.

In the case of functional covariance, if we construct a binary graph by thresholding, what do our edges represent? How do we choose a threshold? How do we have any confidence that our edges represent physical connections, and our missing edges represent non-existent connections? How do we get a handle, in other words, on false positives and false negatives? If we construct a weighted graph using correlation coefficients, what do the edges represent? Basically, some complex interaction between physical connectivity, noise, and network architecture. What does path length mean in such a structure? What does clustering coefficient mean? Betweenness centrality? Worst of all, what does efficiency mean? These metrics all have useful interpretations when computed on graphs whose edges have actual physical meaning. They are less than useful when those edges represent a poorly understood statistical dependence relationship. They are "garbage out".

In my opinion, it is imperative that we acknowledge this severe limitation as neuroscientists. As the field of brain connectivity matures, it is no longer sufficient to hand-wave about graph theoretical metrics derived from functional covariance when we haven't even established a link between functional covariance and the physical world.

We can start by refusing to call it functional connectivity.

  1. Friston et al., Human Brain Mapping, 1995

  2. I am using the term "physical connectivity" here to refer to both anatomical (axon-synapse) and effective connectivity, in the Friston sense: anatomical projections between regions that conduct action potentials and evoke postsynaptic firing in the target region. Classical connectivity, of the sort one is taught in any first-year neuroscience course.

  3. Friston, Brain Connectivity, 2011

  4. Uddin et al., Neuroreport, 2008

  5. Tyszka et al., J. Neurosci, 2011

  6. Honey et al., PNAS, 2008

  7. Markov et al., Cerebral Cortex, 2014

Functional connectivity is a term originally coined to describe statistical dependence relationships between time series. But should such a relationship really be called connectivity? Functional correlations can easily arise from networks in the complete absence of physical connectivity (i.e., the classical axon/synapse projection we know from neurobiology). In this post I elaborate on recent conversations I've had regarding the use of correlations or partial correlations to infer the presence of connections, and their use in constructing graphs for topological analyses.
Tags:Connectivity · FMRI · Graph theory · Partial correlation · Stats
Causal discovery: An introduction
Published on 2024-09-23
by Andrew Reid
#21
This post continues my exploration of causal inference, focusing on the type of problem an empirical researcher is most familiar with: where the underlying causal model is not known. In this case, the model must be discovered. I use some Python code to introduce the PC algorithm, one of the original and most popular approaches to causal discovery. I also discuss its assumptions and limitations, and briefly outline some more recent approaches. This is part of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology.
Tags:Stats · Causality · Causal inference · Causal discovery · Graph theory · Teaching
Causal inference: An introduction
Published on 2023-07-17
by Andrew Reid
#20
In this post, I attempt (as a non-expert enthusiast) to provide a gentle introduction to the central concepts underlying causal inference. What is causal inference and why do we need it? How can we represent our causal reasoning in graphical form, and how does this enable us to apply graph theory to simplify our calculations? How do we deal with unobserved confounders? This is part of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology.
Tags:Stats · Causality · Causal inference · Graph theory · Teaching
Multiple linear regression: short videos
Published on 2022-08-10
by Andrew Reid
#19
In a previous series of posts, I discussed simple and multiple linear regression (MLR) approaches, with the aid of interactive 2D and 3D plots and a bit of math. In this post, I am sharing a series of short videos aimed at psychology undergraduates, each explaining different aspects of MLR in more detail. The goal of these videos (which formed part of my second-year undergraduate module) is to give a little more depth to fundamental concepts that many students struggle with. This is part of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology.
Tags:Stats · Linear regression · Teaching
Learning about multiple linear regression
Published on 2021-12-30
by Andrew Reid
#18
In this post, I explore multiple linear regression, generalizing from the simple two-variable case to three- and many-variable cases. This includes an interactive 3D plot of a regression plane and a discussion of statistical inference and overfitting. This is part of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology.
Tags:Stats · Linear regression · Teaching
Learning about fMRI analysis
Published on 2021-06-24
by Andrew Reid
#17
In this post, I focus on the logic underlying statistical inference based on fMRI research designs. This consists of (1) modelling the hemodynamic response; (2) "first-level" within-subject analysis of time series; (3) "second-level" population inferences drawn from a random sample of participants; and (4) dealing with familywise error. This is part of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology.
Tags:Stats · FMRI · Hemodynamic response · Mixed-effects model · Random field theory · False discovery rate · Teaching
Learning about simple linear regression
Published on 2021-03-25
by Andrew Reid
#16
In this post, I introduce the concept of simple linear regression, where we are evaluating how well a linear model approximates a relationship between two variables of interest, and how to perform statistical inference on this model. This is part of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology.
Tags:Stats · Linear regression · F distribution · Teaching
New preprint: Tract-specific statistics from diffusion MRI
Published on 2021-03-05
by Andrew Reid
#15
In our new preprint, we describe a novel methodology for (1) identifying the most probable "core" tract trajectory for two arbitrary brain regions, and (2) estimating tract-specific anisotropy (TSA) at all points along this trajectory. We describe the outcomes of regressing this TSA metric against participants' age and sex. Our hope is that this new method can serve as a complement to the popular TBSS approach, where researchers desire to investigate effects specific to a pre-established set of ROIs.
Tags:Diffusion-weighted imaging · Tractography · Connectivity · MRI · News
Learning about correlation and partial correlation
Published on 2021-02-04
by Andrew Reid
#14
This is the first of a line of teaching-oriented posts aimed at explaining fundamental concepts in statistics, neuroscience, and psychology. In this post, I will try to provide an intuitive explanation of (1) the Pearson correlation coefficient, (2) confounding, and (3) how partial correlations can be used to address confounding.
Tags:Stats · Linear regression · Correlation · Partial correlation · Teaching
Linear regression: dealing with skewed data
Published on 2020-11-17
by Andrew Reid
#13
One important caveat when working with large datasets is that you can almost always produce a statistically significant result when performing a null hypothesis test. This is why it is even more critical to evaluate the effect size than the p value in such an analysis. It is equally important to consider the distribution of your data, and its implications for statistical inference. In this blog post, I use simulated data in order to explore this caveat more intuitively, focusing on a preprint article that was recently featured by the BBC.
Tags:Linear regression · Correlation · Skewness · Stats
Functional connectivity as a causal concept
Published on 2019-10-14
by Andrew Reid
#12
In neuroscience, the conversation around the term "functional connectivity" can be confusing, largely due to the implicit notion that associations can map directly onto physical connections. In our recent Nature Neuroscience perspective piece, we propose the redefinition of this term as a causal inference, in order to refocus the conversation around how we investigate brain connectivity, and interpret the results of such investigations.
Tags:Connectivity · FMRI · Causality · Neuroscience · Musings
Driving the Locus Coeruleus: A Presentation to Mobify
Published on 2017-07-17
by Andrew Reid
#10
How do we know when to learn, and when not to? Recently I presented my work to Vancouver-based Mobify, including the use of a driving simulation task to answer this question. They put it up on YouTube, so I thought I'd share.
Tags:Norepinephrine · Pupillometry · Mobify · Learning · Driving simulation · News
Limitless: A neuroscientist's film review
Published on 2017-03-29
by Andrew Reid
#9
In the movie Limitless, Bradley Cooper stars as a down-and-out writer who happens across a superdrug that miraculously heightens his cognitive abilities, including memory recall, creativity, language acquisition, and action planning. It apparently also makes his eyes glow with an unnerving and implausible intensity. In this blog entry, I explore this intriguing possibility from a neuroscientific perspective.
Tags:Cognition · Pharmaceuticals · Limitless · Memory · Hippocampus · Musings
The quest for the human connectome: a progress report
Published on 2016-10-29
by Andrew Reid
#8
The term "connectome" was introduced in a seminal 2005 PLoS Computational Biology article, as a sort of analogy to the genome. However, unlike genomics, the methods available to study human connectomics remain poorly defined and difficult to interpret. In particular, the use of diffusion-weighted imaging approaches to estimate physical connectivity is fraught with inherent limitations, which are often overlooked in the quest to publish "connectivity" findings. Here, I provide a brief commentary on these issues, and highlight a number of ways neuroscience can proceed in light of them.
Tags:Connectivity · Diffusion-weighted imaging · Probabilistic tractography · Tract tracing · Musings
New Article: Seed-based multimodal comparison of connectivity estimates
Published on 2016-06-24
by Andrew Reid
#7
Our article proposing a threshold-free method for comparing seed-based connectivity estimates was recently accepted to Brain Structure & Function. We compared two structural covariance approaches (cortical thickness and voxel-based morphometry), and two functional ones (resting-state functional MRI and meta-analytic connectivity mapping, or MACM).
Tags:Multimodal · Connectivity · Structural covariance · Resting state · MACM · News
Four New ANIMA Studies
Published on 2016-03-18
by Andrew Reid
#6
Announcing four new submissions to the ANIMA database, which brings us to 30 studies and counting. Check them out if you get the time!
Tags:ANIMA · Neuroscience · Meta-analysis · ALE · News
Exaptation: how evolution recycles neural mechanisms
Published on 2016-02-27
by Andrew Reid
#5
Exaptation refers to the tendency across evolution to recycle existing mechanisms for new and more complex functions. By analogy, this is likely how episodic memory — and indeed many of our higher level neural processes — evolved from more basic functions such as spatial navigation. Here I explore these ideas in light of the current evidence.
Tags:Hippocampus · Memory · Navigation · Exaptation · Musings
The business of academic writing
Published on 2016-02-04
by Andrew Reid
#4
Publishers of scientific articles have been slow to adapt their business models to the rapid evolution of scientific communication — mostly because there is profit in dragging their feet. I explore the past, present, and future of this important issue.
Tags:Journals · Articles · Impact factor · Citations · Business · Musings
Reflections on multivariate analyses
Published on 2016-01-15
by Andrew Reid
#3
Machine learning approaches to neuroimaging analysis offer promising solutions to research questions in cognitive neuroscience. Here I reflect on recent interactions with the developers of the Nilearn project.
Tags:MVPA · Machine learning · Nilearn · Elastic net · Statistics · Stats
New ANIMA study: Hu et al. 2015
Published on 2016-01-11
by Andrew Reid
#2
Announcing a new submission to the ANIMA database: Hu et al., Neuroscience & Biobehavioral Reviews, 2015.
Tags:ANIMA · Neuroscience · Meta-analysis · ALE · Self · News
Who Am I?
Published on 2016-01-10
by Andrew Reid
#1
Musings on who I am, where I came from, and where I'm going as a Neuroscientist.
Tags:Labels · Neuroscience · Cognition · Musings