Redefining "functional connectivity"?
After I expressed a few misgivings about functional connectivity (FC) here, the ensuing Twitter conversation led to a perspective article that was recently published in Nature Neuroscience (link). The general motivation was that the conversation around FC seemed to be impeded by disagreement over how the term "connectivity" should be defined, and what sort of inferences an observation of FC can support. Our approach in this paper was ambitious, and not a little controversial:
- A property called "functional connectivity" should specify a causal biological mechanism as its target inference
- Assumptions required to infer causal mechanisms from observations should be made explicit
- Ambiguities in such inferences — the inability to infer directionality from a given set of observations, for instance — should also be specified
The upshot is this: the definition of FC should include causality, which is implied by the physical term "connectivity", and references to this term should be upfront and specific about how closely a particular approach comes to elucidating biological mechanisms. Notably, it's cool if a method doesn't attempt to infer causality (such approaches can still be highly informative about brain organization), but in that case it shouldn't really be interpreted as "connectivity"; it is better described as, for example, a statistical description or a dimensionality reduction.
Why causality?
The idea that FC should always refer to causality may seem like an impossible bar, but bear with me. The point is not to assert that FC methods must either support full causal inferences or stop calling themselves FC, but rather that they should seek to infer causality, and then be honest about how far short of that bar they fall. In other words, they should be formulated in such a way that the ultimate inference is a biological mechanism, but the set of assumptions, methodological issues, and confounders that introduce ambiguity into such an inference should be acknowledged. A key idea behind the framework we propose in this article is that any concrete methodology that attempts to infer biological mechanisms will in practice likely fall short of that mark, because it will be ambiguous with respect to at least one of the necessary properties of a causal connection (a toy simulation of these ambiguities follows the list below):
- Directionality (does information pass from A→B or B→A?)
- Directedness (does information pass directly from A→B, or through some set of intermediaries A→X→B?)
- Weight (how strongly does activity in A influence activity in B?)
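To make this concrete, here is a minimal toy simulation (my own illustration, not an analysis from the paper) of why a plain zero-lag correlation between two signals A and B, the simplest flavour of FC, cannot distinguish a direct A→B drive from a reversed B→A drive or from an indirect A→X→B chain. The signal names, lags, and parameters are all arbitrary assumptions made purely for illustration.

```python
# Toy simulation (illustration only, not an analysis from the paper).
import numpy as np

rng = np.random.default_rng(0)
T = 20000  # number of time points

def ar1_source(phi=0.9):
    """An autocorrelated 'source' signal (first-order autoregressive process)."""
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def driven_by(src, gain=0.8, noise=0.5):
    """A signal driven by `src` at a one-sample lag, plus private noise."""
    y = np.zeros(T)
    y[1:] = gain * src[:-1] + noise * rng.standard_normal(T - 1)
    return y

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Scenario 1: direct drive, A -> B
A1 = ar1_source(); B1 = driven_by(A1)
# Scenario 2: reversed drive, B -> A
B2 = ar1_source(); A2 = driven_by(B2)
# Scenario 3: indirect drive through an unobserved intermediary, A -> X -> B
A3 = ar1_source(); X = driven_by(A3); B3 = driven_by(X)

print(f"direct   A->B     corr(A, B) = {corr(A1, B1):+.2f}")
print(f"reversed B->A     corr(A, B) = {corr(A2, B2):+.2f}")
print(f"indirect A->X->B  corr(A, B) = {corr(A3, B3):+.2f}")
```

All three scenarios produce a similarly strong, positive correlation, so that single number says nothing about directionality or directedness, and gives only a rough handle on weight.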
The rationale for this reformulation of FC was that, if we are going to use a term that implies causality, then we should encourage the dialogue around that term to focus on how well, or how poorly, the results of such methods support a causal inference. In other words, we need to make an implicit conversation more explicit. This means highlighting key assumptions and observational pathways (from sensors to sources) so that they can undergo proper scrutiny, and identifying major confounders and the ambiguity they impose, so that their (in)tractability can be considered and addressed.
Notably, we remain agnostic in this framework about whether causality can practically be inferred from current observational methodologies. Strong cases have been made for the intractability of this pursuit (most notably by the group of Konrad Körding here and here; also, see this intuitive explanation of the confounder problem by Roberto Pascual-Marqui). On the other hand, detailed descriptions of causal modeling approaches that explicitly seek to elucidate biological mechanisms have also been proposed (see this review by Pedro Valdés-Sosa and colleagues). The motivation was instead to suggest a framework through which erroneous or merely implied inferences can better be prevented, and through which the conversation can more directly identify stumbling blocks to progress in systems neuroscience.
Moving forward
Why write this perspective? Why now? What does it add?
In my experience, this conversation has been going on outside of the literature for years. The question of whether correlations in functional signals can tell us anything meaningful about physical connectivity has been bandied about consistently, and I never really had a good answer to it. Those lingering question marks led me and my colleagues to compare different methods for inferring connectivity (functional and anatomical), including diffusion-weighted imaging, resting-state fMRI, meta-analytic connectivity modelling, structural covariance, and monkey tract tracing compiled in the CoCoMac database.
There are overlaps, but there are also glaring discrepancies. This is perhaps not surprising, but what is surprising (to me) is how often "connectivity" and "networks" are discussed without the disclaimer that the metrics we have are really quite ambiguous and often contradictory with respect to the actual, physical connections of the brain.
So yes, I do believe this sort of perspective is timely and necessary. We need to advance the dialogue around connectivity, and frankly be a bit more critical about claims drawn from evidence obtained through neuroimaging or even intracranial recordings in humans. In many cases we simply can't say a lot about the biological mechanisms of these observations, because there are some very large, and possibly inherent, limitations to the scope and resolution of these methods. We find interesting patterns, and it's important to discuss the implications of those patterns, but it is equally important to acknowledge what they cannot tell us.
In the course of composing this manuscript with an excellent group of peers, I am glad to say I have learned a good deal and have budged a bit in my initial skepticism of brain connectivity work. I am coming around to the idea that it is possible to evaluate functional connectivity in more explicitly causal ways. This involves:
- framing the problem in causal terms
- enumerating the assumptions required to support a causal inference
- acknowledging the degree of causal ambiguity inherent in a given method (for example, we may be able to determine that A causes B, but not the weight or timing of that relationship; or we may only be able to establish that A precedes B, without determining whether the relationship requires an intervening entity C)
- using clever validation and modelling approaches to reduce the fairly immense search space
It also requires designing experiments and corresponding models with causal relationships in mind. For example, Mill and colleagues had subjects learn associations between auditory and visual stimuli, and then asked them to retrieve previously encoded multimodal associations based on a cue that was either visual (Vis-Aud) or auditory (Aud-Vis). This design allowed them to define a "ground truth" of directionality between primary auditory and visual ROIs, which they used to validate existing methods for estimating causal influence (Granger causality, Patel's tau, and phase slope index). This approach admittedly becomes more ambiguous the further from sensory areas we look, but it is nonetheless a great example of the sort of design we need to think about if we want to move from correlative observations to more causal ones.
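As a loose illustration of that validation logic (my own toy sketch, not the Mill et al. pipeline; the simulated "regions", lags, and parameters are all assumptions), one can build a synthetic ground truth in which signal A drives signal B at a short lag, and then check whether a directed method such as Granger causality recovers the known direction:

```python
# Toy validation sketch (illustration only, not the Mill et al. analysis).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
T = 2000  # number of time points

# Simulated "ground truth": region A is an autocorrelated source,
# and region B follows A at a one-sample lag.
A = np.zeros(T)
for t in range(1, T):
    A[t] = 0.7 * A[t - 1] + rng.standard_normal()
B = np.zeros(T)
B[1:] = 0.6 * A[:-1] + 0.5 * rng.standard_normal(T - 1)

def granger_p(target, source, maxlag=2):
    """p-value for the hypothesis 'source Granger-causes target'."""
    res = grangercausalitytests(np.column_stack([target, source]), maxlag)
    return res[maxlag][0]["ssr_ftest"][1]  # F-test p-value at the largest lag

# The true direction (A -> B) should yield a tiny p-value;
# the reverse direction (B -> A) should typically not be significant.
print(f"p(A -> B) = {granger_p(B, A):.2e}")
print(f"p(B -> A) = {granger_p(A, B):.2e}")
```

On real data the inference is far less clean than in this toy case, which is exactly why designs with a built-in ground truth are so valuable.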
I remain skeptical that we will ever compile a full, causal description of brain function (i.e., at the neuron level), due to the sheer size of the model space and the corresponding confounding problem, but I do think we stand a chance of characterizing informative constraints on the problem.
The quest continues...