Neurostats Digest #5
A round-up of notes on theoretical psychiatry, causal inference, and really understanding papers
Contents
Markers of psychiatric disorders are neither necessary nor sufficient
Modern causal inference for psychiatric kinds
Subjectivity is not the problem for psychiatric endpoints
The papers that change you
The power of estimands (for digital twins)
Markers of psychiatric disorders are neither necessary nor sufficient
Many people (even MDs) use flawed heuristics like:
X responded to stimulants → therefore must have ADHD (not necessarily)
X improved in a different environment → therefore not ADHD (nope, not necessarily)
All this boils down to the fact that markers of complex psychiatric disorders are neither necessary nor sufficient for the condition.
Great illustrations of the problem in this post from
Modern Causal Inference for Psychiatric Kinds
In 2019, I gave a lab talk on how we need to embrace homeostatic property clusters for psychiatry and how modern biostatistics / causal inference / theoretical epidemiology offer some clues. A few months before, Ken Kendler had made the same point! As far as I know, this has still not happened. I don’t think the computational psychiatry community reads this stuff, or takes “real category ≠ biological essence” seriously enough to change its methodological playbook.
You can’t just throw canonical ML/DL/foundation models at this stuff without thinking about changing what you are fundamentally asking them to do.
Clustering does not give you relevant psychiatric kinds
The standard playbook is to do something like 1) measure biology, 2) learn predictors of a clinical phenotype (usually cross-sectional), 3) cluster the representations of the predictive model.
The problem is that there is no guarantee that “statistical” clusters correspond to sufficient causes [1]. IMO, thinking in terms of sufficient causes, or other equivalent frameworks that acknowledge that component causal factors may be neither necessary nor sufficient for a phenotype to manifest in one individual, is important. If we truly want to learn them, then we should reverse engineer how we approach measurement + quantitative methods to get closer to this.
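A toy simulation makes the failure mode concrete. Everything here is invented for illustration: two distinct mechanisms (call them a “genetic” mechanism A and a “psychosocial” mechanism B, each tracked by its own noisy biomarker) both raise symptom severity, and a phenotype predictor learns the sum of the biomarkers. Clustering on that learned score then splits patients by severity, not by mechanism:

```python
import random
import statistics

random.seed(0)
n = 20_000
mech, score = [], []
for _ in range(n):
    is_a = random.random() < 0.5                 # mechanism A vs. mechanism B
    b1 = random.gauss(1.0 if is_a else 0.0, 1.0)  # biomarker elevated under A
    b2 = random.gauss(0.0 if is_a else 1.0, 1.0)  # biomarker elevated under B
    mech.append(is_a)
    score.append(b1 + b2)  # what a phenotype predictor would learn: total "risk"

# "cluster" on the learned representation by splitting at the median score
cut = statistics.median(score)
high = [m for m, s in zip(mech, score) if s >= cut]
share_a = sum(high) / len(high)
print(f"fraction of mechanism-A cases in the high-score cluster: {share_a:.2f}")
```

The high-score cluster contains roughly half mechanism-A and half mechanism-B cases: the representation that best predicts the phenotype is, by construction here, blind to the causal partition we actually care about.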
Causal Pies / Sufficient Component Causation
One of the ideas in the causal inference literature is to acknowledge that forward causal inference, i.e. manipulating A, B, C, D, E and learning that they make a difference to outcome Y, only identifies a component cause. But A may be neither necessary nor sufficient to produce an outcome (e.g. a diagnosis of a treatment-resistant disorder). It can only be necessary if every possible way for the outcome to manifest always has A as one of the causal factors. On top of this, the way we measure all these component causes may result in different kinds of confounding, differential measurement error, failure to observe all possible component causes, etc.
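The logic can be checked by brute force in a few lines. As a hypothetical sketch, suppose there are two sufficient-cause “pies”: {A, B} and {C, D}. Enumerating every configuration confirms that A is a genuine component cause while being neither necessary nor sufficient:

```python
from itertools import product

# Two hypothetical causal pies: the outcome occurs iff some pie is complete.
def outcome(a, b, c, d):
    return (a and b) or (c and d)

vals = [False, True]
# A is sufficient only if the outcome occurs whenever A does, for every context
a_sufficient = all(outcome(True, b, c, d) for b, c, d in product(vals, repeat=3))
# A is necessary only if the outcome never occurs without A
a_necessary = not any(outcome(False, b, c, d) for b, c, d in product(vals, repeat=3))
print(f"A sufficient: {a_sufficient}, A necessary: {a_necessary}")
```

Both come out False: A needs B to complete its pie (not sufficient), and the {C, D} pie produces the outcome without A (not necessary). Yet intervening on A clearly changes outcomes for the subpopulation in which B is present.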
But modeling while explicitly keeping pluralistic causes in mind helps get at the idea of psychiatric kinds as homeostatic property clusters, i.e. we distinguish subtypes of a disease by differences in the shared causal mechanisms giving rise to it; “mechanism” here includes everything, not just biology. The clusters are of course likely to be a bit fuzzier IRL than the models make them look.
The causal pies formulation of Lewis’s counterfactual causes is popular in epidemiology. Incidentally, Kendler also seems to have been inspired by Rothman and epidemiology in his criticism of monocausality in psychiatry [2].
In more modern work this shows up as learning the probability of necessity / probability of sufficiency, learning sufficient-cause interactions, etc.
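For a flavor of what those quantities look like: under Pearl’s exogeneity (no confounding) and monotonicity assumptions, the probability of necessity (PN), probability of sufficiency (PS), and their conjunction (PNS) are point-identified from two conditional risks (Pearl, Causality, ch. 9). A minimal sketch, with made-up risks:

```python
def pn_ps_pns(p_y_given_x, p_y_given_notx):
    """PN, PS, PNS under exogeneity and monotonicity.

    p_y_given_x:    P(Y=1 | X=1), risk among the exposed
    p_y_given_notx: P(Y=1 | X=0), risk among the unexposed
    """
    pns = p_y_given_x - p_y_given_notx          # P(Y_x=1, Y_x'=0): risk difference
    pn = pns / p_y_given_x                       # prob. the exposure was necessary
    ps = pns / (1.0 - p_y_given_notx)            # prob. the exposure was sufficient
    return pn, ps, pns

# hypothetical numbers: 50% risk if exposed, 20% if not
pn, ps, pns = pn_ps_pns(0.5, 0.2)
print(f"PN={pn:.3f}, PS={ps:.3f}, PNS={pns:.3f}")
```

Without those assumptions the quantities are only bounded, not identified, which is exactly why measurement and design have to be reverse engineered around the question rather than bolted on afterwards.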
To be fair, the field does think about this conceptually: that is why there is so much experimentation with what we measure (different modalities, clinical perturbation, etc.), but then it just gets excited about the standard ML playbook instead of changing it to suit its needs.
To make this concrete: why do we adopt such a narrow lens on “treatment resistance” in any disorder? This confuses the pragmatic clinical stance of what to do about treatment resistance with treatment resistance being a strictly biological kind. What would we learn if we considered that either genetics (e.g. rare variant burden) or some neurobiological circuit dysfunction may sometimes make up one sufficient cause for some people, but may in general be neither necessary nor sufficient, because it interacts with all the ways the disease progresses over a lifetime in the presence of other causal factors, e.g. beliefs / belief-level representations x other kinds of psychosocial factors that also make up component causes? Finding neurobiological predictors of “treatment resistant” status is likely to yield all kinds of misleading representations.
Also, it is possible that I have missed more recent work directly tackling these kinds of issues, so I welcome being corrected on this.
[1] https://www.sciencedirect.com/science/article/pii/S0167865515001269
[2] https://pubmed.ncbi.nlm.nih.gov/31215968/
Follow the rest of the thread here
Subjectivity is NOT the problem with psychiatric endpoints
I was recently at a biopharma <> stats workshop and was disappointed to see that statisticians, who are trained to identify problems of degeneracy, lack of causal identification, etc., unquestioningly buy into myths like “psychiatry is challenging to study because patient endpoints and symptoms are subjective and we don’t have endpoints/biomarkers that are objective”.
This sloppy thinking is everywhere and really deserves to be challenged.
Just how illogical is it to
accept that symptoms are “subjective” AND
expect that we will find “objectivity” by correlating neurobiology with those same symptoms?
If your instruments for measuring symptoms are not measurement invariant, and if the vast majority of them are simply screening questionnaires and other very non-specific measurements, then yes, your whole approach is effed up and isn’t going to yield anything generalizable or more granular.
I had to challenge a panel to consider that perhaps the more useful thing would be a far richer set of patient-specific longitudinal measurements, e.g. the idiographic approach to psychopathology, instead of bemoaning the subjectivity of patient measurements. Everywhere else, patient-reported measurements are getting richer and being taken more seriously, while the recent resurgence in psychiatric drug development is embracing some impoverished narratives.
Personally I’m a bit of an activist methodologist — if you care about drawing correct scientific inferences then you have to be invested in the scientific question yourself and be prepared to tackle theoretical flaws end to end. We aren’t going to get much progress by strictly deferring on scientific questions to domain experts with a “throw things over the wall” approach. The virtue of being theoretical scientists is that you can spot conceptual problems a mile away. Every theoretical / conceptual problem will bleed over into the experimental design and quantitative evidence. To ignore all that amounts to rigor-washing and ultimately this does a dis-service to patients.
Awais Aftab’s writing in the last few years lays bare a lot more of the better thinking behind the scenes that is less well known but deserves to be more widely understood. I just don’t see how we are going to get the revolution in therapies we deserve until we stop attempting to do target discovery, biomarker development, etc. with old reductionist playbooks that just don’t correspond to how things really are. And the virtue of all this interest in neurotech is that we don’t have to be naive biological reductionists anymore. Every kind of measurement is now on the table; we don’t have to choose out of methodological convenience.
In reference to: “Treatments likewise tend to have broad, transdiagnostic effects across mental functions. Trials may be anchored to a target diagnosis, but the causal traffic usually runs through mechanisms that cut across our labels. “Physiological,” “psychological,” and “sociocultural” are not sealed ontological provinces; they are overlapping languages for a single, complicated reality. The neurophysiological strand is one thread among many (experiential, sociocultural, existential) and not always the most important one. Even so, because the mind is embodied, bodily mechanisms can be leveraged to produce desired effects, whether or not they count as “dysfunctional” in any simple factual sense. We should resist a priori privileges for either technological fixes or hermeneutic readings. The posture must be Jaspersian: causal explanation and meaningful understanding as partners, with their relevance varying case by case and…”
The papers that *change* you.
Those papers that you spent 1-12 months reading, re-reading, and working through every result of are the ones that make you.
And no you can’t read them like a novel or even a science paper.
Once you do this, you will not flinch at tossing out a paper with incomprehensible math in a science journal. Mathematistry is not impressive.
The power of estimands
If you are bullish about AI digital twins, virtual patients, virtual clinical trials, clinical trials in a dish and all that but you have never heard of estimands*, it is going to get a lot worse before it gets better.
* If you have heard of per-protocol effects vs. intention-to-treat effects, then you know just a tiny bit about clinical trial estimands. There is also a full universe of estimands for all of science.
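To see why the distinction bites, here is a toy trial simulation (all numbers invented): the drug truly raises recovery probability by 0.2 per treated patient, but sicker patients are more likely to stop taking it. The intention-to-treat contrast and a naive per-protocol contrast then answer different questions and give different numbers:

```python
import random

random.seed(1)
n = 100_000
itt_1, itt_0, pp_treated, pp_control = [], [], [], []
for _ in range(n):
    severity = random.random()                   # unmeasured baseline severity
    assigned = random.random() < 0.5             # randomized assignment
    # non-adherence is differential: sicker patients tend to drop the drug
    treated = assigned and (random.random() < (0.9 if severity < 0.5 else 0.4))
    # true causal effect of *receiving* treatment is +0.2 on recovery probability
    recovered = random.random() < (0.3 + 0.2 * treated - 0.2 * severity)
    if assigned:
        itt_1.append(recovered)
        if treated:
            pp_treated.append(recovered)
    else:
        itt_0.append(recovered)
        pp_control.append(recovered)

itt = sum(itt_1) / len(itt_1) - sum(itt_0) / len(itt_0)
naive_pp = sum(pp_treated) / len(pp_treated) - sum(pp_control) / len(pp_control)
print(f"ITT estimate: {itt:.3f}, naive per-protocol estimate: {naive_pp:.3f}")
```

The ITT contrast is diluted by non-adherence (around 0.13 here, versus a true per-dose effect of 0.2), while the naive per-protocol contrast is inflated because adherers are systematically healthier. Neither number is “wrong”; they estimate different estimands, and a digital-twin or virtual-trial pipeline that never says which one it targets can’t be evaluated at all.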




