If folks are interested, I recently published a paper [1] demonstrating that fMRI activity in the visual cortex is remarkably high-dimensional!
Specifically, using a linear approach (like PCA, but slightly fancier), we find that stimulus-related information is spread across many, many dimensions of the neural response---far more than previously expected or reported.
[1] https://journals.plos.org/ploscompbiol/article?id=10.1371/jo...
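To make the "cross-validated, PCA-like" idea concrete, here's a toy, self-contained sketch on synthetic data (not the paper's actual pipeline; all sizes, noise levels, and the threshold are made up for illustration). The trick is that noise is independent across repeated presentations, so projecting two repeats onto the same PCs and taking their product isolates stimulus-driven variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_vox, true_dim = 500, 200, 50

# Synthetic "responses": signal spread over many latent dimensions plus noise.
latents = rng.normal(size=(n_stim, true_dim))
mixing = rng.normal(size=(true_dim, n_vox))
signal = latents @ mixing
rep1 = signal + rng.normal(scale=2.0, size=signal.shape)  # repeat 1
rep2 = signal + rng.normal(scale=2.0, size=signal.shape)  # repeat 2

# Cross-validated PCA: PCs are fit on repeat 1, then signal variance along each
# PC is assessed using repeat 2. Noise is independent across repeats, so only
# stimulus-driven covariance survives the product.
u, s, vt = np.linalg.svd(rep1 - rep1.mean(0), full_matrices=False)
proj1 = (rep1 - rep1.mean(0)) @ vt.T
proj2 = (rep2 - rep2.mean(0)) @ vt.T
cross_var = (proj1 * proj2).mean(0)  # cross-repeat covariance along each PC

# Count dimensions carrying appreciable cross-validated signal variance.
n_reliable = int((cross_var > 0.05 * cross_var.max()).sum())
print(n_reliable)  # close to the 50 planted signal dimensions
```

With independent noise this recovers roughly the planted dimensionality, while ordinary within-repeat PCA variance would not separate signal from noise dimensions.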
Hasn't fMRI as a whole been called into question? https://www.nature.com/articles/s41593-025-02132-9
I wouldn't say "called into question", as if the whole idea is bunk.
MRI is, in general, a lot harder than people often imagine. It uses complicated physics to measure convoluted physiological changes as an indirect proxy for brain activity, which is already stupefyingly involved--and then you have to relate that to other, often complicated factors like behavior, lifestyle, or disease state.
I think it's reasonably well-known that the BOLD response is complex and doesn't directly reflect "average" spiking activity. Some studies find that it's sensitive to the degree of synchrony (i.e., more neurons firing together in time) rather than to the rate. The paper you mention shows another dissociation: neurons can get more fuel by extracting oxygen more efficiently OR by having more overall oxygen to extract at the same rate. So it's not noise, but it is complicated.
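A toy simulation (made-up rates and event probabilities, purely illustrative) of why a pooled, mass signal can track synchrony rather than rate: the summed activity of N independent units fluctuates like sqrt(N), but correlated units produce fluctuations scaling like N, even at identical mean firing rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins, rate = 1000, 5000, 0.05  # spike probability per time bin

# Asynchronous population: each neuron spikes independently at `rate`.
async_spikes = rng.random((n_bins, n_neurons)) < rate

# Synchronous population: same mean rate, but spikes cluster into shared
# population "events" occurring in 10% of bins.
event = rng.random(n_bins) < 0.1
sync_spikes = (rng.random((n_bins, n_neurons)) < rate / 0.1) & event[:, None]

pop_async = async_spikes.sum(1)  # summed population activity per bin
pop_sync = sync_spikes.sum(1)

print(pop_async.mean(), pop_sync.mean())  # ~50 spikes/bin in both cases
print(pop_async.std(), pop_sync.std())    # far larger swings when synchronized
```

Both populations fire the same number of spikes on average; only the coordinated one produces big swings in the summed signal, which is roughly what a coarse aggregate measure would pick up.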
> Hasn't fMRI as a whole been called into question? https://www.nature.com/articles/s41593-025-02132-9
I don't immediately see how that paper's assertion (that some areas' fMRI response is influenced by baseline oxygenation and cerebral blood flow) relates to the reliability of an information-modeling experiment.
Yeah, there's a ton of criticism of fMRI as a method, largely because of a lot of results that are statistically unsound (to say the least)!
I tend to think of fMRI data as some highly nonlinear transform of whatever neural activity is occurring in a particular region of the brain, at pretty coarse spatial resolution (~1-3 mm) and pretty bad temporal resolution (~5-15 s).
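The temporal blurring part is easy to see in a toy model: BOLD is often approximated as neural activity convolved with a slow hemodynamic response function. The gamma-shaped HRF below is a made-up stand-in (not a calibrated double-gamma), just to show how brief events smear into one sluggish bump:

```python
import numpy as np

# Toy hemodynamic response function: gamma-shaped, peaking around 4-5 s.
t = np.arange(0, 20, 0.5)          # seconds, 0.5 s sampling
hrf = t**3 * np.exp(-t / 1.3)
hrf /= hrf.sum()

# Two brief neural events, at t = 2 s and t = 4 s ...
neural = np.zeros(80)
neural[[4, 8]] = 1.0

# ... merge into one slow, delayed BOLD bump after convolution.
bold = np.convolve(neural, hrf)[:len(neural)]
peak_time = np.argmax(bold) * 0.5
print(peak_time)  # the response peaks seconds after the events themselves
```

Two events 2 s apart become indistinguishable in the output, which is the intuition behind the ~5-15 s effective temporal resolution (and that's before any nonlinearity).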
Sure, it's no direct measure of neurons firing, but that doesn't mean there isn't information in the signal that we can interpret and maybe use (see [1] for a recent example of reconstructing seen images from brain activity).
As a cognitive neuroscientist, I tend to abstract away a ton of the details (neurons, molecules) and focus on more general computational principles: how do we get complex behavior from many simple interacting units---voxels in fMRI, for instance?
Regarding the specific paper you posted, I saw some of the discourse around it but haven't read it carefully myself (it's not my area of expertise). I saw a recent re-analysis of that data [2] that argues the result isn't valid, but I need to look at it more carefully.
[1]: https://www.nature.com/articles/s41598-025-89242-3

[2]: https://www.biorxiv.org/content/10.64898/2026.04.21.719913v1
You can do a lot better than this if you redefine the problem from directly generating images with certain contrasts to maximizing information gain, even with weak magnets. Q Bio [0] had that tech working years ago: quickly deriving many different image types from a single entropy-maximizing scan, though IIRC they never deployed it in prod. They've since basically run out of money and are on life support.
[0] q.bio
I remember one of my diploma students continued with discrete tomography for his PhD, on the topic "Binary Tomography by Iterating Linear Programs", and I found it super interesting how it cut down the number of shots while substantially increasing accuracy.
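For anyone curious, here's a minimal sketch of the core step of that kind of method: recovering a binary image from very few projections (here just row and column sums) via an LP relaxation. This is a toy example with a made-up 4x4 image, not the thesis's actual algorithm; iterated-LP approaches re-solve with extra regularization terms, while this shows only a single solve:

```python
import numpy as np
from scipy.optimize import linprog

# Toy binary tomography: recover a 4x4 binary image from its row and column
# sums (two "shots"). The constraint matrix is totally unimodular, so simplex
# vertices of the [0, 1]-relaxed feasible set are already integral.
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1]])
rows, cols = truth.sum(1), truth.sum(0)

n = truth.size
A_eq = np.zeros((8, n))
for i in range(4):
    A_eq[i, i * 4:(i + 1) * 4] = 1   # row-sum constraints (row-major flatten)
    A_eq[4 + i, i::4] = 1            # column-sum constraints
b_eq = np.concatenate([rows, cols])

# Feasibility LP with pixels relaxed to [0, 1]; dual simplex returns a vertex.
res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n,
              method="highs-ds")
recon = (res.x.reshape(4, 4) > 0.5).astype(int)
print(recon.sum(1), recon.sum(0))  # reconstruction reproduces both projections
```

With only two projections the solution need not be unique (the reconstruction can differ from the original while matching all the sums), which is exactly why iterating with additional constraints or priors buys so much accuracy.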
It's a nice review, but the end reads like a funding pitch. The most prominent mathematicians in the US, like Donoho and Tao, currently seem to be facing budget cuts and are starting to appeal to the public.