Myth (family) #1: We only use 10% of our brains.
This is one of the more obvious and well-known ones. I'm including it mainly as a reference, to note that there are already lots of lists online covering myths like these (e.g. big brain = smart, you're either left- or right-brained, we don't grow new brain cells, ...), so I won't cover them again, but it's good to know them. It's surprising how pervasive they still are; e.g. the movie 'Lucy' was built off the 10% premise, and if Morgan Freeman says it, it must be true, right? I'd class these along the lines of "there's no gravity in space" or "gases don't have mass". Thankfully, papers don't really mention these at all, and news coverage only sometimes does.
Myth #2: More firing = better
This seems pretty common in fMRI coverage: you get nice images of brains (like the ones below) with big yellow or red areas of increased activation, and hence that's better, right? Like, when you're dead, your brain cells aren't doing anything, so more activation means you're using more of your brain (note: this is related to the 10% myth above). Unfortunately (?), it's much better to think of it as normal firing = better: there are many cases where extra firing in particular areas has negative side-effects, and in those cases more firing = worse. The simplest examples would be things like uncontrollable movement or speech (if your motor cortex or Broca's area is over-activated), and some anxiety symptoms seem to come from over-activation as well. So remember: if you see lots of bright red/yellow spots, that doesn't mean the brain in that image is working better...
Myth #3: More connectivity = better

This myth seems to be the most prevalent in news articles, and even appears in quite a few papers. The source is usually something along the following lines: 'functional connectivity' is a term in brain analysis where, to oversimplify, you look at the activation in two areas, find the correlation coefficient, and, if it's statistically significantly different from zero, claim the two are related. This itself has a bunch of problems: the regions don't even need to be physically connected, there could be a third unrelated source driving the correlation (even scanner noise), and there's usually no temporal component (so nothing like Granger causality).
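To make the third-source confound concrete, here is a toy sketch (entirely invented numbers, not any real fMRI pipeline): two "regions" whose underlying signals are completely independent still come out significantly correlated once they both pick up a shared nuisance source.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_timepoints = 200

# Two regions whose underlying signals are completely independent...
signal_a = rng.standard_normal(n_timepoints)
signal_b = rng.standard_normal(n_timepoints)

# ...but both pick up the same nuisance source (e.g. scanner noise or motion).
shared_noise = rng.standard_normal(n_timepoints)
region_a = signal_a + shared_noise
region_b = signal_b + shared_noise

r, p = pearsonr(region_a, region_b)
print(f"r = {r:.2f}, p = {p:.2g}")
# A clearly nonzero r with a tiny p-value: "significant functional
# connectivity" between regions that share no actual signal.
```

Regressing out known nuisance signals helps, but only for the nuisance sources you thought to measure.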
For some reason, talking about 'connected' brain areas also gets reported strangely; you can see this in the LSD paper linked to at the start and under the image above. Perhaps because there's some "we're all connected" vibe? Hard to tell. But I propose a different term for connectivity: redundancy. The more connected two regions A and B are, the more anything reading from A could have gotten directly from B, so A isn't adding much information. Take it to the extreme: what if your entire brain was 'connected', so that any time one neuron fired, all of them did? I'm guessing here, but I'd assume someone in that state would pretty quickly be dead (at which point nothing fires, and their brain still has perfect connectivity!).
So in short: correlation may or may not be connection, but it is also a form of redundancy. Even if you think everyone is connected, you don't want that to also apply to your neurons...
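The redundancy point can also be simulated (again, toy data only): if region B is nearly a copy of region A, then once you know A, adding B barely improves a linear prediction of a behaviour that A drives.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

region_a = rng.standard_normal(n)
region_b = region_a + 0.05 * rng.standard_normal(n)  # B is almost a copy of A
behaviour = region_a + 0.5 * rng.standard_normal(n)  # behaviour driven by A alone

def r_squared(columns, y):
    """Fraction of variance in y explained by a linear model on the columns."""
    X = np.column_stack([np.ones(len(y))] + columns)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coef
    return 1 - residuals.var() / y.var()

r2_single = r_squared([region_a], behaviour)
r2_both = r_squared([region_a, region_b], behaviour)
print(f"A alone:  R^2 = {r2_single:.3f}")
print(f"A and B:  R^2 = {r2_both:.3f}")
# B is highly "connected" to A, yet adds essentially nothing: it's redundant.
```

The two R-squared values come out almost identical, which is exactly what "redundancy" means here.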
Myth #4: If the data is there, the brain is using it.
The last myth I'm covering is one that doesn't come up in news coverage that often, but unfortunately I keep finding it in loads of papers. The standard pattern goes something like this:
- Measure activation in areas of the brain
- Use those to train models that try to classify behaviour
- If the classifier does better than guessing, that area was probably used for that behaviour
- (often with minimal validation, since you probably only have 12 subjects to test on).
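Here is a hedged illustration of why that last (skipped) validation step matters, on invented data with the typically tiny sample size: with 12 "subjects", 50 noise "voxels", and labels that have nothing to do with the data, a linear classifier still fits its training set far above chance, while cross-validation brings it back toward chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# 12 "subjects" with 50 voxel activations each: pure noise,
# paired with labels that are unrelated to the data.
X = rng.standard_normal((12, 50))
y = np.array([0, 1] * 6)

clf = LogisticRegression().fit(X, y)
train_acc = clf.score(X, y)
print("training accuracy:", train_acc)  # far above chance, purely from overfitting

cv_acc = cross_val_score(LogisticRegression(), X, y, cv=4).mean()
print("cross-validated accuracy:", cv_acc)  # typically back near chance (0.5)
```

With many more features than subjects, near-perfect training accuracy on noise is the expected outcome, not evidence that the area carries the behaviour.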
This approach seems to be based on the assumption that if the data is there (which usually means: learnable with linear regression) when doing a behaviour, then the brain must be reading it out somewhere and using it to achieve the behaviour.
Leaving aside all the modelling problems (does the brain use linear models? was there any validation on held-out subjects, on the same subjects in different or randomised tests, or on random noise?), I feel this just compounds the issues from above. The equivalent neuroscience-style study of car mechanisms might go along the lines of:
- We're trying to see what is important for how far down the windows are
- On 10 trips, we measured petrol levels, speed, window button press state (or, say, crank angle if you've used them), and overall weight.
- Using a general linear model trained on all of these, the only significant predictor was speed, negatively correlated with how far down the window was; therefore driving faster will cause the windows to go up.
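For fun, the car study above can actually be run (all numbers made up, and the model deliberately mis-specified): window position is caused by button presses, which the regression never sees, so speed soaks up the effect and comes out with a negative coefficient.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trips = 10

speed = rng.uniform(10, 120, n_trips)      # km/h
petrol = rng.uniform(5, 60, n_trips)       # litres (irrelevant)
weight = rng.uniform(1200, 1800, n_trips)  # kg (irrelevant)

# True mechanism: people press the window button more in slow town traffic.
button_presses = np.clip((120 - speed) / 20 + rng.normal(0, 0.3, n_trips), 0, None)
window_down = np.clip(0.2 * button_presses, 0, 1)  # fraction of window open

# The "study" regresses window position on everything *except* the true cause.
X = np.column_stack([np.ones(n_trips), speed, petrol, weight])
coefs, *_ = np.linalg.lstsq(X, window_down, rcond=None)
print("speed coefficient:", coefs[1])
# Negative: the model "concludes" that driving faster pushes the windows up.
```

The regression is perfectly correct as a predictive model; it only goes wrong at the "therefore X causes Y" step, which is exactly the step many decoding papers take.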
What's more, even when there is a causal link in the desired direction, it still doesn't mean our brains use that information. There are techniques for using a learnt model to extract sound from silent videos (very cool video, I'd recommend checking it out!) purely from the distortions in the image caused by sound waves moving the target, the camera, or something in between (e.g. glass). Can we conclude from this that humans hear by seeing vibrations? Unlikely: even though the data is there, our brains don't process it. A lot of illusions work this way: our brain can be pretty good at ignoring data right in front of it! Note that there's no problem at all if you're using this data to build predictive models (e.g. brain-computer interfaces); any model that categorises well will do. It is problematic, though, if you're concluding that that's how brains do it.
So that's it for easily explainable recurring problems. When reading about anything brain-related, remember: sometimes it's best for parts of your brain to rest, sometimes correlation is redundancy and doesn't mean the universe is connected, and sometimes brain activity will predict something without the brain actually using it. Hopefully there'll be lots more neuroscience papers and articles (without these problems) to read in the future!