Playing your Partisanship
It's election time in the US, and it's hard to avoid hearing all about it here in Canada (or, I presume, in much of the rest of the world). I'm no exception to that, as you can probably guess from my previous post. I've been interested to see a few of the psychological aspects of polarization, especially after the Australian elections earlier this year showed similar dynamics, although to a lesser extent. Two in particular stand out, and are worth a closer look:
1A) Conspiracy theory: Bayes and Unwavering beliefs.
Bayes' theorem is a bit of maths relating probabilities:

P(H | E) = P(E | H) × P(H) / P(E)
The interesting thing is that it can be used as a way to model how something (e.g. you, or your brain) can end up holding beliefs about how the world works based off experience. To reword the equation above in prose:
How much I should believe that a hypothesis is true, given some evidence [P(H | E)], is equal to my prior probability [P(H)] times the chance of seeing the evidence assuming the hypothesis is true [P(E | H)], divided by the chance of seeing the evidence at all [P(E)]. This process is known as Bayesian inference, and is used quite a bit in AI systems, as well as in modelling human decisions.
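As a quick sketch of that update process in code (all the numbers here are made up for illustration), P(E) can be expanded with the law of total probability so we only need the prior and the two likelihoods:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) via Bayes' theorem."""
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Made-up numbers: a 50/50 prior, evidence twice as likely if H is true.
posterior = bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(posterior, 3))  # 0.667 -- the evidence shifts belief towards H
```

Seeing evidence that is more likely under the hypothesis than not pushes the belief up from 0.5 to about 0.67, which is exactly the "standard" response described below.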
Where does this come in? There's an Australian show called Q&A, a panel-format discussion programme that manages to make headlines every now and again. A little while back, a video from it made the rounds when a scientist (Brian Cox) was trying to explain to an Australian senator (Malcolm Roberts) that global warming is backed by a lot of scientific evidence (video). Roberts's response? That the data has been corrupted by NASA (and others).
We can look at this in terms of Bayesian inference. Here, H = Global warming, E = the evidence provided by Dr. Cox. You can think of a few possible inferences for Senator Roberts here:
- The evidence is more likely given the hypothesis than with no constraints [P(E | H) > P(E)], so global warming seems more likely after the evidence is given.
- The evidence is less likely given the hypothesis than with no constraints [P(E | H) < P(E)], so global warming seems less likely after the evidence is given.
- Global warming is impossible [P(H) = 0], so it will remain impossible after any evidence as 0 * anything = 0.
- The evidence itself is invalid [P(E) = 0], hence the inference is undefined.
#1 I'll call the 'standard' response, and it's the inference most seem to take. #2 is very uncommon, and takes the approach of someone saying: "actually, if global warming were true, you'd see the opposite effect". Roberts is not saying that here. Similarly, he's not claiming #3: saying a hypothesis like global warming is impossible is very hard to defend. Instead, he's taking the fourth option: by saying the evidence is corrupted, you can avoid needing to conclude that global warming is more likely given that evidence.
This appears to be a standard way to maintain an unwavering belief in something despite evidence to the contrary. You can't react with #1; #2 and #3 are hard to rationalize (despite #3 being the actual belief); so the conclusion is #4. Does this seem familiar at all? Maybe from someone claiming the election is rigged, the media is out to get them, or that the FBI and the Emmys are corrupt...
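The four options above can be sketched directly (again with invented numbers, just to show the shape of each case):

```python
def bayes_update(prior, p_e_given_h, p_e):
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * prior / p_e

# #1: evidence more likely under H than in general -> belief increases.
print(round(bayes_update(prior=0.2, p_e_given_h=0.9, p_e=0.5), 2))  # 0.36

# #2: evidence less likely under H -> belief decreases.
print(round(bayes_update(prior=0.2, p_e_given_h=0.1, p_e=0.5), 2))  # 0.04

# #3: a prior of exactly zero stays zero, however strong the evidence.
print(round(bayes_update(prior=0.0, p_e_given_h=0.99, p_e=0.5), 2))  # 0.0

# #4: declaring the evidence invalid [P(E) = 0] makes the update itself
# blow up (here, a ZeroDivisionError) -- so no update ever has to happen.
```

The fourth case is the interesting one: rejecting the evidence wholesale isn't a belief update at all, it's a refusal to run the update.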
1B) Competing hypotheses
Before jumping to the conclusion "Hahaha, Trump (or Roberts) is so stupid", there's another part of Bayesian inference that's often forgotten: even when the evidence is more likely given the hypothesis, multiplying by a small prior [P(H), prior to evidence] will still leave a small posterior belief [P(H | E), posterior to the evidence]. Really, the hypothesis is one of a competing bunch, and the same evidence may make a competing hypothesis even stronger.
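To put rough (invented) numbers on that: even evidence that's ten times more likely under a conspiracy hypothesis barely moves a tiny prior:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # P(E) by total probability, then Bayes' theorem for P(H|E).
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A 1-in-1000 prior; the evidence is 10x more likely if the theory is true.
posterior = bayes_update(prior=0.001, p_e_given_h=0.5, p_e_given_not_h=0.05)
print(round(posterior, 4))  # 0.0099 -- ten times higher, but still about 1%
```

The belief did grow by roughly a factor of ten, but ten times "almost certainly not" is still "almost certainly not".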
For example, remember when it was hypothesized that a Trump speechwriter had plagiarized a Michelle Obama speech intentionally, in order to bring down the Trump campaign? Or even that Trump is sabotaging his own campaign? Quartz summarizes the first one well, with a reminder that unlikely theories will stay unlikely.
That's not to say unlikely theories never happen. It would be terrible if everyone discounted them outright, as that could easily be exploited. I'm sure there are people who get away with things precisely because they can rely on the logic "as if that'd ever happen", so some skepticism is good. Just remember that, when being skeptical, the easy part is dismissing people you disagree with as lacking evidence. It's also important, and much harder, to be skeptical of your own P(H).
2) Polarizing in both directions
A second, much shorter observation: a single fact can be polarizing all by itself. There was a wonderful example of this in the Australian election, where a very conservative candidate (Cory Bernardi) published a series of images saying why the very progressive party (the Greens) was bad, with statements like the Greens wanting to increase the refugee intake.
Amusingly, a whole lot of these could easily be mistaken for posters by the Greens themselves. With a single statement, those who feel that an increased intake is good will be happier about the Greens, and those who want fewer refugees will be happier about Bernardi. Given the already large amount of polarization (see: CGP Grey and filter bubbles), I found it amazing that one statement can polarize both sides at once. Usually it's just one direction ("The Greens are bad"), which causes a secondary effect ("I don't like them, because I like the Greens and they say the Greens are bad"), but this is immediate in both directions. It's kind of like when Clinton says "Donald Trump will deport Muslims", or Trump says "The FBI found Clinton not guilty": these are statements that gain them support, but also gain their opponent support.
Sadly, a lot of discourse seems to come down to this, which is a sure-fire way to increase polarization. It'd be interesting to see people instead try to gain support by convincing their opponents' supporters that they agree, and are just better at delivering what everyone wants. But for now, it's about calling people corrupt or deplorable...
In other updates, I might now have a neuro lab at UBC to help out with. I also recently discovered the Chan Centre at UBC, and got to see Pepe Romero (a legendary classical guitarist) there last weekend, with Richard Dawkins speaking there in two weeks too. Finally, Black Mirror season three is out, so I've finally signed up for Netflix. Being a British show, it's unfortunately only six episodes, but if it's anything like the first two seasons, the quality of the story-lines definitely makes up for the lack of quantity. Highly recommended!