I predict a riot

Since starting my Cognitive Systems degree, a number of people have asked what exactly that entails. Beyond the standard answer (psychology & computer science), I figured it was worth writing up one of the things I've been contemplating that may help to summarize it: predictive ability.

1. Macro prediction

It's no surprise that predicting things is useful: daily weather forecasts and the structural integrity of buildings come to mind, but really everything uses it in some way. Science as a whole appears to be built around this idea: your theory's utility is strongly tied to its predictive power (i.e. it is measurable and replicable). What's more, gaining predictive power is hugely incentivized financially: the stock market is the most obvious example, but a large amount of machine learning effort is also going into predicting the behaviour of consumers, populations, the environment, etc.

This leads to some interesting questions: is our predictive ability getting better over time? Assuming yes, how much better, and how fast? I don't know if there's a good way to measure this (some 'predictive power index' would be needed), but my guess would be that, like technology, it's growing exponentially. This leads to further questions about the side-effects: e.g. can things like mortgages be used to bring future value into the present at lower risk, thanks to better predictions; or what happens to insurance if risk isn't balanced across everyone, but instead concentrated on those predicted to be unlucky?

A final thing of note is how this relates to our concept of intelligence: e.g. as we get better at forecasting, does that make people 'smarter'? Are ants smart because they can predict rain well, or crows because they can predict rules of water displacement?

2. Mecro prediction

(I wasn't sure what to go between macro and micro, but 'e' is linearly and semantically between 'a' and 'i', plus this is the individual level, so 'mecro' seems good, albeit corny).

Next up, rather than the societal level, we can look at individual aspects of predictive ability. The most obvious is how it plays into the theory of natural selection: the better you can predict, the more likely you are to stay alive. Much of the time we make decisions by enumerating possible actions, predicting the outcome of each, and then picking the best ('Where should I move to? Which shirt should I wear?' ...etc). This can be improved in the absolute sense by an increase in predictive power: e.g. when considering "If I eat this food, will I get sick?", it's better to improve the prediction with more cues like smell, sight, weight etc. Absolute improvement alone is not enough, however; a further boost comes when improvements are useful relative to others. You are more likely to survive if you can predict what your opponent (whether prey or predator) will do in situations, and you're more likely to reproduce if you can predict what others in your species will do, hence an acceleration of growth.
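That decision loop (predict the outcome of each action, then pick the best) can be sketched in a few lines. Everything here is invented purely for illustration - the actions, the scores, and the helper names are not from anywhere real:

```python
# Toy sketch of decision-by-prediction: enumerate candidate actions,
# predict how well each would turn out, and pick the best-scoring one.
def choose(actions, predict_outcome):
    return max(actions, key=predict_outcome)

# Hypothetical predicted outcomes for the "which shirt should I wear?" decision.
predicted_satisfaction = {"blue shirt": 0.6, "red shirt": 0.8, "old hoodie": 0.3}

best = choose(predicted_satisfaction, predicted_satisfaction.get)
print(best)  # red shirt
```

Better predictive power, in this picture, just means `predict_outcome` tracks reality more closely - the selection step stays the same.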

There are some interesting things to consider when this predictive ability breaks in individuals too - e.g. this blog covers some thoughts about the side-effects of thinking your predictions are much more likely than they actually are (maybe confidence, but maybe also delusions or paranoia).

3. Micro prediction

Things get really interesting when you look even lower - at the neurological layer. Assuming predictive ability is good for us, we'd expect brains to reward correct predictions. A neurologist in Geneva is looking into this (Fabienne Picard, see e.g. this article), having noted that some epilepsy patients experience happiness that appears to come from an error-correcting part of their brain being turned off - effectively making all their predictions 'correct'. This may even explain a collection of common biases: e.g. Hindsight Bias, where we artificially inflate our memory of our predictive power over past events, and Confirmation Bias, where we focus more on things which agree with our predictions (as those provide more happiness).
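As a toy illustration of that reading of the biases (all numbers and names below are made up, not a model from the literature): if the reward signal peaks when prediction matches outcome, then rewriting your memory of the prediction is a cheap way to boost the reward.

```python
# Toy model: reward is highest when the prediction matches the outcome.
def prediction_reward(predicted, actual):
    return 1.0 - abs(predicted - actual)  # 1.0 for a perfect prediction

# An honest prediction that turned out to be wrong:
honest = prediction_reward(predicted=0.2, actual=0.9)
# Hindsight bias: retroactively remember having predicted the outcome all along.
hindsight = prediction_reward(predicted=0.9, actual=0.9)
print(honest < hindsight)  # True: rewriting the memory yields more reward
```

Confirmation bias fits the same frame: preferentially sampling events where `actual` is likely to match `predicted` keeps the average reward high.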

This leads to a model where our brain includes predictive machinery, plus a center for measuring its accuracy and rewarding (or maybe penalizing) us based on that. But how does that work? That part is still not well understood. One clue, I think, lies in neurons themselves: there's a famous rule (Hebb's, from Hebbian theory) summarized as "Neurons that fire together, wire together" - that is, the synaptic strength between two neurons increases the more the upstream cell's firing causes the downstream cell to fire. It's then a very small leap to conceptualize this as a 'reward' for the firing of one cell predicting the firing of the second - perhaps higher-level prediction can then be built from large-scale combinations of these lower-level prediction atoms.
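A minimal sketch of that rule, under the simplest possible assumptions (the variable names and the linear update are my own shorthand, not a standard implementation): `w` is the synaptic weight from the upstream cell to the downstream cell, and `eta` is a learning rate.

```python
# Hebb's rule, minimally: the weight grows in proportion to how often
# the upstream (pre) and downstream (post) cells fire together.
def hebbian_update(w, pre, post, eta=0.1):
    return w + eta * pre * post

w = 0.0
# The upstream cell repeatedly "predicts" the downstream cell's firing,
# so the connection between them strengthens step by step.
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # approaches 1.0
```

Read this way, each weight increase is the 'reward' for a successful micro-prediction; when only one cell fires, the product is zero and nothing changes.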

4. Problems

It might all sound pretty good so far - predictive ability is encouraged at every layer of the stack. However, there are a number of issues that arise which will be interesting to try to address in the future. I recommend trying to think of some yourself and letting me know, but here's what I've come up with so far:

- Why is novelty fun? If we just did things we could predict, then why try anything new? Listening to the same song all the time would be 'rewarded' much more than songs you'd never heard before, but people still create new music.
- How does the difference between correlation and causation play into this? i.e. prediction is different from predictive control: forecasting the temperature tomorrow is one thing, but being able to set the weather tomorrow is far more useful. In science in general, causation is preferred, but what about contexts like stock markets?
- ...and related, what are the implications of directionality? With Hebb's rule, the connection between A and B is rewarded, not necessarily A and B themselves - you could just as easily argue that, were time running backwards, B would better predict A (a bit abstract, I know, but the symmetry is still interesting to think about).
- Is there a good way to define a 'predictive power index', as discussed in the macro section?
- What is the limit? e.g. are there either things that can't be predicted (quantum events?) or things that it would take too much time/power/... to predict, or will we reach some sort of prediction singularity?

For now I'm not sure what I'll do next - other than revise for mid-terms :) It might be interesting to come up with an AI algorithm based purely on predictive atoms (like neural nets, but from a different angle), or look for more real-world examples (e.g. whether prediction correctness in games leads to more happiness, or whatever), but as always, I look forward to any thoughts others have about this.


  1. In terms of the prediction reward not rewarding you for obvious predictions, I remember some research I read about ages ago on curious systems. The idea was that obvious outcomes are not interesting, and things with random outcomes are not interesting, but things in between are interesting, and I guess would have a higher prediction reward. This is a vague recollection, but there was a bunch of research into how to model this.

    1. Ooh, good point; there's definitely a split between exploration (gaining knowledge) and exploitation (using knowledge for utility). It's hard to know, though, whether the obvious ones are uninteresting just because they're learned so quickly.

      Was it random forests? From memory those have a concept of making decisions based on 'importance', which sounds similar.

      At the macro level it definitely seems true - scientific effort is concentrated on finding changes that split the model space best; there's not much study of obvious or random signals.

      The micro level seems the hardest to fit in - neurons don't seem to mind always firing together. Perhaps they fatigue after long periods, which would then require changing the signals before full firing can resume. Or it could only appear when clusters of neurons are examined as a whole.

