Reading up on AI

In my downtime before uni, between hiking and coding, I figured I'd get into the Cognitive Systems vibe by reading up a bit, and this often seems to end up at some discussion of super-intelligent AI. I'll probably be writing up a bunch of these over the course of my degree, but first up: an observation made at LessWrong by a GiveWell director - in particular, distinguishing between 'agent' (active) and 'tool' (passive) AI, see objection #2. The basic idea is that it is much safer to design 'tool' AI, as it just gives you data and doesn't take action itself; I'd recommend reading that article and thinking about it before returning.

I think overall it's a good way to start thinking about the problem, but my take on this can be illustrated by the following tale (giving examples through stories seems popular in this area):

* * *

Investech was a company of researchers and quants who had some early success writing computational models of asset prices (shares, bonds, resources, ...) and used these to generate income from trading when the models predicted shifts. Over time, as more research was done and more patterns were found and incorporated, Investech's model grew, and the AI was given access to more and more information - company reports, social media posts, open health and weather data, etc. Eventually all trading was combined into a single AI that incorporated all their research and data, nicknamed the 'Oracle Oracle' (or O2) after the system it ran on.

Not wanting to repeat some of the massive automated trading disasters of the past, O2 was designed purely as a tool - when provided with a certainty cut-off, it would display an equity and a price that it forecast, to the given certainty, would hold at some point in the future. The human traders at Investech would then use this to make the actual trades, based on what levels of risk were acceptable.
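As an aside, the 'tool' interface in the story could be sketched roughly like this (purely illustrative - O2 is fictional, so the model, names, and types here are all invented):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Prediction:
    equity: str   # e.g. a ticker symbol
    price: float  # the forecast price


class StubModel:
    """Stand-in for O2's actual forecasting model (invented for illustration)."""

    def forecast(self, min_confidence: float) -> Optional[Tuple[str, float]]:
        # Pretend no forecast meets a near-certain threshold
        # (the 99% timeout case from the story).
        if min_confidence > 0.99:
            return None
        return ("CORN", 4.20)


class ToolAI:
    """A 'tool' AI: it only reports forecasts; humans decide whether to trade."""

    def __init__(self, model) -> None:
        self.model = model

    def query(self, confidence: float) -> Optional[Prediction]:
        # Ask for any forecast the model holds at least this confidently;
        # None means nothing was found (e.g. a timeout, as in the story).
        result = self.model.forecast(min_confidence=confidence)
        if result is None:
            return None
        return Prediction(*result)


o2 = ToolAI(StubModel())
print(o2.query(0.5))     # some Prediction
print(o2.query(0.9999))  # None
```

The 'tool' property is that `query` returns data and nothing else; any trade happens outside the system. The story's point, of course, is that this separation is weaker than it looks, since the returned data itself steers the humans reading it.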

The initial tests run by the QA team went great. Given 5%, it provided a very high price for a resource future that would shoot up only if one country received unusually high rainfall. Given 60%, it predicted the share price of a large fashion label would dip 25%, and sure enough, when the newest line was spurned by a few celebrities, support indeed dropped, much to the pleasure of both the CFO and O2's tech lead (now CTO) at Investech. Given 99%, O2 chugged along for a while, then hit a preset timeout and reported no results.

O2 was then mostly used internally to generate a number of semi-likely events, which guided basic hedging strategies that ended up performing well. On the back of this success, Investech had started looking to grow their client base - and with the same success disclosed in their earnings report, they arranged to demonstrate the power of the Oracle Oracle to a number of the more powerful hedge fund managers.

It took a while to organize, but eventually Investech were able to assemble the meeting to show off O2 in front of these 20 or so managers, who on paper controlled a sizeable portion of global markets (and off paper, probably even more). The explanations went well, the numbers for the past few quarters checked out, and most seemed happy with the techniques for converting the confidence levels into trading strategies. The main concern was over confidence in O2 itself: how much trust could be placed in a system that ran everything but, being an AI that had evolved over a long period of time, was not well understood or easily explained or debugged?

The time had come for a live demonstration. Given 50% confidence, O2 believed the price of corn in Mexico would remain stable - it seemed the market was fairly efficient, at least for corn. At 80%, the Oracle Oracle predicted shares in an energy storage startup would double; many were surprised by this, but the startup's technology was unproven, so it seemed that O2, having analysed a lot of research, was quite sure of its viability.

While most of the managers were discussing or taking notes, one joked: "So, 99.99%?". Knowing of the earlier testing, the CTO laughed as well, keyed it in, and started to explain their corner-case handling while O2 began its calculations. They were interrupted by results appearing on the terminal:

99.99% Prediction: NYSX / Investech / $0.00

* * *

If you made it this far - that was my long way of saying that, for an AI which has some form of empathy, giving information is taking action. Provided it can predict a human reaction to the information it gives (which I'm assuming will be the case - more about prediction in later posts, probably), then even without the AI-box issue, a tool-based AI can still 'take action', just through the medium of humans rather than bits. It's much slower, but can only be avoided by the AI not having empathy (i.e. the ability to predict how humans will react), which is pretty much impossible for a superintelligent AI, given that humans already have this to some degree... A more humorous/shorter version of this is Monty Python's Funniest Joke sketch, or a quote from The Interview.

Anyway, back to reading/thinking too deeply about this sort of stuff. Apologies if it's not the happiest of posts; I don't want to scare anyone with AI talk (Terminator/Ex Machina/Age of Ultron etc. do that enough already). I'm definitely leaning towards the AI-will-be-good-for-humans-in-the-long-run side of things, although we still have a lot of learning and understanding to go through before it's more than mostly guesswork (hence the degree!).

In other news, I think I accidentally got a part-time tutoring job over here (went to ask for details about the course and what they were looking for, and ended up being interviewed) and I'm moving into my permanent place soon, so within a month I'll finally have access to all those other clothes (+furniture etc.) I shipped over, and not just the 7 shirts in my suitcase, yay :)


  1. DH is working on deep learning these days. You should chat with him.

    1. Ooh, interesting, thanks - will do! My guess is he may be interested in my current project too, once finished :) I definitely need to line up a Seattle visit at some point too, now that I'm here.

