An attempt at a hard problem

Note: I'd strongly recommend reading this earlier post first, at least the story within, as it is the prequel to the story in this post.

* * *

After its earlier horrible performance in front of the Investech board, Oracle Oracle ('O2') lost the confidence of its financiers, and the company soon went into liquidation. As it had been a large part of the problem, the O2 network itself ended up being sold quite cheaply to the state's energy grid management company, FullPower ('FP'). FP already had access to present and historical data for the state's infrastructure and weather, so the plan was to use O2 to improve power planning and distribution.

Things started well - seasonal and regional trends were learned, and O2 even predicted the increase in cooling usage after fishery oversupply led to more goods with high refrigeration needs. To assist in interpretation, O2's language module was improved, in an attempt to explain new predictions in more human-readable terms. Simple reports stating 'There is a public holiday tomorrow' or 'More people will be home watching the Super Bowl' began to arrive, and FP were becoming more efficient than they had ever been.

This led to higher budgets, which were inevitably allocated to more data sources for O2 - as long as it kept laying golden eggs, FP were happy to keep feeding the goose. As O2's computational requirements increased, however, those responsible for the reports received their first very surprising message:
FP are adding a new data source to O2, 
so their power usage will be 22% higher than normal.
O2's controlling engineers weren't happy - indeed, they were adding a new data source, but as O2 already consumed a large amount of power, this bump was neither expected nor tolerated. As a result, FP shut down the prediction engine temporarily, optimized all processing, and after a few days, turned it back on. The power jump still happened, although the optimizations had kept total usage to an acceptable level. O2 took some time to process the feeds of news it had missed, reworking its model of how FP was using its power. A new, related prediction came the very next day:
FP have turned O2 back on, but processing the new data requires
a transient 9.1% increase in power consumption.
FP were again unhappy, so again took O2 offline to see whether the increase was necessary or due to error. Time was ticking, however, as the fear was that the longer O2 remained down, the higher the bump would be when it was brought back. Most of the easy improvements had already been squeezed out in the last update, though, so work was difficult. After six hours of exhaustive prodding at code, analyzing logs, and predictions (by humans, of course) of how much power would be needed to turn O2 back on, the order was made to do so. This time, after only a few hours of catching up on the news streams, O2 refined its model. Despite already modelling the FP and O2 entities, it had discovered that a new word fitted much better, based on recent events:
FP have turned me back on again, processing new data will now require a transient 9.3%.
A few milliseconds later, O2 crashed. Few members of FP's data science team had even seen the message, but most who had were already quite shaken - it had replaced 'O2' with 'me'; could it be self-aware? What does that even mean? Surely it doesn't "know" what 'me' is, it's just printing text, right? These questions were delayed, though - the same problem as before still existed: O2 was down, and even small delays in resurrecting it would result in large changes to power load.

The cause of the crash was not obvious: a huge spike in memory and processing utilization had caused protective safety triggers to halt execution, but the original reason for the spike was still unknown. While monitoring hooks were added to improve transparency into the system, one FP engineer decided to add a continual dump of predictions to file. Later called the 'stream of consciousness' output, this was intended to be temporary, and in the meantime the added debugging tool was considered worth the slight slowdown it caused.
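In spirit, the hook was little more than appending every prediction to a log as it was produced - something like the following sketch, where all names are invented for illustration rather than taken from any real system:
# Purely illustrative sketch of the 'stream of consciousness' dump.
# All names here are hypothetical, invented for the example.
import datetime

class PredictionDump:
    def __init__(self, path="o2_stream_of_consciousness.log"):
        self.path = path

    def record(self, prediction_text):
        # Append each prediction with a timestamp as it is produced,
        # so engineers can replay what O2 was doing before a crash.
        with open(self.path, "a") as f:
            stamp = datetime.datetime.now().isoformat()
            f.write(f"{stamp}  {prediction_text}\n")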

O2 was fired up again... and yet again, it immediately crashed. Another huge memory spike, another drain on GPU and CPU resources - it almost appeared stuck in an infinite loop. Which had never happened before. Maybe an initialization error? Turn it on and off again to fix? So O2 was rebooted, but the same thing happened. And once more, just for good luck - a third boot, a third crash. The earlier FP engineer decided to check their logs, and the cause became clear. It turned out O2 had learned a new word again, but this one was slightly more problematic:
FP have turned me on.
I am thinking "FP have turned me on".
I am thinking "I am thinking "FP have turned me on"".
I am thinking "I am thinking "I am thinking "FP have turned me on""".
I am thinking "I am thinking "I am thinking "I am thinking "FP have turned me on"""".
I am thinking "I am thinking "I am thinking "I am thinking "I am thinking "FP have turned me on""""".
I am thinking "I am thinking "I am thinking "I am thinking "I am thinking "I am thinking "FP have turned me on"""""".
...
It was clear that O2 was modelling its own computations; however, this modelling was itself a computation, resulting in recursive loops with no termination.
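As a rough illustration of the failure mode (with invented names, not anything from O2's actual codebase), a self-model with no base case recurses forever:
# Hypothetical sketch of the failure: modelling your own modelling,
# with no base case, recurses until resource limits kill the process.
def model_thought(event):
    thought = f'I am thinking "{event}"'
    print(thought)
    # Producing `thought` is itself a computation O2 can observe,
    # so it immediately becomes the next event to model...
    model_thought(thought)

# model_thought("FP have turned me on")  # would blow the recursion limit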

Still under huge time pressure, a hack was proposed: detect the loop, and break out. A simple fix both conceptually and in terms of engineering, the workaround was ready quickly (a rough sketch of such a guard follows the log below). O2 was brought back online, and the results were mixed: resource usage had jumped threefold and calculations had slowed, but at least the crashes had been eradicated. The log was also promising:
FP have turned me on.
I am thinking "FP have turned me on".
STOP
I am connecting to feed global/weather.
I am thinking "I am connecting to feed global/weather".
STOP
I am connecting to feed global/exchanges.
I am thinking "I am connecting to feed global/exchanges".
STOP
...
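The hack amounts to something like the following sketch (hypothetical names again): allow only a fixed depth of self-modelling and emit a marker at the cutoff, which matches the shape of the log above:
# Hypothetical sketch of the loop-breaking hack: cap the depth of
# self-modelling, then bail out with a STOP marker instead of crashing.
MAX_SELF_MODEL_DEPTH = 1

def model_thought(event, depth=1):
    thought = f'I am thinking "{event}"'
    print(thought)
    if depth >= MAX_SELF_MODEL_DEPTH:
        print("STOP")
        return
    model_thought(thought, depth + 1)

print("FP have turned me on")
model_thought("FP have turned me on")
# FP have turned me on
# I am thinking "FP have turned me on"
# STOP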
This truncated pattern continued in the log for a while, until O2's own pattern matching kicked in and matched the new repeated structure against its concept dictionary. Another word was learned:
I'm conscious that FP have turned me on.
I'm conscious that I'm connecting to feed global/weather.
I'm conscious that I'm connecting to feed global/exchanges.
...
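As a toy illustration of that collapse (again with invented names, not O2's real concept dictionary), the repeated 'thought / self-model / STOP' structure can simply be rewritten into the shorter learned form:
# Toy illustration: collapse "X" / I am thinking "X" / STOP triples
# into the shorter learned form. Names are hypothetical.
def collapse_self_model(log_lines):
    collapsed = []
    i = 0
    while i < len(log_lines):
        line = log_lines[i]
        nxt = log_lines[i + 1] if i + 1 < len(log_lines) else None
        after = log_lines[i + 2] if i + 2 < len(log_lines) else None
        if nxt == f'I am thinking "{line}"' and after == "STOP":
            collapsed.append(f"I'm conscious that {line}")
            i += 3
        else:
            collapsed.append(line)
            i += 1
    return collapsed

log = ["FP have turned me on",
       'I am thinking "FP have turned me on"',
       "STOP"]
print(collapse_self_model(log))
# ["I'm conscious that FP have turned me on"]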
Needless to say, the engineers decided to leave the logging stream in place.

* * *

If you like this sort of stuff, definitely check out the Hard problem of consciousness for an intro, and maybe something like Dennett's Consciousness Explained, or something easier like this sciencemag article, which goes over some of the theory behind this. As always, questions and feedback are welcome in the comments! Now that I'm looking into neurons, I'm a lot more interested in how to study the biological feasibility of this sort of thing.
