Project cutting floor


One class this semester is mostly aimed at coming up with ideas for what to study as a graduate student (not surprising for a 4th-year subject in a major that aims at getting people into grad school). I'm lucky enough to already have a project lined up for summer, so it was interesting to instead use the time to come up with alternatives, and then try to convince myself they were worth discarding. Or at least... putting off until later. Hence this list, using the blog as my external memory, so I can look back later and see if I was stupid, or if any of them turn out to be interesting. Split roughly by subject, from the more abstract/philosophical to the concrete/engineering:

Philosophy: 
1) For everything (possibly?) true, can it be thought?
2) For everything thought, can it be communicated?
Consider these the philosophical versions of Gödel's incompleteness theorems (maths), the uncertainty principle (physics), or the halting problem (comp sci). The tricky thing here is formalism: the temptation is to formalize communication or thought and then throw Gödel-y tricks at it, but this is probably an invalid simplification. I feel like the gut answer is 'no' to both, and maybe that's not useful, but at least in the three examples above some interesting tools came out of the proofs. Plus, it'd be interesting to know if anything important lies in the set of unthinkable truths, or incommunicable thoughts (qualia, anyone?).


3) How much can a brain know about how brains work?
A related question, and one directly relevant to neuroscience research. It seems like there might be a natural limit: how can a system store all the information about itself ('neuron M is firing') and still have room left over for the information required to interpret it? E.g. how/where is 'my neuron M is firing, which means...' stored? There's some interesting overlap with quines here too (see the sketch below).
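The quine overlap is fun to make concrete. A quine is a program that prints its own source, so the 'data about the system' and the 'machinery that interprets that data' are forced to share structure, which is roughly the tension in the question above. The classic Python version:

```python
# A quine: running this prints exactly this source code.
# The string s is both the data and (via % substitution)
# the template that reconstructs the code around it.
s = 's = %r\nprint(s %% s)'
print(s % s)
```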

4) Can an environment emulate itself at a faster speed than it is running? 
Say you're running a deterministic game of Minecraft on an old computer. A newer, faster computer could run the same game at a faster speed, and in some sense 'know' what the game on the slower system will do in the future. But now imagine that inside the game world is a redstone computer that itself runs Minecraft (yes, people have built approximations of this!). The question is this: could the computer inside the game run it faster than the external one? Note that the one inside the game can be built differently (e.g. computing 2 steps at once), so long as it still ends up with the same result.
My gut feeling again is 'no' (you have trouble emulating the part of the world which includes the emulator itself), but there might be some interesting implications for future prediction. E.g. how lossy does your emulation need to be before it can be faster? And how quickly do those losses snowball?
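To make the 'compute 2 steps at once' idea concrete, here's a toy sketch (nothing like Minecraft: the world here is just an affine map, and the constants are made up). The punchline is that the fast-forward shortcut only exists because this particular rule has exploitable algebraic structure; a world rich enough to contain its own emulator presumably doesn't collapse this nicely:

```python
# Toy 'world': one tick is the affine map x -> a*x + c (mod m).

def step(x, a=1103515245, c=12345, m=2**31):
    """One tick of the world."""
    return (a * x + c) % m

def emulate(x, n):
    """The honest emulator: n ticks cost n calls to step()."""
    for _ in range(n):
        x = step(x)
    return x

def fast_forward(x, n, a=1103515245, c=12345, m=2**31):
    """Compose the tick map with itself by repeated squaring,
    so n ticks cost only O(log n) work instead of O(n)."""
    ra, rc = 1, 0            # accumulated map, starts as the identity x -> x
    ba, bc = a % m, c % m    # base map: one tick
    while n:
        if n & 1:
            ra, rc = (ra * ba) % m, (ra * bc + rc) % m  # fold one copy of the base map in
        ba, bc = (ba * ba) % m, (ba * bc + bc) % m      # square the base map
        n >>= 1
    return (ra * x + rc) % m

assert emulate(42, 1000) == fast_forward(42, 1000)
```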

Psychology:
5) How do humans pick something they think other humans won't have picked?
You want to go fishing and not be surrounded by people, so you want to go where other people aren't. But you know that everyone else going fishing will also want to go where other people aren't. So where do you end up going?
As far as I know, this is an unexplored aspect of humans and novelty, but it also looks at things like how humans model the minds of other humans (and impacts stuff like how people create businesses). For full details, see this earlier post I wrote about the issue.
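One way to poke at this is 'level-k' style reasoning from behavioral game theory: level-0 players just pick the obviously best option, level-1 players avoid what level-0 would pick, and so on. A toy sketch (the spots and their attractiveness values are invented):

```python
import random
from collections import Counter

spots = {"pier": 10, "river_bend": 6, "hidden_creek": 3}  # base attractiveness

def pick(level):
    """Level-0 picks the best spot; level-k picks the best spot
    after ruling out what a level-(k-1) thinker would pick."""
    if level == 0:
        return max(spots, key=spots.get)
    avoided = pick(level - 1)
    remaining = {s: v for s, v in spots.items() if s != avoided}
    return max(remaining, key=remaining.get)

# A population with mixed depths of reasoning:
population = [pick(random.choice([0, 1, 2])) for _ in range(1000)]
print(Counter(population))
```

The cute failure mode: level-2 thinkers rule out the level-1 spot and end up back at the pier, elbow to elbow with the level-0 crowd.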

6) When is playing the Nash equilibrium strategy suboptimal, if your opponents themselves play suboptimally?
I was reading papers on artificial intelligence in poker after an AI recently beat pros in heads-up no-limit hold'em. By adaptively learning the Nash equilibrium strategy, it becomes in one sense 'unbeatable'. The question then is: which is better at beating normal people? Could it be that pros optimize their strategy not against optimal opponents, but against 'real' ones? It turns out that in Rock-Paper-Scissors contests with lots of participants, the AIs that don't pick randomly tend to win. This is because picking uniformly at random is the Nash equilibrium (you can't be beaten), but it also performs badly against opponents that aren't using it (which is most of them, especially humans). It's this part of AI/psychology which I think will be most interesting: how to model your opponent's strategy, and adapt to optimize against it.
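The Rock-Paper-Scissors version is small enough to write down. The numbers below are illustrative, but they show the core trade-off: the Nash strategy guarantees you can't lose on average, while an opponent-modeling strategy actually profits from the bias (at the cost of being exploitable itself, which is exactly where the adaptation problem gets interesting):

```python
import numpy as np

# Row player's payoff in Rock-Paper-Scissors (+1 win, 0 tie, -1 loss);
# rows/columns are ordered (rock, paper, scissors).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

def ev(mine, theirs):
    """Expected payoff of mixed strategy `mine` against `theirs`."""
    return mine @ PAYOFF @ theirs

nash    = np.array([1/3, 1/3, 1/3])    # the equilibrium: uniform random
biased  = np.array([0.5, 0.25, 0.25])  # a human-ish opponent who over-plays rock
exploit = np.array([0.0, 1.0, 0.0])    # best response to that bias: always paper

print(ev(nash, biased))     # 0.0  -- unexploitable, but gains nothing
print(ev(exploit, biased))  # 0.25 -- wins on average by modeling the opponent
```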

Comp sci:
7) Can image classifiers work better in the frequency domain?
Current state-of-the-art image processing uses convolutional networks, echoing how the human visual cortex is believed to work. However, it's also known that frequency information is important for some tasks. And convolutions in pixel space are just pointwise products on the other side of the Fourier transform, so in theory faster?
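The equivalence is a few lines to check with numpy (1-D and circular convolution for brevity; real CNN layers use small zero-padded kernels, and some libraries do already switch to FFT-based convolution when kernel sizes make it worthwhile):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
signal = rng.standard_normal(N)
kernel = rng.standard_normal(N)  # in practice a small kernel zero-padded to N

# Pointwise product in the frequency domain...
via_fft = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

# ...equals circular convolution in 'pixel' space, computed by brute force:
brute = np.array([sum(signal[j] * kernel[(i - j) % N] for j in range(N))
                  for i in range(N)])

print(np.allclose(via_fft, brute))  # True, and O(N log N) instead of O(N^2)
```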

8) Can you build a blood sugar estimator from easy-to-measure biological signals?
I'm a type-1 diabetic, so I have some familiarity with the problems of low blood sugar levels, especially while asleep. This made the news a little while ago in Australia: should governments subsidise sugar level monitors to 'stop diabetics dying in their sleep'? As it turns out, there are a lot of biological signals that accompany hypoglycemia, including skin temperature, heart rate, sweat, and EEG (of course :p). There are some simplistic devices, and research into multivariate models; combining these signals into something that can alarm on low sugar levels, without the $5k cost of glucose monitors, would be good...
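As a sketch of what the multivariate version might look like (everything here is synthetic; the features match the signals above, but the effect sizes are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2000
hypo = rng.random(n) < 0.1  # pretend 10% of time windows are hypoglycemic

# Synthetic features per window: skin temp (C), heart rate (bpm), sweat (uS).
temp = 33.0 - 0.8 * hypo + rng.normal(0, 0.5, n)
hr   = 70.0 + 12.0 * hypo + rng.normal(0, 8.0, n)
gsr  =  2.0 + 1.5 * hypo + rng.normal(0, 1.0, n)
X = np.column_stack([temp, hr, gsr])

X_tr, X_te, y_tr, y_te = train_test_split(X, hypo, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

The hard part in reality is presumably less the model and more per-person calibration, plus keeping the false-alarm rate low enough that people don't just switch the thing off.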

9) BCI "yes"/"no" EEG device based on audio attention.
Another one that intersects with neuroscience: there has been growing interest in brain-controlled 'yes'/'no' systems, and yet another made the news recently. They're useful for patients who can't easily communicate any other way (e.g. ALS sufferers), but also for anyone in situations where communicating would be infeasible/annoying. No device yet really has good enough accuracy to use though :( They use a few different techniques (the P300 ERP is popular, but so is structural pattern classification). It seems like some success was found by getting participants to concentrate on (asymmetric) audio, but using ERP rather than structural attention signals. My gut feeling is the latter could do even better...
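For flavor, here is a minimal decoding pipeline of the general kind used in these studies. Everything below is synthetic: the shapes, the fake lateralized effect, and the plain log-power feature (a real pipeline would bandpass-filter, epoch around stimuli, and use attention-specific features):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_samples = 200, 8, 512
labels = rng.integers(0, 2, n_trials)  # 0 = attend left stream, 1 = attend right
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
epochs[labels == 1, 0, :] *= 1.3       # fake a lateralized power change on channel 0

X = np.log(np.mean(epochs ** 2, axis=-1))  # crude log band-power per channel
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
print("accuracy:", scores.mean())
```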


So there's my first post of 2017, a brain dump of possible future projects in case I ever need something to do. Hopefully they sound interesting, and as always, if anyone has questions (or even better, knows related research) let me know! 
Semi-related: is there a good way of deciding what to do? It's hard to reject all of these without some regret about not being able to do them, and without thinking maybe they're more interesting/useful than what I will be doing.

Comments

  1. Pick one, using whatever method you prefer/wish to try, then keep a file open for all the others and follow the journals/researchers involved with them. That way, you have picked one, and IF you regret working on another, you can at least be aware of what others are doing with it. That is making others do your work, somewhat...

    It is an attempt at optimizing your satisfaction about the subjects' status in the research field. You cannot do everything (says the guy with a bedroom FULL of parts for future projects...), but you can develop a broader awareness. This will let you jump ship more efficiently as well, if you ever choose to do so.

    Just my two cents.

    P.S. My favorite is Psychology 5) ... Because I am a fisherman, in the Lower Mainland 8{P>

    1. Thanks! Good advice, I definitely find it problematic to try too many things at once.
      Out of all of these, I think #5 is the one I'm most likely to try to make progress on, too.

