Wednesday, March 24, 2021

Why not look at Augmented Human Intelligence, ahead of Artificial General Intelligence?

As part of the Turing's AI UK conference, I was thinking about where we should be in 5, 10, and 30 years.

I'd like to see if we can reverse Frank Zappa's observation that scientists are wrong to believe Hydrogen is the most abundant substance in the Universe - it is, he noted, far exceeded by Human Stupidity.


Given people's blatant lack of discernment in social media and voting, and generally outrageously dumb collective behaviour - e.g. in the face of existential threats like climate change and nuclear weapons - this seems like an urgent matter, and building AI to mimic humans seems, at this point, like a seriously losing proposition.


So how could we use AI to augment human intelligence? The trick is not to democratise the writing of black-box AI (giving people visual programming languages for convolutional neural networks is an even worse idea than increasing the world's population of buggy C and Python coders).

The idea is to make existing work on AI legible. Not just explainable, but teachable. So when making a decision, an augmented human might use an AI method and, at the end, not just know why it recommended what it did, but also how to internalise the knowledge and skill to use that method herself.

This is akin to the idea of the mentat characters in the novel Dune. There, humans carry out computational tasks, and computers have long since been banned after the fictional Butlerian Jihad, on the basis that they are unethical. In my view, that is a somewhat limited conclusion - we should retain the AIs, but they become mentors.

To this end, we need to concentrate on AI tools and techniques that are intelligible, not just explainable. So while simple ML tools like regression and random forests are ok, you also need tools like generalised PCA, probabilistic programming systems, and Bayesian inference that clarifies confounders - and, if we must go on using neural nets, at least SHAP, path-specific counterfactual reasoning and energy landscapes, to illustrate the relationship between inputs and outputs. GANs fit here fine too. Ultimately, all these systems should really be a pair: a model that is self-explanatory (e.g. physics, engineering, biological cause/effect) coupled with the statistical system that embeds the empirical validation of that model - possibly a hybrid of symbolic execution and data-driven systems. Of course, people in guru/hacker mode writing the next-gen AI need to document their processes, including their values, as this is all part of making the results teachable/legible/learnable too.
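To make the point about inference that clarifies confounders concrete, here is a minimal sketch in plain Python. The numbers are invented (loosely modelled on the classic kidney-stone illustration of Simpson's paradox) and the stratum names are hypothetical; the point is that the adjusted comparison shows its working, stratum by stratum, in a way a human can follow and reuse:

```python
# Hypothetical counts per stratum of a confounder (e.g. case severity):
# (treated_recovered, treated_total, control_recovered, control_total)
# Figures are invented to exhibit Simpson's paradox, not real data.
strata = {
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

def pooled(rec_idx, tot_idx):
    """Naive recovery rate, ignoring the confounder entirely."""
    rec = sum(s[rec_idx] for s in strata.values())
    tot = sum(s[tot_idx] for s in strata.values())
    return rec / tot

naive_treated = pooled(0, 1)   # pooled: treatment looks worse...
naive_control = pooled(2, 3)

# Backdoor adjustment: weight each stratum's rate by how common
# that stratum is overall, i.e. sum_z P(recover | arm, z) * P(z).
total = sum(s[1] + s[3] for s in strata.values())
adjusted_treated = sum((s[0] / s[1]) * ((s[1] + s[3]) / total)
                       for s in strata.values())
adjusted_control = sum((s[2] / s[3]) * ((s[1] + s[3]) / total)
                       for s in strata.values())
# ...yet within every stratum the treatment is better, and the
# adjusted comparison makes that reversal, and its cause, legible.
```

The naive comparison favours the control arm while the adjusted one favours treatment, and a reader can verify each weighted term by hand - which is exactly the difference between an explainable output and a teachable method.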

In the end, these systems will likely be vastly more efficient (green cred), but they will also contribute intellectually to human knowledge, by exporting the generalisable models they uncover and make more precise - and allow humans, individually and collectively, to stop behaving like a bunch of eejits.


Then we can let the AIs all wither away, as we won't need them any more.

Tuesday, March 09, 2021

The Genies that probably won't go back in the Bottle

One discovery about people in organisations using video conferencing was made in the early days of the Defense Simulation Internet - this was about 30 years back (DSINet started around 1991) and made extensive use of the Mbone technology to provide many-to-many real-time video, audio and shared applications. One of the UIs for this had a prototype of the "Hollywood Squares" layout that many Zoom users will nowadays be familiar with.

Most of the real users of this system were wargaming (the shared apps included highly detailed battlefield maps with animations of army vehicles etc.). At some point, the generals got really upset because they noticed the rank-and-file were talking directly to each other, rather than up and down the chains of command. Students of history will know that such a peer-to-peer organisation was also how the anarchist brigades operated in the Spanish Civil War - it is highly effective because it is highly resilient (there is no leader to decapitate, and it has lower latency in getting information to the people who need it to make decisions and take action).

This all applies to any overly hierarchical organisation, be it a university, a company or, indeed, an entire nation state. We cut out those annoying, pointless "leaders" who make the wrong decisions because they are a bottleneck, swamped with either too much advice, too many filters, or too many lobbyists distorting the information. The Internet may finally actually democratise society, but not as previously envisaged.

By the same lockdown token, people have more time to consider content delivered by digital communication. Consideration may lead to more nuanced decision making (e.g. not responding to clickbait, not believing fake news, or even taking care to remember who was responsible for these things and mentally marking their future utterances as suspect, or at least "to be fact checked carefully when I have time after this").

Evidence for increasing discernment among the broad public can also be seen in the search for relatively subtle explanations of what is happening (rules for lockdown, vaccine safety etc.) - where people would once dismiss experts, they now choose an expert who explains the exponential increase in cases when the reproduction number R is above one, or the nature of false positives and false negatives in different tests. This is because, after a year of hearing experts and politicians, it is increasingly obvious whose explanations and predictions are based in some sort of discipline, and whose are just self-serving attempts to maintain a wobbly power base.
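Both of those bits of numeracy are small enough to work through directly. A minimal sketch, with every figure invented for illustration (these are not real epidemic or test parameters):

```python
# Exponential growth: while the reproduction number R stays above 1,
# cases multiply by roughly R every generation interval.
R = 1.3                         # assumed, for illustration only
generations = 10
growth_factor = R ** generations  # about 13.8x over ten generations

# False positives: Bayes' theorem for the chance that a positive
# test result is a true positive (the positive predictive value).
sensitivity = 0.9    # P(test positive | infected)     -- assumed
specificity = 0.99   # P(test negative | not infected) -- assumed
prevalence = 0.005   # fraction of population infected -- assumed

p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))
ppv = (sensitivity * prevalence) / p_positive
# At this low prevalence, under a third of positive results are
# true positives, despite the test being "99% specific".
```

The second calculation is the one behind the experts' point about false positives: accuracy figures alone mean little until they are combined with prevalence.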

You can fool some of the people some of the time, but 12 months in, everyone starts to realise who the real fools are. Or indeed, crooks.
