Wednesday, March 24, 2021

Why not look at Augmented Human Intelligence, ahead of Artificial General Intelligence?

As part of the Turing's AI UK conference, I was thinking about where we should be in 5, 10, and 30 years.

I'd like to see if we can reverse Frank Zappa's observation that scientists are wrong to believe hydrogen is the most abundant substance in the universe, since it is far exceeded by human stupidity.


Given people's blatant lack of discernment in social media and voting, and our generally outrageously dumb collective behaviour in the face of existential threats like climate change and nuclear weapons, this seems like an urgent matter, and building AI to mimic humans seems, at this point, like a seriously losing proposition.


So how could we use AI to augment human intelligence? The trick is not to democratise the writing of black-box AI (giving people visual programming languages for convolutional neural networks is an even worse idea than increasing the world's population of buggy C and Python coders).

The idea is to make existing work on AI legible: not just explainable, but teachable. So when making a decision, an augmented human might use an AI method and, at the end, not only know why it recommended what it did, but also how to internalise the knowledge and skill to use that method herself.
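As a toy sketch of what "teachable" means here (my example, with made-up variable names and synthetic data, not anyone's real method): an interpretable model can hand its user a rule of thumb they can carry away and apply without the machine.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic, purely illustrative data: exam score driven by study and sleep.
rng = np.random.default_rng(0)
hours_studied = rng.uniform(0, 10, 200)
hours_slept = rng.uniform(4, 9, 200)
score = 5.0 * hours_studied + 2.0 * hours_slept + rng.normal(0, 3, 200)

X = np.column_stack([hours_studied, hours_slept])
model = LinearRegression().fit(X, score)

# The fitted coefficients are themselves the lesson: a human can read off
# "roughly 5 points per extra hour studied, 2 per extra hour of sleep"
# and then apply that heuristic with no machine in the loop.
for name, coef in zip(["hour studied", "hour slept"], model.coef_):
    print(f"~{coef:.1f} points per extra {name}")
```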

This is akin to the mentat characters in the novel Dune: humans carry out computational tasks, and computers have long since been banned after the fictional Butlerian Jihad, on the basis that they are unethical. In my view that is somewhat limited: we need to retain the AIs, but as mentors.

To this end, we need to concentrate on AI tools and techniques that are intelligible, not just explainable. So while simple ML tools like regression and random forests are fine, you also need tools like generalised PCA, probabilistic programming systems, and Bayesian inference that clarifies confounders; and, if we must go on using neural nets, at least SHAP, path-specific counterfactual reasoning and energy landscapes, to illustrate the reasons for the relationships between inputs and outputs. GANs fit here fine too.

Ultimately, all these systems should really be a pair: a model that is self-explanatory (e.g. physics, engineering, biological cause/effect) coupled with the statistical system that embeds the empirical validation of that model, and possibly a hybrid of symbolic execution and data-driven systems. Of course, people in guru/hacker mode writing the next generation of AI need to document their processes, including their values, as this is all part of making the results teachable/legible/learnable too.
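For concreteness, a minimal sketch of the SHAP idea, using the open-source shap library (the random-forest model and synthetic data here are placeholders, not a recommendation): each prediction is decomposed into exact per-feature contributions, so the relationship between inputs and outputs is shown rather than merely asserted.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: the output depends on features 0 and 1; feature 2 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(0.0, 0.1, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions exactly for tree
# ensembles: for each prediction, how much each input pushed the output
# up or down relative to the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# The attributions plus the expected value reconstruct each prediction,
# so the explanation is an exact decomposition, not a post-hoc story.
print(explainer.expected_value + shap_values.sum(axis=1))
print(model.predict(X[:5]))
```

That last check (attribute, then verify the attribution actually reconstructs the prediction) is the kind of discipline that makes a method teachable rather than merely persuasive.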

In the end, these systems will also likely be vastly more efficient (green cred); intellectually, they will contribute to human knowledge by exporting the generalisable models they uncover and make more precise, and allow humans, individually and collectively, to stop behaving like a bunch of eejits.


Then we can let the AIs all wither away, as we won't need them any more.
