Monday, September 25, 2023

boxing clever with AI

There was this AI creative challenge where you had to figure out things to do with 4 objects, as follows:

A box, a candle, a pencil and a rope


Here are my 3 proposals:

1. Draw a still life of the candle and the rope on the box, so that it looks 3D (i.e. draw on all 6 sides of the cube, with the pencil).

2. Make a clock by setting fire to the candle, the rope and the pencil - they will burn at different rates, so you could mark out the seconds, minutes and hours in box lengths, then sit on the box, passing the time (see the sketch after this list).

3. Have a boxing match between the pencil and the candle, in a ring made by the rope.
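For the curious, here's a minimal sketch of the arithmetic behind proposal 2 - every number in it (the burn rates, the box size) is an assumption invented purely for illustration, not a measurement:

```python
# Toy calibration for the candle/rope/pencil clock (proposal 2).
# All burn rates and the box size below are invented assumptions.
CANDLE_CM_PER_HOUR = 2.5     # assumed: candles burn slowly and steadily
ROPE_CM_PER_MINUTE = 1.0     # assumed: rope smoulders somewhat faster
PENCIL_CM_PER_SECOND = 0.05  # assumed: dry wood burns fastest of the three
BOX_SIDE_CM = 30.0           # assumed box side: our unit of measurement

# Spacing of the tick marks, measured in box lengths:
print(f"hour marks on the candle:   every {CANDLE_CM_PER_HOUR / BOX_SIDE_CM:.3f} box lengths")
print(f"minute marks on the rope:   every {ROPE_CM_PER_MINUTE / BOX_SIDE_CM:.3f} box lengths")
print(f"second marks on the pencil: every {PENCIL_CM_PER_SECOND / BOX_SIDE_CM:.4f} box lengths")
```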

Thursday, September 21, 2023

dangerous AI piffle...

So what's a dangerous model?


The famous equation E = mc^2 is dangerous - it tells you about nuclear power, but it tells you about A-bombs too.
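To see the scale the equation implies, here's a trivial back-of-the-envelope sketch - the only assumption is converting one gram of mass entirely into energy:

```python
# Back-of-the-envelope E = m * c^2: how much energy is locked up in 1 gram?
c = 299_792_458   # speed of light, m/s (exact)
m = 0.001         # 1 gram, in kg
E = m * c ** 2    # energy in joules

print(f"E = {E:.3e} J")  # ~9.0e13 J
# For scale (1 kiloton of TNT ~ 4.184e12 J): roughly 21 kt TNT equivalent
print(f"~{E / 4.184e12:.1f} kt TNT equivalent")
```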

This famous molecular structure is dangerous too - it tells you about DNA damage, but it tells you about eugenics too.

[picture: the DNA double helix; credit: Zephyris, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6285050]


So we had Pugwash and Asilomar, conventions that built consensus not to work on A-bombs and not to work on recombinant DNA. Another example: the regulator has just approved exploiting the Rosebank UK oilfield, despite the fact that solar and wind power are now cheaper than fossil fuels, and that COP26 made some pretty clear recommendations about not heating the planet (or losing biodiversity) any more.

What would a similar convention look like for AI? Are we talking about not using Generative AI (LLMs, Stable Diffusion etc) to create misinformation? Really? Seriously? That's too late - we didn't need that tech to flood the internet and social media with effectively infinite amounts of nonsense.

So what would be actually bad? Well, a non-explainable AI that was used to model climate interventions and led to false confidence about (say) some geo-engineering project that made things worse than doing nothing. That would be bad. Systems that could be inverted to reveal all our personal data. That would be bad. Systems that were insecure and could be hacked to break all the critical infrastructure (power, water, transportation, etc). That would be bad. So the list of things to fix isn't new - it is the same old things, just applied to AI as they should have been applied to all our tech (clean energy, conserving biodiversity, building safe, resilient critical infrastructures, verifiable software just like aircraft designs, etc etc)...

n.b. consider the trivial Excel error that fed the UK decision to impose austerity - a conclusion that turned out to be exactly wrong:

Recall the Reinhart-Rogoff error:
https://theconversation.com/the-reinhart-rogoff-error-or-how-not-to-excel-at-economics-13646
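To make that concrete, here's a toy reconstruction of that class of mistake - the growth figures below are invented, but the shape of the error (an Excel cell range that silently dropped the first few countries from an average) is the real one:

```python
# Toy version of the Reinhart-Rogoff spreadsheet slip. The real error
# averaged a cell range that omitted five countries; these numbers are
# made up, only the shape of the mistake is faithful.
growth = {
    "Australia": 3.8, "Austria": 3.0, "Belgium": 2.6, "Canada": 3.0,
    "Denmark": 2.4, "France": 2.3, "Germany": 2.0, "Greece": 2.9,
    "Italy": 1.9, "Japan": 0.7, "UK": 2.2, "US": 2.7,
}
values = list(growth.values())

correct = sum(values) / len(values)          # average over ALL rows
slipped = sum(values[5:]) / len(values[5:])  # range dragged 5 rows short

print(f"correct mean growth:  {correct:.2f}%")
print(f"with the range slip:  {slipped:.2f}%")
# Same data, one bad cell range - and a different policy conclusion.
```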

So dangerous AI is a red herring. Indeed, the danger is that we get distracted from the real problems and solutions at hand.


Late addition: "There's no art / To find the mind's construction in the face."

said Duncan - ironically, not about Macbeth...

So without embodiment, AI interacts with us through very narrow channels - when connected to decision support systems, it is either via text, images or actuators, but there is (typically) no representation of the AI itself (its internal workings, for example), so we construct a theory of mind about it without any of the usual evidence that we rely on (construction in the face...) to infer intent (humour, irony, truth, lies etc).

We then often err on the side of ascribing seriousness (truth, importance) to the AI, without any supporting facts. This is where the Turing test - an idea devised by a person who was, by many accounts, somewhat on the spectrum - fails to give an account of how we actually interact in society.

This means that we fall foul of outputs that are biased, or deliberate misinformation, or dangerous movements, far more easily than we might with a human agent, whose trust would have to be earned, and whose mental state we would model over some number of interactions, involving a whole body (pun intended) of metadata.

Of course, we could fix AIs so they did this too - embody them, and have them explain their "reasoning", "motives" and "intents"... That would be fun.

