Monday, June 27, 2022

Algorithmic Governance

 Back in 1868, James Clerk Maxwell wrote about governors - he was talking about systems that regulate things like the heat under the water tank in a steam engine, to make sure it delivered a desired outcome (e.g. that engine speed stayed constant, whether the train was going uphill, downhill or on the level) - this was an early contribution to what became control theory. Note the words: governors, regulators and controllers.
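The feedback idea behind such a governor can be sketched in a few lines of code. This is not Maxwell's analysis, just an illustrative toy: a proportional controller that opens the steam valve in proportion to the speed error, against crude made-up engine dynamics (all the numbers here are assumptions for illustration). Note that a purely proportional governor settles with a small steady-state offset from the setpoint - one of the behaviours Maxwell's paper actually analysed.

```python
# Toy simulation of a proportional governor holding engine speed
# near a setpoint while the load changes (the "hill" starts halfway).
# All constants are illustrative assumptions, not real engine physics.

def simulate(setpoint=100.0, gain=0.5, steps=50):
    speed = 0.0
    history = []
    for t in range(steps):
        load = 10.0 if t < steps // 2 else 20.0  # hill starts halfway
        error = setpoint - speed                 # governor senses deviation
        steam = gain * error                     # valve opens with the error
        speed += steam - 0.1 * load              # crude engine dynamics
        history.append(speed)
    return history

speeds = simulate()
```

With these numbers the speed settles near 98 on the flat and near 96 on the hill: regulated, but never exactly at 100, because proportional control alone leaves a residual error.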


So why is this not the subject of governance discussions? Of course it was - once such systems were deployed on the railroads (and in many other places later) they became subject to all sorts of rules, embedded in a complex context:


steam engines should not blow up

trains should not derail

signalling systems should not fail (including human failings) and let trains collide....

so there is a whole slew of legal, regulatory and ethical considerations that pertain.


No AI or "Algorithm" in sight.

As with the many recent failings in fairness (and even safety) arising from naive use of mere spreadsheets, perhaps we should pin down what specific problem the idea of an algorithm adds to the mix that is actually new.

Tuesday, June 21, 2022

The Ministry of Intelligence Test

 Julia was getting very worried - she had failed the test 6 times, and this was the last attempt she'd be allowed for another year. The MiT was essential for being allowed to take part in society as a full human; otherwise one's options were severely limited to social media compliance checking and self-driving car monitoring. The Ministry had emerged after the 3rd world war from the various MI#s, combined with a realisation that the Turing Test, which had been allowed during hostilities to permit AIs to make lethal decisions, was incredibly unreliable. As with witnesses in court, and narrators in novels, humans are very badly calibrated to determine whether another being, machine or meat, is actually "one of them". It was in the genes, in fact, to be biased against anyone or anything sufficiently different. So the MiT was handed over to machines, who were far more able to tell reliably who was sentient, and what was a souped-up combine harvester on the make.


Julia had invited her best friends Pascal and Ada over for some moral support and help. Ada had passed the test recently, so was her best hope. "Don't try to think too many things at once," she advised. "You mean like a late Russell T Davies Dr Who narrative arc?", Julia asked. "Yes, exactly," replied Ada, "it's just a giveaway that you've had a committee help you prepare - we just aren't that good at multitasking." "Exactly!" and "No way!" cried Pascal and Julia at the same time.

Thursday, June 16, 2022

privacy delusions during covid

 there was a massively misguided movement to provide privacy for exposure notification during the 1st year of the covid 19 pandemic. in reality, the notification delivery network (e.g. in Germany run by Telekom) knew the IMEI/phone that the message was delivered to, so it was a total delusion, irrespective of the GAEN API guff - and the privacy of who might have been exposed was hidden from public health (epidemiologists) who actually needed to know (and said so many times) to be able to figure out infectiousness, susceptibility and superspreader incident rates by age/location type etc, but were reduced to poor inferences based on very small samples.

so the people who could work things out were the least trustworthy (telco/tech) whereas the people who actually were trustworthy (health practitioners) could not. 


meanwhile, vaccine certificates were not only issued en masse, but could be looked at by arbitrary authorities whose trustworthiness the subject had no idea about. this despite vaccination status telling you very little about either infectiousness or susceptibility, whereas exposure to a person who has actually tested positive recently tells you a lot.


totally the wrong way around. what this tells you is far more about the relative power of various groups (privacy wonks, apple, google, telcos, border control, public health researchers) than about actual priorities for health and personal safety.


makes me very cross.


Tuesday, June 07, 2022

cybernetic subjection, or how to lose the knowledge forever.

 I'm reading Privacy in the Age of Neuroscience by David Grant, which is a very heavy tome indeed, but super interesting - it did make me want to see more examples of contemporary problems. One that strikes me is that most of the route finding on cloud-based map apps isn't (just) an algorithm (Dijkstra, for example) but is derived from surveillance of what routes drivers actually take. The problem with this is that once all drivers have given up doing their own search for better routes, there's nothing left for the cloud-based map system to learn from. So once you've "stolen" all the human knowledge (as in london cabbies) and ingenuity (as in anyone), there's nowhere left to go - and all that evolution that went into allowing early human hunter-gatherers to find stuff (and find their way home) is lost. Brain plasticity means you simply won't have it any more in any form (biological or silicon). But the algorithm will not know this, as it is just a list of stuff, not an actual strategy. Not even a pheromone hill climber the way many insects work.
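For contrast, here is what the "pure algorithm" part looks like: a minimal textbook Dijkstra shortest-path sketch over a tiny hypothetical road graph. The point is that the algorithm itself is trivially public knowledge; what the cloud map services actually add is the edge costs, which they infer from surveilled driver traces rather than from static road lengths (the graph and costs below are made-up for illustration).

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra: shortest-path cost from source to every node.

    graph: node -> list of (neighbour, edge_cost) pairs. The edge costs
    here are static stand-ins; in a cloud map app they would instead be
    continually re-estimated from observed driver behaviour.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already found a better route
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# A made-up toy road network.
roads = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 1.0), ("D", 5.0)],
    "C": [("D", 1.0)],
}
costs = dijkstra(roads, "A")  # e.g. A->D goes via B and C, cost 3.0
```

The algorithm is the easy part; the surveillance-derived cost data is the moat, and, as argued above, it stops improving once nobody explores on their own.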

It is essentially cognitive vandalism on a grand scale.
