Monday, June 26, 2023

SF stories where individual choices impact the direction of a whole society....

This isn't naive stuff (anti-Marxist history), but more about how choosing specific technical lines of development might be the ultimate influence, viral meme, etc. -- so examples include:

Simak's City

The Webster family bring about humanity's replacement by a society of smart dogs and robots.


Herbert's Dune

The Kwisatz Haderach controls all the Spice in the universe.


Asimov's Foundation (at least 1.5 books' worth)

Hari Seldon's ghost manages 1000 years of the Empire's replacement.


Watson's The Jonah Kit

People choose to believe in a purposeless Universe, so the Whales leave for a better one.


Vonnegut's Cat's Cradle

Felix Hoenikker creates Ice-nine. Lionel Boyd Johnson creates Bokononism.


Hubbard's Scientology

L. Ron makes up some truly daft stuff that makes Pastafarianism look pretty sane.

add yours here...

Monday, June 19, 2023

attention is exactly what you won't get....

According to Tim Wu's great book, attention is apparently all you need in the brave new world to power the new economy - and, according to this foundational AI paper that underpins transformers and hence LLMs, it is all you need there too.
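
(For the terminally curious, here's a minimal numpy sketch of the scaled dot-product attention that paper is named after - toy dimensions, nothing production-grade.)

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy single-head attention: weight each value by how well
    its key matches the query, scaled to keep the softmax well-behaved."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query/key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                                         # attention-weighted mix of values

# three tokens, four-dimensional embeddings, random toy data
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))
```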

People are worried that LLMs can be used for bad as much as for good - we might call this unhappy eyeballs.

I contend that this is being very much overstated as a problem. Why? Receiver bandwidth (reading time, thinking time, reaction time etc etc).

This documentary on fake news showed (as did Tim Wu) that this is an old, old problem: it did not require AI of any kind to create massively engaging false stories in print, on the radio, on TV and so on. Over 120 years of mis/dis-information has thrived without computers, without the internet and without AI.

So why does the threat of generative AI not impress me much?

Well, people are already saturated with stuff - whether it is adverts misrepresenting or mis-selling goods, services and products, or political campaigns repeating lies, damn lies and statistics. Of course there was a shift when the internet, online shopping and social media allowed profiling of individuals (from their personal behaviour, or inferred from their friends, family, acquaintances, location, pets and so on), which (possibly) allows for targeted adverts (as per the infamous C. Analytica). However, there's actually precious little evidence that this made a big difference.

So will Chap G&T (for want of a better product name) succeed where C. Analytica, Amazon and Netflix have so far failed to move the dial very much?

I doubt it. I doubt it very much, because users also have tools at their disposal (discerning brains, filters, the off button and so on). The fraction of people who are easily swayed is fixed - they are the conspiracy theorists. That fraction is not set by the media; it seems culturally determined at some much deeper level, and is usually a relatively small part of society. What is more, spreading the message (the earth is flat, AIs are coming to kill you, warning, warning, martians have landed, etc etc) doesn't work for very long, because most people hear many messages, choose the ones that match their world model (the scientific method is actually quite human!), and ignore things that are a poor fit.

So the main existential threat I see from LLMs is to journalists.

:-)


Gloss


Model


An artificial representation of something that lets you explore the thing, without having to mess with the real thing.



AI


Artificial Intelligence is a collection of technologies that implement models using statistics, computer science, cognitive science and neuroscience, the social sciences and cybernetics. These can be embedded in systems that continuously update those models with new input. Furthermore, they may interact with the environment, generating output.


AGI (Artificial General Intelligence) is sometimes used to describe AIs that approach fully human capabilities. Where we are on this spectrum today is a matter for debate.



Foundation Models


Foundation Models are large AIs trained on large amounts of data, e.g. LLMs and Stable Diffusion.



Generative AI


A Generative AI is a type of AI that creates new data that has similar characteristics to the data it was trained on, e.g. LLMs, and diffusion systems like Stable Diffusion, Midjourney and DALL-E.


Large Language Models (LLMs) 


A particular kind of foundation model that is a generative AI, and can create (usually) text output that is as natural/human as its input: examples include Bard, GPT or LLaMA.



Deep Learning


A collection of techniques for acquiring models from complex input data without having a human in the loop to write down the model.


Neural Networks


A specific technology, inspired by neuroscience and human brains (though typically very different), for implementing deep learning.



Other tools and methods that aren't neural nets but are widely used today include regression analysis, random forests, causal inference, Bayesian inference, probabilistic programming, Phi-ML... <add yours here>


Some AI properties of interest:

uncertainty quantification - confidence isn't just a trick; knowing your limitations is also important. How good a classification or decision is matters if it is a matter of life or death, or even just wealth and well-being.
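
(As a toy illustration, not any particular system's method: the entropy of a classifier's predictive distribution is one cheap way to quantify how much a given decision should be trusted.)

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a predictive distribution; higher means less certain."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

print(predictive_entropy(np.array([0.97, 0.02, 0.01])))  # confident prediction
print(predictive_entropy(np.array([0.40, 0.35, 0.25])))  # much less sure - maybe escalate to a human
```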

explainability - there are at least 4 good XAI techniques - most add around 4x to training costs, but massively increase the value of an AI.

sustainability - $4M per training run is not sustainable, nor is 4% of global electricity production.

scalability (distributed/federated/parallel) - one very nice line of enquiry in AI is mixing systems: XAI can be used to reduce the complexity of a neural net massively while retaining uncertainty quantification, which also makes the system more sustainable. In some scientific domains, Phi-ML does just that, and can get orders of magnitude of speedup/cost saving while hugely improving explainability from first principles.

Federation also offers improvements in efficiency (by combining models rather than raw data) - this also improves privacy, and reduces dependency on a centralised agency. So (for example) instead of the UK giving all our health and economic data to the US tech companies, we can just federate it locally (as is being done in HDR UK) and retain its value to us, without huge loss of (data) sovereignty - we can then lease (or sell) our models to other countries. That seems like a much better idea than targeted political campaigns through more precise human-like generative AI text-bots.
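
(A minimal sketch of the "share models, not raw data" idea - plain federated averaging weighted by local sample counts; the sites and numbers are made up, and this is not the HDR UK machinery itself.)

```python
import numpy as np

def federated_average(local_models, sample_counts):
    """Weighted average of locally trained parameter vectors.
    Only the parameters leave each site; the raw data never does."""
    weights = np.array(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))

# three hospitals, each with its own locally fitted parameters (toy numbers)
site_models = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
site_sizes = [1000, 5000, 2000]
print(federated_average(site_models, site_sizes))
```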

Wednesday, June 14, 2023

The 9^H^H 10 immutable laws of Gikii



Something must be done


Something must be done, but not by you, tech bro


Nothing can ever be undone. Ever. Especially not laws.


Everything follows the Gartner Hype Curve, especially the use of the Gartner Hype Curve


Belief in the Blockchain is Immutable.


Schroedinger's cat isn't.


The umlaut is not a hesitant football hooligan


Celestial Emporium of Benevolent Knowledge, from afar, looks like The Matrix.


Stochastic Parrots are not pining for the Fjords.


This list entailed natural language processing.

Monday, June 12, 2023

test, verify and attest -

When you build a system, you'd like to know it is that system you run, and that nothing has been modified since that build. Or at least mostly (maybe you have dynamically linked libraries, or are running as a component in a distributed system, or are re-running on a new OS release/VM/container, etc etc), so you also want to know that those systems are (mostly) the same too - you want

mutual attestation

but also assurance about the system behaviour either side of that mutual divide.

So one thing one might do is have a behavioural signature for a system - basically an execution trace. Tim Harris built such a system for pervasive debugging a while back - the trace can often be massively compressed since much of it is repetitive - indeed, there was a nice demo of actually being able to run programmes backwards!

So each system would log a trace with the attestation service, and then carry a manifest (a signed digest of the trace) as well as an integrity check of the actual system...
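
(A back-of-the-envelope sketch of that manifest, assuming a trace is just a list of event strings: compress it, then sign the digest with a key held by the attestation service. The key and function names here are placeholders, not any real attestation API.)

```python
import hashlib
import hmac
import zlib

ATTESTATION_KEY = b"shared-secret-held-by-attestation-service"  # placeholder

def make_manifest(trace_events):
    """Compress the execution trace (it's usually very repetitive),
    then return (compressed_trace, signed digest) for later checking."""
    raw = "\n".join(trace_events).encode()
    compressed = zlib.compress(raw)
    digest = hmac.new(ATTESTATION_KEY, compressed, hashlib.sha256).hexdigest()
    return compressed, digest

def verify_manifest(compressed, digest):
    expected = hmac.new(ATTESTATION_KEY, compressed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, digest)

trace = ["open(config)", "read(config)", "connect(db)", "query(db)"] * 1000
blob, manifest = make_manifest(trace)
print(len(blob), verify_manifest(blob, manifest))  # small blob, True
```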


Then it'd be up to some runtime checker (like the aforesaid pervasive debugger) to decide what level of deviation from the typical trace constituted a possible problem. This could use a similar approach to Vigilante to detect bad behaviours, or sign systems that have run without any detected deviation (note, this is not a guarantee, but could give a tradeoff - see next):-


We could apply this as part of Data Safe Havens to give some level of assurance, automatically, that small changes to applications or to the haven after a given release have not deviated beyond some acceptable threshold (this could be zero in the extreme case, or even by default).... It would also let developers try stuff with a little flexibility....
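
(And a toy version of the runtime check: flag a run if the fraction of events that differ from the reference trace exceeds the agreed threshold - zero by default, as above. The event-by-event comparison is a gross simplification of what a pervasive debugger would actually do.)

```python
def trace_deviation(reference, observed):
    """Fraction of positions where the observed trace differs from the reference."""
    length = max(len(reference), len(observed))
    if length == 0:
        return 0.0
    mismatches = sum(a != b for a, b in zip(reference, observed))
    mismatches += abs(len(reference) - len(observed))  # missing/extra events count too
    return mismatches / length

def acceptable(reference, observed, threshold=0.0):
    # threshold=0.0 means any deviation at all is flagged
    return trace_deviation(reference, observed) <= threshold

ref = ["open", "read", "connect", "query"]
obs = ["open", "read", "connect", "query", "write"]
print(trace_deviation(ref, obs), acceptable(ref, obs, threshold=0.25))
```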

Monday, June 05, 2023

basic behavioural biometric

Many camera phones now use lightfield/AI hacks: by taking multiple shots in rapid succession, and exploiting the fact that the camera (phone/camera shake) is very rarely stationary between frames, the perspective shifts slightly and one can derive 2.5D or depth information (with a bit of nifty graphics co-processing).
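
(The underlying geometry is just stereo disparity: if the shake moves the camera sideways by a small baseline b, a point that shifts by d pixels between frames sits at roughly depth Z = f*b/d for focal length f in pixels - a toy calculation, not what any particular phone actually ships.)

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation Z = f*b/d, reused across shaken frames."""
    return focal_px * baseline_m / disparity_px

# e.g. 1000-pixel focal length, 5 mm of hand shake, 10 pixels of shift
print(depth_from_disparity(1000, 0.005, 10))  # roughly 0.5 metres away
```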

Given people use face selfies as a biometric, this suggests two improvements to how that works

1. use the depth info as part of the biometric - this prevents still image replay attacks since a print or screen won't have depth info in it

2. use the actual camera shake as proof of liveness, but even more, use the specifics of how the camera moves as a "signature", which might prove to be relatively distinct for a given user - and would help prevent attacks where adversarial people "update" someone's photo (e.g. pretending to be that person by borrowing their phone and trying to replace their face ID so later attacks would work). Unique hand movement might be enough to make this hard to do :-)
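
(A hedged sketch of what comparing shake "signatures" might look like, assuming the phone keeps a short motion trace from enrolment and another from each attempt - the similarity measure and threshold are entirely made up.)

```python
import numpy as np

def shake_similarity(enrolled, attempt):
    """Normalised correlation between two short motion traces;
    near 1.0 means the same shake pattern, near 0 means unrelated."""
    a = (enrolled - enrolled.mean()) / (enrolled.std() + 1e-9)
    b = (attempt - attempt.mean()) / (attempt.std() + 1e-9)
    n = min(len(a), len(b))
    return float(np.dot(a[:n], b[:n]) / n)

def plausibly_same_hand(enrolled, attempt, threshold=0.6):
    # threshold is a placeholder; a real system would tune it per user
    return shake_similarity(enrolled, attempt) >= threshold

rng = np.random.default_rng(1)
enrolled = rng.normal(size=200)                   # stored at face-ID enrolment
attempt = enrolled + 0.3 * rng.normal(size=200)   # same hand, a bit of noise
print(plausibly_same_hand(enrolled, attempt))
```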
