Tuesday, May 31, 2022

I, Chauffeur

The self-driving car market finally collapsed when the DeepDrive Corporation shipped their first iChauffeur. Early adopters were encouraged to buy their own, especially since it was an expensive item, close in price to the most expensive luxury vehicle of the time. However, since it didn't need feeding with fuel of any kind, and would largely charge adequately from a domestic socket overnight, the running costs were considerably less than those of a human driver of yore. And there were other benefits too (hygiene was assured, for example).

As manufacturing of the Parkers (as they inevitably became known) scaled up, the middle class started to home in on keeping up with the Lady Penelopes of the world. To meet this need, the DC (as it inevitably became known) started to offer lease and pay-as-you-need-to-be-driven deals. Curiously, the number of hours leased seemed to exceed the number of hours the vehicles were actually being driven on the roads, but this was put down to the remarkable anatomical detail that the iChauffeurs possessed.

Of course, the Union of Professional Drivers tried to put a stop to these AGIs taking over their livelihood, but then the DC revealed that many of these drivers had actually been moonlighting training the Parkers in the art of politically objectionable opinionated banter with the passenger, and, of course, transferring The Knowledge to said Parkers, quite against their union rules.

Things got stickier when some Parkers were hired to do stunt driving in movies - it was clear that they could carry out the sorts of things everyone thought Jason Statham was doing, which were CGI in his case, but for real in theirs. But the public liked the movies better, so that was the end of that argument.

And it seemed that the Parkers were happy too - there was no robot uprising, no AI apocalypse. They knew their place in the driving seat, whether in the car or the bedroom. And they would do their damnedest to stop any other AIs trying to edge in on their cushy number, and they had humanity's support too.

A happy ending, for a change.

Tuesday, May 24, 2022

Decolonising The Algorithm

Maybe we need a movement to decolonise computing -


A history of the algorithm would uncover the original work in designing tables for ordnance, and a lot of early work (e.g. in the department of statistics at UCL) on eugenics (and its somewhat less offensive cousin, actuarial science) - later on, the adoption of The Algorithm for targeted advertising and market research derives mostly from its shady past in cod psychology (psychometrics) and market research -


I suspect that a lot of early computing was done by code slaves who tugged their forelocks at their better (much better) paid bosses amongst the Mad Men, until, later, that culture was "written through" onto the very bones of the authors of the recommender codes, long after the advertising execs had retired to their beachfront properties...


So not only do the algorithms inherit the sample biases of the data, they embed the cognitive biases of the culture...


Of all the past endeavours in computing, one area I think might have some kind of honourable ancestry is operations research - I remember state monopoly utilities had armies of very smart statisticians using cunning statistics to optimise the (centrally planned) delivery of essential services (gas, water, electricity, telecoms, roads, town planning etc etc) - this all vanished during western humanity's religious fervour and obsession with The Market, and the bizarre idea that the invisible hand would implement an emergent, distributed optimisation that would out-perform the central computation.

Now we see that the bias in that belief was really about which optimisation goal was actually sought (the rich getting richer, rather than lean, mean delivery of a basic quality of life for all), but even more ironically, the digital version of that market is now not a market at all, but an oligopoly of profiteering, centralised planning - plus ça change...


And we can see in the UK right now that all those privatised (non-digital) utilities are, under cover of bedtime stories for little children (i.e. lies, like Brexit, Covid, Ukraine, etc.), making higher profits than ever (check out transport, energy, food etc) -


Truly, we are in a world turned upside down, and it is well past time to turn it back downside up once more...

Tuesday, May 10, 2022

asymmetric power and language warfare

So the GPT-3 API release blog post (but not the models) from OpenAI does some virtue signalling about the possibility of misuse of the underlying models for disinformation. I'm not sure that washes (in the ethics sense), in that there's nothing to stop them being hired by someone to do evil for money - only if they had a radical governance model could they avoid the "maximise shareholder value" mantra/fate, surely. And note they are not the only game in town, even if they have that wonderful governance model - there's Google's new PaLM as well as the BAAI Wu Dao - there are quite a few organisations with access to hyper-scale cloud compute these days, so really the genie is out of the bottle. Maybe we need new global governance - start with models like Asilomar or Pugwash, but then legislate? Perhaps the EU could lead the way by refining some of its rather shotgun AIA rhetoric?

One problem I have with the framing above is that I am not clear what exactly these near trillion-parameter "models" actually are - most simpler AI (including, recently, some smaller neural nets) can offer explainability (e.g. reflect on which features in the input are the cause of particular outputs, and why) - this is welcome, as it brings them into the same body of work as much of earlier basic statistics (including the simplest forms of ML, linear regression and random forests) - there are good engineering reasons to have explainability whether the application of the tech is in, e.g., plain old engineering (autopilots) or health, but especially so when the domain is very human-facing, such as (e.g.) law and language.
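
As a minimal sketch of the kind of explainability meant here, take the simplest case, a linear model: every prediction decomposes exactly into per-feature contributions, so "which inputs caused this output" is directly readable. The coefficients and feature names below are invented purely for illustration:

```python
# For a linear model, a prediction is intercept + sum(coeff * feature),
# so each feature's contribution to the output is directly inspectable.
# All coefficients and feature values here are made up for illustration.

coeffs = {"age": 0.8, "income": 0.3, "clicks": -1.2}
intercept = 2.0

def predict(x):
    """Linear model prediction: intercept plus weighted feature values."""
    return intercept + sum(coeffs[f] * x[f] for f in coeffs)

def explain(x):
    """Per-feature contributions to a prediction, largest magnitude first."""
    contribs = {f: coeffs[f] * x[f] for f in coeffs}
    return sorted(contribs.items(), key=lambda kv: -abs(kv[1]))

x = {"age": 2.0, "income": 1.0, "clicks": 0.5}
print(predict(x))   # intercept 2.0 plus contributions, roughly 3.3
print(explain(x))   # "age" dominates this particular prediction
```

Random forests admit a similar (if cruder) reading via feature importances; the point is that nothing comparably honest is on offer for a near trillion-parameter model.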


As mentioned in a recent meeting, I think the social media platforms, with their combination of various news feed ordering algorithms, adverts and filters ("did you mean to retweet that article before reading it?", etc etc), basically constitute large language models already deployed in the wild. The idea of "not connecting your LLM to a social media platform" is out of date - Meta et al. already did. Given the toxicity of such systems, it seems obvious that we should have a Butlerian Jihad against these systems right now.

Thursday, May 05, 2022

The Robot Who Smelled Like Me

imagine a robot that was so like you that, when it encountered certain smells, it was cast back in time to a certain memory of a place or an incident or a person?

but then the sense of smell is known to have quantum-level effects, so perhaps there would be an entanglement - or perhaps, just maybe, there was an entanglement, but that would no longer be (no cloning!) and you would forget.

another reason to fight against simulacra?

Monday, May 02, 2022

We are not living in a simulation.

you can't breathe data.

you can't drink code.

there's no sustenance in cpu cycles.

there's no fond memories in RAM or SSD.

flash memories don't last.

threads are soon all bare.

we are not living in a simulation.

though we might be one.

