Sunday, April 28, 2019

machines like me? not really...

Literary writers trying their hand at science fiction often cause problems. For SF aficionados, it annoys, as the literary writer is often unaware of the tropes of the genre, or that a particular topic may have been extremely well explored in a novel of ideas that was published in the genre and never recognised in the classical literary world. For the non-SF reader, the sudden importation of the tics and tells of the SF world may also rankle, although this has become much less so in recent years, as so many of the "great and good" of the aforesaid classic world of letters have decided to write books that at least tackle the world's recent challenges by first reading a batch of the appropriate science (whether the dismal science of economics, to write about the crash of 2008, or the warm science of anthropology, to write about gender in alternative histories of the future (you know who I'm talking about, right? not Ursula Le Guin:), or else the science of computers, to create stories about the impact of apparently intelligent machines on robots - oops, sorry, got that the wrong way round - we humans aren't apparently intelligent machines, that's the robots, that is :-)

Hence to Ian McEwan's latest, Machines Like Me, an everyday tale of rape, suicide and, possibly, murder. Fairly everyday stuff from this author, you might think.... however, in this case he's decided to write a book set in an alternative present, with an alternative recent past, which, crucially, allows him the luxury of having Alan Turing as a living character who has pursued many of the directions hinted at in his work, so that lifelike robots are now an (almost) everyday, if somewhat expensive, reality. McEwan acknowledges Hodges' fabulous biography of Turing as a source for background, rightly, as the character is pretty much what you'd get from that work, or else from the play Breaking the Code (though less so from the film The Imitation Game). Turing is also supported by a cadre of interesting, loosely fictionalised people, to render the progress on AI tech more plausible - most notably, the real-life Demis Hassabis (of DeepMind, also acknowledged as a source at the end of the book) is relocated about 25 years earlier in time than his real self, to help Turing create the more important foreground character of (would you believe it, as the Cockneys have it, would you Adam and Eve) Adam, an apparently functional synthetic human (don't get me started on why McEwan seems unaware of the fabulous exploration of this topic in the wonderful Swedish, then British, TV series Humans). The key "real" humans in this fictional work are Charlie, a mid-30s man of somewhat relative moral virtue, and his friend Miranda, a student of very dull eras of history. I assume she's called Miranda as a sly reference to Shakespeare's character from The Tempest, who utters the words "O brave new world, that has such people in it" on first seeing men - and who is, of course, the source of the title of Aldous Huxley's classic literary SF dystopic vision.

Here, much of the dystopic vision is of the political/economic kind, set in a kind of mash-up of 1970s/1980s Britain with a few alternative-history touches (cf. The Man in the High Castle, Philip K. Dick's alt-history), including some amusing takes on Thatcher, if we'd (spoiler alert)... or what if Tony Benn..... or what about that IRA bomb in Brighton... oh, OK, I won't spoil those bits, as they make up some of the novel's more interesting parts, in the sense that, for this reader at least, they offered a very interesting alternative exploration of why the UK is where it is today, three years after the Brexit referendum.

As for the foreground tale of two humans and a machine like them, I thought that Charlie and Miranda were underwritten, and Adam was overwritten. I thought that the exploration of ideas like "what is consciousness" was about OK, but did not bring anything new to the table - the tension between phenomenology and the Turing test (for what it is worth) for intelligence, and notions of EQ, was covered far more effectively five decades ago in "Do Androids Dream" by the aforesaid Philip K. Dick (who had actually studied philosophy, and could write character and plot). I wanted to know more about Miranda's cranky dad (another non-spoiler - there's a funny reflection on dementia and human misjudgement that involves him, Charlie and Adam).

The ending did not bring a sense of an ending for me; rather, it left various plot lines, moral questions, and unknown unknowns still unknown.

Still, McEwan sure can write, so I'd recommend this book as a decent read, though below his best. He really should get out more, and so should I.

Wednesday, April 24, 2019

TL;XR

we need a program of work like the internet (web) and like the Raspberry Pi, but for a sustainable future - some core building block (not sure what, but I'd call it a Perpetual Notion Machine) that doesn't over-specify what we do, but leads to a plethora of new ideas from the grass-roots/maker communities, ideas that would create cheap (free) energy and clean (safe) water and so on

one example tech I like is solar-powered Stirling engines (you can get toy ones, for example, or ones that actually generate quite a bit of electricity), as they can be built from junk (i.e. they don't require solar cells) - we need a set of recipes for things like that, so kids (climate strikers and Extinction Rebellion folks) everywhere can start to save us, as we sure aren't capable of saving ourselves, and it's more their future than ours anyhow... a rough sketch of the sums is below.
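
for a sense of scale, here's a minimal back-of-envelope sketch (in Python) of what a roughly one-square-metre junk-built dish driving a small Stirling engine might produce. every number in it is an illustrative assumption, not a measurement of any real kit:

# Back-of-envelope estimate for a small solar-dish Stirling generator.
# Every number below is an illustrative assumption, not a measurement.

SOLAR_FLUX = 1000.0     # W/m^2, rough peak insolation on a clear day
DISH_AREA = 1.0         # m^2, a junk-built reflector of about a square metre
COLLECTOR_EFF = 0.6     # fraction of the sunlight actually delivered as heat
T_HOT = 600.0           # K, assumed hot-end temperature
T_COLD = 320.0          # K, assumed cold-end temperature
ENGINE_FRACTION = 0.4   # fraction of the Carnot limit a real engine might reach

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Theoretical upper bound on any heat engine's efficiency."""
    return 1.0 - t_cold_k / t_hot_k

heat_in = SOLAR_FLUX * DISH_AREA * COLLECTOR_EFF           # watts of heat at the hot end
eta = carnot_efficiency(T_HOT, T_COLD) * ENGINE_FRACTION   # net conversion efficiency
electrical_out = heat_in * eta                             # watts of electricity

print(f"Carnot limit: {carnot_efficiency(T_HOT, T_COLD):.0%}")   # ~47%
print(f"assumed net efficiency: {eta:.0%}")                      # ~19%
print(f"estimated output: {electrical_out:.0f} W")               # ~112 W

on those (optimistic) assumptions, a square metre of junk gets you on the order of 100 W - which is exactly the kind of recipe I mean.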

Tuesday, April 09, 2019

another thing you could do with disaggregated identity/credential systems

table 1 (online harms in scope) in the government report just released is a good summary of the problems.... but it would be possible to do better - for me, it would be nice to pull out the applicable law (to the left-most column) and discuss its adequacy, e.g. in extending laws about publishing lies during elections (making sure the law that applies to print and broadcast media applies to internet channels in the same way, etc. etc.)...

so on the technical side, we then have the problem of provenance/attribution. compliance with takedown requests (whether content is illegal, offensive or just wrong) can be driven by social pressure - so we also need to know whether the content creator/distributor, and the complainers, are legit - and this needs accountable ID. so it could be possible to use the same tech we might develop (which has been around in some research outputs for 15+ years) to give online channels a low-cost mechanism for carrying out "Know Your Customer" checks as much as banks do - but using 3rd-party credential providers (in the same way a bar can check "are you over 18?" without having to know anything else about you...). this would then mean that we can verify things - and, in the event of actually illegal content, with appropriate checks and balances, map the online pseudonym to an actual person....
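
a minimal sketch of the shape such a scheme could take, assuming a hypothetical issuer (a bank, say) and a relying party (an online channel); a real deployment would use a proper anonymous-credential protocol, and the escrowed mapping from pseudonym to person would stay with the issuer, behind those checks and balances (needs the Python cryptography package):

# Minimal sketch of third-party attribute credentials: an issuer attests to a
# claim about a pseudonym ("over_18", "verified_person") and a relying party
# (a platform, a bar) checks the attestation without learning anything else.
# Names and structure are illustrative, not any particular standard.
# Requires: pip install cryptography

import json
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# --- issuer side (e.g. a bank or ID service) --------------------------------
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()          # published so anyone can verify

def issue_credential(pseudonym: str, claim: str) -> dict:
    """Sign a (pseudonym, claim) pair; no real-world identity is included."""
    payload = json.dumps({"sub": pseudonym, "claim": claim}, sort_keys=True)
    return {"payload": payload, "sig": issuer_key.sign(payload.encode()).hex()}

# --- relying party side (e.g. an online channel doing lightweight KYC) ------
def verify_credential(cred: dict, expected_claim: str) -> bool:
    """Check the issuer's signature and the claim, and nothing more."""
    try:
        issuer_pub.verify(bytes.fromhex(cred["sig"]), cred["payload"].encode())
    except InvalidSignature:
        return False
    return json.loads(cred["payload"]).get("claim") == expected_claim

cred = issue_credential("pseudonym-1234", "verified_person")
print(verify_credential(cred, "verified_person"))   # True
print(verify_credential(cred, "over_18"))           # False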

advantage is also that it can be used for alibis too:-)

of course, some people still require genuine anonymity - e.g. whistleblowers or folks working under dangerous regimes - and that requires enough cover traffic from apparently non-anonymous or merely pseudonymous users to work effectively.....

that also probably means we need to think about rate limiting the creation of new accounts and figuring out what tolerable rates are (they won't be zero)
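
a minimal sketch of one way to do that - a token bucket per source of sign-ups, where the rate and burst numbers are placeholders, not recommendations:

# Minimal sketch: token-bucket rate limiting of new-account creation.
# The rate/burst parameters are placeholders, not recommended values.
import time
from collections import defaultdict

RATE = 5 / 3600.0   # refill rate: ~5 new accounts per hour per source
BURST = 3           # allow a small burst before throttling kicks in

class TokenBucket:
    def __init__(self, rate: float = RATE, burst: int = BURST):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # refill in proportion to elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# one bucket per source of sign-ups (IP range, payment instrument, inviter, ...)
buckets: dict[str, TokenBucket] = defaultdict(TokenBucket)

def may_create_account(source: str) -> bool:
    return buckets[source].allow()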

all of this would need another report - which maybe the Turing Institute could offer to advise on:-)


Wednesday, April 03, 2019

reclaim the internet

at the Schloss Dagstuhl seminar (on programmable network data planes) we have had a couple of keynote talks, one from Nick McKeown (kind of on why we are where we are with P4 and the switches that implement it) and one from Dave Tennenhouse on the big picture of where we are in programmable networks as a result of four decades of software (internet, ATM, active nets and so on).

What I find increasingly weird is that the desperate need for speed is driven more and more by machine-to-machine communication (including the inevitable analytics/ML/AI), or just software updates, or media being pushed to caches.

but the net doesn't have to be fast for humans - humans could be supported by around 2 Mbps per person, 24*7, for 10 billion people, which is no big deal (the sums are sketched below). but humans are distributed for many good reasons (food, land/water constraints) and moving humans is expensive (and not sustainable)
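
roughly what that claim amounts to, using the same round numbers (a quick sketch, nothing more):

# Quick arithmetic behind "2 Mbps per person, 24*7, for 10 billion people".
PER_PERSON_BPS = 2e6   # 2 Mbit/s - plenty for streaming video plus everything else
PEOPLE = 10e9          # a generously rounded future world population

aggregate_bps = PER_PERSON_BPS * PEOPLE
print(f"aggregate demand: {aggregate_bps / 1e15:.0f} Pbit/s")   # 20 Pbit/s, spread worldwide

# per-person volume if every link were saturated 24*7
seconds_per_day = 24 * 3600
bytes_per_day = PER_PERSON_BPS / 8 * seconds_per_day
print(f"per person per day: {bytes_per_day / 1e9:.1f} GB")      # ~21.6 GB/day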

but it is possible to put the machines in the right place (near the data) instead of moving all the data to the machines.

so we not only need re-decentralization for latency, energy, and privacy, we also need to get all the machine data out of the way, so humans can go back to talking to each other instead of being talked to by advertisement/recommender bots that steal all our data and use it to sell us stuff we don't want - so that humans are replacing the old adage (you can trade time for money, or money for time) with paying bots in both time and money.
