Thursday, May 31, 2018

digital person(ae)

regarding decentralised, fair analytics?
https://www.eventbrite.co.uk/e/the-digital-person-a-symposium-tickets-43050737916

some possible discussion/questions

1. who can proxy for a hub in the home, for great grandfather...the bank, the kids, the bbc, the GP, all of the above...(see 4)...

2. price discrimination v. differentiation - do we need "cloud neutrality"

3. how near to privacy/security/utility tradeoff curve are we in practice in central v. decentralised cloud/analytics?

4. what about identity systems? are we ready for multiple pseudonyms each with a subset of our attributes (am-over-18, or am-a-citizen of country x) instead of centralised id with everything?

5. who will power the infrastructure when it's completely decentralised?
we're a long way from microgeneration...

6. in edge ai, what are the distributed analytics _coordination_ challenges

7. in edge ai, what are the distributed analytics _privacy_ (diff?) challenges

8. how do we get assurance (sousveillance/someone-elses-pov dashboard) in the decentralised world?

Tuesday, May 29, 2018

edge to edge bogus arguments in systems design

since the arrival of blockchain tech, we're seeing a lot of bandwagoning on
edge-computing.

what most of the pundits pushing for this mean is "just-inside-the-edge" compute/storage/services -

i.e. it's still owned by network providers, or co-lo kit from a good-old-fashioned-cloud service - same-old same-old. it does get lower latency/higher availability, lower backhaul network costs and (possibly) the ability to localize service behaviour to a geographic jurisdiction, which are all ok things to do.

but it isn't p2p.

but it isn't end-to-end.

e2e was/is the liberating architectural feature of the net that lets anyone run a service. that lets value scale super-linearly (between n*ln(n) and n^2, depending on who you believe).
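a toy comparison of those two scaling laws (Metcalfe's n^2 versus Odlyzko's n*ln(n)) - the function names and sample sizes here are just illustrative:

```python
import math

# Two proposed laws for how network value scales with n users:
# Metcalfe: value ~ n^2 (every pair of users can interact)
# Odlyzko/Tilly: value ~ n*ln(n) (most links matter much less)
def metcalfe(n):
    return n * n

def odlyzko(n):
    return n * math.log(n)

for n in [10, 1_000, 1_000_000]:
    # the gap between the two estimates grows like n/ln(n)
    print(n, metcalfe(n) / odlyzko(n))
```

either way the value grows faster than linearly in n, which is the point: the network rewards whoever owns the whole graph.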

p2p was a failed tech predicated on everyone running things e2e in their home, pocket or car. it failed because of three barriers
i) asymmetric capacity in access networks - this is hard to blame on anyone - it's a feature of using old copper capacity and of how shared-medium spectrum works for fixed and wireless broadband. slowly, it is becoming less the case (last time i looked, 10M out of 35M households in the UK had fiber, which doesn't have these constraints).
ii) IPv4 address space depletion leading to being NATted to death, rather than deploying IPv6 (or anything else).
iii) software deficiencies leading to paternalistic firewalling of systems with vulnerabilities, rather than fixing the root cause (poor systems security).
iv) add yours here....

if you don't run the ledger, file service, social network, messenger platform in your home/pocket/car, it isn't end2end. if it isn't p2p, it isn't e2e. if it isn't e2e, it's still 0wned by someone else. even if you have a spare set of keys.

Friday, April 27, 2018

Quantum Computing and Quants and the Turing Institute's mission

so people are afraid of quantum computing.
people should be far more afraid of today's algorithmic trading systems.
the world is run by a bunch of computer programs which have never been scrutinized - they might each just be nonsense, but in combination, even if they are all perfectly correct, they certainly create nonsense. The fiction that the world is running on some financial-fantasy academic structure known as a market is a wonder to behold - essentially, a sustained collective delusion in the face of obvious, massive factual objections (american exceptionalism: why is the dollar worth what it is, aside from military might? protected markets inside China; split-level economies like Brazil; and so on). all this is like the medieval world where the smith would charge one price to shoe the farmer's horse, and another (massively higher) price to shoe the Lord of the Manor's horse - why? and why didn't anyone run arbitrage on this? because society and people don't mix at all well with the idea of money.

So a lot of work in machine learning and AI and finance purports to address problems like money laundering and fraud and so forth. And yet we live in a world where the whole operation of existing algorithms is based on a false belief: that they operate in a market. Many of them operate in the casino that is the stock market, which is even more divorced from reality than the rest of the economy. Here algorithms are in an arms race, and yet it is an odd arms race: unlike warfare on conventional battlefields, where we can pick apart the guns and planes of the other side from time to time and take their ideas, or at least reverse engineer them, the algorithmic race in the stock market is too complex and too fast to allow this - code just interacts via the symptomatic (observed) behaviour of the system, and rarely if ever directly interacts with other code. Most weird.

So the real first duty of ML&AI in the world of finance should be to expose and fix these structural problems - first, to model the world's economy properly (e.g. at least as well as people like Piketty, but more so, dynamically), and then to build a sound system for running investments without casinos like the stock market, but also without fantasies like Adam Smith's invisible hand.

So what has this to do with Quantum Computers[1]?

Well, QC promises to run some new algorithms in a new way - there aren't any signs of a full working QC piece of hardware yet, and there are precious few actual algorithms[2] so far, but one in particular has grabbed a lot of attention: the possibility of factorizing numbers super fast compared with good old fashioned von Neumann computers (faster even than parallel and distributed vN machines) - this is due to the qualitatively different way that QC hardware works (highly parallel exploration of state spaces, scaling exponentially with the number of qubits).
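for contrast, here's a minimal classical factorization by trial division - cost exponential in the bit-length of N, whereas Shor's algorithm (note 2 below) takes time polynomial in the number of bits:

```python
# Naive classical factorization: trial division up to sqrt(N).
# Cost grows like O(sqrt(N)) = O(2^(b/2)) for a b-bit number,
# which is why RSA-sized moduli are safe from it; Shor's quantum
# algorithm would need only polynomially many gate operations.
def trial_division(n):
    f, factors = 2, []
    while f * f <= n:
        while n % f == 0:
            factors.append(f)
            n //= f
        f += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(15))          # → [3, 5]
print(trial_division(3 * 7 * 11))  # → [3, 7, 11]
```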

So what does this threaten?

Well, it threatens to break cryptography, which means our privacy technologies for storing and transmitting (and possibly even securely processing[3]) data are at risk. Bad guys (or just curious people) will be able to see our secrets.

So two thoughts
A) why can't we just devise new QC-resistant encryption algorithms, which just moves the arms race along in the usual way (million-bit keys for example, or something really new we haven't thought of yet)? Then we are back to the same old normal world where most data breaches will be because of social engineering or stupidity and self-inflicted wounds (minister leaves unencrypted USB stick on the bus).
B) Maybe we get more cautious as a whole and just don't send stuff around so glibly or provide remote access to our computers so readily. Maybe access control and authentication and just implementing least privilege properly could work most of the time, and the whole idea of crypto saving us was a chimera and a delusion, just like the whole idea of the market was a snare and a delusion?

my 2 q-cents.

j.
1. not to be confused with quantum communication where we use entanglement just to detect eavesdroppers - a perfectly sound, existing tech with a very narrow (point to point, not end-to-end) domain of usefulness.
2. shor's algo for example. one puzzling thing is that we had hundreds of algorithms for von-neumann style computers before we had any working hardware. why is it so hard to conceive of algorithms for QC? seems like it is a poor match for how humans are able to express methods for solving problems (which are many, and varied, but don't seem to fit ensemble/state space exploration, except perhaps in MCMC:-)
3. e.g. homomorphic crypto - possibly also at risk from QC, although re-applying ideas like garbled circuits to a QC machine shouldn't be too hard:-)

Tuesday, March 27, 2018

How science progresses - falsifiable, probably or paradigm shift, likely?

Reading Staley's excellent introduction to the philosophy of science, I was reminded of reading Popper's Objective Knowledge back in the 1970s. But now I'm a recovering Bayesian, immersed in social-science explanations like Kuhn's The Structure of Scientific Revolutions, and in the whole idea of funding/groupthink/paradigms, and I'm convinced we don't have a good basis for choosing the right description of the process (or for classifying best practice) until we study the past, both its pre- and post- states. I'm thinking that people choose to run Occam/Popper after they intuit a new paradigm shift (e.g. the Copernican model of the planets), and use some confidence model to decide that, when the new theory has objectors, the objectors are outliers, whereas the old outliers the new theory explains were more important than the new outliers it creates - of course, the new theory can still be wrong, but the smart money is that it isn't...

Tuesday, February 06, 2018

what if you were the only real person in the world

Anyone read Theodore Sturgeon's fabulous short story It Wasn't Syzygy?

Trying to wean people off facebook by creating an alternative (e.g. advert free, subscription, but open to link to other platforms) system, everyone always starts by saying "you can't beat the network effect". so at what scale does this network effect magically become unbeatable? for example, the web has beaten TV even though TV had a billion users. Metcalfe claimed the value was n-squared; others have toned that down to n log(n), but i think that's ignoring the _negative_ contribution from spam/phish/troll/advert/attention grabbing, which inevitably grows with the network, and usually, over time, faster in the end. so here's my proposal anyhow: we invite you to our new net which has "everyone" in your network on it, but initially, your friends are all just bots emulating your real friends, to make you feel at home there. now you tell your real friends about your safe new, ad-free social net, and as they join, they replace the avatar/bot versions of themselves (a bit like the opposite of the Stepford Wives). oh, did I forget to tell you? we already did it. No, really. We didn't have to do it - they are doing it to themselves - c.f. Dr Wu's fine book on the attention merchants...

+
 so in fact we can model this from the fact that the network is directional, and end points (humans) take more time to create new content than to consume (new-to-them) content - so even if we aren't all couch potatoes, this asymmetry in creativity versus consumption means that the network will tip from peer-to-peer to being dominated by a small number of producers and a large number of consumers - the cost of creation will drive the quality of creation down, but the quantity up (to keep it new - well known to pornographers for example - c.f. https://yourbrainonporn.com/)
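a toy sketch of that tipping point: positive value grows n*log(n), the spam/troll/attention-grab term grows faster. the per-link gain, spam cost and exponent here are invented purely for illustration, not measured:

```python
import math

# Net value of a network of n users: a positive term growing
# n*log(n) (Odlyzko-style) minus a negative term for spam/trolling
# that grows faster (here ~n^1.5). All constants are made up.
def net_value(n, link_gain=1.0, spam_cost=0.01):
    return link_gain * n * math.log(n) - spam_cost * n ** 1.5

# find roughly where growth turns into decline
best = max(range(2, 2_000_000, 1000), key=net_value)
print("value peaks near n =", best)
```

the point being that with any super-n*log(n) negative term, there is always a finite scale past which adding users destroys value.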

Tuesday, October 31, 2017

declining big data, colour me rimbaud

first you beg for data
then you bag the data
then you de-bug the data
then you get bog-ged down in the data
finally, you big up the data

so what's new in this , i beseech you from the depth of my vowels?

Monday, June 26, 2017

appear to peer - ideas for glastonbury from 2017

so standing in the middle of a very large field surrounded by 200,000 people, but within about 100 people's handshakes of a bar, why not build a massive p2p version of uber for beer? you register, and then people literally pass beer across to you and you pass money back.....you'd need a trust/reputation system - there'd be some spillage....but that's true anyway (I got the wrong change at least 3 times at the bar the traditional way)

the world's first firechat-style beer-to-beer network.....


could also work for snack deliveries...and recycling

meanwhile, in the traditional Real Life, observing someone walk from the Village Pub to the center of the crowd in front of the Pyramid (watching The National, if you want to know) carrying 2 pints + 2 plates of fine ethnic stacked high food, narrowly avoiding many scurrying people, we are a Very Long Way Away Indeed from self driving AI robots navigating a space this complex & dynamic.

if you care  about music, what was good? most stuff, like Thundercat, Joseph, the Lemon Twigs, and some oldies like Barry Gibb and Chic, and a blistering opening set from the Pretenders, with la Hynde in excellent voice. Radiohead? Nah, a bit meh, really. Kris Kristofferson (81) charming, but frail. The aforesaid National? Very Good Indeed. Beyond all possible descriptions? Father John Misty and London Grammar - both of them made time. stand. still.  loads of good comedy, politics, amusing high wire acts & lessons. and a very very chill mood (helped by fairly fine weather almost the entire time!)

Friday, April 28, 2017

unfairness in automated decision making in society.

reading this book about mis-use of maths/stats recently, i think we can go further in condemning the inappropriate approach taken in some justice systems to decide whether a guilty person receives a custodial sentence or not.

The purpose of locking someone up (and other stronger sentences) is complex - it can be to act as a disincentive to others; it can be to protect the public from that person re-offending; it could be a form of societal revenge; and it might (rarely) be an opportunity to re-habilitate the offender.

So we have a Bayesian belief system in action, and we have a feedback loop.  But we better be really careful about i) the sample of inputs to the system and ii) the sample of outputs....and not forget these are humans, and capable of relatively complex and highly adaptive behaviours.

So what could be wrong with the input? (sigh, where to start) -
people who commit crimes are drawn from a subset of society, but people who are caught are drawn from a biased subset - firstly, they're probably less well educated, or dumber, or both, because they got caught. secondly, they're probably from a socially disadvantaged group (e.g. a racial minority).
people who are found guilty are also subject to selection bias (and people who get away with it are party to survivor bias too) - juries re-enforce the bias already present in the chance of being caught.

people who are sentenced acquire new criminal skills - this may make them less likely to get caught if they are just poor, but more likely if they are dumb.

So in there I count at least 4 ways that a decision system that looked at re-offending rates, and properties of the person found guilty, would be building in positive feedback that will lead to more and more people being incarcerated, with less and less justification.
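a toy simulation of that positive feedback loop - the populations, rates and policing multiplier here are invented purely to show the dynamics, not estimated from any real justice system:

```python
# Two equal-sized groups with identical true offending rates, but
# group B is policed more heavily, so more of its offenders are caught.
# A risk score trained on *conviction* data then rates group B as
# higher risk, which justifies yet more policing - positive feedback.
def simulate(rounds=10, true_rate=0.05, policing_b=1.5):
    pop = 100_000
    risk_a = risk_b = 1.0   # relative "risk scores" fed back into policing
    for _ in range(rounds):
        caught_a = pop * true_rate * 0.2 * risk_a
        caught_b = pop * true_rate * 0.2 * policing_b * risk_b
        # next round's risk scores are derived from observed convictions
        risk_a = caught_a / (caught_a + caught_b) * 2
        risk_b = caught_b / (caught_a + caught_b) * 2
    return risk_a, risk_b

ra, rb = simulate()
print(f"apparent risk: A={ra:.2f}, B={rb:.2f}  (true rates identical)")
```

after a few rounds the scores diverge completely, even though the underlying behaviour of the two groups never differed.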

occasionally, external changes (accidental natural experiments) perturb the system and make this more obvious - in the documentary film The House I Live In, the absurd war on drugs is shown to be massively counter-effective - near the end, the huge bias that this has set against african americans starts to wane, simply because of the move of the poor white working class of america into making and consuming crystal meth (so brilliantly portrayed in Breaking Bad) - suddenly, the odds stacked against one group, multiplied by re-enforced prejudice 3 or 4 times over (indeed, one more time for the 3 strikes rule), hit lots of "trailer trash"....

An interesting research task would be to run a model inference tool on the data and see how many latent causes of bias we can find - maybe my 3,4 or 5 is not enough.

truly the world is broken, when it comes to evidence based decision making!

Saturday, March 25, 2017

of the internet, for the internet, by the internet

what have we wrought?

i don't think it is about the echo chamber, bubble, or
faddish claims about fake news and alternative facts.

nor do i accept that the internet offers a zero-cost channel - the internet switched the value propositions around by reducing the cost for the sender, but for some kinds of content, it simply moves the cost somewhere else:

1/ to the receiver (spam/adverts/recommendations, whatever you call them)
2/ to the content creator (for music, film, games etc)
3/ to the regulator (to ensure neutrality, control monopolistic tendencies etc)
4/ to the service provider, as real competition drives profits to truly marginal
5/ somewhere we haven't thought of yet

so what we didn't think about was how to design robust games to allow people to design and choose appropriate system architectures for sustainable worlds, whether journalism (that doesn't let the vocal extreme minority control the agenda) or creative industries (so original work is rewarded), or peer-economic structures like uber, airbnb, etc that treat the means of production/labour force fairly...

hard times

[yes, i know this is sort of a version of jaron lanier's stuff, but it is becoming more and more evident that the complaint is right, but we need an actual fix, and that that is the hard problem, not identifying the cause, but designing the solution]

Monday, June 13, 2016

Five Digital Epistemological Objects for 2064

Five Digital Epistemological Objects for 2064
or
A Digital Narrative Ark of the Knower

jon crowcroft, Cambridge, 8.5.2014


dreams, visions and prophecies in bits

It is too hard for humans to fully comprehend humans, but it may be possible
to construct a digital model, a computer simulation or even emulation,
that is accurate, not just descriptive, but also predictive. Such a
model would embody modes of thinking that are not entirely rational,
which is what current "AIs" attempt, but would extend to domains
which, I believe, are entirely human, such as dreaming and visionary
or prophetic processes - these are not magic, or pseudo-science ideas,
but ways in which human thought processes leapfrog piecewise or
incremental steps, perhaps building on such mundane stages, but only
revealing themselves thus-wise, as revelations. Not Deep Blue beating
humans at chess, but more surprising.


computational ethics

We struggle with ethical dilemmas. Why? there are ambiguities or
paradoxes. These are quite easy to express in the right formal
systems, so we should be able to create, perhaps with help from
machines, ethical props, crutches, to help guide us to what is right.
Asimov's laws of robotics (4 in the end) were naive, but a start - we
should play with more such. The history of robots (golems, Rossum's
Universal Robots, Mary Shelley's Frankenstein, etc) is littered with great examples.


diseases who think

it is a high pomp of pretentiousness that only humans think. we know
(e.g. from Dunbar's (and Alison Richards') studies of apes)
that the theory of mind is present to some degree in other creatures,
and sometimes, less so in some people.
But the most alien of creatures, such as hive animals, and,
in extremis, bacteria are capable of collective reasoning. Can we
train them to help us? Can we
infect people with thoughts, literally, rather than merely
figuratively?

Bring meaning to Pat Cadigan's notion  of being incurably informed
(see Synners).


haunts - memories stronger than reality

smells, and superstitions, influence us and resonate more than careful
abstract recollections. Perhaps there's an embodiment of knowledge in
these modalities that we could build better, artificially, than
already exists. Can we code ghosts?

learning to un-banish ghosts might be the ultimate rationalisation.

embodiment of knowing in the knower is in some cases physiological
(scent, muscle memory, belief) - capturing this missing element (where our
typical current digital media representations address only 2
or 3 of the more boring senses: sight, hearing, perhaps touch) seems
like a worthy goal in terms of understanding our understanding more
deeply.


frailty -

we need digital analogues for flakiness - just as digital
transmission of moving pictures can "degrade gracefully", perhaps
knowledge can be coded in ways that are still useful when partly
rotten - as with the human suffering from dementia, still able to carry
out some cognitive tasks, perhaps artificial thinking can be made
resilient. [today's programs, if even slightly corrupt, simply fail
outright - this is a poor show].

In a deeper sense, reflection on the inherent inaccuracy of representation
is needed, etc

indeed, the optical metaphor can be (over-)extended, using the notion
of different lenses, not just for different viewpoints (different
epistemic architectures) but also for level-of-detail - zooming in to
some (reductionist) model, or retreating to some level of abstraction.
Technology (that is processable - i.e. usually digital) can help with
this - indeed, statistics, visualisation, modeling in general, or
towers of models, are all about this.

losing detail is not necessarily loss of knowledge - indeed, the
ability to ignore detail (see the wood for the trees, or the aforesaid
abstraction process) is one of the more useful human (cognitive?)
skills.


--------> Notes and Websites

The mantra data -> information -> knowledge -> wisdom
(c.f. Toffler's Future Shock and Brunner's Shockwave Rider)
is glib, but useful. each stage in this notional process adds
some sort of structure and processing, whose algorithms and
representational choices are themselves just more data (as per the
Eckert/Von Neumann Stored Programme Computer Architecture - sometimes
incorrectly ascribed to Alan Turing:)

Provocations from the meeting of 7.5.14 at CRASSH:

Q.what diff does move from analog to digital make w.r.t knowledge?
[not restricted to humanities part of digital (humanities)

-ve A
n.a. no change
n.b. networking/
n.c. distributed knowledge

n.d. just scale/efficiency...
[me, but emergence - see below and

http://rappers.mdaniels.com.s3-website-us-east-1.amazonaws.com/
http://www.theliteraryplatform.com/
http://fufufo.com/

two types of DH
1. boring: use of computational tools to do studies like word counts in Jane Austen
2. more interesting - humanities study of social/digital/new media

what about both? e.g. study of sampling/mash up?
or
http://www.digitalgovernmentreview.org.uk

+ve A - changes knowledge & also modes&modalities of knowing...

so not H applied to D, but D  to H
so how do humans change when they go digital...

e.g. measure of time - exact? v. inexact
so exactitude is itself a new suitable topic....

---->
every decoding is an encoding...maurice zapp, in lodge's small world:)

e.g.  science - robot scientists discovery/sharing:)
eScience program (e.g. climateprediction.com, seti@home etc)

better e.g. Maths:
proof assistants
Coq & Isabelle
(e.g. 4 colour map & Fermat's last theorem)

Lessig: code as law

2. quantitative: cost copy -> zero (recall)
[all email since 1976]

Piketty's Capital in 21st Century - 20 countries for 150 years...
Scale sometimes is a qualitative change - emergence

3. qualitative: artefacts...

Bad - ideas (e.g. big data) broken (lose nuance)
good - new forms (susan collins @ slade - many turner prizes...


---->
Discussion:-

+ culture
+ society

Piketty: capital in 21st century - twaddle v. girlfriends...

narrative v. sci method -
just different points in process in science v. humanity work...?

versus! creative step in science is still not understood:)

The mistakes are ... interesting..-slade art...
the two brians (may&cox:)

read also: more than human (theodore sturgeon) and
shockwave rider (john brunner)

see also post modern object truth & no value judgements ?:-)
liberal  arts students in 70s who went into west coast startups
may have become unethical coz of this:)

diy:and failure machines:
http://www.katjungnickel.com/2014/02/28/dagstuhls-diy-networking-seminar-making-a-failure-machine/

Saturday, May 28, 2016

cats will 0wn the Internet of Things just like the rest of the Internet

we have a smart cat flap - from a jolly good company called sureflap. we've had it a few years. Our cat is chipped so if she gets lost she can be returned, and so the vet can tell what treatments she's had etc etc - all good...

we live in a crowded cat neighbourhood, so many cats try to come into our house to eat our cat's food and generally invade her space etc

so we got this cat flap as it reads pet chips, and can be programmed for a given one (actually, a bit like your WiFi AP, for which see more later, it can store up to 30 cats' RF-IDs - jolly good, so far).

So then the cat flap goes wrong (starts running batteries flat every day - normally they last nearly a year)....so we go on the company website, and they have a neat diagnostic tool, and we run through it and they say it needs replacing (the smart flap, not the cat:-), and we enter the serial number (of the flap, not the cat) and they say yes, that is still under warranty, and they send us a replacement (very smooth service indeed - arrives next day!).

so we install replacement asap as I am getting fed up with old one using up so many batteries, but I am in a hurry to get to work, so I put the cat flap in the default learning mode, which is that it flashes its little light once a second until the first cat goes through, at which point it stops learning and only lets that particular cat-id in/out.

so i get to work and there's a frantic phone call, and someone tells me some other cat has come into the house first, and eaten our cat's food, and now, only it can get in & out and our cat is stuck in. oops.

so you have to ask: how did the alien cat know to try just then? I mean, we know which cat it is, and it's lived around here for 5 years, and it must know it couldn't get in through the old cat flap, so what told it that there was a new one? cunning, eh.

two things - 1. there is a different learning mode which only leaves a 10 second window, but you have to have a tractable cat that will oblige and train the flap on demand - hard to do. 2. there isn't a way to "migrate" the old learned cat IDs from an old flap to a new one (the way you migrate your contacts list from an old phone to a new one), which would be neat, especially if you had 30 cats! waiting to train all of them could be like, errrrrr, herding cats :-)
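what that missing migration feature might look like - a purely hypothetical sketch (sureflap exposes no such API; the class, function names and chip number here are all invented):

```python
# Hypothetical "flap migration" tool: copy the learned RF-ID whitelist
# from an old cat flap to its replacement, like moving a contacts list.
# The CatFlap class and its 30-entry limit are invented for illustration.
class CatFlap:
    MAX_IDS = 30

    def __init__(self):
        self.allowed = []          # learned pet-chip RF-IDs

    def learn(self, rfid):
        if len(self.allowed) >= self.MAX_IDS:
            raise ValueError("flap memory full")
        if rfid not in self.allowed:
            self.allowed.append(rfid)

def migrate(old, new):
    """Copy every learned ID across - no herding of cats required."""
    for rfid in old.allowed:
        new.learn(rfid)

old_flap, new_flap = CatFlap(), CatFlap()
old_flap.learn("985-1122334455")   # made-up chip number
migrate(old_flap, new_flap)
print(new_flap.allowed)
```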

on the other hand, alien cats will have it purrrr0wned in 9 1/2 seconds!

Friday, May 27, 2016

credentials, careers and punctuated equilibrium

life is like a sequence of flights where there's an exciting (and unnerving) takeoff (often preceded by stressful and boring waits) followed by the moment you break through the clouds, and the plane levels off in the light, and coasts. the metaphor seems to fit school, college, job changes, partners, kids, deaths/bereavements in family, etc

so purely from a work perspective, this applies to some research projects i've done.....

most of the 1980s, we were building/measuring/optimising the basic internet (both on paper, in simulation, and in real code and networks) - culminating in multicast, tcp congestion control, satellite access (in 1988), which smarter people at the other end of the net wanted to test, so we were happy to be on this end of those tests....

then in the 1990s we were doing multicast - both applications (games, vr, and most interesting, Reuters realtime share trading network), and realtime multimedia (Internet TV - what became the main way AT&T, Telefonica and Virgin/NTL built their TV streaming service; and internet telephony/conferencing-  with video, audio, shared whiteboards - etc - what became skype, webex, etc).

then in the 2000s, we were doing opportunistic networking (community mesh, also) + cloud + social media analytics....how well does a kickstarter campaign work? how do people find or follow unbiased news on twitter etc.

now what? I guess it's either data science (inferring latent variables and models) or internet of things, or both, or neither....

one underpinning theme is decentralization - the early internet was, and cloud was meant to be - so now we're revisiting both the wireless net and the cloud to see if we can make them work better without centralization and loss of privacy. See this talk for why&how

Oh yes, why "punctuated equilibrium"? because basically that's another metaphor for what happens in evolution, applied to ideas - a change of environment leads to speciation; selection/crossover leads to new ideas and refinement. next....

Wednesday, May 18, 2016

Every Day Data Science Challenges

1 Guitar Strings

Different strings break at different rates. You can buy them singly or in sets of 6 (occasionally with a spare top E) - really what you want is just-in-time delivery of a new set with a distribution of strings (EBGDAE) that matches the wear/tear rate for you, your guitar (classical, flamenco, acoustic, electric etc) and the tone/newness you like - this could be crowdsourced by instrumenting tuning apps on phones, which would notice when you tune from way below (e.g. more than a 5th below the right note for that string probably indicates a new string being put on) -

The statistics could be aggregated, and classes of users found, and then companies (like my fave ) could build orders for you -
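a minimal sketch of that detection heuristic, assuming the tuning app reports the starting pitch of each tuning session in Hz (the function names and the exact 7-semitone threshold are illustrative):

```python
import math

# Detect "probably a fresh string": the tuning session started more
# than a perfect 5th (7 semitones) below the target pitch.
# Interval in semitones between two frequencies: 12 * log2(f_hi / f_lo).
TARGETS_HZ = {"E2": 82.41, "A2": 110.0, "D3": 146.83,
              "G3": 196.0, "B3": 246.94, "E4": 329.63}

def semitones_below(target_hz, start_hz):
    return 12 * math.log2(target_hz / start_hz)

def looks_like_new_string(string_name, start_hz, threshold=7):
    return semitones_below(TARGETS_HZ[string_name], start_hz) > threshold

# a slack, just-fitted top E starting near 180 Hz vs a slightly flat one
print(looks_like_new_string("E4", 180.0))   # → True
print(looks_like_new_string("E4", 320.0))   # → False
```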

2 Bicycle Wheel Spokes

I've lost 4 spokes over the last 5 months cycling in Cambridge - probably, they went on the appalling potholes on Station Road, or the tree roots across Burrell's Walk - wouldn't it be nice to know where these occurred, so I could report them to the council (and get money:-)

This could easily be done with accelerometers in smart phones....and GPS - look for rapid up/down movement - then afterwards (when a spoke has gone) you should be able to find the periodic wave of the bike as the wheel is now elliptical....
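a sketch of the first half of that idea - flag GPS points where vertical acceleration spikes past a threshold (the 3g cutoff and the sample format are assumptions, not from any real app):

```python
# Flag candidate pothole locations: samples where vertical acceleration
# jumps well beyond gravity. Each sample: (lat, lon, vertical_accel_m_s2).
# The 3g threshold is a guess - a real app would calibrate per phone mount.
G = 9.81

def pothole_candidates(samples, threshold=3 * G):
    return [(lat, lon) for lat, lon, accel in samples
            if abs(accel) > threshold]

ride = [
    (52.1936, 0.1372, G),          # smooth tarmac
    (52.1940, 0.1360, 4.2 * G),    # big jolt - report this one
    (52.1945, 0.1350, 1.3 * G),    # normal bump
]
print(pothole_candidates(ride))    # → [(52.194, 0.136)]
```

the second half (spotting the post-breakage elliptical wheel) would look for a periodic component at the wheel's rotation frequency instead of a one-off spike.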

Thursday, April 07, 2016

Miro's Law


To his excellency, High Admiral, 1st Fleet, Third Arm Galaxy 74434
From Captain Moore, First Contact Team, outer quadrant 2,
planet three, yellow star 17.

Your Excellency,

I am writing to report on the results of the strange mission we have undertaken exploring the deserted planet full of wonders known by its former inhabitants, as far as we can tell, as Silt or perhaps Clay - our translators are working flat out to improve on our understanding as there is so much potentially to be gained from their technology and civilization.

We have uncovered a most exciting cache of documents which may finally explain the departure of the beings who constructed these wonderful buildings and devices. Everywhere we look there are suitable perches on tall poles, connected by long nesting strings. Marching across the countryside between what we believe to be their latrine sites are long flat landing strips, where competition for suitability for breeding stock amongst our warriors would be ideal. Enough of the marvels. Back to the documents, which appear to be from a savant, desperate to solve some crisis that has struck their ecosystem....he writes

"I am sending you this letter to ask for your advice. As I travel often to far away lands, and present my work at conferences, I am inevitably showered with gifts, and amongst these is always at least one ball point pen. Since we developed the fishnet" (we're not sure if that is the right term) "I have had almost no use for these gadgets, and have steadily been accumulating them in killing jars" (again, not clear, but it seems that the 'jar' is something to preserve things in, or at least that much we can tell). "Now, I am out of jars, and am unable to store sufficient foods for the winter" (now you see why we are fairly sure of our translation).

"Hence I am writing to all my fellow savants, to ask if they have any idea how we can solve this problem. The number of ball points is growing hyper-exponentially, and threatens the whole of the Fellowship of the Royal Society, and so we realise that we have to reach out to our cousins across the seas for help

I remain yrs, etc etc

p.s. I enclose a pen for your use in response"


Your excellency, there are many responses, but most echo the concern, and dare I say despair at the situation. Finally, however, after much work, it seems that one amongst these giants of intellect proposes a possible solution, for it is the last letter, and in it are many strange symbols which resemble our own formalism for hyperspace drive, and yet appear to arrive at a simpler solution....the letter concludes

"and so we suddenly understood that if we could figure out how to channel the gravity waves just right, the spaghettification phenomenon will allow even the largest of our fellow humans to fit down the inter-dimensional tunnel formed from the tubular casing of the ballpoint, through the frames, and into the landscapes that Miró drew, it seems, from true life, rather than, as critics of the day said, from his dreams. the rest can easily be worked out from this sketch...

I leave now, as you know; however, I enclose your last pen, which you will find sufficiently charged to allow you to join us in Blue II should you wish, although as you are of Hungarian origin, you may resonate with the richer Mikrokosmos..."

So, your excellency,
It seems that the "humen" of "mud" were finally able to lose enough weight, through some fantastic new plan, to soar to other dimensions.....we have yet to complete the proof from the sketch mentioned above, but I hope that we can report to you on this soon,

I remain, as ever, your nest-issue of the fourth degree, captain Moore...

captain etc

p.s. I enclose a pristine unused biro pen for your perusal, and who knows, perhaps use....

Sunday, February 28, 2016

Opening up the Billion Sided Market for our IoT data.

In the HAT project, we came up with the idea of starting a data exchange for all of us to exploit our data for fun and profit.

There are several important innovations we are bringing to the IoT world:

  • Multi-sided market - we are all now used to the two-sided markets of the smart phone & the cloud - we get apps and services for "free", in reality trading data about ourselves (wishes from searches, preferences from likes, places from location checkins, etc etc). However, the market is heavily tipped in favour of the large cloud providers, and the user has little knowledge or control over her data, and in particular very little view of its use and value. The HAT changes all that by providing a hub for each user, with storage, processing and interfaces for access by other parties, but with visibility, control and above all valuation for the data.
  • Democratised data - HAT providers store the data and provide access, so we need a marketplace for the valuation - an exchange, where bidders can openly establish a price. This could be at a fine or coarse grain - for example, usage of utilities (power, water etc) is typically interesting to service providers, but aside from billing, fine grain use is really only interesting to the actual consumer in their home or office. Alternatively, usage information about retail goods could be traded directly with retailers or even wholesalers for discounts, loyalty points, or money, and can include preferences for really accurately targeted advertisements in exchange for further discounts or e-cash.
  • Freedom - freedom to switch hub, to choose aggregators who have a better deal, or provide stronger service guarantees, is a given  - the large number of HATs is trivially deployed and scaled out in today's cloud based world. This encourages innovation in HAT technology itself. The symmetry of the business relationships allows this dynamic, in contrast to the asymmetric power wielded by the centralised services of the last fifteen years.
  • Silo Busting - the IoT world is notoriously not an Internet of Things, but a hodge-podge of many different services, overlaid on the internet and the cloud, but not in any way connected to each other. The HAT changes that by creating a collection of places where data from multiple worlds can be integrated by new applications and new customers from any of the millions of sides of the new market. We are strongly technology agnostic when it comes to IoT at the "lower level" - of course there are good reasons for different systems to work in different ways. We break open the silos by allowing user-centered integration of data. It's about you, so you control it, whatever it is. Cosmetics, entertainment, clothes, energy, well-being, you name it. Think of the value being missed by existing isolated systems when they cannot put 1+1+1 together, but can only see how single values (kilowatt hours, litres, meters) increase over time, instead of being able to combine together information with meaning! A space for a million apps for combining your data - more innovation, driven by new value made out of new joins across the seams of the legacy disjointed IoT world.
  • Privacy Protection - We really care about our privacy. The legacy cloud systems today (your social media, web mail, search, travel portal usage) currently do a half-baked job on this. When we take far more personal information into the HAT, it is essential we offer much, much stronger assurances, applying the very best practice in technology and also written into the terms & conditions in plain language. If one HAT doesn't get it right, it is easy to move to another. This enables an ecosystem with constantly improving, transparent control over data visibility - once again, another dimension on which to innovate.
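As a sketch of the "Democratised data" exchange above: a minimal uniform-price clearing rule, where buyers of data bundles post bids and HAT owners post asks. The function name, the midpoint pricing rule, and the one-bundle-per-order simplification are all illustrative assumptions, not the HAT design itself.

```python
# Hypothetical single-good data exchange: bidders (data buyers) and HAT
# owners (data sellers) post limit prices; the market clears every pair
# whose bid meets the ask, at the midpoint of the last crossing pair.

def clear_market(bids, asks):
    """bids/asks: lists of prices per data bundle. Returns (price, volume)."""
    bids = sorted(bids, reverse=True)   # highest willingness-to-pay first
    asks = sorted(asks)                 # cheapest sellers first
    volume, price = 0, None
    for b, a in zip(bids, asks):
        if b >= a:                      # this buyer/seller pair can trade
            volume += 1
            price = (b + a) / 2         # midpoint as the clearing price
        else:
            break                       # no further pairs can cross
    return price, volume
```

Even this toy version makes the post's point concrete: the price is established openly by the crossing of bids and asks, not dictated by a central platform.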

Monday, February 15, 2016

Zika App idea

Back in the day, during the H1N1 epidemic, we did the FluPhone app to track people's encounters via phone proximity, using, say, bluetooth (one could also use GPS tracking on the phone, or even call data records with cell phone company cooperation, if you want less accuracy). The idea was to extract events when people self-reported with symptoms, and then (in a privacy preserving manner) extract the encounters between that individual and others (infected or not) in the population, and then to work out from this various epidemic parameters (susceptibility of different members of the population, infectiousness, recovery rate, asymptomatic carriers/herd immunity levels in segments of the population, etc etc), as well as possibly nailing elements of the vector....

So with the current Zika virus, it is pretty clear that it is spread by a particular mosquito type (the same as spreads Dengue Fever).

So we could take the app described above (and its reporting infrastructure) and
add one very simple thing - if the phone app turns on the mike, you can tell from sound whether one of these little beasties is near you - wing beats have a characteristic frequency which is within the audio range and sensitivity of the human ear, and certainly of the (usually better) microphone/audio system on a phone - in particular, the sound of the female Aedes aegypti mosquito, which is the one you care about not being bitten by in terms of Zika.
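As a hedged sketch of the acoustic idea: look for a prominent spectral peak in the wingbeat band reported for female Aedes aegypti (very roughly 400-700 Hz). The band limits, the threshold ratio, and the function name are illustrative assumptions, not calibrated values - a real app would need field-tuned parameters and noise handling.

```python
# Toy wingbeat detector: does a mono audio buffer contain a dominant
# tone in an assumed mosquito-wingbeat band? Parameters are guesses.
import numpy as np

def wingbeat_detected(samples, rate, lo=400.0, hi=700.0, ratio=5.0):
    # window to reduce spectral leakage, then take the magnitude spectrum
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    band = (freqs >= lo) & (freqs <= hi)
    if not band.any():
        return False
    peak = spectrum[band].max()              # strongest tone in the band
    background = np.median(spectrum) + 1e-12  # robust noise-floor estimate
    return peak > ratio * background          # prominent => "buzz" present
```

On a phone one would run this over short sliding windows of mic input; the FFT over a one-second buffer is well within a handset's budget.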

If there were several people running such an app in the same location, you might even tell roughly where the mosquito was and avoid it (though that's a bit fanciful).

At least, however, you'd be able to look at the incidence of people being co-located with mosquitos of the right type, and the infection rate. And possibly (over time) look at the spread caused by an uninfected mosquito biting an infected person....thus
mosquito -> person -> mosquito ->person

of course, the same app might also tell you of cases of person->person transmission where there's no mosquito detected...which would also be useful data for epidemiologists

A thought.

Tuesday, February 09, 2016

panic, moi?

So there's this great new report from the Berkman Center about the worries various governments have that the technology we are finally starting to make use of to protect our privacy may also mean that "bad guys" can get away without being caught.

It is deeply ironic that there's precious little evidence that having untrammelled access to everyone's Internet data for the last 20 years has done a single thing to prevent one terrorist death. It is also ironic that when there was access to encrypted data during WWII, from Station X (Bletchley, breaking the Enigma and its variations etc etc), it was not used to prevent Atlantic shipping from being sunk by U-boats, as that would have given away the fact that the allies knew where the subs were (i.e. had likely broken all the codes). It was finally "used" in knowing that the Germans did not know where the D-Day landings were to be. This proved useful (although not necessarily decisive) in winning/ending the second world war.

However, note interestingly that spotter planes could often see U-boats surface, and it was the location of the sub when it sent an encrypted report (aka "meta-data") that let the Turing folks break the code the 2nd time. There's no evidence that the NSA knew about al-Qaeda before 9/11, or that the Spanish, UK and French had any idea about the Madrid, London or Paris terrorists ahead of time. If they did, and didn't say because it would "reveal" their capability, in a post-Snowden era, this is just plain stupid, actually criminal. Given several events have happened after Snowden, and there's precious little evidence the bad guys used much more than basic comms (SMS, instant messaging) then, this is evidence that the security apparatus is not fit-for-purpose.

Thus, the report above is right about meta-data (what's sometimes called communications data, as opposed to content, or "control" as opposed to "data").

Interestingly, I was talking to some lay folks recently about what the police do if they find someone unconscious (or worse) with no ID, but a smart phone, and that smart phone is locked (and, in a modern iPhone or Android, encrypted). So
1/ If you have an ICE ("In Case of Emergency") configured, it can be called from a locked screen on an iPhone, and you can configure android the same if you want.
2/ The phone company can work out what the IMEI and number of the phone is from the location, and from that could give the police a list of caller and callee IDs so they could try a few till they get someone...plus the account information would likely give name/address/bank info.
3/ If the phone is backed up in iCloud, it's quite likely the backup isn't encrypted

All of this could also be done with someone "of interest" who is perfectly conscious, but unaware:-)

So there. Fire the NSA and GCHQ and get someone in who has a clue.

Monday, January 25, 2016

blockchain for gun control

so distributed ledger technology is all the rage in some government circles. while Bitcoin, as the exemplar of the use of the technology for an electronic replacement for cash and credit cards, has its detractors (and they are mostly not wrong), the underlying system allows one to track the transaction history associated with a physical object - one of the UK government's use cases in the report linked above is the idea of being able to avoid buying "blood diamonds".

so how about we propose using this for arms control (everything from nukes, to hand guns, and ammo)? there are ways to do this even without putting "smarts" in the gun (ballistics can often match gun and ammo to each other in any case, and one can move to more careful signatures easily)...
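The core of such tracking is just an append-only, hash-linked provenance record per item. A minimal sketch (without the consensus and distribution machinery of a real ledger, and with hypothetical field names) might look like:

```python
# Toy hash-chained provenance record for a physical item (e.g. a firearm
# identified by its ballistic signature). Each transfer links to the
# previous record's hash, so any tampering breaks the chain on verify().
import hashlib, json

def record(prev_hash, item_id, holder):
    """Append one transfer: item_id passes to holder, chained to prev_hash."""
    body = {"prev": prev_hash, "item": item_id, "holder": holder}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """Check every link points at its predecessor and every hash is intact."""
    prev = "genesis"
    for rec in chain:
        body = {"prev": rec["prev"], "item": rec["item"],
                "holder": rec["holder"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

The point of the full distributed version is that no single registrar can quietly rewrite the history - here, any edit to an old record is at least detectable by anyone holding the chain.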

then one could start to look at liability, i.e. people that own weapons would have to take responsibility for a change.

Thursday, January 07, 2016

investigatory ploughsharing bill - scrambling for safety

for a thorough report on today's Scrambling for Safety 2016 debate, it's hard to beat George Danezis's blog - one thing I was going to ask about was the really broken part of the bill, which prevents any discussion between a service provider and the agency that serves a warrant on them for interception (whether a standard surveillance or a bulk one, or interference on a device or a broad spectrum of devices).

I realize that some level of stealth is, by definition, needed during the surveillance - however the world is rapidly evolving, and it is clear that operators and service providers are at the bleeding edge and are able to offer (and do, in practice, under today's laws in the UK) advice on a request - e.g. no, you don't want that IP address, you want this URL prefix, as that's a load balancer/VM, or a NATed device that changes, etc etc. In my example question (no, you don't want to run interference on that device, as it isn't just a routine user's iPad, it's their Tesla dashboard, and if you weaken the random number generator in the OS on that device, you open it up to hackers who will crash the car), not only is it obvious the security and police agencies don't have expertise yet in the area, but we need a cooperatively evolvable law. In enacting this law (the first in 500 years to admit that agencies need these powers, but under legal controls) we need to make sure it isn't the last law made in the area either - just as the "Internet Connection Record" is meaningless in the world today, so the interference model is extremely dangerous in the IoT space, where there are currently more devices that are not an end-user's comms gadget (==phone/skype) than are - pretty soon, there will be 100s or 1000s of devices - monitoring these is mostly a waste of resources (more haystacks to not find needles in) - interfering with these devices (e.g. pacemakers, car brakes, traffic lights) is incredibly dangerous - [footnote...]

proportionality requires risk assessment - "collateral damage" that is a death because interference on a device causes a car crash or a heart failure is not assessable today. it may be one day, but I posit that it is not an acceptable risk level for gleaning a little bit more sigint that probably won't be acted on anyhow. Basically, this blows out of the water any fig leaf of proportionality, unless there is a wholly different way to manage (transparently) the codes of practice, in a way that future-proofs (actually makes fit for purpose for today's internet) this dodgy draft bill.

footnote - let's not forget algorithmic lawyers - when the music biz wanted to chill the p2p file sharing world, they started using s/w that generated letters threatening to disconnect users from their ISP - one fabulous case ended up with a tech guy defending himself in court, because the IP address the lawyers' s/w detected as allegedly uploading music in breach of copyright was his HP laser printer. doh. if they can get that wrong, then the spooks' software can and will confuse a criminal's phone with an innocent ("collateral damage") bystander's auto-defibrillator or internet enabled insulin pump.

Tuesday, January 05, 2016

Will we ever fix that last s/w (h/w) security vulnerability?

A recent talk by Joanna Rutkowska sparked a discussion about whether the number of vulnerabilities is potentially infinite, or whether the cost and/or value of exploiting and/or fixing them is slowly increasing (or decreasing), or (thanks to Markus Kuhn and others) whether it is cyclic, as phases of technological innovation wash up and down the shores of human society....

so my take - we spend ages in the OS community trying (as per the talk) to nail down the smallest piece of the trusted tiny center of the kernel (and talk to the hardware people about it very closely - even modifying their designs), so that the attack surface is minimized - including, as you say, improved tools and techniques (type-safe software, fuzzers, verifiers etc etc)...

then some skunk-works thing from the h/w side comes along and changes the whole game (in terms of complexity to start with, but also in terms of massively opening up the attack space) - usually it's coz of some genuine user demand for something faster/cleverer (as per the talk, add in GPUs, add in smarter NICs with offloading, add in multicore, add in more instructions for graphics, even for security itself!)

another example of this can be seen on the net - since well before the current scandals (back in the 1990s) we've been trying to batten down the hatches everywhere with DNS, BGP and end-to-end crypto (and now betterer DNSSEC, better certificate ideas, better router-router systemic ways to prevent problems, better e2e crypto (c.f. tcpcrypt) etc)

and then some bozo comes along and re-jigs the entire mobile phone net to be IP based (but with lots of devilish little changes)

then some mega-bozo comes and puts a raspberry pi in every thing that has a moving part, and connects that to the interweb (and builds a new stack with CoAP and IPv6 and 6LoWPAN/ZigBee so we have no idea what new sneaky things there are in there)...

then some dolt comes and builds million-core data centers and modifies the entire stack and routing system coz it doesn't scale to their needs....so we don't know what new corner cases have now appeared on the massive geodesic (no longer a nice shiny smooth, hard thing)

and we have to start a l l   o v e r   a g a i n
thrice.

It's like you build defences around your big city with walled gardens and gated communities, and someone comes and builds a massive shanty town right outside, a favela, which you need, coz, after all, someone has to come and clean the floors and make your tea and take out the trash...oops


Sunday, December 20, 2015

Disasters bring out the best and the worst in people


I've been reading about disasters for a few years now.
As a result of friends struggling to let all their families know they were ok in the tsunami in South East Asia a few years back, we embarked on the Haggle opportunistic networking project, and more recently, partly fuelled by other problems in society including the current massive movement of refugees from the middle east, we instigated n4d, the networking for development lab, in Cambridge, with many partners around the world, and leverage via the Internet Research Task Force's Global Access to the Internet for All (GAIA) activity.

Back at the beginning, I read this fine book about how people behave remarkably altruistically during disasters - that is, until the first responders arrive (typically, 72 hours later) - which made me quite optimistic about our efforts:
A Paradise built in Hell

However, more recently I've read this account of the neo-liberal industrial-military complex way of engaging, which makes for much more depressing prognostication:

Disaster Capitalism

(Contrast Haiti with Cuba just for a moment. Closer to home, note the description of private security forces ("we're not mercenaries" and "we're only here for the money" occur multiple times in the same irony-free breath), or look at the imposition of austerity on Greece, where much European refugee money goes to non-Greek security firms to run camps for Syrians and others arriving there before moving on to Germany - the place that needs them for cheap menial labour, but which imposes restrictions on what the Greek government can do, stopping employment for Greek nationals picking up again.) Grrrrr....

I'm not sure how to regain my optimism (or even sanity) but am tempted to re-target Mao's slogan Combat (Neo-)Liberalism sometime soon. Oddly enough, today someone pointed me at this excellent blog on insurrectionist civics in an age of mistrust which might help

Saturday, November 07, 2015

Review of "The tools and techniques of the adversarial reviewer"

This is my review of the paper
"How to review a paper \\ The tools and Techniques of the adversarial reviewer"
by Graham Cormode.

This paper appeared in the SIGMOD Record in December of 2008, but appears not to have gone through proper peer review. The paper suffers from at least three major problems:

Motive - is it really an interesting problem that reviewers are adversarial? Surely if reviewers colluded with the authors, we'd end up accepting all kinds of rubbish, swamping our already bursting filing cabinets and cloud storage resources further, and taking cycles away from us just when we could be updating our blog or commenting on someone's Facebook status.
Is the fact that a reviewer doesn't like a paper a problem? Do we know that objective knowledge and reasoning based on the actual facts are the best way to evaluate scholarly work? Has anyone tried random paper selection to see if it is better or worse?

Means - the paper doesn't provide evidence to support its own argument. While there is much anecdote, there are no data. The synthetic extracts from fictional reviewers are not evaluated quantitatively - e.g. to see which are more likely to lead to a paper rejection - for example, it is not even shown that accepted papers may perhaps have more adversarial reviews than rejected papers, which may attract mere "meh" commentary.

Missed Opportunity - the paper passes up a great opportunity to publish the names of the allegedly adversarial reviewers together with examples of their adverse reviews, to support the argumentation, and to allow other researchers to see if the results are reproducible, repeatable, and even useful.
For example, multiple programme committees could be constituted in parallel, and equipped with versions of reviewing software that modify reviews to include more or less adversarial comments. The outcomes of PC meetings could generate multiple conference events, and the quality of the different events compared. If particular outcomes can be determined to be superior, then the review process could subsequently be fully automated. It is only a small step from there to improving the automatic authoring of the papers themselves, and then the academic community will be relieved of a whole slew of irksome labour, and can get on with its real job.

Sunday, October 25, 2015

the thing is...

part un ..with the form factor of a hand, the thing can control any legacy actuator - possessed of several simple electromechanical motors, a set of fiber optics in the finger tips leading back to a camera in the raspberry pi controller at the wrist, and a light to look at stuff in the dark (extra-sensory perspective), the thing can run around your house and turn stuff on and off - it might be a bit scary (especially if you have several of them, and you see them going up stairs, or hanging off the old thermostat controller or VHS video or microwave) but, through online legacy device manuals, these things are the new universal remote control - instead of getting a remote for each device, even devices which have no digital/IR/WiFi/Bluetooth/Zigbee/Audio interface can now be managed via an app on your phone which talks to your family of things...

this is cheaper and more deployable than expensive new tech, more secure (modulo any recurrences of early "Hands of Orlac" bugs), and can deal with tricky situations (e.g. getting a spider out of the bath, unblocking the toilet) that most IoT engineers blanch at the thought of.

these things can turn your old dial phone into a cellular-like device (indeed, allow you to dial remotely using your cell phone), can take readings from utility meters and scan, OCR and email them to you, and then let you turn down the heating or turn up the gas as you can afford, without leaving the comfort of your internet cafe.

no cloud needed. no nudges or winks from a psychology/marketing department, just plain old wrist action and common sense.

it's true, there may be a rear-guard legal fight with the estate of Charles Addams, but we expect that to be handled easily

part deux - just as actuators should be made visible agents, so should sensors - every thing that contains a sensor should have a face - for example, any sensor should show a picture of the people currently looking at the output of the sensor - this is the moral equivalent of the facebook "show me as others see me" interface or the statistics on google's search dashboard...

this would give us the inverted panopticon (aka sousveillance) - this is not hard to do - indeed, a similar idea was applied for logging in to public wifi hotspots, where the router has a camera and display which you can use from your laptop in a cafe, to make sure (or at least, improve your confidence that) you are using the real router, not some hacker sitting nearby

this is also psychological. using information flow control and tracing, one could easily implement this - given the total number of people who should be able to see a sensor's output is small, this should actually be scalable too
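A minimal sketch of such a "sensor with a face": wrap the sensor so every read is logged, and expose the set of recent viewers - the inverted-panopticon dashboard in miniature. The class and method names here are hypothetical illustrations, not any real HAT or router API.

```python
# Toy sousveillant sensor: a proxy that records who reads the sensor and
# can report who is "currently watching" (read within a recent window).
import time

class SousveillantSensor:
    def __init__(self, read_fn, window=60.0):
        self._read_fn = read_fn   # underlying sensor read (e.g. temperature)
        self._window = window     # "currently watching" horizon, in seconds
        self._log = []            # (viewer, timestamp) pairs, append-only

    def read(self, viewer):
        """Every read is attributed: no anonymous access to the output."""
        self._log.append((viewer, time.time()))
        return self._read_fn()

    def current_viewers(self, now=None):
        """Who has looked recently - this is what the 'face' would display."""
        now = time.time() if now is None else now
        recent = {v for v, t in self._log if now - t <= self._window}
        return sorted(recent)
```

Because the set of legitimate viewers per sensor is small, keeping and displaying this log really is cheap, which is the scalability point made above.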

it could also be a service offered by HATDeX :-)

Friday, October 02, 2015

driverless cars uninsurable?

so some of the push to get autonomous vehicles out there appears to have support from the automotive insurance business.

this seems odd in the long run, for this kind of obvious reason:

driverless cars reduce the risk of accidents. when all vehicles are driverless, the risk (of accident, or "taking & driving away" theft) approaches zero. so why would you want or need insurance?

of course, there's the other thing - why would _you_ want a car either? the goal will be to maximise the use of  all vehicles so you'll just call up one via uber-uber-zip-zip

oh, and poor taxi drivers - bad enough to get ubered- but this will make them complete toast.

maybe a few chauffeur limo businesses will remain as "bespoke handicraft" signs of conspicuous consumption?

