Friday, October 02, 2015

driverless cars uninsurable?

so some of the push to get autonomous vehicles out there appears to have support from the automotive insurance business.

this seems odd in the long run, for this kind of obvious reason.

driverless cars reduce the risk of accidents. when all vehicles are driverless, the risk (of accident, or "taking & driving away" theft) approaches zero. so why would you want or need insurance?

of course, there's the other thing - why would _you_ want a car either? the goal will be to maximise the use of all vehicles, so you'll just call one up via uber-uber-zip-zip.

oh, and poor taxi drivers - bad enough to get ubered, but this will make them complete toast.

maybe a few chauffeur limo businesses will remain as "bespoke handicraft" signs of conspicuous consumption?

Sunday, August 30, 2015

technology embedding ethics

Having tried to confront ethics in internet experiments at a conference a couple of weeks ago, I'm trying to think about the way technologies like the internet embed ethics, and how this challenges us in everyday life, as researchers into communications, and as developers creating new technologies.

One obvious place where this shows up repeatedly is in the tension between free speech and hate crime, as well as between privacy and accountability. But it also shows up in the power imbalance between large organisations and the individual. Taking these in reverse order, then:

1/ The cloud is run by a small number of very big trans-national companies, for profit. The original technical reason for centralising resources (compared with the world wide web of the 1990s, or even early 2000s) was economic: scale gets price/performance advantages for servers, as management and control are run in one place for a lot of people. Prevention of various attacks (e.g. denial of service on small sites) also goes away as a problem, since few bad guys have the resources to launch a big attack on today's giant data centres (outside a couple of government agencies:)

However, once all that information is concentrated in one place, it affords opportunities which didn't make sense when it was decentralised. A for-profit company has no choice: it has to maximise shareholder value. This isn't mission creep, it's just capitalism.

Of course, putting all your eggs in one basket does have a downside when there is a successful hack (viz the ever-increasing list of Cablegates, Sonys, Ashley Madisons).

2/ Privacy is not assured in a network. when you communicate, at least one other party now knows what you said. In a computer network, your privacy is now in their hands, since everything you "send" to them is a copy. They now have a copy. It can be copied again. While the parties to your original communication may be accounted for (you think/hope), further copies are not. This is also a consequence of the near-zero cost of copying. In the days of illuminated manuscripts, only people with a monastery full of monks could copy stuff again. Nowadays, as downloaders know, anyone can re-upload. As YouTube demonstrated, you can even legitimise file sharing if you have enough power (see 1/:)

However, privacy is not dead. There are social norms which prevent us repeating all secrets to all and sundry. There are also legal situations where confidentiality is required (patient/doctor or client/lawyer, or just gf/bf but only if at least one of them is a celeb:)

What isn't clear, because of the way technology has made copying easy, is when you are trespassing on those social norms. The technology could be less neutral: it is fairly easy now to provide tools that look (locally, on your device) at the information you have and make suggestions about reasonable use of it (delete now for ever, do not copy, etc). In organisations that care about security (defence agencies), classification provides rules about where data can go. We could help everyday people by building better support for remembering what you should and should not do.
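A minimal sketch of what such a local tool might look like - all the labels, actions, and rules here are invented for illustration, by analogy with the classification schemes mentioned above:

```python
# Illustrative sketch (labels, actions, and rules all hypothetical):
# a local "handling rules" checker that, like a defence-agency
# classification scheme, suggests whether a proposed use of some data
# respects the norms attached to it.

RULES = {
    # label: set of actions that the label permits
    "public":       {"keep", "copy", "forward", "publish"},
    "personal":     {"keep", "copy"},
    "confidential": {"keep"},
    "ephemeral":    set(),   # i.e. "delete now, for ever"
}

def advise(label: str, action: str) -> str:
    """Return advice for performing `action` on data tagged `label`."""
    allowed = RULES.get(label, set())
    if action in allowed:
        return f"ok: '{action}' is within the norms for '{label}' data"
    return f"warning: '{action}' would trespass on the norms for '{label}' data"

print(advise("personal", "forward"))
print(advise("public", "publish"))
```

The point is that the check runs on your device, over your data, so the "support for remembering" does not itself become another centralised monitor.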

3/ Many systems provide platforms for public utterances - blogs, have-your-say, comment-is-free, etc etc - but also just being able to get a throw-away e-mail account at the drop of a hat.

Many are geared to ease of use, so don't need accountable sign-on, and don't check what is said or who it is said to.

Such systems allow trolling and other offence.

The problem is that the policing of what people can say, to whom, and where is generally regarded as a form of censorship, and in some countries, strongly opposed.
This conflates two things:
a) anonymity (which has its place for whistleblowers, or people in countries where free speech is anathema anyhow)
b) free speech.

In general, if you think you have a right to say something, you should be able to stand by what you say, hence being anonymous is not only not a requirement, it is actually a really weird thing to require. Experiments on requiring true names, or at least accountable identities, on some sites have resulted in visible reduction in abuse.

So what I'm trying to say here is that we have built an internet, web, cloud, which to a large degree is not fit for human purpose.

  • It is good for corporations to make money at your expense
  • It is excellent for us to live in a panopticon
  • It is a fine public space for people to shout abuse at others while wearing a mask.
Time to fix this.

Thursday, August 20, 2015

lost and not not forgotten

social scientists studying the right to be forgotten have forgotten why

Tuesday, July 21, 2015

sharing and hiding - e-books and crypto comms

two ideas for the day:
1/ when you're reading an ebook, people around you don't have the pleasure of seeing what you're reading as they do by seeing the cover of a paper book....

so e-books have wireless for download - why not (up to you to turn on/off) use a whispernet-style ad hoc message to broadcast to people nearby what you are currently looking at...?
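The beacon idea above could be as simple as a tiny UDP datagram that nearby devices listen for. A self-contained sketch (the message format and port number are invented; real ad hoc radio broadcast is looped back over localhost here so the example runs anywhere):

```python
# Sketch of the opt-in "what I'm reading" beacon. The payload format
# and port are hypothetical choices for this example; a real device
# would broadcast over its local radio network instead of loopback.
import json
import socket

PORT = 50007  # arbitrary port for the demo

def encode_beacon(title: str, author: str) -> bytes:
    """Pack the current book into a small datagram payload."""
    return json.dumps({"reading": title, "by": author}).encode("utf-8")

def decode_beacon(payload: bytes) -> dict:
    return json.loads(payload.decode("utf-8"))

# Listener (a nearby device) and sender (your e-reader, sharing ON).
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encode_beacon("Ulysses", "James Joyce"), ("127.0.0.1", PORT))

seen = decode_beacon(rx.recv(1024))
print(f"someone nearby is reading {seen['reading']} by {seen['by']}")
tx.close(); rx.close()
```

Because it is a connectionless broadcast, turning the feature off is just not sending - no account, no server, no tracking.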

2/ when you type an email that has a word like "attachment" in it, the mailer often notices if there isn't an attachment, and asks you if you meant to have one.
how about the mail app (or browser) could also look at the email and make a guess: "this looks private, don't you want to use the recipient's public key?"
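The nudge could be as crude as the attachment check it mimics. A sketch, where the keyword list and threshold are entirely made up for illustration:

```python
# Minimal sketch of the "this looks private" nudge. The keyword list
# and threshold are invented; a real mailer might use a trained
# classifier instead of a word list.

SENSITIVE_WORDS = {"password", "salary", "medical", "confidential",
                   "private", "diagnosis", "account number"}

def looks_private(body: str, threshold: int = 1) -> bool:
    """Crude heuristic: count sensitive keywords in the message body."""
    text = body.lower()
    hits = sum(1 for word in SENSITIVE_WORDS if word in text)
    return hits >= threshold

def compose_hook(body: str) -> str:
    """What the mail app would say just before you hit send."""
    if looks_private(body):
        return "this looks private - don't you want to use the recipient's public key?"
    return "send as plaintext"

print(compose_hook("Here is my new salary and account number."))
```

Like the attachment check, it only has to be right often enough to be useful - the sender still decides.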


Saturday, July 18, 2015

democracy and debate - what's wrong with vanguards etc

just listening to lots of talks by social scientists - when people talk about politics, they've spent a lot of time reading, digesting, thinking, synthesising and so on. so then they report their results back. what's the problem?

well, basically, TL;DR

the process has to be a process for all potentially involved parties - this is why syndicalist anarchism is the way forward. direct democracy has to engage, and the naive extension of representative democracy into direct democracy just burdens people with too many irrelevant discussions, so it is alienating in a worse way.

Friday, June 26, 2015

towards an antisocial contract


I've read David Kaye's report, which I very much like (for its clarity and precision, but I also happen to agree with it).
What is missing? A clear way to measure proportionality, and a social/legal framework to implement judgement of what is a proportional way to suspend crypto rights. So for example, the UK's (currently on hold) proposals to replace the European Human Rights Act with a UK Bill of Rights threatened to remove judges as the place where decisions are made, and replace them with politicians - Anderson's report makes it plain this is unacceptable (not just ethically, given the conflict of interest, but constitutionally).

However, there's a very real threat that without a transparent, fair, and proportional scheme to carry out lawful intercept, the advent of really good perfect forward secrecy mechanisms, and better key management in general, will basically mean there is no feasible intercept - and that would mean legitimate policing of really bad uses of the net (child porn, terrorist organisation, money laundering etc) would simply go completely unchecked. Government agencies need to be persuaded to reduce their mission creep (similar to commercial agencies' abuse of personal data) too: if citizens feel confident that monitoring is only done for good reason, and without weakening our crypto-systems, they may not feel the need to adopt unbreakable systems.

There's a secondary threat, which is that wholesale monitoring by many agencies will result in a massive breach of privacy when (inevitably) one of those agencies accidentally leaks a collection of monitoring data. This is the other lesson from Snowden: the NSA's internal security procedures were incompetent, in that one person should never have had access to 2 million documents. Modern cloud providers do not let their system administrators have such privilege, and nor should a security agency - and what better way to enforce this than to collect only necessary and sufficient data in the first place: the needle, not the whole haystack. This is the balancing act that needs to be created, in my view.
A sort of Anti-Social Contract c.f. always on
So maybe we need a new arbiter organisation - a distributed citizen v. government tie-breaker - not the police, business, the press, or the current national judiciary - a sort of 7th estate. It should, like the Internet itself, admit of no kings, just working codes of practice. It could manage rights to be forgotten too. It might need to employ some very smart social machines to cope with DDoS, edit wars, trolls, bot farms, etc etc.

Friday, June 19, 2015

science and policy #101

Three recent pieces of work in Cambridge have come to light.

1. scientists have been working on the basis for randomized trials, and realized that, of course, we must have some non-randomized trials, to check if the very basis for randomization as part of scientific empirical method is sound.
In a bold inter-disciplinary move, the scientists collaborated with the department of history and analyzed a number of UK and other policies for economics and military action, to see if one could find random (e.g. the 100 years) and non-random (e.g. the 1st world) wars, as well as economics (e.g. monetarism, and austerity). The results will be published very soon, but are currently under embargo, in case they disturb a current experiment with Greece.

2. Engineers in Cambridge have long wanted to build a railway to replace the ageing bus and taxi system. Working from earlier chinese experiments with mono-rails, and guided by the guided bus success, the proposal is now to take the modern electric line from Royston to King's Lynn, where customers are already used to the trains splitting at Cambridge, with one half going forward, for example, to Ely, and the other half, soon, to the Science Park. From next year, they hope to split the train laterally, with the left half going around the Pieces (Christ's, Parker's) and Commons (Midsummer etc), and the right half going in a long overhead loop, to Ely, allowing the eels much easier migration along their breeding paths in the fens. If the duo-mono-rail is a success, the engineers propose to extend the routes to Paris and Brussels, where onward mono-mono routes could serve ski resorts and some of the Belgian mountain regions where the finer beers are produced.

3. For some time now, a very ambitious project in CRASSH has been working on Conspiracy Theory. This work has involved linguists, computer scientists, taxi drivers and publicans, and has recently yielded a breakthrough. A new tool has been built that can detect conspiracy theories with a false positive rate of 2% and a false negative rate of 3%. The method is based on a mix of Bayes and various NLP clustering algorithms. Currently the tool is part of a possible startup, and venture capitalists are clamouring to fund the work. The business case is unclear as yet, and there have been some suggestions that at least one major journalism organisation may have prior art, although scientists suggest that their conspiracy generator is based on different technology (followers of Chomsky will understand that recognition and generation are quite different linguistic machines). At least one government agency claims that they had built a system exactly like this in 1961, and that it correctly identified Cuba and Suez, but they could not reveal the technology for fear of showing potential national enemies how much more advanced the UK was than them. Security analysts have asked them to "put up or shut up", as this is not the first time that they have claimed to have approaches to their work that would save time and money, but have not deployed them because they would have, err, saved time and money and lives and red faces.
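For the curious, the Bayes half of such a detector is not mysterious. A toy sketch - the training snippets, labels, and smoothing constant are all made up, and no claims about the 2%/3% error rates follow from it:

```python
# Purely illustrative: a toy naive Bayes text classifier of the kind
# the detector described above might be built on. Training data and
# the smoothing vocabulary size are invented for this example.
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns word counts per label."""
    counts = {"conspiracy": Counter(), "other": Counter()}
    totals = Counter()
    for text, label in docs:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals, vocab_size=1000):
    """Pick the label with the highest log-posterior (equal priors)."""
    scores = {}
    for label in counts:
        score = math.log(0.5)
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a class
            p = (counts[label][word] + 1) / (totals[label] + vocab_size)
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

docs = [
    ("they are hiding the truth from us", "conspiracy"),
    ("the moon landing was staged by them", "conspiracy"),
    ("the train to ely departs at noon", "other"),
    ("the beer in belgium is rather fine", "other"),
]
counts, totals = train(docs)
print(classify("they staged the truth", counts, totals))  # → conspiracy
```

Recognition really is the easy direction; generation, as the text notes, is a different machine entirely.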

Meanwhile, CRASSH were not available for comment.
