Sunday, November 03, 2013

human machine improved collective intelligence....

[Background reading list:
warning- lots!!!]

1. I am very skeptical of some of the far-out machine
intelligence/singularity folks (Kurzweil et al) -

They hark back to the big AI errors of the 1960s,
and all the advances in real machine "intelligence"
that appear to be clever have been made on the back of
a) a lot of data and fast processors
b) some very simple mechanisms - e.g. Bayesian inference
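To make the simple-mechanisms point concrete, here's a toy Bayes'-rule update - a hypothetical one-word spam classifier with made-up probabilities, nothing to do with any production system:

```python
# Toy Bayesian update: classify a message as spam given one word,
# using Bayes' rule  P(spam|word) = P(word|spam) P(spam) / P(word).
def posterior_spam(p_word_given_spam, p_word_given_ham, p_spam):
    p_ham = 1.0 - p_spam
    p_word = p_word_given_spam * p_spam + p_word_given_ham * p_ham
    return p_word_given_spam * p_spam / p_word

# Hypothetical numbers: the word appears in 60% of spam, 5% of ham,
# and 20% of all mail is spam.
p = posterior_spam(0.6, 0.05, 0.2)
print(round(p, 3))  # 0.75
```

That's the whole trick: a one-line formula, and the cleverness lives in estimating the probabilities from a lot of data.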

Of course, there's some very clever algorithmic work
making big systems go fast -
just for example, Facebook runs around 3,000 interactive jobs
a day over their entire graph (1 billion users)
to explore various business questions - the tools
(data centers with a million cores,
map/reduce and Pregel-style highly
distributed/parallel or large-memory
processing frameworks)
are not like anything in the past,
but neither are they anything to do with AI,
nor do they exhibit any emergent properties we don't expect:)
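For anyone who hasn't met map/reduce, here's a minimal single-process sketch of the idiom (word counting); the real frameworks shard both phases across thousands of machines, which is the clever part:

```python
from collections import defaultdict

# Minimal map/reduce sketch: word count.
# map phase emits (key, value) pairs; reduce phase aggregates by key.
def map_phase(docs):
    for doc in docs:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the cat sat", "the dog sat"]
print(reduce_phase(map_phase(docs)))  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

The point is that both phases are embarrassingly parallel, which is engineering, not intelligence.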

2. In hybrid human/machine thinking,
such as we do now with big data in commerce
(Google, the aforesaid Facebook) and Big Science
(the LHC, astronomy, genomics, proteomics etc.)
there are plenty of cool things to do,
but they don't involve large groups of people -
rather, small numbers of skilled, smart people
with a LOT of silicon slaves...

3. So in the collective space, what do we have? 

Things like Twitter for news,
Wikipedia as a knowledge base,
Kickstarter for investment,
Liquid for democracy,
eBay for commerce, and so on -
these are emergent social thinking machines, it's true -
and they evolved/emerged out of web systems -
so what has changed since Vannevar Bush's seminal article,
As We May Think?

A bunch of things, really,
but they haven't been codified/captured very well...
chief among them the meta-behaviour constraints that have evolved to
control bad behaviour in online social worlds,
e.g. to reduce trolling,
help people defend against phishing and grooming,
and to damp down flame wars and so on -

IBM, back in the day, did a study of the
use of Lotus Notes at a lot of customer sites,
and ended up building some nice systems that,
with human help, reduced the
incidence of antisocial collapse:
people were allocated roles
(the "lightning conductor" was one role I liked -
someone who would take the heat when another user was
becoming abusive, like a proxy victim!);
studies of bulletin board use
(Usenet News, The WELL and so on)
in the 70s and 80s
showed similar roles evolving, albeit informally...

So Wikipedia now has lots of distributed controls
to prevent edit wars,
and Liquid and eBay have a bunch of heuristics
that do a lot of damage limitation.
These systems look a bit kludgy:
they evolve to meet needs;
they look a lot like immune systems;
and it looks like they work!
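As an illustration of just how simple these damage-limitation heuristics can be, here's a toy throttle in the spirit of Wikipedia's three-revert rule (the class name and threshold here are my own invention, not Wikipedia's implementation):

```python
from collections import defaultdict

# Toy damage-limitation heuristic: stop honouring reverts by an
# editor on a page once they exceed a threshold, in the spirit of
# Wikipedia's three-revert rule.
MAX_REVERTS = 3

class RevertThrottle:
    def __init__(self, max_reverts=MAX_REVERTS):
        self.max_reverts = max_reverts
        self.counts = defaultdict(int)  # (editor, page) -> reverts seen

    def allow(self, editor, page):
        """Return True if this revert is allowed, False if throttled."""
        key = (editor, page)
        if self.counts[key] >= self.max_reverts:
            return False
        self.counts[key] += 1
        return True

t = RevertThrottle()
print([t.allow("alice", "cats") for _ in range(5)])
# [True, True, True, False, False]
```

A dozen lines of bookkeeping, and an edit war runs out of fuel - which is exactly the immune-system flavour: crude, local, and effective.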

Systems like recommendation networks
and reputation systems (with positive and negative feedback)
which use strategy-proof algorithms (like PageRank)
seem promising, although Wikipedia
is interesting in that it doesn't use explicit
named authorship and reputation, so it's a lesson in
another approach that can work too.
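A sketch of the PageRank idea applied to reputation - iterating the damped random-surfer update over a made-up endorsement graph (who vouches for whom); the graph and names are purely illustrative:

```python
# Toy PageRank-style reputation: repeatedly redistribute each node's
# score to the nodes it endorses, with damping factor d.
def pagerank(links, d=0.85, iters=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:
                continue  # dangling node: its score simply leaks away
            share = d * rank[n] / len(outs)
            for m in outs:
                new[m] += share
        rank = new
    return rank

# Hypothetical graph: a and b endorse each other, both endorse c,
# and c endorses only a.
links = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a"]}
r = pagerank(links)
print(max(r, key=r.get))  # 'a'
```

The strategy-proof flavour comes from the recursion: an endorsement only carries weight if the endorser is itself well-endorsed, so creating sock-puppet voters buys you little.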

So these involve preventing the
collapse of group comms into chaos,
or domination by small vocal groups,
but they don't necessarily demonstrate improved intelligence
over traditional think tanks/meetings of minds
(the Royal Society, the US National Academy of Sciences,
and ad hoc groupings formed to solve particular problems -
e.g. NASA's moon mission, the IPCC,
DARPA's autonomous car challenge (precursor to Google's cars),
the LHC, the genome project, the search for an HIV vaccine, etc.).

I don't have anything to offer that solves the
problem of increasing intelligence above existing human levels,
but stopping groups becoming more stupid than their dumbest member
seems a good start.
Also, it depends on your goal -
if the goal is to have a society with
collective intelligence at the average human level,
but with buy-in because everyone is involved,
engaged and owns it, then that seems good enough...

Stuff where nanotech meets quantum computing meets the singularity
is not really relevant (in my opinion). All it does (for me)
is scale up the technology we have today. There's no indication
that it makes computational thinking easier.
It just means we can stay on the curve we are on, led by Moore's Law
and the other amazing engineering feats of improving performance that
computing has managed in storage and communications as well as processing.

By the way, a nice paper on the non-utopian use of Bitcoin by some
friends (and an ex-student) of mine:
and I am sure this is just a step in the arms race there...

We just finished a modest EU project in this area which might provide
a few more useful pointers (perhaps:)
