Saturday, April 13, 2024

social media conventions and aliens in the ether


Starting locally, I've noticed that there are many different conventions for how people use different social media platforms (social networks, email, microblogging, etc.).


At one extreme, some people DM me on Slack - this is annoying as, to save my sanity, I have turned off notifications on everything, and I look at different platforms at different frequencies - Slack, mostly, once a day, compared to, say, WhatsApp (and Signal and Matrix), at least once an hour. While I don't use Teams for messaging, I know people who do, but they are signed on for all their working hours, so that works OK for them.

At another extreme, some people only use a platform in broadcast mode, so an email list is flooded with "how do I leave this list" messages, or a WhatsApp group is flooded with "please stop sending your messages to everyone" messages.


Which leads me to the global problem: without interoperability, we have to select a channel for each mode of use, and there are going to be lacunae, or indeed black holes, and inter-galactic wastelands with no information at all.


Which leads me to the universal problem, and maybe one explanation for Fermi's Paradox - we are hearing from aliens in the ether all the time, but most of them are using broadcast (as are we), and what happens to the shared spectrum when everyone broadcasts all the time? You get a descent into pure noise - indeed, we can work out that lots of aliens are NOT using broadcast, otherwise we'd be subject to Olbers' paradox, which is to say, the sky would be (modulo quantum limits) white noise from all the interfering broadcasts.

A slightly more advanced alien civilisation might think "aha, broadcast - shared spectrum, we need to employ collision detection, or even better, collision avoidance" just like Ethernet and WiFi do already on our planet. However, a little more thought would suggest that the protocol for this might suffer from rather high latency when waiting for a "Clear to Send" response to a "Request to Send" message over the light years.  So obviously smart aliens would do one of three things:

  • frequency division multiplexing - each civilisation gets a specific RF band to use
  • space or code division multiplexing - we develop really good collimators or inter-stellar chipping sequences
  • cooperative relaying and power management - we place "cell towers" at convenient places (e.g. white holes and black holes) and then avoid interference by switching out of this universe (like cellular switching onto glass fiber networks, but in this case, interstellar wormholes).
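
To put a rough number on that "Clear to Send" latency problem, here is a toy back-of-the-envelope sketch (my own, with approximate distances) of how long the handshake alone would take:

    # The RTS/CTS handshake costs one round trip at the speed of light before any
    # data flows; over interstellar distances, that is the whole problem.

    def rts_cts_wait_years(distance_light_years: float) -> float:
        """Round-trip delay, in years, for a Request-to-Send / Clear-to-Send exchange."""
        return 2.0 * distance_light_years  # one year per light year, out and back

    for name, d_ly in [("Proxima Centauri", 4.25),
                       ("Galactic centre", 26_000),
                       ("Andromeda", 2_500_000)]:
        print(f"{name}: wait ~{rts_cts_wait_years(d_ly):,.0f} years for the Clear to Send")
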
The other thing these really smart aliens would do would be to prevent our wildly stupid RF from reaching them at all by clever filtering. A "really clever filter" is a very big Faraday cage, which could be built out of suitably designed dark matter. This is also why we don't see the white noise - we are in our own RF bubble. We are alone. All the clever people are on the other side of the barrier.

Sometimes they do visit us, but to avoid detection, they largely use obsolete social media platforms like MySpace and Orkut, where they can have a laugh.

Sunday, March 10, 2024

Witch Consumer Magazine, review of the leader boared top three LLMs "Conformité Ecologique" (the ubiquitous CE marque)

We analyzed the CE claims of the following three large languish models, with respect to four key metrics for the Ecologique, as agreed in European law, namely enthalpy, internet pollution (measured in LoCs - Libraries of Congress), bio-dediversification, and general contribution towards the heat death of the universe.

Currently, according to the boared, these are the top-of-the-heap in terms of hype-parameters:

  • The Faux Corporation's Pinocchio
  • Astravista's Libration
  • Sitting Duck's Nine Billion Dogma


We hired some prompt engineers to devise a suitably timely benchmark suite, and embedded the three systems in our whim tunnel, taking care to emulate all aspects of the open road to avoid any repeat of the folk's wagon farage.

Indeed, we used all three systems to design the whim tunnel, and compared the designs to within an inch of their lives until we were satisfied that this was a suitably level playing field on which to evaluate.


The benchmark suite will be made available later, but for now, suffice it to say that we were able to exceed the central limit theorem requirements, so our confidence is running high that the results are both meaningful and potentially explainable, but certainly not excusable.


Enthalpy

Pinocchio

Pinocchio ran very hot, both during training and during everyday use.

Libration

Libration was about half the temperature of Pinocchio

Dogma 

Roughly 12.332 times less than the next worst.


Pollution

Pinocchio

The Internet was worse off, after this tool was used, by approximately 3 LoCs.

Libration

Again, about half as bad.

Dogma

Was difficult to measure as the system never stabilised, but oscillated between getting worse and then better; however, the improvements were usually half the degradations.


De-diversification

Dogma

This was a shock - we expected better, but in fact the outcome was a really rapid removal of variance.

Libration 

Around half as bad as Dogma

Pinocchio

Very slightly less bad than Libration.


Entropy

Libration

Excess use of Libration could bring the heat death of the universe closer about 11 times faster than a herd of small children failing to tidy up their rooms.

Pinocchio

Absurdly only 3x better than Libration.

Dogma

Appeared to gain from the Poppins effect, and generally ended up tidier than before.


Some critics have pointed out that enthalpy and entropy are two sides of the same coin, and pollution is likely simply the inverse of de-diversification; nevertheless, we proceeded to evaluate all four in case we might later find otherwise.

In general, none of these products meets the threshold for a CE mark, and for your health, and sanity, we strongly recommend that you do not use any of them, especially if you are in the business of prediction. Next week, we will review a slew of probabilistic programming models, with a special emphasis on the cleanliness of the Metropolitan Hastings line.


Monday, February 26, 2024

Towards International Governance of AI

 I wonder what people are really thinking when they think of governance of Intelligence?

If we were considering human intelligence (which, by extension, we are) we had better tread carefully, especially when considering who owns it. The ability to reason, to create, to innovate is not really the same as anything else we have sought governance over -


  • nuclear weapons (test ban treaty, and the Pugwash convention)
  • spectrum allocation
  • orbits around earth
  • maritime & air traffic - fuels, tracking, control etc
  • recombinant DNA (Asilomar conference)
  • the weather (and interventions like geo-engineering, e.g. see the RS report on same)

What's similar about these, and what is different?

Well, we only have one go at each - there's a very countable human race, planet, sea, zombie apocalypse, climate emergency. We don't have time to muck about with variants of rules that apply to fungible material goods. We need something a tad more radical.

So how about this: a lot of AI is trained on public data (oxygen == the common crawl) - this is analogous to the robber barons who enclosed the commons, then rented out the land to farmers to graze their cattle on, land which used to be a free shared good...

A fix for this, and a way to re-align incentives, is to introduce a Piketty-style tax on the capital value of the AI. We could also just "re-nationalise" it, but typically most people don't believe state actors are good at managing things and prefer to have faith in the invisible hand - however, history shows that the invisible hand goes hand-in-glove with rich-get-richer, so with a tax on capital (and, as Piketty showed in great detail in Capital in the Twenty-First Century, it does not have to be a very high rate of tax to work), we can return the shared value of the AI to the common good.

A naive way to compute this tax might be to look at the data lakes the AI was trained on, although this may not all be available (since a lot of big AI companies add some secret sauce as well as free or appropriated ingredients) - so we can do much better by computing the entropy of the output of the AI.

A decent algorithm should produce very information-rich output compared to its size - e.g. a modern LLM with hundreds of billions of parameters should produce short sentences or images which are highly instructive - we can measure that, and tax the AI accordingly.
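
As a purely illustrative sketch (my own, not a worked-out policy instrument), one crude way to make "measure the information richness of the output" concrete is to see how compressible a sample of output is, relative to the capital (parameter count) behind it; how that number would map onto a tax rate is, of course, left open:

    import zlib

    def estimated_bits(sample: str) -> int:
        """Crude information estimate: size in bits of the deflate-compressed sample."""
        return 8 * len(zlib.compress(sample.encode("utf-8"), level=9))

    def information_richness(sample: str, parameter_count: float) -> float:
        """Estimated bits of output information per billion parameters of model capital (a made-up normalisation)."""
        return estimated_bits(sample) / (parameter_count / 1e9)

    # a terse, information-dense answer versus padded, repetitive boilerplate, for a 70B-parameter model
    print(information_richness("E = mc^2: mass and energy are interchangeable.", 70e9))
    print(information_richness("Great question! " * 20, 70e9))

Compressed size is obviously a crude proxy for information content (a real measure would want semantic novelty relative to the training corpus), but it points in the right direction: terse, surprising output scores higher per unit of capital than padded, repetitive boilerplate.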

This should also mitigate the tendency to seek data without agreement or consent. 

I realise this may sound like a tax on recording media (back in the day, there were campaigns claiming "home taping is killing music"), but I claim there's a difference here in terms of the over-claimed, over-hyped "value add" that the AI companies assert - the real value was in the oxygen, the public data, like birdsong or folk tunes, which should stay free or we die - and if we cannot make it free, I suggest we do the next best thing and tax the rich. Call me old-fashioned, but I think a capital-value Piketty tax to mitigate rentiers is actually a new idea, and might actually work. We could call it VAIT.

Sunday, February 18, 2024

Government Procurement of Open Systems Interoperability or Open Source - a lesson for Digital Public Infrastructure

40+ years ago the US and European countries devised a government procurement policy which was to require suppliers to conform to Open Systems Interconnection standards - this was a collection of documents that could be used in RFPs (requests for proposals) to ensure that vendors bidding for government contracts to supply communications equipment, software, systems and even infrastructure would comply with standards, so that the government could avoid certain pitfalls like lock-in and vendor monopolies arising in the communications sector.

It worked - we got the Internet - probably the world's first digital public infrastructure, provided both by public and private service providers, equipment and software vendors, and a great deal of open source software (and some hardware).

There's one review of how this evolved, back in 1990, that represents an interesting transition point: from what were international standards for interconnection provided by the UN-related organisations ISO and the ITU, to the Internet standards, which were just about to come to dominate real-world deployments. 1992 was a watershed point, when the US research funding agencies stopped funding IP infrastructure, and commercial ISPs very rapidly crystallised out of regional and national (and later, international) community-run networks (where the communities had been collaborations of research labs and universities funded by DARPA and NSF, or similar in Europe).

Why did the Internet standards replace the ISO/ITU standards as the favourites in government procurement? It is hard to prove this, but my take is that they were significantly different in one simple regard - the specifications were matched with open source implementations. From around the early 1980s, one example was Berkeley Unix, which included a rock-solid TCP/IP software stack, funded by DARPA (derived from one at BBN) and required to be open source so that others (universities, commerce and defense) could use and add to it as needed in the research programs of the 1980s, as actually happened. By 1992, just as the network went beyond government subsidy status, Berners-Lee released the first open source web server and browser (and specifications), and example sites boomed. Then we had a full-fledged ecosystem with operational experience, compelling applications, a business case for companies to join in to extend and make money, and a way for governments to take advantage of rapidly improving technology, falling prices, and a wide choice of providers.

So in a competing world, standards organisations are just another sector, and customers, including some of the biggest consumers, i.e. governments, can call the shots on who might win.

Now we face calls for Digital Public Infrastructures for other systems (open banking, digital identity being a cornerstone of that, but many others) and the question arises about how the governance should work for these.

So my conclusion from history is that we need open standards, we need government procurement to require interoperability (c.f. the European Digital Markets Act requirement), and we need open source exemplars for all components to keep all the parties honest.

I personally would like to go further - I think AI today exploits the open availability of huge swathes of data to create new knowledge and artefacts. This too should be open source, open access, and required to interoperate - LLMs for example could scale much better if they used common structures and intermediate model formats that admitted of federation of models (and could even do so with privacy of training data if needed)...

We don't want to end up with the multiple silos that we currently have in social media and messaging platforms, or indeed the ridiculous isolation between video conferencing apps that all work in browsers using WebRTC but don't work with each other. This can all be avoided by a little bit of tweaking of government procurement, and some nudging using the blunt instrument of Very Large Contracts :-)

Saturday, February 17, 2024

mandatory foley sounds

you know it was suggested that EVs, being so beautifully silent, should be required to make a bit of fake engine or tyre noise just so pedestrians and cyclists are aware they are there.

but what is far more urgent is that we need people carrying phones they are staring at to do the same (oh, ok, maybe not revving diesel or screeching rubber - maybe some other thing like belches, or farts, or other human-like sounds)....then if I'm cycling along, I know there's a stupid pedestrian who doesn't know I am there because they aren't looking before they step into the road.

the phone could also emit a radio beacon to warn EVs to slam the brakes on.

or we could just let darwin play out...


oh, thinking about this, we could also imagine that the reason aliens have not been in touch with earthlings in all the 100 years we've been beaming out radio to them is that it is entirely possible that any sufficiently advanced civilisation has forgotten where the unmute button is.

Monday, February 12, 2024

explainable versus interpretable

 This is my explanation of what I think XAI and Interpretable AI were and are - yours may differ:-)


XAI was an entire DARPA-funded program to take stuff (before the current gibberish hit the fan) like convolutional neural nets, and devise ways to trace just exactly how they worked.

Explainable AI has been somewhat eclipsed by interpretable AI, for the touchy-feely reason that the explanations that came out (e.g. using integrated gradients) were not accessible to lay people, even though they had made big inroads into shedding light inside the old classic "black box" AI. So a lot of the stuff we might use in (e.g.) medical imaging is actually amenable to giving not just an output (classification/prediction) but also which features in the input (e.g. X-ray, MRI scan etc.) were the ones that mattered, and indeed which labelled inputs were the specific instances of priors that led to the weights that led to the output.
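
For the curious, here is a minimal sketch of integrated gradients on a toy logistic-regression "model" where the gradient is analytic (my own illustration; the weights are made up, and real use on CNNs for scans would get the gradients from autodiff):

    import numpy as np

    w = np.array([1.5, -2.0, 0.5])            # toy model weights (invented)

    def model(x):
        return 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid(w . x)

    def grad(x):
        y = model(x)
        return y * (1.0 - y) * w              # d model / dx for the toy model

    def integrated_gradients(x, baseline, steps=100):
        """Per-feature attribution: (x - baseline) times the average gradient along the straight path."""
        alphas = np.linspace(0.0, 1.0, steps)
        path = baseline + alphas[:, None] * (x - baseline)
        avg_grad = np.mean([grad(p) for p in path], axis=0)
        return (x - baseline) * avg_grad

    x = np.array([2.0, 1.0, -1.0])
    attr = integrated_gradients(x, baseline=np.zeros(3))
    # completeness check: attributions sum (approximately) to model(x) - model(baseline)
    print(attr, attr.sum(), model(x) - model(np.zeros(3)))
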

Interpretable AI is much more about counterfactuals, and showing from 20,000 feet how the AI can't have made a wrong decision about you because you're black, since the same input for a white person gives the same decision... i.e. it is narrative and post hoc, as opposed to mechanistic and built in.

It is this latter that is, of course, (predictably) gameable - the former techniques aren't, since they actually tell you how the thing works, and they are attractive for other reasons (they allow for reasoned sparsification of the AI's neural net to increase efficiency without loss of precision, and for improved uncertainty quantification, amongst other things an engineer might value)...

None of the post-DARPA XAI approaches (at least none that I know of) would scale to any kind of LLM (not even Mistral 7B, which is fairly modest in scale compared to GPT-4 and Gemini), so the chances of getting an actual explanation are close to zero. Given that they would struggle for similar reasons to deal with uncertainty quantification, the chances of them giving a reliable interpretation (i.e. narrative counterfactual reasoning) are not great. There are lots of superficial interpreters based around pre- and post-filters and random exploration of the state space via "prompt engineering" - I suspect these are about as useful as the old Oracle at Delphi ("if you cross that river, a great battle will be won"), but I would enjoy being proven wrong!

For a very good worked example of explainable AI, the DeepMind Moorfields retina-scan NN work is exemplary - and there are lots of others out there, including uses of the explanatory value to improve efficiency.

Sunday, February 04, 2024

standards and interoperable AI and the lesson from the early internet...

Back in the day (e.g. 1980), when we were deploying IP networks, there were a ton of other comms stacks around, from companies (DEC, IBM, Xerox etc) and from international standards orgs like the ITU (then CCITT - X.25 nets) and ISO (e.g. TP4/CLNP). They all went away because we wanted something that was a) open and free, including code and documentation...

and

b) worked on any system no matter who you bought it from, whether very small (nowadays, think Raspberry Pi etc) or very large (8000 cores, terabytes of RAM, loads of 100s-of-Gbps NICs etc), and 

c) co-existed in a highly federated, global scale system of systems.

So how come AI platforms can't be the same? We have some decent open source, but I don't see much in the way of interoperability right now, yet for a lot of global problems we would like to federate, at coarse grain/large scale - e.g. for healthcare or environmental models, or for energy/transportation - so we get the benefits, e.g. better precision/recall, longer prediction horizons, more explainability, and, indeed, more sustainable AI, at the very least, since we won't all be running our own silos doing the same training again and again.
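
At its most coarse-grained, "federate" could mean nothing fancier than FedAvg-style weight averaging across sites. A minimal sketch (sites, sizes and shapes are invented for illustration; federating real large models would first need the common intermediate model formats argued for here):

    import numpy as np

    def federated_average(site_weights, sample_counts):
        """Weighted average of locally trained model weights, weighted by local dataset size."""
        total = sum(sample_counts)
        return sum(w * (n / total) for w, n in zip(site_weights, sample_counts))

    # three sites (say, hospitals or grid operators) train locally and share only weights
    site_weights = [np.random.randn(4) for _ in range(3)]   # stand-ins for real model parameters
    site_sizes = [1_000, 5_000, 2_000]
    print(federated_average(site_weights, site_sizes))
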

We should have an IETF for AI and an Interop trade show for AI, and we should shun products and services that don't play - we could imagine an equivalent of what happened with the European and US GOSIP (Government Open Systems Interconnection Procurement) profiles - which evolved into "just buy Internet, you know it makes sense, and it should be the law".

Monday, January 29, 2024

Centralization, Decentralization, and Digital Public Infrastructures

with apologies to Mark Nottingham, https://www.rfc-editor.org/rfc/rfc9518.html

Through the Control/Power Lens

Governments have typically centralized control of the things that governments do - raising tax to fund provision of certain services: education, health, transportation, defence, and even, in the not too distant past, telecommunications. Decentralised government (e.g. syndicalism) has been rare. On the other hand, most governments in recent history have left domains outside of government to markets or communities, although not without some (perhaps limited) regulation or control of governance.

In the past, communities have built cooperative ventures (shared barns, shared savings and loans) and more recently community networks and power grids.

Through the Economic Lens

Markets often espouse competition where multiple providers offer equivalent products and services. Various models exist for central versus decentralised economies.

There is interaction between government and economy through regulation, especially when there is a threat of monopoly, or even just oligopoly, or of coercion: government versus companies with respect to making sure the market operates transparently, efficiently and fairly (see, later, the Feds v. Apple, and in the UK, the IPA v. GDPR).

Through law, government may also provide citizens with agency, representation, and redress. Of course, there are good and bad governments, and typically this shows up in terms of poor practice, or deliberate removal of rights (to agency or redress, e.g. concerning unfair treatment, including exclusion, etc.).

Governments may be good now, but bad later, or vice versa. It is not an accident that Germany has the strictest privacy laws in the world; it is a result of their past experience of East Germany under the Stasi. They are not so naive as to believe that that couldn't happen again one day (sadly).

Through the Technology Lens

The Internet is probably the best example of something that had been largely a decentralised system for decades.


  • Horizontal - services - interoperation/federation: e-mail, web, name spaces
  • Vertical - stacks - silos: cloud, social media, online shopping, entertainment

Horizontal systems are somewhat decentralised (or at least distributed) whilst in some informational sense, vertical systems are somewhat centralised.

Through the Information Lens

Where data is, is orthogonal to who can read it and who can alter it. Ownership and control depend on access, access control, and legibility. So whether I can get at my data, or yours, depends on my role and my privilege level; but whether I can then actually decode that data also depends on my having the right software.

At some level, we can expect most data today to be encrypted: at rest, during transfer, and even while being processed. Protection through access control is not sufficient, since there are mistakes, insider attacks and coercion. Software has vulnerabilities. Hence we employ keys, and encryption/decryption depends on having both the data and the relevant keys.

One step further, who has accessed the data, and who has been able to decode it, is part of auditability (who can see who can see - quis custodiet ipsos custodes? etc).

If the user controls the keys, they may not care too much where the data is (except for potential denial of access), since others copying the data will not be able to decode it. On the other hand, if the government keeps copies of the keys, then it can access any relevant data, whether it is held centrally or decentrally. Of course, if the government accesses my data on my computer, I may be aware of that (through audit trails), but that might not do me much good in the face of a "bad" government.
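
A tiny illustration of that point, using the off-the-shelf Fernet recipe from the Python cryptography package (just a sketch; a real identity system involves rather more than symmetric encryption):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # stays with the user
    box = Fernet(key)

    ciphertext = box.encrypt(b"my attributes: date of birth, address, ...")
    # the ciphertext can be held centrally, decentrally, or copied by anyone...
    print(ciphertext[:40])
    # ...but only the key holder can turn it back into data
    print(box.decrypt(ciphertext))
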

There are two separate aspects of identity systems where visibility of data matters, in terms of threats to citizens from bad actors: firstly, foundational ID provides linkability across surveillance of actions (voting, signing on to services, etc.), so exposes the individual's digital footprint to long-term analysis; secondly, functional ID includes particular attributes (age, gender, race, religion, licenses to operate vehicles, medical and academic qualifications, etc.), which offers opportunities to discriminate (treating groups preferentially, or excluding or reducing the rights of other groups). A bad actor doesn't need the whole government (or its service providers) to misbehave - just that systems are poorly designed so that insiders can exploit vulnerabilities. The perception of this possibility is enough to create distrust, and disengagement, which itself will disadvantage vulnerable groups in society more than privileged ones.

Through the Efficiency Lens

We can put all the data in the world in one place, or we can leave it where it was originally gathered. This is a choice that represents two points on a spectrum of centralized versus decentralized data. One can also copy the data to multiple places.

There are efficiency considerations in making this choice, which entail more energy, higher latency, lower resilience, worse attack surface,  and potential for catastrophic mistakes, when taking the centralised path. The decentralised path reduces these risks, but still requires one to consider copies for personal data resilience.

These choices are orthogonal to the access choices, which merely concern who has rights and keys, and where they keep those, not who holds data where.


Conclusions, regarding Alternative Solutions in the Digital Identity Space

A digital public infrastructure such as an identity system needs to be trusted (so people use it), and therefore considerations about whether the user base trust the government or not matter.

If we don't trust the government, we might choose a decentralised system, or at least a system with decentralised keys (like the Apple iCloud eco-system).

The question of whether there should be one provider, or six, or 10 billion is orthogonal to this trust, although it does impact resilience and latency, i.e. efficiency. If the keys are owned by users, then this impacts governments' ability to use identity data (attributes, and identity usage) to plan, whether for good or bad. That said, some privacy technology (e.g. FHE or MCS) combined with decentralised learning might allow non-privacy-invasive statistics to be gathered by a centralised agency (i.e. government) without actual access to individually identifying attributes. A good example of this was the Google Apple Exposure Notification system designed for use in digital contact tracing during Covid, which could have been adapted to offer statistical information (e.g. differentially private) if necessary (though it wasn't used that way in practice).
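
As a minimal sketch of the kind of non-privacy-invasive statistic meant here (my own toy, not the actual GAEN design): a count released via the Laplace mechanism, so the agency learns the aggregate but nothing reliable about any individual:

    import numpy as np

    def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
        """Laplace mechanism: one person joining or leaving changes the true count by at most `sensitivity`."""
        noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
        return true_count + noise

    # e.g. exposure notifications triggered in a region this week (number is made up)
    print(dp_count(1234, epsilon=0.5))
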

All of this leads to the question of who provides key management, and the related question of certification (i.e. why we should trust the key management software too). One solution is to provide a small (e.g. national scale) set of identity services, but a decentralised key management system that can also be used to federate across all the identity services (cross-border, or between state and private sector). One technology that we built to provide that independent key management for identity systems is trustchain [1], a prototype that serves to replace a (somewhat) centrally owned platform such as Web PKI.

An interesting oligopolistic system that offers somewhat decentralised certificates is the Certificate Transparency network (of about 6 providers) that signs keys for the Internet - this arose because the previously centralised CAs were hit by attacks which caused major security breaches in the Internet. We would argue that a similar-scale system for key management and certification for digital identity is evidently the bare minimum for acceptability of any trustworthy system.

Whether the system infrastructure itself is decentralised or not is a separate question which concerns efficiency, and, perhaps, some types of resilience (Estonian Digital Citizenship systems are distributed over several countries for backup/defensive/disaster recovery reasons).

[1] trustchain is a prototype that is based on ION and makes parsimonious use of the bitcoin proof-of-work network to provide decentralised trustworthy time, and then can create/issue keys in a way not dependent on any central provider or service, resilient to coercion, collusion and sybil attacks. We are currently investigating replacing the proof-of-work component with TimeFabric, which itself depends on a ledger, but can use a proof-of-stake or proof-of-authority and is therefore massively more sustainable.


Thursday, January 11, 2024

Replacing the Turing Test with the Menard Test

In Borges' short tale, "Pierre Menard, Author of the Quixote", he reports the astonishing story of a 19th-century author who lives a life so exemplary, in the literal sense of being exemplary of what the author of the classic work Don Quixote should be like, that when Menard produces the Quixote, it is not a copy of the work by Cervantes but a better work, despite being word-for-word identical. It is not a copy: it was made through the creative efforts of Menard, based on his experiences and knowledge and skills.

Imagine an AI that was trained in the world, not on a large corpus of text, so that it didn't just acquire a statistical model of text, but acquired an inner life, and then could use that inner life to create new works.

Imagine such an AI was able to produce, for example, a book called Don Quixote, without having read the work by Cervantes.

That AI would necessarily contain a model of Cervantes, or at least something that had many of the same elements.

This model of a creative human is quite different from a model of lots of blocks of text, which can be regurgitated with many small variations, but are, of necessity, merely stochastic parrots.

Were one to interrogate the truly creative AI, it might respond with other works that Cervantes might have written, if he were still around.

A similar AI with an inner life that modelled, say, Schubert might be capable of completing Symphony No. 8; another, with the "eye" of Jackson Pollock, might move from abstract expressionism to hyper-realism one day.

Such AIs might be able to introspect (e.g. in the manner of Alfred Hitchcock, when interviewed by Francois Truffaut about why he used certain approaches in his films).

Such systems would really be interesting, and not rote learned in how to pass trivial Turing Tests.


Tuesday, January 02, 2024

AI predictions with the possibility of fairness?

 There's a bunch of work on impossibility results associated with machine learning and trying to achieve "fairness" - the bottom line is that if there is some characteristic that splits the population, and the sub-populations have different prevalence of some other characteristic, then designing a fair predictor that doesn't effectively discriminate against one or other sub-population isn't feasible.


One key paper on the impossibility result covers this (the alternative is to build a "perfect" predictor, which is kind of infeasible).
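
A small simulation (my own toy, not from the paper) makes the tension concrete: a score that is calibrated within each group, combined with different base rates across the groups, is forced into different error rates across the groups:

    import numpy as np

    rng = np.random.default_rng(0)

    def group_error_rates(alpha, beta, n=200_000, threshold=0.5):
        scores = rng.beta(alpha, beta, n)    # risk scores for this group
        y = rng.random(n) < scores           # outcomes drawn from the scores => calibrated by construction
        pred = scores >= threshold
        fpr = np.mean(pred[~y])              # false positive rate among true negatives
        fnr = np.mean(~pred[y])              # false negative rate among true positives
        return scores.mean(), fpr, fnr

    for name, (a, b) in [("group A (base rate ~0.3)", (3, 7)),
                         ("group B (base rate ~0.6)", (6, 4))]:
        prevalence, fpr, fnr = group_error_rates(a, b)
        print(f"{name}: prevalence={prevalence:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
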


On the other hand, some empirical studies show that this can be mitigated by building a more approximate predictor/classifier, perhaps, for example, employing split groups, and even trying to achieve "fair affirmative action" - this sounds like a plan, but (I think - please correct me if I am wrong) it assumes that you can:

  • work out which group an individual should belong to
  • know the difference in prevalence between the sub-groups
This also suggests to me that it might be worth looking at causal inference over all the dimensions, to see if we can determine some external factors that need policy intervention to, perhaps, move the sub-populations towards having equal prevalence of those other characteristics (high school grade outcomes, risk of re-offending, choose your use case)...

I guess one very important value of the work above is to make these things more transparent, however the policy/stats evolve.
