I wonder what people are really thinking of when they talk about governance of intelligence?
If we were considering human intelligence (which we are, by extension), we had better tread carefully, especially when considering who owns it. The ability to reason, to create, to innovate is not really the same as anything else we have sought governance over:
- nuclear weapons (the Test Ban Treaty, and the Pugwash conferences)
- spectrum allocation
- orbits around earth
- maritime & air traffic (fuels, tracking, control, etc.)
- recombinant DNA (the Asilomar conference)
- the weather (and interventions like geo-engineering; see e.g. the RS report on the same)
What's similar about these, and what is different?
Well, we only have one go at each: there is exactly one human race, one planet, one sea, one zombie apocalypse, one climate emergency. We don't have time to muck about with variants of the rules that apply to fungible material goods. We need something a tad more radical.
So how about this: a lot of AI is trained on public data (oxygen == the Common Crawl). This is analogous to the robber barons who enclosed the commons and then rented the land back to the farmers whose cattle used to graze on it for free, as a shared good...
A fix for this, and a way to re-align incentives, is to introduce a Piketty-style tax on the capital value of the AI. We could also just "re-nationalise" it, but most people don't believe state actors are good at managing things, and prefer to put their faith in the invisible hand. History shows, however, that the invisible hand goes hand-in-glove with the rich getting richer. So with a tax on capital (and, as Piketty showed in great detail in Capital in the Twenty-First Century, it does not have to be a very high rate of tax to work), we can return the shared value of the AI to the common good.
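The claim that a modest rate suffices is easy to see numerically: a small annual levy compounds against the return on capital, so over decades it claws back a large share of the accumulated value. A toy sketch (the 5% return, 2% levy, and 30-year horizon are illustrative numbers of mine, not figures from Piketty):

```python
# Toy illustration: how a modest annual capital tax compounds.
# Assumed numbers (illustrative only): capital returns 5%/year,
# the levy is 2% of capital value per year, horizon is 30 years.

def capital_after(years, r=0.05, tax=0.0, initial=1.0):
    """Capital value after compounding return r and an annual capital tax."""
    value = initial
    for _ in range(years):
        value *= (1 + r) * (1 - tax)
    return value

untaxed = capital_after(30)           # compounds at 5%, untouched
taxed = capital_after(30, tax=0.02)   # same return, 2% levy each year

print(f"untaxed: {untaxed:.2f}x, taxed: {taxed:.2f}x")
print(f"share returned to the common good: {1 - taxed / untaxed:.0%}")
```

Even at 2% a year, under these assumptions roughly half of the thirty-year accumulation ends up back in the common pot, without the rate ever looking confiscatory.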
A naive way to compute this tax might be to look at the data lakes the AI was trained on, but that information may not all be available (since the big AI companies add some secret sauce to the free, or appropriated, ingredients). We can do better by computing the entropy of the output of the AI.
A decent algorithm should produce very information-rich output relative to its size: a modern LLM with hundreds of billions of parameters should produce short sentences or images which are highly informative. We can measure that, and tax the AI accordingly.
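One crude way such a measurement could work is sketched below. Everything here is an assumption of mine, not an established scheme: the empirical character-level Shannon entropy as the information estimate, and the idea of scaling total output information by (the log of) the model's parameter count to form a tax base.

```python
import math
from collections import Counter

def shannon_entropy_bits(text: str) -> float:
    """Empirical Shannon entropy of the output, in bits per character."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def tax_base(output: str, num_parameters: float) -> float:
    """Hypothetical tax base: total information in the output, scaled by
    model size. The log10 scaling of parameter count is an arbitrary
    choice for illustration, not a derived formula."""
    total_bits = shannon_entropy_bits(output) * len(output)
    return total_bits * math.log10(num_parameters)

sample = "The quick brown fox jumps over the lazy dog."
print(tax_base(sample, num_parameters=175e9))
```

A real scheme would need a far better information measure than character frequencies (e.g. compression-based or model-based estimates), but the shape of the idea is the same: tax in proportion to the information the system emits, which is ultimately distilled from the commons.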
This should also mitigate the tendency to seek data without agreement or consent.
I realise this may sound like a tax on recording media (back in the day, there were campaigns claiming "home taping is killing music"), but I claim there's a difference here, in the over-claimed, over-hyped "value add" that the AI companies assert. The real value was in the oxygen, the public data, like birdsong or folk tunes, which should stay free or we die. Since we cannot keep it free, I suggest we do the next best thing and tax the rich. Call me old fashioned, but I think a Piketty-style capital tax to mitigate rentiers is actually a new idea, and might actually work. We could call it VAIT.