Wednesday, July 23, 2025

How to reform a National AI Institute?

The Reform Club

A lot of people have been busy recently writing plans for the Turing Institute, most of which revolve around criticising the pace and direction of the changes it has attempted over the past two years, and several of which culminate in the trivial proposal to put all the eggs in one basket (defense and security) and use the big stick the EPSRC holds - deciding whether to continue the block core funding for the next 4+ years - to "make it so". This got up to ministerial level.

That isn't reform. That's simply asset stripping. Not that the asset isn't a good one - the defense & security programme has always been strong (at least as far as we can publicly know). Other work had variable mileage, although commercial partners from banking, commerce, pharmaceuticals et al. keep coming back with more money, which suggests they liked what they got (given their real-world, hard-nosed attitude to funds, especially in the UK, where industry R&D spend is typically so low). We're talking levels of £1M per annum per partner.

The Turing also managed to hire very strong people, both as postdoc researchers and as research engineers, in the face of fierce competition from the hyperscale companies (all of which have significant bases in the UK: Google & DeepMind based here, Microsoft and Amazon in Cambridge, Meta and X in London, OpenAI's London lab, etc.) - as well as a reasonable set of opportunities in UK universities in places significantly more affordable than London (or the South in general). So presumably the jobs on offer had interesting challenges, and an open style of work, that attracted people who had a choice.

How not to reform a National AI Institute?

Make it defense. You will lose a large fraction of the other money, and the majority of staff will not be allowed to work there. As with medicine, the UK does not have the capacity to provide AI expertise at the level needed from UK citizens alone. Those left with real clue will often find it easier to work at one of those companies too, especially since salaries there start at three to four times as much.

  • You will lose all the cross-fertilization of ideas from one domain into others, especially with the Turing's unique engineering team acting across all the possible projects as a conduit for tools and techniques.
  • You will lose all the visitor programmes, including the PhD visitors, which benefit students from all over the UK, many of whom would not be allowed to visit.
  • You'd lose access to much interesting data, which could not legally be given to people who can't transparently say what they will do with it.
  • You wouldn't have a "national institute" - you'd just have another defense lab. It might be very good, but no one would know. In fact, how come there isn't already one, e.g. in Cheltenham? They have plenty of funds.

What's my alternative?

To be honest, I don't have one. The nearest thing I have is what the Turing has managed to do in weather prediction (see the Nature papers on Aardvark and Aurora), and what we did (and still do) in Finance & Economics, with some very nice work on explainable AI, fraud detection, and synthetic data, all of which have applications across many domains. Likewise the engineering team's work on data safe havens, which is useful in the aforesaid finance, but also in practical privacy-preserving machine learning in healthcare and other sensitive domains. And recent work on lean language models. There are quite a few other things one could mention like that.

You can't predict where good or useful ideas will come from. Who knew giving a ton of money to CERN would lead to the World Wide Web? Who knew a free Dutch OS (Minix) would incentivise a Finnish graduate student to write Linux (the OS platform on most of the cloud and half the smartphones out there)? Who knew that a small TV company's (the BBC's) request for a simple low-cost computer would lead to the founding of ARM (which has more chips in the world than Intel or anyone else - again, in your mobile device)? Who knew that neat little paper, Attention Is All You Need, would lead to all the harsh language about people's failure to predict the importance of LLMs (hey, some of those people predicted that blockchain might not be a good idea :-) Who knew?

And who knows how to reform a national AI institute? 

Tuesday, July 22, 2025

persistent technology design mistakes that go on giving

The history of technology is littered with design mistakes. Quite a few are mercifully short-lived, and some, deservedly, never see the light of day at all (sadly, in some cases, as they might have been useful lessons in what not to do :-)

Some mistakes aren't mistakes - one famous example was the choice between the VHS and Betamax videotape formats. That was actually a tradeoff in cost/price/distribution and quality - in the end it didn't really matter.

Others somehow survive, grow, and persist - and persist in having negative consequences for decades.

In Computer Science, these are things like the C++ language... enough has been written about that and its alternative histories (if only Algol 60, or Objective C, or if only people had written better compilers and libraries for functional languages earlier).

In Communications Systems, two examples I'd pick: Bluetooth and EDGE.

An early workshop (25+ years ago) in California had presentations on Bluetooth that explained why they had made it look like a serial line (think COM3 on a Windows box) - it stemmed from everything looking like circuits, or telephones, to the folks who made it up. And EDGE was made reliable (with local retransmissions) to mask wireless packet loss, when everything should just "look like an Ethernet", as I think Craig Partridge put it. The cellular data folks eventually got it (and have done a number of other things much better than WiFi), but Bluetooth's horribleness persists, partly because the broken architecture was baked into early software stacks, which are very, very hard to persuade people to just ditch. In the IoT world, this led to a plethora of other wireless technologies (Zigbee, LoRaWAN, etc.), at least partly so people could avoid that fate, although there were other goals too.

Anyhow, we avoided being stuck with X.25, but we are stuck with QWERTY. We avoided being stuck with VHS, but we are still stuck with IPv4.


Saturday, July 12, 2025

a brief history of asking for forgiveness versus permission - the napsterisation of AI...

Back in the day, the Jesuits used to say that it was better to ask forgiveness than permission. I think this refers to the idea that people might commit minor errors without knowing that what they did was wrong, so they were less blameworthy - especially if, after the event, when the priest or other wise person explained to them the error of their ways, they recanted and were forgiven. To ask permission implies that the answer might be "no".

So now we are in a world where people are being paid to run things like this legitimised botnet, effectively becoming part of a P2P file-sharing world. Once upon a time (a generation ago, or almost infinitely far in the past), if you ran a thing like this (the Napsterised Internet), you would get sharp letters from lawyers, or even just be fined by the copyright-infringement police.

Post-Napster, Google acquired YouTube and took an interesting step... they basically took the Jesuit line, with a vengeance. The trick was that Google went and did massive deals with all the large copyright owners (actually paying quite serious money), and then if you or I uploaded something already covered by such an agreement, no problem. If we uploaded something not yet covered, Google had an offer: they could share advertising revenue, or possibly market research (popularity metrics), or, as a last resort, take down the content.

While the large copyright owners have not been the best of friends to the artists who actually create the stuff, this was at least semi-legitimate (I'm not a lawyer, obviously, but it seems to follow the aforesaid Jesuit model, and that has history behind it :-)

Now we have all those GenAI tools trained on a lot of content that is available on the non-paywalled Internet - which does not mean it isn't copyrighted. The AI/LLM companies are notably trying to claim fair-use-type arguments (which search engines 20 years ago notably did not) - the difference may reflect a change of culture, a shift in legal interpretation (of, say, fair use), or perhaps simply a shift in power (AIs owned by companies with a larger market cap than the GDP of most countries).

At least one of those AIs is run by the aforesaid search engine company. But others are not, don't necessarily have search, and certainly don't appear to have done the large content deals with those big copyright-owning companies...

So the game is afoot... ... ...

Wednesday, July 02, 2025

from dyson to dolby

Was at a thing at Imperial College in their Dyson centre, then went to the new Cavendish (3.0) lab in Cambridge's new Dolby building, and got worried by the idea that these might both be about noise cancellation. Obviously vacuum cleaners make sure that your vacuum is really, really high quality, so sound won't propagate at all, and Dolby is all about boosting the signal and reducing the noise.

But what happens if we combine these technologies, I hear you cry. Actually, I don't, because that would be like the eponymous xenomorph after Ripley kicks it out of the Nostromo. All you can hear is the irritating soundtrack music.
