The Reform Club
A lot of people have been busy recently writing plans for the Turing Institute, most of which revolve around criticising the pace and direction of the changes it has attempted over the past two years, and several of which culminate in the trivial proposal to put all the eggs in one basket (defense and security) and use the big stick the EPSRC holds (the decision on whether to continue the block core funding for the next 4+ years) to "make it so". This got up to ministerial level.
That isn't reform. That's simply asset stripping. Not that the asset isn't a good one - the defense & security program has always been strong (at least as far as we can publicly know). Other work had variable mileage, although commercial partners from banking, commerce, pharmaceuticals et al. keep coming back with more money, which suggests they liked what they got (given their real-world, hard-nosed attitude to funding, especially in the UK, where industry R&D spend is typically so low). We're talking £1M per annum per partner.
Also, the Turing managed to hire very strong people, both as postdoc researchers and as research engineers, in the face of fierce competition from the hyperscale companies (all of which have significant bases in the UK: Google and DeepMind based here, Microsoft and Amazon in Cambridge, Meta and X in London, OpenAI with a London lab, and so on), as well as a reasonable set of opportunities in UK universities in places significantly more affordable than London (or the South in general). So presumably the interesting jobs offered challenges, and an open style of work, that attracted people who had a choice.
How not to reform a national AI institute
Make it defense, and:
- You will lose a large fraction of the other money, and the majority of staff will not be allowed to work here. As with medicine, the UK does not have the capacity to provide AI expertise at the level needed from UK citizens alone. Those left with real clue will often find it easier to work at one of those companies too, especially since salaries there start at three to four times as much.
- You will lose all the cross-fertilization of ideas from one domain into others, especially with the Turing's unique engineering team working across all the projects and acting as a conduit for tools and techniques.
- You will lose all the visitor programmes, including the PhD visitors; these benefit students from all over the UK, many of whom would not be allowed to visit.
- You'd lose access to much interesting data, which legally cannot be given to people who can't say transparently what they will do with it.
- You wouldn't have a "national institute" - you'd just have another defense lab. It might be very good, but no-one would know. In fact, why isn't there already one, e.g. in Cheltenham? They have plenty of funds, don't they?
What's my alternative?
To be honest, I don't have one. The nearest thing I have is what the Turing has managed to do in weather prediction (see the Nature papers on Aardvark and Aurora), and what we did (and still do) in Finance & Economics, with some very nice work on explainable AI, fraud detection and synthetic data, which have applications across many domains. Likewise the engineering team's work on data safe havens, which is useful in the aforesaid finance work, but also for practical privacy-preserving machine learning in healthcare and other sensitive domains. And the recent work on lean language models. There are quite a few other things one could mention in that vein.
You can't predict where good or useful ideas will come from. Who knew that giving a ton of money to CERN would lead to the World Wide Web? Who knew that a Dutch teaching OS (Minix) would incentivise a Finnish graduate student to write Linux (the OS platform on most of the cloud and half the smartphones out there)? Who knew that a request from some small TV company (the BBC) for a simple low-cost computer would lead to the founding of ARM (which now has more chips in the world than Intel or anyone else - again, in your mobile device)? Who knew that a neat little paper, "Attention Is All You Need", would lead to all the harsh language about people's failure to predict the importance of LLMs (hey, some of those people predicted that blockchain might not be a good idea :-)? Who knew?
And who knows how to reform a national AI institute?