Sunday, February 04, 2024

standards and interoperable AI and the lesson from the early internet...

Back in the day (e.g. 1980), when we were deploying IP networks, there were a ton of other comms stacks around, from companies (DEC, IBM, Xerox etc.) and from international standards orgs like the ITU (then CCITT, with its X.25 nets) and ISO (e.g. TP4/CLNP). They all went away because we wanted something that was a) open and free, including code and documentation...

and

b) worked on any system no matter who you bought it from, whether very small (nowadays, think Raspberry Pi etc.) or very large (8000 cores, terabytes of RAM, loads of 100 Gbps NICs etc.), and 

c) co-existed in a highly federated, global-scale system of systems.

So how come AI platforms can't be the same? We have some decent open source, but I don't see much in the way of interoperability right now. Yet for a lot of global problems we would like to federate, at coarse grain and large scale - e.g. for healthcare, environmental models, or energy/transportation - so we get the benefits: better precision/recall, longer prediction horizons, more explainability, and, indeed, more sustainable AI at the very least, since we won't all be running our own silos doing the same training again and again.
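To make "federating at coarse grain" a bit more concrete, here is a minimal sketch of FedAvg-style federated averaging in Python: each silo trains locally on its own data and only exchanges model weights. The silo setup, toy linear model, and all parameter choices are my own illustrative assumptions, not a description of any particular system.

```python
# Minimal sketch of coarse-grain federation (FedAvg-style averaging).
# Each "silo" (e.g. a hospital or a grid operator) trains locally and only
# shares model weights, never raw data. Everything here is illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One silo's local training: plain full-batch SGD on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, silos):
    """One federation round: every silo trains locally, then the updates are
    averaged, weighted by how much data each silo holds."""
    updates, sizes = [], []
    for X, y in silos:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Three hypothetical silos with different amounts of local data.
    silos = []
    for n in (50, 200, 120):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        silos.append((X, y))
    w = np.zeros(2)
    for _ in range(30):
        w = federated_round(w, silos)
    print("federated estimate:", w)  # approaches true_w without pooling raw data
```

The point of the sketch is only the shape of the exchange: the thing that crosses organisational boundaries is the model update, which is exactly where interoperable formats and protocols would matter.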

We should have an IETF for AI and an Interop trade show for AI, and we should shun products and services that don't play. We could imagine an equivalent of what happened with the European and US GOSIP (Government Open Systems Interconnection Profile) procurement rules, which evolved into "just buy Internet, you know it makes sense, and it should be the law".

