Monday, August 21, 2023

AI4IP

 


Plenty can and has been said about networks (& systems) for AI, but about AI for networks? Not so much.


The recent hype (dare one say regulatory capture plan?) from various organisations around generative AI [SD], and in particular LLMs, has not helped. LLMs are few-shot learners that use the attention mechanism to create what some have called a slightly better predictive-text engine. Fed a (suitably "engineered") prompt, they match against an extensive database of training data and emit remarkably coherent, and frequently cogent, text, at length. The most famous LLMs (e.g. ChatGPT) were trained on the Common Crawl, which is pretty much all the publicly linked data on the Internet. Of course, just because content is in the Common Crawl doesn't mean it isn't covered by IP (Intellectual Property - patents, copyrights, trademarks etc) or indeed isn't actually private data (e.g. covered by GDPR), which causes problems for LLMs.

Also, initial models were very large (350B parameters), which means most of the tools & techniques for XAI (eXplainable AI) won't scale, so we have no plausible reason to believe their outputs, or to interpret why they are wrong when they err. This creates legal, technical and political problems that make such systems hard to sustain: liability, responsibility and resilience are all at risk.


But why would we even think of using them in networking?

What AI tools make sense in networking?

ML

Well, we've used machine learning for as long as comms has existed - for example, training modulation/coding on signal & noise often uses Maximum Likelihood Estimation to pick the transmitted data that best matches what was received.

This comes out of information theory and basic probability and statistics.
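
To make that concrete, here's a minimal sketch (in Python, with invented noise parameters) of ML detection for BPSK over an additive white Gaussian noise channel - under Gaussian noise, maximising the likelihood reduces to picking the nearest constellation point:

```python
# Minimal sketch: maximum-likelihood detection of BPSK symbols in
# additive white Gaussian noise (AWGN). Under Gaussian noise, the ML
# decision is simply "pick the nearest constellation point".
import numpy as np

rng = np.random.default_rng(42)
constellation = np.array([-1.0, +1.0])          # BPSK symbols

bits = rng.integers(0, 2, size=1000)            # random payload
tx = constellation[bits]                        # modulate
rx = tx + rng.normal(scale=0.5, size=tx.shape)  # AWGN channel

# ML detection: choose the symbol that maximises the likelihood,
# i.e. minimises Euclidean distance to the received sample.
decided = np.argmin(np.abs(rx[:, None] - constellation[None, :]), axis=1)

print("bit error rate:", np.mean(decided != bits))
```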

Of course, there are a slew of simple machine learning tools like linear regression, random forests and so on, that are also good for analysing statistics (e.g. performance, fault logs etc)
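
As a hedged illustration, here's the sort of thing one might do with scikit-learn - a random forest flagging faulty links from a few (entirely made-up) performance counters:

```python
# Minimal sketch: a random forest classifying (hypothetical) link-health
# records into OK/faulty from simple performance counters. The feature
# names and the labelling rule are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# features: [mean latency (ms), loss rate, jitter (ms)]
X = np.column_stack([
    rng.normal(20, 5, n),
    rng.beta(1, 50, n),
    rng.gamma(2.0, 1.5, n),
])
# synthetic label: "faulty" when loss and jitter are both high
y = ((X[:, 1] > 0.04) & (X[:, 2] > 4.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```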


NN

But traffic engineering has also profited from basic ideas in optimisation - TCP congestion control can be viewed as distributed optimisation (basically Stochastic Gradient Descent) coordinated by feedback signals. And more classical traffic engineering can be carried out far more efficiently this way than by simply solving ILP formulations for the edge weights in link-state routing, or indeed for load balancers.
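
A toy sketch of the AIMD flavour of this - each flow takes a small additive step up, and backs off multiplicatively on a shared congestion signal (the capacity and rates here are invented for illustration):

```python
# Toy sketch: TCP-style AIMD as distributed optimisation. Each sender
# nudges its rate up (a gradient-like step on its own utility) and backs
# off multiplicatively on congestion feedback from a shared bottleneck.
CAPACITY = 100.0             # bottleneck capacity (units arbitrary)
rates = [10.0, 30.0, 60.0]   # three competing flows

for step in range(2000):
    congested = sum(rates) > CAPACITY   # binary feedback signal
    for i, r in enumerate(rates):
        if congested:
            rates[i] = r * 0.5          # multiplicative decrease
        else:
            rates[i] = r + 1.0          # additive increase

# no central coordinator, yet the rates drift towards a fair share
print([round(r, 1) for r in rates])
```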

Neural networks can be applied to learn these directly from the past history of traffic assignments. Such neural nets may be relatively small, so explainable via SHAP or Integrated Gradients.
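
For instance, here's a hand-rolled Integrated Gradients sketch on a tiny made-up network, attributing a predicted link load to its input features - in practice the weights would come from training on historical assignments:

```python
# Minimal sketch: hand-rolled Integrated Gradients on a tiny one-layer
# network. All weights and feature names are invented for illustration.
import numpy as np

W = np.array([[0.8, -0.2, 0.5]])   # one hidden unit, for brevity
b = np.array([0.1])
v = np.array([1.3])                # output weight

def f(x):                          # network: v * tanh(Wx + b)
    return float(v @ np.tanh(W @ x + b))

def grad_f(x):                     # analytic gradient w.r.t. x
    h = np.tanh(W @ x + b)
    return (v * (1 - h**2)) @ W

def integrated_gradients(x, baseline, steps=100):
    # average the gradient along the straight path baseline -> x
    alphas = np.linspace(0.0, 1.0, steps)
    avg_grad = np.mean(
        [grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * avg_grad

x = np.array([0.9, 0.1, 0.4])      # e.g. [time-of-day, weekday, demand]
attr = integrated_gradients(x, baseline=np.zeros(3))
# completeness: attributions should sum to f(x) - f(baseline)
print(attr, "sum:", attr.sum(), "vs", f(x) - f(np.zeros(3)))
```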


Gaussian processes

Useful for describing/predicting traffic, but perhaps even more exciting are Neural Processes, which combine stochastic functions and neural networks, are fast/scalable, and are being used in climate modelling already - so perhaps in communications networks soon? Related to this is Bayesian optimisation.
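
A minimal Gaussian-process sketch with scikit-learn, fitting a synthetic diurnal traffic curve and reporting predictions with an uncertainty band:

```python
# Minimal sketch: Gaussian-process regression of a (synthetic) daily
# traffic curve, giving a prediction plus uncertainty at each hour.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
hours = np.linspace(0, 24, 40)[:, None]
# synthetic diurnal load: a daily sine plus noise
load = 50 + 30 * np.sin(2 * np.pi * hours.ravel() / 24) + rng.normal(0, 3, 40)

kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel).fit(hours, load)

grid = np.linspace(0, 24, 9)[:, None]
mean, std = gp.predict(grid, return_std=True)
for t, m, s in zip(grid.ravel(), mean, std):
    print(f"{t:4.1f}h  predicted load {m:5.1f} +/- {2*s:.1f}")
```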


Bayes

Causal inference (even via probabilistic programming) can be used for fault diagnosis, and has the fine property that it is explainable, and can even reveal latent variables (and confounders) that the users didn't think of - this is very handy for large complicated systems (e.g. cellular phone data services) and has been demonstrated in the real world too.
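
Full causal inference is more than Bayes' rule, of course, but a tiny sketch shows the flavour - diagnosing the likely cause of sustained packet loss from invented priors and likelihoods:

```python
# Minimal sketch: Bayes-rule fault diagnosis over (hypothetical) causes
# of packet loss on a data service. Priors and likelihoods are invented
# for illustration; a real system would learn them from fault logs.
priors = {"congestion": 0.02, "fibre_cut": 0.001, "healthy": 0.979}

# P(observe sustained loss | cause)
likelihood = {"congestion": 0.7, "fibre_cut": 0.99, "healthy": 0.01}

evidence = sum(priors[c] * likelihood[c] for c in priors)
posterior = {c: priors[c] * likelihood[c] / evidence for c in priors}

for cause, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({cause} | sustained loss) = {p:.3f}")
```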

Genetic Algorithms

Genetic Programming (GP) can also be applied to protocol generation - and has been - and depending on the core language design, this can be quite successful. Coupled with some sort of symbolic AI, you can even reason about the code that you get.
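
A toy genetic algorithm, evolving a bit-string "protocol header layout" towards a made-up target - a stand-in for evolving real protocol logic, but it shows the select/crossover/mutate loop:

```python
# Toy sketch: a genetic algorithm evolving a bit-string towards a
# (made-up) target layout. Real protocol evolution would score genomes
# by simulating them, not by comparing against a known answer.
import random

random.seed(7)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == len(TARGET):
        break                                # perfect match found
    elite = pop[:10]                         # keep the fittest...
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]       # ...and breed the rest

best = max(pop, key=fitness)
print("best genome:", best, "fitness:", fitness(best), "generations:", gen + 1)
```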

RL

Of course, we'd like networks to run unattended, and we'd like our data to stay private, so this suggests unsupervised learning; and with some goal in mind, reinforcement learning in particular seems like a useful tool for the things we might want to optimise.
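
A minimal tabular Q-learning sketch - an agent choosing between two paths with hypothetical delay distributions, where the reward is simply negative delay:

```python
# Toy sketch: tabular Q-learning picking between two paths whose
# (hypothetical) delay distributions differ; reward is negative delay.
import random

random.seed(3)
paths = {"short_but_busy": (12, 8), "long_but_quiet": (15, 1)}  # (mean, spread)
Q = {p: 0.0 for p in paths}
alpha, epsilon = 0.1, 0.1

for step in range(5000):
    if random.random() < epsilon:            # explore occasionally
        p = random.choice(list(paths))
    else:                                    # otherwise exploit
        p = max(Q, key=Q.get)
    mean, spread = paths[p]
    delay = random.gauss(mean, spread)
    Q[p] += alpha * (-delay - Q[p])          # one-step value update

print(Q)   # the path with the lower expected delay ends up preferred
```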

So where would that leave the aforementioned LLMs?

Just about the only area where I can see they might apply is where there's a human in the loop - e.g. manual configuration - one could envisage simplifying the whole business of operational tools (CLI) via an LLM. But why use a "Large" language model? There are plenty of domain-specific (small) models trained only on relevant data - these have shown great accuracy in areas like law (patents, contracts etc) and user support (chatbots for interacting with your bank, insurance, travel agent etc). But these don't need the scale of LLMs, nor are they typically few-shot or attention-based. They are just good old-fashioned NLP. And like any decent language (model) they are interpretable too.

Footnote SD: we're not going to discuss Stable Diffusion technologies here - tools such as Midjourney and the like are quite different, though they often use text prompts to seed/boot the image generation process, so they are not unconnected with LLMs.


Monday, August 07, 2023

re-identification oracle

Surely ChatGPT should be a standard component of any attempt to show whether allegedly anonymised data really is anonymised?

Effectively, it is a vantage point from which to triangulate (from any and almost every angle)...

Friday, August 04, 2023

postman pidge

I'm getting very tired of the infestation of sky rats (as Germans call pigeons) in London - they make a mess, are unbearably stupid at getting in the way of cars and cyclists and pedestrians, and serve no obvious use - apparently, they taste so awful that none of the cats or urban foxes in our area will devour them. We need a solution, fast.


I asked folks about putting up a hawk silhouette, but apparently this would scare off all birds indiscriminately, and we have meadow grass for the express purpose of having some nice critters like our garden space - which many others do, when not flocked out by the aforesaid grey menace.


I'm also not a fan of drone delivery systems - OK for crop spraying or parcels going across to the Orkneys, that's fine, but in urban spaces, those quadcopters are just too noisy.


I've considered getting a slingshot, to practise taking out both the pigeons and the drones (two birds with one stone, even - if one were lucky, one could crash the drone into the pigeon or vice versa) - it could even be a game, but then there are the neighbours' windows, and the people down below, to worry about, so that probably doesn't fly (ha ha).

So then I thought about building drones with wings instead of rotors, and then designing the drones to tackle the pigeons - even further, could we use pigeon as a form of biofuel for the drone, fitting them into the ecosystem in a special sustainable postal niche? Seemed possible, but tricky bio-engineering.

So then it occurred to me the answer was much more obvious, and more obviously Darwinian.

What we need is a hawk that looks like a pigeon, can carry more than a pigeon, finds its way like a pigeon, and lives on pigeons. Hopefully, the cross-breeding programme can just be done right away and doesn't need any GM flocks, though in this case, I am not against it.

I can imagine a society of hawks (or perhaps falcons or some other raptor) living in a very aristocratic manner, serving humans as friends, not slaves, whilst the "cattle" are bred and kept high up on rooftops as fuel. Cities would once more be adorned with beautiful creatures instead of ugly grey winged rodents, and the postal service would be quiet, prompt, and free, if occasionally stained with pigeon blood.


I can see no downsides.

Wednesday, August 02, 2023

The Enigma Variationals

After many years of study, scientists at the Alan Tuning Institute have finally decoded this machine, and we are now ready to show you, or indeed play to you, what it was originally intended for.



Many years ago, Edward Elgar the Elder was struggling to complete his final symphony and turned to his friend Curt Yödel, who was only able to contribute a theory suggesting that some compositions could be finished, but wrong, while others would be perfect, but unfinished. Of course, there was one famous prior, Tomas Albinionini, whose unfinished work, the Adagio Al Fresco, was found written in the margins of the remains of the library of Eberbach, possibly scrawled there by the long-dead monk, George Borgesi.

Alan Tuning found this keyboard in the belongings of Edward the Elder after his demise, and being familiar with Yödel's Unfinished Therem, devised his own approach to figuring out what El Gar may have been finguring out. His inspiration was that whilst the dominant and tonic notational semantics in use at the time relied on letters (A, B, C, D, E, F, G, H and so on), or even entire words ("doh", "ray" etc), these could easily be represented by numbers - for example, 1, 2, 3, or in the latter case 646F68, if you didn't mind risking the wrath of the coven. Given this, one could work through all the combinations and pernotations that could be played on the keyboard, and evaluate whether they sounded plausible - this could be "fed back" to the player via a small electric shock system, devised to deliver a higher voltage if the sound was sufficiently unpleasant, or a lower voltage if the direction of travel (gradient) was promising. This method of learning to play pleasing sequences became known as "voltage scaling" and was in use in the best sanatoria and conservatories such as the Sheboygan until relatively recently, when the Muskatonic link became more popular.

I've transcribed the piece here for the guitar, as it is easier to play than the old Enigma Keyboard, which frankly has atrocious action and makes too much fret noise too. I've taken the liberty also of transposing it to the Allen Key.

Here is my modest attempt at the piece. I do hope you like the results - I had a super conductor.

You'll note that this is in Sonata form, and features several themes with recapitulations.

Tuesday, August 01, 2023

teaching CS topic X top down for X={networks, graphics, databases, operating systems...} but what about AI?

Computer science textbooks have often been written bottom up - start with hardware (here's a CPU, here's a disc, here's a link) and move from physical characteristics, through low-level representations of data and processing properties (ISA, memory, errors, coding & modulation, etc), up through the layers of abstraction.


Then along came the pedagogic idea of teaching a couple of CS topics top down. Famous examples are the Kurose/Ross book on networks, and Mel Slater and Anthony Steed's book on graphics

(start with web, start with ray tracing etc)

Other books have tried to do this for databases, operating systems, and (to some extent) PL.


So what would a top-down approach to AI look like? eh? eh, Chat-bard, llamadharma, out with it.


