Wednesday, November 13, 2024

Travel Risk Assessment Made Simple

The following is taken from the new University of Llambridge travel risk assessment system, as part of the training to use the new system. In each pair of risks, if you think the first one applies, then we advise you not to travel. If the second of the pair occurs, your insurance may cover you, or in the worst case the Government may send gunboats and airlift you out, although they may reserve the right not to. In all cases your mileage will vary.


Asteroid Strike

Blue Screen takes out all the ticket and time table systems


Super volcano

Regular volcano disrupting flights


Sea level rise drowns all US coastal cities

Tornados disrupting flights


Zombie plague

Trans-species pandemic


Global thermonuclear war

Military invasion


Great depression

Worldwide banking near-collapse


New ice age brings 1 km glaciers down over southwestern Europe

3 weeks of snow shut down all airports and most roads


Aliens invade Earth "to serve man"

Immigrants arrive to work in the health service


The laws of physics change slightly so that Moore's law ran out 25 years ago.

A rogue piece of GPU malware melts all the Nvidia devices on the internet


The Celestial Emporium of Benevolent Knowledge disrupts human cognition worldwide.

Why Fish Don't Exist is compared favourably to Zen and the Art of Motorcycle Maintenance


The Foreign Office travel advice tells Elon Musk that it is now safe to go to Mars

The Home Office tells everyone that it is now fine to go to Sidcup

Monday, November 11, 2024

catastrophic unlearning...

Unlearning in AI is quite a tricky conundrum.

We really ought to do it, because

1/ we might be asked by a patient to remove their medical record from the training data because they didn't consent, or because we breached privacy in accessing it,

2/ we might be informed that some datum was an adversary's input designed to drift our model away from the truth,

3/ it might be a way to get a less biased model, rather than simply adding more representative data (shifting the distribution of training data towards a better sample could be done either way).

There may be other reasons.

The technical problem is that the easiest way to do unlearning is to retrain from the start, omitting the offending inputs. This may not be possible, as we may no longer have all the inputs.
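To make the "retrain from the start, omitting the offending inputs" idea concrete, here is a minimal sketch (not from the post) using a toy logistic-regression trainer; the names `train` and `exact_unlearn` are made up for illustration, and the point is simply that exact unlearning requires the full original training set to still be on hand.

```python
import numpy as np

def train(X, y, epochs=200, lr=0.5):
    """Toy logistic regression trained by full-batch gradient descent.
    Deterministic (zero init), so retraining reproduces weights exactly."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))    # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)    # mean log-loss gradient step
    return w

def exact_unlearn(X, y, forget_idx, **kw):
    """'Exact' unlearning: retrain from scratch with one record removed.
    Only possible while the complete training set is still available."""
    keep = np.ones(len(y), dtype=bool)
    keep[forget_idx] = False
    return train(X[keep], y[keep], **kw)

# toy data: six records, two features
X = np.array([[0., 1.], [1., 0.], [1., 1.], [0., 0.], [2., 2.], [2., 0.]])
y = np.array([1., 0., 1., 0., 1., 0.])

w_full = train(X, y)
w_forgot = exact_unlearn(X, y, forget_idx=4)   # retrain without record 4
```

The resulting `w_forgot` provably carries no trace of record 4, which is exactly the guarantee the approximate methods below struggle to match.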

One approach some people propose is to apply differential privacy to determine whether one could remove the effect of having been trained on a particular datum, without removing that training item. Naively, this would involve further training on an inverse of that datum (in some sense). The problem is that this doesn't actually remove the internal weights in the model, which may be complex convolutions of that datum with previous and subsequent training data, and hence later training might still reveal that the model "knew" about the forbidden input.
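As an illustration (my sketch, not anything from the post), "training on an inverse of that datum" can be read as gradient *ascent* on the forgotten record's loss; the function name `ascent_unlearn` and all parameters are invented for this example. Note that, as the post argues, this only nudges the weights away from the record, and does not undo its entangled influence on the rest of the weights.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ascent_unlearn(w, x_f, y_f, steps=50, lr=0.1):
    """Naive approximate unlearning for a logistic model: gradient ascent
    on the forgotten record's loss, i.e. training on 'the inverse' of it.
    This degrades the model's fit to (x_f, y_f) but gives no guarantee
    that the record's influence on other weights is removed."""
    w = w.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_f)
        w += lr * (p - y_f) * x_f   # ascend (not descend) the per-record loss
    return w

w = np.array([1.0, 1.0])                  # weights of some already-trained model
x_f, y_f = np.array([1.0, 1.0]), 1.0      # the record we want to "forget"
w_after = ascent_unlearn(w, x_f, y_f)
```

After the ascent steps the model's confidence on the forgotten record drops, but nothing stops later fine-tuning from restoring it, which is the weakness the post points at.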

But there's another problem: there's also the value of the particular data to the model in terms of its output. This is kind of like a reverse of differentially private arguments. Two examples:

a/ rare accident video recording (or even telemetry) data for training self-driving cars

b/ DNA data from individuals with (say) very rare immunity to some specific medical condition (or indeed, a very rare bad reaction to a treatment/vaccine)

These are exactly the sorts of records you want, but they might be precisely the kinds of things individuals want removed (or that adversaries input to really mess with the robot cars and doctors).

Perhaps this might make some of the Big AI Bros think about what they should be paying people for their content too.
