Monday, April 28, 2025

zero trust

I'm pretty sure this sort of thing happens because of bad parenting - people who don't trust anything come up with the idea of distributed systems that have no need of any anchor for trust anywhere.


so i have lots of problems with this - starting from systems - and the classic Ken Thompson "Reflections on Trusting Trust". These decentralization extremists have to confront that they run software on hardware, and even if they build their own hardware and write their own software, they probably use an OS and a compiler from somewhere else. However, it gets worse. Why should we trust the actual zero knowledge protocols they use? who has verified them, and how? why do we trust those verification tools (and people's brains too)? And worse still, why should we trust this newfangled idea of zero? The Romans and Greeks and ancient Mesopotamians got along fine without it.

No. I have zero trust in zero trust.


Saturday, April 26, 2025

powerless trio

Three things we have that are between 50 and 100+ years old, still work, and do not require electricity: a classic portable typewriter from Remington, a Singer 66K sewing machine from 1917, and a wind-up gramophone

when civilisation collapses, we'll still be able to write letters, fix clothes and listen to some old 78s!!


Tuesday, April 22, 2025

Artificial Intelligences as Trusted Third Parties (AI as TTPs)

AI as TTPs is a recent posting by Bruce Schneier, who has impeccable security credentials.

However, I'm not convinced that the paper he is highlighting is as groundbreaking as he suggests.

The authors of the paper also have great track records, including in AI, but I think they're missing something basic: a single TCME ("Trusted Capable Model Environment"), or a group of them, can't actually do anything different from any other computation, subject to basic privacy controls (e.g. access control and authorisation, auditing, and encryption of data at rest, in transit, and during computation, using FHE, TEEs, etc.).

But also:

a) visible communication in/out of the computation - i.e. information flow control

b) control over specificity of that data (i.e. differential privacy - can you tell if an individual record is present or not, to put it crudely)

c) secure multiparty computations and zero knowledge systems

which the paper compares and contrasts with their new TCME notion. However, I think the dimensions they use for comparison are a bit of a stretch.
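As an aside, point (b) is cheap to get without any TTP at all. Here's a minimal sketch of the classic Laplace mechanism for a counting query - all names, data and the epsilon values are mine, purely illustrative, not from the paper:

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Counting query released with the Laplace mechanism.

    A count has sensitivity 1 (adding or removing one record changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    The difference of two Exponential(epsilon) draws is Laplace(0, 1/epsilon).
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Toy question: can an observer tell whether the 250k record is present?
salaries = [30_000, 45_000, 250_000, 52_000]
print(dp_count(salaries, lambda s: s > 100_000, epsilon=0.5))
```

The noisy answer hovers around the true count of 1, but no single release confirms or denies any individual record - which is exactly the "specificity" dial, and no trusted model is involved.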

The main problem, I think, is that a TCME seems indistinguishable from any other trusted program.

Any shared secret between models (e.g. in federated or decentralised learning) is just the same for AI/ML as for any other algorithm. Perhaps the intersection of probability distributions looks a bit different from just being able to say "the richest person is A" without knowing how rich A (or B, or C) actually is - but in the end, a distribution has some moments and can be described by some number of those more or less precisely, and a distribution of distributions can be aggregated with more or less precision or uncertainty (e.g. respecting differential privacy at the widest level, or preventing set membership inference at the finest grain). The model itself can be protected from outside model inversion attacks by various schemes, but I don't see what TTP function is provided that isn't just a different mix of existing techniques for providing trust.
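To make the "existing techniques" point concrete: aggregating across parties without any trusted third party is a textbook trick. A toy additive secret-sharing sum, in the spirit of the millionaires example above (all names are mine, purely illustrative):

```python
import random

PRIME = 2**61 - 1  # do all arithmetic modulo a large prime

def share(value, n_parties):
    """Split value into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(inputs):
    """Each party splits its input into shares, one per party.
    Party j adds up the j-th share of everyone's input; combining
    the partial sums yields the total. Any single share (or partial
    sum) is uniformly random, so no party sees another's raw input.
    """
    n = len(inputs)
    all_shares = [share(v, n) for v in inputs]
    partials = [sum(all_shares[i][j] for i in range(n)) % PRIME
                for j in range(n)]
    return sum(partials) % PRIME

wealth = [5, 11, 7]
print(secure_sum(wealth))  # prints 23, without revealing any individual figure
```

This is decades-old MPC machinery; a TCME would have to offer something this (plus DP, FHE, TEEs and friends) does not, and that's the gap I don't see the paper closing.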
