There's some confusion about this - the e2e principle was originally about OS layering and the idea of parsimony. It was transferred by folks at MIT to the functionality of communications protocol layers, hence we get the "thin waist" of the TCP/IP stack, IP, with the plethora of link and physical layer technologies below it, and the diversification of transport (end-to-end) protocols, applications and shims above, particularly end-to-end encryption (TLS, or built in to QUIC, etc). All good.
A. Now add in-network compute and two things happen -
1/ Compute is an end point for some of the data, and would therefore normally need keys to decrypt the comms.
2/ Compute is another resource along a path, so we now have recursive layering - the common use case assumes there are "final" end points, but we need all the usual services we expect "end-to-end" both for those AND for the in-network compute middle end point - i.e. not just crypto, but also integrity, reliability, flow and congestion control, and so on, since these intermediaries are talking over IP, which does none of that (thin waist, again).
3/ So we just have recursive e2e - no problem there. Just another tunnel/VPN etc - see the sketch below.
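As a toy sketch of case A in Python, using the pyca cryptography library's Fernet (authenticated encryption, so integrity comes along for the ride) - all the names and the trivial "computation" here are made up for illustration:

    from cryptography.fernet import Fernet

    # Two segments, two keys: sender<->compute and compute<->receiver
    # are each their own end-to-end association - recursive e2e.
    k_sc = Fernet.generate_key()   # sender <-> compute segment
    k_cr = Fernet.generate_key()   # compute <-> receiver segment

    def sender(payload: bytes) -> bytes:
        # encrypt + integrity-protect *for the compute end point*
        return Fernet(k_sc).encrypt(payload)

    def compute_node(wire: bytes) -> bytes:
        # the compute node is a full principal: it holds a key,
        # decrypts (integrity checked here), works on plaintext,
        # then re-encrypts for the next segment
        plain = Fernet(k_sc).decrypt(wire)
        result = plain.upper()         # stand-in for the real computation
        return Fernet(k_cr).encrypt(result)

    def receiver(wire: bytes) -> bytes:
        return Fernet(k_cr).decrypt(wire)

    assert receiver(compute_node(sender(b"hello"))) == b"HELLO"

Reliability and flow control would have to run per segment in the same way (a TCP or QUIC connection on each side, say), since IP gives the middle end point none of that.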
B. Ah, but now let's do something less old-fashioned - what if
a) the in-network compute is able to work on encrypted data (e.g. is a homomorphic encryption function, or a secure multi-party computation), and
b) the in-network compute is redundant (or loss-tolerant) too.
Then we don't need it to be a principal in the e2e2e crypto. Nor do we need integrity or reliability checks. There's a sketch of (a) below.
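A minimal sketch of (a): a one-time pad mod N is additively homomorphic, so an in-network aggregator can sum ciphertexts while holding no keys at all. This is a toy stand-in for real homomorphic encryption (Paillier, say) or MPC, and all the numbers are invented:

    import secrets

    N = 2**64                            # all arithmetic is mod N

    def enc(m, k): return (m + k) % N    # additively homomorphic OTP
    def dec(c, k): return (c - k) % N

    # sender side: two readings, two fresh pads shared only with the receiver
    k1, k2 = secrets.randbelow(N), secrets.randbelow(N)
    c1, c2 = enc(10, k1), enc(32, k2)

    # in-network compute: sums *ciphertexts*; it holds no keys, so it
    # need not be a principal in the e2e2e crypto
    c_sum = (c1 + c2) % N

    # receiver knows the pads (or just their sum) and recovers the result
    assert dec(c_sum, (k1 + k2) % N) == 42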
However, in both A and B, we do still need flow/congestion control, and, what is more, that resource management is no longer based merely on queues (ECN etc), but on computational (and possibly associated storage) resources too. And we need to signal that across the e2e2e protocol - not something TCP or QUIC does, but something that could perhaps be added to MASQUE, for example....
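To make that concrete, here's a hypothetical ECN-style marking decision that saturates on compute or storage pressure as well as queue depth - the thresholds and names are invented, and nothing like this exists in TCP, QUIC or MASQUE today:

    def should_mark(queue_occupancy: float, cpu_load: float,
                    storage_pressure: float, threshold: float = 0.8) -> bool:
        # mark (i.e. ask the sender to back off) when *any* resource on
        # the e2e2e path is saturating - link queue, CPU, or storage at
        # the in-network compute node
        return max(queue_occupancy, cpu_load, storage_pressure) >= threshold

    # e.g. the link is idle but the compute node's CPU is the bottleneck:
    assert should_mark(queue_occupancy=0.2, cpu_load=0.93, storage_pressure=0.1)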
just a thought.