Tuesday, January 27, 2026

fabless industries aren't exactly new

I'm reading Apple in China, which is interesting at the detail level (basically how, with Taiwan's help, Apple trained up literally millions of skilled people to produce all their tech - it is now hardly surprising that China doesn't need the US's help any more to make new stuff...)

But the author seems to think that outsourcing tech manufacturing was something terribly new and clever.

Don't get me started on clothing and the East India Company and the British Empire...

Closer to home for the USA, however, is a much more instructive story - the electric guitar:

(Also lots of other instruments - remember Yamaha flutes were jolly good and way cheaper than Gemeinhardt)

But Gibson and Fender both started manufacturing in Mexico and Japan, then later in Indonesia - sometimes trying to "brand" things differently, so that western prejudice about "lower build quality" from those foreigners was offset by calling things Epiphone or Squier (actually it's a bit more complex than that, but you get the idea)...

But in reality the outsourced products weren't just a whole lot cheaper. I own a Japanese-made custom shop Fender Strat and a 1982 Squier Tele (possibly/probably made in Korea) - both are very, very good - better than instruments I've tried at 5-10 times the price. (The Tele came from a friend, but at the time they were £150 new; the Strat was used, so it's hard to know what it cost new, but probably £1500.)

And nowadays, I'd still buy a Squier or Epiphone (I had a beautiful 335 for a while) or just bite the bullet and buy an Ibanez (actually, I just did - £300 - fantastic quality) - though I did recently buy a G&L Tribute (fretless) bass which was made in the USA and was very sensibly priced (under £500).

Anyhow, what Apple did (hollowing themselves out) was absolutely nothing new. It wasn't even rock'n'roll.

Sunday, January 18, 2026

Mind your Ps and Qs, and the K, M, G, and Ts will look after themselves....

  • M - I remember back in the early 1980s Sun Microsystems (the old Unix workstation maker of fine machines) said that the breakthrough was the 3Ms - a Megabyte of memory, MegaHertz processor speeds and Megabits of networking (I seem to recall 4MB, 4MHz and 3Mbps). Prior to this, of course, we'd had 64 Kbytes of RAM in PDP-11s, 64kbps of network speed, and processor clocks in the kHz.
  • G - A decade later, Craig Partridge published a great book on gigabit networking, and storage capacities and processor speeds were, indeed, at least heading for GBytes and GHz.
  • T - It took a bit longer, but it is certainly possible to get Tbps (or close - 800Gbps) networking, and TBytes of storage (RAM is still a bit pricey for that to be common, but in cluster compute it's a thing, and on my laptop a TByte SSD is totally affordable). While Moore's law ran out of steam fairly recently, so an individual core might still be GHz, I can have a lot of cores, so total processing throughput (including CPUs and GPUs) might easily look like THz.
  • P - It's interesting to speculate what the tech will look like for Petabits per second, but the optical fiber photonics folks can definitely do that, and there are other transmission technologies which are fast and affordable. In storage, there seems to be no obvious reason why not - in fact, if either glass or synthetic DNA storage gets to market (and that is not science fiction), then Petabyte storage (at least write-once/append) could easily be affordable soon too. Then there's processing speed - I think this will probably not happen in a simple way, partly because it probably isn't necessary in personal devices, but I could be wrong. I don't think it would come from neuromorphic hardware, because that doesn't have to be fast; it is just massively more efficient than using tensor processors to do operations on neural network graph data structures.
  • Q - So what about quantum computing? While quantum computers don't really work yet, and are also not exactly going to fit in your new Ray-Ban specs or wrist-borne device, they might help speed things up (in some, possibly rather limited, cases), however...
Many of those hyperscale companies, led by personalities, are guilty of massive hubris. Because they are hugely successful in one domain, their leaders assume they can repeat that in arbitrary other areas, and I think this has a massively negative effect on anyone else making progress (or even just getting funding). Four examples of over-claiming that have happened multiple times in recent years...
  • Quantum Supremacy - multiple times there have been announcements that embarrassingly jumped the gun on people actually making a working quantum computer that a) has enough qubits to be useful and b) has qubits that don't get overwhelmed by noise or decoherence so fast that they are not even a ghost in the machine.
  • Neural Interfaces - there have been beautiful experiments with bio-feedback systems that are now being used to treat various neurological conditions (and problems with physical consequences, like Parkinson's) - but having an everyday mind-computer interface is still science fiction really, despite various people jumping the gun.
  • Metaverse - we've (myself included) been doing decades of work in immersive virtual reality. It was a staple of cyberpunk SF books 30+ years back (and more). But it is still a mess outside of games. While augmented reality as a tool (e.g. for repairing things, or even surgery) will totally be a thing, I think the metaverse (as per Snow Crash, at least, let alone Neuromancer or Altered Carbon, or even just the old Star Trek holodeck) is not with us affordably yet. I think this is more a failure of use case/compelling application than of actual technology, however.
  • AGI - all the money in the world is being spent on better predictive text, but the same people getting the money promise AGI. Hah. Bunch of Marketroids (that's not an insult - they are very good at it given the levels of investment they are getting compared to the total paucity of actual usefulness of their tech beyond prompting governments to worry about their energy grids).


Thursday, December 04, 2025

UK government introducing digital id by stealth - without proper inclusion or safeguards as far as I can see...

If you are a director of a company in England, you will recently have received a letter requiring you to register for this: https://www.gov.uk/using-your-gov-uk-one-login in order to be able to continue doing certain things (identity verification is now mandatory for company directors): https://www.gov.uk/guidance/verifying-your-identity-for-companies-house

When I did this, the website said that you could do it at a Post Office if you only had paper documentation (e.g. birth certificates, paper passports etc) and could use notaries to verify some of the documents.

However, it seems that this alternative may never have worked properly, since now, if you look at the Companies House website guidance, you have to use the digital service first to input your document details online...

Talking to someone else more recently, they said that this meant they had to step down as a company director, since they did not wish to (or perhaps could not) use a digital ID upload.

Is this legal? The government is effectively excluding people from some parts of society if they cannot use the digital service - and it appears that Companies House forces this: when you look at the link for verifying your ID via the Post Office, you are still required to go through the digital service first.

This seems inequitable at the least, and possibly a massive security danger, given the risk of leakage of photo ID which many online organisations have so publicly experienced in recent years...

Also, the behavioural/liveness checks on the photo ID look naive, when many biometric systems are moving to multi-modal checks (e.g. face + movement + speech/speaker recognition while saying a randomly chosen phrase, or face + movement + fingerprint or iris, etc etc).
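For contrast, here is a toy sketch of what score-level multi-modal fusion looks like (the modalities, weights and threshold are entirely made up, and this is not a description of how GOV.UK One Login actually works):

# Illustrative only: a toy score-level fusion of several biometric checks.
# Real systems calibrate weights and thresholds against false-accept/false-reject targets.

WEIGHTS = {"face": 0.4, "liveness": 0.3, "voice": 0.3}  # assumed weights, sum to 1.0
ACCEPT_THRESHOLD = 0.8                                   # assumed decision threshold


def accept(scores: dict[str, float]) -> bool:
    """Each score is a matcher confidence in [0, 1]; a missing modality counts as 0."""
    fused = sum(WEIGHTS[m] * scores.get(m, 0.0) for m in WEIGHTS)
    return fused >= ACCEPT_THRESHOLD


# e.g. a strong face match alone is not enough; face + liveness + voice together is.
assert not accept({"face": 0.95})
assert accept({"face": 0.9, "liveness": 0.85, "voice": 0.8})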

So is the UK government a) introducing digital ID by stealth, and worse, b) introducing a centralised, insecure system, c) ignoring any requirements for inclusivity, and d) doing it all without any public debate?

Thursday, November 20, 2025

Cloud versus Edge - I think the jury is still out - sort of...

With the recent Cloudflare and AWS outages (and the earlier Meta outage), it comes down to one simple tradeoff:

On the one side, everyone running an Internet service saves some money by paying a cloud provider to run the infrastructure for them.

So the cloud outfit gets to amortize a lot of costs over all its customers by having a small number of big data centers (numbered in thousands) instead of millions of enterprise computing services run by every Tom, Dick and Harry, Tesco, Twitter, OpenAI, Slack, Signal, Ticketmaster etc etc (all actual examples of organisations that lost service during the AWS and Cloudflare outages).

As Spidey says, "with great power comes great responsibility". So cloud providers not only provision carefully, but they do actually provide some levels of fault tolerance through redundant servers, and even run consistency protocols so that, if a customer's service needs very high availability, the service stays up as long as a majority of the duplicate servers are running. This can operate globally, so even if a whole country is disconnected (e.g. an international fiber cut, or a national grid outage, both also real events in recent years), the rest of the world can move on OK...
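The "majority of the duplicate servers" rule is the heart of quorum-based replication (Raft, which comes up again below, works this way). A toy sketch of the arithmetic, with made-up replica counts - an illustration of the principle, not anyone's production code:

# A toy illustration of the majority rule behind quorum-based replication protocols
# such as Raft or Paxos: with N replicas, the service stays available for writes
# as long as a strict majority can still talk to each other.

def has_quorum(total_replicas: int, reachable: int) -> bool:
    """True if the reachable replicas form a strict majority of the group."""
    return reachable > total_replicas // 2


# e.g. with 5 replicas spread across regions, losing any 2 is fine, losing 3 is not.
assert has_quorum(5, 3)
assert not has_quorum(5, 2)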

But it would seem that they don't fully apply this distributed, replicated, fault-tolerant/high-availability, possibly somewhat even decentralised thinking to the implementation of their own internal infrastructure - in both the AWS and Cloudflare cases, the error was central.

In the AWS case, someone didn't consider a particular performance pattern in their DNS configuration system (their design is 95% sane): a slow server updating DNS could overwrite more recent entries, so customer services that needed those newer entries could no longer find them.

In the Cloudflare case, a centrally managed configuration file doubled in size in one overnight update, exceeding the maximum file size constraint in the Cloudflare services that consume it (this is actually rather sad in terms of being fairly easy to prevent by normal checking/validation processes). The AWS one is slightly more subtle, but not much more. In fact, earlier outages in replicated/distributed services (actually at Cloudflare, earlier) took PhD-level thinking to come up with a long-term solution - see this paper for one example: Examining Raft's behaviour during partial network failures
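For what it's worth, both failures are the kind of thing a couple of cheap guards would catch. Here's a minimal sketch in Python with made-up names and limits (I have no sight of either company's real systems, so treat it as the shape of the fix, not the fix itself): validate a generated config against the consumers' limits before shipping it, and refuse to let a slow writer overwrite a record with an older version than the one already stored.

import os

MAX_CONFIG_BYTES = 5_000_000  # assumed limit, standing in for the consumer's hard cap


def validate_config(path: str) -> None:
    """Refuse to ship a generated config file that the consuming service cannot load."""
    size = os.path.getsize(path)
    if size > MAX_CONFIG_BYTES:
        raise ValueError(f"{path} is {size} bytes, over the {MAX_CONFIG_BYTES} byte limit")


class DnsStore:
    """Keep a version per record so 'last writer wins' only when the writer really is last."""

    def __init__(self) -> None:
        self._records: dict[str, tuple[int, str]] = {}  # name -> (version, value)

    def put(self, name: str, value: str, version: int) -> bool:
        current = self._records.get(name)
        if current is not None and version <= current[0]:
            return False  # stale update from a slow writer: drop it rather than overwrite
        self._records[name] = (version, value)
        return True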

The Cloudflare example is also a little reminiscent of the CrowdStrike outage, but that wasn't cloud. CrowdStrike has a rulebase for its firewall products, and Microsoft Windows is required to allow third parties to install firewall products (even though a modern Microsoft OS firewall is actually good). CrowdStrike had a bug in a new rulebase, so when all the Windows machines using that product updated their rules, the firewall code (inside the OS, allowed in by Microsoft due to anti-monopoly rules) read a broken file, which caused an undetected code bug to trigger an exception; due to an oversight, the CrowdStrike software engineers had not put in an exception handler, which would have led to a safe exit from that code, so instead the exception caused an OS crash (i.e. bluescreen!)... In this case, the central error affected millions of edge systems directly - and due to the way the software update worked, it needed a lot of manual intervention by many, many people in many organisations.
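The missing defensive pattern is easy to sketch (here in Python, purely for illustration - CrowdStrike's real code is kernel-mode and nothing like this): treat every rule update as untrusted input, and fall back to the last known-good rules rather than crash.

import json
import logging


def load_rules(path: str, last_good: list) -> list:
    """Parse a new rulebase; on any error, keep running with the previous known-good rules."""
    try:
        with open(path) as f:
            rules = json.load(f)
        if not isinstance(rules, list) or not rules:
            raise ValueError("rulebase is empty or not a list")
        return rules
    except Exception as exc:
        # the whole point: a bad update is logged and ignored, and the machine stays up
        logging.error("bad rule update %s (%s); keeping previous rules", path, exc)
        return last_good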


In a non-cloud setup, you'd have natural levels of diversity across the millions of different enterprise deployments (even if just different versions of things running), so outages would typically be restricted to particular services; but in the cloud setup, an infrastructure outage takes down thousands of enterprises... (I think AWS reckoned about 8000 large customers were affected - not sure about Cloudflare, but estimates are that they run about 25% of the Internet ecosystem's defenses)...



background
AWS explainer
Cloudflare explainer
Replication failure during partial network outages
UK government very useful report on data center sustainability (has lots of useful statistics)

To be fair to the cloud service folks at AWS and Cloudflare, they found, fixed and publicly reported the problems in under a day, so the concentration of resources in the cloud also means a concentration of highly paid, really expert people who can troubleshoot a problem, and once it's fixed, the deployment is also quick. On the other hand, a decentralised setup (more like the CrowdStrike example) could also have been fixed fairly fast, if they had been slightly more careful about their s/w update process...

So I'd say cloud v. edge: at the moment it's hard to pick which is more resilient, or which is cheaper.

Thursday, November 13, 2025

prisoner of cellular blockheaded thinking #9

So the cellular industry is often lauded for its planning and general coherent approach to the world.

Let's remind people how inaccurate that is.

From the get-go, Bell Labs and parent AT&T decided, having invented cellular telephony, that there was no market for it (repeating the famous IBM error that "the world will only need 3 computers, 2 in America and maybe one in England").

Later, people invented the Bluetooth stack (yech - serial line emulation, modem control tones, and no sensible mesh mode for ages).

Then BlackBerry and Nokia tried to copy Apple's iPhone ideas and almost totally tanked what had previously been incredibly smart and successful industries.

Then they caused the fixed telephone service to run out of telephone numbers multiple times.

Most recently, car owners and smart meter owners are finding that the ability to remotely access said vehicles and devices is being turned off, because providers won't be running EDGE, GPRS (2.5G) or even 3G for much longer - and since there's no backwards compatibility until much later generations, it's goodbye to all those useful services.

And what is 6G actually for, remind me?

My WiFi still works OK :-) on IPv4 (only 46 years old) and IPv6 (20+ years old)...

Friday, October 31, 2025

AI versus Humanity

humanity is too stupid to build an AI that would threaten its existence (humanity's existence, not the AI's... we could easily build a self-destructive AI - see below).

the main reason is logistical. we are rubbish at supply chains - food comes across the planet out of season for rich folks, but bypasses many people on the way who are suffering shortages. we make computing and communications devices that depend on rare earths only available in war zones or our adversaries' lands.

we throw away working stuff.

Any AI will have to build itself reliable supply chains for replacement parts, software maintenance and energy. To do that, it will need an army of robots (in the original sense of robot, Karel Čapek's obedient, tireless servants). But any humans spotting such a reliable supply chain will immediately take it over and steal from it, rather than rely on their own rubbish production line. Capitalism and natural selection mean a race to the bottom - humanity's inferiority will be the AI's downfall. The seeds of our digital overlords' demise are built in, due to the inherent contradictions in the rules of engagement.

Friday, October 10, 2025

zero inbox wars

I've had a zero-inbox policy since first getting e-mail (late 1970s) - having moved through various systems (roughly once a decade), I recently landed on Fastmail (which I very, very much recommend - extremely fast, but also very easy to migrate to, integrates with other mail systems and calendars (!), and has very good support).


so throughout those various systems, I've used different tools for managing incoming mail, and also archives - I have kept all e-mail to/from/cc'd to me since 1976 :-)

at some point, everything is essentially kept in a bunch of directories (folders) organised with a small number (<=3) of levels of hierarchy - somewhat like the Internet name space (.edu, .com etc) - with names of people (students) or projects, or personal categories (money, health, house, transport etc)...

not sure what Fastmail uses behind the scenes, but it seems to scale well and has a nice rule system for automatically processing incoming mail too... plus I really like how it interacts with other mail systems (I have to maintain several Outlook accounts for some places I work, and at least one doesn't allow forwarding, but Fastmail can pretend to be a client and make the mail look like it was just fetched through IMAP etc)...


anyhow, through various stages (Cambridge's own-brew system, then an Exchange-based system, Gmail, and now Fastmail) I have seen a steady decrease in spam - really very little getting through at all these days (maybe one a day), and very few false positives too...


but in what's left, two things are steadily increasing:

1/ academic "spam" - e.g. calls for papers, invitations to review, offers to publish my work "for free" (like why would I ever pay?) etc

2/ Mandarin - not reading or speaking any version of Chinese, I'm assuming this is actually more of 1/, but just for Chinese events and publications...

I'm looking for a two-stage LLM pipeline to deal with those two cases - stage one, translate; stage two, see if it is relevant - I could train the model (or refine/fine-tune it) based on my own publications, or the conferences I'm on programme committees for...
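Something like this, perhaps - a sketch only, where llm is assumed to be any callable that takes a prompt string and returns the model's text, and the prompts and topic list are made up:

from typing import Callable


def triage_email(body: str, llm: Callable[[str], str], my_topics: list[str]) -> bool:
    """Two-stage triage: translate if needed, then ask whether the mail is relevant."""
    # Stage 1: get everything into English (a no-op for mail that already is).
    english = llm(
        "Translate the following email into English, or return it unchanged "
        "if it is already in English:\n\n" + body
    )
    # Stage 2: relevance check; the topic list could be distilled from my own
    # publications or the programme committees I sit on.
    verdict = llm(
        "Answer YES or NO only. Is the following email relevant to someone "
        "working on: " + ", ".join(my_topics) + "?\n\n" + english
    )
    return verdict.strip().upper().startswith("YES")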

maybe a student project!


Anyhow, in my current Fastmail setup with 5GB of mail, there are only 3 messages in the inbox... soon to be 0.

