Having a conversation at the Turing with my mentor, discussing whether an LLM is just AGI, because AGI is "just" statistics and has also "just passed" the Turing Test... and we both observed that most interactions we have with other GIs (human general intelligences) are pretty dumb.
So my main thought on this is the usual Theodore Sturgeon exchange: told that most SF is pretty terrible, he responded that "most of everything is pretty terrible". Intelligence is rare - most GIs can exhibit it, but only do so very occasionally, because intelligence is not often very useful - habit is much more useful (thinking fast, rather than slow, is a survival trait, per Kahneman and Tversky).
So, like many things, smartness is Zipf-distributed/heavy-tailed.
The title of this entry refers to scholarly works: most papers are cited less than once - i.e. never - while a few papers get tens of thousands of citations.
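For a feel of what that heavy tail looks like, here is a minimal sketch (purely synthetic numbers; the Zipf exponent and sample size are my own assumptions, not real citation data):

```python
# Illustrative only: sample synthetic "citation counts" from a Zipf-like
# (heavy-tailed) distribution and show how a tiny fraction of papers
# dominates the total. Exponent a=2.1 and the sample size are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
citations = rng.zipf(a=2.1, size=100_000) - 1  # shift so the minimum is 0 citations

print("median citations:", np.median(citations))   # typically 0: most papers uncited
print("max citations:   ", citations.max())        # a few huge outliers
top_1_percent = np.sort(citations)[-len(citations) // 100:]
print("share of all citations held by the top 1%:",
      top_1_percent.sum() / citations.sum())
```

On most runs, the top 1% of this synthetic corpus accounts for a large share of all the citations - which is the sense in which smartness, like citation counts, is heavy-tailed.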
So you train an LLM on the Common Crawl, or on the Library of Congress, and the majority of the stuff you've trained it on isn't even second class; it is just variations on the same thing.
This isn't model collapse - this is an accurate model of what most people's visible output looks like. Dim, dumb, and dumber. So what?
Well, going back to the Turing Test: if you, an Average Joe, pick an LLM at random, prompt it with some average prompts, and compare it to the average GI, you will unsurprisingly conclude that the LLM has passed the Turing Test.
But what if, I ask, you had Alan Turing (assuming he were still alive) at the other end of the GI teletype? And what if you got Shakespeare, Marie Curie, and Onora O'Neill to ask some questions of both him and the LLM?
Then I suspect you'd find your LLM was a miserable failure, like the rest of us. Except that every now and then, we rise to the occasion and actually engage our brains, which it cannot do.