According to Tim Wu's great book, attention is apparently all you need in the brave new world to power the new economy - and, according to this foundational AI paper that underpins transformers and hence LLMs, it is all you need there too.
People are worried that LLMs can be used for bad as much as for good - we might call this unhappy eyeballs.
I contend that this is being very much overstated as a problem. Why? Receiver bandwidth (reading time, thinking time, reaction time etc etc).
This documentary on fake news showed (as did Tim Wu) that this is an old, old problem: it did not require AI of any kind to create massively engaging false stories in print, on the radio, on TV and so on. Over 120 years of mis/dis-information has thrived without computers, without the internet and without AI.
So why does the threat of generative AI not impress me much?
Well, people are already saturated with stuff - whether it is adverts misrepresenting or mis-selling goods, services and products, or political campaigns repeating lies, damned lies and statistics. Of course there was a shift when the internet, online shopping and social media allowed profiling of individuals (from their personal behaviour, or inferred from their friends, family, acquaintances, location, pets and so on), which (possibly) allows for targeted adverts (as per the infamous C.Analytica). However, there's actually precious little evidence that this made a big difference.
So will Chap G&T (for want of a better product name) succeed where C.Analytica, Amazon and Netflix have so far failed to move the dial very much?
I doubt it. I doubt it very much: because users also have tools at their disposal (discerning brains, filters, the off button and so on). The fraction of people who are easily swayed appears fixed - they are the conspiracy theorists. That fraction is not set by the media; it seems culturally determined at some much deeper level, and is usually a relatively small part of society. What is more, spreading the message (the earth is flat, AIs are coming to kill you, warning, warning, Martians have landed, etc etc) doesn't work for very long, as a lot of people hear other messages and choose the ones that match their world model (the scientific method is actually quite human!), and ignore things that are a poor fit.
So the main existential threat I see from LLMs is to journalists.
:-)
Gloss
Model
An artificial representation of something
that lets you explore the thing, without having to mess
with the real thing
AI
Artificial Intelligence is a collection of technologies that implement
models using statistics, computer science, cognitive science and neuroscience,
social sciences and cybernetics. These can be embedded in systems
that continuously update those models with new input.
Furthermore, they may interact with the environment, generating output.
AGI (Artificial General Intelligence) is sometimes used to describe AIs
that approach fully human capabilities. Where we are on this spectrum
today is a matter for debate.
Foundation Models
Foundation models are large AIs trained on large amounts of data,
e.g. LLMs and Stable Diffusion.
Generative AI
A generative AI is a type of AI that creates new data that has
similar characteristics to the data it was trained on,
e.g. LLMs and diffusion systems like Stable Diffusion, Midjourney and DALL-E.
Large Language Models (LLMs)
A particular kind of foundation model that is a generative AI, and can create
(usually) text output that is as natural/human as its input:
examples include Bard, GPT and LLaMA.
Deep Learning
A collection of techniques for acquiring models from complex input data
without having a human in the loop to write down the model.
Neural Networks
A specific technology inspired by neuroscience and human brains (though
typically very different) for implementing deep learning.
Other tools and methods that aren't neural nets but are widely used today
include regression analysis, random forests, causal inference,
Bayesian inference, probabilistic programming, Phi-ML... <add yours here>
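To make the "no human writes down the model" point concrete, here is a minimal sketch (plain NumPy, on a made-up toy task) of a tiny neural network learning XOR by gradient descent - nobody codes the XOR rule, the weights absorb it from the data. This is illustrative only; real deep learning uses frameworks and vastly larger models.

```python
# A tiny two-layer neural network learning XOR by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of squared error
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp
    dh = dp @ W2.T * (1 - h**2)
    dW1 = X.T @ dh
    # gradient descent update - no human ever writes down the XOR rule
    W2 -= lr * dW2; b2 -= lr * dp.sum(0)
    W1 -= lr * dW1; b1 -= lr * dh.sum(0)

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```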
Some AI properties of interest:
uncertainty quantification - confidence isn't just a trick; knowing your limitations is also important. How good a classification or decision is matters if it is a matter of life or death, or even just of wealth and well-being.
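As a concrete (toy) illustration, here is a minimal sketch of one common route to uncertainty quantification: an ensemble, where several models are fitted to resamples of the data and the spread of their predictions is read as a confidence signal. The least-squares "model" and data below are hypothetical stand-ins, not any particular production technique.

```python
# Uncertainty from ensemble disagreement: a minimal sketch.
import numpy as np

rng = np.random.default_rng(1)

def train_member(X, y, seed):
    # Stand-in for any model fit; here, least squares on a
    # bootstrap resample of the data.
    idx = np.random.default_rng(seed).integers(0, len(X), len(X))
    w, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    return w

X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

ensemble = [train_member(X, y, s) for s in range(10)]
x_new = rng.normal(size=3)
preds = np.array([x_new @ w for w in ensemble])

# The spread tells you how far to trust the point estimate -
# the "knowing your limitations" property discussed above.
print(f"prediction {preds.mean():.3f} +/- {preds.std():.3f}")
```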
explainability - there are at least 4 good XAI techniques - most add 4x to training costs, but massively increase the value of an AI.
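One of the simpler XAI techniques is easy to sketch: permutation importance, where you scramble one feature at a time and watch how much the model's error grows - features the model leans on hurt more when scrambled. The toy model and data here are invented for illustration.

```python
# Permutation importance: a minimal XAI sketch on toy data.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * rng.normal(size=200)

# Stand-in "model": ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline = np.mean((X @ w - y) ** 2)

for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # scramble feature j only
    increase = np.mean((Xp @ w - y) ** 2) - baseline
    print(f"feature {j}: error increase {increase:.3f}")
    # feature 0 should dominate, matching how y was built
```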
sustainability - $4M per training run is not sustainable, nor is 4% of global electricity production.
scalability (distributed/federated/parallel) - one very nice line of enquiry in AI is mixing systems: XAI can be used to reduce the complexity of a neural net massively whilst retaining uncertainty quantification, hence also making the system sustainable. In some scientific domains, Phi-ML does just that, and can get orders-of-magnitude speedups/cost savings whilst hugely improving explainability from first principles.
Federation also offers improvements in efficiency (by combining models rather than raw data) - this also improves privacy, and reduces dependency on a centralised agency. So (for example) instead of the UK giving all our health and economic data to the US tech companies, we can federate it locally (as is being done in HDR UK) and retain its value to us, without huge loss of (data) sovereignty - we can then lease (or sell) our models to other countries. That seems like a much better idea than targeted political campaigns through more precise, human-like generative AI text-bots.
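To make the "combine models, not raw data" idea concrete, here is a minimal sketch of federated averaging in its simplest (toy) form: each site fits a model on its own records and only the fitted parameters travel. The sites, data and least-squares "model" are all hypothetical; real deployments (e.g. in health data research) add secure aggregation and many rounds of communication.

```python
# Federated averaging, minimal sketch: parameters move, raw data stays put.
import numpy as np

rng = np.random.default_rng(3)
true_w = np.array([2.0, -1.0])

def local_model(n_records, seed):
    # Each "site" (hospital, region...) keeps its data and shares
    # only the fitted coefficients.
    r = np.random.default_rng(seed)
    X = r.normal(size=(n_records, 2))
    y = X @ true_w + 0.1 * r.normal(size=n_records)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_records

sites = [local_model(n, s) for s, n in enumerate([500, 1200, 300])]

# Combine models, weighted by how much data each site holds.
total = sum(n for _, n in sites)
global_w = sum(w * n for w, n in sites) / total
print("federated estimate:", np.round(global_w, 3))  # near true_w
```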