Gary Marcus on ChatGPT
When in doubt, ask an expert
In my last mailbag, I was asked for my thoughts on ChatGPT. I am not an expert in that area, so hearing my thoughts on it would be kind of like listening to some random person next to you at a bar pontificate. So I invited someone who is an actual expert to give his thoughts: Gary Marcus. I hang on every word he says about AI. Enjoy the answer below!
ChatGPT is the fastest growing consumer product in history, and it’s brought Google to its knees. Type anything you like, and you get a cogent, or at least cogent-sounding answer. Some people see it as tantamount to the holy grail of “AGI” (artificial general intelligence). I don’t. Instead, I see it as a triumph of big data, but a long way from intelligence that we can trust.
The thing about ChatGPT (and its siblings like GPT-3) is that it lies, prolifically. Well, technically speaking it doesn’t lie, because it has no intentions, and lying implies an intention to mislead. But it does say an awful lot of things that are untrue, and mixes them together with things that are true, and out comes an authoritative-sounding mixture of the two that nobody should really trust.
If you look at ChatGPT’s mathematics, and you are a mathematician, you will be unimpressed. If you look at its history, and you are a historian, you will be unimpressed. If you have it write your own biography, you will be unimpressed; some things will be true, some will be false, and most people who look won’t be able to tell the difference.
Still, there are definitely going to be some use cases; this is not like driverless cars, in which $100 billion has been invested and we are still very much at the prototype stage, with demos, but nothing like the promised product (point-to-point driving, without human intervention, wherever you want to go) even close to delivery. Large Language Models (the core tech in ChatGPT) are already being used to help computer programmers (who know how to debug errors when they see them), and to cheat on term papers. They are also already being used to make dubious blogs to promote websites, and to semi-automate dull tasks like writing letters of recommendation. The prose is banal, but when boilerplate is all you need, it can be adequate. People are going to try to use ChatGPT as a search engine. The accuracy issues loom large, and have not yet been solved. For now, the novelty factor is high. How it ends up is still open. You can find my best guess here:
How much net positive impact they have on society overall remains to be seen; there are also causes for concern. Chatbots like ChatGPT are clearly disruptive to the educational system; they may be used to give medical advice (even though they don’t really understand medicine and are likely to make errors, sometimes serious ones); and they will be used by bad actors to create misinformation at a scale we have never seen before. Personally, I am worried. (https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/)
I would say right now that AI is a bit like a teenager, starting to feel its power, but not really entirely possessed of good judgement. Things are going to get wild, and nobody can really project exactly how. Real AGI, AI that we can trust, will be even more amazing, but already this is a kind of dress rehearsal for the future; a teachable moment as we try to come to grips with the many ways in which our world may start to change.
Gary Marcus (@garymarcus), scientist, bestselling author, and entrepreneur, is a skeptic about current AI but genuinely wants to see the best AI possible for the world—and still holds a tiny bit of optimism. Sign up to his Substack (free!), and listen to him on Ezra Klein. His most recent book, co-authored with Ernest Davis, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI.
You can subscribe to his Substack at: