Super Prompt: Generative AI
Examining generative AI—not to hype breakthroughs or warn of apocalypse, but to understand how things actually work. Mental models over hot takes. Technology specifics over marketing fog.
Welcome to Super Prompt. Hosted by Tony Wan, ex-Silicon Valley insider.
For The Independents—people who think for themselves, refuse narrative capture, and value depth over certainty.
Independent analysis. Unsponsored. Weekly.
The future belongs to better questions.
Episodes
28 episodes
AI Safety: Constitutional AI vs Human Feedback
With great power comes great responsibility. How do leading AI companies implement safety and ethics as language models scale? OpenAI uses Model Spec combined with RLHF (Reinforcement Learning from Human Feedback). Anthropic uses Constitutional...
Season 1 • Episode 27 • 16:38
Open Source LLMs: How Open Is "Open"?
Notable open source large language models from Meta, French AI company Mistral (valued at $2B), Microsoft, and Apple. Not all open source models are equally open—the restrictions and licensing constraints you need to know before deploying one. ...
Season 1 • Episode 26 • 13:28
Open Source AI: The Safety Debate
Why enterprises and entrepreneurs choose open source LLMs like Meta's Llama—cost-effectiveness, control, privacy, and security. The safety and ethics debate: which poses greater risk to humanity, open source or proprietary AI models? Both? Neit...
Season 1 • Episode 25 • 16:29
LLM Benchmarks: How to Know Which AI Is Better
Beyond ChatGPT and Gemini: Anthropic's Claude and the $4 billion Amazon investment. How AI industry benchmarks work, including LMSYS Arena Elo and MMLU (Massive Multitask Language Understanding). How benchmarks are constructed, what t...
Season 1 • Episode 24 • 10:35
Multimodal AI: When ChatGPT Learned to See
Recent updates from Google and OpenAI feature multimodal capabilities—AI that processes multiple input types simultaneously. Why multimodal models outperform single-modality systems, demonstrated through a hypothetical chatCAT that helps owners...
Season 1 • Episode 23 • 10:00
Google Gemini: Three Models, One Strategy
Google's Gemini family of multimodal AI models compared to OpenAI equivalents. What Nano, Pro, and Ultra each do, how they compare to GPT-3.5, GPT-4, and ChatGPT, and what "multimodal" means in practice. Solo episode on Google's LLM strategy.
Season 1 • Episode 22 • 7:11
Building Custom GPTs: Seven Lessons from GPT Builder
Creating a travel planning AI using OpenAI's GPT Builder with no code. Seven takeaways from building a custom GPT, what the process reveals about prompt engineering, and the constraints of the no-code approach. Includes demonstration of the Hol...
Season 1 • Episode 21 • 25:56
Enterprise LLMs: Cloud Deployment Strategy
Deploying large language models for enterprise applications. Rackspace CTO Jeff DeVerter discusses implementing Google PaLM for sales, enabling Azure and AWS customers, and why your LLM choice should probably match your cloud provider. Private ...
Season 1 • Episode 20 • 57:35
ChatGPT in the Classroom: Yale's Response
How is generative AI changing teaching, learning, and evaluation? Yale's Assistant Dean of Academic Affairs Alfred Guy discusses the university's AI guidance and its implications for education. What educators are grappling with, and how institu...
Season 1 • Episode 19 • 1:34:11
AI Screenwriting: GPT-4 Meets the Writers Strike
Creating a science fiction blockbuster pitch using ChatGPT power prompts: Role Play, Chain of Thought, and Self Critique. How these techniques improve output, and the AI-related issues at stake in the 2023 Writers and Actors Strike. Solo episod...
Season 1 • Episode 18 • 33:09
ChatGPT Guardrails: The One Question It Won't Debate
Ask ChatGPT if it possesses human-like intelligence and you'll get a definitive "NO"—unusual for a system that typically provides balanced perspectives. This guardrail reveals the ethical concerns OpenAI built into the system. What those concer...
Season 1 • Episode 17 • 19:31
ChatGPT vs The Onion: Can AI Get the Joke?
Does ChatGPT have a sense of humor? We test whether it would find an Onion headline funny: "Microsoft renames ChatGPT to ClippyChat." Large language models are better at analyzing humor than creating it—here's why. Solo episode exploring AI and...
Season 1 • Episode 16 • 15:55
ChatGPT Jailbreaks: The Grandma Exploit
How do you extract prohibited information from ChatGPT? The Grandma and DAN exploits trick language models into violating their own policies. Why these techniques work, what they reveal about LLM architecture, and how companies protect against prom...
Season 1 • Episode 15 • 23:44
AI Hallucinations: Bug or Feature?
What are AI hallucinations, and should we consider them bugs or features? The top 10 categories of AI hallucinations with examples, how ChatGPT might hallucinate an answer about Blade Runner, and ChatGPT debating itself on whether hallucination...
Season 1 • Episode 14 • 23:07
LLM Training: Superman's Kryptonite-Proof Suit
Why isn't Superman's suit Kryptonite-proof? This question reveals how large language models are trained. We break down transformers (the T in GPT), self-attention mechanisms, and the inference process—using Superman to explain why GPT-3 can gen...
Season 1 • Episode 13 • 18:56
Large Language Models: Getting from GPT-3 to ChatGPT
How do ChatGPT, GPT-3, and Large Language Models relate to each other? We explore the hierarchy: artificial intelligence, neural networks, large language models, GPT-3, and ChatGPT. What each term means, how they connect, and why the order matt...
Season 1 • Episode 12 • 22:17
What Is ChatGPT? Explained
How would you describe ChatGPT in your own words? This solo episode provides a definition and context for newcomers to AI. I answer the question myself, then ask ChatGPT to evaluate my answer. Introduction to what ChatGPT is, how it works, and wh...
Season 1 • Episode 11 • 25:21
DALL-E: Why AI Can't Make Your Perfect Pizza
Why is it so hard to get DALL-E to create the exact image you envision? PhD candidate and entrepreneur Arijit Ray discusses generative AI constraints, and his startup training AI to predict social media responses and run marketing focus groups....
Season 1 • Episode 10 • 51:43
Voice AI: Your Phone Knows You're Sick
AI that screens for diseases like COVID-19 based on voice patterns. You speak a simple phrase into your phone, and the system analyzes your voice profile for respiratory illness; it can also be trained for conditions ranging from obesity to substance us...
Season 1 • Episode 9 • 1:01:56
AI Art Authentication: Faking a $450M Leonardo
Can neural networks authenticate artwork? Husband-and-wife team Steven and Andrea Frank developed AI to assess painting authenticity. They tested it on Salvator Mundi, which sold for $450 million as a Leonardo da Vinci—the most expensive painti...
Season 1 • Episode 8 • 24:08
AlphaGo vs Lee Sedol: The Move That Shocked Humanity
AlphaGo, created by DeepMind, played 9-dan champion Lee Sedol in a televised match that reportedly changed how China's leadership viewed AI. The game of Go has more board positions than atoms in the observable universe. How AlphaGo approached t...
Season 1 • Episode 7 • 44:51
Self-Driving AI: Training in The Matrix
A digital replica of the driving environment is being built for autonomous vehicle training. The goal: make simulation indistinguishable from reality for self-driving AI. How this works, why it matters, and what happens when virtual training me...
Season 1 • Episode 6 • 26:02
Self-Driving AI: The Bicycle Problem
What's easy for a teenage driver but hard for autonomous vehicles? Bicycles, bike racks, motorcycles—edge cases that reveal the gap between current AI capabilities and the roads we actually drive on. Tesla and other companies pitch fully autono...
Season 1 • Episode 5 • 30:18
Self-Driving Cars: Why Tesla and Waymo Disagree
Tesla and Waymo take fundamentally different approaches to self-driving. We examine the technical differences, autonomous driving levels 1-5, and why stop signs remain one of the hardest problems to solve. Featuring Maroof Farooq, AI Engineer a...
Season 1 • Episode 4 • 35:16
Facial Recognition: How to Disappear with Makeup
How facial recognition works for surveillance—and how to fool it. We cover adversarial techniques using physical objects: clothing, accessories, even makeup. The technical principles behind why these methods work, and what they reveal about com...
Season 1 • Episode 3 • 35:04