As part of the podcast series, Jason runs a hot-take stand, inviting guests to voice sometimes contrary opinions to the general wisdom. Considering the state of the tech world, AI has been a recurring subject.
I thought this a perfect format to chat about Responsible AI. I plan to dive a little deeper into a few points in this article, but you can find the podcast on:
And most other places you’d consume your auditory mental stimulation.

Some of the topics we covered:
AI is a tool.
I use the angle grinder analogy - amazing tool around a building site for cutting and polishing metal bits, not really good for pruning roses.
AI hype cycle and eventual crash
The dangers of selling something and under-delivering. Especially when you’re over-selling general capabilities, and the impact ends up killing people.
AI in Product Management
Educating yourself, not trying to replace people and creativity, and never outsourcing your thinking (even though some people are tempted).
You can listen to all these points in depth on the 20-minute podcast (or only 16 if you speed up playback 😅).
Now for the topics I’d like to explore more deeply, which we only touched on in the podcast.
Upper limit of LLMs
Training on massive amounts of data has been the big unlock in machine learning, allowing the pre-training of foundation models, with self-attention powering LLMs. With computing power becoming cheaper and faster, we saw larger and larger models. At first, it seemed like each iteration was a massive jump forward. But then diminishing returns hit: the jump in performance between GPT-3 and GPT-4 wasn’t in line with the increased amount of data, model size, and processing power (energy expenditure) needed to train those models. The exponential increase in resources resulted in only incremental improvement.
That is generally referred to as an S-curve, and it typically follows breakthroughs: steep gains at first, then a long flattening. While LLMs will no doubt keep providing interesting insights, they won’t by themselves be the path to higher generality of intelligence (AGI).
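To make the shape concrete, here is a minimal, illustrative sketch. Every number in it (the compute midpoint, the steepness) is made up for illustration; the only point is that once you are past the steep part of a logistic curve, each further 10x of compute buys a smaller capability gain than the last.

```python
import math

def s_curve(compute: float, midpoint: float = 1e24, steepness: float = 1.1) -> float:
    """Illustrative logistic (S-shaped) curve: 'capability' as a function of training compute.
    All constants are invented; only the shape of the curve matters here."""
    return 1.0 / (1.0 + math.exp(-steepness * (math.log10(compute) - math.log10(midpoint))))

# Each step is a 10x increase in compute (more data, bigger model, bigger power bill),
# starting at the curve's midpoint. Watch the marginal gain shrink with every step.
previous = None
for compute in [1e24, 1e25, 1e26, 1e27, 1e28]:
    capability = s_curve(compute)
    gain = "" if previous is None else f"  (gain: {capability - previous:+.3f})"
    print(f"compute={compute:.0e}  capability={capability:.3f}{gain}")
    previous = capability
```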
At this point we should talk about reasoning. One of the interesting emergent properties of large models is their ability to “reason” to a degree. You can ask a question, or even ask it to deconstruct a problem and work through the steps. It’s pretty amazing from a research perspective, but it’s not close to human reasoning. While 2025 will likely see focus on and improvements in the reasoning capabilities of foundational models, it won’t be anywhere close to what some pundits make it out to be. For one, even with OpenAI’s o1, there are signs of benchmark contamination (test questions leaking into the training data): it performs well on intelligence tests because those were included in the training set. Sometimes all it takes is changing “Harry went to the shops to buy eggs” to “Sally went to the shops to buy apples” for it to completely fail to answer correctly. (And that doesn’t even touch on IQ being a measurement of the thing and not the thing itself.)
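A rough illustration of that kind of check: the sketch below generates surface-level variants of a benchmark-style question (same arithmetic, different names and objects). ask_model() is a hypothetical placeholder for whichever model you want to probe; if answers that were correct on the original phrasing fall apart on the variants, memorisation is a more likely explanation than reasoning.

```python
import itertools

# Toy perturbation check: keep the underlying arithmetic identical, swap only the
# surface details (who went shopping, what they bought). A model that truly reasons
# should be indifferent to the swap; one that memorised the benchmark often is not.
TEMPLATE = ("{name} went to the shops and bought {count} {item}, "
            "then gave {given} away. How many {item} does {name} have left?")

names = ["Harry", "Sally", "Priya"]
items = ["eggs", "apples", "stamps"]

variants = [TEMPLATE.format(name=n, count=12, item=i, given=5)
            for n, i in itertools.product(names, items)]

expected = 12 - 5  # identical for every variant, by construction

for question in variants:
    # answer = ask_model(question)            # hypothetical call to the model under test
    # print(answer.strip() == str(expected))  # compare against the known answer
    print(question)

print(f"Expected answer for all {len(variants)} variants: {expected}")
```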
I don’t believe that just training bigger and bigger models, on less and less ethically collected data and with ever larger power bills, will be the answer. And, unlike Eric Schmidt, I don’t think we should destroy the planet just in case I happen to be wrong about it. Something about this risk/reward equation looks awfully like money-grabbing on the ex-Google CEO billionaire’s part.
For my part, I see smaller models with ethically sourced data sets and far more reasonable training and inference costs as providing a more sustainable and better outcome. On the same principle as the move off mainframes and onto networked computers, a swarm of small models might out-perform a larger one and might exhibit its own emergent properties.
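As a toy sketch of that idea (and nothing more), the simplest possible “swarm” looks like this: several small, cheap models answer independently and a majority vote decides. The three stub functions stand in for real small models; any real setup would obviously be far more involved.

```python
from collections import Counter
from typing import Callable, List

# Deliberately tiny illustration of a "swarm" of small models: each answers
# independently, and a simple majority vote picks the final answer.
def model_a(question: str) -> str: return "Paris"
def model_b(question: str) -> str: return "Paris"
def model_c(question: str) -> str: return "Lyon"  # the occasional outlier

def swarm_answer(question: str, models: List[Callable[[str], str]]) -> str:
    votes = Counter(model(question) for model in models)
    answer, _count = votes.most_common(1)[0]
    return answer

print(swarm_answer("What is the capital of France?", [model_a, model_b, model_c]))
# "Paris": two of the three small models agree and out-vote the outlier.
```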
Humans are the problem
Jason Knight titled the episode “AI Is Just A Tool - What Matters Is How We Use It”, which pretty much hits the nail on the head. The collection of technologies we refer to under the umbrella term AI, or even more specifically GenAI, is very interesting and has some good use cases. The problem isn’t with the technology itself but, just like with any hammer, with its application. That is, the problem of misuse lies with humans.
First is how the current generation of tools was built (appropriating stuff found on the internet without any consideration of copyright). That’s going to be a fun little circus that will play out in courts all over the world. Right now we’re in the Wild West days, but regulation will come. And before you think ‘who cares?’, consider two points. One, without regulation you’d still have cocaine and morphine in your cough syrup, and probably die. Regulation is good for you, despite what you might have heard. Two, for those outside the Silicon Valley bubble, the results of AI are often ‘off.’ As in, off-putting. Consider how your target market might react and what the implications would be for your brand.
Second, and more sinister, is the over-promising of capabilities. Unfortunately, just like Wild West snake oil peddlers, many companies over-sell what their technology can do. They either do it intentionally to ride the hype wave, or, because their research is not subject to peer review, they are unaware of the problem. But the implications of this over-promising on people’s lives can be horrendous. We aren’t talking about over-promising a product recommendation engine; we’re talking about over-promising medical accuracy that may lead to a loss of life. How would you like to be the product manager responsible for a product that’s killing users?
In both cases, it’s not the base technology at fault. Whether an angle grinder or a simple hammer, it’s a tool that’s good at some tasks and bad at others. It’s the human misuse and misapplication of AI that’s the issue.
The What’s Next?
Harping on the hype around AI and advocating for responsible use of the tool still raises the question of “so now what?”. That’s usually where words like ‘governance’ enter the conversation. But for PMs out there, or anyone who’s building, implementing, selling, or buying AI-powered apps, just start by working through this list:
What are the implications when the technology goes wrong? This is not an if, but a when. What could be the worst impact on a human’s life, and can you deal with it?
What are the second-order implications you haven’t covered above? (For a profession obsessed with ‘first principles’ thinking, we do little to consider second- and third-order ripple effects.)
If you are using your own data, do you have the legal and moral rights? (And I’m not referring to the biggest lie on the internet, that of “I have read and agreed to the Terms & Conditions”). How do you ensure data privacy? Commercial and legal rights?
After clearing the legal and ethical hurdles, what biases are present in your data set? News flash: all data sets have biases, as they deal with humans and have been compiled by humans. How can you account for, and minimise, all of those?
When implementing or buying, how can you validate a vendor’s claims? How can you safely test a technology solution, to ensure it delivers on its promises and doesn’t behave catastrophically in edge cases? (See the sketch after this list.)
Go back to your product management strategy basics: What is your vision about serving your customers? How can AI help? What are the limits and dangers of the tool? How can you guardrail the dangers, and derive the benefits (without blinkers and wishful thinking)?
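On the vendor validation point, here is a minimal sketch of what “safely test it yourself” can look like. vendor_model() is a hypothetical stand-in for whatever API or SDK the vendor actually provides, and the test cases are invented; the point is simply to measure the claim against your own known answers and edge cases before anything ships.

```python
from typing import Callable, Dict

def vendor_model(text: str) -> str:
    # Hypothetical stand-in for the vendor's system; pretend it classifies support tickets.
    return "refund" if "refund" in text.lower() else "other"

# Deliberately include the unglamorous edge cases: empty input, sarcasm, another
# language, anything where a wrong answer would hurt a real person or your brand.
test_cases: Dict[str, str] = {
    "I want a refund for my broken order": "refund",
    "": "other",
    "Great, another 'refund' that never arrives...": "refund",
    "Je veux un remboursement": "refund",  # non-English input trips the naive model
}

def evaluate(model: Callable[[str], str], cases: Dict[str, str]) -> float:
    hits = sum(model(text) == label for text, label in cases.items())
    return hits / len(cases)

print(f"Accuracy on our own edge cases: {evaluate(vendor_model, test_cases):.0%}")
```

If the number you measure on your own cases doesn’t match the number on the sales deck, that gap is your risk.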
It doesn’t take a lot to build a sane AI strategy, usage policy, vision, and roadmap. All it takes is a bit of careful consideration, thinking it through for yourself rather than jumping on the hype wave and hoping it doesn’t drown you when it crashes.
The counter-point
At this point you might think that I’m anti-AI (or at least anti-GenAI), which isn’t the case. Rather, I’m anti-hype, because of the dangers inherent in it. In fact, I encourage people to listen to multiple viewpoints, including those I disagree with, and then compare and contrast. Self-education and critical thinking are quintessentially human, essential to product management, and the one thing you can’t outsource (whether to a consultant or ChatGPT).
Take the previous guest on Knight’s Hot-Take corner, Matt Maier:

His hot take is that within 5 years, employment as we know it will sharply decline. Matt predicts that advancements in AI will render traditional employee-employer relationships obsolete, because why would companies hire people to do easily automatable tasks? On the other hand, Matt believes this is a good thing and will enable an entirely new way of working.
Sounds interesting, and there’s certainly been some talk about the level of automation that AI would enable (as in, replacing the need for employees). But let’s dig into some of the claims made.
Coders are doing drudgery work, you only need software architects!
How do you think you get software architects? It takes time to build up the skills and taste to be able to architect systems. And that comes with experience, which does involve the drudgery of learning about many things by doing them and then debugging all the trouble.
LLMs seem to be good at generating code and at helping experienced developers work faster. In general, those who derive the most value from generative models are experts (who know what good looks like) automating some of the repetitive drudgery. There will be a rise of new ways of working that rely on powerful tools like AI. But just as a licence for Photoshop (or Figma) doesn’t make you a designer, and you still need to learn the theory and work through it, so it will be with tools for coding and building apps.
I don’t believe we’ll get (at least not soon) to a point where AI can completely replace developers, and I don’t think (based on human nature) that those who specialise will be able to cover the whole breadth of tasks needed to build and market applications. Which leads to the next point…
You just think up the idea and the AI will do it for you!
That's the same as thinking "prompting" an image generator makes you an artist. If you believe that, I'm willing to share my startup ideas with any entrepreneur — they go build the startup, and when the exit comes we'll split the proceeds 50-50.
No takers? You don't think ideas are all it takes? What about all the years of hard work needed to build a business?
Same. AI won't be able to solve interpersonal relationships, and that's where value (not price) is. While you might be able to automate enough to build larger apps than today as a single developer (something larger than Flappy Bird), you still need different people with a diversity of skills and tastes to be able to build a bigger product and company.
The AI would just reach out and find you testers & consumers!
Sure. Put yourself in the consumer's position. How much do you currently enjoy being spammed by bots on social media? How much would you enjoy it when it's a thousand times worse? How likely are you to interact with brands that just have bots hounding you? Or even to visit social media and other platforms that support that?
Go on, listen to the episode. Perhaps you’ll reach a different conclusion than I did — and I’d love to hear about it. While I love to poke holes in other viewpoints, I welcome the same with mine. It’s the discussion that matters.
In summary
My point, again, is that you can’t outsource your thinking. As much as people don’t like to spend time educating themselves and thinking through the tough issues (it’s much easier to just slap on a chatbot because that’s what’s being asked for), that’s the one thing you can’t rely on ChatGPT, or your competitors, or your consultant, to do for you.
The level of knowledge you need to reason about the tools and carry a conversation isn’t the same as what you need to build them. Once you understand the limitations, you can see where the tool is effective and derive the best value from it.
Anyway.
It’s a broad topic that I’m passionate about, which I hope came through in the chat with Jason. Do have a listen, and let me know what you think.