Readers of this blog know that I write mainly on Product Management and Artificial Intelligence, and in both cases I take a practical, no-nonsense approach. I find common sense remarkably (and sadly) uncommon. Having witnessed quite a few hype cycles, some days I feel tired of screaming into the void, and all I can do is sit back with some popcorn.
Well, not this day. Two good things from the past month make me feel that maybe — just maybe — there’s still hope.
Public voices
First, I had the pleasure of being a panellist at Indiana’s Digital Government Summit, in the “AI in Action” session. The moderator and two of the panellists were CIOs and directors from state and local government; the other two panellists were industry representatives, one from Deloitte and myself.
This session focused on real-world examples, but in both it and the morning’s other AI session it was obvious that the panel and the audience had several issues top-of-mind: the gap between hype and reality, governance, security, how to start, and other common challenges.
It was heartening to see — despite what you might feel if you’re in tech — that people do approach the technology with wary curiosity, that governance considerations (ethics, security, environmental and social impact) have a place in decision making, and that many share similar experiences.
In typical product-management fashion, beyond sharing entertaining anecdotes (there are benefits to making people laugh 😁), here are some key considerations I got to expound on:
Start with the use-case: what problem are you trying to solve?
Play with the technology, to understand capabilities and limitations.
Educate yourself, and engage with others in your community.
Understand impact and risk:
Some are more traditional, like data privacy and security, but with added twists.
In the case of AI, what is the impact when it goes wrong? Inconveniencing a user with a product recommendation, or sending them to jail?
What is your governance model? Do you have rights to the data? What are the environmental and social impacts of the technology? What are the ethics of the specific tool applied to the use-case?
What is your supply chain like? Look at your vendors, the proof they have for their claims, their ethical practices, etc.
The session was well-attended, and several people sought me out afterwards to keep the discussion going. I also have a few upcoming sessions with my employer’s communities of practice (mainly government, higher education, and other regulated service industries, but centred around our product platform), and — while I’m there to listen as much as to lecture — I expect similar concerns.
I’ll probably expand the above quick list into a full post, as I think governance around “AI” (in its various flavours) is an area that sorely needs attention.
Book Review: AI Snake Oil
Arvind Narayanan and Sayash Kapoor are both computer science researchers at Princeton who have been examining the current state of AI. Despite what you might think from the title, this book isn’t anti-AI.
It does, however, go into depth about what AI is good for, where it fails, and the dangers of misuse and hype. It’s not about the details of the technology (though there are enough explanations of the salient points), but rather about the impact of how it is used in various scenarios. There’s enough there to help you separate the hyped-up promise (snake oil) from reality.
Below are some further details and my thoughts on the subject, but if you’re in a rush I can’t recommend this book enough! It should be mandatory reading for pretty much anyone who interacts with technology these days.

The book covers three artificial intelligence areas (out of several) that are particularly problematic these days:
Predictive AI
Generative AI
AI in Social Media
In each category, the authors review the history and current state of the technology, as well as some of its inner workings. These help explain where and why issues with it arise.
Around Predictive AI, for example, the authors explore why prediction is inherently hard in some fields, particularly social and human behaviour. In some areas we know there is inherent randomness, so prediction will always be hard. In others we may not even know the bounds of potential accuracy. Because many of these systems are developed by companies that don’t share their methodology, they contain many errors (once finally uncovered), and when stress-tested in the real world they end up not much better than a coin toss. With companies being completely opaque about their training methodologies and data sets, researchers cannot reliably reproduce their results. All that’s left is to observe the system in action and see whether it delivers on the promise. That’s not a problem if we’re talking about predicting a song you might like, but when these systems are integrated into medical and judicial processes, the impact of a wrong decision on people’s lives can be catastrophic.
For Generative AI (what most people now think of when the term ‘AI’ is thrown around), the authors explore the technology’s very long history, which started well before ChatGPT. They then show the issues that cropped up in the transition from academic exercise to profit machine for investors, including the rampant unethical and exploitative practices involved in building it, the huge environmental costs that are ignored, and the social implications of the final product. In this sense, the technology isn’t inherently deficient like predictive AI; rather, the extreme, unregulated capitalism that drives it is more akin to 19th-century railroad barons and snake-oil salesmen. That is how we eventually ended up with regulatory bodies around transportation and medicine, and the same can’t happen soon enough around GenAI.
Other topics covered in the book include why ‘AI’ isn’t the answer to the ills of social media (even though those in control try to present it as such), debunking of the doomsday peddlers (who scream that AI will end life as we know it), an exploration of why both the myths and the hype around AI arise and persist, and some suggestions for a better future.
This book is an absolute eye-opener, and a must-read for anyone in the tech world, or even just those who casually interact with technology (i.e. almost everyone). It’s an object lesson in how good intentions can go wrong, how hype is cynically misused, and the costs we then all have to pay.
Again, this book isn’t anti-AI in general. The technology itself has a lot of potential, in many cases already being realised for the benefit of humanity. Apart from some inherent limitations, the unfortunate reality is that most of the problems stem from the humans around AI systems and the unregulated capitalistic drive behind them. Only by understanding the technology, the hype, and its symptoms and causes can we hope to avoid the bad effects.
I strongly recommend you read it, and share it around your office and social circles. You can find a copy on Amazon, or at the book vendor of your choice.