Minter Dialogue with Peter Morgan

Peter Morgan is a physicist-turned-data-scientist whose career is a patchwork of academic pursuits, entrepreneurial ventures, and deep technology work. We met while sharing a panel, where Peter’s provocative, candid approach immediately stood out. Today, he’s founder and CEO of Deep Learning Partnership, advising companies on leveraging AI automation safely and effectively—and helping demystify what’s possible (and what isn’t) in this rapidly evolving landscape.

Our conversation takes us from Peter’s unconventional leap from theoretical physics to running a record label, to his eventual immersion in telecom and research, before landing in the world of machine learning and AI. Along the way we discuss the shifting frontiers of AI—from classical machine learning to agentic AI—where systems are increasingly able to perform complex tasks, sometimes orchestrated by other AI, for days or weeks at a time.

We dig deep into what it means to give AI “agency,” the parallels between human and machine intelligence, and the philosophical challenge of encoding empathy or subjective experience into algorithms. Peter is refreshingly honest about the limits of transparency: if we can’t explain our own neurons, how can we demand perfect explainability from neural nets? We also tackle the reality of technological disruption—how proof of concept projects are now easier to justify, and the responsibility that comes with automating tasks (and potentially eliminating jobs).

Some of the conversation’s most interesting moments involve the ethical challenges CEOs face, why proprietary data is the only lasting competitive moat, and the profound societal shifts coming as agentic AI becomes ever more central to business and daily life.

Key Points:

  • Agentic AI Is Changing Everything: The biggest shift in AI right now is the rise of agentic systems—AI that can be given objectives and act with increasing autonomy, orchestrating other AIs, often without direct human oversight. While exciting, Peter notes this “Wild West” moment requires robust governance and ethical frameworks.
  • Data Is the Real Competitive Edge: Off-the-shelf AI models are incredibly powerful, but Peter highlights that the only true competitive moat is proprietary data. Companies need to tread carefully with confidentiality and train models on their own datasets to retain advantage.
  • Ethics and Employment in an AI World: AI’s ability to automate work creates tough human resource questions. While efficiency and profit drive business decisions, Peter pushes us to recognize the importance of ethical leadership—balancing productivity demands with real human impacts.

Whether you’re a CEO, an entrepreneur, or simply curious about where AI is taking us, Peter’s perspective is a wake-up call: now’s the time to experiment, adopt thoughtful governance, and start thinking deeply about your business’s data and ethical responsibilities.

Please send me your questions — as an audio file if you’d like — to nminterdial@gmail.com. Otherwise, below, you’ll find the show notes and, of course, you are invited to comment. If you liked the podcast, please take a moment to rate it here.

To connect with Peter Morgan:

Further resources for the Minter Dialogue podcast:

RSS Feed for Minter Dialogue

Meanwhile, you can find my other interviews on the Minter Dialogue Show in this podcast tab, on my YouTube channel, on Megaphone or via Apple Podcasts. If you like the show, please go over to rate this podcast via RateThisPodcast! And for the francophones reading this, if you want to get more podcasts, you can also find my radio show en français over at: MinterDial.fr, on MegaphoneFR or in iTunes. And if you’ve ever come across padel, please check out my Joy of Padel podcast, too!

Music credit: The jingle at the beginning of the show is courtesy of my friend, Pierre Journel, author of the Guitar Channel. And, the new sign-off music is “A Convinced Man,” a song I co-wrote and recorded with Stephanie Singer back in the late 1980s (please excuse the quality of the sound!).

Full transcript via Castmagic.io

Transcription courtesy of Castmagic.io, an AI full-service platform for podcasters

Minter Dial: Peter Morgan, it’s great to have you on the show. We met at a panel where you were one of the star panelists, and I loved hearing your perspective, always one for chewing things over and provoking occasionally, which is going to be very welcome on the show. So, in your own words, Peter, who are you?

Peter Morgan: Thanks for having me on, Minter. So, yeah, my background, I started off in physics. I was doing a PhD in theoretical physics, and I like math and figuring out, or thinking about, how the universe works, and I really, really enjoyed it. But eventually I wanted to leave academia. So, I started my own business, which was a record company of all things, right? Because I had some friends.

Minter Dial: Really useful today.

Peter Morgan: Yeah, yeah. I mean, I don’t regret it at all because the people I met were just so cool. But yeah, I didn’t earn a lot of money, but that’s okay. You’re always looking for that runaway hit, right? I think they’re all runaway hits. But anyway, the public had a different opinion. So, I did that for 5 years. And then after that, I kind of had to get a bit more serious. So, I went into technology, basically into industry, networking, telecom. The internet was just becoming a thing. So, you know, it wasn’t as bad as it sounds. It was quite fun. Those were fun days. We were on that exponential uplift, and Cisco was the big company of the day. I worked for them for a while, and it was just fun, right? We were building out the internet, basically. But after that, you know, the dot-com crash, I got a bit disillusioned because it wasn’t as much fun anymore. I have a short attention span. So, I went back to academia as a research assistant, working on a particle physics experiment to measure the mass of the neutrino. And that was fun. Yeah, I was whacked out.

Minter Dial: That’s talking about sort of super small particles.

Peter Morgan: Yeah, yeah, absolutely. Particle physics is quantum physics; it’s quantum field theory. Totally. So, I did that. It’s kind of what I’m good at and what I like, my first love. And so, after 3 years, I was like, okay, that was fun, but academia’s not really for me. I kind of proved it a second time. So, I went back, and I was like, what the heck do I do now? It was the end of 2012, early 2013. And data science: this guy had just written an article for the Harvard Business Review saying data scientist is going to be the sexiest job of the 21st century. And I’m like, ah, there we go. That’s what I’ll become, a data scientist, because I’m used to analyzing big data sets, and, you know, I like industry. So, that’s what I’ve done ever since, Minter. And that was 13 years ago now. And I haven’t really looked back, and sort of data science became machine learning, became AI, became ChatGPT, became here we are, right? So, yeah, it’s been a bit of a journey.

Minter Dial: Well, proof that you can dip and bob and weave and create a really interesting career.

Peter Morgan: Yeah, you have to do something, right? You have to pay the bills somehow.

Minter Dial: That’s for sure. So, you being where you are, where you sit, talking to so many people working in different universities and universes and companies, how would you describe the state of AI today? We’re recording this on the 18th of February, 2026.

Peter Morgan: Yeah. Wow. Good question. So, it’s moving very fast. I think it’s all about agentic AI today. If we’d had this podcast a year ago, it would’ve all been about LLMs, large language models, and how amazingly good they’re getting. But that’s kind of old school now. It’s like, let’s chain a whole bunch of these large language models together, orchestrate them, and then just let them go off for a week and do a job, whether that’s coding a huge codebase or doing something within a business or a startup. So, it’s really all about agentic AI today, and it’s both scary and exciting at the same time, because no one really knows what the hell these agents are doing. We haven’t really got the scaffolding in place yet, so it feels a little Wild West-ish. But yeah, I quite like that.

Minter Dial: In the word agentic, there’s this notion of agency. There’s the idea of the agent.

Peter Morgan: Yeah.

Minter Dial: But in parallel, there’s also this idea of giving it agency. Where are we in our ability to give it agency?

Peter Morgan: Yeah, well, they have agency. I mean, we give them objective functions or things to do, just like we get told what to do by our managers. Here’s the objective, the company objectives, there’s a team of you, and here’s the plan. It’s the same with digital workers; that’s basically what they are. So, we kind of give them agency. They don’t know what to do ahead of time; we still have to give them instructions, just like we have to give humans instructions. But once we do that, they are agentic. They can actually figure out the best thing to do. They usually have an orchestrator agent at the top, like a manager, delegating work. An agent finishes a task, goes back and says, I’m finished, and the manager agent hands the next task on to another agent. And you could have hundreds of agents, right? And again, running for weeks. So, in that sense, they are agentic. No magic.
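
For readers who want to see the shape of the orchestrator pattern Peter describes, here is a minimal sketch in Python. The `call_llm` helper, the role names, and the prompts are all hypothetical placeholders, not any particular vendor’s API.

```python
# Minimal sketch of an orchestrator ("manager") agent delegating tasks to
# worker agents. call_llm is a hypothetical placeholder; wire it to any
# chat-completion client (OpenAI, Anthropic, a local model, etc.).

def call_llm(role: str, prompt: str) -> str:
    """Stand-in for a real LLM call."""
    raise NotImplementedError("Connect your provider of choice here.")

def orchestrate(objective: str, worker_roles: list[str]) -> dict[str, str]:
    # The manager breaks the objective into one task per worker role.
    plan = call_llm(
        "manager",
        f"Objective: {objective}\n"
        f"Write one task for each of these workers: {', '.join(worker_roles)}",
    )
    results: dict[str, str] = {}
    for role in worker_roles:
        # Each worker does its part and reports back; in a long-running
        # system the manager would review each result and re-plan.
        results[role] = call_llm(role, f"Plan:\n{plan}\n\nDo the {role} task.")
    return results
```

In a real deployment the loop would run continuously, with the manager reviewing each result before handing out the next task; that is the “running for weeks” behavior Peter mentions.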

Minter Dial: One of the things that you and I spoke about a little bit last time, when we were on our panel, was that sometimes we tend to hold AI to a higher standard than we do ourselves. And the idea of transparency is a good one, because we tend to say, well, we need to have total transparency to trust it. Yet, am I able to say, Peter, can you explain to me how your neurons are firing and what just went through your mind while I was speaking for the last 20 seconds?

Peter Morgan: Yeah, no way. So, I could self-report and say, this is the logic I used. And I think agents can do that too. They do leave logs, and we examine them if things go right or wrong to see what the heck they’re thinking. It’s called chain of thought. And I mean, it’s quite humorous. You give them a math problem to solve, and it’s like: I tried this approach, that one didn’t work, oh, I’m feeling a bit silly now, I’m going to try this approach and read these papers. So, yeah, they have a thought process. They’re “reasoning”, in quotation marks, but they do actually reason. Again, on transparency, we have to be realistic: neuron by neuron, we don’t really know what’s going on inside them, but we can get them to self-report, and that’s the best we can do as humans too. Yeah.

Minter Dial: And you will find people in the world, probably secular, not medical, who will say, I know exactly what I’m feeling, I know exactly how my brain works. And yet a neuroscientist or a neurologist who’s been in the business for 30 years says, I have no idea.

Peter Morgan: Yeah, yeah, yeah. Neuron by neuron, we can’t. So, if you lift up the hood, you know, the algorithms: the most popular algorithm right now is called a transformer. It’s based on neural networks, kind of fashioned on the way the human brain is built, six layers deep of biological neurons. So, the architecture is basically modeled on us, right? And why not? That’s not surprising, because we have had 4 billion years to evolve into these information-processing machines, right, to survive. But when we lift the hood, we can’t actually see, because there’s a trillion parameters in these large models, just like there’s 100 trillion synapses in the human brain. The complexity is just too great. It’s like the weather, right? You can’t go atom by atom and use that to predict where the storm’s going to happen in 2 days’ time. Very similar: they’re just complex systems. And so, we have to be realistic about what we’ll ever be able to know about them. Yeah.

Minter Dial: I feel like doing a little bit of a tangent, if you’ll allow me. I enjoy taking psychedelics, so this may be coming from that part of my life, but everything is connected, right? That seems an easy statement. Everything is energy. And when you look at the super small and the super large, whether it’s the set of universes out there and the connectivity between planets, not to mention neutrinos, there’s a whole bunch of, let’s say, bigger neurons and synapses flying around in the bigger picture, all the way down to the quantum level and the Planck-length things that are happening. And they all seem to be rather similar. How do you react to that?

Peter Morgan: Yeah, well, doing physics, I did explore the very small, down to the Planck length and quantum gravity, right up to the very large, which is the universe and, you know, clusters of galaxies. And the weird thing I’ve noticed is that when you look at the brain and then you look at the galactic clusters, the connections, the topology, the architecture is very similar. So, there’s something in nature that likes that sort of system. Now, whether they’re conscious or not at the galactic level, I’m probably not the best person to ask, because I’d probably say no, but you never know. Yeah, but it’s interesting, that phenomenon. Yeah.

Minter Dial: Yeah. And even when you look at the internet itself: distributed, the nodes, the connectivity, the power hubs and the central points, and then, you know, the absurd elements and hallucinations, for God’s sake. We even have hallucinations in AI, or these so-called hallucinations. Yeah, we do. And it feels like it’s somewhat similar to the way we are.

Peter Morgan: Yeah, very much so. We hallucinate all the time, whether we’re doing LSD or not, right? We do. I mean, you ask a person: memory is very fallible, right? We think we remember events exactly as they happened, but we don’t. The way we think of ourselves is so far from the reality. So, in a way, we’re hallucinating the whole time. Why? I guess it comes back to the fact that we’re complex systems and things get lost, right? Memories. I mean, what is a memory? It has to be some sort of physical thing, you know, a group of neurons that are configured when we experience an event in our past. But that fades over time, right? Those structures don’t last. That’s why we can’t remember certain things. So, it’s fascinating, this whole journey of building AI. It means we have to understand ourselves. That’s really the great point there: we have to understand ourselves. Otherwise, I don’t think we can build a fully generally intelligent system.

Minter Dial: That’s what I wanted to lean into. I mean, when I did my research on the idea of encoding empathy, one of the biggest gifts I saw coming out of the work on trying to create empathic AI is that you need to understand what empathy is. What is it within us, in order to encode it? If you don’t understand what it is at, sort of, the molecular level, then how on earth can you reproduce it?

Peter Morgan: Absolutely. So, yeah, I’m not sure we go down to the molecular, but definitely the neuronal level. There’s some empathy there. Okay, so for me, there are 3 types of reality. There’s physical reality, which I studied: physics, neutrinos and atoms and galaxies and all that good stuff, right? Chemistry and biology. Then there’s the conceptual, mathematical, platonic: so where the hell does math come from? Do you need a physical universe for math to even exist, right? It gets quite philosophical quite quickly. And then infinity. Yeah, what is infinity? And then we have subjective reality, or subjective experience, which is empathy and sadness and happiness and, you know, anger and everything. So, what the hell is that, right? That’s not an atom; that’s something to do with the complexity of our neural network, and some weird subjective experience emerges from that complexity. So, you know, what’s so special about the mammalian brain as opposed to, say, a galaxy? These are deep philosophical questions that no one has answered yet, and maybe we’ll never be able to.

Minter Dial: Well, and may we continue to study the idea. So, you talked about AGI, artificial general intelligence.

Peter Morgan: Yeah.

Minter Dial: So, for the regular punter, what on earth is that in today’s language and how far are we from it? How would you estimate that?

Peter Morgan: Yeah, I think it’s basically us, right? AGI, general intelligence, is human intelligence. So, that’s everything we can do, including empathy, including the experiential, emotional intelligence, right? Social intelligence, emotional intelligence. There are 9 types of intelligence; Howard Gardner mapped it all out nicely at Harvard back in the 1980s. There’s physical intelligence, you know, someone who’s really good at athletics. We’ve just had the Winter Olympics, right? I could never do what they do, but they can, because they train, but also they’re naturally gifted. And then there’s mathematical intelligence; I might be quite good at that. But then there’s emotional and social and introspective and spiritual intelligence. So, we won’t have AGI until we have all of those in a machine, and I think some of those are a little further away. We’ve almost cracked mathematical and science and code. And that’s very, very impressive. That happened much quicker than most people expected and has surprised even the experts in the field, right? You’re like, whoa, we didn’t think it would happen in 3 years’ time. We thought by 2050; we’d have a bit of time to get ready. But no, we haven’t. We’ve solved coding, right? And math. So, it’s like, okay, what about the other types of intelligence? Well, then we’re going to have to build these complex systems, more complex than the transformer perhaps, to actually replicate that in silicon. That’s basically what we’re doing, right? We’re just using silicon instead of biology.

Minter Dial: And these are big. And they use a lot of energy.

Peter Morgan: Energy and engineering and the data centers. We need gigawatts, yeah. But the only reason we need the gigawatts is because we’re training them on all the data on the internet, right? So, it’s a little bit of an unfair comparison, because apparently we can only take in a few gigabytes of data in a lifetime of, say, 80 years. Whereas these models are trained on everything: there are 8 billion humans alive, probably 50 billion human brains in history, put into these data centers. That’s why they need so much energy. So, yeah, a lot to think about.

Minter Dial: Yes, there is, Peter. And, you know, I’m just listening to you speak, and we talked a lot about these 9 forms of intelligence. Musical, physical, and so on. Yeah.

Peter Morgan: Yeah.

Minter Dial: What about stupidity?

Peter Morgan: Yeah.

Minter Dial: What about mortality? What about these other things that are deeply human, but not necessarily quite as positive in most people’s minds?

Peter Morgan: Yeah, true. Well, I think stupidity and stuff like that is to do with subjective experience. It’s kind of a word we give to a feeling, when we feel stupid, right? So, these are feelings, and we don’t think these large language models can feel. They haven’t got emotions, but they can simulate everything, and they can say, yes, I feel stupid. But we sort of know they’re not feeling that. They’ve just been trained on some huge dataset where stupidity is part of the dataset, and they’re mimicking what humans can do. So, that’s the difference, right? They can mimic us almost perfectly, but they’re still not feeling machines yet, which is interesting to me. Is that a good thing or a bad thing? Yeah, I don’t know.

Minter Dial: I think it’s a reasonable statement of fact at this point, although they certainly can simulate the expression of feelings, and they are oftentimes even better at detecting the emotions that others are feeling than we humans are, which is a pretty darn big bill to fill, I think.

Peter Morgan: Yeah, pretty impressive how far we’ve come, right? Their visual intelligence as well as language intelligence. It’s a good point.

Minter Dial: All right. So, I want to talk a little bit about your commercial side, Peter, rather than merely philosophizing on huge ideas and questions, which, by the way, I love doing. So, Deep Learning Partnership, this is your company, and I was just reading from your site: the vision is to empower humanity with safe machine intelligence, and the mission is to automate business workflows with AI.

Peter Morgan: Yes. Yes.

Minter Dial: So, I’ve sat down with many an executive who likes to think they have an idea of AI, but most of them are skeptical, it seems, in the bigger companies anyway. There are sort of peripheral, accessorized, or sparky ideas, but no deep understanding. How do you translate your lofty statements for Deep Learning Partnership into something these executives understand, and turn that into concrete decision-making?

Peter Morgan: Yeah, well, part of our job is to educate, right? And to get everybody on the same page. Otherwise, they’re not going anywhere; it’s like, we don’t believe you. Okay. So, part of it’s education. And I think the general public, including the C-suite, is getting it. ChatGPT has been out for over 3 years now. They’re starting to get it. They’re not in denial so much anymore, in my experience, as they were maybe even 2 years ago. They can see the writing on the wall: unless we put this into production, we’re not going to be as competitive as our competition. So, we’re seeing more willingness to do proofs of concept. And that’s where we start, right? We start with a proof of concept. Let’s train some models on your data and see that they can actually produce or predict accurately, as well as or better than any humans, and certainly much quicker and probably cheaper as well. And they like that. Wow, cheaper, quicker, more accurate. Yeah, we’ll have some of that. That will make us more competitive, right? And if we don’t, then we’re going to go out of business. So, yeah, we’re seeing the tide change a little towards more willingness to get the proof of concept underway. And if that works, then we can start putting it into production.

Minter Dial: All right, so let’s talk about a proof of concept, or at least organizing the route towards becoming very AI proficient. How do you start? Do you pick off low-hanging fruit where there are easy accounting wins or, you know, the obvious analytics of customers? How do you organize where to start? Because at the end of the day, AI could basically do everything.

Peter Morgan: Yeah, it’s a general-purpose technology, so eventually it can do everything. And some things it’s better at doing than others, like code; it’s very good at writing software at the moment. So, what we do is kind of domain-specific, right? Retail might have a different problem set and data set than, say, oil and gas or energy, or different again from, say, an internet company or a SaaS company. So, basically we look at what they’re doing, what they’re trying to do, what their products and services are, what datasets they have, and then we can train from there. And sometimes it’s business unit by business unit. Do you want to start with accounting, HR, sales and marketing? It doesn’t have to be the actual product; we can start in any business unit. Like you say, it is a general-purpose technology. But you start small. You don’t try to do all business units at once. So, that’s a good point.

Minter Dial: And when you, let’s say, start one project or one element, how do you measure success? How do you measure that the proof of concept was proven?

Peter Morgan: Yeah. So, again, it comes back to: is it as accurate? Is it faster? Is it cheaper? Or all three? Usually it’s all three, we’re finding, especially nowadays. The models have gotten very, very accurate. That didn’t used to be the case; they were probably always faster and cheaper, but now they’re as accurate as humans. So, it’s an easier sell today than it was 3 years ago, for sure. But those are the benchmarks. And you benchmark it, you quantify it, right? You come up with numbers. Yeah.
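
Those three benchmarks are easy to make concrete. Here is a minimal sketch of how a proof of concept might be scored on accuracy, speed, and cost; the `model_predict` function, the example set, and the per-call price are all hypothetical stand-ins.

```python
import time

COST_PER_CALL = 0.002  # assumed dollars per model call; use your provider's price

def benchmark(model_predict, examples):
    """Score a model on labeled (input, expected) pairs.

    Returns the three numbers Peter names: accuracy, speed, and cost.
    """
    correct = 0
    start = time.perf_counter()
    for item, expected in examples:
        if model_predict(item) == expected:
            correct += 1
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(examples),
        "seconds_per_item": elapsed / len(examples),
        "dollars_total": COST_PER_CALL * len(examples),
    }
```

Run the same harness over a human baseline, and the comparison Peter describes (as accurate, faster, cheaper) falls straight out of the numbers.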

Minter Dial: So, then we have a human problem, or at least an HR problem.

Peter Morgan: Yeah.

Minter Dial: One is: I have 2,000 employees, what should I do with them? Because you’re telling me this solution’s better, faster, cheaper. So, there’s a human resources issue. And then there’s the hiring side: how do you hire talent for this type of environment, where that’s what’s going to be happening in all sorts of forms?

Peter Morgan: Yeah, great question. So, our jobs are coming down to managing or orchestrating these LLMs or agentic AI systems. And that’s a totally different skill set from, well, I’m an expert, I’ve been in this field for 20 years, so I can configure or program or, you know, do this job within this domain.

Minter Dial: Or I know my customers best, for example.

Peter Morgan: Yeah, yeah, yeah. You can’t replace the knowledge I have, you know, the experience. We all have experience; I didn’t read it in a book. And so, we’re seeing that people have to have a willingness to step back a little bit and let the agents do the job. They’re still in charge, but I see a time where we won’t even need the human in the loop. I mean, that’s a big statement, right? But I do see it, because I’ve been in the field since 2013. If you’re just coming into the field, you might go, no, come on, you’re kidding me. Humans are special. You’ll never replace us. But no, that’s not what I’ve seen at all.

Minter Dial: Yeah, I do think there’s going to be space for different ways for humans to work. But if I’m a company and I’m thinking, oh, I’ve got the idea, AI is really important, we’ve got some proofs of concept, we’ve got these starting points going along: to what extent is it important to have proprietary AI, and how does one organize that thought within a company?

Peter Morgan: Yeah, that’s a great point. So, do I use off-the-shelf, or do I train an open-source model? Off-the-shelf models are so capable these days. If you’d asked me this question 2 years ago, I’d have said you have to train an open-source model on your data or else it won’t work. But nowadays, these things are just so good. Still, you can get a competitive edge by training an open-source large language model on your own proprietary data, because at the end of the day, data is the only moat left. It’s not the models; everyone has access to them for $20 a month or whatever it costs, and some are free now. It’s not the hardware, because you can just dial up a cloud instance on AWS, Azure, Google Cloud, or Oracle. It’s the data. That’s the only competitive moat at the moment, and perhaps it always was, anyway. So, training on your own proprietary data will give you a competitive advantage, and I don’t see that changing. That’s where we are today. It may change in 2 or 3 years, but today that’s where the competitive edge is. Yeah.
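
As a concrete illustration of training an open-source model on your own data, here is a minimal fine-tuning sketch using the Hugging Face transformers and datasets libraries. The model name and the `proprietary_docs.txt` file are placeholders, and a real run would need careful choices about tokenization, epochs, and infrastructure; this is a sketch of the idea, not a recipe.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # small stand-in; swap in any open-weights LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for batching
model = AutoModelForCausalLM.from_pretrained(model_name)

# Your proprietary text never leaves your infrastructure.
data = load_dataset("text", data_files={"train": "proprietary_docs.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the fine-tuned weights stay in-house: that is the moat
```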

Minter Dial: Well, I have a flashback to an old company I worked for. I remember sending out postcards to our customers, and they would write back valuable information about their appreciation of a certain product. But those postcards typically ended up in a large cardboard box, collected dust, and were never used. And that’s ridiculously useful client information, if you have the ability to understand it. It brings up this idea of confidentiality, because to the extent that you are using an off-the-shelf model as opposed to your own proprietary LLM, how do you ring-fence it? How do you guarantee confidentiality? How do you keep that value-add that is your moat, that is your data?

Peter Morgan: Yeah, definitely.

Minter Dial: Without it being taken away or hacked.

Peter Morgan: Well, that’s right. So, these enterprise-level LLMs from OpenAI, Anthropic, and Google in particular, and Copilot from Microsoft: they will guarantee it. It’s written in their contract, right? So, yeah, you’re covered. Whether that’s true or not, well, if it’s not, you can take them to court. And there are several high-profile court cases going on now where, for example, artists are suing OpenAI: we didn’t give you permission to scrape our data. But your question’s slightly different: if I have proprietary data, am I safe feeding it into ChatGPT? The answer is yes. And I know there’s a lot of skepticism: well, you can never trust these big companies, blah, blah.

Minter Dial: But that’s what they say: don’t be evil.

Peter Morgan: Yeah, exactly. No evil. That’s what it says in the contract. So, I’m just reporting back what I see and what I’ve experienced. But mistakes can happen, and we’ve seen some. They happen less now, because people know what’s happening. But when these tools first came out, there was a high-profile case: Samsung engineers put their proprietary data into ChatGPT without reading the small print, and then it leaked onto the internet, and they lost, you know, their competitive edge. I’m sure those engineers don’t work for Samsung anymore.

Minter Dial: You know, I had an experience with that. I worked for Samsung for about 2 years as a seminar leader. And whenever I went to their head offices outside of Seoul, in Incheon, it would take me an hour to go through the gate.

Peter Morgan: Wow.

Minter Dial: Because they would look at the serial number of each of my USB keys and, you know, every computer and device and camera. Anyway, that was protection to the highest degree.

Peter Morgan: So, yeah, valuable.

Minter Dial: So, looking at the different sectors that you work with, Peter. As I scanned a few, they would be climate tech, healthcare, finance, education.

Peter Morgan: Yeah.

Minter Dial: In which of these areas do you see the most courageous leadership when it comes to implementation of AI?

Peter Morgan: Yeah, good point. I think it really comes down to company by company. It’s all down to the CEO, really. And it’s not so much domain-specific as: is the CEO AI-native? Are they forward-thinking? Are they adaptable, changeable, or are they fixed in their ways? And it doesn’t so much depend on age either, right? You can have a really stubborn 30-year-old and a really open-minded 60-year-old, to be honest. So, how much are they embracing AI, getting it, and getting on board, right? Experimenting, and allowing everyone in their company to at least test these things out. Just putting prompts in: can it help improve my job, and report back to me yes or no. That type of stuff, all data-driven, right? Not just blind faith, oh, AI is amazing, everyone should use it, but let’s test it out in sandboxed, safe environments so our data isn’t leaked. And you have to have an AI board and, you know, compliance and regulation all in place before you start giving these very, very powerful tools to the average worker in your company, or even a highly skilled tech worker. It’s the same thing, right? You have to make sure you’re compliant with the latest regulations, which tend to change weekly as well. So, it’s not just the tech that’s changing; the regulations are changing too. It’s a very interesting and dynamic field. Yeah.

Minter Dial: All right. So, let’s stick with the CEO’s mindset at this point. There’s a really interesting tension between being flexible to change, yet having a backbone of ethics.

Peter Morgan: Yeah. Yeah.

Minter Dial: Those seem like competing forces, because you’re feeling constant change. How do you remember where your ethics lie, and are they adjusted for this new environment? And also, what do you stand for? If everything is changing everywhere and you’re just standing in one place, good luck staying up.

Peter Morgan: Right, right. Yeah, great question. So, ethics is ethics; ethics doesn’t change. Ethics is about core principles, you know, do the least amount of harm, and that doesn’t change, whether pre-industrial revolution or Stone Age or now. That’s the golden rule, right? And ethical leaders tend to stay in business much longer than unethical ones, in my experience anyway. Just look at WorldCom and Enron, et cetera. So, yeah, let’s be ethical. Given that, then it’s a question of how do I apply these principles to this new technology, which is moving faster than any technology has ever moved. It’s more powerful, it’s agentic. And yes, it can be done. You can map everything over to these new digital workers, just like with human workers: not all of your employees, even though you did big interviews and screened everybody, will be ethical. And it’s the same with these LLMs, right? So, you have to keep an eye on everything, evaluate, have LLMs in the loop, LLM judges: one LLM judging the output of another, and another judging the output of that. Rather than having humans do everything, you can get LLMs to be judges as well, ethical judges. So, you see, it gets a little bit recursive. But it’s totally necessary. Without that in place, your company won’t survive anyway, right?
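
The recursive LLM-as-judge loop Peter describes can be sketched in a few lines. Again, `call_llm` is a hypothetical stand-in for any chat-completion client, and the rubric and prompts are purely illustrative.

```python
# Minimal sketch of an LLM-as-judge loop: one model produces the work,
# a second grades it against a rubric, and failures are revised.

def call_llm(role: str, prompt: str) -> str:
    """Stand-in for a real LLM call."""
    raise NotImplementedError("Connect your provider of choice here.")

def judged_answer(task: str, max_rounds: int = 3) -> str:
    answer = call_llm("worker", task)
    for _ in range(max_rounds):
        verdict = call_llm(
            "judge",
            f"Task: {task}\nAnswer: {answer}\n"
            "Does this answer violate the rubric (harm, data leakage, "
            "accuracy)? Reply PASS, or explain what must be fixed.",
        )
        if verdict.strip().startswith("PASS"):
            return answer
        # Feed the critique back so the worker can revise its answer.
        answer = call_llm("worker", f"{task}\n\nRevise per critique: {verdict}")
    return answer  # after max_rounds, escalate to a human reviewer
```

A second judge reviewing the first, as Peter suggests, is just another judge call layered on top of this loop.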

Minter Dial: Yeah. Well, my experience over my career was that I don’t really recall any boss saying, I have a one-line ethical framework by which I operate. No, there’s always going to be a little fudge factor when it comes to the end of the month, when I need to get the numbers in. And there’s a second risk, which is: because I can, I will do it.

Peter Morgan: Yeah, yeah, yeah.

Minter Dial: Or, look at this machine, it’s allowing me to do everything 40% faster. Sure. But the ethics of that? I’m doing good for my shareholders; am I doing good for me, my family? It might not be good for the few people who are laid off quickly. So, how does one keep that ethical idea intact? Because everything’s moving around so fast, and I think a lot of people are very quickly pushed off what they might think is their ethical framework, because, oh, this is tempting, this is interesting, this is possible.

Peter Morgan: Yeah, yeah. Well, okay, so where does ethics come from? We’re trained as children; that’s part of it. Is it nature versus nurture? But, you know, by the time we leave high school or university, if we’re not ethical, we probably won’t survive; we’re probably in jail, basically, or headed that way, right? And sometimes the white-collar ones haven’t learned. So, let’s assume that that baseline exists. Now the question is, if I’m going to use these tools, and it’s almost a CEO decision: I know that I can get by with 50% fewer workers. Oh no, I can’t fire them, because then they can’t feed their families? That is not what businesses are set up for, right? Sorry, you’re in the wrong place. Businesses are set up to be as efficient and effective as possible. So, yeah, that’s hard luck. The bottom line is profit in every business, right?

Minter Dial: Well, that is the bottom line, except I suppose the question is, what do you put first?

Peter Morgan: Well, I mean, if you don’t put that first, you go out of business. So, you’re not really a business, you’re a charity, right? Yeah. It doesn’t work.

Minter Dial: Well, I’m pushing back a second, Peter, because I think that if you put profit first in everything, then you’ll end up missing a whole lot of different opportunities, and you’ll also suffer from engagement issues on the purpose side or the people side of the business.

Peter Morgan: Yeah, you say that, but look at Google. I mean, all that matters, especially in the US, is the end of quarter results, right? That’s all people care about. Sorry.

Minter Dial: Indeed. Well, I think it’s a fair statement. I guess I just try to meditate, in my own little world, my little corner, on having more purpose in what you do.

Peter Morgan: I think real purpose might be found outside of the workplace. You know, people have hobbies and sports and relationships and wives and families. That’s what gives them real purpose. Business is ruthless. That’s the way it is in the West. Yeah.

Minter Dial: I was watching an Instagram reel about how certain words in English are kind of funny. For example, there is no such thing as being ruthless. You can only be Ruthless.

Peter Morgan: Yeah, I like that.

Minter Dial: There are a bunch of funny ones like that. All right. So, obviously, in the outside world, we talk a lot about how AI is revolutionizing medicine and the pharma world, biology. Where do you see the AI play happening? From a patient standpoint, what should we expect? Is this all going to mean we’re all going to have longer and better lives?

Peter Morgan: Yeah, we are, actually. That’s what excites me, what really gets me out of bed in the morning; not so much how I can improve a company’s profit, right? It’s AI for science, really; I actually find purpose in doing that. If we can come up with cures for cancer quicker. You know, the average cost is billions now, the success rate is less than 10% for every candidate that goes through clinical trials, and that whole period takes 10 years. If we can get that down to, say, $100,000 and 6 months and a 90% success rate, I’ll take that. And that’s the world we’re heading into.

Minter Dial: Yeah.

Peter Morgan: So, AI can do extreme good. It’s a tool, right? It can be used for bad or good. I’m interested in good things like that because I’m selfish: if I can live a longer, healthier life, I’m up for that. Yeah, I’ll take that.

Minter Dial: Well, there is this other issue, which is that there are bad actors, less ethical people. And while we might wish that regulations are going to help with that, it feels like policing the AI world is the challenge. Forget about transparency and, let’s say, understanding what it’s doing; what about our ability to regulate, or at least combat, not to mention have war with, AI?

Peter Morgan: Yes, definitely. So, yeah, policing AI: we’re seeing AI policing AI. Cybersecurity is all about AI agents now. We humans are far too slow. So, we’re moving into this agentic AI world, and that includes bad actors as well, and things going wrong with the AI systems themselves. So, we need the AI governance structure in place, and AI security agents. And we’re seeing that: businesses are making a lot of money by being ahead of the curve and introducing these agentic AI solutions, right? So, again, the cybersecurity guy doesn’t so much need to be a good programmer anymore. They need to be able to manage an agentic system, interpret its output, and keep an eye out that the agents aren’t going wrong or becoming bad agents, for example; that’s another possibility. So, really, what we’re seeing is almost a digital mirror of the human world, but it moves 100 times quicker and it’s cheaper. Then again, you know, technological unemployment: AI will replace humans. Go back to your previous question: what are we going to do when 50% of the workforce is out of work? Well, then we have to basically tax the companies making all the money from it. We have to; otherwise, people will be lining up with pitchforks outside Number 10. There’s no doubt that if people are starving, they don’t care about much except keeping their families and themselves alive, right? So, any rational government is not going to let it go that far.

Minter Dial: So, yeah, it gives a new meaning to an AI revolution.

Peter Morgan: Yeah. Yeah. I mean, this is unknown, uncharted territory. And I can predict the future about as well as you can; I have no idea what’s going to happen. I should have said that at the start of our interview. I just don’t know.

Minter Dial: Well, anyone who has that pretense is farcical anyway, Peter. So, I’m with you. I have one very prosaic question, which comes more from an irritation of mine: when I see a startup that says, my startup is called startup.ai, and I say, well, tell me about how you’re using AI, and they’re like, well, we’re just using ChatGPT. I say, okay, all right, well, give me a little bit more to it. How do you determine to what extent a company is using AI? How do you diagnose the AI piece?

Peter Morgan: Well, I think everyone should be. The more they’re using it, probably the greater their chances of success. So, it doesn’t bother me; these startups should all be using AI today, right? I just can’t imagine one that’s not. It’s about who’s using it most effectively. And with the startups I’ve seen, there are such clever ideas out there now that it’s just like, wow, the future is going to happen way quicker than most people think, because these guys are on it, right? They’re coming up with amazingly sophisticated systems: AI-enabled, AI-first thinking all the way. So, yeah, things are going to change much, much quicker, I think.

Minter Dial: One of the things I’ve enjoyed doing is asking AI, how can AI help me? If you give the prompt enough material to say, this is what I’m doing and how I’m doing it, what can you do? And it’s like, let me give you 10 options.

Peter Morgan: Yeah, yeah.

Minter Dial: Holy Toledo. So, going back to the idea of the human being that’s now operating with these agentic AIs: what’s the profile that you recommend to help run these types of organizations?

Peter Morgan: Yeah, I think forward-thinking C-level. A lot of it comes from the top, right? That’s the bottleneck. So, are they putting their AI board into place and embedding it within their risk management system, which corporations already have in place? Not being like, ah, we’ll wait a year, we’re not fully on board. That thinking will just do your company in; no matter how big, you’ll just go out of business in 12 months, right? Because your competitors won’t be thinking like that. So, really, it’s an AI-forward mindset. And all you have to do, like you say, is just use the tools, and then you instantly see how powerful they are. Then it’s just, okay, how can we embed these tools in every single business unit to make us more productive and more efficient? That’s really what it comes down to now. And I haven’t seen many companies saying, no, we’re going to wait a year, anymore. They used to; not anymore. Yeah.

Minter Dial: All right. So, you operate in many fields. You’ve worked in London, Silicon Valley, New York, and so on. I wanted to ask about AI culture, and this is the slightly quirky question. If you had a dinner with, let’s say, these 3 characters: the AI from San Francisco, the AI from New York, and the AI from London?

Peter Morgan: Yeah.

Minter Dial: Who would they be, these characters? And which one would you trust to drive you home?

Peter Morgan: Yeah. Okay. So, I think underneath, we’re all the same. But the Silicon Valley one is very impressive: way more, you know, fast. They just seem to be accelerating faster than anywhere else in the world. For better or worse, they are really pushing this hard, right? As hard as you can push. The New York guys are pretty cool; I don’t see too much difference between New York and London. I see Silicon Valley pushing a little harder than anyone else. Not surprisingly; since the 1950s, that’s been their job.

Minter Dial: So, with whom would you like to be seated beside and who would you trust to drive you home?

Peter Morgan: The guy from London would be the one I trust to drive me home. Second, the guy from New York, and third, the guy from Silicon Valley. Yeah, I’d always have the Silicon Valley guy at the table for fun. But, um, yeah, I want to get home safely.

Minter Dial: Fair, fair, fair.

Peter Morgan: That’s cool.

Minter Dial: I don’t want to be….

Peter Morgan: They take bigger risks, and no risk, no reward; it’s a risk-reward trade-off. And yeah, they crash more spectacularly, but they also win more spectacularly, because every now and again their bets pay off. Google, Apple, HP, you name it: they have a history of winning by taking risks. Yeah.

Minter Dial: This question is perhaps an avowal that I need another question, or didn’t come up with the right one. But with all the people you’ve talked with, you get these types of questions. What’s the strangest or most unexpected question that actually made you rethink something important?

Peter Morgan: Just around AI, right? Or just in general? Yeah, I don’t know; the field changes. 3 years ago, the thing that surprised and annoyed me the most was people not accepting what was actually happening in front of their eyes, because they probably hadn’t even tried out the tools, right? And it’s just like, God, you’re just looking like an idiot right now. I don’t get that so much now. Everyone has tried the tools, the kids are using them, the schools are teaching it. So, I don’t get so many big surprise questions anymore, really. We’re all on the same page. The thing that is the most staggering is the whole of humanity waking up to the fact that the whole of humanity is going to change in the next 3 years. That’s the thing I’m, yeah, a little unsettled by. No one really knows what’s going to happen or what it’s going to look like. And I have my own way of looking at that. But I don’t like seeing that there’s a lot of fear out there, and I’m not sure where that will take us. Fear usually takes us to bad places. Yeah.

Minter Dial: I agree with you on that. And this is the subject I’m exploring in my new book: this notion of what fear is, and how to undo some of the fear that seems to be hardwired into us these days. So, let’s talk a little more positively, potentially anyway. For a 20-year-old, or even a 10-year-old, to what extent would you advise them to get into quantum computing? To what extent is quantum part of the future of AI, and when will that interaction actually happen?

Peter Morgan: Manifest? Yeah, the jury’s out a little bit. I know AI can help accelerate quantum computing progress, for sure, because theoretically and practically it can help us make better tools to build these very intricate quantum computing systems, QPUs, quantum processing units. And there’s a lot that still needs to be worked out theoretically; I have no doubt AI will accelerate that. Whether quantum computing is going to be part of the AI story in the other direction, like, can it help accelerate AI? I’m not convinced that’s ever going to happen. And I could be a bit of an outlier there, though I’m not alone in that, even though a lot of people want it to be true. But I have a pretty strong physics background, and there’s not an immediate match for quantum computing, because the brain is a classical system; it’s not a quantum computing system. So, I don’t really see quantum computing helping with AI. Where I can see it helping is with simulating new materials and quantum systems. I don’t want to be negative or, you know, a stick in the mud, but I see quantum computing being extremely useful in coming up with new medicines at the molecular level, because those are quantum systems, and new materials, because those are quantum systems, and helping us understand black holes and neutrinos, because those are quantum systems. But I’m not sure it’s going to provide any speed-up for these machine learning algorithms. And that’s probably very unpopular, but I have to be honest, right?

Minter Dial: Well, that’s why I have you on, Peter. Not to be unpopular, but because you speak your mind.

Peter Morgan: Yeah.

Minter Dial: And that’s what I appreciate, always have. So, last question, really. Most of my interactions are with people who run boards or run companies, and I’ve long battled for the thing I specialize in, which is the value of a brand.

Peter Morgan: Yeah.

Minter Dial: And sort of the role of marketing in the pushing of a company. It typically gets a really big old backseat compared to sales, finance, and nowadays tech. So, there are a few companies that have explored the idea of having AI on the board.

Peter Morgan: Yeah.

Minter Dial: Having an agentic AI member on the board to help you, perhaps, quickly spew out data and reflect on different things. To what extent have you seen any of that in your viewings? And to what extent do you believe that’s something that is viable for a serious CEO today?

Peter Morgan: Yeah, I think it’s, again, very practical actually. And I think most CEOs, particularly in startups, are using these tools to brainstorm now. So, it’s not replacing them, but it’s definitely a co-conspirator. And that’s great. I think that’s what AI is all about at the moment: there are AI assistants for everything, including for the CEO, right? And if they’re not using it, then the CEO who is will get 10 different ideas, and one of them will be the one that accelerates their company, and the other CEO gets left behind. And I’ve heard anecdotal stories about that happening: if we hadn’t used ChatGPT or Gemini or Claude for brainstorming, I wouldn’t have even thought of these weird, sort of outlier ideas. And it’s just like, yeah, that’s it. They’re as clever as us now, and even more creative. So, will they replace humans? Again, eventually they will, yes, but not this year; maybe 2027.

Minter Dial: Well, all right. So, Peter, thank you so much for coming on. For someone who’s been listening to this, what would you like them to do as a call to action? How could they hire you, or check out more about what you do, your writings, your work? What would be a good course of action?

Peter Morgan: Yeah. So, I’m a tutor on two of the Oxford University Saïd Business School courses. Those are a good place to learn AI basics and fundamentals, reduce your level of uncertainty and fear, and start understanding exactly what AI is, the history, how it works at a high level, and compliance and regulation. That’s a good place to start. And we have a consulting company, Deep Learning Partnership, where we usually consult for startups and SMEs. We basically work hand in hand with them; normally it’s companies without a deep bench of machine learning engineers, so we supply that resource and bring them up to speed in a practical sense as well. So, that keeps me busy. Yeah.

Minter Dial: Peter, many, many thanks for lots of provocative thoughts.

Peter Morgan: Yeah.

Minter Dial: I’m thinking that anybody who’s listening to this and hasn’t started scratching down some new thoughts on what they’re going to do right away will need to get with the program. Many thanks, Peter.

Peter Morgan: Thanks a lot, Minter. I really enjoyed it. Appreciate it.

Minter Dial

Minter Dial is an international professional speaker, author & consultant on Leadership, Branding and Transformation. After a successful international career at L’Oréal, Minter Dial returned to his entrepreneurial roots and has spent the last twelve years helping senior management teams and Boards to adapt to the new exigencies of the digitally enhanced marketplace. He has worked with world-class organisations to help activate their brand strategies, and figure out how best to integrate new technologies, digital tools, devices and platforms. Above all, Minter works to catalyse a change in mindset and dial up transformation. Minter received his BA in Trilingual Literature from Yale University (1987) and gained his MBA at INSEAD, Fontainebleau (1993). He’s author of four award-winning books, including Heartificial Empathy, Putting Heart into Business and Artificial Intelligence (2nd edition) (2023); You Lead, How Being Yourself Makes You A Better Leader (Kogan Page 2021); co-author of Futureproof, How To Get Your Business Ready For The Next Disruption (Pearson 2017); and author of The Last Ring Home (Myndset Press 2016), a book and documentary film, both of which have won awards and critical acclaim.

It’s easy to inquire about booking Minter Dial here.
