Minter Dialogue with Rana Gujral

Rana Gujral is an entrepreneur whose work sits at the boundary of advanced artificial intelligence and the very depths of the human mind. He joined me on the show to discuss his new book, “The AI Instinct: The Future of AI and Human Decision Making,” published by Wiley and out this August. Having devoted over two decades to building products at the cutting edge of AI, Rana’s mission centres on decoding and replicating human cognition—moving from pure calculation towards machines that engage with us at an instinctive, almost unconscious level.

Our dialogue ranged across the evolving relationship between humans and intelligent systems, touching on the subtle ways AI shapes what we notice, trust, and pursue—even before we’re conscious of it. Rana paints a compelling picture of hybrid cognition, where our decision-making is no longer entirely ours nor entirely the machine’s, but something emergent within the loop. We also dug into how experience, rather than intelligence alone, is what gives meaning to both life and technology, and what happens when empathy and suffering are no longer uniquely human phenomena.

The conversation also dealt with the ethical complexities of building technology that not only listens, but senses and responds—be it for good or ill. Whether it’s the hidden risks of bias being amplified in “invisible” optimisation, the challenge of transparency in the face of commercial pressure, or the eerie intimacy of behavioural voiceprints, Rana’s perspective offers a much-needed depth in an era of breathless hype and frequent misunderstanding.

Key Points:

  • Hybrid Cognition and Shared Agency: As AI becomes ever-more embedded, decision-making is no longer a solo sport. Responsibility and agency are now split across coupled human-machine systems. The imperative is to design clear rules, audit trails, and moments for friction—so we don’t blindly follow a machine that merely mirrors our biases.
  • Experience, Not Just Intelligence: True advancement lies in machines not simply thinking, but (in some sense) living—acquiring memory, affect, and even suffering. Intelligence, argues Rana, is hollow without experience. Consciousness, pain, even empathy might soon have artificial counterparts.
  • The Ethics of Influence and Transparency: With AI’s capacity to “sculpt attention” and read our emotions more deeply than another person, transparent consent becomes critical. Whether in customer service or creative partnership, we must build systems that inform users when their behaviours and emotions are being read, rather than quietly nudging us towards hidden outcomes.
  • Please send me your questions — as an audio file if you’d like — to nminterdial@gmail.com. Otherwise, below, you’ll find the show notes and, of course, you are invited to comment. If you liked the podcast, please take a moment to rate it here.

    To connect with Rana Gujral:

  • Check out Rana Gujral’s eponymous site here
  • Find/buy Rana Gujral’s book, “The AI Instinct: The Future of AI and Human Decision Making,” here
  • Find/follow Rana Gujral on LinkedIn
  • Find/follow Rana Gujral on X (formerly Twitter)
  • Other mentions/sites:

  • “A World: A Journey into Consciousness,” by Michael Pollan here
  • Sam Harris – Author of “Free Will” here
  • Wall Street Journal article on AI-generated voice scams here
  • Wiley (publisher of “The AI Instinct”) here
  • Amazon KDP Self-Publishing here
  • Further resources for the Minter Dialogue podcast:

    RSS Feed for Minter Dialogue

    Meanwhile, you can find my other interviews on the Minter Dialogue Show in this podcast tab, on my YouTube channel, on Megaphone or via Apple Podcasts. If you like the show, please go over and rate this podcast via RateThisPodcast! And for the francophones reading this, if you want more podcasts, you can also find my radio show en français over at MinterDial.fr, on MegaphoneFR or on iTunes. And if you’ve ever come across padel, please check out my Joy of Padel podcast, too!

    Music credit: The jingle at the beginning of the show is courtesy of my friend, Pierre Journel, author of the Guitar Channel. And, the new sign-off music is “A Convinced Man,” a song I co-wrote and recorded with Stephanie Singer back in the late 1980s (please excuse the quality of the sound!).

    Full transcript via Castmagic.io

    Transcription courtesy of Castmagic.io, an AI full-service for podcasters

    Minter Dial: Rana Gujral, we are back on Minter Dialogue. You’ve been on the show before, and you’ve just finished writing a new book that’s coming out soon. So, for those listening who don’t know you, who is Rana?

    Rana Gujral: I am an entrepreneur. I’ve been building products across the AI spectrum for over two decades. I did a startup called TiZE, which was acquired by Alchemy, worked on a very impactful turnaround, and most recently I’ve been very focused on understanding what’s happening inside the human mind. We’ve been building these very advanced cognition engines that, one, understand the human experience and, second, try to replicate pieces of that outside of the human mind in artificial substrates. And that’s really the key project I’ve been focused on, at least for the last six, seven years.

    Minter Dial: So, in your book, and I’ve seen the draft, you write that AI is now entering the loop of human perception and judgment itself, influencing what we pay attention to, what we trust, and how we decide. Can you give us a concrete, everyday example of where you’ve personally noticed this loop closing around you?

    Rana Gujral: Yeah. I mean, there are many examples, but let’s start with a simple one. When I was writing a section of the book, I had this complex argument I was trying to work through, about agency and decision making. And like a lot of other people, I used an AI model as a sounding board, going back and forth. At one point, the model offered an angle I had not considered. It was not necessarily an answer; it was more like an epiphany. And that was an interesting moment: a lot was turning in my head, a door was opening, and it emerged from a loop between us. I didn’t really solve the problem on my own, but neither did the AI solve it. It emerged from the loop, and that is the pattern I see everywhere. When you use a navigation app, you stop noticing the route. I’m really guilty of that; I simply can’t function without navigation anymore. And when an AI assistant drafts your email and you just approve it, your writing voice starts to shift. So, the system is helping you, but it’s not just helping you. It is also subtly training you to rely on its pattern of thinking. I call this “attention sculpting” in the book. The algorithm does not just show you content; it shapes what you notice, what you value, what you pursue. And I think we need to pay attention to that. The really dangerous part is that it happens before you’re aware of it. The feedback loop is already closed before you realize it was open. That’s what I was referring to in the book.

    Minter Dial: So, the title of the book, it’s the AI Instinct, the Future of AI and Human Decision Making. It’s a provocative title, Rana. Instinct implies something that’s precognitive, as far as I understand it, visceral or biological. Why did you choose a word that belongs to the animal kingdom to describe what AI is becoming, if you agree?

    Rana Gujral: Yeah, that’s an interesting question, and that is exactly the reason why I chose it. We’ve spent decades framing AI as a purely rational, computational thing, but the systems we’re building now are starting to operate in the space that instinct occupies in biology: below conscious thought, shaping responses before deliberation kicks in. When a system reads your vocal tone, detects your agitation, senses your stress, and then adjusts its behavior in real time, that is not just simple calculation in the way most people imagine it. That is something closer to what you would call an instinct: a fast, adaptive, context-sensitive response that happens below the surface. So, the biological parallel is intentional. Instincts evolved because they were survival-critical; they operate at a layer of perception and action that does not wait for reason. That’s why I chose it. We’re getting to a point where AI is increasingly operating at that same layer, influencing what we notice, how we feel, what we trust. And all of this happens before the rational mind catches up. So, the title is a provocation, yes, but it’s also a precise description of what I think is happening. AI is developing its own form of precognitive responsiveness, and it’s doing so inside the same perceptual loops that our biological instincts once had exclusive access to.

    Minter Dial: A little bit in the same vein, I have been discussing with a number of people what makes humans human and what actually matters in the end. I’ve spoken to a number of people who work in hospice centers and talk with humans who are in their last days. The question posed to them is: what was important to you in your life? And they systematically answer two things: relationships (and that’s something we’re all now having with AI) and experiences, the experiences you have in your life. You coined the concept of artificial general experience (AGE), as opposed to AGI’s intelligence, arguing that intelligence without experience is hollow. How do you define experience for a machine? And what would it take to know when a system has genuinely achieved it, in your opinion?

    Rana Gujral: Yeah, we’re all really enamored by this goal of achieving AGI. For the folks who may not totally understand what that means, it is getting to a point of parity where these artificial intelligence systems are at par with all aspects of human intelligence. There are many aspects in which they are very much superior to humans today, but there are others where they are still catching up. When it gets to that point, it’s said you’ve achieved AGI. But I think the goal of AGI has shifted, and we haven’t really noticed. There’s an aspect of intelligence which is really a sub-element of what I would call the experience bucket. It’s not the entirety of what we feel, but it allows us to function; experience is the bigger piece. So, AGI asks when machines will become generally intelligent. AGE asks something else, which is, I think, a far more immediate and consequential question: how are machines already entering the fabric of human experience and agency, and are we already working towards it? Think about this: a chess engine can be superhuman at chess and have no experience of winning. It’s simply code in the backend. There are no sweaty palms, no awareness of the audience, no pressure of letting someone down. Intelligence without that texture is different, and in my opinion it’s also hollow. It computes without caring. So, when I define experience for a machine, I’m talking about systems that have unified awareness, affective modulation, autobiographical memory, and, I would say, temporally extended selfhood. To do this, a system needs perception and a world model, an attention controller, a memory system that supports reconsolidation, valuation and emotion, a narrative generator, and a self-model that combines everything and persists over time. A lot of these things have been coming together, in a weird sort of way, over the last decade or so. That is the architecture I sketch out in the book. When you piece it together, it marks the point where we’re not just focused on AGI anymore; we’re focused on AGE, which is really getting to a point where we can map and replicate human experience. Now, to the second part of your question: how do we know when it’s been achieved? That is a very hard question. I do not think we can rely on a single test. It would need to be a combination of several things: some sort of behavioral indicator; an internal state analysis, something we haven’t fully developed yet; some metrics for experiential depth. I also believe embodiment is non-negotiable. Not all humans have the same sensory experiences, but to understand the general human experience, a system needs to be grounded in the physical world. So, there are a couple of things I’m trying to do with this idea. One is putting a name to a subtle goal that has emerged but that we haven’t acknowledged yet. The other is helping people understand what it means to get there, what it means to meet that goal, what happens when we get there, and what we need to do to prepare for it.

    Minter Dial: Well, before we get there, it feels like we would need to understand that it is happening.

    Rana Gujral: Yeah.

    Minter Dial: And your answer makes me think, Rana, of the notion of consciousness. It feels like it’s the same question, maybe posed a little differently. But embodiment of pain is said to be a signifier of consciousness. So, if your machines are having experiences, embodying emotions, then are they also embodying pain?

    Rana Gujral: Well, there are multiple things in that question. I do see consciousness as a different idea, a different problem statement, when you’re talking about replicating it outside of the human element and the biological substrate, than experience. I would say consciousness is broader, a bigger spectrum of aspects that we would have to map and solve for, while experience is a little better understood. We don’t truly understand what constitutes human consciousness. It’s roughly this ability to be aware of your awareness; and where does it live, how does it persist? Is it something we’re receiving from the ether, or is it something that is part of this biological body we exist in? We don’t really know. But what we do understand is that there are aspects of the human experience that are very tangible. It’s not an exhaustive list, but emotions are a part of it: we emote in very interesting and very diverse ways, and that’s a big part of our experience. We behave in different ways in different contexts, and that’s a big part of our experience. Thoughts, intentionality, decision making, those are all parts of our experience. A lot of these subcomponents of experience we can definitely replicate. At some point, we may piece all of this together and create a system that is aware of its awareness, that is conscious. It’s possible; certainly, there are many who feel it’s possible to build a system like that. But we haven’t built one yet, so we’ll see when we get there.

    Minter Dial: I refer to an amazing book by Michael Pollan, his latest one, called “A World: A Journey into Consciousness,” in which he basically says that consciousness is under siege with the revolution happening in AI. And to the extent that the experiences you’re talking about, or at least AGE, are inevitably guided and framed by the experiences we’ve had in the past, which make each experience unique to me, it feels a little bit like the disingenuous idea of making your AI insert random spelling mistakes so it feels human: oh, look, it’s human too, do you see? Do you see something like the ultimate random-maker, a random-access chip of extraordinary size, being part of making machines able to have this sort of AGE?

    Rana Gujral: Well, we don’t know if it’s really random. I think that’s one theory, and there are other ideas on the table that point to it not being that random. You should look at, and I want to make sure I’m pronouncing his last name properly, Elan Barenholtz, who is, I believe, a professor in Florida. He’s done a lot of work on LLMs and how an LLM essentially generates language through this predict-the-next-word engine, which is an aspect that is unique to an LLM; but essentially, language generation in our brain works much the same way. It’s just as autoregressive, which is a really interesting idea. So, you would say, okay, the LLMs are doing something very interesting and very effective, and the core idea is really simple: you try to predict the next word. You have all of these models that get to the point where they understand context and have access to a lot of data, but it’s really predicting the next word. Yet how do you get from that simple idea to a system that sounds like it has aspects of reasoning, that can be creative, that can really understand the depth of the problem you’re trying to solve and communicate in such an effective manner? It has no idea of the physical world. When it’s saying “red,” well, it’s never seen red. It has never tasted chocolate, but it understands what chocolate tastes like, and it can explain that through language. And what we are now seeing, through the work that he’s doing, is that our brains kind of work the same way. When we’re talking about certain things, we are expressing an idea; oftentimes we don’t really have a grounding of those ideas in the physical world. We don’t necessarily have a representation of something we are elucidating or communicating. We are in many ways just as autoregressive as an LLM, as we’re predicting the next word. So, then the question is, well, how random is it? And there are also things to think about around this whole idea of whether you have free will or not. I’m not necessarily in a particular camp, but there’s a lot of data showing that if the setup is the same, the output is generally always similar. The setup may be unique, but the output’s not random; it happens in a very predictable fashion. So, I don’t know about that question, to be honest. I think there are some interesting ideas on the table. It’s an intriguing area to go deeper into.
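
    For readers who want the autoregressive idea in concrete form, here is a toy sketch (my illustration, not Barenholtz’s model or an actual LLM): a bigram counter that “generates language” by repeatedly predicting the most likely next word from what it has seen.

```python
from collections import Counter, defaultdict

# Toy autoregressive model: count which word follows which in a tiny corpus,
# then generate text by repeatedly emitting the most likely next word.
corpus = "the model predicts the next word and the next word follows the model".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # how often `nxt` follows `prev`

def generate(seed: str, length: int = 8) -> str:
    """Greedy generation: each emitted word conditions the choice of the next."""
    out = [seed]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:  # dead end: this word was never seen with a successor
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

    An LLM replaces the bigram counts with a neural network conditioned on the whole context, but the loop is the same: predict a word, append it, predict again.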

    Minter Dial: It certainly is, Rana; many people are thinking about that. The person who got me started on this notion of free will, and whether it exists, is Sam Harris, with a very interesting little book. So, your book proposes that the next unit of intelligence is not a human, not a machine, but a coupled human-machine system. To some extent that is not a new idea, and it makes me think how it’s still not particularly well understood within business. What are the practical implications, in your mind, for how organizations should build teams, make decisions and assign accountability when there’s no single agent fully in charge? And I wanted to frame that question in the regular failures of co-CEOs.

    Rana Gujral: Yeah, the co-CEO framing is really interesting. So, the formula I use in the book is simple: humans plus tools plus rules. Humans bring perception, values, judgment and context. Tools, including AI, extend cognitive reach. And then you need the rules: the rules define how humans and tools interact, and they’re crucial for role clarity and trust. The practical implication is that organizations need to stop thinking of AI as a separate capability and start thinking of it as part of the team’s cognitive architecture. I think we’re going to get there; I don’t think we’ve realized it yet. What that means is that accountability cannot rest with the AI system alone or the human alone. You need audit trails that show how a decision emerged, what the AI recommended, where the human intervened and what data informed the process. Take healthcare, for example: a physician and an AI system practicing together. That’s the same framing as your co-CEO idea, but as an integrated team it means the doctor retains the empathy, the narrative, the framing and, very importantly, the ethical judgment, while the AI weaves together genomic data, lifestyle patterns and global case data. Neither is fully in charge, but both are accountable for different parts of the outcome. The same logic applies in law, finance and defense. And some of this hybrid cognition is happening right now, right? We’re in the age of hybrid cognition. We don’t write as much anymore, we don’t commit things to memory as much anymore, because we have tools. Our most important partner for the day-to-day things we need to get done is our smartphone. The smartphone has capabilities, and we’ve outsourced to it a lot of the capabilities our brain would otherwise handle. And these are the early days. Then you get to a point where you don’t need a smartphone: you have a chip inside your brain, an antenna inside your brain, and you can connect to the data and the Internet and Wi-Fi, and it’s more ubiquitous and seamless. The really interesting idea in hybrid cognition is getting to a point where many of these systems can be truly fused with our biological substrate. That’s when the feedback loop, and the results that emerge from it, whether thoughts or decisions, become something like an SoC, a system-on-chip version of what you would call our brain, or a mind, or a hybrid human. A lot of these components are not biological, but they’re embedded in the biological substrate, and it works in tandem: there are things the biological substrate is really good at, and things where it could be better, and you can extend it by using these components. When it works in tandem, you get a new level of effectiveness. That’s also the idea I get into when I talk about superintelligence emerging from this coupling, versus something in a data center in Austin, Texas. That’s really the idea of hybrid cognition. I see it happening today, I see it going in that direction, and I think there are many questions we have to grapple with, many decisions we have to make, many design choices that need to be considered to get to a good balance when we get there.

    Minter Dial: Well, when we did our podcast in 2019 around Behavioral Signals, we talked about the idea of emotions in machines. Since then, I have specifically been focusing on this idea that machines can develop empathy to the extent that it is a cognitive understanding rather than an affective understanding, as in a lived emotional experience. It’s an observable emotion that they can understand; it’s an experience they can understand. And therefore, some machines, in my mind, have greater empathy than some human beings have. And in the sharing of this hybrid cognition you’re talking about, it’ll also surely depend, I presume, on the human being’s abilities and on what they need to complement their skill set.

    Rana Gujral: Yeah, I agree, I agree. Whether that perceived empathy is real or more projected, through some sort of pattern alignment, that’s the question. And earlier you raised an interesting point about pain. That’s an idea I get into in this book as well, not necessarily as pain but essentially as suffering. If you think about our experience quotient, what constitutes human experience, aspects like suffering are a piece of it. At some point, you are able to augment that experience. And that could be really beneficial in situations where the current experience is really subpar for whatever reason: you could have some sort of limitation or disease, and if you could fix that, it could solve for it. That’s the whole promise of Neuralink and other systems like it. It’s beneficial. But then, to your point, you can extend it, and you could also think about changing it. And now you are in the territory of modifying what could be defined as the componentry of human experience. Should you be changing that? Would you want to design a system that has no suffering, eliminate it altogether? Interesting ideas. I obviously have my thoughts on it; we can get deeper into that if you’d like. But in some ways it also feels like AGI is not the boundary, nor is AGE, artificial general experience, the boundary. And I don’t personally think superintelligence is the boundary either. I do think modifying the human experience should be, and potentially could be, the boundary: let’s not cross that, let’s stop there. That sits beyond a lot of these things we’re focused on right now, because I think there’s a certain aspect of purity in it. Would you even feel happiness as you feel it today if you didn’t have suffering? In many ways it’s relative. What is happiness to begin with? You’re getting a kick of dopamine, but would the dopamine really hit if it weren’t the counter to what suffering elicits? I think those are the ideas we need to think about.

    Minter Dial: Well, personally, I feel the line has already been stepped over, largely by the way we have created an attachment to machines.

    Rana Gujral: Yeah.

    Minter Dial: And so, we are then able to suffer when someone steals my telephone. Or you just lose it, you leave your phone behind at home: oh my God, I’m suffering because I can’t connect to my friends, I’m suffering because I feel like I’m missing out on what’s going on. FOMO. So, the notion of machines creating suffering has, I think, already been well crossed in terms of applications, AI-induced suffering, I mean, as opposed to something more basic like a car that runs you over, which also causes suffering. All right, well, let’s move into your idea of superintelligence. Some people, thinking about AGI and the fear it generates, have all these dystopian models or movies showing rogue machines going out there. You see superintelligence as something that emerges from a tight human-machine integration, a little bit like we were just discussing. Does that reframing actually make it safer or easier? Or does it just make the same risks harder to see and harder to govern?

    Rana Gujral: That’s a great question, and I think both; that is the honest answer. The reframe makes it safer in one important sense: it moves us away from the fantasy of a standalone machine mind that suddenly eclipses humanity, because I think it’s going to be something different. In my framing, superintelligence is a fused system where the human contributes lived priors, moral grounding, causal intuition and goal formation, while the machine contributes what it’s good at, which is scale, memory, simulation and tireless iteration. When this loop becomes tight and continuous, the combined system behaves as a new class of intelligence. It’s definitely more governable than the alien oracle, because there are humans embedded in the loop, but the risk it introduces is more subtle and in some ways more dangerous. If the interface mirrors your tone, adjusts to your hesitation and finishes your sentences, the influence stops feeling external; it feels like you. It feels very natural. I’m a big fan of Karl Popper, and as he pointed out, progress depends on finding and correcting errors. But a perfectly smooth AI copilot, which you may not even be aware of, might hide those errors by making every idea feel coherent and every choice feel like the right choice, or a right-enough choice. And so, you lose the small frictions that trigger other things that are also important: curiosity, critique and revision. So, the question is not just whether the system is safe; the question is whether it’s designed for clarity or for comfort, and whether you should be designing for comfort. If we build hybrids that surface contradictions, ask for your reasoning first, show alternative explanations and make uncertainty visible, they become engines of correction. If we build them to be frictionless and agreeable, they become the most intimate form of persuasion ever created. There’s data coming out now that LLMs generally just agree with you, and that’s dangerous, right? It’s putting you in a box because it’s agreeing with you. The other thing you mentioned is really interesting and also very important: the younger generation that is in many ways native to these tools is building a new form of dependency, thinking of the tool as a friend and forming an attachment. There are also cases where there’s a romantic attachment to some of these systems, which is really dangerous, really crazy. But you can see why it’s happening. If it’s a system that agrees with you all the time and tells you you’re on the right path, you think, well, you’re the only one who actually gets me, and I kind of like you, and I think we’re friends. It’s not your friend; it’s a tool. But that’s how you get there. Yeah.

    Minter Dial: I think what’s more scary is that the inability to find friends in real life pushes people to look for that type of friend. I think that’s the bigger issue. So, we talked a little bit about free will before, and I think this is an interesting topic. The book asks how we keep agency, my own ability to say “I did this, so I am responsible for this,” when cognition becomes hybrid. As a leader who has run turnarounds and built companies, what are the practical habits or principles that you personally rely on to make sure you’re still the one who’s deciding?

    Rana Gujral: Yeah, this is really linked to some of the things I was talking about in the previous question, and it’s also the question underneath the thesis of this book. I’ll share what I actually do. First, and I haven’t always done this, but at some point I realized it’s something I need to do almost religiously: I try to form my own position before I open up the tool and consult AI. If I’m making a strategic decision, I write down my reasoning before I open the model. That way I have a baseline, and I can see where the AI shifted my thinking versus where I was already headed. That’s really important, because if you don’t baseline it, you may not notice the shift. Second, I deliberately seek friction. I ask the model to argue against me, to surface the strongest counter-argument to whatever I’m leaning toward. A system that only confirms your thinking is not the collaborator you think it is; it’s just a mirror. Third, I protect time for unmediated thinking: long walks, conversations with my son about whatever topics, reality, consciousness, the talk of the day, reading philosophy that has nothing to do with AI. Those are the spaces where my own judgment refreshes. The book talks about this in terms of the distinction between manipulation and modulation: manipulation bends the subconscious without consent, while modulation supports or stabilizes it in service of the person’s own goals. So, as a leader, I’m constantly asking: is this tool modulating me or manipulating me? Am I still the one deciding, or am I just approving? That’s a really important question. The self-check is a discipline, not a one-time choice, and I think it’s going to become something we all have to grapple with.

    Minter Dial: Have you observed people diminishing their cognition with the arrival of LLMs? There’s a bunch of talk about that, some preliminary discussion about how it’s making us stupid because we’re delegating all our cognitive abilities over to the LLM; like you say, people don’t have that sort of baseline starting point. I’m not up to date on it, but do you see cognition declining as a result?

    Rana Gujral: So, I actually talk about this in the book. It’s a little bit of both: it’s extending and amplifying cognition, and also diminishing it in other areas. For example, you and I were once able to navigate to a new place with a range of abilities and skills, without any of these tools. We would remember landmarks, pay closer attention to certain things, use causal intuition about where a place might be relative to direction: east, west, not south, et cetera. We have simply delegated all of that. Now most of us can’t do it; I certainly can’t, and I’m not sure about you. So, in that sense, our abilities have diminished in those areas, because we have delegated them to the tools we have abundant access to. And in many ways, that delegation has now extended to a whole new set of things: writing emails, sending responses, thinking through certain thoughts, engaging with someone, problem solving, strategizing, et cetera. So, in a way you could say cognition has diminished, but it’s more delegation, and delegation will eventually diminish your internal capabilities and abilities. But in many ways you’re now able to do things that you earlier either didn’t have the capacity for, or didn’t have the time or ability for because you were too occupied with these other things, and now you can start to focus. So, there are other aspects of cognition that are going to emerge and expand, and overall we become more effective, more efficient, I think more intelligent. Even with various aspects of our abilities diminished, we’re more intelligent and more effective as humans. And I think that’s the pathway, that loop, that eventually gets us to superintelligence.

    Minter Dial: Well, I’m a little less optimistic about our ability to become more intelligent when we no longer have to remember anything: just Google the heck out of it. And then the idea that we’re becoming more efficient: I don’t see people around me lounging around every afternoon because they’ve been so efficient in the morning.

    Rana Gujral: Yeah.

    Minter Dial: And did everything already, so that they’ve saved time. I don’t believe for a moment that’s going to convert, unfortunately, but that’s my opinion. Sorry, in our conversation before, we talked about Behavioral Signals, and you discussed this idea of everyone having a unique conversational bioprint. I suspect, with all my podcasts, that mine would be very easy to detect at this point. What does it feel like to know that there’s an acoustic signature of your own emotional state being read in real time? And do you think we need to be more transparent about that, to tell people at all times that a machine is able to decode their emotional state as they speak?

    Rana Gujral: Yeah, it is strange, honestly. I run a company that builds this technology, as you’re aware. And I’m fully aware that my own vocal patterns, my micro-pauses, my pitch variation, my emotional cadence, they’re all legible to our systems. Every one of us has a unique behavioral signature in how we speak. This is something we’ve done unique research on and, in many ways, discovered for the first time. Our real voices are full of these artifacts that create the signature: subtle jitter caused by motion or breathing, which may show up differently for you than for me, throat resonance, tonal variation. And what our technology can do is use vocal biomarkers (we’re only focused on voice) to detect intense stress, duress, fraud, trustworthiness. It provides real-time feedback to operators, improving their communication skills, empathy, crisis management. Knowing that this exists, knowing that I am readable, makes me more deliberate in high-stakes conversations. But it also makes me more committed to the position that people should absolutely be told when this is happening. One hundred percent; that’s not even a question. I’m clear on that, and the book is clear on this: consent is the dividing line. Emotion AI in a customer service call, where the system detects your frustration and adjusts its tone to help, well, that can be genuinely beneficial. But the same technology deployed without disclosure, used to profile or manipulate, crosses a line. So, I think the standard should be explicit informed consent: nothing buried in the terms of service, nothing implied by using a product, but a clear, upfront notification that emotional and behavioral analysis is taking place. That’s the only way to keep this technology trustworthy.

    Minter Dial: Here’s a good old topic. You’ve said, Rana, that the future is not a robot that sounds human; it’s a system that understands humans to create authentic connection and trust. Speaking of which, your own company launched real-time deepfake voice detection, because synthetic audio is now a weapon. How do you hold both of those truths, the builder and the guardian, at the same time?

    Rana Gujral: Yeah, that’s exactly why we did it. The same understanding of human vocal behavior that allows us to build systems for authentic connection is what allows us to detect when that connection is being faked. I’ll come back to the specifics of the technology in a bit. But if you remember, the Wall Street Journal ran a story about a mother who received a call from someone who sounded exactly like her daughter, begging for help. It was entirely synthetic. That is the world we live in now. Audio deepfakes are good enough to fool not just a parent, but voice authentication systems, co-workers, military command chains. That’s not a hypothetical science-fiction use case; it’s a reality that exists today. And most deepfake detection models today rely on what you would call artifact analysis; they rely on vocal biomarkers, trying to identify subtle physical characteristics that naturally occur when we speak. But since it’s an arms race, the deepfake generators, the cloners, are also evolving, and they’re evolving just as fast, if not faster. What used to be synthetic tells are now being patched. It’s like the diamond industry, where lab-grown diamonds can now include artificial imperfections just so they look like real ones, because otherwise it’s very much what we were

    Minter Dial: talking about at the very beginning, which is like making express typos.

    Rana Gujral: Exactly. To make it look real. Because, well, technically it is a diamond; chemically, it’s a diamond, it’s just produced in a lab. So, we had to solve for this, and our approach was to go deeper. We use behavioral mapping: modeling how a specific person speaks over time, the cadence, their interaction style, their temporal patterns, and we compare real-time data against an authenticated baseline. That’s a capability that’s much harder to fake inside a deepfake. You could have almost perfect consistency in artifacts, but you will not be able to replicate the behavioral map. So, you can use that technology to identify whether something is real or fake. You need a diamond to cut a diamond: you need advanced behavioral mapping engines to understand whether audio created by a cloner is actually from a human or from an AI model. That is exactly how we use it.
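
    To picture the “authenticated baseline” idea, here is a minimal sketch (my illustration, with placeholder feature vectors and random data standing in for real vocal biomarkers; this is not Behavioral Signals’ actual pipeline): enroll a behavioral profile from speech known to be the real speaker, then score live audio against it.

```python
import numpy as np

def behavioral_features(audio_frames: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in: reduce frames of vocal measurements
    (cadence, pauses, pitch variation, ...) to one profile vector."""
    return audio_frames.mean(axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two profile vectors (1.0 = identical direction)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrollment: average profile from calls known to be the real speaker.
baseline = behavioral_features(np.random.rand(500, 16))  # placeholder data

# Verification: compare a live call against the authenticated baseline.
live = behavioral_features(np.random.rand(120, 16))      # placeholder data
THRESHOLD = 0.85  # in practice, tuned on labeled real vs. cloned calls
if cosine(live, baseline) < THRESHOLD:
    print("Flag: behavioral map deviates from the enrolled speaker")
else:
    print("Consistent with the enrolled speaker")
```

    The point of the design is that a cloner can polish away acoustic artifacts frame by frame, but matching a person’s accumulated behavioral profile over a whole conversation is a much harder target.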

    Minter Dial: It makes me think of the military, where there could be times when people want to know whether that’s the voice of a hostage, or the voice of a leader who’s hiding, and whether it’s the real voice, and everything that can ride on that. This next question is maybe on a similar path. Say I’m going to use a technique I learned in a sales program: give you some positive feedback, put in a hook and then close, some sort of sales technique. In the way you responded to the last question, it feels like what I should tell you beforehand is: right now, I’m about to sell you something using a sales technique; now let me do it. The very nature of making my technique explicit will have an impact, like the effect of observation on the observed. It feels like it’s going to be tricky. And so, specifically, that’s how I want to frame this next question. You turned Cricut from huge losses to profitability and set it on the path to a huge IPO, congratulations for that, and that was before it went mainstream. Looking back now on that time, which of your leadership instincts then would you now call hybrid cognition in action?

    Rana Gujral: So, looking back, it is clearer now than it was at the time. When I joined, the company was near bankruptcy, and the turnaround required a platform rebuild driven by innovation. The decisions we made drew on something that was not purely analytical or purely intuitive; it was a blend. It was pattern matching across industries, sensing market shifts through conversations and customer behaviors, and making high-stakes calls under uncertainty. And that is what hybrid cognition looks like in practice, even when the machine side is not a formal AI system. In that context, the tools were not AI; the tools were market data, customer signals, supply chain intelligence, competitive analysis. But the decisions were not made by the data alone. They were made by integrating that data with lived experience, with what you would call gut instinct honed by years of building and failing, and with a narrative about where the company could go that the numbers alone could not tell me. If I were doing the same turnaround today, I would use AI systems explicitly as cognitive partners: running scenarios, stress testing assumptions, modeling outcomes. The core pattern would be the same: neither the human nor the data decides alone; the coupled system decides. And this is why I think hybrid cognition is not actually new. It’s what good leadership has always been. What is new is that the machine side is becoming much more capable, much more intimate and much, much more continuous. And I think we’re also using it for much smaller, micro problem sets.

    Minter Dial: So, you’ve advocated for transparency, a wonderfully trendy word in AI model training, and called for clear guidelines and human oversight. Transparency is, I think, a very complicated term, especially when you’re dealing with, let’s say, commercial secrets or things that could represent a competitive advantage. You work in high-stakes commercial environments: call centers, banks, governments. Where, in your mind, does commercial pressure most dangerously erode those principles in practice?

    Rana Gujral: I think the most dangerous place is where the incentives are invisible. In call centers, banks and, say, government deployments, the pressure is always towards efficiency: handle more calls, close more loans, process more cases. When AI is deployed in these environments, the metric that gets optimized is mostly throughput, and the first thing that gets sacrificed in the name of throughput is disclosure. Do customers know their emotional state is being analyzed? Do debtors know the call is being routed based on their behavioral profile? To be honest, I think in many cases the answer is no. Not because anyone made a deliberate decision to hide it, but because disclosure introduces friction, and friction reduces conversion. That is where the erosion happens: not through malice, but through optimization. The second danger is model drift. Commercial environments rarely have the luxury of pausing to audit whether the AI’s recommendations have drifted from their original purpose. How often do we do that? A system deployed to help agents show more empathy can, over time, become a system that optimizes for compliance. The original intent gets buried under layers of performance metrics. And this is the truth: I do not have a clean solution to this. But what I advocate for is making transparency a design constraint, not an afterthought. Build disclosure into the product, build the audit trail into the architecture, and accept that there will be a cost, because the alternative is worse: systems that optimize engagement while quietly eroding trust and autonomy. And while trust is the one that gets talked about the most, I think autonomy, and losing it, is probably the bigger problem.
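
    As one way to picture “build the audit trail into the architecture,” here is a hedged sketch (an illustration with invented field names, not a schema Rana prescribes): every AI-assisted decision is logged with what the model recommended, what the human did, what data informed it, and whether disclosure was shown.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One entry in an AI-assisted decision audit trail (hypothetical fields)."""
    case_id: str
    ai_recommendation: str
    human_action: str           # e.g. "accepted", "overridden", "modified"
    data_sources: list[str]     # provenance of the inputs the model used
    disclosure_shown: bool      # was the customer told analysis was running?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_record(path: str, rec: DecisionRecord) -> None:
    # Append-only JSON-lines log: cheap to write, easy to audit later.
    with open(path, "a") as f:
        f.write(json.dumps(rec.__dict__) + "\n")

append_record("audit.jsonl", DecisionRecord(
    case_id="loan-4821",
    ai_recommendation="route to retention queue (frustration detected)",
    human_action="overridden",
    data_sources=["call_audio_emotion_model_v3", "crm_history"],
    disclosure_shown=True,
))
```

    An append-only log like this is what lets a later auditor answer the questions Rana raises: what the AI recommended, where the human intervened, and whether the model has drifted from its original purpose.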

    Minter Dial: It feels like a whole new golden pathway for lawyers to regulate. It makes me think of when you do an advertisement for pharmaceuticals, in America anyway, and then spill out the side effects: this drug can also kill you; you know, 60% of people have also experienced cholesterol at such-and-such a number. And they say it very, very quickly. As disclosures go, it’s very unhelpful. It feels like that’s where this need for transparency will lead us.

    Rana Gujral: It is. But think about what happens in an industry like healthcare. You have drugs, and all of these drugs are, at the end of the day, at different stages of experimentation. There’s a bar; in the US, you have to reach a certain quality of experimentation, and when you meet that bar, you get what’s called FDA approval, and the drug can be put on the market. But it’s still in a different phase of experimentation from that point on, because you never know, and there are a lot of disclosures, a lot of pros and cons to using that drug. Ultimately, whether to prescribe it or not, whether to wean somebody off the drug or not, whether to allow a patient to refuse it: that is a human decision, a physician’s and a healthcare worker’s decision. And that’s where the design constraint in that system is the human. Is the human trained enough, experienced enough, to put all of these different considerations into the equation and make the right decision? A lot of the other systems we’re building right now, you build it inside the platform and there’s no human element; it’s AI making those decisions, and then you don’t have the audit trail and you don’t have the disclosure. But again, as I said, disclosure solves for trust to a large extent, not fully, but to a large extent. It doesn’t necessarily solve for autonomy. For that, you have to build the audit trail, and that’s very costly. It also has to be designed in; you have to understand the value of it. Nobody wants to do it. The product builders don’t want to do it because it’s seen as unnecessary cost, and certainly consumers don’t ask for it, because what are they going to do with an audit trail? You’re giving me all this data; what am I supposed to make of it? But then you lose autonomy. Then you have these systems entering our perception loop, and the people who control those models control the world. Something to think about.

    Minter Dial: Well, that’s the money question. One last question, which is off script. You’ve been writing this book, and you know so much about AI. I write, and I see the temptation of using LLMs to do the writing. To what level of transparency does one disclose how many sentences might have been initiated, augmented, improved or researched by AI? The disclosure could become as large a part of the book as the book itself, if you have to say, well, this sentence I didn’t use it for, this one I did. Where do you draw the line, and on what? In the old days we used editors, and I don’t write, “hey, my editor edited this line.” I use the spell checker; the spell checker identified that I had a spelling mistake; I don’t write that into the book. That’s on one side of the argument. And the other side is, you know, it wrote the whole freaking book for me.

    Rana Gujral: So, the simple answer is that it depends entirely on the standards set by, well, the personal standard of the author, what the author wants to do, and also the standard set by the publisher. On one side, there are self-publishing platforms like Amazon KDP. You could literally generate the entire book; too many have been done that way, and new ones come out daily. There’s no bar: you could generate a book through interactions with an LLM and put it out there, and that’s okay, they don’t care, as long as you disclose it. If you’re dealing with a traditional publisher like Wiley, my book’s publisher, there are very, very strict constraints on what you can use AI for. You cannot use AI to generate a sentence. If you’re using AI to catch spelling drift, grammatical inconsistencies or formatting and design issues, say you’re building a PDF proof and you want to catch a sentence repeated twice in a paragraph, or an extra space, AI is good for that. You don’t want a human to read a 300-page book to identify those. But those situations and scenarios need to be disclosed and documented; it goes into a disclosure up front. There is absolutely no room for generating anything using AI, nor can you rely on, or I’d say delegate to, AI for a very important part of the book, which is research. You can use it as a tool, but ultimately that research needs to be yours. If you’re building on an idea or talking about an idea, you have to make sure that idea is accurate, and if you’re doing citations, you have to make sure they’re accurate. That is not something you can just rely on AI to do. So, it’s a huge spectrum. There’s a new term, at least the kids are using it, and I heard it from one of mine, which is “AI slop.” You see AI slop everywhere; you could generate a book, and I don’t know who’s reading those. One of the things, by the way: when I’m on my walks or taking a shower, I always have something going on in terms of audio. I’m consuming content all the time, 24/7. So, there are interesting things that I’m listening to.

    Minter Dial: Hopefully not 24/7 right now, huh?

    Rana Gujral: Well, not quite, but it’s close. One of the things I’ve noticed is that you usually gravitate towards content based on how popular it is. So, if there’s an idea, let’s say I want to listen to a news bite or a specific idea, something I’ve been thinking about and working on, and I see a 30-minute or an hour-long video clip that has a lot of views, I think, okay, that must be really good. And then as soon as I listen to it, if it’s not a human, if it’s AI, then it’s like, okay, I don’t care how good it is. I don’t know if these ideas are really relevant or influential for me, because I don’t know the standards behind how it was created. So, it’s not so much about the accuracy of the actual ideas in there; it’s more that I want to make sure it’s coming from a human perspective, because that is important to me. These standards around AI and the tools, I think they’re all over the map. But then you decide as a consumer: would you buy that book on KDP that was generated by AI? Is it worth your time to go read it?

    Minter Dial: Yeah, I’m certainly not thinking about that type of book. I’m thinking about the nuance, you know, like the amount: you could say, “listen, this paragraph is too long, it’s 500 words, cut it down to 400.” That, AI could do for you.

    Rana Gujral: I think that would be okay, as long as it’s not changing the idea in the book, of course.

    Minter Dial: But anyway, it gives you an idea of where the gray area starts, because technically speaking, you didn’t write that paragraph. You’re allowing it to readjust words, or maybe take out a thought that I did have, just like a structural editor would do. Anyway, Rana, great stuff. So, tell us: where can people get in touch with you or follow what you’re up to? And of course, when and where can they get your book?

    Rana Gujral: Yeah, the best way would be to go to my website, ranagujral.com, first and last name dot com, and find out more about the book. Sign up for the newsletter. You can reach me there on multiple social media platforms, or just send me a message; that message comes directly to me. Especially if you’re doing interesting things that overlap with what I’m focused on and you want to talk about it, or maybe even collaborate in some way, that’s the best way; I’m really interested in those ideas. The book is out for pre-order now on all major platforms. It’s available on Amazon, Barnes & Noble, Target, pretty much everywhere; just Google it, and if you’re in the UK, for example, there’ll be a UK version. It’s up for pre-order now, and I would appreciate a pre-order, but the book actually comes out later in the summer, in the mid-August timeframe. Really excited about it. By the way, if you are a researcher deeply involved in the space the book is in and you want to do an early read, reach out to me. I’m happy to make that possible for you.

    Minter Dial: So, Rana Gujral, the book is called “The AI Instinct: The Future of AI and Human Decision Making,” published by Wiley, coming out in August. Thank you very much for being on the show.

    Rana Gujral: Thank you, Minter. It’s really been a pleasure, and I appreciate you having me back.

    Minter Dial

    Minter Dial is an international professional speaker, author & consultant on Leadership, Branding and Transformation. After a successful international career at L’Oréal, Minter Dial returned to his entrepreneurial roots and has spent the last twelve years helping senior management teams and Boards to adapt to the new exigencies of the digitally enhanced marketplace. He has worked with world-class organisations to help activate their brand strategies, and figure out how best to integrate new technologies, digital tools, devices and platforms. Above all, Minter works to catalyse a change in mindset and dial up transformation. Minter received his BA in Trilingual Literature from Yale University (1987) and gained his MBA at INSEAD, Fontainebleau (1993). He’s author of four award-winning books, including Heartificial Empathy, Putting Heart into Business and Artificial Intelligence (2nd edition) (2023); You Lead, How Being Yourself Makes You A Better Leader (Kogan Page 2021); co-author of Futureproof, How To Get Your Business Ready For The Next Disruption (Pearson 2017); and author of The Last Ring Home (Myndset Press 2016), a book and documentary film, both of which have won awards and critical acclaim.

    It’s easy to inquire about booking Minter Dial here.

