Rotman Visiting Experts

Digital dialogue: How AI is reshaping the way we think

Episode Summary

AI isn’t just a productivity tool — it’s changing how knowledge itself is created. Author Paolo Granata joins the latest episode of Rotman’s Visiting Experts podcast to explain why the real question isn’t whether AI can think, but rather, how it’s transforming human thinking.

Episode Notes

AI isn’t just a productivity tool — it’s changing how knowledge itself is created. Author Paolo Granata joins the latest episode of Rotman’s Visiting Experts podcast to explain why the real question isn’t whether AI can think, but rather, how it’s transforming human thinking.

Show Notes 

[00:00] Brett Hendrie asks what AI is doing to the way we think 

[01:40] Meet Paolo Granata, author of Generative Knowledge: Think, Learn, Create with AI

[02:02] What does it mean for AI to help us think? 

[04:28] How knowledge has historically been created, shared and understood, and what's different about this moment

[07:32] How should people adapt their mindset in terms of how they use AI in their day-to-day lives? 

[10:08] What does a healthy partnership with AI look like? 

[12:17] How to find the right balance, and not offload too much to AI. 

[15:42] The idea of learnability, and why it matters. 

[18:33] What does success look like for individuals and businesses in terms of their use of AI over the next 10 years? 

If you enjoyed this episode, why not give some of our back catalogue a listen? If you want to dig deeper into leadership topics, check out Cutting through the noise: How to make better decisions with Nuala Walsh, or Michael Bungay Stanier on the secrets to coaching others.

Make sure you subscribe to this podcast on Apple, Spotify, YouTube, or wherever you get your podcasts, and please consider giving the series a five-star rating.

To explore more leadership tips and tricks from the Rotman School of Management, check out our Rotman Executive Summary podcast, featuring the latest research and thought-leadership from our esteemed faculty. Check it out on Apple, Spotify or wherever you get your podcasts. And be sure to subscribe to the Rotman Insights Hub bi-weekly newsletter for even more insights shaping business and society.

Episode Transcription

Brett Hendrie: Did you know that OpenAI's ChatGPT has over 800 million weekly active users? That makes it the fastest-growing global consumer application in history. Add to that the hundreds of millions of people using other large language models, Claude, Gemini, Grok and so on, and it's clear that the world is quickly changing how work gets done. Chances are you've started to use AI as a tool at work too. Maybe it's helping you be more productive, but is it actually helping you understand your work better? We're all grappling with the reality of AI right now. Is it replacing us? Is it making us sharper or lazier? And maybe a more interesting question is: what is AI doing to the way we think? Our guest today, Paolo Granata, wrote the book on this topic. Literally. He wants us to rethink what it means to know, not in spite of AI, but in partnership with it. Welcome to Visiting Experts, a Rotman School podcast for lifelong learners, exploring transformative ideas about business and society with the influential scholars, thinkers and leaders featured in our acclaimed Speaker Series. I'm your host, Brett Hendrie, and today we're excited to welcome Paolo Granata. He's a professor of Book and Media Studies here at the University of Toronto, and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society. He's an award-winning educator and an internationally recognized media scholar. His work spans media ecology, semiotics and the philosophy of technology. He's the founder and director of the Media Ethics Lab, and he's held positions in Italy, Canada, Brazil and China. His latest book is Generative Knowledge: Think, Learn, Create with AI, a fascinating exploration of AI's impact on intellectual life and a roadmap for navigating its implications. Paolo, welcome to the Rotman School and welcome to the podcast.

Paolo Granata: Thank you for having me. 

BH: Paolo, your book comes at, I think, a really important moment for all of us. We're grappling with AI and what it means for society, what it means for our own lives, what it means for being a professional in the workforce. And you pose some really interesting questions and ideas in your book. One of the central ones that I wanted to ask you about at the start is this: we've been very focused on the question, can AI think? And you suggest that maybe it's more important to think about how AI can help us think. What do you mean by that?

PG: The question about AI thinking is mostly misframed. Ever since Alan Turing in the 1950s, we have been asking whether machines can think. At the moment, if thinking means intentionality, consciousness, self-awareness, the answer is no. But staying stuck on that question, I think, blinds us to what really matters, and that is what happens to our thinking when we engage with machines, when we engage with artificial intelligence. That's why, in my book, to move past the question of whether or not machines can think, I define artificial intelligence as an artifice of human intelligence. The real intelligence lives in the people who design, who train, who use AI; this enormous creativity is invested in building the systems, and that is really the manifestation of human creativity and human intelligence. So instead of looking for intelligence inside the machine, I think it's better to look at the intelligence of the human minds who created it. Eventually, I define AI as an epistemic technology, which fits into the idea that throughout the entire history of humankind, every tool, from writing to the printing press to the telescope and the microscope, has shaped how humans think and how humans produce knowledge.

BH: Well, this was one of the interesting angles in your book that I wanted to explore with you: the idea that we might be at a fulcrum point that changes how knowledge is generated. And I wanted to get your perspective on where we are in that history. Can you give us your thoughts on how knowledge has historically been created, shared and understood, and what's different about this moment with AI?

PG: I frame generative knowledge as the main framework for this book, and to some extent, knowledge has always been generative. What we know today is the result of what previous generations knew before us, what they refined, challenged and eventually passed on. This is why knowledge breeds more knowledge. Generativity, in terms of generative knowledge, means that every generation starts from where the previous generation left off. But not all knowledge is generative. There is an aspect of knowledge that is conservative: it is intended to preserve, store and disseminate what we know. When knowledge becomes generative is where advancement really happens. There are a few principles of generative knowledge. One principle is tools. We have always needed tools to expand our way of thinking, to externalize our thinking. Knowledge creation has always needed what I call epistemic technology, and AI falls in this trajectory of epistemic technologies. Knowledge has also always been social. Ideas are social; they do not emerge in a vacuum, in isolation. Generative knowledge always needs that social perspective. And then, in terms of the shift itself, AI sits in this trajectory of generative knowledge, so nothing new there. Essentially, all previous technologies helped us to store, reproduce, confirm and transmit, but AI participates actively in knowledge making, and it operates at a scale that is unprecedented compared to previous technologies. That's why I call our era the Alan Turing galaxy: AI has become the new printing press, a new epistemic environment that we live in, in the same way that the printed book defined intellectual life for about five centuries. And so if the printing press gave us new ways to share knowledge, AI is giving us a new way to generate new knowledge. It's so fascinating because it's clearly a technology that is self-reinforcing. The more we use it, the more it learns, the more we learn.

BH: I think what you're guiding us towards is the idea that it's such a shift from previous technologies like the printing press and so forth, that we need to change our mindset in terms of how we use these technologies. I'm wondering, as you were writing the book and as you researched this area, what have your thoughts been in terms of how people should adapt their mindset to how they use AI in their day-to-day lives?

PG: Well, I will start with competence, first and always. It is the first principle of generative knowledge: it takes knowledge to create new knowledge. It may sound a bit recursive or redundant, but it's fundamental. Competence, I think, is the golden rule, the mantra of generative knowledge, and without a solid foundation in a domain, whatever it is, AI gives you very little of real value. Without competence, you won't recognize errors, mistakes, gaps. You won't really know which outputs are brilliant, creative, insightful, and which are nonsensical or just redundant. So I think the first habit is really investing seriously in what we already know, the expertise, the domain knowledge, and then on this we build. Essentially, AI can balance this parallel effort to develop new competencies. The second point, I believe, is treating AI as a thinking partner, an epistemic partner, an intellectual partner, not just an answer machine or an oracle. The idea of the oracle comes from ancient Greece. Oracles used to deliver essentially common knowledge, and people were fine with receiving those very common-sense responses. Today, I see the same pattern. And again, the real value emerges when we engage with AI not as an oracle but in a genuine dialogue: iteration, probing, adjusting, pushing back, going back and forth. I build my book on a pragmatist approach to knowledge and knowledge making, which means thinking by doing, so the making is really what makes a big difference.

BH: So what does a healthy partnership with AI look like? Already, we've seen in the popular press stories of people who have unhealthy relationships with AI. And I think this is also something that many leaders and professionals in the workforce are trying to navigate, in terms of how they can have a partnership or use it as a tool in a way that is healthy and productive. What are your thoughts?

PG: As I mentioned before, this has deep roots: throughout history, we have always been thinking with and through the tools available to us. Every age has thought with the tools available at the time, from writing to the printing press and so on. AI is probably the most sophisticated cognitive partner we have ever had because, as I mentioned, it participates actively in the very process of meaning making. I think a balanced approach, as you were asking, is to recognize who or what does what. AI brings computational power, computational capabilities, the ability to recognize patterns, variations, hypotheses and so on. But we bring intentionality, contextual judgment, ethical reasoning and, ultimately, the capacity to assign meaning to what we do. A healthy partnership respects these divisions of roles. Essentially, you don't ask your partner to do what you should do, and vice versa, right? And so professionals should, I think, apply this balance rule in terms of judgment: to select, evaluate and make sense of what AI does. It's an alliance, not a competition: delegating what AI is very good at and keeping what we are very good at. I think this balance may be a way to really make the most of AI as a partner.

BH: Those are great insights, and I wonder what your perspective is on the conversation around whether there are risks of AI making us a little bit intellectually lazy, or whether we are offloading too much cognitive load to it. Now, I know accountants who didn't have spreadsheet programs 40 years ago are probably very happy to have them today, and there's still lots for them to do and to wrap their minds around. But AI is such a step up in terms of technology and what it can do. Is there any concern that we might offload too much to it, and how can we find that right balance?

PG: I like how you frame it, and the risk is real. I take it very seriously. Every epistemic technology in history, every technology like AI, has come with cognitive trade-offs, and AI is accelerating this pattern. You give something, you get something. And when you have a system that can summarize and can boost your productivity in many ways, the risk of cognitive offloading is enormous. I describe this risk particularly as the danger of the illusion of understanding, the illusion of knowledge or comprehension: mistaking access to information, or efficiency, for genuine understanding. You feel you know something because AI gave you a fluent, very confident answer. But fluency is not knowledge, and that gap is really where intellectual laziness takes root. Again, I'm a media historian, so I always analyze these kinds of revolutions not by the clock but by the calendar. In the past, cognitive offloading has always been part of the way we externalize our thinking, and this is what makes us human. Offloading is not dangerous per se; it is part of human nature. So the question is, first, what do we offload: routine jobs, routine tasks, or judgment and profound agency? And what do we do with the free time we gain back by delegating certain tasks? If we delegate a routine job, AI could liberate our mental energy to think more, to think more deeply, to think better, with more creativity. Once again, it's a balance. Offloading means being conscious of the time we keep for ourselves, and to some extent the balance principle comes back. And so the danger is not that AI thinks for us; we know that there is no thinking in a human sense. The real danger is that we may stop noticing when we have stopped thinking for ourselves.

BH: I think that's such a great insight, and it's so important for us to keep in mind, because we're already using these tools every day, and as you said, they have a fluency that people can misinterpret as authority, as something that is definitely correct or proven or true, when there can in fact be misinterpretation there. You write in your book about how important it is for us to be able to learn, unlearn and relearn. Can you walk us through why that process is so important?

PG: When we use AI, that touches the fifth principle of generative knowledge: the willingness to learn and relearn, which I frame as a cycle of learnability. Learnability literally means the ability to learn, and learning how to learn is becoming more and more important. I call it a cycle because it's always ongoing: relearning means that we integrate new understanding in a more resilient, more adaptive form. This cycle is continuous, never-ending, and I believe it's a fundamental principle for rethinking knowledge in a different way, rethinking knowledge not just as a process of accumulation, but as a process of creation and making. Why does AI, as you pointed out, make this learning so urgent? I think it's because AI is reshaping the entire knowledge ecosystem. AI is the new printing press, as I like to say, and that makes intellectual rigidity very dangerous. The openness that I believe is required will make the difference. If we are open to this relearning and unlearning, the cycle of learnability, we are essentially becoming open to learning what AI is bringing to the table. And learning how to learn, in the workforce and in education, is largely untrained, so we need to develop these new skills, which will really determine whether our relationship with AI is passive or, as I hope, generative.

BH: I wonder, as we look ahead to the future and to the next 10 years, what are your thoughts in terms of what success looks like for humans and professionals in terms of their use of AI?

PG: Well, thank you for this question, because it gives me the opportunity to address the final chapter of my book, which is titled "The Generative Thinkers." I cover this idea of the generative mindset, the generative thinkers, because I believe success, as you frame it, would really be measured in the future by the quality of thinking happening across society, individuals and institutions. I would really want to see people in any profession, teachers, scientists, engineers, entrepreneurs as well, using AI to push their understanding further, to ask harder questions, to reach insights that they couldn't reach alone. I would really like to see a healthier knowledge ecosystem where, instead of producing just an enormous volume of AI-generated content, as is happening today, we could, in 10 years, learn how to distinguish between knowledge that really generates new knowledge and knowledge that is just accumulation. In 10 years, I would want these values to be shared: the understanding that how we think matters as much as what we produce, that intellectual integrity deserves the same attention we give to innovation, and that the purpose of all this technology, AI and whatever will come next, is ultimately to improve the human condition, not just to accelerate progress; to elevate, and make clear, what the ultimate goal of humanity is, what really makes us human, what elevates the human condition. And so the real measure of success in 10 years will likely be what kind of thinkers we will have become.

BH: I really appreciate that, and I think it's a very optimistic outlook that we should really aim for. It impresses upon all of us to think about how we use these technologies in a responsible way, for our own betterment, and to do so thoughtfully, because they're part of our lives every day, and if we aren't thoughtful about it, we won't have those optimistic outcomes we're looking for. Thank you, Paolo, for all of your insights and for sharing these thoughts from your book. We really appreciate you taking the time to join us at Rotman today.

PG: Thank you for having me.

BH: Our guest has been Paolo Granata, and his new book is Generative Knowledge: Think, Learn, Create with AI. This has been Visiting Experts, exploring transformative ideas about business and society with the influential scholars, thinkers and leaders featured in our acclaimed Speaker Series. To find out about upcoming speakers and events at Canada's leading business school, please visit rotman.utoronto.ca/events. This episode was produced by Meaghan MacSween, recorded by Dan Mazzotta, and edited by Damien Kearns. For more innovative thinking, head over to the Rotman Insights Hub, and please subscribe on Spotify, Apple, YouTube, Amazon, or wherever you get your podcasts. Thanks for tuning in.