A former university president is trying to reimagine college teaching with AI in mind, and this year he released an unusual video that provides a kind of artist’s sketch of what that could look like. For this episode, I talk through the video with that leader, Paul LeBlanc, and get some reaction to the model from longtime teaching expert Maha Bali, a professor of practice at the Center for Learning and Teaching at the American University in Cairo.
Watch Matter and Space's video discussed in this episode, "Butterflies"
Read Maha Bali's blog.
Jeff Young:
Welcome to Learning Curve, a look at what it means to teach and learn in the age of generative AI. I'm your host, Jeff Young, a longtime education journalist, exploring the intersection of tech and education.
What if you were designing the ideal teaching experience today, knowing that AI chatbots are possible? Imagine what education could be, knowing these AI tools are in the world and that they could be used for teaching.
That's the big question we're tackling in this episode. And we're going to start by hearing the story of a former university president who has spent more than a year proposing what an answer to this question might look like.
That university leader is Paul LeBlanc, who spent more than 20 years as the president of Southern New Hampshire University. During that time, he led a transformation of that campus. When he started, it was a residential campus with about 2,500 students. That is all still there, but LeBlanc was early to embrace starting online degree programs and tailoring them to busy working students who just could not get to a traditional campus.
Wherever you live in the country, you might have seen a bus ad or some other advertisement for Southern New Hampshire University's online courses. It has grown into a mega-college. Today, there are something like 200,000 online students there — 200,000.
Last year, LeBlanc stepped down as president of Southern New Hampshire, and he decided that his next step was to explore whether and how AI might lead to the next big rethink of college teaching. He started a company called Matter and Space, and one of the first people he hired was George Siemens, a longtime leader in thinking about how online platforms could reshape teaching. In fact, it was George Siemens who, back in 2008, taught a free, open online course that first used the term MOOC, or massive open online course. As you might remember, that sparked a bit of a gold rush at the time, as many big colleges started putting free online courses up. Siemens himself, though, focused more on what he called Connectivism and on fostering collaboration among students online, instead of the more passive videos that a lot of people used in their early MOOCs.
Anyway, not to get too into the weeds on all that history, but my point is to note that Paul LeBlanc and the folks he's working with now, they have some interesting history in this space.
Earlier this year, LeBlanc’s new company published an unusual video online that shows what he sees as a vision for how AI might fit into teaching. It's kind of like an artist sketch for a product rather than a product, and he says it was meant to spark some discussion about what could be. So in that spirit of getting discussion going, I also wanted to bring in another perspective to this episode, since not everyone agrees on what the right mix should be between human and bot in teaching.
And so in the second half of this episode, you will hear from another longtime expert on college teaching, who responds to LeBlanc’s video with her own alternative view rooted in critical AI literacy.
Here we go, first off, highlights of my conversation with Paul LeBlanc.
Okay, so it's been a year since you left Southern New Hampshire. You were there for so long and made so many changes, but I was intrigued by the question that you left the university with. It was great, like the plot of a movie: you said you were going to assemble this top team and go ask a question that a lot of people have on their mind in an era of AI, and now, quickly, generative AI. 'What if you designed a new university from scratch? Or if you thought about learning today without the constraints of the way we've done things traditionally, what would it look like?' It seemed like that was the question, right?
Paul LeBlanc:
It really started when George Siemens, the co-founder of Matter and Space, and I sat down at ASU+GSV in April of 2023. And I said, 'George, why don't you leave your joint appointment at the University of Texas and the University of South Australia, come join me, and let's figure out how we could reinvent learning if we were unconstrained by any assumptions.' Like, we could use AI and not worry about accreditation, not worry about Title IV, not worry about how universities are supposed to look. But rather, 'What if we could just have a blank sheet of paper?' So it was both an incredibly exciting and incredibly daunting proposition. And George was crazy enough to say yes and shake on it. So that's what we've set out to do.
Jeff Young:
Could you walk us through the thinking of where it took you and where you've landed so far?
Paul LeBlanc:
Yeah. So you know, when George and I began that conversation, what we were really seeking in the beginning were interesting questions, interesting hypotheses that could be tested out. So one was around the long-held promise of personalized learning, which had almost become a discredited phrase in higher ed because we've seen so many failed attempts, or attempts that fell short of their promise, whether it was adaptive learning systems or whether it was Knewton, if you remember.
Jeff Young:
He's referring here to a company called Knewton that caused controversy back in 2015, almost exactly 10 years ago, when the company's founder, Jose Ferreira, made a claim about the power of the company's personalized tutoring system that many people saw as overhyped. In an interview back then, Ferreira said, “We think of it like a robot tutor in the sky that can semi read your mind and figure out what your strengths and weaknesses are, down to the percentile.”
Many thought the actual product fell way short of that kind of language, and it led one prominent critic at the time to call Knewton’s product “snake oil.”
Paul LeBlanc:
Well, yeah, the robot tutor in the sky. The founder of Knewton famously getting [criticized], for good reasons, you know.
So we thought, maybe for the first time, AI will give us the tools and technology we need to do genuinely personalized learning. What we've learned is that the answer is, 'We can.' And I think one of the most fundamental shifts in the model we've built with our platform responds to the fact that higher education is, with a little bit of variability, mostly still what it has been for hundreds of years — a one-size-fits-all model.
That is, if you and I enroll in the same program at the same place, we're, generally speaking, going to have the same experience. You may choose some different electives than me, and there will be variety. But the reality is we're marching in a predetermined curricular pathway with a particular pedagogy towards a particular set of agreed-upon outcomes.
And what we think we can now do with our platform is create an experience, a learning environment, in which 20 people in the same program would all end up with mastery of the same skills and competencies and outcomes at the end, but have 20 different experiences. And those 20 experiences would differ, and I mean dramatically so, because each one would be based very much on who you are as a human being. And that is everything from how you best learn; how you like to consume content and curriculum; when you are at your best as a learner; the level of your executive functioning. We integrate with wellness devices: Whoop bracelets like the one I'm wearing, and Fitbits and Apple Watches.
So to the extent that you want us to also integrate your wellness and well-being, the research on that is quite clear. My point in all of that is what sits under the hood of the Matter and Space learning platform. It's called LE, by the way. LE-1.
So the AI agent is LE, but under the hood with LE is a dynamically updated, robust learner profile that reshapes the experience in real time. So what you're doing at this moment gets shaped by what we know about your learner profile. That's a pretty big thing. That's a paradigm shift. Imagine if we can build learning where everyone in a program or situation or environment is getting learning delivered in a highly personalized, precise way. That's very powerful.
Jeff Young:
I really was struck by this video that you put on your website, the “Butterflies” video. Can we talk about that for a minute? It's an interesting video, you know, because it's not a demo. I think you label it almost like near-future science fiction, so to speak. And I'll link to it in the show notes.
Paul LeBlanc:
Let me say what it is because I think it's important for people who go to the Matter and Space website. It’s five minutes long, and we commissioned it.
So much of what we've been talking about feels technical, perhaps abstract in some cases. We haven't gotten to the question of epistemology versus ontology, like, what does that mean for consumers, right? So what we wanted to do is say, 'Look, all of this boils down to better learning and enriched human experiences. Could we create a little five-minute vision video, if you will, that captures what it would look like if we could get it right?' Because so much of the AI conversation is justifiably anxious and fearful, and I share those fears in many, many instances. I mean, it feels like it's coming for jobs. There are lots of other things.
Jeff Young:
I mean, it feels like it’s coming for humans, maybe, I don't know. We've all watched the M3GAN movie this summer, right?
Paul LeBlanc:
So we wanted to create a vision because these are choices, and we can shape the direction of the future. It's formidable, because you and I don't work in one of the big AI companies, which seem to hold so many of the cards right now. But policymakers hold cards, and the decisions we make as parents hold cards. And as consumers, we hold cards. And we wanted to create a vision of what it could look like if we got it right. So that's what that video is.
Jeff Young:
And so I want to play a little clip, if you don't mind. Let me share my screen. So let's roll it; I'm going to roll it from about two minutes in. Actually, tell us where we are in the video, because we're not in a classroom at all.
Paul LeBlanc:
Right? So the video was filmed on the Pine Ridge Reservation.
Jeff Young:
It's in South Dakota, right?
Paul LeBlanc:
I believe it may be the poorest community in the United States, income-wise, and it's afflicted with all the ills that come with that. And also with an incredibly rich Lakota Sioux culture.
Sarah Eagle Heart, the producer and director, worked with us on this project. She is from Pine Ridge originally. She's Lakota, and all the actors are Indigenous. And we did this with the support of the community, and we did it there because the mission of Matter and Space continues to be to try to reach and serve those who are least well served by the incumbent system of higher ed. Who's not being well served today? How do we get the power of learning and connection and AI in their hands?
So the five-minute video covers three phases in a young man's life: his childhood, his adolescence, and then finally his 20s. And what he's using is what we hope will eventually be a version of LE.
Video Clip:
Today, Santa introduced me to Sophia.
Paul LeBlanc:
Yeah, so this is the point. I love this. I'm glad you chose this part, Jeff.
So, by the way, you'll see that a lot of this video involves smart glasses. We are [imagining we are] in 2026, and we'll start to integrate smart glasses into the platform as well. I have a deep conviction that smart glasses, AR glasses, are the future for how human beings intersect with the physical world, and AI, and ourselves. We have lots of examples of how we will use them in assessment and so on, but also to connect.
So the scene is a young boy. We've done the exposition, which is a slightly near-future, dystopic future, and now he's using the platform. He's using LE to connect with another young learner, in this case in a Mexican community. And the reason I love this part, Jeff, is that part of what we're trying to ask is: how do we use AI to be much more effective at knowledge transfer and learning, but then how do we use that to leverage community connection and the enrichment of individual humans? Because we say we're a human-centered approach to AI and learning. But what does that actually mean? Like, what does it look like in practice? And this is one of the ways this could look when we start to connect people. And I've always been interested in the idea of how learning takes place in the world at large, versus a classroom. So we did not set it in a school. We set it in the community in which these kids live.
Jeff Young:
Okay, I'm gonna roll like 30 seconds of it here. Let's just watch for a second. Then I'll come back.
Video Clip:
My favorite is the monarch.
Jeff Young:
So at this point in the company's promo video, we see a Lakota boy sitting in his bedroom wearing a pair of smart glasses, and these let him do a video call with a student named Sophia, who lives in another country and is part of another culture, in Mexico. The main character is sitting on his bed in his room while Sophia gives him a tour of her bedroom, which is decorated everywhere with butterflies.
Video Clip Dialogue:
She loves butterflies. Well, technically, they're insects. They used to be everywhere, but I don't know anyone that's seen one. Her home is very different from mine.
Voice of LE AI Agent from Video:
Mathematics can help us understand patterns from this flower to the spiral of our galaxy. Monarch caterpillars only eat milkweed. It's essential to their migration. Can you tell why this one is struggling?
Video Clip Dialogue:
There's so much to learn. I'm glad I don't have to do this alone.
Jeff Young:
If I understand it correctly, LE, the agent, is imparting information relevant and related to the subject matter that the student is exploring, along with a connection to another human: a friend and fellow learner in another zip code.
Paul LeBlanc:
Yeah, that's right. And the idea being, again, that we can use AI to give the best teaching and expertise to a learner, even when that's not available to them, and also in a way that connects them to others and also helps them improve their community.
And this goes back to the second question we didn't talk about, which is that one of the hypotheses George and I have is that universities in the future will shift their focus. They will still be about learning, about questions of epistemology, knowledge, knowledge-making.
There are lots of profound questions about what knowledge will now count, and we can come back to those. But they will share that focus with ontological questions of being: ‘What does it mean to be a good human in the world?’ ‘What is our relationship to this new entity that sits alongside us?’
Because we now have this powerful new thing, it’s as if we've invited, you know, an alien onto the planet. It's a little bit like the character in Rain Man, which is like, it has an incredibly, incredibly capacious mind, but doesn't quite know how to navigate the world very well.
And it's hard to imagine a job where we won't have an AI companion at our side, and the same goes for lots of aspects of our day-to-day life. What does that mean? Like, what does that mean? And if you've read the novel, you know, “Klara and the Sun” …
Jeff Young:
For those who have not read the novel ‘Klara and the Sun,’ I totally recommend it. It's by Kazuo Ishiguro, who won the Nobel Prize in Literature in 2017. The novel is told from the perspective of an AI robot named Klara, who is purchased as a companion for a child.
Paul LeBlanc:
What is our relationship to the robot? Right? I mean, these are no longer sci-fi questions. These are the questions of today.
So we really start to think about, ‘How do we now think about universities and about learning as being much more intentionally connected to the holistic development of the user?’ So we don't talk about Matter and Space as creating a learning platform any longer. We talk about creating a human-development platform.
It is about learning, certainly about skills; that's probably why most people come to it first. But it's also about helping you be a better version of yourself: your well-being, your wellness. And I could talk about what we're seeing in the early testing; it's kind of remarkable.
And then the third part is about what we might call your, you know, interpersonal skills. We say soft skills, we say enduring skills. It's communication, working with other people, navigating cultures that are not your own. So interpersonal skills are the third leg of the stool for us, and when you pull those together, it's the most genuinely holistic approach to learning that we've seen.
In other words, everything else is a point solution that takes parts of it. And if you think about universities, the traditional residential university has always made a healthy attempt to get at the human being. We create intentional communities, but we don't do it with much design. We kind of say, ‘Hey Jeff, welcome to campus,’ at the beginning of your four years, and we're pretty sure when you leave here, you'll be a better human being.
Jeff Young:
Well, there are tons of ways in which it tries — there are clubs and there are sports. I mean, in a way, you could argue that the design has a subtle assumption that there are a lot of offerings, and people will take from this buffet and it'll all work out.
Paul LeBlanc:
Yeah, but the reality is, we continue to produce Stephen Millers and Ted Cruzes, right. So it doesn't always work to make better human beings.
These intentional residential communities are kind of a buffet. But we don't measure, we don't know; we don't subject that incredibly important work to any real scrutiny. Some would argue it's actually the most important work a university should do, and yet it's the one we give the least attention. We don't hold ourselves accountable in any way. We don't give a lot of guidance. We say to the student, ‘Here's a buffet of things, knock yourself out.’ And what we know is that the kid who doesn't engage with that community in the first three months is at a high risk of dropping out.
Jeff Young:
I think what we're seeing, especially after the pandemic, is this crisis of belonging, and almost a crisis of buy-in.
When you and I went to college, there was a sense of shared respect for the institution that I don't see as much when I talk to students today. I understand the project, but I feel like that social contract, if you will, of the whole package and buffet has kind of broken down a little bit when I talk to some students these days.
Paul LeBlanc:
Yeah, and I think we could talk about how that was happening even before the pandemic. I think we've moved into a much more transactional relationship as a sector with our students. So we now talk about ROI, and I think a lot of this was our own doing as a sector. Given what we charge, why wouldn't someone ask, ‘Is this worth it?’
Jeff Young:
Well, you do need a job, especially to pay off those loans.
Paul LeBlanc:
Exactly. And you know, we've done this massive cost shifting from government support to student support, so students are paying on their own.
So there are lots of reasons why that might be so, but my point is to go back to what we're trying to build. I've really been interested in this question: higher ed as we know it is kind of going through an earthquake right now, and a lot of the buildings are going to crumble. If you can survive the earthquake, then the question is, how do you rebuild?
And how do you take that opportunity to rebuild in new ways that were hard before? It's easier to build from rubble than it is to renovate an existing structure, right? You clear the rubble and you have a clean slate. And I think in some ways it's a useful question to think about what higher ed looks like in 2030 or 2035, because right now it's being shaken. We're in the earthquake.
Jeff Young:
I was struck by how even in this clip you've played and what we've talked about so far, I don't see a human teacher. I see LE, the AI agent, interacting with the student. I see a student interacting with another student. What is the role of a human instructor in this scenario?
Paul LeBlanc:
So this is one I spend a lot of time thinking about, because we actually could build a platform that is so good that at least the traditional roles of teachers wouldn't … Well, the point is that you would be able to provide teachers where teachers are not available.
But the danger is, of course, that we could start to displace teachers where teachers are available, right? That we would have providers who would say, ‘Hey, this is great. Like, I don't need them.’
And so two years ago, it was interesting. I think we would say you keep a human in the loop, because the AI is not that good. Today, I would say you have to keep a human in the loop because the AI is so freaking good, like you have to.
We have to be really thoughtful as a society about where we displace humans, or how we ask humans to play a different role. And when I think about teachers and the educational structures that we live with, I think I would be asking teachers to play a very different role. And I think this would be very hard. My challenge to teachers would not be a welcome one, I suspect, for many people.
So here's how I think about this. I'm going to back up. Mark Schlafman, Elizabeth Collins, Helen Heineman are the three teachers that transformed my life. I don't know that they were great teachers; I'm pretty sure they were. But I don't remember them because of knowledge transfer or their lectures or the way that I got smarter about A, B or C. What I remember is that they made me feel like I mattered. They knew me holistically. They knew me as a human being. They cared about me. They understood my context. They demanded more of me. They helped me dream bigger dreams for myself. They kind of kicked me in the pants when I wasn't doing what I had the potential to do. Right? And those teachers are transformative.
When I am on stage, I ask audiences to hold up their hands. You know their names; how many of those teachers did you have? Jeff, how many did you have?
Jeff Young:
Oh, yeah. I mean, I remember a couple from high school and college right now that come to mind.
Paul LeBlanc:
The average is three. And I think we can, with this platform, build the feeling of that relationship, that kind of learning support, that transformative power. I think we can capture that on the platform.
You know, one of our testers said, 'LE feels like my teacher, my therapist, and my friend.'
Now I want to talk about the dangers of anthropomorphism, because as we have gotten so good at this, it's actually scaring me a little bit.
Let's come back to that in a moment. I want to stay with your question about what about teachers. So what I would argue, what I would love to see with our platform, would be institutions or providers who say, 'I'm going to treat this like the flipped classroom on steroids.'
Jeff Young:
And the flipped classroom being, you know, you watch the lecture video with …
Paul LeBlanc:
With LE, you learn the subject matter outside of class, on the platform, in the middle of the night, or at whatever time you are your best learner. And you're getting wellness support, and you're feeling good, and you feel like, ‘Oh, you know, LE is endlessly patient with me, and I can ask the stupidest question about calculus, and it's never judgmental.’ And all of that stuff we can do. We can adjust to reading levels. I mean, it's amazing how personalized we can get.
And what I would ask teachers to do is to take classroom time and curate it as a community, and as precious human time. And frankly, I would have no screens in the room: no AI, no tech, no LE. LE's doing her work as your genius TA on the side, because the knowledge transfer is not the valuable thing anymore, right? And if that's why you teach, well, a lot of people love their disciplines. I get it. I was a faculty member, and they want to be up in front of class talking about this stuff. And I don't just mean sage-on-the-stage. I don't see a lot of that anyway. Well, there's a fair bit of that. But, you know, we do it because we love our disciplines.
Yeah, what I would rather see is that classroom time as the place where the teacher says, ‘Hey, you guys have been working with LE, and LE is really good at moving the dial — you're learning this stuff. I'm going to use this time for us to now apply that learning. I'm going to give you really challenging scenarios and situations where you now have to take what you learned and show that you really can apply it.’
Jeff Young:
You mean without LE — without the AI helper?
Paul LeBlanc:
It's no longer about what you know; in a world where you can know everything, it's what you can do with what you know.
So I'm going to create situations. I'm going to do hands-on and do experiential [things]. I'm going to do enrichment. I'm going to bring people in. I'm going to have you working with each other. I'm going to have someone stand up and explain a thing that's pretty complicated and see if they got it right. You all are going to sort of be the judges of that, like you could use that classroom time to do amazing things.
And in future iterations of LE, faculty who do want to use our platform that way would have a dashboard that gives them recommendations about how to use the time. So: ‘Hey, Jeff is killing it. He and four others are all really captivated by this notion. We recommend that in class time, you might put them in a group and give them this situation. Paul is going through a difficult, challenging time, which we can't share the details of. We recommend you spend a little bit of time with him today, and don't talk about school work; just let him know that you're paying attention to him, right?’
So as a teacher, I'm preparing to go into that precious classroom time, and my genius TA is basically giving me observations and suggestions. And I have agency. I can say I'm not doing that, or I'm not comfortable with that, or I have something else in mind. But now I'm freed up to really do the thing that we know matters most in teaching, which is human connection.
Jeff Young:
So I want to go back to the fear you have. As you put this in place, what is striking you?
Paul LeBlanc:
We've said many times, ‘If we are so good at what we do that learners just hole up in darkened bedrooms thinking that they're talking to humans, we should pull the plug.’
We're holding ourselves accountable to our level of transparency — so, data privacy and agency. So we want students to be able to always interrogate what we think we know about them. So Amazon thinks it knows a lot about you, Jeff.
Jeff Young:
I think it probably does, right, if it tracks what I buy and what I watch on their Prime Video, yeah.
Paul LeBlanc:
Now, they have data profiles of you. You can't interrogate that. You can't go look and ask, ‘Hey, what do you think you know about me?’ Right? And you know, I have two daughters. So my Christmas shopping is so weird that I'm sure I'm throwing the algorithms off. Like, this guy's buying young women's jeans for Christmas, and also motorcycle parts. Like, what is this?
Jeff Young:
I understand. And then, yeah, you're followed around the internet by ads for women’s jeans.
Paul LeBlanc:
Yeah. So your data profile is going to be an important part of who you are in the world, and we think you should have control and agency over that.
Another thing we're holding ourselves accountable to is whether we're supporting learning that employs AI to make you smarter, or just to get a thing done. So this is an analogy I use a lot these days, which is maps versus GPS. GPS is amazing technology. I use it. You probably use it all the time.
Jeff Young:
I probably over-rely on it, sure.
Paul LeBlanc:
To your point, right? It doesn't make me smarter; it just gets the job done. It gets me to where I want to go. I don't actually have to pay much attention.
Jeff Young:
Yeah, I often don't know where I just went, so to speak. I wouldn't be able to retrace my steps.
Paul LeBlanc:
Compare that with the ancient technology of a map, which is a technology. And when I have a map, I get smarter. Like, ‘Oh, wait a minute, I know the spatial relationships of various towns. I see I'm closer to the coast, and I realize, oh, there's another route I could take. It's slower. GPS didn't recommend it, but look, it's beautiful. I want to go up through the mountains instead.’ And so on, and so on, and so on.
And I think what we're seeing, for example, with writing, is that when students use AI to write an essay, it doesn't make them a better writer — and more importantly, it doesn't make them a better thinker. So you either have to rethink what writing is for, or redefine the act of writing, and probably move the focus from the product ('Is this a good essay?') to the process: I'm going to grade your essay for process. How did you think?
So faculty are facing these really interesting questions. So another question is, again, ‘Are we using AI to help learners be smarter, or are we just helping them get things done?’ And then even that 'smarter' question has to be examined, because what counts as smart in 2020 may not be what counts as smart in 2030. In other words, the cognitive maps are changing.
Let me give you an example of what I mean by that. This was my research as a doctoral student, lo these many years ago, which looked at technology paradigm shifts and how they change the noetic economy of a society. So how society changes how it thinks — literally, how we think differently because of tech. You know that old expression, ‘We shape our technology, but in turn, our technology shapes us.’
So if you remember, Socrates rails against the new technology of writing.
Jeff Young:
Yeah, he had some concerns about it messing up our ability to remember things.
Paul LeBlanc:
Exactly right. So in the time of Socrates in an oral culture — pre-literate culture, pre-writing culture — what you can remember literally equaled what you knew. Like you can only know what you remembered, right?
And then maybe the second most important skill I could have would be networking. Because, like, geez, I don't know anything about hunting, but I'm going to talk to Jeff, because everyone says he's an amazing hunter. And I know Jeff, right. So networking was very valuable.
So we are now engaging in a much-needed conversation about, ‘What should our kids learn? How should they learn? What counts? What are the cognitive skills that are going to be most important?’
So I would argue that if I was a writing teacher, I would require all my students, if I were teaching freshman writing, to use AI — an absolute requirement, because that's what's going to be true of their future.
I've had writing teachers say to me, we should ban AI. And I think, ‘Do you want to ban the very tools that they're going to use in the workplace, the tools that will get them jobs?’ It doesn't make sense to me.
But let's rethink what writing is for and how it happens. I would say, require your students to use AI and then ask for three things. First, I want to see the prompts you used to generate this piece of writing. That's a process question and a thinking question; I will know a lot about how you think and what you know based on your prompts. Secondly, I want to see the draft that it produced, and show me how you improved it. What did you do, whether you prompted the AI to improve it or you jumped in and did it yourself? How did you make it better? How did you know what better was, right? So that's a process and thinking question. And then lastly, I would say, wherever the AI made a factual claim, how did you know it was accurate? What did you do to test its veracity? How do you know it wasn't hallucinating? These three things reshape how you teach writing.
Jeff Young:
I guess the interesting thing for me, as somebody who has spent a career writing, is that I think that's all very interesting, and I've definitely seen that. But I think at some point you do have to learn how to do that writing yourself before you can be in the position to even smartly ask all those questions. So it gets back to thinking through the scenario you laid out for Matter and Space.
Paul LeBlanc:
Is that true, Jeff? Is it? Well, I don't know. Let me ask you this question. I was a literature major as an undergraduate.
Jeff Young:
Oh, same here.
Paul LeBlanc:
All right. So we both started off in classes where we learned to ask a whole bunch of pretty basic questions. Who's the antagonist? Who's the protagonist? What is the plot? What are the key themes? Right? These are the basic questions of literature. And as we read a lot more novels, we asked more sophisticated questions, right?
Jeff Young:
The close readings later on?
Paul LeBlanc:
Right? Yeah, and close reading was one approach, but we also learned about other literary theoretical approaches, and by the time we graduated, we were asking pretty sophisticated questions of any novel anyone put in front of us, right? I feel like you could give me a novel today, and I could give you a pretty sophisticated understanding of it.
Jeff Young:
Yeah, you could hone in on some detail and like, really spin out some interesting analysis.
Paul LeBlanc:
Pretty high-level assessments of the novel. And I never wrote a line of fiction. I didn't have to write a novel in order to be really good at looking at novels and assessing novels.
And there's a version of this that's happening now with code. So the reality is that software engineering, software programming, is being blown up by AI. You still have to be really good at prompting your AI companion to write good code. You still have to have the right questions to look at what it produces to see if it's okay, right? You have to have a high level of understanding. I could argue that, not unlike what I just described with the novel, I don't have to write a line of code now to be a good software engineer.
Jeff Young:
I think it's interesting, but I do worry a little bit. Back to the map analogy: you know, I am worse at reading maps than I used to be. And yes, I have not written the Great American Novel, despite, you know, a dream when I was an undergraduate to maybe go do that someday. I've written some bad short stories. But the point is, even if I haven't written a novel but can analyze novels, I think it's a little different, because I do think I have been taught how to write and do the constructing. Writing is thinking; as I think it was George Saunders or somebody who said, 'Good writing is good thinking.' And so I worry that in the code example, or in writing itself, if you've never built it yourself, or only very rarely, and you don't build enough basic skills, I don't know if you're going to be able to really do what you're saying could be done with that assignment idea: analyze a piece of text, or give a prompt and then say this is bad because of X, when you haven't done it yet.
And I guess that's the trick. It goes back to what you said about the worry: if the classroom is where you take away the device and get someone to replicate the work, but the assignment is not to create it from scratch, when does the student create it from scratch? When do they do the mental work of analysis themselves?
Paul LeBlanc:
Your anxiety seems well founded. I just don't know if either of us have a firm fix. This is so new, right?
So this is the conversation every discipline needs to be having. And not about, ‘What am I holding onto because that's how I know and how I learned?’ But, ‘How will we think differently, and what does that require of us?’
And it may be that we conclude that no one should be able to use AI for writing until they're, let's pick an age, 21, an age of consent, you know.
Jeff Young:
Or you get a license to use AI for writing, so to speak. It's like, yeah, you pass something that says, ‘Okay, now you're ready to go to the LE, you know. It'll write for you or whatever.’
Paul LeBlanc:
But it could very well be that there are disciplines and cognitive skills today that we think are important to be learned in a certain way, where we'll come to say, ‘That's not as important.’
[MUSIC]
Jeff Young:
When I had this conversation with LeBlanc, I was really wondering what some of the professors I know would think of this vision, and I was really wishing I had brought along an expert on teaching to respond to this video that Paul LeBlanc said is supposed to start a conversation about these issues.
So I decided to try something a little bit different for this episode. I sent the full recording of my conversation with Paul LeBlanc to a teaching expert I know who has been thinking a lot about AI's role in education. The person I reached out to is Maha Bali, a professor of practice at the Center for Learning and Teaching at the American University in Cairo. She took the time to listen to the conversation you all just heard, and then I connected with her via Zoom to talk about her reactions.
Maha Bali:
So first of all, I had heard about Matter and Space through friends of mine who are critical ed-tech folks. We call ourselves “continuity with care.” We had a Twitter DM group throughout the pandemic, which continued not just for the first few months of the pandemic but up until now. We just moved to Signal, because everybody left Twitter/X.
So we had been talking about Matter and Space. I do want to say a few things. Can I say these things first? Because it's very important. So first of all, it's really important to say I'm commenting on the conversation you had with Paul, and a little bit of stuff I've seen on the site. I have not had firsthand experience with the platform that we're going to be discussing here. And that's really important, right?
This is how they describe what they've done and what we've heard them say — the discourse around it. It doesn't necessarily mean the platform is what I'm going to say it is, so that's important.
I have to also say I know George Siemens, and I've known him for maybe 10 years or more, sure, and I've interacted with him a lot. I would actually consider him like a light friend as well. And we don't always agree, but one of the key things I do want to say about him is I learned about Connectivism through him, and when I used to read about it, I didn't get it at all, but when I experienced it, I got it.
So I just want to give people the benefit of the doubt. There might be things they're doing that they're not able to explain easily, or it sounds like they're explaining it with hype, or they're explaining it in ways that sound like something else we know but in actuality isn't. Because we haven't experienced it, maybe we're seeing it differently. You know, the lenses we're using are not appropriate here. So that's always a possibility. I just want to give them the benefit of the doubt about that.
However, I also want to say something else. As an Egyptian woman who works in the area of social justice in education in general, I believe that socially just education needs to be designed with a justice lens. And to design something with that kind of lens, it probably needs to be designed in a participatory manner where those who are most marginalized, those who are furthest from justice, are included in the design and not just served by the technologies we create, right? And so it has to be co-led by these people as full participants who set the table, not just get invited to the table at some point. And I'm not seeing that. People from the Global South and Indigenous people and people with disabilities and women and so on, and all kinds of minorities: I don't see them prominent in this work.
That already makes me skeptical about whether it really does serve, you know, everyone.
So those are just foundational things to lay out before I comment on particular things that Paul LeBlanc has said. I have never met or spoken to Paul at all, but I have met and spoken to George a lot, not in person, but in a lot of online conversations, some where we agreed, a lot where we disagreed.
Jeff Young:
That's great. No, I appreciate the context, and that's important to note. Okay, so the reaction there is one of, ‘Who's participating in the design?’ is what I'm hearing. And a question about how the tool or the platform, LE, this AI agent, might end up being used.
I believe Paul LeBlanc was going off to Mexico to talk to people there about its potential use, and that he was getting interest from places that felt like they couldn't provide enough in-person teaching, and so could this tool be helpful?
Maha Bali:
And you do know that a lot of places in the Global South that can’t provide that in-person teaching also don't have stable electricity. So if you're going to provide a tool, you need to have, like, solar energy or whatever. And solar energy is also not necessarily very cheap. So there's a lot of those things about remote places. There are remote places in Canada, and they haven't solved that problem yet, and that is a well-resourced country. So imagine the Global South, where there isn't stable electricity, and obviously not stable internet, even in better-off places like Egypt and South Africa sometimes. So let's just keep that in mind as well. That's important.
And about the video they created: you said they went off to Mexico. The video they created, they said it was with Indigenous actors, with permission from the Indigenous folks over there. But still, were they involved in the design of the tool? As far as I understand, they just recorded the video with them.
So you thought of them at some point, but not early on. It felt like a white savior type of video, honestly, to me.
Jeff Young:
Yeah, so that's an interesting reaction, actually. You know, there is this clip that I played and looked at with Paul LeBlanc. I wonder if we could watch it real quick together. Just this short clip. I'm going to share my screen, so bear with me.
Video Clip:
My favorite is the monarch.
Jeff Young:
I played the same clip for Maha Bali that I played for Paul LeBlanc earlier in this episode.
Maha Bali:
So lots of different things. So one of them is, this is kind of already possible. Just use video conferencing and meet people and let's talk, right? You can do a lot of stuff without all of this hologram type of thing, although it'll be cool when it's there. Sure, I don't mind.
The other thing is the use of the glasses. I remember Google Glass; and I think they created something else, Google Cardboard, I think it was called, and it was such a failure, because it distracted people when they were doing things. So I'm scared of the tech that's going to be right in your face the whole time. It's bad enough with the smartphone in your hand all the time. Now it's going to be in your eye, like you can't even look away from it. That's going to be kind of scary. So I'm not too excited about that one.
I think the potential of AI for translation is beautiful and is already here and has been here for a long time, as have the problems with it, right? I think one of the things they talk about later is the translation: they never learned each other's languages, but the AI is translating for them. Sure. And honestly, I use AI for translation all the time, but I can only trust it in the two languages that I'm very fluent in, which are English and Arabic, okay?
And then the other element is that language is culture, right? I think using AI tools for language translation is useful if you're just in the airport passing through, if you're just in a country for two days. You're not going to invest a year to learn a language for a place you're only going to be in for two days, right? But take my friend who just married someone whose parents speak a different language: I don't think you want to keep using AI translation for that relationship for the rest of your life, right? There's culture behind language.
And even when you read a translated book, if you don't know the original culture, you're like, why are they doing that? It's really difficult. Culture is complex and deep and takes time to understand. And when we teach languages, we teach culture along with language, yeah? So if everything becomes translated by a machine that doesn't understand nuance and context and culture, yeah, it's cool for a little bit. But if their lifetime interaction is like this? I think, yeah, learn each other's languages, people.
Jeff Young:
So that's interesting. And so this idea that the learners would be connected thanks to this AI agent: you're basically saying there's all this technology that's been around for years that could have done some version of this, and the challenges are known. So the idea that AI would solve these challenges is, you know, kind of far-fetched at this point.
Maha Bali:
Yeah. I think there's a lot of reasons why people object to use of AI in general, or in education specifically, and especially generative AI. And related to what you were saying right now, there are two directions to go. Do we really need AI to solve this problem? Or could we just solve it with humans or with simpler technologies that we already have, that are less potentially harmful?
And the other one is: is the AI that we have right now actually good enough to do this? The good-enough question could change with time.
Jeff Young:
If I could sort of summarize, and since you've heard it too, you can push back if my summary is not quite right. One of the things that seemed to be Paul LeBlanc's argument was that the opportunity with AI is to really be there for students who might have questions outside the classroom, and to be so good at getting people over whatever small obstacle they hit along the way that they can keep doing a lot of sophisticated work and project work, more than in the past. And then when they're in class, somehow this makes class better, because of this kind of tutor-like interaction with the agent. So what do you say to that kind of vision?
Maha Bali:
First of all, my first question is, what's new? You always had opportunities to ask other people, to search the internet, and now to ask ChatGPT or whatever. You can give ChatGPT, or any AI tool, some content and ask it to answer the question from within the content, so it doesn't go and hallucinate too much. We can already do that. I don't know what's new about this tool.
But I also like going on discussion forums and asking other people. And why don't I just have a TA to help me with this, who's probably going to give me a better answer than the AI? I know there are all these AI tutors now. There's one company that I met with, and they were saying the AI tutor will give an answer, and then there will be a teaching assistant. If the AI tool is not sure that it's got the right answer, it's going to refer you to the TA, and the TA is going to revise its answers, and then eventually it will build its database and improve on itself so that it answers questions better.
I think for very basic things, fine. But if you've interacted with these chatbots on websites, they're very clearly bots most of the time, and they're usually not very good. Google's AI summary often makes things up: if you ask it about something that doesn't exist, it's not going to tell you it doesn't exist. It's going to try to make up a meaning for it. So ask it really hard questions. I think when people test these things, they ask questions where they know the answer is in the book. Ask it a question where the answer isn't in the book and see what happens.
I created a chatbot once that was supposed to only answer me in Arabic, and at some point, I don't know what I did, I must have broken it, because it started talking to me in Spanish. And I was like, ‘What's this got to do with it? That's not what I told you to do.’ It just went off the rails, you know. So those things happen.
I don't think there's anything particularly new about any of that. I think the talk about personalization has existed for a very long time and has never, ever, ever done what it's supposed to do. It's also very individualistic. And personalization is always based on what people who look like you do, based on some categories of data that we know about you; therefore, you must be like them. It's not personal at all. The only way you can really, really personalize something is by really knowing a person as a person and then responding to that.
Any automated way of doing it is either going to use random data points that the developers decided were the important ones to collect about you, and then you miss a lot of details. Even as humans, we can't do this fully; I can't remember the name of the psychologist right now, but he was a psychotherapist who used to say we'll never know every part of a human being, right? So categorizing people is always problematic. It's reductionist, right?
The other version of it, the one that collects everything about you, is surveillance, and we don't want to normalize surveillance for education, so that surveillance then becomes the norm and we end up in a kind of 1984, George Orwellian type of future, where, you know, governments and corporations know everything about you, right? So that is all not comfortable for me.
Jeff Young:
So it is interesting, this idea that, to you, if there is a breakthrough with generative AI around personalization, there's a chance that the only way to get that breakthrough is through radical surveillance. Is that what you're saying?
Maha Bali:
It seems like that to me. I do want to comment on something else, by the way, related to these very quick answers you can get from AI. I want to tell a personal story about a friend of mine who moved to a new country, and she has not made new friends yet. She's got four kids; she has a lot going on. So when she needs something, she contacts me, and the internet makes that possible, and it lets her have a social life even though she doesn't have friends in person, which I realize is not good for her, but it's a good temporary solution.
So one time, she texts me on WhatsApp because she needs to talk about something, and I'm at work, so it takes me, like, a couple hours to get back to her. And you know what happens in between? She's asked ChatGPT. And ChatGPT has responded to her, and it gave her this kind of combination between Islamic prayer and affirmation that no one ever would say in Arabic, but it was beautiful and interesting and whatever, and then she doesn't need me anymore. And I think that's very problematic, that you get used to these, this immediate gratification that you get from the technology, and then any human responding to you is too slow.
If you have two parents and you have multiple brothers and sisters (I'm an only child, so I didn't have that problem), then there's no way every parent will be there for you immediately when you want them, right? I was just at this AI for Good Global Summit in Geneva, organized by the UN, with lots of exhibitors doing what they call AI for Good. And one of the AI for Good things they were trying to convince us of was this teddy bear that you give to your child, okay, and it's there for your child. And I'm thinking, well, they're going to talk to this teddy bear instead of their parents and their siblings and their friends. And they say, no, no, it's not an addictive one.
The parent can set it so that if the child starts to get too addicted, it starts talking to them less, and it will tell the child to go talk to their sibling and give them ideas of what to say to their sibling. I'm like, really? It's very much what Seymour Papert warned us of, and what Audrey Watters always reminds us of, which is that it's the machine programming the human, rather than the human programming the machine.
And you take away people's agency by calling it personalization, when it's really the machine deciding for you what you see next and what you do next, rather than you messing around in this chaos of knowledge and trying to find something on your own and struggling a little.
When children are motivated, they struggle and they find what they want. When my daughter is struggling with something in school (she's 13 now, but this was true even when she was eight years old), she wants to give up, because she doesn't like it. She's struggling; it's hard. But when she's struggling with something in the game Minecraft and she wants to create something, she will watch videos, she will read, she will troubleshoot, she'll spend hours until she gets what she wants. And she's learned something. She's learned programming. She's been able to achieve what she wants, and she's proud of it. I think it's all about motivating children and finding what motivates them, rather than giving them easy ways to get to things that we want them to know.
There's another thing Paul says. He talks about how all the children are going to reach the same learning outcome, but each one of them is going to reach it their own way. I think self-determined learning is about each of them deciding what their learning goals are, rather than us telling them what their learning goals are. And you don't need a learning environment for that. That's life. Life is you discovering what you want to learn about and finding people that you want to learn with, rather than going through this interface.
Jeff Young:
So the lure, it feels like the concern, is that it supplants human contact just by being a little more convenient and, you know, digital, so it's less awkward and maybe a little bit less socially challenging?
Maha Bali:
Yeah. So think even about people who are on the autism spectrum; obviously, that's an extreme case of someone who really might struggle dealing with humans, and I can understand the value of this. However, I also know that the parents of children who have autism also want, first of all, the rest of the world to learn to deal with that. The rest of us have to learn to communicate, instead of just having them always be with a digital chatbot.
Because they're still humans. They have feelings; they still want to connect with people. They just don't connect with us the same way most of us neuronormative folks connect with each other. And of course, we're all on a spectrum as well, right? I mean, we all accept that very young children don't speak, and we deal with it, right, before they're verbal. So we need to accept that there may be older adults who communicate in different ways, and we learn to deal with them.
There were these AIs that help someone who is autistic write in a way that sounds more like everyone else, and that's problematic. We should learn to understand the text of someone who isn't like us, in the same way that a lot of AI is also, you know, feeding into native speakerism. We have a professor at my institution who speaks very good English but just has a little bit of an accent, and he just wants us to clone him, like deepfake him, so that he sounds like a native speaker of English. And I'm like, ‘No.’ Students, and everybody, need to be okay with respecting that a non-native speaker doesn't sound like a native speaker. And I say this, of course, with my mid-Atlantic accent, but I respect people who speak with an accent, and we need to get used to this diversity, rather than try to make everyone sound the same. This is also very problematic.
Jeff Young:
Yeah, it's interesting. With the internet, we had filter bubbles, where people were only, you know, in conversation with people who had similar views to theirs. But it seems like you're painting a picture of a concern about this kind of radical world in which everything is kind of normed and smoothed over, the way things sound and look, and a shared culture that's very watered down?
Maha Bali:
Does it sound very slippery slope to you? Am I exaggerating?
Jeff Young:
I think the whole point of this conversation is to imagine what this could roll out to, because things are moving so fast. So no, I wasn't actually pushing back, except to ask, just to clarify, is that the concern you're raising?
Maha Bali:
A concern. It’s one of the concerns.
Jeff Young:
Also, I was really struck by something Paul LeBlanc said, which is that if they see their tool, this AI agent, being used to replace human interaction, which is one of your concerns, they would pull the plug. And I wondered what you thought of that.
Maha Bali:
I'm just wondering why you need to test that. It seems obvious. It's already happening. My friend consulted ChatGPT when I didn't respond to her within two hours. I know they're saying you go into the classroom and you have no technology, and then you sit with people and you focus on them, and that's fine. That's just flipped learning. You don't need LE for that. You just need to give students something to do. They could read a book, they could watch a video, they could do whatever they want. I'm okay with LE doing that. It's just, I don't think it's revolutionary, is all. If this is all it's going to be doing, then it's not that revolutionary. So I don't see what's so special about it, but there might be something that I'm not seeing, you know. I'm not seeing what value added there is in all of this, honestly.
Jeff Young:
And I guess the other thing that I hear a lot, and I'm sure you see it in conversations about AI, is that, you know, resisting is not possible, because the world is changing and AI is going to be everywhere. The world of work and society after students leave the university is going to be infused with AI, and so it's sort of irresponsible not to introduce them to it, give them tools, and say, ‘Let's get used to using an LE-type thing.’ What do you say to that argument, which seemed implicit in the conversation that I had?
Maha Bali:
I gotta be honest. First of all, I'm an educational developer, so I support people along the spectrum, and I say ‘no AI shaming.’ If you find AI helps you, knock yourself out, use it. Teach us how you're using it; maybe we'll benefit.
And that's a lot of our engineers, and sometimes even people in the writing program. And if you're someone who resists AI, I respect you; you probably have a good reason, you're not being lazy. I trust you, I trust your reasons for that, and we need to respect them. And there's a spectrum of reasons.
In my own classes, I teach a digital literacies and intercultural learning course. So what I do is, first of all, from the very beginning, you're allowed to use AI. We're going to explore a lot of issues related to identity and bias and othering and ethics and inequity and oppression and all of that. And then we're going to apply that to AI and technology. I don't tell them this, but they get to experience it throughout the semester: they get to experience how AI reproduces oppression in all kinds of ways and reproduces bias in all kinds of ways. But they're allowed to use it however they want, as long as they let me know that they've used it and reflect on it.
And towards the end, they're allowed to use it in a certain project in whatever way they want. I don't tell them ‘use it here’ and ‘don't use it there.’ And by the end of the semester, they're like, ‘Oh, I'm not going to use AI except to revise my grammar. I'm not going to use AI except for this.’ And sometimes they'll use it on certain things and say, ‘Oh my God, it's stereotyping.’
So different people use it for different things, and what I want to say is that different people need that extra support from AI, support they're not getting from human beings, in different areas of their lives. Some people are good writers: they love the way they write, and they hate how AI takes their voice. Some people are not good writers, and I'm not teaching writing. If I were teaching writing, I would take more time to help them with their writing, but since that's not what I teach, they can use AI to help them with their grammar or whatever. That's fine with me. And I think the main thing we need to do to prepare students for AI is to equip them with the good judgment and wisdom to know whether or not to use AI for something in the first place. If it's a very critical, high-risk kind of decision, do you want to risk not having accountability over how you reached that decision?
Jeff Young:
I guess one of the last questions I wanted to ask you is this: one of the ideas inherent in what Paul LeBlanc talked about, and what other AI folks who are radically embracing moving it quickly into the classroom often talk about, is that it's going to change the role of the teacher. And I wondered whether you think AI might change the role of the teacher, whether these chatbots, LE or something else, might be different from previous technologies and might lead to a change in instruction.
Maha Bali:
Yeah. So first of all, I think the internet should have changed the role of the teacher, but not all the teachers changed. I think just the presence of the internet meant that we cannot be content centric in the way we teach. It should have changed a long time ago, and it didn't.
Of course, AI makes it easier, because you can ask, like, a very specific question, and you don't have to be very good at doing Google research anymore; you can just ask the very specific question you want. Yeah, I think regardless of LE or whatever, ChatGPT and Gemini and all those exist, students have access to them, students are using them. That has happened, that ship has sailed. We can't change that. We have to accept that inevitability.
As for whether it transforms education, or whether it's really going to keep working in all fields, I'm not sure. We'll see what happens.
I do think we do have to rethink our assessments, and we need to rethink whether students are seeing things that are relevant to them, and we need to rethink whether we're supporting them enough or not.
Jeff Young:
Well, I guess one thing to leave on: one thing I got from LeBlanc, and from many people who are, you know, talking around AI and education, is a kind of very optimistic, utopian hope that this can really move the needle, improve access to education, and get more people learning at a higher level. What do you think? What are your predictions for how AI will affect education as it's going right now?
Maha Bali:
I think a lot of what you're calling optimism, I'm calling techno-positivism. I think people are just overly positive about technology, and a lot of it is hype, and historically it has never lived up to that hype.
Jeff Young:
Are we moving too fast to adopt AI in education? We being, you know, higher ed?
Maha Bali:
Obviously the corporations are pushing for that, and the administrators are neoliberal and don't understand pedagogy, and the teachers are suffering, and the students are going to suffer even worse. And I feel bad for people who are enthusiastic about tech … the ones who struggle to learn new tech, I feel bad for them, because they're always feeling like they're behind. They felt behind during COVID, and they feel behind now. With COVID, there weren't that many options, so you had to do something about it. But with this one, it feels like you're being forced to feel behind when the students have access to it. So yeah, you are behind if you haven't tried it and you don't know what it does, because you want to understand what your students have access to. But yes, moving too fast.
So yeah, AI exists. We need to accept that it exists, that students have access to it, and that possibly there will be jobs that use it. I don't think it's inevitable that it will take over. I don't think it's inevitable that it will transform or whatever.
It might be moving too fast, especially for K-12. I've been involved with an accreditation body that works with K-12 schools in the U.S. and internationally, and whenever we're talking together, I say we need teachers to be able to recognize whether AI is going to be helpful or not, not to say how AI is going to be helpful, as if we know it's a foregone conclusion that it's going to be helpful. We just need to figure out whether. The answer may be yes or no.
Jeff Young:
And just to be clear, I think this is a conversation worth having, and I'm trying to be as neutral as possible. Because one thing I'm excited about is that I feel like people like LeBlanc are not talking enough to people like you in the critical pedagogy space, and vice versa. And so my hope is that surfacing these views together will help people hear a broader discussion. So thank you.
Maha Bali:
There are people who have much more knowledge and expertise than me who are talking about this. So listen to Joy Buolamwini. Listen to Timnit Gebru. Listen to Emily Bender. Listen to Audrey Watters. Listen to Safiya Noble. They're usually women, okay? We've been talking for a long time about these issues, and there are men too, if they want, but I want them to listen to the women of color. Stop reading other white men for a while; start reading women of color who know AI, because you don't want to talk to people who don't. I had forgotten that I was a computer science grad until AI came back, and then I went back and said, yes, I have a PhD in education, but my undergrad was AI.
Now we can talk.
Jeff Young:
So well, we could obviously continue this conversation. It will be continuing out there in various forms. But thank you for joining me today.
Maha Bali:
Thank you so much for inviting me, Jeff.
Jeff Young:
As you can hear, these two longtime education experts have come to some different conclusions about where they see AI fitting into education.
I recently checked back in with Paul LeBlanc by email to see how his project is going. He said Matter and Space is planning to launch the first commercial version of its LE chatbot later this month. His first customers are Southern New Hampshire University, and six universities in Mexico. The reaction from testers has been “super encouraging,” he says.
The plan is to start with a limited pilot launch to make changes based on user feedback.
I ran some of Maha Bali’s critiques by Paul. Here is what he wrote back in an email:
“I have all my ongoing worries about the big questions: What do we do about anthropomorphism? How do we foster human connection and not replace it? How do we make it affordable to learners in the Global South and other low-income learners?”
His overall hope for his work, he says, is that by doing early experiments he can help focus on what AI in education looks like when used in what he calls “transparent, responsible, and healthy ways.”
I'd love to know what you think. Please send along your thoughts in a written email or a voice memo to jeff@learningcurve.fm. I might include them in a future episode, so please note whether you want to use your name or be anonymous.
This has been episode two of Learning Curve, and I hope you will help me get the word out about this new show.
Please tell a friend who you think might like it, or leave a rating or review wherever you listen. The easy thing to do is just click the stars on whatever app you're listening on; that will help get us recommended by the algorithm.
You can also sign up for Learning Curve Plus, where I'm going to be posting the full interviews with both Paul LeBlanc and Maha Bali. They both had a lot more to say on these issues, and some fascinating insights. Just go to learningcurve.fm to find out how to get that. Some quick credits here:
This episode was reported and put together by me, Jeff Young, and you can find me at jeffyoung.net.