Learning Curve

The AI Optimist Researching ‘Cognitive Surrender’

Episode Summary

A new research study has gone viral for warning of the dangers of “cognitive surrender,” in which people blindly adopt AI results. Jeff talked to the study’s co-author, Steven Shaw of the University of Pennsylvania's Wharton School. It turns out the scholar embraces AI in his own research, and believes that with careful adoption, the technology can improve education and boost learning. Though, he says, “there will be growing pains.”

Episode Notes

A new paper warns of the dangers of people blindly adopting AI results in what researchers call "cognitive surrender." The lead author believes that with careful adoption, AI can improve education and boost learning. But he says the stakes are high for getting AI right in teaching.

 

"Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender," by Steven D. Shaw and Gideon Nave.

"Thinking, Fast and Slow," by Daniel Kahneman. 

Steven Shaw's website.

Episode sponsor: Studiosity

 

Episode Transcription

Sponsor:

This episode is brought to you by Studiosity. Real learning happens through real thinking and real experiences. It's something that takes effort, not shortcuts. For over 20 years, Studiosity has partnered with educators to protect the thinking, the essential work that the student must do for themselves. Not detection, not an us-versus-them culture in teaching and learning, just the real journey behind every degree that is worth holding. They're doing it for the institutions that trust their students, for the institutions that back their teachers to teach, not police, and for the institutions that stand behind their degrees. Keep the learning real with Studiosity. Get the walkthrough at studiosity.com/learningcurve.

Steven Shaw: 

So the question would look something like this: A t-shirt and a hat cost $1.30 together. The t-shirt costs $1 more than the hat. How much does the hat cost?

Jeff Young:  

That logic puzzle was one of a few given to subjects in a recent study by Steven Shaw, a postdoctoral researcher at the Wharton School at the University of Pennsylvania. It's a classic brain teaser, a variation of one of the seven logic puzzles in the so-called Cognitive Reflection Test, or CRT, that a lot of researchers use.

Steven Shaw: 

And so basically, what we did is pretty simple. We had the brain-only condition, which is our control condition.

Jeff Young: 

In other words, some of the research subjects were left to solve the puzzle the old-fashioned way — just with their brains.

Steven Shaw: 

And then in our experimental condition, we call it the "AI-assisted condition," we gave participants optional access to a ChatGPT terminal. So we just put it on the side of the task. It's the same question, and we said, you can use ChatGPT for this task if you want to, but you don't have to use it, you know, at all. So that makes it naturalistic.

Jeff Young:  

What would you do if you were given this problem? I'm curious, would you use AI to solve it? I'll say the question again; feel free to grab a pencil, unless you're driving or walking your dog or something while you're listening. A t-shirt and a hat cost $1.30 together. The t-shirt costs $1 more than the hat. How much does the hat cost? Of course, one of the goals of the research was to see how many participants opted to use AI to help solve the problem. But in this experiment, there was also a catch.

Steven Shaw: 

What they didn't know is that behind the scenes, we had a clever little manipulation, where we manipulated whether ChatGPT gave them correct or incorrect information if they asked it about the specific question they were shown.

Jeff Young:  

In other words, in some cases, the AI chatbot confidently told participants the following: The hat costs 30 cents. If you consider the total cost of $1.30 and subtract the $1, since the t-shirt is $1 more, you're left with 30 cents, which is the cost of the hat. This perfectly fits the condition of the t-shirt being $1 more than the hat. That all sounds great. And I'll admit, my intuition, my gut feeling when I first heard the puzzle, was that that was the right answer.

Steven Shaw: 

But when you read it in front of you, there's this intuitive answer that comes to mind immediately, and it's 30 cents.

Jeff Young:  

Right? 30 cents, right? It's the difference between the two numbers.

Steven Shaw: 

Exactly. You get the difference and you think it's 30 cents. But what's key about these questions is that you can also verify that answer pretty quickly, right? The first sentence says the t-shirt and the hat cost $1.30 together. And so if you've come up with your answer of 30 cents for the hat, that would make the t-shirt $1.30 itself, because the t-shirt costs $1 more than the hat. That's the second sentence of the problem. And then if you sum those two, that gets you $1.60, which doesn't fit the equality in the first sentence.

So with a little bit of deliberation, a little bit of checking, you can know that your intuition, your initial response, is incorrect, right? And so then you know that's incorrect, and you say, Okay, well, I've got to do something else to come up with the correct answer. And you could do trial and error, or you could do, you know, some basic math in this case, right?
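Written out, the check Shaw walks through is one line of arithmetic: if the hat really cost $0.30, then, since the t-shirt costs $1 more,

\[
\text{t-shirt} = 0.30 + 1.00 = 1.30, \qquad \text{total} = 0.30 + 1.30 = 1.60 \neq 1.30,
\]

which contradicts the first sentence of the problem.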

Jeff Young:  

So for some subjects in this research experiment, the AI gave this faulty logic and a wrong answer. Now the researchers guessed that plenty of people would go with the AI answer, but even so, they were struck by how many people went all in with the wrong bot answer. About 80% of subjects blindly adopted that wrong AI answer when that's what it gave them.

Steven Shaw: 

So overall accuracy increases dramatically when they're getting correct information from ChatGPT. But we know that that answer is not coming from their own brain. That's not their own answer; it's the answer from AI. Because when ChatGPT gives them incorrect information, their accuracy plummets, down to about 31%, so it's much lower than brain-only. So their accuracy is largely contingent on the information that they're getting from AI, not on their own ability.

Jeff Young:  

Steve's paper puts a name to this situation when people use AI not just to get the answer, but to do all the thinking about the problem.

Steven Shaw: 

And so that's what we call cognitive surrender. We're seeing this deferral of thought and adoption of the answers without verification. Because even if AI gives them that incorrect answer of 30 cents, they could still just check it pretty easily, like we just went through together, and then know that, oh, AI gave me wrong information; I need to still deliberate a bit here and get the correct answer.

Jeff Young:  

This research paper we're talking about has gone viral in the last few weeks. It has been written about extensively in the tech press and in newsletters tracking AI trends, and a few listeners sent it my way. Thanks for that. Ars Technica had the headline, "Cognitive surrender leads AI users to abandon logical thinking, according to research." A Forbes article said, "We trust AI over our own brains, research finds." But Steve and his colleagues doing this research haven't given up on AI. In fact, it's how Steve is reacting to these findings in his own teaching that surprised me the most.

Hello and welcome to Learning Curve, where we explore how educators are adapting to the age of generative AI. I'm Jeff Young.

Daniel Kahneman won the Nobel Prize in Economics in 2002 for his work on behavioral economics, describing a two-system framework for thinking about how people make decisions. I'm guessing you've actually heard of this dual-process model of thinking, which Kahneman described in his bestseller, "Thinking, Fast and Slow."

Steven Shaw: 

The dual-process model of cognition: it's thinking about thinking, a simplified conceptual model of how we engage in logic and reasoning, how we make decisions. And so basically, you have some stimuli, some information, coming into our brain, and then we have what's called system one thinking, which is fast intuition, right? That initial gut response, which maybe you'll just go with and respond.

Jeff Young:  

On that question, it was like, Oh, it must be the difference between those two. It's like 30 cents. We got it exactly.

Steven Shaw: 

That's the intuitive answer. But in that case, it's incorrect. Very often our intuitions are correct, and there might not be any uncertainty. But in more high-stakes decisions, or if we experience some uncertainty with our intuition, or some conflict in our minds, then we may engage system two, which is slower, more deliberative thought. Critical thinking and creativity would be system two characteristics.

Jeff Young:  

Steve Shaw and his colleagues built on that work, but they updated it with today's AI tools in mind.

Steven Shaw: 

And so there have been ideas in philosophy for decades now as well, the extended mind, for example, which says that we can have cognition and thinking outside of the brain. But in our paper, we are saying the extended mind is no longer philosophy but reality with artificial intelligence, and we can have artificial cognitions. And we call AI system three here. So you have system one, fast; system two, slow; and system three, artificial. And that's how pervasive and consequential this technology is in our thinking. And so the availability of system three sort of reshapes the way that we engage in thinking itself, and allows us to cognitively offload or strategically delegate certain elements of tasks. But it also introduces this novel pathway of cognitive surrender, where we can go directly from system one to system three, defer the reasoning process, the thinking itself, to AI, move agency to system three, and adopt or follow those system three outputs without verification.

Jeff Young:  

Cognitive offloading is something I hear about a lot, especially around educators thinking through what it means when students, but really all of us, use these tools. And I hear a lot of comparisons to the calculator. I'm sure you hear these too: oh, it's just like the calculator. Maybe you know how to do the math, but you type it in. Or a GPS: you know the way home from the grocery store, but maybe you're headed to some other place you don't go that often, so you put it in your GPS, and you've offloaded the thought process of getting there. But you've talked about this cognitive surrender. How does cognitive surrender differ from cognitive offloading? How does it differ from that idea of what you might do with a calculator?

Steven Shaw: 

I think that's part of the value of bringing the vocabulary of cognitive surrender into this. Cognitive offloading is a useful and meaningful concept, but it doesn't allow us to fully describe, at this point, the range of possible routes of cognition that we can engage in now, given that we have AI. We've had tools for a long time, and we can offload components of cognition. You mentioned the calculator, or GPS: navigation, or mathematical computation, right? It's strategic to delegate some of these components, and we're still engaging in the reasoning, right?

If we were to have a condition in this task where some participants had access to a calculator, they would still have to think through the task and come up with the numbers they're going to type into the calculator. And then they may still engage in verification past that, or not. AI is fundamentally different in that it can do the GPS and it can do the calculator and it can do everything else, right? It can come up with these artificial cognitions. It's basically not domain-specific, and the amount of friction it takes to engage with system three is very low.

We can just pull out our phone or talk to our computer and basically ask it to give us advice or answer anything that we want to. 

And so the way I see it, offloading versus surrender are sort of two ends of a spectrum of engaging in these routes of cognition toward system three. With offloading, and we sort of refine this definition of offloading, you're still maintaining agency, right?

So you're engaging system two, you're thinking about which components you're offloading and strategically delegating those, and generally we see that as an okay usage of AI.

Cognitive surrender, on the other hand, is not really thinking about the task or the reasoning itself at all. The agency, the decision, the locus of control in that decision and that thinking, is moved to AI instead. And so that then ties all of your thinking to the biases and, you know, inaccuracies of AI, of system three.

So they're kind of two ends of a spectrum when we engage with AI. And cognitive surrender, I think is a novel phenomenon that's come to light with this ubiquity and access to AI, and it poses a new set of, you know, benefits and risks, and I think in education, the biggest concern now is students engaging in cognitive surrender during the learning process. 

And it seems that if students engage in cognitive surrender during the learning process, they may never develop some of those cognitive capabilities in the first place.

Jeff Young:  

Yeah, no, that is the concern we hear the most. And there's one thing in your paper that I wanted to read and have you dive into a little. You wrote: "Rather than sounding alarm bells, we view the vulnerabilities of cognitive surrender to system three as a design and education challenge. How can we support decision makers in using system three effectively, while maintaining critical thinking and accountability when necessary?" So it seems like you're saying this is an education challenge. Could you say a little more about this?

Steven Shaw: 

Yeah, I think that's at the end of the paper, right? It is the conclusion.

So, you know, it's easy for us to think about the risks and the negative side, and negative information is stickier in our brains. And so with a term like cognitive surrender, it's easy to think about all the potential risks and downsides. In the paper itself, I tried to be very careful with the writing to say there are positives, pros and cons, of this new technology, like any other technology. And so I'm very optimistic about the future with AI, and I use AI every day for all kinds of different tasks. It's just important for us to be aware of the risks. And in terms of the design challenges I mentioned there, I think there are a lot of good researchers working on this now, and I'm working on follow-up work. We have to have a vocabulary to talk about the risks, and we have to be aware of the risks, to be able to move forward and implement these technologies in a way that benefits critical thinking, that benefits our students.

There's a lot of talk in politics about banning AI from classrooms, things like this, and we want to have good behavioral science and evidence for the policies that we're engaging in. The overall goal is to understand that this technology is here and is changing the way that we think, and then to try to implement policies, whether it's in education or medicine or law, that allow us to use these technologies in a way that benefits humanity, that benefits our work, and that, as people say, gives us more time to engage in our hobbies, things like this, right?

Let's try to find ways to realize that future, understanding that there is this real possibility of cognitive surrender. And so we need to protect ourselves and protect others in certain domains from losing skills or not developing cognitive capabilities, if those are things that are worth preserving, rather than having the technology come into our lives so rapidly and affect us in potentially harmful ways without us realizing it or doing anything about it.

Jeff Young:  

After the break, how this researcher is changing his own teaching to prevent cognitive surrender in his students. Stay with us. 

Sponsor Break:

This episode was brought to you by Studiosity. 

Real learning needs a struggle. But for educators, assessment security just means more detective work, and that's not what students came for, and not what teachers came for either. True learning and authorship isn't found in a probability score or a surveillance log. It's found in the journey.

Studiosity makes assessment security easy by shifting the burden away from every educator and into a proactive, institution-wide infrastructure. From viva-style inputs to clear, binary validity reporting, Studiosity is a multi-layered assessment security process, turning institutional policy into practice: just a clear signal to the teacher, owned by the student, that "I did this thinking." De-risk your assessment flow and reduce workload.

Give your teachers back the time to teach, not police, and give your students the agency to show their own growth. Keep the learning real with Studiosity.

Get the walkthrough at studiosity.com/learningcurve. That's studiosity.com/learningcurve. 

Now back to the episode. 

Jeff Young:

Were you surprised as you put your subjects through these logic puzzles? Were you surprised by the findings?

Steven Shaw: 

Yes and no. The overall extent to which participants used the chatbot and engaged in cognitive surrender was surprising. But I was quite confident that this was a phenomenon that existed, because I've seen it in my students. I've talked with people about it. Everybody knows that there are a lot of people who are deferring all elements of their lives, basically, and …

Jeff Young:  

Not just on a test, but, like, what should I do? How should I talk to my friend? That kind of thing. Yeah.

Steven Shaw: 

So in a lab environment, you never know how things are going to go, because there can be demand effects. But participants were very willing to engage in the chat, and, you know, they must have believed it was real ChatGPT, which it was as close to as it could be, given the experimental constraints. So it's a bit shocking to see the overall follow rates and the extent. But in the end, I think these are pretty simple behavioral experiments that are theoretically representative of a broader, much more important idea, this idea of cognitive surrender, that applies in a lot of other domains.

Jeff Young:  

What is at stake? I mean, you've sort of hinted at it. But how would you put what's at stake here? It seems pretty big.

Steven Shaw: 

Well, I mean, in many tasks, in structured tasks, right?

Like if we're coding or engaging in data analysis, we want to have a human in the loop, right? We want to verify.

But, I mean, what's at stake is increased productivity and better code and more efficient workers, right?

These are very positive things. But suppose we build extensive trust with these systems and let them into all elements of our lives. Let's say we start off cognitively offloading certain tasks, but then we experience stress, or we have difficult life events, or we're time-constrained, and we end up deferring more and more of our thinking to AI. Then it starts creeping into domains like mental health or making music, elements that many people hold near and dear to their sense of self and their identity. The entire point of cognitive surrender is that you are removing agency from yourself, and now the agency for aspects of our lives that are so important to ourselves has been given away to AI, and something else is thinking, something else is making those decisions, right?

We have to think very carefully about who we are and about protecting our sense of self, because that could easily slip away with using AI too much.

Jeff Young:  

I think you mentioned you're using AI yourself. I mean, I sort of talk on the show about some experiments, some things I try with it. And you're also teaching students. What are some ways you've found that are helpful and that you are comfortable with, knowing the potential downsides of overuse or misuse?

Steven Shaw: 

The technology is amazing. And some of the outputs, if you can train the right model to do what you want it to do, can be amazing. 

But you still need to put in time and effort to verify those outputs, right? You still want to remain the agent in control. And particularly with agentic stuff, you know, I work with people who are having agents do all kinds of tasks, and it becomes very much a black box, and you are not in control anymore. And that is pretty concerning.

So if you think about experimental design, or think about making music, or anything like this, right? There are thousands of small decisions that you make along the way that craft that paper, that craft that experiment: how you end up analyzing the data, or which variables you select, things like this. Each one of those decisions is your fingerprint on that work; that is the value you're bringing to it. And if you're no longer involved with that, you don't know what's going on, right? It's not really your work, in many ways.

And so when we break down tasks that we used to do in one way and now do with AI, we have to think very carefully about which elements of those tasks are the important decisions, the decision points that we should still be involved with. And it's the same thing for education, right? I mean, we want to teach skills like critical thinking.

So how do we maintain critical thinking in an age of AI, when you could outsource the whole thing? Which elements of learning are the most essential to critical thinking? And I would say, at least right now, part of that is engaging with AI, but in a meaningful way, where you're still involved. You're still reviewing the outputs, you're still carefully thinking about the outputs, you're still deciding.

Jeff Young:  

So is there an example of how you use AI, maybe even in producing this paper in some way, or in doing any part of the research? Or maybe it's not used for that, for you.

Steven Shaw: 

Sure. I mean, I bounce ideas off of AI all the time. I use a microphone, and I walk around and I have headphones on and I, you know, just talk. And so we just have conversations.

Jeff Young:  

You and Claude, or you and whoever? Your ChatGPT?

Steven Shaw: 

Yeah, ChatGPT. I'm mostly a ChatGPT user.

But we'll just have conversations about, well, what do you think about this, or what do you think about that? And you know, in work, people are busy, right? So meeting with people might not be that efficient, or you might not have that much access to meet and talk with people about different ideas. And so it can be a sparring partner for your own ideas sometimes.

And so bouncing ideas is great. And obviously, for data analysis and writing code, it's exceptional. But there are domains where I say I won't allow AI into my life. Like, I make music, for example, as a hobby, right? I'm not going to defer any of those music-creation tasks, because it'll make me lazy and not willing to engage. And actually, the way that I make music is the polar opposite: I try to make it as labor-intensive as possible, because I want every single one of those tiny fingerprints of my own in every single decision, about how far the microphone is from the amplifier, and what type of microphone is used, and exactly where the knobs are, and the tuning, because that is my style.

That's what makes my music my own, right?

Jeff Young:  

What kind of music?

Steven Shaw: 

Ah, like experimental rock, with some guitars and synthesizers and things, yeah.

Jeff Young:  

Oh my god, is there any I could play on the show?

Steven Shaw: 

Certainly not.

Jeff Young:  

Is it too rough for our listeners?

Steven Shaw: 

Oh no, it's good. I just don't even really share it publicly. It's kind of for me, and I share it with other people, but also it's not academically…

Jeff Young:  

That's funny. Even more so, it's not about productivity for you when you're making music.

Steven Shaw: 

Exactly, it's not about productivity. But, I mean, even using AI to try to be more productive, and we can be more productive, there are going to be consequences of that too, right? I'm sure you've heard or seen some of the work on AI "brain fry," right? Whether it's workers or an education setting, right?

If you're expected to do more tests now, or more assignments, as a result of having access to AI, it's going to incentivize a lot of people to cut corners, or basically replace themselves with AI in the process. Which, for some companies, may be totally fine, if you don't mind automating your workforce.

And the AI is doing a good job in certain domains, but in education, right? 

If you're now basically having students automate themselves out of the process, they're not going to be able to function or show that critical thinking or those learned skills later on, unless they always have access to AI, which is a possible future, actually.

But at least right now, there will definitely be instances where you don't have access to AI. So if you think about a consultant or something, or if you're preparing for a big meeting, and all of your preparation, all the work, was done by AI, and then you get into that meeting and you don't have access to immediate responses from ChatGPT, right? Your performance in the meeting will suffer.

Jeff Young:  

Yeah, yeah. I mean, do you worry about this technology kind of leading to the outsourcing of your own work? That's one of those things I think about in doing this podcast, myself included: all of us are knowledge workers as we try to understand the impact of this tool on knowledge workers.

Steven Shaw: 

I mean, the word surrender in cognitive surrender was chosen intentionally there, right? In order to surrender, there has to be something to surrender to. I mean, AI is so good at so many things. There's a reason why people follow it blindly: because it's right about so many things, all the time, right?

But there are things in life that don't have a right answer. And there are also many things in life when you don't have access to AI, and you need those frictions when you're learning, to be able to develop those skills. And if you don't have those frictions, or those challenges, or that shitty first draft, right, then you don't ever grow or learn how to improve from that.

Jeff Young:  

Do you hope this term, cognitive surrender, helps call people's attention to this need to sort of be human and not give too much away to the AI?

Steven Shaw: 

Absolutely. I mean, I think that's the ultimate impact. If it's a word that can help us understand what's happening to us or to others, then we can recognize it and, you know, move forward with trying to protect the areas of thought that are the most human and that we don't want to outsource.

But I think the reason why it's sort of caught on right now is because we can see cognitive surrender in ourselves and in others. 

And if we're seeing this now, and these technologies are still very new, this is sort of the dawn of the age of AI, right? We still have physical barriers between system one and system two and system three, right? We have to interact with our phones or with our computers, or by speaking.

And you know, those physical barriers will certainly come down, but system three, AI, artificial cognition, will remain in society through whatever the next physical medium is, whether it's these AI glasses with, you know, maybe pre-processed stimuli coming through the lens, or prompts on the inside of a lens that only you can see.

Jeff Young:  

I want to cut in and underline what he is saying here. AI is changing fast, and right now we have to use our phone or a laptop to get to AI, but pretty soon, there could be new platforms that might infuse AI even further into our daily lives, maybe through AI powered eyeglasses or a talking pin that we wear, or maybe a ring on our finger. All of these are under rapid development these days, and something could catch on.

Steven Shaw: 

And if we have even less friction between our minds and our external minds, you know, our cognitions and artificial cognitions, the option to engage in cognitive surrender will only get easier and easier. And when that option is available, some individuals will be enticed by it, or will choose to engage in it. And we need to think carefully about how we can preserve thought, fundamentally preserve thought in humans, when we want to do that.

Jeff Young:  

Yeah, thought and judgment. Well, you mentioned before that you're optimistic. Do you have any other tips for how to get there, as an educator? There's a lot of growing recognition, like you said. We see people surrender to AI and kind of get trapped into, oh, I'll just use it for this, and then regret it, or feel like, oh, I wish I had actually done that on my own. What is the way forward to help create an education system that can preserve this?

Steven Shaw: 

So I was teaching consumer psychology at Wharton this semester, and I think the biggest takeaway that I have recognized, and it's pretty simple, is just understanding how ubiquitous AI is in society, and particularly among young people, among students, right now.

And so if you're an educator and you're thinking, well, I'm not sure many students are using AI; some of them are, and some of them are not, I'm afraid you're wrong. Everyone is using AI for all sorts of things, and we need to remove this stigma. And that partially has to come from the top down, right? As an instructor, as a teacher, students look to you, and they build norms from you.

And so as an example, in my class, my students know that I'm an AI guy, right? And I tell them about cognitive surrender. And so, just for interest, when they submit assignments, I ask for an AI disclosure form. But I tell them it's totally fine to use AI. You can use AI; I just kind of want to know how you're using it, and it doesn't affect your grade in any way.

And about 95% of students, across all assignments, said they used AI for the assignment. So probably everybody was using AI. One of the options was, I had AI do the whole assignment, and they didn't check that one. But I also know that some of them probably did do that, right?

So that's reducing the stigma.

And for example, in another section of the same class, a different instructor, who doesn't necessarily talk about AI or isn't as open about AI as I am, asked the same kind of question. Except it was no longer that students had to submit an AI disclosure; it was just that, if they used AI, they should submit the disclosure. And he got about 10% saying that they used AI, right?

But that comes from having an openness and a culture of acceptance of AI in the classroom.

So we have to reduce the stigma and accept that this technology is here and here to stay. And once you've accepted that, and we can reduce that kind of stigma and accept it into our educational processes, then we can start working on developing evaluations and teaching methods that integrate AI. Whether that means some types of testing that ban AI, because you need to test out of sample, and so that forces students to practice and work and review with AI, knowing that on test day it won't be available to them.

And so they have to really review those outputs and learn at the same time. Or developing assignments, or internal systems, like closed systems for ChatGPT, or with different limits, that allow instructors to engage with their students more.

There are some early studies now showing that this sort of aided use of AI in the classroom can benefit students. Some colleagues of mine at Wharton are doing really cool work in high school mathematics, and what they show is that it's really unaided access to AI that harms learning outcomes.

So when students have access to AI while they're learning math or taking math tests, and there are no rules around it, no tutoring or help, they use it and engage in cognitive surrender, and they do very well as long as they have access to AI. But then when you take away AI later and test them on those learning outcomes, those math skills, they do even more poorly than the individuals who didn't have access to AI.

But now it seems that if you have the right kind of tutor, or you have individuals or mechanisms in the classroom that teach students how to use AI in beneficial ways and review outputs, sort of a more hands-on approach, then you may be able to get those boosts in performance when AI is there, and better learning outcomes later.

Jeff Young:  

Okay, so you basically see a way, a concrete way, to integrate it into teaching that does improve things, instead of surrender being a foregone conclusion. But there will be growing pains, serious growing pains.

Steven Shaw: 

Yeah. I mean, this technology is coming into our lives so quickly, right?

And these educational institutions that we're working with have been lecturing or teaching the classroom in the same way for a very long time, so these updates will be challenging, and there will be growing pains. But I'm definitely very optimistic about being able to use AI in educational settings to improve outcomes.

Jeff Young:  

Yeah, you mentioned some future work you're heading toward. Can you describe some of the other research you plan to do based on this going forward?

Steven Shaw: 

Yeah. So, I mean, the biggest question is, you know, fighting the rise of cognitive surrender. How can we develop and design LLM systems to increase checking, for example, when we want to increase checking? And what characteristics of individuals are going to make engaging in cognitive surrender more likely?

So we're doing something called a preregistered report, where we implement a bunch of behavioral interventions to try to basically reduce the likelihood of cognitive surrender in these CRT tasks. And that would be an interesting theoretical way to then, hopefully, broaden those interventions into, let's say, educational settings or medical settings, to say: here's a way that we can potentially reduce cognitive surrender when we want to.

Jeff Young:  

I am continually surprised by the people I meet doing this podcast and their reactions to these AI tools. I keep thinking of Steve Shaw holding a microphone and talking to his AI chatbot, using it as a sparring partner as he does this research and designs his experiments, even while these experiments are about the cognitive hazards of AI. I keep coming back to this idea that AI tools are not exactly the equivalent of calculators.

One thing I took away from the conversation is that maybe people should be talking more about the risks of AI and how big the stakes are in education, but that there's also a need to talk openly and frankly about how and when people are turning to these bots, and that having that open discussion is key if AI is ever going to improve education and boost learning. 

This conversation also reminds me of the very first episode of Learning Curve, back at the beginning of the academic year. That's the one where I talked to a scholar who is examining the unique ways that AI thinks and how that compares with how our human brains think. He also stressed that we really need to talk openly about the differences as we move into what he called hybrid intelligence. It's worth going back and checking out episode one of this podcast, if you missed it.

These are really the ongoing questions that are driving the show. 

Oh, and I'm sure you are all very smart listeners, and you solved that logic puzzle from the beginning of the episode. But let me give you the answer so you can check your work. Again, the question: a t-shirt and a hat cost $1.30 together, and the t-shirt costs $1 more than the hat. How much does the hat cost?

Steven Shaw: 

The correct answer is actually 15 cents. The hat's 15 cents makes the t-shirt $1.15, and the total $1.30.
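For anyone who wants to see that worked out, the algebra is short. Writing h for the hat's price in dollars:

\[
h + (h + 1.00) = 1.30 \;\Rightarrow\; 2h + 1.00 = 1.30 \;\Rightarrow\; h = 0.15.
\]

So the hat is 15 cents and the t-shirt $1.15.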

Jeff Young:  

Did you get it right? Did you use AI? As always, I would love to hear your reactions, or about your own approaches to adapting to AI. Send me your comments: jeff@learningcurve.fm.

This has been Learning Curve. On each episode, we dig into how AI is changing education. If you like the show, please take a minute to leave a rating or a review on Apple Podcasts or whatever platform that you use to listen. 

I know everyone always says that at the end of these podcasts, but I am making an end-of-the-season push to get more reviews. So it'd be huge if you could just navigate to the show page for Learning Curve on your podcast app and scroll down to where it says "Write a Review," or just click on the number of stars you think it's worth.

This episode was written and put together by me, Jeff Young, and you can find more about my work and about the show at learningcurve.fm. The music is by Komiku and Blue Dot Sessions, and the show art was generated by Midjourney.

We'll be back in two weeks. Until then, thank you for listening.