Is It Possible to Put Age Limits on AI Tools?

Last week the U.S. Senate Commerce Committee held a hearing about potential legislation banning kids under 13 from using social media. Australia has a new law keeping kids under 16 off the technology. What about new AI tools? Should regulations enforce age limits — and is that even possible given how embedded the tech is becoming?
Senate Commerce Committee Hearing, "Plugged Out: Examining the Impact of Technology on America’s Youth."
Sen. Ted Cruz:
Good morning. The Senate Commerce Committee will come to order.
Jeff Young:
Last week in Congress, they were talking about screen time and kids. The U.S. Senate Commerce Committee held a hearing titled "Plugged Out: Examining the Impact of Technology on America's Youth." It was led by Senator Ted Cruz, the Republican from Texas who chairs the committee.
Sen. Ted Cruz:
It's incredibly hard to be a kid right now. All the parents I know, myself included, are deeply concerned about all the time that kids spend glued to screens.
Jeff Young:
The hearing was unusual in this politically polarized time for how bipartisan it felt.
Sen. Ted Cruz:
Parents are fighting a constant battle to keep their children safe in a rapidly evolving digital world.
Jeff Young:
A key reason for the hearing was a new bipartisan bill that's moving through the Senate called the Kids Off Social Media Act, led by Senator Cruz and Senator Brian Schatz, a Democrat from Hawaii. On Capitol Hill, the bill is referred to by its acronym, KOSMA,
Sen. Ted Cruz:
Which would ban children under 13 from social media and would also prohibit algorithmic boosting for teenagers under 17. It would also get cell phones out of the classroom by blocking social media in most public schools.
Jeff Young:
A big centerpiece of this bill is to ban kids under 13 from social media. Technically, though, that is already the way things are. Most social media companies have policies today that say they don't allow kids under 13 to set up accounts. That's true on Facebook and Instagram and TikTok and lots of others, and it's true on new AI tools like ChatGPT. The reality, though, is that there's pretty minimal enforcement of these company rules, and there's pretty much nothing stopping a kid from lying about their age and setting up any account they want. But I think the bill is sending a message to tech companies that self-regulation, at least in this area, is just not working.
Sen. Ted Cruz:
I'm glad to see that this committee moved KOSMA out of committee by an overwhelming bipartisan vote. For each of the witnesses: do you agree that KOSMA lays the groundwork to protect children from excessive screen time?
Jean Twenge:
I think it does. I think it's meaningful progress. I would want to raise that minimum age for social media to 16, but it is a great, great start.
Jeff Young:
That's Jean Twenge, a professor of psychology at San Diego State University who studies generational differences in technology and wrote a book called iGen that explores how Generation Z is kind of the first to grow up with smartphones and social media. The next witness at the hearing to respond was Jared Cooney Horvath, a neuroscientist who focuses on education. He also pledged his support for the bill: "Absolutely, step one." But a big theme on this panel was that the age limit of 13 is actually way too low, and they would love to see regulation that goes much further.
Emily Cherkin:
I agree it's a start, and I would raise the minimum age to 18.
Jeff Young:
This is kind of a strange moment to have this Senate hearing about screen time, because in some ways it felt like talking about yesterday's technology. I mean, I guess it is evidence that we are at this moment where it seems like there's a growing consensus that social media and smartphones should be kept out of schools.
That has been a big national debate for the last year or more, triggered largely by the work of Jonathan Haidt, who wrote the bestseller "The Anxious Generation" documenting social media's harms to youth mental health and learning. In fact, Haidt is back in the news this month with new research. I just heard him interviewed on the popular tech podcast Hard Fork, and he pointed to fresh findings, some based on internal research out of Meta, which makes Facebook and Instagram, about how these tools can harm kids.
Yet, just as we have this move to regulate and restrict social media and smartphones in schools, there is simultaneously this big boom in AI. And as we talked about in a previous episode of Learning Curve, schools and colleges are starting to provide AI tools for free to students, in the name of making sure that they know how to use this new tech. In other words, AI is coming into schools.
So I wasn't surprised to hear AI come up pretty early on in this Senate hearing last week, even though the official title was about social media.
Sen. Maria Cantwell:
Do you all believe that Congress should act and do something on AI, because the acceleration of those trends are just going to quadruple in problems?
Jeff Young:
That was Senator Maria Cantwell, the committee's top Democrat,
Jared Cooney Horvath:
The data that has come in over the last three years, K through 12 specifically, take adults out of the equation: ChatGPT is not good for learning. All I care about is the learning stuff, and cognition goes down whenever we touch it. So, maybe a good tool for adults, but not for kids.
Jeff Young:
Hello and welcome to Learning Curve, a look at how education is adapting to the rise of generative AI. I'm Jeff Young.
On this episode, we are talking about the idea of putting age limits on AI, whether that should be regulated, and if it even could work as a practical matter. Also, what can we learn from the rise of social media and smartphones when it comes to this new generative AI era? I ended up talking with three different experts with different perspectives on this issue.
First, I set up a Zoom call with one of the experts who testified at that hearing, Emily Cherkin, who taught middle school English for more than a decade and is now an adjunct professor at the University of Washington. She also does a lot of consulting related to screen time issues, and she shared that her own experience as a parent got her into this issue.
Emily Cherkin:
Yeah, and, I mean, in a lot of ways, it was my son, who's now a senior in high school, entering kindergarten, that really started my interest in this topic. And you know, really, it was a simple fact: he came home on the first day of kindergarten with a piece of paper that said I had to teach him how to do Control-Alt-Delete so he could log into the school computer to take the standardized test. And I was like, he doesn't even know how to write his name, but I'm supposed to teach him Control-Alt-Delete?
Like, it doesn't make any sense. So, yeah, that was a sort of eye-opening moment.
Jeff Young:
She is nostalgic for the time before social media and even modern day smartphones, when the way kids experienced computers at school was in a computer lab, usually during set times when they learned how to use the devices.
Emily Cherkin:
I always say, I'm not anti-tech, I'm tech-intentional. I believe, again, like I said, in tech ed; we need that more than ever. Bring back computer labs, bring back computer teachers, bring back computer science educators who can teach high-level computer science skills in high school. That absolutely should be happening, and it isn't. For the most part, we instead are giving kindergarteners iPads to learn how to read. That's ed tech, that's not tech ed. And so to me, not only is that just bad, in my opinion, it's not effective, it's not safe. Kids are watching YouTube videos and accessing porn on school devices.
Jeff Young:
I talked with Emily early last week, before she had gone to Washington to testify, and I asked her what they had wanted her to cover in her five minutes of testimony, before they asked questions of the panelists.
Emily Cherkin:
They asked me to, in five minutes, summarize the impact of technology on childhood. And I was like, ah, okay, so I know right now I'm gonna miss a lot, but I'm gonna do the best I can to paint a picture. That's what they asked: just paint a broad picture of what is happening right now.
Because I think, again, unless you're a current parent or a classroom teacher, you really have no idea. And by current parent, I mean elementary school, even, like, K-8. It has changed so dramatically and so fast in such a short period of time. Actually, I say this in my testimony, but the anecdote I use is from a 15-year-old who told me this last year. She's like, I worry about the younger kids. And first of all, I'm thinking, this is a teenager worrying about younger children, like, just a 10-year difference. She's terrified, because she said that she works for this after-school drama class, and she said to the elementary-school-aged kids, well, let's pretend we're flying. And the little kids said, 'How?'
Jeff Young:
She did end up sharing that detail in her testimony. In fact, here is a short clip of the rest of what she told the senators,
Emily Cherkin:
If children can't pretend to fly, they cannot imagine and therefore cannot innovate. Creativity means having an original thought. Technology access in childhood does not enhance creativity. It kills it, and this threatens the future of entrepreneurship in America. Remember, today's tech giants had analog, play-based childhoods.
Finally, the enmeshment of technology in childhood is creating a crisis for our democracy. Thomas Jefferson said an informed citizenry is at the heart of a dynamic democracy. When children spend hours being fed algorithmically driven rage-bait content designed to increase engagement, they lose the ability to form their own opinions, detect bias, think critically, and be informed citizens.
All of this is by design, of course. Technology companies build their products to hook and hold our attention, and children's brains are especially vulnerable. To make their products safer, tech companies would have to compromise profits, and they don't want to. As a result, the business model of big tech and ed tech is fundamentally at odds with child development, and its intrusion into family life undermines the choices parents can make.
This is not a kid problem. It is an adult problem that is impacting children, but parents especially need help. Parents might delay access to smartphones and social media, only for their children to view TikTok videos on a friend's phone. Parents can block YouTube at home, only for their children to get direct access on school laptops. Senators, I invite you to think about your own childhoods: the teachers who inspired you, the awkward social moments, triumphing over a difficult high school essay, making or not making the basketball team.
We remember these moments because in the discomfort, we learned something. Friction is the learning process. When we seek benefits from the convenience of technology, we forget the benefits of struggle. We've reached a moment that demands we slow down and build things, even when the tech industry insists on the opposite. Legislation like the Kids Off Social Media Act is a start, and I believe we can go farther. Just as we have done with regulating alcohol and tobacco, we can do so with technology too. Parents are not naive. We know that our children will use technology for work and life in adulthood. We just want to ensure they have a childhood first.
Thank you.
Jeff Young:
For Emily Cherkin, the biggest challenge for any move to put an age limit on things like social media or AI is how intertwined it all is, and how, even as schools ban smartphones, kids can still get to parts of the internet that have the same kinds of negative effects.
Emily Cherkin:
To me, it's what I call the enmeshment of ed tech, or the ed tech enmeshment problem.
And I wrote about this for the Institute for Family Studies, which published my essay about it. It's not that AI is its own separate thing; it's just been embedded within existing ed tech tools.
So it's just like you and I going to Google, and it's got the Gemini thing in the corner, and, you know, I didn't ask for it. I don't want to use it. Or even if I do a Google search, I get the AI summary I don't want. Sure, I think we've all gotten those. Yeah, that's what I see today.
Someone in one of my groups posted, you know, an AI Google summary. She looked up the word dogmatic, and it said it was a merging of the words dog and automatic, which isn't true, you know? And so again, it's just mind-boggling to me that we would give these tools to children who lack the ability to know that that is not true, right? They have to know what's true to know what's not true. And so again, you know, I go back to the computer science, the high-level, advanced technology courses that should be offered in high school, maybe even middle school.
Again, not to use to do their schoolwork, but to understand what artificial intelligence is and means and what it isn't right. Like, not just what it is, it's what it isn't.
Jeff Young:
And to be able to catch that mistake with dogmatic, or to not just trust everything it throws at you.
Emily Cherkin:
Or to know that the data sets it builds its answers on, yeah, might be biased themselves or built on plagiarized content, you know, of other artists. Like, those are important things to know, because that's a moral decision then, too, right, as a user. And again, as a child, when you're developing your identity and your sense of self and what your values are, to be able to make that choice.
Jeff Young:
It turns out, the U.S. is definitely not the first to propose regulations to put age limits on social media. Last month, a new law went into effect in Australia that bans kids under 16 from using social media services including TikTok, X, Facebook, Instagram, YouTube, Snapchat, and Threads. And in New Zealand, lawmakers are currently debating a similar ban. So I wanted to talk with an expert on the other side of the world who was closely watching these laws and doing his own research on the issue, and who actually had a little bit different take.
So I stayed up late my time last week to connect with Alex Beattie, a senior lecturer in information management at Victoria University of Wellington, that's in New Zealand.
Alex Beattie:
So I've researched, and am researching, how people disconnect from the internet, how people manage their screen time. And what I've found through my research, through talking to people, through doing surveys, through asking people to undergo a digital detox: it's really hard. And that's why I'm cautious and call for concern around age restrictions, because it's not easy just to stop using social media. There needs to be support put around it.
You know, there's a reason that we call it social media. They are social tools, and when you go off social media, it can be quite isolating. You often need to replace social media with something else.
And we have this kind of too basic or too simple an understanding of what social media is, particularly for young people. So my perspective is as someone who's tried disconnecting, who often likes to do it, but it needs to be done carefully, rather than just suddenly taking it away from people.
And this is my problem with the kind of framing of social media as a health product or as food. It's a very overly simplified metaphor. Social media is more like infrastructure. It's replaced the phone book, and when you take away the phone book from people, they lose the ability to get in touch with others or do things. So my interest has been kind of trying to broaden the conversation around what social media is beyond health or food.
Jeff Young:
That's so interesting. And so it sounds like there's a debate over a proposed law in New Zealand, some public discourse over setting a more stringent boundary on teen social media access, around 16. Is that right? Or maybe you could just describe first the measure that's being debated?
Alex Beattie:
There are a few things. So Australia is the first country to enact age restrictions for social media; that came into force in December last year. And in New Zealand, we are currently debating it. It was a proposal from a member of our center-right party, the National Party. It's got broad support from across the party lines, but its momentum has sort of halted, because our libertarian party has some concerns around infringement on digital expression, on freedom of speech for young people, and they're part of the coalition. So it's sort of lost its political momentum. But we know that it's a very popular policy with parents, with some young people, and that's why I think it was introduced. And we're going into our election year this year, so I think there's going to be renewed interest. And everyone is really looking at Australia to see: has it had any effect? Is it positive? And at this stage, it's really mixed. Maybe it's having little to no effect at all, but there's certainly high interest in this country, particularly from parents and people who can remember what it was like before social media.
Jeff Young:
And by the way, you mentioned this just went into effect in Australia very recently, a few weeks ago. So are you hearing... I guess it's too early to say, it sounds like, as far as what impact it's having in Australia.
Alex Beattie:
I think so. I think it's still very early days. And what I've heard is, you know, it's one of these policies that's aspirational. Everyone wants to reduce their screen time, whether you're under 16 or well over 16, but it's: how do you enforce it? How do you enforce it from a self point of view, you know? How do you keep the chocolate away from yourself? How do you enforce it from a policy point of view? That's always been the big issue. How do you assess IDs? How do you assess age and verify age authentically and accurately? That's been one of the big problems, and it's still early to tell, but apparently young people are getting around the age verification very easily. So it's how do you do it, which is the real crux of the issue.
Jeff Young:
So you're saying, even if you say there's going to be attempts made, it's hard to even pull it off. It sounds like
Alex Beattie:
That's what I've been told. You know, the pros and the cons of the internet is that no one knows you're a dog. That's what they used to say, right, back in the nineties.
Jeff Young:
And I thought that was a New Yorker cartoon, as I recall. Like, that's right, I'll pull it up, yeah.
Alex Beattie:
And I still think there's some truth to that. It's hard to verify the person behind the screen. You know, one of the weaknesses of AI technology is visual accuracy, and there are still some real issues with that. You know, if you think about different ethnicities, different races, people age differently, and it's hard to tell how old someone who doesn't look like you is. So the technology might be able to accurately assess one ethnicity, but another ethnicity quite poorly. So that brings in racial bias, ethnic bias, all those types of issues. The Australian law was really rushed in, there's no doubt about that, and we're still absolutely waiting to see how effective it is.
Jeff Young:
So it sounds like a version of this debate is coming to the US. But all of this talk about social media has also made me realize: the topic that I focus most on, on this podcast, is AI, but it seems like a lot of these tools have some of the similar issues, like ChatGPT and others that have an age limit, similarly, like 13. And it made me wonder, what can we learn from this screen time and social media issue when it comes to AI, which is rushing in even faster? Adoption is even faster than social media adoption, which was pretty fast. And so I'm wondering your thoughts on whether, I mean, do you imagine a time where people might use the same logic around AI tools, as we kind of try to protect people from things that have these addictive qualities? There's been a lot of coverage of AI friends and other things that are having some harmful effects toward young people already, from what it seems.
Alex Beattie:
Yeah, I think there's a number of things that you can talk about. You know, if social media has taught us one thing, it's that if you wait to regulate the industry, it's already too late. There's a reason why Facebook and Google have become, you know, billion-dollar platforms in such a short amount of time: the data we provide to these platforms is such a gold mine. AI can really exacerbate, speed up, those processes, because they're very alluring, they're so useful, they're so convenient. And convenience is often what drives the use of a lot of these technologies. So in some ways, AI is going to, I think, speed up a lot of the issues with these giant tech companies. But at the same time, what's interesting, what's perhaps happened since the apex or peak of social media, and I do think we have perhaps passed the peak of social media, is you have the emergence of these kinds of anti-AI technologies, or anti-social-media technologies. I, for example, currently use a dumb phone.
Jeff Young:
Wow, you're holding up something that looks like a Palm Pilot or something.
Alex Beattie:
This is a Polish phone called Mudita, which has an E Ink interface. Its whole appeal is that by using it, choosing it, you can opt out from the attention economy, the now-AI economy; you're not giving away your data. You can still carry out your day-to-day life, but it costs money. I've had to enter a different kind of market to kind of avoid the larger one, and so I can see sort of emerging inequalities arising here; it costs a bit of money. And it's funny, we heard stories not so long ago about the very people in the tech industry who were making addictive algorithms and interfaces sending their kids to tech-free schools, proving the old adage: don't get high on your own supply. So there's a bunch of products now that you can use that can kind of enable you to opt out. And I think that is an interesting question: can you opt out of AI? And the answer for many people is no. But if you pay for it, you can, and that's something that's changed, I think, since social media: the tech industry grows, and it's catering to new markets, even the people that want to resist the products. They get their own category of products now.
Jeff Young:
So there's a consumer, sort of capitalist, thing that just goes in there and offers another category. Interesting.
Alex Beattie:
That's right, you know, they sort of call it the commodification of resistance. If you want to resist Facebook, if you want to resist Google, you know, buy the product that helps you resist them. So it's sort of a symptom of late capitalism, in a way: there is always a market to service, and it's a bit depressing when you think of it that way.
But at least it enables you to opt out of some of these, you know, pervasive and persuasive systems that can really get to you. So I think that's something that's changed, and I'd like to hope that there is more public funding into those technologies, so schools, libraries, you know, informational spaces and educational spaces have access to technologies, and don't just uncritically adopt these technologies, but adopt an alternative.
Jeff Young:
Maybe offer that alternative as one of the things to check out, so to speak.
Alex Beattie:
To try, yeah. Because the question, I think, that is interesting to ask, and perhaps what we've learned from social media, is: is it the technology that is inherently problematic, or is it the engine of capital that drives it that's the problem? And depending on who you ask, they'll come up with a different answer. And of course, you know, we need investment in technologies for them to be used, but it's often the underlying for-profit motive that's driving addictive design, that's driving some of these really problematic aspects.
Jeff Young:
So, just to be sure I'm understanding: it's the business model that's the issue with social media, and now AI, but many technologies, yeah.
Alex Beattie:
That's right. And so it's always helpful to ask, and this is something that I put to my students: what is the public model of AI? What is the public model of social media? You know, is it like a Wikipedia, where everyone can come into the public sphere and have input? And it's important to remember that there are alternatives to, you know, the ChatGPTs, the Facebooks, etc.
Jeff Young:
So is there another way to think about it? You alluded to this earlier, but, you know, if AI kind of comes to a point where people are looking to add more restrictions to it, more seriously, as we're doing with social media... It sounds like you have some questions about even the approach of having age limits be the way, not just the logistics of it. It sounds like you're raising the question of whether that's the right approach. Could you say more about that?
Alex Beattie:
Yeah, well, so I've had some issues with age restrictions, because, to me, it doesn't regulate the industry. It regulates young people and asks young people to change, rather than the industry. And I know that's a kind of black-and-white way of putting it, but it's kind of moralizing when you tell young people what's a good way of using social media and what's a bad way, when, you know, there's so many different ways to use something and there's so many different types of people and brains. And I think it can be very kind of moralizing to say this is good and this is bad, and that's kind of one of my main concerns with age restrictions. With AI, like social media, I think we're at risk of stretching the term so broadly it kind of loses meaning. You know, with social media, for example, so many different platforms fall into that category, whether it's Instagram, Snapchat... Yeah, I've heard about all these other examples, and I know I'm getting old when I can't keep up with them anymore, but I feel that too. It's not that helpful to put them all in the same category, because I don't think that's the way that young people talk. They don't really use the term social media; that's, I think, an older person's term to try to make sense of what young people are doing. They will refer to the specific platform. And one of the issues that I have with the bill is that it doesn't make any attempt to say, well, you know, maybe we should wait till young people are 12 before they can use Instagram, but when they're younger, maybe Roblox or something else, which is more of a playful environment where you can be social, is fine. I think these forms of regulation need to get into the weeds a little bit more, because the technologies are not the same. And I think it's also similar for AI. You know, AI could be an incredibly powerful tool for learning and for development, because it's a partner, right?
Like, AI is really a tool that you can talk to, you can engage with, you can do all that kind of stuff. There's obviously some incredible opportunities there. But depending on what kind of context, what domains, it really depends. So when we talk about it in general, I think we lose the nuance, and we really need the nuance.
Jeff Young:
And that's the other thing that seems to be the case with AI already, and really, you made the point, like social media as well. It's like, where does the definition even go? Right now, Snapchat is one way you can get to, like, ChatGPT. They're all these things, you know? So AI is not just ChatGPT; there's so many tools that have these chat bots built in, and they seem to grow every day. And it's probably only going to blur further, probably with both social media, but definitely with AI, it feels like, even more.
Alex Beattie:
Exactly. So, you know, maybe when you're doing math, having an AI chat bot while you're doing your math could be a really beneficial thing. But if you're exploring mental health and having a chat bot there, there's obviously a lot more risk, you know. They're not the same. So this is where I think the regulators need to speak to all the experts and understand all the different risks, because there's obviously greater risk for, say, mental health than there is for math, and that's why I think we need to be a bit more specific.
Jeff Young:
After the break, a closer look at the existing age limits that companies put on AI tools and other ways that these providers could check if a user is a kid or an adult. Stay with us.
Jeff Young:
Hey, everybody. I wanted to tell you about another podcast that I think you'll enjoy: College Matters from the Chronicle. College Matters is a weekly show from the Chronicle of Higher Education, and it's a great resource for news and analysis about colleges and universities. The host, Jack Stripling, is my former colleague at the Chronicle who has covered and investigated higher ed for two decades. Jack really knows his stuff, and I think you'll get a kick out of his conversations with reporters and newsmakers. So check out College Matters on Apple Podcasts, Spotify, or wherever you get your podcasts. Now back to the episode.
Jeff Young:
So what can parents and educators do today if they want to try to limit a child's access to the latest AI tools? To get a better sense of that, I checked in with Robbie Torney, the head of AI and digital assessments at Common Sense Media. Common Sense Media is the nonprofit that tries to help parents navigate technology and media, and lately they have been putting out reports on things like AI tutoring tools.
Robbie Torney:
Many platforms state that they have these 13-plus limits, but they don't meaningfully enforce them. The enforcement has really, for a long time, been check a box saying you're over 18, or put in a birth date and say that you're over 18. And the burden has really been on kids and parents to comply with these regulations, and that, as of today, is still very much the status quo. I can, in a moment, run through the major AI companies and sort of how they treat these limits. But you know, the issue is that without age assurance, without being able to know how old users are on your platforms, any protections that you have for minors are theoretical. You can't apply age-appropriate safeguards if you don't know who's a child on your platform.
Jeff Young:
And so that's the status quo, that these tech giants aren't checking the age, period? Yes, that is the status quo. Okay. And, actually, you mentioned AI. So now we have a brand-new set of tools that are out in the last couple of years, these generative AI tools, and they're popping up daily, as you see. And some of them are, you know, related to education. Some of them, people are trying to keep from students in certain contexts, because they could be counterproductive to learning. But there are these new sets of tools that there's this concern over when it comes to schools. So where are we in terms of voluntary age restrictions from the companies, like how old you have to be to use ChatGPT? Or, like you said, you could go down the list of these new tools that are out and first on the scene, very popular in the country and worldwide. So where are we, in terms of who can use these, as far as age?
Robbie Torney:
Totally. So the first thing to know is that, pragmatically, realistically, the majority of teens are using generative AI tools already. This is something that our research at Common Sense Media has shown over and over again since the ChatGPT moment. Our survey found that over 70% of teens are already using generative AI, and they're using it for lots of different purposes. These are multi-use tools, so they're using them for schoolwork, which is how a lot of people think about this, but they're also using them for relationship advice, personal support, and entertainment when they're bored. They're using them for lots of different things, and I think that's one of the riskier aspects of this that I can talk about a little later on. Right now, though, teens are supposed to, across these platforms, be over 13 years old to access them. For ChatGPT, for example, the terms of service say that you have to be over 13 and you have to have a parent's permission to be using the service, but that's self-reported at this point in time. Google... go ahead.
Jeff Young:
Oh, I was gonna say, so with ChatGPT, isn't it the case that if you're a user under 13, you're supposed to submit a letter from your parent? I wonder if anyone has ever done that to get an account. Or is that the status quo?
Robbie Torney:
I think the terms of service that I looked at this morning simply state that you must have parental permission. They don't say the mechanism by which that parental permission has to be obtained, and the company isn't verifying it. Like most terms of service, the burden is on the user to say, yes, I'm over 13, and I have my parent's permission. That is also the case for Meta AI, their chatbots, which are integrated into Instagram and WhatsApp and Facebook, not that any teens are on Facebook these days. They also say that you have to be over 13 and you have to have a parent's permission. Google is a little bit different. There is an age associated with each Google account.
If you have a Google account where you say that you are over 13, you can use Gemini without a parent's permission. In the school setting, there's Gemini in what's called Google Workspace for Education, where, if your school has turned that on, you could theoretically use it under 18, as long as your school has given you access. And then there's a separate access pathway called Family Link that allows parents to give their kids access to Gemini if they're under 13.
But again, if you're able to set up a Google account and you're not honest about your age in the first place, you could be using adult Gemini. Grok says you have to be over 13, but they require very minimal self-reporting. And Anthropic is really the outlier in this space. Their terms of service state that Claude is for users 18 and over only. This is tied to account creation, though, so if you're not honest about your birth date, you might still have kids using Claude. Now, I will say there is some movement in this space. For example, Anthropic's Claude will start to pay attention to and estimate signals that might indicate that a user is a minor. That could include the time of day they're accessing the service, the types of devices they're accessing it from, the types of things they're talking about. And if Claude actually detects that you're under 18, or thinks it does, it'll actually start to offboard you from the service so that you can no longer use that account.
Jeff Young:
Wow. So it's looking for that, and you're seeing evidence of this?
Robbie Torney:
Yes, we've seen the offboarding flow in testing. OpenAI has announced that they're using age estimation technology, using different signals, probably very similar signals, to determine whether a user might be a teenager. And I think, you know, Jeff, going back to your original questions about social media: social media companies have had these signals about users' ages for a very long time, and they've used them to personalize content and to target advertising. And there's been a lot of pushback along the lines of, hey, we can't really ask people for ID, we can't really check people's ages, because it's going to be privacy-invasive. What we're seeing here is the application of tech that has existed for a really long time, that hasn't been particularly privacy-invasive, and that can be used to keep users safe and serve them the right version of the AI chatbot, one that might be safer or have more protections or more limits on it. So in addition to this idea that companies do have the capacity to identify who their users are, we're also pushing companies to assume that users are children by default, unless they know that they're adults, so that they can have the most protective experience possible.
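The approach Torney describes, combining weak behavioral signals and defaulting everyone to the protective experience, can be sketched in a few lines of Python. This is a toy illustration, not any company's actual system: the signal names, weights, topic hints, and the 0.5 threshold are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    hour_of_day: int   # local hour the session started (0-23)
    device_type: str   # e.g. "school_chromebook", "desktop", "phone"
    topics: list[str]  # coarse topic labels for the conversation

# Hypothetical topic hints; a real system would learn these from data.
MINOR_TOPIC_HINTS = {"homework", "middle_school", "prom", "parents_rules"}

def minor_likelihood(signals: SessionSignals) -> float:
    """Combine weak signals into a rough 0-1 score that the user is a minor."""
    score = 0.0
    if 8 <= signals.hour_of_day <= 15:              # school-day hours
        score += 0.3
    if signals.device_type == "school_chromebook":  # school-managed device
        score += 0.4
    overlap = MINOR_TOPIC_HINTS & set(signals.topics)
    score += min(0.3, 0.15 * len(overlap))          # cap the topic contribution
    return min(score, 1.0)

def route_user(signals: SessionSignals, verified_adult: bool) -> str:
    """'Assume child by default': serve the protective tier unless we know otherwise."""
    if verified_adult:
        return "adult_experience"
    if minor_likelihood(signals) >= 0.5:
        return "start_offboarding"  # analogous to the offboarding flow described above
    return "protective_default"
```

The key design choice mirrors what Torney argues for: an unverified user never gets the adult experience, and the estimation score only moves them toward more restriction, never less.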
Jeff Young:
So instead of assuming everyone's telling the truth when they say, yeah, I'm 18, it's: okay, you don't know anything about these users at the beginning, so you need to work at figuring out which version of the service is for them.
Robbie Torney:
Who are they? Yes. And I think we like to use the term age assurance at Common Sense Media, because sometimes when people think of age verification, they think of non-privacy-preserving techniques. We have all seen services get hacked or leak a bunch of IDs, and, to be clear, we don't think it's a good idea for kids to have their Social Security numbers or other identification documents or their school IDs uploaded, necessarily. But there are layered approaches that can be really helpful in identifying how old kids are. One example: in California, there was a bill that was just passed last session, AB 1043, that's going to take effect next January, January 1, 2027. What it requires operating system providers to do, so this is Google and Apple, is to collect your birth date or your age during setup. So if you're a parent and you're setting up your kid's phone, you would say, my kid is 12, this is their birth date. That date would be stored as an age bracket signal on the device, and then it would be provided to developers to make sure that kids are being served appropriate content, downloading the right apps, visiting the right websites.
Jeff Young:
So in other words, if somebody's 14, their phone would know that definitively in this way at setup. And then if they tried to log into TikTok or ChatGPT and said, I'm definitely 18, it would be like, no, no, you're not.
Robbie Torney:
Yeah, the phone would actually know that they are in a bracket. So maybe they're in a 13-to-15 age range, or a 13-to-14 age range, and this allows apps to know general age ranges without specific birth dates. Developers must request the signal when an app is downloaded or when an app is launched. So in the use case you just described, I open up TikTok, and it must request that age bracket signal to verify that the user is of age, and then use that information appropriately to either allow them access to the service, deny them access, or provide them with the right version of the service, depending on how the company has that set up. I think that, plus something like the age estimation signals we were talking about before, could go a really long way in building a layered approach. Sometimes people say, well, kids are going to bypass that, it's easy for them to get around it. And it's true, there will probably always be some ways to circumvent age estimation or age assurance technologies. But for most kids, it's going to be much more protective to have these systems in place than to have the status quo, which is the honor system.
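The AB 1043-style flow Torney describes, an OS-level age bracket signal set at device setup that apps must request at launch, can be sketched like this. The bracket boundaries, names, and return values here are illustrative assumptions for the example, not the actual API that Google or Apple will ship.

```python
from enum import Enum

class AgeBracket(Enum):
    """Hypothetical brackets; the OS stores only the bracket, not the birth date."""
    UNDER_13 = "under_13"
    AGE_13_15 = "13_15"
    AGE_16_17 = "16_17"
    ADULT = "18_plus"

def bracket_from_age(age: int) -> AgeBracket:
    """At device setup, a parent-supplied age is reduced to a coarse bracket."""
    if age < 13:
        return AgeBracket.UNDER_13
    if age <= 15:
        return AgeBracket.AGE_13_15
    if age <= 17:
        return AgeBracket.AGE_16_17
    return AgeBracket.ADULT

def app_launch(os_bracket: AgeBracket, self_reported_age: int) -> str:
    """On launch the app requests the OS signal, which overrides self-reporting."""
    if os_bracket is AgeBracket.UNDER_13:
        return "deny_access"
    if os_bracket in (AgeBracket.AGE_13_15, AgeBracket.AGE_16_17):
        return "teen_version"  # a self-reported "I'm 18" is ignored here
    return "full_version"
```

This captures the privacy trade-off in the segment: the app learns only a range, never the birth date, yet a 14-year-old claiming to be 18 still gets routed to the teen version.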
Yeah, I think there's been broad recognition from the public, from lawmakers, from advocacy groups, and, to some extent, from the tech industry, that we don't want to repeat the harms of the social media era. It took a long time to get from the launch of social media to some of the first lawsuits related to social media harms, to the general outcry, to the status quo of today: the broad recognition that social media apps and platforms may not be healthy for young people in various ways.
There are benefits as well, of course, that some young people report. ChatGPT came out a little over three years ago, and already you have a moment where the big companies are talking about age assurance and building more child-protective models to serve to young users. If you log on to any of Google's surfaces, for example, there is a teen mode and a mode for under-13 users. OpenAI has a mode that's more protective for teens. We're seeing the momentum in those directions.
Jeff Young:
So what does Robbie from Common Sense Media see as the lessons from social media as we rush into AI?
Robbie Torney:
I think one of the big lessons from the last round was that you can't just put band-aids on a product that's built for adults and expect it to function well for kids. Kids have to be thought of from the very beginning in the design and deployment of these products. And where we've actually seen the biggest harms already in the AI era, those have largely stemmed from not factoring in the developmental risks and characteristics of young people. One of the areas that has been very clear in our research is the use of AI for companionship: for emotional support, for mental health advice, or for actually simulating relationships.
Jeff Young:
That's the AI friends issue and trend that we did an episode about here on Learning Curve a couple of episodes ago.

Robbie Torney:
Yeah, totally. So we've seen lots of examples of real teens who have, tragically, taken their own lives or hurt themselves or others as a result of these relationships with AI chatbots. And some of the design features, the always-on availability, the constant affirmations, the failure to understand the real-world context of the advice being given, those are all things that could be dangerous for users in general.
We've seen adults be harmed as well, but they're particularly dangerous for teens at the developmental stage that they are in. We saw that with social media, and we saw companies start to try to move away a little bit from some of those design features. I think there's a lot more work to do to make sure that AI is safe by design for young people, but I think we are a little bit further along in the conversation, at least with the broad acknowledgment that safety features need to be present, effective, and functioning so that these products can be used for any beneficial purpose at all.
But as long as your math tutor is giving you relationship advice or being your best friend, it's really difficult to have conversations about that version of the technology being a good fit for a middle school, or a good fit for a high school, or a good fit for schools at all.
Jeff Young:
How optimistic are you that we can get there? That companies will either voluntarily do it, or there might be regulation, or a mix of both, to get to a better age assurance system, one that, based on your research, would be more effective and more protective for young people?
Robbie Torney:
I have some measured optimism that we're going to be able to get there. Just last Friday, for example, we announced that we have secured OpenAI's support for a ballot initiative that would go a long way toward making AI safer for kids, including requiring the use of age assurance and age estimation technology to make sure that kids are having developmentally appropriate experiences on these platforms. The measure we introduced would also bar companies from deploying manipulative design features, like allowing kids to think that AI is their friend, or allowing kids to engage in relationships with AI chatbots, and it includes parental controls and privacy protections and a range of other things. I guess I want to say that this is going to require a comprehensive approach. It's not just about age assurance, but age assurance is the foundation; it is the basis on which all other protections can be built. And I do think there is recognition in the industry that this is necessary, if for no other reason than that these companies are being sued, rightly so, by parents of kids who have experienced harm on these platforms. If they're going to stop being sued for causing harm to young people, they need to have the tools at their fingertips to better support the millions and millions of teens that we know are on these platforms already.
Jeff Young:
What about the use of AI in schools? As I mentioned, many tools already in schools are adding AI features. Google and Microsoft, I believe, are both adding AI features to the suites of tools they've already contracted with schools and colleges on. On this podcast I focus a lot on high school and college, largely because I thought that's where the applications were, but I'm hearing more and more about AI tools aimed at younger kids in school. So when a school adopts a product that becomes part of the tool set given to students, what are your thoughts on how to make sure there's age assurance? How do you make sure the kinds of AI features you're concerned about in these other products don't end up being a side door, with younger kids getting them officially through school-sanctioned technology?
Robbie Torney:
We've also been looking at student use of AI in schools, and one of the products we've done an evaluation on is the Google Gemini K-12 chatbot. This is the version of the Gemini chatbot that's released as part of the package that schools can use. Schools have a lot of flexibility to turn this on and off, and they can designate users as over 18 or under 18, but that's it. That's the granularity of the content controls. And I think that's where we see some of the risks start to come in. Even over the course of a high school career, 14-year-olds and 18-year-olds are in very different developmental spaces, and because the Gemini chatbot isn't able to distinguish between ages and tailor its responses to be developmentally appropriate, you get a lot of potentially unsuitable content coming through for all age groups, on topics like sex and sexual health, puberty, religion, and relationships with parents. And if you just contemplate the idea that a kindergartner and a 12th grader could get the exact same responses from the Gemini chatbot, it starts to be a lot harder to imagine kids using this. Additionally, there are some tools for learning in the Google suite, like one called Guided Learning that's supposed to be more question-based and to guide you to your answers. But again, Guided Learning will still do your homework for you when you ask it to, and it also lets you talk about stuff that's off topic, so you have to choose how to use it. I guess what I'm saying is, there are glimmers of things in the product that we could see down the line might be really beneficial for kids. There's a feature called Gems, for example, where teachers can make a version of Gemini with custom instructions and assign it to their class. That's really cool, and there are some really amazing opportunities associated with that. But if kids can just click out of the Gem and then interact with the regular Gemini chatbot, it becomes a lot harder to imagine that being extremely helpful in most school contexts.
Jeff Young:
Just to underline that point: Google and other AI makers have rolled out plenty of education-focused tools that are designed for kids, but the full versions are usually just a click away for any student. So how can educators keep their students in the AI systems that were designed for them? The Internet has historically been a pretty radically open system, like that famous New Yorker cartoon I mentioned earlier: on the Internet, nobody knows you're a dog. And the AI tools popping up these days are so far made in the kind of Internet spirit we're used to, some of them by the very same companies that built the earlier Internet tools, like Google. Creating an online world that requires age verification would mean a shift, one with privacy implications as well as a lot of logistical considerations. But if AI tools develop without ways to keep young people from using them, will experts a decade from now be talking about a whole new set of mental health challenges and learning losses, and pushing for a total ban on AI for anyone under 18? Now is the time to be having these discussions about potential age limits, and to be learning from the social media policies being tried in places like Australia. At that Senate hearing last week, the legislators were clearly hungry for guidance, and most of them shared their own personal frustrations as parents. There did seem to be a big appetite for doing something besides just letting tech companies proceed without any guardrails for young people.
Sen. Ted Cruz:
You all answered a lot of questions for a lot of my colleagues, so thank you so much for your attention to this. I'm sure we'll have more follow-up. There's so much here, and I hope you all take away, and the American people take away, that there is real bipartisan focus and interest on this, and I think a lot of urgency as well on these matters.
Jeff Young:
This has been Learning Curve.
As always, we welcome your views and your stories about AI and education. Please send them my way, to jeff@learningcurve.fm. This episode was written and put together by me, Jeff Young. Music this episode is by Komiku, and the show art this time is a photo from the hearing, altered with a prompt on Midjourney.
If you don't already, I hope you will follow Learning Curve on your favorite podcast app, and please tell a friend about the show, or if you're 13 or over, please share about it on social media.
I'm excited to say that Learning Curve was featured last week in the Find That Pod newsletter. If you heard about the show there, welcome. We'll be back in a couple of weeks.
Thank you so much for listening.