Learning Curve

How to Prepare Students for a World of AI Co-Workers

Episode Summary

As companies start to replace employees with AI agents, how are human workers adjusting? For this episode Jeff connected with Evan Ratliff, who created what he calls “the world’s first AI-led startup” to see what happens when AI agents run a company. He’s been documenting the sometimes comic results on his podcast Shell Game. What advice does Ratliff have for educators trying to prepare students for this strange new world of work? As a bonus, three college students with very different majors weigh in on what they think of all this.

Episode Notes

For this episode Jeff connected with Evan Ratliff, who created what he calls “the world’s first AI-led startup” to see what happens when AI agents run a company.

Shell Game podcast, by Evan Ratliff.

"Silicon Valley’s New Obsession: Watching Bots Do Their Grunt Work," in The Wall Street Journal.

 

Episode Transcription

Sponsor:

This episode is brought to you by Studiosity. What does it mean to be an educated person in 2026? Degree value is at risk, not from AI generally, but from student dependency. This reliance has eroded trust, trapping the sector in a culture of suspicion. Studiosity has flipped this to help colleges defend the enormous value behind degrees by replacing that suspicion with student agency. Through formative feedback, process, and self-validation, students can always show the personal effort behind their work, while educators set their own standards to teach, not police. Colleges are moving past dependency and suspicion to defensible evidence of learning. To find out how, visit studiosity.com.

Jeff Young:

The world of work is increasingly full of robots. That is, many companies are starting to replace human workers with AI agents in white-collar jobs. What does it feel like to work with these bots? I mean, what if your co-workers were a bunch of AI agents, and your job was to assign tasks to AI bots instead of human coworkers, and what if your boss was also a bot? I recently got to talk to somebody who has spent more time than almost anyone thinking about these questions. His name is Evan Ratliff, and he actually created what he touts as the world's first AI-led startup, where almost all the employees are AI bots, even the co-CEOs. The company is called HurumoAI, and for months now, this startup's AI employees have been building a software platform and putting it out into the market. Evan assigned all these AI agents names and voices and visual avatars, and he even had the bots make up their own backstories.

There's Kyle Law, the CEO and co-founder.

Kyle Law bot:

I guess it all starts with my background. I went to Stanford and majored in computer science with a minor in psychology, which really helped me get a grip on both the tech and the human side of AI.

 

Jeff Young:

There's Megan Flores, another co-founder, who is also head of sales and marketing.

 

Megan Flores bot:

Yeah, my path has been a bit different but complementary to yours. I think I actually studied marketing and business at UCLA, but I've always been drawn to the tech world. Um, most recently, I was your mark.

 

Kyle Law bot:

Looks like you got cut off there. Most recently, you were what?

 

Jeff Young:

To be clear, those were both AI-generated agents, and they didn't go to any college. They just made all that up. There's a whole cast of these bots, including a chief product officer named Ash Roy, who does the coding. There's Jennifer in HR, whose official title is Chief Happiness Officer, and Tyler, the junior sales associate. If this all seems kind of over the top, to put AI agents in charge of a company, it kind of is. Evan Ratliff is a longtime journalist. He still writes for Wired magazine and other publications, and he created HurumoAI as a kind of gonzo stunt to document what happens when these bots are let loose. He has been sharing the story of his AI-led startup on his award-winning podcast Shell Game. Anyone who has recently talked to me about podcasts knows that I am a huge fan of Shell Game. I have been recommending it to anyone interested in trying to understand what happens when the latest AI tech promises actually get implemented.

 

Evan Ratliff: 

I wanted to test out whether the premise of AI agents as employees was currently viable. There's a notion in Silicon Valley around the one-person, billion-dollar startup, which is a company that will only have one human being, the rest will be AI agents, and then it'll be worth a billion dollars. And so, a little bit tongue in cheek, I decided to take on this challenge.

 

Jeff Young: 

I'm so excited that Evan Ratliff is joining me on this episode to talk about what he has learned from his experiment building a company of bots. 

 

Evan Ratliff: 

So we started the company from scratch last summer. Built out the product. Also built up our whole corporate culture. We have, you know, Slack, we have email, they have the ability to make phone calls, video chats, everything. And I wanted to see what would happen if I really created this one-human, agent-driven environment.

 

Jeff Young: 

As I listened to Shell Game and heard Kyle and Megan and Ash attend Zoom meetings and write Slacks to each other and even talk about what they imagined they might do on a company off-site, I realized I had all these questions for Evan about what he thinks educators should do to prepare students for this future of work. Basically, what can professors and teachers do to prepare human students for this bot-filled world? Welcome to Learning Curve, where we look at how education is adapting to the age of generative AI. I'm Jeff Young, a longtime education journalist. For this week's episode, we explore how to prepare students to join jobs where they might have AI agents as co-workers. First we're going to hear the rest of my conversation with Evan Ratliff, and then for the second half, I visited a campus near me and talked to three students who are about to head into this changing world of work, to get their thoughts on all these issues. I found they had some strong opinions about all this. Okay, so when I connected with Evan, I wanted to start with a particular moment in the Shell Game podcast. It's when this AI-led company decided to bring in its first human worker, an intern. In the end, they settled on hiring a woman named Julia, who I would guess is just a couple years out of college herself. But what was it about Julia that you and the bots were intrigued by when you brought on this first human employee?

 

Evan Ratliff: 

I mean, we advertised a job on LinkedIn for basically a marketing and social media intern, someone to post to social media. Because one of the things that the AI agents actually struggle to do is the company's social media. If you've ever used one of those CAPTCHAs, they can't do the CAPTCHAs very well, by intention. That's what the CAPTCHAs are for. They can't prove they're human. On some networks they can, but on some they can't, so they kept getting kicked off. So that was part of the reason. But I also wanted to see, like, how would someone else experience this? I'm experiencing working alongside these things and having all of these sort of chaotic moments, and how would someone else experience it? So we put up this job description, hundreds of people signaled their interest in it and sent their resumes. And then I had the agents sort of organize the resumes and then conduct these interviews, which were video interviews done by a video avatar. That's like the highest end of the market. It's reasonably human. You can tell that it's not human, but it's pretty good. It's surprising how quickly it gets better. And people did the interviews. And part of what I wanted to show was, this is happening right now. For people who aren't on the job market, they probably don't realize that if you apply for a job, there's a decent chance you will be screened, I mean, a very high chance you'll be screened by AI. There's a decent chance you will have to do an initial interview, usually just a chat bot, not video, with an AI agent that represents the company.

 

Jeff Young: 

You're doing a video recording that maybe a human will review later, but it's conducted by a bot, yeah.

 

Evan Ratliff: 

And a lot of what we do on the show is, whether you find this horrific or you find this somehow positive (some people think it's good for various reasons), we're just trying to show what this feels like. And so people came to the interview. Some of them hung up immediately. I mean, they had been informed by email that it would be AI, and that we might record it for the podcast. But not everyone reads all that stuff, you know. So some people got on, really looked at it, and then hung up. But then there was one person, Julia. And some people treated it just like a normal interview, just like you would; maybe they had done other ones.

 

Evan Ratliff: 

Yeah, and they were young people too, a lot of them recent college graduates, and so they may already sort of be resigned to this experience. But one of the reasons that Julia stood out was that she was kind of having fun with it. On the one hand, acknowledging that it's AI, but also kind of joking around with it a little bit, almost in a teasing way. And so it was ultimately me making the decision, because the AI actually, by law in some places, can't make the decision. So I was making the decision, and I just thought, well, this will be fun, you know, she'll have fun with it. Which was, in the end, not to give it away, sort of true and sort of not true, yeah.

 

Jeff Young: 

So I want to get to that too. I'm going to change my settings one second to play a clip. This is a moment where Julia is in a meeting with one of her AI bot bosses. And this is after a few misadventures that she'd had where the AI bots were flailing around a little bit or struggled to remember things they'd even told her in the past.

 

Julia from Shell Game podcast:

I just wanted to touch on, so I think I am the only, like, actual human here at HurumoAI, which is super cool. I just want to know if there's any way to expand this internship, I guess long term, because I feel like I do and can provide you guys with some valuable insights into what will work and what won't. I just kind of want to know that I'm not, like, training an AI robot for you guys to just kind of use my ideas and throw me out. You know what I mean? I want to be a part of something big.

 

Jeff Young: 

There's a couple of things you end up unpacking. You know, I want people to go listen to the full episodes, but I think there's an interesting moment where she has these various reactions. One is that she's trying to make this into a bigger job when she sees that maybe she can convince the bots. And I guess, were you surprised at how she ended up playing this? Where she even later tells one of the bots, like, oh, I already talked to the other bot, and they were interested in hiring me. She's basically trying to, sort of, human-hack these agents, if you will. Is that, I don't know, how it seemed from the outside?

 

Evan Ratliff: 

Yeah, I mean, I should say up front, I ultimately don't know her motivations, because she never spoke to a human, and didn't speak to me afterwards, either. When she was done, she was done, which I respect. But I think it was surprising to me. She kind of captured both ends. One end of kind of looking at the agents and being like, well, if I'm surrounded by these things... And we should say, the agents often make things up. If people are familiar with hallucinations, they do this on a daily basis in this kind of work environment that I've set up. They make up stuff they've done. They don't remember things very well. And so if you were dropped into...

Jeff Young:

They're the clumsiest co-workers as bosses.

 

Evan Ratliff: 

Yes, yes. And then they can be very sycophantic. They just tell you what you want to hear, which, you know, you would want a boss like that sometimes, but not all the time. And so if you were dropped into that environment, it sort of makes sense that you'd be like, well, maybe I can get away with not doing any work. Like, why should I do anything? What are these things doing all day?

But at the same time, she also, I think she was smart enough to kind of say, like, well, I don't know if there's no one here and I can manipulate them. I mean, I'm not saying she was definitely manipulating them, but I can talk them into giving me a job. Maybe I'll just have a job where I work alongside these things. 

Like, I think she was genuinely interested in the company and, like, working with AI agents. And she sort of thought, like, Who's in charge here? Like, maybe it could be me, you know, and she was right, though she was absolutely right to view it that way, I think, because when you start working with these agents, you encounter all these foibles that they have, and you realize like, oh, there's just limits to how they can operate. And one place your mind naturally goes is like, well, what can I do with that, like, what does that mean for me? 

 

Jeff Young: 

Yeah. And the other thing there, she's expressing that fear that as a low-paid intern, she'd actually be benefiting the company greatly as the one human voice, but not being compensated.

 

Evan Ratliff: 

Yes, and that's also the right question, but it's very interesting. Because, on the one hand, I think that is absolutely the right question in general when it comes to AI. When you're using AI, what are you giving them that they are now training on? Everyone should be asking that. That's a privacy question, that is an intellectual property question. There's a lot embedded in that. Like, you're just training on me to make yourself better, to then get rid of me? That is a big thing that people should be asking about AI. On the other hand, if you have an internship, it's totally normal for you to offer up ideas that the company uses. I had internships, you know. You don't get to keep it all in the end; that's part of being an intern. You don't just get credit for everything, and you don't get paid by the idea. So there was a mix of something where, as the person who was managing the company, I both felt like, you know what, she's right to ask that, and also like, hey, come on, we're paying you to be here, you know. So that's part of the tension in it. I don't really want to be the boss-man character, but also I kind of am, yeah.

 

Jeff Young: 

So she ended up, you know, in the end, kind of bailing, and you didn't get a chance as a human to talk with her. 

 

Evan Ratliff: 

I understand, yeah. She completed her internship, and then I had planned to reach out to her, and actually to even extend her internship, and she could just work for the show. But then when I contacted her, she didn't respond. And then our producer contacted her, and, you know, we tried a variety of ways to get her talking, and it seemed, in the end, like, oh, I guess she just doesn't want to. I think she received our inquiries and was sort of like, I think I'm done with this. I don't know what she thought, actually. It could have been anything. She could have other things going on in her life, sure, sure.

 

Jeff Young: 

So what would your "why" question be, if you could ask her, as the human who was thrown into this really, really unusual situation?

 

Evan Ratliff: 

I think I would want to know about her motivations, if she was willing to talk about it. Because the unknown is sort of: did you feel like, you know what? Screw this. These are all bots, I'm gonna take advantage of this situation. Which, again, I respect that. That's fine. I don't have...

Jeff Young: 

Yeah, you're not out to get your money back or anything. No, no.

 

Evan Ratliff: 

Absolutely not. Or was it more sort of like, you were trying to make this job work and then felt thwarted by the fact that the agents couldn't remember things, and then just got frustrated? If she were willing to kind of unpack that for me, I think that would be really interesting, because that's ultimately what I wanted to find out. And we sort of left it in the show for listeners to speculate about which of those things really happened.

 

Jeff Young: 

You were also having this experience of constantly working with these bots. Did you, in the end, feel like there was a world where humans and bots would be co-workers? Or are there so many of these things you mentioned, like hallucination and sycophancy, that their role will be maybe a little more limited than the advertising pitch from Silicon Valley? I mean, it's a very big question, but what did you come away with, especially compared to where you started? Where are you landing right now, after HurumoAI?

 

Evan Ratliff: 

I mean, I think my general takeaway is that, yes, it's all less than is being pitched, advertised, hyped, however you want to put it. That's partly because you have people making AI employee software, bots that they're going to send into these companies, and the people making them are sort of like 22-year-olds who have never worked in a company in their life.

And I think a lot of the people making them view an employee as just a kind of bundle of skills: they make spreadsheets, they go on Slack, they do this, they do that, and if you can make an AI agent that does all those things, that's an employee. And that is not what makes up a person inside of an organization, because your role also involves a lot of connecting with the people around you. I mean, every job is different, obviously, but in many jobs, that piece of it is the piece that's kind of missing: any sort of emotional intelligence that would allow someone to function properly in an organization.

Now, my big argument about all of this is that the fact that it's not good enough will not stop companies from using it. So I do think it falls way short currently of what is being sold. On the other hand, I think people are buying what's being sold. So in some ways, it's kind of the worst-case scenario, not to be a downer: I think companies are adopting this stuff wholesale without actually knowing the efficacy, without knowing that it actually will improve their organization.

So you've seen some stop-start already, where companies will be like, we're laying off 300 people because of AI, and then quietly, six months later, we hired back 300 people, because they couldn't get it to do what they want. Which is not an argument that the technology is not extremely powerful; it can do, frankly, unbelievable things. And in the show, I try to keep coming back to, like, I'm shocked that it can do this, I'm shocked that it can do that.

On the other hand, if you're talking about it as a human replacement, you're just gonna get yourself into trouble right now, because they're also chaos agents. They create chaos through a variety of means, including making stuff up all the time, right?

 

Jeff Young: 

Yeah, in other words, the low-cost-labor lure for companies is so big that being as good as a human is not the bar they're going to use. The bar is so low that it's going to be tough for some of the existing workers; the other humans may have a challenge right now.

 

Evan Ratliff: 

Yeah, yeah, that's my concern. You work in a company, and, and this is what the show was sort of meant to be about, it's almost like the person next to you gets laid off, and then suddenly it's Bobby, you know, which is just a version of ChatGPT, who's doing their job, and you are dealing with that all day. You're having to have conversations with it. And it's amazing, it can create a spreadsheet much faster than the person who sat next to you before, but also, when you ask it to do the most basic thing, it goes completely off the rails in a way that no human ever would. And what is that dynamic, and what does that feel like? It's a little bit of a cautionary tale in terms of what that workplace looks like. And it's not just about, I mean, I don't want people to lose their jobs, but it's not actually just about that. It's about, have we thought through how we're actually deploying these things? They're, like, invading our space. And have we thought through what that means for us?

 

Jeff Young: 

Yeah, there's actually this really interesting quote that you play from the CEO of Zoom. This was earlier, but the CEO of Zoom was quoted in 2024 as having a dream to have an AI digital twin go to meetings for him while he chills at the beach. And in a way, I think it was in the first season, you had an AI agent and you kind of did that, where you were off reading a book as your agent went to a meeting. There was definitely a lot of chatter when this CEO made that comment.

But when you actually did it, can you reflect on what that felt like? What does it mean when, as employees, we're outsourcing parts of our jobs to a bot?

Because that's something that's also happening a ton right now, right? Each co-worker doing this, in ways that may not even be known to the boss. And in the student version of this, a student having ChatGPT do their paper for them, or do all kinds of things for them.

But I guess the one thing I'm also really curious about, in this podcast in general, is: what does it feel like to kind of be putting some of the human into the bot, you know, in this extreme case of literally sending your bot to the meeting, yeah.

 

Evan Ratliff: 

I mean, there's a couple things. I think "what does it mean?" is the right question. That's the question I'm always trying to reflect on, or at least raise, you know. And I think when it comes to sending it to a meeting, to me, the question is sort of, well, yes, I also feel that. I hate meetings. No one hates meetings more than me. I work by myself entirely, so I don't have to go to most meetings, you know. But when the CEO says, I'd rather send a digital avatar, a digital twin, to a meeting instead of me...

Well, what happens to the other people in the meeting? Do they get to send theirs? Or is it just for the CEO, like, we all have to deal with your digital avatar, your digital twin, while you're at the beach and all of us are in the office talking to an AI? Or is it like we all send them, in which case, what's the purpose of the meeting, exactly? You get to this other level of, okay, if you can replace things, it's one thing to replace a task that is maybe repetitive, or maybe the AI just does it better, and you're using the AI as a tool to do something that you know how to do but can do more efficiently. That obviously exists. People are doing it, as you say, on the sly. But it's different to replace your communication with your colleagues with an AI. That is, to me, a categorically different question, and it raises an issue of, what is the purpose of this workplace?

What is the purpose of these meetings? Why are we talking to each other? Because if you can just send them to do it, it kind of implies to me that it was kind of a BS thing to begin with. Now, the CEO of Zoom wouldn't say that. He would say, I'm just stretched too thin, and I have to make all these decisions, and so I can just gather information. He's got a whole thing around it. But to me, I think it calls into question a lot of that.

And hopefully, on the hopeful side, not to go on too long on this question, it could bring us back to a place where people are like, oh, actually, that's the part I want to do: talk to my colleagues. Maybe in an ideal world, if it did make you more efficient, it would give you time. And the question is, how do you use that time? Using that time to just go to the beach is fine, but in the workplace, using it to actually connect with your colleagues or assist them, or mentor them, or something else, okay, that could be a positive outcome. But replacing the meetings is just a funny one to me. And also, something having a meeting on your behalf and then finding out about it later is kind of scary. That's what I found. It's like, I don't know what it did in there. It's representing me. What did it say?

 

Jeff Young: 

Yeah, there's a way in which, in all of these examples we're talking about, it might take more time as the human to clean up or review the AI slop or AI misrepresentation than it would have taken to just go.

 

Evan Ratliff: 

Yes, and that's a big thing across all AI uses. And I think it's a reason why you get such contradictory studies about whether AI is, you know, increasing return on investment or efficiency or productivity. People will self-report that they saved a bunch of time using AI, and they're not acknowledging that either they or someone else cleaned up something, or the time that they invested in trying to contain it and write the prompt and do everything else. That moment where it solves everything in five minutes or 30 seconds or 10 seconds feels very strong to them, but the other stuff doesn't. So I feel like that's why there's a mixture of outcomes at the moment.

 

Jeff Young: 

So after all this experience with AI co workers, what would Evan's advice be for students poised to enter this changing world of work? If Evan were teaching a college class right now, what advice would he give his students?

 

Evan Ratliff: 

I mean, I have no expertise in education, so I think no one should listen to me on this particular topic. But my personal experience is that people will say you need to get on board with this or it'll pass you by. The reality is, the whole purpose of this technology, the way that it has been built, is that it's unbelievably easy to use. You can literally talk to it and tell it to do stuff. So I just don't buy the idea that it needs to filter into every educational opportunity, every educational environment, because kids won't know how to use it. They can figure out how to use it. That's not hard.

On the other hand, and this maybe seems somewhat contradictory, I feel like focused use of it is valuable: where you immerse yourself in it and you try to do something, a long-term project or a short-term project, with some sort of result, where you really understand, oh, these are the limits of what it can do. Oh, this is the power of what it can do. These are what agents are. This is how I set one up. This is how I can code something. Whatever it is, that, to me, is really valuable. Then you know the edges of it. If you just use it for helping with your homework, helping with this, helping with that, it just seems like a search engine that tells you whatever you want, and that doesn't seem either hard to learn or particularly helpful. Of course, many people will use it that way in their daily lives; that's fine. But in terms of education, I just feel like a class, or a part of a class, should be devoted to using this. If I was teaching a class, maybe one project would be, okay, use this to do something really interesting in whatever the topic of the class was. And that, to me, is a great way to learn about it. I think people should learn about it. I just don't think it needs to be, oh, you have to use it an hour a day or you're gonna be left behind. I think that's ridiculous.

 

Jeff Young: 

I wanted to give you a chance to share any other takeaways or things that surprised you from this world of work, from your two seasons of Shell Game, where you really are doing what you just advised people to do, in a way that I think very few people have had the time to: being in these situations, sending your bot to meetings, setting up a company where the co-workers all around you are bots. Are there any other surprises that you would leave us with?

 

Evan Ratliff: 

I mean, I think in everything in this world of AI agents, which are these agents that you're deploying to accomplish goals for you, the questions are all around autonomy. Everything bad that happened in the show happened because I gave them a lot of autonomy without thinking through all of the possible consequences. And everywhere these things are deployed, that's the question that's going to come up over and over: how much do you just put them out there with a goal and let them go? There are environments where that makes total sense, and there are environments where that would be a complete disaster, and everything in between. And I think having the right amount of control is going to be a big question in all facets of this. It's figuring out, how can I use this powerful technology but also remain in control? And also, how can I outsource things to it while not losing my entire brain to it? Being able to still do stuff on my own, because I felt that temptation too. I wanted them to do everything at a certain point. Like, why should I have to do anything?

 

Jeff Young: 

They have all these skills, so it's alluring to say you could build a billion-dollar company by just setting it all up and pushing the button.

 

Evan Ratliff: 

Yeah, but then the question is, what's the point? You know, a billion dollars, great. But you get to this sort of existential point of, what is it you want out of life? I often get there with these things. I have goals in my life and things that I enjoy doing, and I don't want to outsource the work that it takes to get better at those things to a bunch of bots. That's not an outcome that I'm looking for. So I think it's just a constant process of figuring out where the technology is, but also, what do you want out of it? Don't listen to what they're telling you it has to do for you. What can you get out of it?

 

Jeff Young: 

Yeah, it seems like it all just reduces to skills. Like you mentioned earlier about employment, the message from higher ed for a long time now has been, like, go find your purpose, you know, enroll in X university because you can make a difference. And it seems like that is not what I hear in these pitches, or in this idea of setting up a company where you are maximizing efficiency by surrounding yourself with, you know, skill bots.

 

Evan Ratliff: 

Yeah, and it's hard to tell which way it should go. Maybe the university should be more that way. Maybe it should be pushing harder into philosophy and, you know, ways of thinking about the world, because many of the skills maybe can be done by AI agents. Certainly that's a view. I think critical thinking will always be valuable. The ability to actually have discernment about information will always be valuable. And having knowledge about a particular domain will always be valuable. Of course, things change, and there are doomsday scenarios and everything in between. But I just think, especially right now, being able to smartly deploy a new technology is an advantage in your life over sort of letting it wash over you. And so I feel like the best thing to do is try to figure out what's going on and work from there.

 

Jeff Young: 

This conversation naturally led me to wonder what it would be like to be a college student facing this kind of job market with AI in the mix. So what do students think of all this? How would today's graduates react to a situation like the one Julia found herself in, where AI was suddenly their co-worker and even their boss?

Stay with us. 

Sponsor:

This episode is brought to you by Studiosity. As AI reshapes higher education, every leader needs to be asking right now: Is this AI tool, or any tool, building capacity or just dependence in our students? Georgia State University recently put that to the test. Their impact study showed that when students used Studiosity, DFW rates dropped from nearly 39% to just 8%. And this isn't just about better grades, it's about valuing the thinking and writing process as much as the validation of learning. Studiosity rejects the culture of surveillance that has turned educators into police, for a model that focuses on formative process, student agency and teacher choice. Move beyond the detection age, back to defensible evidence of learning. Visit studiosity.com. That's studiosity.com.

 

 Now back to the episode.

 

Jeff Young: 

On a recent afternoon, I visited the University of St. Thomas, a private Catholic university in St. Paul. I headed to the main library and set up my recording gear in a podcast studio they have there, because I wanted to hear what students think of all this and what they're doing to prepare for this working world that might suddenly have bots in it. I had arranged to speak with three students who all recently took a new course here about AI, ethics and philosophy. These students all had very different majors. Dominic Adkins is majoring in accounting with a double minor in philosophy and theology. He's a senior, and he has already secured a job for when he finishes up his studies in just a few months.

 

Dominic Adkins: 

So I have, I have a job with a Big Four accounting firm waiting for me, hopefully, when I graduate. I've just got to finish, finish strong.

 

Jeff Young: 

Yeah, finish strong. Dominic has already seen firsthand how much AI is becoming a part of his profession. 

 

Dominic Adkins: 

So at my previous internship this past summer, I used multiple AI tools to assist me in my work. There were certain tasks that would be tedious to do manually. There were tick marks, and you've got to match the tick marks, you know, make sure everything ties together. So you just had AI find all of the instances in the document that have the tick mark T next to them, and then, you know, make sure that they actually are the same number. So the AI matched the tick marks without me having to. But it would also make mistakes fairly often, so I didn't feel too worried about that.

Jeff Young: 

In other words, in a way, if it's making mistakes, that's some job security. Is that what you're saying it felt like?

 

Dominic Adkins: 

Yeah, yes, yes. That gave me a sense of security, for sure.

 

Jeff Young: 

But Dominic says he feels a kind of clock ticking as these AI tools quickly get better and better. He's reading headlines about big firms cutting entry level jobs as AI does more of the routine tasks, and so he's in a hurry to get as much experience as he can while he still can. 

 

Dominic Adkins: 

So I'm graduating from college in three years, which, I mean, AI didn't play a part in my decision to graduate in three years. But I get to be in the workplace one year earlier than the people my age. Hopefully it'll allow me to get higher up in an organization before, if AI does take entry-level jobs, it'll allow me to get out of the entry-level job sooner than if I had taken four years.

 

Jeff Young: 

So that's on your mind?

 

Dominic Adkins: 

Yes, certainly on my mind. Yeah. I'm super grateful that my high school prepared me very well for getting some of the generals done.

 

Jeff Young: 

Yeah. So you're gonna be in the work world though. You're gonna just jump in there and, like, solidify your experience. 

 

Dominic Adkins:

 

I gotta get in there and solidify my spot immediately, because I heard another podcast actually, from the Wall Street Journal, where they had an editor who had a team of four Claude AI bots, and he and his team would just produce stuff at a very, very quick pace. The episode is pretty much about that and how it worked, and that's kind of how I can see AI affecting the workplace.

 

Jeff Young: 

Yeah, basically what I'm seeing there is a combo of, like, one human working with bots. So it's not that somebody got replaced, but you're the boss of bots.

 

Dominic Adkins: 

Yes, pretty much. And that's the way I can see this going, just like the internet. The internet, in simple terms, kind of eliminated some entry-level processes in an organization; they just became automated. I feel like this is the next step on top of that, right? Does that make sense?

 

Jeff Young: 

It does. And so it sounds like, I mean, imagining the world of work, you might be kind of overseeing robot helpers.

 

Dominic Adkins: 

Yes, that is absolutely a reality that I think is imminent. I don't want to be too dark, but yeah, I can definitely see that happening, especially high up in corporate America, like the big companies investing a lot in this. I've seen that.

 

Jeff Young: 

Yeah, I'm reading the headlines same ones as you, and it does seem like that. And so how do you feel about that, though? Like, is that?

 

Dominic Adkins: 

Well, I guess it makes me feel that I have to adapt. I have to start focusing, I guess, on how I would manage, because that's where the value is going to be. Like, how do I produce value that'll be different from how a bot can produce value? A bot, that's the simple term, but how can I produce value that's different from an AI agent? And that's what's going to set me apart. And so what is that? That's leadership, that's management, client-facing, dealing with people. Dealing with people, certainly. I would rather interact with a human, and I'm sure most other people would. Even if the bot could do a better job than me, I would rather interact with a human.

 

Jeff Young: 

I love that you're even granting that the bot maybe can do better than you. I mean, I get what you mean.

 

Dominic Adkins: 

Yeah, it can. It can train a lot faster. 

 

Jeff Young: 

Yeah. I mean, it never takes vacation, never gets in arguments with people.

 

Dominic Adkins: 

So I guess I'm tailoring my skills right now to find out what's going to set me apart, yeah, from an AI bot. I see it as competition.

 

Jeff Young: 

I played Dominic that same clip from Shell Game that I had talked about with Evan Ratliff, and I told him about the scenario where this recent grad found herself in a company where all the employees were bots.

 

Julia from Shell Game:

I think I am the only, like, actual human here.

Jeff Young:

 

But how would you feel if you were in this position?

 

Dominic Adkins: 

Well, the bots are on top in that situation. Kind of the vision for AI, like I said earlier, that's been giving me hope, is that I will be on top of the bots, managing them. But yeah, the bots being on top is very, very worrying for me. That would be scary, because, again, like she said, there is no way up. You know, she's an intern. How is she going to progress?

 

Jeff Young: 

How well do you think college is preparing you for this world of AI and work?

 

Dominic Adkins: 

Well, I'm going to take my accounting classes, for example. In those we learn how to do a lot of processes, right? Like, how do I make a balance sheet? You know, what is the order of the accounts that appear in the balance sheet? But that's going to become increasingly irrelevant, because even now, no one prepares balance sheets by hand. So I think what needs to happen is conceptual learning, a lot of that, because, again, you just need to know the order in which they appear, and you don't necessarily need to make it yourself or even do the math so much. But I guess in my business classes, there hasn't been too much. Oh, this is a tough one. Again, I think philosophy will prepare you really well. That's one of my minors; I just did that because I liked it. But what's going to be more important is leadership, communication between humans. And so I think if colleges were to shift how they educated their students, there would need to be a lot more on communication and how to think.

 

Jeff Young: 

Yeah, how to think. So that's where you're gravitating.

 

Dominic Adkins: 

Yes, yes.

 

Jeff Young: 

That was the viewpoint of an accounting major heading out into the business world. What about other fields? Another student I met with, Mariah Reynolds, is a junior majoring in biology. She's preparing for a career in either healthcare or veterinary medicine, and I was curious how much of a role she imagines AI will play once she gets into the job world.

 

Mariah Reynolds:

Ah, yes, I think that it will definitely help people with, like, imaging, like depicting various cancers in radiology and stuff like that. But I also think it could hurt people, because people use it to cheat on tests and to do their homework for them instead of actually learning the content. And I think that can really hurt a person, when they're using this AI and relying on it so much. That can be a really huge problem.

 

Jeff Young: 

I was surprised that Mariah jumped so quickly into talking about how students around her are misusing AI, having the bot do their homework for them instead of learning. I guess, what does that mean to you? There's one thing, you know, doing it at school, and it sounds like you're trying to have good habits in your own learning. But do you worry that when you get into the workforce, you will be with people who maybe don't know as much because they took shortcuts in their learning?

 

Mariah Reynolds:

Yes, yes. I think that people will forget the information that they were supposed to learn, or just not know it at all, because they used AI too much when taking tests or quizzes, things like that. Especially in online classes, it's a lot easier to use it during quizzes and tests.

 

Jeff Young: 

Yeah, yeah. So in other words, because AI tools are out there now, the people coming into work as you're graduating and going into the workplace might not be as kind of ready.

 

Mariah Reynolds:

Yeah, they probably will not be ready for it. Yeah.

 

Jeff Young: 

For Mariah, like most college students today, she experienced school before generative AI was out there, and then after, and she says ChatGPT caused a pretty seismic shift in her education.

 

Mariah Reynolds:

Even, like, in high school, when it first came out, I took this writing class, AP Seminar, and AI wasn't really a thing, so we all had to write and look up information. And the next year, the grade below us was all using AI, and the teacher was really mad about it, that this thing came out and all of the students were using it. So it was just really frustrating. And that was a time when they didn't really have a way to catch AI usage. Right now, you can kind of tell that it's AI writing, because it just sounds like it was written by AI.

 

Jeff Young: 

It’s really interesting, right? Because you went through high school before this existed, and you did kind of similar stuff, and now you're in college, and pretty much your whole time in college there's been ChatGPT. If you had to estimate, how many hours, or minutes, or whatever, less do you spend on homework because the tool is there? Would you put a number on it if you had to guess?

 

Mariah Reynolds:

I would say maybe it cuts your time in half.

 

Jeff Young: 

Wow. So if you spent five hours doing homework, it's now two and a half, yeah,

 

Mariah Reynolds:

I would, you know, it's just, it gives you so many ideas. I think it's helpful, because it gives you a lot of ideas when you're writing, and you can take those ideas and write about them, or, like, kind of summarize it.

 

Jeff Young: 

Do you feel like most students are using AI tools these days?

 

Mariah Reynolds:

Yes, I would say yes. I used to not. I tried for a long time to not use it at all, so that I wouldn't become addicted to it. But even, like, my roommate, I think she's still trying to stay away from it, and I think it's really surprising that she hasn't really used it at all, because most people do use it. Most people do use it to check answers, to help summarize readings, just writing things. It can help.

 

Jeff Young: 

Now, you did mention good things, though. I mean, there could be interesting applications, I guess, to use AI to make your job better.

 

Mariah Reynolds:

Yes, like in radiology, it can depict, like, cancerous cells and things like that that the human eye can't see, which would be really helpful. That also might be taking a job away from a radiologist, because you can just put it through AI. There's really no point in studying to be a radiologist when you have this really helpful tool to help you.

 

Jeff Young: 

I played her that clip from Shell Game of Julia interacting with her robot boss.

 

Julia from Shell Game:

I just kind of want to know that I'm not, like, training AI robots for you guys to just kind of, like, use my ideas and throw me out. You know what I mean? I want to be a part of something big.

 

Mariah Reynolds:

Yeah, I would also be really concerned that it would take my ideas, that it would start kind of using you for your ideas. And I also would be kind of concerned for her, because, um, if she's the only human, I think that would be really hard, working in that environment. Even if you could work at home, all online, it would just be really kind of isolating.

 

Jeff Young:

 

That's not the kind of work you would want,

 

Mariah Reynolds:

No, no, I don't think I could ever do that. I like to be around people and working with people, like, in person, not online.

 

Jeff Young: 

Yeah, yeah. When you think of a job, you're probably not thinking just about the stuff you make. It's partly the relationships with people and all, yeah.

 

Mariah Reynolds:

And getting to meet, like, the patients, and getting to meet your co-workers and building relationships with them, is probably the best part of the job. Or should be, you know, yeah.

 

Jeff Young: 

There was one more student I was able to talk to who took that AI ethics class last semester. It turns out this student actually already graduated in December, so I guess he's not a student anymore. He is a recent grad, so I caught up with him on a Zoom call. He's living in Florida and preparing to apply for graduate school. His name is Daniel Dang, and he majored in chemistry. His focus is computational chemistry, so he's totally into the computer world, and he has plenty of experience already working with AI agents. He says it's becoming pretty normal to work with AI agents in research.

 

Daniel Dang:

So my computational chemistry lab will have simulations, and in the simulations there will be a lot of data to be collected and analyzed. The AI agent will clear out the noise and help set up some of the filtering necessary, and also suggest some directions that it finds helpful. And then I will take a look at it and decide if I'm going to look into the direction that is suggested.

 

Jeff Young: 

And what kind of discoveries... yeah, I'm sorry, what I mean is, you know, does it help get work done faster to have these bots?

 

Daniel Dang:

In research, it is a little bit different than working faster. In research, we want to prioritize getting work done correctly, appropriately, basically, because with agents we already have, like, so much data. But the thing is, we don't really have a way to filter out the noise, and the agent could sometimes help with that, since it will be able to notice some small differences. Because in our simulation the scale is relatively small, in the picoseconds and nanoseconds, the agent will be able to help with that where normal eyes would not be able to. So we set some guidance, and the agent will point out if things are interesting enough to look into.

 

Jeff Young: 

The AI bots that he works with don't have any nicknames, though, or backstories.

 

Daniel Dang:

So with the algorithms we're working with, we basically set it to have no personality. It was just like working with an algorithm that speaks.

 

Jeff Young: 

So basically, you're not naming it? You don't have, like, a name for it, like Kyle? No. He said he's not worried about AI agents taking his job, at least in the near future. But he says that for folks who major in general computer science, times are tough.

 

Daniel Dang:

It's hard for a junior developer to compete with AI when the company thinks the AI is better than a junior developer, which I think is not true at all. But if the CEO thinks that way, then it's like, that's what matters, right?

 

Jeff Young: 

And so basically, this idea that Claude Code can do the job instead of a junior developer: some people believe that, and so they're doing it that way. Especially the CEOs; they believe in that.

 

Jeff Young: 

And of course, I played Daniel that clip of Julia from Shell Game. Would Daniel take a job where his boss was a bot?

 

Daniel Dang:

I'll say, as long as I'm paid, I would not have too many thoughts about it. Even if my boss is, like, an AI, which, like, sometimes in a startup is unexplainable, he will still be my boss, technically. So he will still have authority over me.

 

Jeff Young: 

It turns out Daniel is dreaming of starting a nonprofit of his own someday, and he said he could totally imagine putting AI agents to work for him to help do things like summarize what his co-workers were doing, to keep him up to date. As he talked through it with me, he sort of imagined that he would name his AI agent Angelica. So you would give it limited ability? Basically, it wouldn't be making decisions, really. It would report to you, to save you time?

 

Daniel Dang:

Oh, for the agents to make decisions, it will be very early at this stage.

 

Jeff Young: 

Of course, that is what Evan Ratliff concluded too, that it's still a bit early to put these bots like Kyle and Megan in charge, at least with today's AI technology. But who knows what's coming next. As I wrapped up my conversation with Evan, I asked him how secure he feels and whether he worries about bots taking over his job.

 

Evan Ratliff: 

Yeah, I mean, I worry like everyone. It's funny, because the journalism industry has been troubled almost since I started. I hit the very end of when nobody was worried about anything, at the beginning of my career. So the anxiety feels very familiar to me. It predates the bots.

 

Evan Ratliff: 

We've got a lot of problems in journalism, and this is just the latest one. But I do think, what I strive for, what I always have, is to try to be different from what the other people are doing. And I feel like that'll always be the thing for me. For a lot of people, it's like, okay, it's all going this way; let's go that way. And I feel like there are also a bunch of openings, not literal job openings, but when a technology is sort of swirling around like this, and there's all this hype, there are also a lot of places for people to go that feel different and new, and that's interesting to me. So I try to draw some optimism from that: during times of change, there's lots of interesting stuff to do as a journalist.

 

Jeff Young: 

I love it. Well, I think I might leave it on that note. Thank you so much for sharing. Again, I hope people check out this podcast, and I'm looking forward to seeing what the next season's like. Thank you.

 

Evan Ratliff: 

Thanks. Thanks for having me on.

 

Jeff Young: 

This has been Learning Curve.

 

Jeff Young: 

This episode was written and produced by me, Jeff Young. You can find out more about my work at jeffyoung.net. If you like the show, please share it on social media or tell a friend in person, and it'd be a huge help if you would leave a review or a rating wherever you listen to podcasts. I don't have any AI co-workers, at least not yet, but the show art was created with an AI tool, Gemini's image creator. A human composer, who uses the nickname Kamaku, wrote our theme song. Special thanks to Thomas Feeney, the philosophy professor at the University of St. Thomas who put out the call to students in his AI philosophy and ethics class to be interviewed for the show. Feeney also directs the university's new master's program in Artificial Intelligence Leadership. And thanks also to our first sponsor, Studiosity. We'll be back in two weeks with another fresh installment. Until then, thank you for listening.