The Changing State of Talent Acquisition

#68: The AI Skills Gap: How Educational Institutions and Employers Can Prepare Workers for the Future

Graham and Marty from Change State

In this thought-provoking episode, we welcome Alex Swartsel, leader of the JFF Labs Insights Practice at Jobs for the Future, to explore the growing AI skills gap and its implications for the future of work. Alex shares eye-opening research showing how AI usage in the workplace skyrocketed from 8% to 35% between surveys in June 2023 and November 2024, while revealing a concerning disparity: 60% of employees are using AI for self-directed learning, yet only 16% have access to employer-provided AI tools.

Our conversation dives into what makes a "quality job" in today's economy and JFF's ambitious mission to see 75 million Americans working in quality jobs within the next decade. Alex unpacks how AI is transforming both educational settings and workplaces, challenging traditional notions of digital literacy and reshaping the skills landscape.

You'll discover why the most valuable future skills may not be AI-specific technical abilities, but rather the "human skills" that AI can't replicate—creativity, critical thinking, communication, and adaptability. Alex also explores how different educational institutions are navigating AI adoption and the policies that could help create more equitable access.

Whether you're an HR professional, educator, or worker navigating this rapidly evolving landscape, this episode offers critical insights on how we can ensure AI becomes a technology that makes everyone better off, rather than deepening existing divides.

Links: 

JFF Labs: The AI Ready Workforce Research Findings

Jobs for the Future

Speaker 1:

Welcome to the Changing State of Talent Acquisition, where your hosts, Graham Thornton and Martin Credd, share their unfiltered takes on what's happening in the world of talent acquisition today. Each week brings new guests who share their stories on the tools, trends, and technologies currently impacting the changing state of talent acquisition. Have feedback or want to join the show? Head on over to changestate.io. And now on to this week's episode.

Speaker 2:

All right, and we're back with another episode of the Changing State of Talent Acquisition podcast. Super excited for our next guest, Alex Swartsel from Jobs for the Future. Alex, welcome to the show.

Speaker 3:

Thank you so much. It's wonderful to be with you.

Speaker 2:

Yeah, we'd love to have you start off with an easy one. We'd love for you to share a little bit more about your journey to becoming the founding leader of Jobs for the Future's JFF Labs Insights Practice.

Speaker 3:

Sure, and I'll start by saying a little bit more about Jobs for the Future for listeners who may not know us yet. We are a national nonprofit organization that focuses on transforming education and workforce systems across the United States so that more people, especially people who have historically faced barriers to economic advancement, can get the training and education they need to attain a quality job and a career that allows them to sustain themselves in their lives. That mission has been super compelling to me throughout my professional career. I've had one of those careers that has spanned politics and public service. I'm based in Washington, DC. I've spent some time in trade associations.

Speaker 3:

I have my MBA, but within the last 10 years or so I spent time at Teach for America in the DC region, getting a really firsthand look at just how critical education systems are in making sure that all of us are prepared to thrive in our society and to help make America more economically competitive, and just how important it is that we make those supports available to everybody.

Speaker 3:

Of course, talent is equally distributed, but too often opportunity is not, and the centrality of opportunities for work and for quality jobs felt really critical to me as a key component of economic mobility and opportunity, with line of sight across really every aspect of the spaces that all of us move through, whether that's K through 12, potentially post-secondary experiences or career and technical experiences (apprenticeships, for example), and then the world of work as well.

Speaker 3:

So this felt like a terrific place, and JFF Labs is a very special place. Much younger than JFF as an organization, we're only about six years old, and we are really designed to be the research and development arm of JFF, looking far into the future to understand how emerging technologies and other innovative models are poised to change the way that we work, the way we earn, and the way that we learn. And so we've had a chance to dive into technologies like artificial intelligence, which we'll spend some time on today, as well as things like virtual reality, and we're starting to explore quantum computing, for example, just knowing how fast technology is developing and how much of an impact that potentially has on all of us who are living our lives and working day to day. So I feel deeply connected to this mission personally, as somebody who cares profoundly about opportunity for all, and it also is just an incredibly exciting place to be, with extraordinary colleagues all across the landscape.

Speaker 2:

Yeah, I think that's great, and, you know, we're big fans of JFF. You don't know this, but Marty certainly does: my parents have both been public school teachers on the South Side of Chicago for, you know, 50-some-odd years, and I'm very passionate about the education space in general. So, you know, thrilled to have you on. One piece that I'd love to have you define as we get started here: you mentioned quality jobs, which I know is something that JFF talks about a lot. What do we mean by quality jobs?

Speaker 3:

Yeah, it's a great question, and at bottom, I think of a quality job as the job that we all probably want to have. It's certainly a job that pays well enough to sustain people and, where they have them, families.

Speaker 3:

But it's beyond that, and beyond even making sure that jobs bring benefits with them, for instance, that also allow us to support our lives; it's a job that also creates opportunities for advancement upwards in our career path and gives us the flexibility and autonomy that we need.

Speaker 3:

It is a pretty holistic structure, and far too often those kinds of jobs are not available to all of us, which is exactly the kind of challenge that JFF has set for itself. We have a North Star vision, which you may know of already, Graham, that in the next 10 years, 75 million Americans should be working in quality jobs, which, surprisingly, is about double the number who are in quality jobs today. So it's a major focus for us, and one that has a lot of different dimensions, because for that to be successful, we need our education and training systems to work at their highest potential, we need jobs to be created in a variety of ways, and we need to make sure that people are supported to prepare for and navigate into those jobs. So it's a big mission, and it allows us to think really comprehensively and holistically about both what people need and what our systems need.

Speaker 4:

Yeah, that's very interesting, Alex, and welcome to the show.

Speaker 4:

You know, it's quite a treat, actually, to talk to somebody who's been thinking about these things for a while, especially as we approach, or are actually already in, this current moment where it does seem like a lot of different factors are dramatically affecting the future of work.

Speaker 4:

Obviously, what's on everyone's mind these days is artificial intelligence. Graham, is it true that every episode we've done this year has touched on artificial intelligence in some way? I think that might be true. And in some way it feels like we've been talking about AI for a while, but it does seem like something has changed dramatically recently. You did some research recently, Alex, where one of the stats that really popped out at us was that AI usage at work jumped in the last year from 8% to 35%, which is, what, more than a fourfold increase? I just wondered if you could comment a little bit on that stat. I mean, is it as simple as that's when large language models like ChatGPT came into public awareness, or what are the forces that are shaping such a dramatic shift in such a short amount of time?

Speaker 3:

Yeah, it's a great question. I think it's probably a lot of things. Those are two different surveys that we did, one in June 2023, so actually after ChatGPT had been around for about six months, and the most recent one in November 2024. So it is a pretty significant jump in that period of time, and I suspect it's probably a combination of things. One is that there is surely an increased level of awareness amongst people (we studied that as well), and not just awareness but depth of understanding, along with opportunities to try these tools as they've proliferated and improved in quality.

Speaker 3:

One of the things that has always been so striking to me about this Cambrian explosion in artificial intelligence that we've seen over the last approximately two years is that it's essentially a B2C technology. It is available to all of us in various forms, obviously not always in the most sophisticated forms that require payment to use, for example, but anybody can access these tools and play around with them, even just on a mobile phone, which is really exciting. And so, and this is also borne out in some of the survey research that we'll probably talk about, I think you're seeing the human spirit and human curiosity at play, with people hearing about AI on the news, through social media, from their friends, maybe from their employers, and getting a sense of: all right, let me try this out. That's on the individual side. We also are seeing constantly that there is increased pressure on businesses to adopt and use artificial intelligence.

Speaker 3:

There was a study from McKinsey just in the last month that found that over 75% of their respondents are saying that their organizations are using AI in at least one business function. Quality had been a big concern, at least initially: where somebody might potentially have tried out an AI tool or looked at it in the early days and thought, yeah, I can sort of see the potential, but I'm not sure that I can fully get my arms around it, that same person coming back to some of these same models today is probably going to be blown away by the improvement. That's especially true, I think, for some of the image generation models. But it is a really significant shift, one that I think has a lot of different parents to it, and one that I think increases the urgency for businesses, for education and workforce institutions, and for learners and workers as well to get their arms around what this means for them.

Speaker 4:

Sure, well, it certainly helps to have that unpacked.

Speaker 4:

It was certainly much bigger than my initial hypothesis that it was the mainstreaming of ChatGPT.

Speaker 4:

That seems like it's certainly part of it, but, as you point out, there are a lot of vectors pointing in that same direction. Well, I wanted to ask you about another stat, or maybe the stat is less relevant than the distinction that you make between what you're calling individual initiative and institutional support for AI as it relates to using AI in jobs. We've seen in other trend reports that business leaders, HR leaders in particular, are looking at when AI is coming; they're waiting for it, they're anticipating it. And I think some of the leading thinkers that we've talked to are making the point, which maybe is what you're also saying here, that it's kind of already here: individual employees, as you say, it's a B2C product, a lot of people can access it, and what's to stop them from using it in their job? Is that what you mean when you say this difference between institutional support and individual support? Or maybe you could just share your thoughts a little more about that.

Speaker 3:

Yeah, I think that's some of it. And to a certain degree, a technology like this, it is of course B2C, but it is also, increasingly, very much B2B, and it shows up in different ways in those different modalities, right? So if I'm an individual, whether I have a job or not, and I can access this tool myself, maybe I'm using it in part for personal things, or in part I can see opportunities to use it at my job. If I am an institutional entity or a business and I am thinking about a formal workplace adoption of these kinds of tools, that kicks off, as you all know well, a whole cascade of decisions that that entity has to make: What exactly is the tool that we're going to use? What are the cost-benefit analyses of different structures? What policies will surround it? How do we think about data governance and privacy? What are the use cases that we want to explore? How are we training and supporting our people, for instance?

Speaker 3:

So the ability for an institution to make a decision about a use case, as contrasted with an individual person's ability to just pick up a phone or sign into a webpage and start to use it: those barriers to entry are very, very different. And so on some level, it doesn't surprise me that we're starting to see these gaps between how people are using these tools of their own volition and what their institutions formally provide. And that's true, I think.

Speaker 3:

Even within work, we're seeing distinctions between people saying, I'm using it at work, but not because my employer is telling me to, I'm just using it at work (I've seen the term BYOAI float around out there), versus people who are saying, yes, I'm using it in a school setting or I'm using it at work because it's being intentionally used in the classroom or at work. But I actually think that there's real opportunity here, and this is part of what we've been talking about coming out of these survey results, because now any institution that's thinking about adopting AI in some form (and probably all of them should) has a whole universe of testers and experimenters and brainstormers within its four walls, right? All of the people who are part of your organization and culture, who are already thinking in these terms, know their jobs and their work really well, and probably already have ideas. So for us, I think it points to some really exciting possibilities for how organizations can involve their own stakeholders in the deployment and use of AI and the decisions that they make around it.

Speaker 4:

Yeah for sure.

Speaker 4:

Well, your research is full of such provocative stats, but I just want to call out a couple here.

Speaker 4:

So, in reference to the point we were just discussing, I think you found that 60% of employees are using AI for self-directed learning, as opposed to the institution or the employer providing direct support or access to a tool. And separately, and related, of course, you found that only 16% of people had access to an employer- or school-provided AI tool.

Speaker 4:

So I think, I mean, I don't know if you can just take the 60 minus the 16 and have that be meaningful, because I assume those came from different places, but nonetheless it highlights a major gap here. And I guess the question that comes to my mind when I see that gap is: who are the people who are out there accessing these tools on their own and taking the initiative to do that at all? My theory might be that this is one subtle way that certain groups are getting ahead of other groups in terms of advancement opportunities, sort of under the radar, because, based on their education and their life experience, they've come to a place where they have started using these tools, other people have not, and there's an inherent advantage in that. I just wondered if you have any response to that, or any reaction. I mean, do you think about it in a similar way?

Speaker 3:

Yeah, I think that's exactly right. It is its own form of digital divide when some of us have access to really high-quality tools and some of us don't, on top of the fact that the digital divide still exists in this country; not all of us have access even to high-speed internet. And so it's an especially important question to ask. I don't think our data parses, in every way that we would want it to, who the people are who do and do not have varying kinds of access, especially to paid tools, which is not always, but can sometimes be, a proxy for quality. But we have seen some interesting things. One that really stands out is the degree to which people of color are using these tools, which is higher than a lot of other demographic groups. I don't think we have all of the reasons for that, but it's just really striking for us to see. But I think there are a lot of implications when we're starting to see these kinds of gaps, whether that is the lack of early exposure to technology like AI as its adoption grows, especially when people are coming out of schools, for instance, and entering the workforce, where they may be expected to be familiar with AI, to be AI literate, but they haven't necessarily had experience with the tools in an educational setting. I think that will be a growing concern for employers, and that barrier could exist for a lot of different reasons, for example, when a school is under-resourced and so can't provide access to tools.

Speaker 3:

We've certainly heard that, including from some community colleges who have talked about the cost of some of the licenses, or the cost of what's called compute, as you all know. If they're trying to build out actual facilities for people to build and experiment with AI tools on their campuses, that gets very expensive very fast. So the resource considerations, which are not always the same across the country, are very, very real here. And we're also seeing continued mixed messages in some cases, I would say particularly within education, where there are certainly some districts or schools, leaders, or particular educators who are all the way in on this technology: building it themselves, experimenting with it, encouraging new forms of pedagogy, encouraging students to use it.

Speaker 3:

At the same time, there are a lot of messages that using AI is cheating, that using AI raises academic integrity concerns in some form, and we've heard anecdotally from students that that can be a real concern. So it really is, I think, a complex landscape, where everybody is figuring this out for themselves and trying to navigate new terrain. And so the more that institutions working with people, especially learners and workers, can have a way of thinking about AI that leans into its potential as a transformative technology, one that will be relevant for them in learning and relevant for them in work, and treat it as a new foundational digital skill that we need to develop, that, I think, will go a long way towards addressing some of the potential gaps that we could see emerge here.

Speaker 2:

Yeah. So, Alex, on that: we talked a lot about educational institutions and districts and access to AI tech tools, right? And I'm just curious how you think about that same lens for an employer or an organization. On one level, a community college system might not have the same level of access that another district in, say, the Bay Area might. But how do you think about that for organizations, and what are some of the risks organizations face, from that same lens, of not investing in AI tools or not investing in AI training? Is that contributing to workplace inequity? Talk to us a little bit more about the employer side.

Speaker 3:

Yeah, potentially. On the risk mitigation side, especially if we're in an environment where people are bringing their own AI tools to work, we've already seen companies send really clear messages about being thoughtful about the data that you input into tools, not sharing proprietary data, or making sure that the models are not being trained on your data. So making sure that people have the foundational literacy to understand how the model works and what risks might show up, so that they can protect themselves and their companies as well, is maybe the foundational element in terms of risk mitigation. But there are also extraordinary opportunities that I think organizations miss out on when they're not engaging with their employees in this way. Just a couple come to mind for me. One, it's a signal of investment in your workforce, recognizing just how transformative an impact AI will have on the future. We see time and time again, when we work directly with workers and companies helping implement new technologies, the workers come back and say: it is awesome that my employer trusts me with this exciting new technology and helps me think about ways that I can do my job more effectively. That's really a signal of strong employee engagement. So that's an extraordinary opportunity. And certainly, as businesses themselves are seeking to adopt AI, as we talked about a minute ago, more and more businesses are telling us that their employees are the ones bringing them ideas about how to use AI, and so when you have broad-based AI literacy training, you create more opportunities for your workforce to show up and add value in that way.

Speaker 3:

And one thing that has felt really important to us as we've dug into this space a little bit is to make sure that AI literacy training is not just tool-specific but is truly broad-based. Maybe one good analogy here is the idea of digital citizenship. When we think now about supporting people in understanding how to use the internet, there are all kinds of layers to that: how to understand whether a source is likely to be accurate, how to be safe online, for instance. And I suspect that, because AI is increasingly a general-purpose technology, it's going to be the water that we swim in, and the more that we can orient training around those kinds of foundational questions, including ethics, including responsible use, all of these things, the more effectively that's going to prepare people, much more so than: here's how to use ChatGPT, or here's how to use Microsoft Copilot, where the only thing that you know is how to use that one tool. Because we also know that AI is going to develop extraordinarily quickly. That's already happening, including with the growth of AI agents, which seems to be the topic of the year this year in particular, as these tools show up in more and more of our lives and at work.

Speaker 4:

Very true. Well, you know.

Speaker 4:

I wonder if we could get into some specifics, or if you could just provide some specific examples of AI-based skills, because I know this is a huge priority. I think you said in your research that 70% of people of color feel they need to gain new skills related to AI, and I think we've all seen stats like this and we're all on board. But one of the challenges is: what does that actually mean? And maybe you don't know, maybe no one knows at this point, but can you give us some examples of specific AI-related skills that people are trying to cultivate, and how are these different from what we might call traditional skills?

Speaker 3:

Yeah, that's such an important question, and I want to take it a little bit beyond what we might think of as "AI skills," quote unquote, because in addition to understanding, as I was just describing, how the technology works, there's understanding how to use it most effectively. We're seeing, even over the last couple of years, a lot of micro-iterations of that. So are we training people to prompt effectively? Are we training people to build so-called GPTs or agents? Are we training people to really deeply understand what data sources AI is tapping into and what it's not, either generally or within the context of their job? But what fascinates us, and feels even more important than AI skills by themselves, is the ways in which AI will transform jobs as it starts to percolate across the workforce, and what new skills will be activated by the changing job descriptions that will result. We actually released some research about this in late 2023, called The AI-Ready Workforce, which mapped out this idea (and I think this language is becoming more and more commonplace) that AI is not just something that can potentially automate away tasks or skills. AI also has an augmenting effect.

Speaker 3:

There are some types of skills or activities where the use of AI makes humans better at their jobs, better at undertaking those kinds of skills. A great example of this is what we think of as the soft skills: human connection, communication, collaboration, the ability to work with teams. So I'll give you one example. People who are managers, for instance, might need to have a coaching conversation with a direct report. You can talk to a large language model, give it a little bit of (obviously privacy-shielded) information about the conversation that you want to have, and have it work with you as a coach to set up the way that you might need to have the conversation. You then, as the manager, still need to be the one to engage with your colleague, but you are probably able to have a more successful conversation, relying at least in part on the coaching that you received from the AI, than you might have had if you were just working by yourself. On the other side of the equation is a skill like coding, which we're already starting to see generative AI models increasingly able to do. A software developer who might be very skilled in coding is probably still going to have to spend some time doing quality assurance, for instance, but they may not need to spend as much of their own time coding. And so what that means is that jobs will shift over time so that, for example, if you're a software developer and you're now spending less of your time coding, more of your time and creative energy is freed up to conduct needs assessments, for example, to talk to your colleagues and your clients about the kinds of software solutions that you're helping build for them, and to deliver higher value as a part of those interactions through the use of AI.

Speaker 3:

Adaptability and the skill of learning how to learn are becoming increasingly important. Those were also some of the same skills that popped out in the survey that we've been talking about. When we ask workers directly, that's what they say, and it's not huge numbers yet, maybe one in five people still, but they're saying: I'm increasingly having to demonstrate these new kinds of skills because AI is working with me in new and exciting ways. And we've also seen, and others have as well, that the more traditional skills that might've been valued in the past, the kinds of technical skills like a specific coding language, for example, increasingly have a shortened half-life. So maybe that skill is relevant to you for six months or a year, but then something has changed and you need to develop a new skill.

Speaker 3:

So in our own research we're seeing really rapid turnover of these kinds of technical or digital skills, but the enduring power and potential of the human skills, and highly complex cognitive skills like problem solving, for instance, will increasingly come to the fore. And so, I think, anytime we think about the critical skills around AI: yes, it's increasingly important for all of us to become users of, and creators of, and builders with AI, but these additional baskets of skills are also going to be even more essential, because they will respond to the ways in which jobs continue to adapt.

Speaker 2:

Yeah, I think that's great, and I think there are a lot of industry examples of where we've seen quicker pushes or faster adoption of AI. One area that I think would be great for us to double-click into is AI in education. You talked about using AI to learn, right, and I think one of the stats coming out of your study was that close to 60% of learners are using AI weekly in education. So I think it's safe to say AI is transforming the learning experience quite a bit. But, that said, in your research, Alex, what are some of the positives and negatives of this integration of AI into the educational space and how people learn?

Speaker 3:

Yeah, for sure. And I'll start by saying that, at least in my view, the educational space is one of the most complex in terms of AI adoption, just because of all of the different ways in which this shows up. We've talked about the questions around whether AI has negative implications for academic integrity, for instance. We've explored the idea that it can potentially be a pedagogical tool, and we can talk more about that, with teachers using it day in and day out to support students in the classroom. We see use cases around AI for student supports, helping career counselors, for instance, by doing some of the routine work for them so that they can spend more time using those human skills, engaging and connecting more deeply with the learners that they're supporting. We hear about administrators who are really excited to use AI for some of the same kinds of business use cases that a business would use it for, to streamline processes, for instance. So it is a highly, highly complex space, and, of course, education is navigating this within the confines of the local environment and of policy and all of these different considerations. We saw from our own survey that learners are using it, whether at their own impetus or at the direction of their educational institution, just to learn: to help them understand complex topics, for exam preparation, to help them research and get access to different kinds of resources and supports. They are using it for career guidance. And I think one of the key things that we, and many others working in this space, are seeing is that the how of that really matters. It is very tempting for any of us to just ask the AI the question and have it give us the answer, but increasingly you're starting to see not just the platforms themselves but new modalities and tools built on top of generative AI platforms integrating more Socratic models, where what the platform is doing is asking you questions and drawing you out, helping you really drill into areas where your learning needs to be shored up, for instance, which is really exciting. But we're also hearing that students are looking for that as well. I've talked to some college students who say: I really hate it when it just gives me the answer; I want it to actually help me learn more deeply. And so we think the considerations there, in terms of how these platforms are designed as they're interacting with students, as well as the kind of training and literacy support that students have, as we've been talking about, so that they know those modes are available to them, that you can actually set it up so that it asks you questions and you can learn more deeply, really underscore the importance of making sure that everybody has the supports they need to use these platforms.

Speaker 3:

The other thing that was really striking to me in the survey results was that we asked a question about how the use of AI in the classroom, or in an educational context, could impact either student-teacher relationships or students' relationships with their peers, and what we found was, I think, positive-leaning, but a little bit of a mixed bag in ways that are really interesting. So we asked: how much time are you spending with your teacher? How effective is the communication that you have with your teacher? Is AI making that work more effectively or less effectively? We saw almost equal numbers say yes or no, about 15 or 16% on each side. We also asked: are you spending more time with your peers, or less, as a result of AI interaction?

Speaker 3:

Same numbers on both sides. About one in five people said it's allowing me to spend more time with my peers; about one in five said it was less. And so I think that speaks to some of the challenges around how AI shows up in the classroom, and how the ways in which it's being used can either contribute to pedagogical outcomes, as well as student support outcomes, or hold them back. And, you know, for us, on some level, anytime you're using new technology, it's always exciting to think that the technology is the whole solution, but inevitably the real answer is the combination of technology with the fundamentals and the things that will always be true. So there is no substitute for good pedagogy here, for ways of interacting with students that allow them to engage personally, to be drawn out, to develop human connections with their peers and their teachers. But the fact that we're seeing both sides of that equation show up in this early data was really striking to us.

Speaker 4:

Yeah, well, there's a lot of different directions we could take this.

Speaker 4:

I think one of the more interesting angles to this, perhaps, is the idea that, because this has happened so quickly, different educational institutions have different policies. They could be very different policies in terms of what is considered cheating or not cheating, or, beyond that, what the appropriate rules of engagement are for utilizing AI in an educational setting. I wonder if you could just speak a little bit about that. And do you see a future where there needs to be some sort of, I don't know if it's a government body or a nonprofit, some organization that comes up with a set of rules of engagement that we can apply across educational institutions? Because it seems like this could be an obvious source of inequity as well. If you just chose, by chance, a school that has a very strict policy with regard to AI, you may have a lot less access to AI and emerge from college with a lot less of an understanding of it versus some other institution.

Speaker 3:

Yeah, absolutely. And on that last point, I do think that this is an important question for parents and for students to ask. Not all of us have the same kinds of choice in selecting educational institutions, but even as we're understanding the places we're headed, we can ask these kinds of questions: How does this school use AI? What are its policies? Is it a universal policy across the school, or is it different from classroom to classroom? Which, you know, we've still seen over the last several months, and hopefully that's changing, but sometimes it really just depends on which subject you're in at any given moment.

Speaker 3:

But there are, I think, really important and exciting efforts underway to do exactly what you just described. One is the EdSAFE AI Alliance, which has developed something that they've called the SAFE Benchmarks Framework, which creates a roadmap and a set of issues around what it's going to take to make sure that the AI ecosystem is really thoughtful and built in a way that is equitable for students as well as others.

Speaker 3:

And so there are four components of that: Safety, which is essentially focused on data privacy and managing risks to cybersecurity. Accountability, which is making sure that there are standards in place, that all parties are clear on who holds accountability for the benchmarks that will be used to evaluate the degree of success of the solutions, and that everything abides by policies and regulations where they exist. Fairness and transparency, which is really about understanding how these solutions are made available to everybody and ensuring that there are guidelines to ensure they're of quality. And efficacy, which is making sure that these solutions result in student outcomes. So these kinds of high-level standards and benchmarks are already well underway across the space, led by some extraordinary leaders in this ecosystem. Those are some that I would pay close attention to, and there are increasingly more becoming available at the state and local level as well.

Speaker 2:

Yeah, that's great. Well, one last question, I suppose, because I think we could keep going through this research for hours, Alex. What, if anything, surprised you most throughout the course of this research, and has it influenced your own thinking about AI's role in the future of work?

Speaker 3:

Yeah, I think it really is just how extensive people's curiosity about this topic is, and the nuanced views that they hold about it. When we were really digging into this work from the very beginning, it felt as though the conversation on AI was very binary: either it's going to save us all or it's going to be an extinction-level event. And there's still some of that popping up. But I think when you talk to people, as we did through this survey and have done through focus groups and elsewhere, people have a nuanced view. They say: well, on the one hand, this can help me be successful in school and work, but I worry a little bit about its impact on jobs. And, this is anecdotal, but we've heard from young people who are concerned about the climate impacts of AI as energy use grows and grows. Hopefully that will normalize over time.

Speaker 3:

But it is a real question that folks are asking, even to the point where they're starting to think through, in some cases, which model do I use? Is there a model that's more or less energy-draining, for instance, and how can I make that decision if I'm trying to be attentive to climate impacts? People are so thoughtful about this. It extends as well to how their data is showing up and being used, especially amongst populations where there's a higher level of awareness that AI training data may not necessarily fully represent them or their experiences. So people are increasingly savvy about these tools and really thinking in nuanced ways about how they show up for themselves in their lives, and I think that is an extraordinarily powerful thing, because it creates a really strong foundation that all of us can build on when it comes to asking the question that really is, and will be, at the heart of JFF's work on this moving forward, which is: how does AI make us all better off?

Speaker 3:

In our view, the conversation about AI can so often be pulled in the direction of which model is bigger or faster or better, how many jobs are being created or lost, is there some kind of, broadly speaking, economic impact? And we always want to bring it back to this simple question: are we all better off as a result of AI? Are we able to access quality jobs? Are we able to pursue opportunities for entrepreneurship, which was another thing that really stood out to us from this survey? Are we able to sustain our livelihoods? If we're able to do that in a fair way through the use of AI, then I think we can count it as a success. And if we're not, we have to ask some pretty big questions as a society. So I think the ground is really ripe for us to ask those kinds of questions and to be met with a community of learners and workers that wants to dive in and engage.

Speaker 2:

Yeah, I think that's great, and I hope our audience enters their path forward with AI with an open mind. But I do like that question: hey, are we better off? Right? It's not just AI for AI's sake, automating for automating's sake; there's a lot more nuance to it. It was super helpful having you walk through the report and your perspective too, Alex. Well, the last question is the easiest: where can people learn a little bit more about you online?

Speaker 3:

We post our research and publications on our website, and we try to share about monthly through LinkedIn, both what we're seeing and hearing out in the space as well as opportunities that are popping up to collaborate with us. We are very, very eager both to hear about how all of you are thinking about these questions and to work together where the opportunity presents itself. So please do look us up and reach out. We would love to connect.

Speaker 2:

Yeah, there are a lot of great events and sponsored programs through Jobs for the Future in general too, so loved having you on today, Alex. Really appreciate you joining us.

Speaker 3:

Thank you both so much. It's great to be with you.

Speaker 2:

All right, thanks for tuning in. As always, head on over to changestate.io or shoot us a note on all the social media. We'd love to hear from you, and we'll check you guys next week.