The Changing State of Talent Acquisition
The Changing State of Talent Acquisition cuts through the noise in the crowded world of recruitment marketing, employer branding, workforce intelligence, and AI.
Hosted by Graham Thornton, President of Consulting & Growth at Talivity, this podcast brings you unfiltered conversations with industry founders, practitioners, and the occasional contrarian who's actually doing the work – not just selling you on it.
We're not here to hype the next big thing. We're here to help you separate signal from noise, understand what's actually working (and what's just well-marketed), and make smarter, data-backed decisions about your talent strategy.
You'll hear from TA leaders navigating real hiring challenges, founders building solutions worth paying attention to, and experts who see around corners before the rest of us catch up.
Whether you're navigating the AI arms race, trying to figure out your tech stack, or just trying to hire better people faster – this is the podcast for people who care more about ROI than buzzwords.
#67: Building Human-Centric AI in HR Tech: From Fear to Adoption
In this episode, Kyle Lagunas of Aptitude Research discusses the crucial distinction between "human-in-the-loop" and truly "human-centric" AI approaches in HR. Drawing from his extensive 15-year career studying innovation cycles in HR tech, Kyle explains why adoption rates for AI in HR are stalling despite executive support, and offers practical strategies for building AI literacy within organizations.
Topics Covered:
- The difference between human-in-the-loop vs. human-centric AI design
- Why HR departments struggle with AI adoption and tech literacy
- Practical ways to increase AI literacy within HR teams
- Evaluating AI use cases across impact, risk, and complexity
- Success stories and low-risk starting points for HR AI implementation
- Moving beyond the "bias boogeyman" in AI evaluation
Discover how to evaluate AI implementations and learn why many HR departments are starting their AI journey with conversational AI. This episode provides a nuanced framework for HR professionals looking to move beyond fear-based decision making and implement AI solutions that genuinely augment human capabilities.
Welcome to the Changing State of Talent Acquisition, where your hosts, Graham Thornton and Martin Credd, share their unfiltered takes on what's happening in the world of talent acquisition today. Each week brings new guests who share their stories on the tools, trends and technologies currently impacting the changing state of talent acquisition. Have feedback or want to join the show? Head on over to changestate.io. And now on to this week's episode.
Speaker 2:All right, and we're back with another episode of the Changing State of Talent Acquisition podcast. Super excited for our next guest, Kyle Lagunas, currently at Aptitude Research, with an incredible background that I'll let Kyle share a little more about. Kyle, we'd love to hear more about your career journey at Aptitude and, more recently, your role as the founder of the human-centric AI company.
Speaker 3:Yeah, I've got a really weird job. I absolutely love it. I think it's super cool, but I had no idea it existed, and I wouldn't believe it if I didn't have this day job. So my job is to study innovation cycles in the world of HR and tech, and here at Aptitude we focus exclusively on trends that span human capital management, from talent acquisition to talent management, learning and development, you name it. And I've been in the biz for almost 15 years, so since I was three years old.
Speaker 3:But I started as a blogger in the space, as a Gen Y that was struggling to find gainful employment. I had a lot of opinions and issues that I wanted to suss out with HR, but as I got into it, I started to realize that some of those issues and frustrations that I had were rooted in real problems and not just a lack of care, and I got hooked. So now my job is to try and help people navigate some of the crazy stuff that's going on, some of the wild innovations that are coming through, and, yeah, just to kind of find a path forward. It's a pretty cool job. I absolutely love it.
Speaker 4:Well, welcome, Kyle, we're thrilled to have you. I guess one question I have just to start, because you were saying your job, one you perhaps didn't know existed before this, is to study innovation cycles. On a recent episode of ours we were talking about AI, specifically as it relates to the Gartner hype cycle, which I imagine you're familiar with, and we were struggling at that time to understand whether the Gartner hype cycle had much to offer in terms of insight about the moment we're currently seeing with AI. I know that's potentially a big question, but I'm just wondering if you could comment, if you have any thoughts about that.
Speaker 3:Yeah, so I haven't actually looked at the most recent Gartner hype cycle, but I am familiar with the model. I've looked at it before, and, look, there is, I think, an abundance of hype around the opportunity for impact that AI has in the world of work, and especially within HR functions and processes. But all of that hype is just hype if we don't know how to make the most of these capabilities. And I really do see more pressure to pursue opportunities than I see real confidence and intentionality. So I'm not the hype police, but I'm personally hype-police adjacent.
Speaker 2:Yeah, well, we'll pick on Gartner one last time, and then I want to dive into some of the pieces. They also just put out a study asking, hey, what percentage of CEOs are comfortable talking to or going to HR leaders for strategic guidance as it relates to AI? And the answer was less than 1%, and that is wildly scary, right? A, if you're an HR leader, but also B, if you're an HR leader that goes to some of these orgs for strategic guidance in AI and innovation. And so I think there's a huge opportunity for HR, and I think that's part of the reason why we're just super excited to have you on the episode too, Kyle.
Speaker 3:There is an opportunity, but, I mean, dude, that is a gut punch, isn't it? Less than 1%? But, candidly, why would people come to HR to understand what's going on with AI? I mean, look, I've been doing this for 15 years, which I think is a long time.
Speaker 3:I've seen a lot of tech trends come and go and, through it all, the one constant is that HR as an organization, and I'm not saying no one in HR knows anything about tech, but organizationally, functionally, most people got a pass for not being very tech-savvy or tech-literate, and that is coming back to bite us now. There are a lot of questions about how these capabilities work, what they can and can't do, what our company's approach is to the use of AI. And, yeah, I think that HR is starting from much further behind than maybe our colleagues in sales and marketing and operations. I think we still have a chance, we still have a good opportunity to catch up, but it's going to be a sprint to that finish line.
Speaker 4:Well, graham will tell you. Don't get me started on talking about HR folks and their tech savviness or data centricity. I think all of our audience has heard enough about that, but I'm glad to see that we've got a like-minded guest on the show today, so that's cool. I wonder if you know what some of you said. Kyle really resonates with me and perhaps with a lot of our audience, which is I've only been in this space for what? Five, six years now, graham and I feel like we've been talking about AI since I started, since I first entered the space.
Speaker 4:It's something that has always seemed to have so much promise, and it still does, but it just seems like the moment is pregnant with possibility and nothing ever arrives, or at least that's my sense, other than what might be described as better and better automation. And I think one of the challenges I have with it is just not having the language, not being able to get more nuance into a conversation about it. Which brings us to the recent report you did, which I thought was amazing, called Rethinking AI in HR: Balancing Innovation, Risk and Human-Centric Solutions. Kind of a mouthful, but what I really wanted to start with here is, I think you do meaningfully move the conversation forward by making a distinction between what you call human-in-the-loop AI approaches and truly human-centric AI approaches. Those are lots of big words. I'm wondering if you could simplify it. Let's home in on this distinction you're making between human-in-the-loop and truly human-centric AI. What is the difference you see there, and how does this inform the current moment?
Speaker 3:Honestly, I really appreciate you calling me out on it, because my intention is to create more clarity and to present some more practical, better practices for approaching the way that we think about, utilize and design AI solutions. All right, so one of the things I think people are worried about with AI, and have been for probably the last five or six years, is AI running off the rails, what happens when AI goes rogue. And we've seen a lot of reassurance being offered around keeping humans in the loop. So, you know, in New York City there's this law that governs what they're defining as automated employment decision-making technologies. Something that is moving somebody forward automatically in an interview process could be considered an automated decision, right? We are saying, no, no, no, no, let's make sure we keep humans in the loop at all of these, air quotes here, moments that matter. And, yeah, we definitely should have that. But what I think I'm seeing is that we are really only crippling automation when we're putting all of these blocks and checks along the way. It's like running fast and screeching to a halt, running fast and screeching to a halt. We find that implementing some of these easy buttons, like offering a relevance matching score on a candidate's application, means we see somebody who's an A and go, let me move them forward, I'm going to present them to the hiring manager. I'm not going to actually vet that. That's technically human in the loop: I see that the AI has proposed that they're an A match, and so I just click the button to pass them forward.
Speaker 3:Human-centric AI solutions are actually designing the implementation of AI specifically to augment the human work that does need to remain human, and automating the rest of the work that can be repeatable, that we can trust AI to do. And I'll give you an example. Let's stay with matching and scoring. It's a really popular use case for an efficiency gain and moving faster in the recruiting process. It's also a really great solution for being more data-driven in recruiting processes. But what we've observed is that a lot of the time, that human in the loop is not really doing any quality control. You give somebody an easy button, especially when they're inundated or overburdened, and they're just going to click that easy button, right?
Speaker 3:With a human-centric solution, I'm actually going to look and see: our conversion rate for A and B candidates in this job type, the submission-to-acceptance ratio, is 90%. So I'm going to design this in a way that the AI that has scored somebody as an A or a B candidate is going to invite them to a screen, move them forward to screen, because the margin of error here is pretty good. And I'm still going to let humans go in and move anybody else forward that they want to, but I'm going to unblock that top of the funnel and just keep people moving.
Speaker 3:The reason why that's more human-centric is that it is enhancing and augmenting that human worker, and it is automating the repeatable tasks that already occur. It's also, I think, giving us a dual approach. We are immediately acting on the information that we have, we're making data-driven decisions, and we are also still maintaining human interaction in that process. And so recruiters, who love to find those diamonds in the rough, can still do that.
Speaker 3:We can mark a candidate that has been moved forward by the AI versus a candidate that's been moved forward by a human, and we're going to make sure that between the two of us we've got everything covered. I think it just helps us move faster, but it doesn't put everything on the recruiter. They don't have to go through and approve every single thing. Instead, they can focus on those diamonds in the rough, those differentiated candidates, the underdogs, et cetera. So, yeah, it's a long way of saying: human in the loop can be quality control, QA, I guess, limited QA. Human-centric is actually doing that deep process mining and journey mapping to find where is the best way to use AI here to achieve a greater outcome, not just to improve efficiency, but to have a better outcome without displacing the human worker.
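The auto-advance design Kyle describes, score with AI, automatically advance only the grades whose historical conversion makes the margin of error acceptable, and leave everything else to the recruiter, can be sketched in a few lines. Every name and number below (the grades, the 90% ratio, the 80% cutoff) is illustrative, not taken from any real product.

```python
# Hypothetical sketch of a human-centric auto-advance rule: candidates the AI
# grades A or B are moved straight to a screen when historical conversion for
# that grade is high enough; everyone else waits for human review.

HISTORICAL_CONVERSION = {"A": 0.90, "B": 0.85, "C": 0.40}  # submission-to-acceptance ratios
AUTO_ADVANCE_THRESHOLD = 0.80  # only automate where the margin of error is small

def route_candidate(grade: str) -> dict:
    """Return a routing decision plus a flag showing who moved the candidate."""
    conversion = HISTORICAL_CONVERSION.get(grade, 0.0)
    if conversion >= AUTO_ADVANCE_THRESHOLD:
        # Trusted, repeatable work: the AI unblocks the top of the funnel.
        return {"action": "invite_to_screen", "moved_by": "ai"}
    # Everything else stays with the recruiter, who can still surface
    # diamonds in the rough and move them forward manually.
    return {"action": "human_review", "moved_by": "human"}

decision = route_candidate("A")
```

The `moved_by` flag mirrors Kyle's point about marking which candidates the AI advanced versus which a human did, so the two can cover each other.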
Speaker 4:Interesting, interesting.
Speaker 4:Well, I appreciate having a more nuanced take on this, and some help with language here, which I think we sorely lack as an industry.
Speaker 4:So what I'm hearing from you is almost a paradox, which is often what you find when you have powerful ideas, I think. This idea of human in the loop sounds incredibly human-centric, but, because maybe we don't trust AI, or it's just a black box and everyone's a little bit scared, maybe there's some fear of having our jobs taken, all these things conspire to create these human-in-the-loop approaches where humans are really functioning as needless gatekeepers. The AI can sprint ahead, but then it has to stop every 10 yards to check in with the human and make sure we're still feeling comfortable with it. And I think maybe, if I'm understanding you correctly, what you're saying is, and obviously it's case by case, we're talking about screening here, but in the case of screening, there's a strong case to be made that the better, more human-centric design is actually to remove the human gatekeeper at a lot of those checkpoints. Is that a fair summary?
Speaker 3:It is, but you know what, as you're talking, a metaphor that comes to mind is, in this example, we are not keeping humans in the loop, we are putting humans into an assembly line, right? The machine's going to move the product to us, and then we're going to put some bolts in it, and we're going to feel good that that was a human-built outcome. Instead, we could be using AI to move faster and smarter and keep our capacity open. It's like, do I want to remain in the assembly line, or do I want to step up and be more strategic in every single rec that I'm managing? The work that I do becomes different, more elevated, because I'm not just clicking approve buttons and not making any real decisions.
Speaker 2:Yeah, I think that's great, and, Kyle, I'd say that's probably a good segue into another piece of the report, and that's, hey, there's a lot of executive support for AI in HR. It's probably never been stronger, but adoption rates are stalling. Much like the barriers or checkpoints we're running into with using AI, I think we're running into similar hurdles when it comes to adopting it. So what do you think is the disconnect between HR leaders and executive support, and why are we seeing adoption rates for AI stall?
Speaker 3:Yeah, it's a great question, Graham. I think there are a couple of different layers, at least that I'm observing. The first is the most fundamental, which is that there is a lack of trust in AI amongst most of the rank and file, and even leadership, in a lot of HR organizations. And the lack of trust comes from concern, fear of what can go wrong: we mostly hear about what goes wrong. I mean, you still hear about the Amazon example. We'll stay with matching and scoring for a second, because it's just such a great use case. They built a machine learning algorithm that was going to help them funnel through applicants, right, and we still hear about it only moving forward men. And that was, what, nine, twelve years ago? It was a long time ago. And so we are really stuck on what we're afraid of, and that's okay.
Speaker 3:One of the core principles of HR is to manage and mitigate risk in our workforce, to avoid risk and create more equity in our employment practices. I totally get that. But a lot of that fear and concern is also stemming from a lack of literacy, a lack of functional understanding of how these capabilities work and how they don't, what these things do and what they won't, and how we can actually build meaningful governance, even at the use case level, that will make sure the guardrails are high enough that this thing can't go off them, right, in very practical ways. By the way, this is very drag-and-drop rule setting; we are not talking about having to be an AI expert. You can just identify some of the rules and boundaries you want to set on these things. But we don't know that.
Speaker 3:I don't think that we quite understand that, and so instead, we do what we usually do when we're faced with something risky: we just avoid it. And it's a hard habit to kick, because it's worked more or less so far, but the times they are a-changin', kiddos. We're worried about the scale of impact of AI going rogue on us, but the other scale of impact that I think we need to be worried about is us falling behind. That's the risk that's looming over us, because AI is coming whether we are guiding that train or not. And what will happen is that AI is going to happen to us, and it is going to be implemented in a way that is prioritizing efficiency and cost cutting. It's not going to end up being particularly human-centric. It's going to be business-centric. And that is not doom and gloom. I think that is a very real scenario that is in front of us.
Speaker 2:Yeah, no, I completely agree. And on this thought of AI illiteracy, we said earlier that, accurate or not, HR teams are often seen as having less of a technical background, as being a little less tech-savvy. I think that probably amplifies the gap in terms of the skill set needed to really be comfortable with AI tools. How can organizations invest in building AI literacy, not just within HR teams but across broader teams? And then I suppose that's a pretty logical jumping-off point to talk a little more about this human-centric AI console that you're involved with too, Kyle.
Speaker 3:Yeah, yeah. So I think there are a couple of things and, by the way, Graham, none of these are massive lifts. I'm trying to be as practical in my recommendations as I can. The first is, whether you are a team manager or a functional manager, in your standing meetings with your direct reports, create space to share AI best practices, to share a use case that you found, whether it is directly in HR or something you just like. Today, this morning, I was having my coffee and planning my 40th birthday, a literally detailed itinerary, with ChatGPT. I call her Chatty G, and Chatty G was there for me, honey. It was extremely helpful. And I guarantee you that there are so many people in the HR organization who do not know just how much these tools can do for you, because they have been told they probably should avoid them because they're too risky. But that's head-in-the-sand kind of stuff. So, first thing is create space in your standing meetings for sharing just what we're doing with these things and how they can work, and, I think, getting our hands on them, playing with them, and encouraging each other to do it in safe and compliant ways. There's a lot of opportunity there.
Speaker 3:The second is your vendor business reviews, which usually run on feel-good KPIs and data that is, like, soft ROI. We're still going to have our touch point on the business results from our engagement, but I'm actually going to start creating space to learn, maybe once a quarter, depending on how frequently you're running your business reviews with your vendors. Say, Marty, Graham, ahead of our QBR in June: I want you to know that a huge priority for me with my organization is to drive AI literacy, and I would really like to know how your organization is approaching AI. I want to know if you have ethical standards and commitments that you're making that are guiding some of your product design. I want to know how some of your most innovative and fearless customers are leveraging AI in partnership with you. Bring me more than just our vanity metrics next week, and I'm going to create space for that. We're going to come with questions. It's going to be a great way for us to reinforce our partnership. Those are two standing meetings that happen for any HR organization: we have QBRs with our vendors and we have team meetings. Let's start creating space in our ways of working to build up confidence and literacy in AI.
Speaker 3:And the third one, which is a little more unique: get out of the office. Go to a conference once a year and really try to have conversations with people. Attend sessions, yes, but then follow up with the people who were in that session. Be like, hey, can I pick your brain for a little bit? I will buy you coffee. I just want to talk. What you said wasn't completely relevant for me because we're in a different industry, but I liked the way that you were able to approach it, or whatever it is. So I think all three of those are really practical. I'm not having to fight for a million-dollar L&D overhaul or get some really crazy consultant to come in and do all this training. Maybe you end up getting there because you get really serious, but I feel like all three of those are just super approachable and practical.
Speaker 2:So, to oversimplify it a little, I almost wonder if what you're describing, too, is that a lot of companies maybe just have a company culture problem, if people are scared to even think about using AI because they're going to get their hands slapped, or they're not being encouraged to learn, or they're not being encouraged to go to conferences and see what other people are doing. Are we kind of painting with a broad brush and saying, hey, maybe HR needs to rethink the way they go to market, or the way they market themselves, to be a more, arguably, inclusive department? And hey, is HR just sitting too much in their own little house without walking the neighborhood to see what other people are doing? That's probably a bad analogy, but that's where I'll leave that one.
Speaker 3:No, I think it's fair. I mean, there are cultural barriers, operational barriers that we do need to acknowledge. It's not just a lack of trying. There are other significant barriers that we are going to encounter. But I guess that's where I'm thinking about trying to propose ways that it doesn't matter if you sit at the top of the food chain or you're down in the rank and file.
Speaker 3:I still think that there are ways for us to improve. Sometimes we're going to have to try and move mountains, but other times it is just stacking up a couple rocks and calling it art. There is some of that really, really big stuff that the HR executives who are tuning in need to be prepared for, and I would love to sit down with them and talk about what I'm seeing working and what's not. But honestly, Graham, I think this is something that the entire HR and talent organization needs to be leaning into, and so I don't think we can just wage the battle at the top. I think we need to be doing this more ground-level stuff.
Speaker 2:Yeah, yeah. Well, I think back to when COVID hit five years ago and you couldn't meet with people in person, you couldn't do anything. We started taking a lot more virtual coffees, and I would argue that some of the more interesting conversations we had were really with people that were working on things very much outside of HR, and then you start to think of applications to your own business, right, from what people are talking about at that table. And so I think there's so much value in spending time with people outside of HR, which would probably spur people in TA, people in HR, to think about broader applications of AI, I suppose. Yeah, I think it'd be great to talk a little bit more about that.
Speaker 2:Let's talk through some success stories from your report. Maybe give us one good example, kyle. We talked about organizations getting AI implementation wrong. We can tack Amazon to the wall for that one again, but what's a good success story where organizations got AI implementation right?
Speaker 3:I mean, there are a lot of examples. There really are. Not all of them are very public, and that's the hard part, but there are quite a few. One thing that comes to mind is that we need to be evaluating AI differently than we evaluate other types of tech. If you go out to market to buy an ATS, you're going to have an RFP with 60 or 80, I don't know how many, lines of features and functionalities and capabilities that the vendor has or doesn't have. But if you take that approach with AI and say, hey, can you support my 16 different interview types, they're going to say yeah, because in their demo environment they can build that up and make it look real nice and shiny. What they can't say for certain is what the data governance and systems governance of your IT organization looks like. I don't know if your IT organization is going to give me the level of access that I need to Microsoft 365 or Google Suite to make this work, right? I don't know if it's going to give me the level of permissions that gives me access to your calendar, so I can find out if you're actually busy or if you're just holding that slot for an interview. And so we need to be evaluating these capabilities much differently.
Speaker 3:And in the report, we actually identified three different pillars of a more effective evaluation: looking at AI use cases across impact, risk and complexity. What we find is that a lot of us perceive that any AI use case is high risk. If you look at the EU AI Act, they're saying that any AI use case in HR is high risk. And, guys, there are literally millions of use cases, and they are not all high risk. So I think it's being able to build a better framework for evaluating the compliance risks, ethical risks and operational risks that a use case might present for you. It is looking at the technical complexity, maybe the change management complexity, the solution complexity of a use case. And then also, for impact: what is the potential ROI? Am I impacting efficiency? Am I impacting effectiveness? Am I impacting experience? By looking at these three different dimensions of AI use cases, yeah, you still need to ask vendors if they can support your 16 interview types, but by going this level deeper, you're actually going to get a little bit closer to what makes these capabilities, these solutions, work within your organization. And so, to answer in a roundabout way with an example of something going well:
Speaker 3:There's a reason why so many companies are starting their HR AI journey with conversational AI for employee experience, to help with employee inquiries around HR services, or conversational AI as a candidate concierge, to support candidates with inquiries. It's going to create a high touch. It's going to be a pretty good resource. It's more dynamic than your FAQ. It's going to connect people to some actions and some resources, and it's not going to bog down your HR organization.
Speaker 3:We identify that as a pretty low-risk use case because it's not going to tell somebody to worship Satan. Although, I mean, it is okay if you worship Satan, that is legitimate, you know. I'm just saying it's not going to tell you to commit crimes, but it is going to point you to the place in the employee handbook where it talks about the company's approach to freedom of religion, or whatever it is, and it is going to tell you how to file for leave. It's not going to point you to somebody else's handbook and share somebody else's policies. It just doesn't work that way. And at the same time, the impact of that use case is moderate to significant, because it's saving a lot of the time spent answering the same questions again and again, it is reducing the time to resolution for employee issues or candidate questions, and it's enriching the experience of those stakeholders. So that is a use case that is seeing a lot of early adoption, because risk is low, impact is high and complexity is pretty straightforward.
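The three-pillar evaluation Kyle describes, scoring each use case on impact, risk and complexity and starting where risk is low and impact is high, can be sketched as a simple filter. The use case names and scores below are invented for illustration; they are not figures from the Aptitude report.

```python
# Illustrative sketch of a three-pillar AI use case evaluation. Each use case
# gets a 1-5 score per pillar (higher impact is better; higher risk and
# complexity are worse); we then surface the "start here" quadrant.

USE_CASES = {
    "conversational_ai_faq": {"impact": 4, "risk": 1, "complexity": 2},
    "interview_scheduling":  {"impact": 4, "risk": 1, "complexity": 2},
    "matching_and_scoring":  {"impact": 5, "risk": 3, "complexity": 4},
    "automated_rejection":   {"impact": 3, "risk": 5, "complexity": 3},
}

def good_starting_points(use_cases, max_risk=2, min_impact=3, max_complexity=3):
    """Filter for low-risk, decent-impact, manageable-complexity use cases."""
    return sorted(
        name
        for name, scores in use_cases.items()
        if scores["risk"] <= max_risk
        and scores["impact"] >= min_impact
        and scores["complexity"] <= max_complexity
    )

starters = good_starting_points(USE_CASES)
```

With these made-up scores, the filter surfaces the conversational AI and interview-scheduling use cases, matching the episode's observation about where early adoption is happening.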
Speaker 4:Yeah, that makes a lot of sense, and I think that's a great example that helps bring this to life and makes it a little more real. The pillars, I think, are also very useful. I mean, we've spent a lot of our conversation here talking about risk. I think that's because that's where most HR people are in this process, but it's helpful to have other dimensions to consider, impact and complexity being the other two. We can't just look at this from a risk point of view. We certainly need to look at it from a risk point of view, but if that's all we do, it's probably a recipe for staying stuck, like we've been.
Speaker 3:Or it's also just assuming risk. That's the other part there. I talk in the report about getting beyond the bias boogeyman, because we're just like, oh, we've got to be worried about AI introducing bias here. It's like, okay, well, where in interview scheduling use cases are there even opportunities for bias? Because the way this works is the bot identifies when a candidate is available for an interview, then it identifies when the interviewer is available, and then it schedules an interview, and none of that has to do with any protected class. Do you know what I mean? But people really do get questions about bias in any AI use case. So, sorry to jump on my high horse there for a minute, but I think it's about actually evaluating risk and not just assuming it.
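Kyle's interview-scheduling point can be made concrete with a toy sketch: the bot only ever sees time slots, so no protected-class attribute enters the decision at all. The function and slot data here are purely illustrative, not from any real scheduling product.

```python
# Toy sketch of interview-scheduling logic: intersect the candidate's
# availability with the interviewer's and book the earliest common slot.
# The only inputs are time slots; there is nowhere for protected-class
# data to influence the outcome, which is the low-bias-risk argument.

def first_common_slot(candidate_slots, interviewer_slots):
    """Return the earliest slot both parties are free, or None."""
    common = sorted(set(candidate_slots) & set(interviewer_slots))
    return common[0] if common else None

slot = first_common_slot(
    ["Tue 10:00", "Tue 14:00", "Wed 09:00"],
    ["Tue 14:00", "Wed 09:00"],
)
```

Evaluating the use case this way, by looking at what the system actually consumes and produces, is how you assess bias risk rather than assume it.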
Speaker 2:Yeah, Kyle, I know we're short on time, but I think you bring up a great point there, and I would encourage anyone to look at the Aptitude Research report, because you do a great job of calling out things like, to your point, interview scheduling: low risk, high impact, a great place for organizations to start. And I think that's the impetus of this conversation. It's, boy, we're really stuck in a slow adoption of AI, there's a lot more that HR leaders can do, and how can we make it a little bit easier? So really encourage everyone to take a look at the report. I guess the last question, Kyle, is our easiest, and again, I wish we had a little more time, but where can people find more about you online?
Speaker 3:I mean, this is so boring, but I check my LinkedIn every single day. LinkedIn is a much better, safer space. Unless you want to look at my liberal memes and my cat memes, I wouldn't recommend following me on Instagram. It's just, it's a very special place.
Speaker 2:That's awesome. Well, we'll link the full Aptitude report and the details. A lot of great content coming from Aptitude Research these days too, and obviously we'll share your LinkedIn, Kyle. We might not link the Instagram, we'll see. Maybe we do have a lot of cat lovers, who knows. But really appreciate you joining for an episode. It's been fantastic.
Speaker 3:Thank you so much, guys. Thanks for having me.
Speaker 2:All right, thanks for tuning in. As always, head on over to changestate.io or shoot us a note on all the social media. We'd love to hear from you, and we'll catch you guys next week.