Spirits in the empirical world: The plight of human job seekers in the age of AI

Featuring: Hilke Schellmann, author of the recently released book “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” Assistant Professor of Journalism at New York University, and investigative journalist extraordinaire.

“As a journalist, I’ve seen the many facets of AI in hiring – it’s not just about the technology, but about the people it affects.” – Hilke Schellmann

In this revealing episode of Science 4-Hire, Dr. Charles Handler welcomes Hilke Schellmann, author of “The Algorithm” and Assistant Professor of Journalism at New York University, for a captivating conversation about AI in hiring processes and its toll on the rights and psyche of job seekers.

As an investigative journalist, Hilke has been on a six-year-long quest to understand how AI hiring tools are being used, their credibility, and their impact on job seekers. This episode delves into the experiences and insights that culminated in her recent book, The Algorithm, an excellent exposé that sheds light on the complexities and ethical challenges of AI in hiring.

One of the great things about this episode is that Hilke brings a fresh, inquisitive outsider’s perspective to the table. In the process of writing her book, she interviewed hundreds of people on all sides of the AI hiring game and took a great many AI-based assessments herself.

On the show we discuss the ethical dimensions of technology and delve into the nuanced challenges and dilemmas presented by AI in hiring practices, along with a range of related topics: the pitfalls of AI tools, the industry behind their creation, the impact of these technologies on protected classes (especially people with disabilities), and the need for greater transparency and ethical consideration in the AI hiring tool industry.

Highlights:

Ethical Challenges of AI in Hiring: Schellmann highlights the ethical challenges and potential biases in AI tools used for hiring, emphasizing the need for critical scrutiny of these technologies.

Impact on Diverse Groups: The conversation explores the disproportionate impact of AI hiring tools on different groups, particularly those with disabilities, showcasing the need for inclusive and fair hiring practices.

Advocacy for Transparency and Ethics: Schellmann advocates for greater transparency in AI technologies and ethical considerations in their deployment, urging companies to be more accountable.

Future Prospects and Concerns: The episode discusses the future of AI in hiring, considering both its potential benefits and the risks it poses, especially regarding privacy and discrimination.

Do not miss Hilke’s book, “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now,” which is now available everywhere you can get a book!

 

Hilke can be reached at hilke.schellmann@gmail.com and via LinkedIn.

 

Full episode transcript:  

Speaker 0: Welcome to Science 4-Hire, with your host, Dr. Charles Handler. Science 4-Hire provides thirty minutes of enlightenment on best practices and news from the front lines of the employment testing universe.

Speaker 1: Hello, and welcome to the latest edition of Science 4-Hire. I am your host, Dr. Charles Handler. And today I have a guest with a different background, and I always love it when guests with different backgrounds dive into the kind of stuff that I do and we do, because it’s important stuff, you know.
Hiring is just super important. And if you just look at the EU AI Act, I mean, hiring is listed as a high-risk area, which is really cool. So, like I said, different background: journalist, and aficionado of the rights of humans in hiring. Mhmm. And it is Hilke Schellmann.
Welcome to the show —

Speaker 2: Yeah. Thanks for having me. I’m so thrilled to be on a podcast that I’ve been listening to for years, so I consider myself lucky to be here.

Speaker 1: Oh, well, thank you so much. I consider myself lucky too. And just tell our audience — I always say I like my guests to introduce themselves, because who knows them better than them. So tell us a little bit about your background. You know, you’ve got some interesting —

Speaker 2: You say that, but maybe, like, you know, a personality assessment knows me better than I know myself.

Speaker 1: Don’t start with that nonsense. Don’t start with that. No. No. It’s true, but, you know, it would be interesting if I made all my guests take a personality assessment before the show, and then I said, well, you’re not very extroverted.
What’s with that? Maybe you can work on that. Because there’s always that discussion. Yeah. I mean, there’s always that kind of discussion of, can you change personality?
Or are you stuck with it? I believe you can — maybe not change it, but you can do interventions on yourself. Like, I have a lot of personality traits that I, you know, feel pretty hardwired on, but I work on them. And I don’t let them show up, you know, when the triggers are there. It’s a lot of work.

Speaker 2: Totally. Totally. I had to work at it. I’m actually, like, a very shy person, which is why I had to, like, work on myself and just be, like, you know, I’m gonna go to every reception. I’m gonna shake people’s hands.
I’m gonna introduce myself to strangers. Yeah. You know, I have a much easier time as a journalist doing that than, you know, as a human being. As a journalist, it’s my job. So it’s perfect.
It’s my job — I call up people, no problem. But, weirdly, like, in, like, social functions, I’m, like... so I have to work on that.
But, you know, we can evolve and have strategies to overcome these things. We work it out as humans. Maybe that is, like, the beauty of humanity. But sorry, we’re getting sidetracked.

Speaker 1: That’s okay. That’s fun.

Speaker 2: So, my name is Hilke Schellmann. I’m a reporter, and I’m also a journalism professor at New York University. And what I do is, you know — I’m a curious person, I let my curiosity drive me to find new reporting topics, and I take very deep dives. I’ve done work for Frontline; I do long-form investigative pieces.
I’ve looked at facial recognition. I went to Pakistan. All kinds of things, just trying to find, like, what are the systemic problems that we need to shine a light on.

Speaker 1: No shortage of this. Yes. In our crazy world.

Speaker 2: Journalists are very busy, it turns out. Yeah. And I found this topic actually six years ago, in November two thousand seventeen. I was taking a Lyft ride in Washington, DC, from a convention center to a train station. I talked to the driver and I asked him, how are you doing?
You know, how are you? And the driver was like, I’m having a really weird day. And I’m a journalist, so I’m like, oh yeah? Tell me more, why is it weird? And he was like, oh, I had a job interview with a robot. And I was like, a job interview with a robot?
Tell me more. So he had applied to a baggage handler position. This was in the very early days, so I don’t actually think it was a robot. It was probably, like, a prerecorded phone call. Right?
With, like, prerecorded messages. But I had never heard of that. And I was like, oh, that’s so fascinating. And then, you know, I started looking into it. I went to an AI and fairness conference, and somebody who had just left the EEOC was talking about small-scale
algorithms looking at people’s calendars and emails and figuring out, you know, how long you work — very basic algorithmic applications. And she was worried that folks — you know, mothers, people with disabilities — might get penalized because they have higher rates of absences, and she felt like, oh, this could be discrimination, and we’re not looking at this. And, you know, there were hardly any people in the room. And I started talking to people and I was like, oh, there’s a lot going on in this field. And, to sum it up, I was blown away by what I saw in HR tech — I mean, my head was spinning after taking it all in.

Speaker 1: Oh, it’s difficult. Yeah.

Speaker 2: You know, I saw all this tech moving into the space, and, you know, having an affinity for math and statistics — I know, it’s wild — I was missing it. And I was like, oh, this is so interesting. Let’s talk about how we quantify humans.
And maybe we’ve found, you know, the magic way to do it. Turns out it’s all a lot more complicated than that.

Speaker 1: Well, people are complicated. That’s the thing.

Speaker 2: Yes. Yes. Totally. So that was sort of the beginning of this. For the book, I don’t know, maybe I talked to probably two hundred, three hundred people — some of them I talked to many times — and I talked to job applicants, vendors, HR managers, experts... Yeah.
Anyone under the sun who wanted to talk to me.

Speaker 1: Yeah. So you got a lot of practice talking to strangers. I am a shy extrovert, so I’ll perform. I’ll get in front of people and go bananas. Put me in a room with people I don’t know and have no reason to really talk to, and I’m pretty shy about it, but I’m working on it.
I introduced myself to somebody at my kids’ baseball game the other night, and I came home and told my wife how proud I was of myself. Like, I walked up to him because his kid’s on the team. Hey, how you doing?
Alright, I don’t do that very often, but it was rewarding. It felt good. It felt really good.

Speaker 2: Yeah. And it turns out most people who stand around are actually kind of grateful if somebody approaches them. Right? Like other parents — I have a three-year-old toddler too. Like, there are a lot of parents who stand on the playground.
It’s actually nice to, like, chat and have a moment. But, you know, everyone is kinda like... and I have that feeling too, I’m like, I don’t know. But, you know, it turns out most people are super nice. And if they’re not nice, well, you know, you just move on.

Speaker 1: Well, I know, exactly. And, you know, having only one kid — what you’re talking about is a great way to benchmark whether you’re crazy or not, or whether your kid is doing something strange or not, you know? And then once they get to be, like, ten, like mine, it’s a conversation of, how do you handle video games at your house? You know? Is it a pestilence on your house like it is on mine?
You know? So —

Speaker 2: Wait — well, we’re already talking about, like, you know, YouTube Kids and things like that. Three-year-olds — like, you know, I have a three-year-old going on fifteen. She’s, like, ready to move out. Wants to do everything by herself.
You know? She’s like, why do I have parents, like, in the way?

Speaker 1: I know. Well, she needs you very, very badly. Well, cool. So you wrapped all this stuff up — and, like, we’ve known each other for a while; I’ve been lucky enough to participate in a few interviews with you. But you have a view into — I mean, of all the people you talk to, I dabble in talking to those people and, you know, kind of build my own worldview as a composite, which is the best way to do it.
And you have a book. You turned all of that into a book, and we’re here to talk about that book. But before we get started — and I hope I’m not putting you on the spot — if you had to encapsulate everything you talked about and everything in your book into one or maybe two sentences, what’s the biggest takeaway from it?

Speaker 2: We wish AI tools in hiring and in the world of work were magic. But we have an awful lot more work to do, and we have to be an awful lot more skeptical.

Speaker 1: Yeah. Good. I agree. I agree. And I will just say, as I said before, you know, people are complicated.
And, you know, in my world, even with the traditional tools — and I would challenge an AI tool to really do better than this, or more than incrementally better —

Speaker 2: Yeah. I agree.

Speaker 1: If you’re explaining, like, twenty percent of, you know, the variance in job performance through your predictive measures, you’re a hero. I mean, at scale that saves a lot of money, but it is a very imprecise thing. I feel like the day that it’s perfect is a scary day, because then somebody can really understand a person at a level that’s so personal. And there are contextual things that people have.
I mean, you may even look great, and you are great, but, you know, you have a life event happen, or, you know, your spouse moves, or whatever it is, man.

Speaker 2: Oh, totally, totally. Right? Like, how can we predict that? And I feel, in general, for hiring, it feels a little unfair — like, we test and probe the individual. But, you know, they’re going into teams. So I can hire the best individuals,
but if they go into a toxic team with a toxic manager, they’re probably not gonna perform well. Right? Like, that’s a consideration. Then also, obviously, life events — something no one is in control of.
Right? Like, we can predict, predict, predict, but life around us happens. So, like, there are so many factors that go into hiring. And I think one question is, like, how much can we really predict here? And the other question is, like, how much is AI helping us with that —

Speaker 1: or hurting us with that?

Speaker 2: That’s good. That’s true too. Like, you know, when I started looking into it — I remember going to one of the early conferences, you know, in the early days, two thousand eighteen. So five years ago, in the early days of AI and hiring, we were all so excited. And, you know, it felt like magic — doing, like, you know, emotion recognition on faces and, like, the intonation of our voices, the words that we say. And I was like, wow, there might actually be a tool that can do something humans can never do, like precise hiring. It seems, I mean, wonderful.
But then when you start digging deeper — and, you know, you all know this — it’s like, wait, what do facial expressions in job interviews have to do with, like, the actual job and doing the job? And, you know, what are some of the problems that come back up when we use people who are currently in the job and we use their facial expressions in job interviews, as if that had anything to do with the job? So, you know, this is, like — “correlation soup,” maybe, is what we should call it. Like, that has come up again and again and again.
Like, we find these correlations, these statistical patterns, but are they actually meaningful? That is a question I have encountered again and again and again looking at all these different tools. Right? Like, looking at resume screeners — I had a couple of people come forward who are, you know, lawyers or sort of lawyer-adjacent, who look under the hood when big companies try to use a tool. And, you know, one of them found out, when he looked at the technical reports and, you know, talked to folks and dug deep into the tool, that one of the tools was predicting that somebody who put hobbies like basketball and football would be successful in a job that had nothing to do with sports. It’s just a middle-of-the-road job.
So that’s kind of a problem. Right? And then the other one was actually pushing right into potential gender discrimination, because it upweighted people who put the word baseball in their resume and downgraded folks who had put the word softball in their resume — softball being associated with women.

Speaker 1: For people who drink a lot of beer. Yeah.

Speaker 2: So, you know, there are all these kinds of problems that, when I talked to folks, came up again and again. And I was like, wow — these tools are out in the wild. We use them, and we have so little knowledge about them. And that is just the surface, and, you know, there’s a lot more there. Another tool, an online resume screener — the name Thomas, apparently, was predictive of success.
You know, probably because of these statistical correlations, these statistical patterns — there were enough people named Thomas in the pile. Right? And the computer doesn’t understand that we shouldn’t be looking at this. Like, this actually has nothing to do with the job.
It’s just a random —

Speaker 1: Right.

Speaker 2: It’s just a correlation. It doesn’t cause anything. It doesn’t make you good at your job, but a computer doesn’t know that. So if we don’t look over these kinds of tools and continually monitor them, these kinds of problems can seep in all the time. And we see that again and again.
You know? And same with — like, you know, I played an awful lot of AI games for job performance. I kind of enjoyed it. Like, you know, at the beginning I also did, like, online screenings of my Twitter and LinkedIn to find my personality. And, you know, at the beginning I was kinda, like, anxious a little bit, and worried. I was like, what are they gonna find out about me
that I don’t know? And over time, I was like, I’m not sure this is actually right. And sometimes, you know, I had an AI tool look over my Twitter and over my LinkedIn, and I got, like, opposite personalities from the two of them. So either I have multiple personalities, or this tool just doesn’t work because we don’t express ourselves the same way on social media — or maybe it doesn’t even work at all. Right?
So, like, you know, the anxiety lessened and lessened as I was playing these games. But some of the same problems happen again in AI games. Right? Like, why do I have to hit the space bar as fast as I can? Like, what does that have to do with any job you’ve ever done?
That’s sort of a question. But then also, when you think about it, these are often calibrated on people who are successful in the job — meaning, often, they’re basically already working in the job. Right? So, like, if they play the game, maybe they all are risk takers. But then the question is, like, does risk taking actually have anything to do with the job?
Right? It might just be something that all your fifty call center employees happen to have in common.
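[Editor’s note: a minimal, synthetic-data sketch of the point above — when scoring signals are calibrated against a small incumbent sample, some signals that have nothing to do with performance will look “predictive” purely by chance. The sample sizes, cutoff, and signals below are made up for illustration and are not from the book or the episode.]

```python
# Illustrative only: dredge many unrelated candidate signals against
# performance ratings from a small incumbent sample and count how many
# clear a conventional "significance" bar by chance alone.
import numpy as np

rng = np.random.default_rng(42)
n_incumbents = 50                       # e.g. 50 current call-center employees
performance = rng.normal(size=n_incumbents)

n_signals = 200                         # hobbies, keywords, game metrics, ...
signals = rng.normal(size=(n_signals, n_incumbents))  # all truly unrelated

r = np.array([np.corrcoef(s, performance)[0, 1] for s in signals])
flagged = int(np.sum(np.abs(r) > 0.28))  # roughly the p < .05 cutoff at n = 50
print(f"{flagged} of {n_signals} unrelated signals look 'predictive' by chance")
```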

Speaker 1: Well, they were in a call center job — they may not be too big of risk takers, you know? Because that’s a tough job. Who knows?

Speaker 2: Yeah. Yeah. Who knows? But, you know, there are all kinds of questions that come with almost every tool I’ve looked at. So I think we have, like, way, way more work to do.
And I sort of feel like, you know, now that I’m talking to an audience that has, you know, seen a lot of these tools and looked into them — what I’ve done is, like, I’ve tested them all in a different way when I could, when I was able to, and I sort of want people to steal my methods. Like, I’m actually not a data scientist. I really can’t code to save my life. I mean, I can sort of check code — like, if I see your code, I can tell you’re not, like, fooling me — but that’s the extent of what I can do.
But, you know, if I can talk to a tool that’s supposed to — you know, an AI tool that’s supposed to figure out how good my English competency is — so I spoke to it in English first. I got an eight point five out of nine. I was very proud, because English is my second language. So I was like, oh, the tool works. Great.
And then I was like, oh, let’s test it a little bit. Right? Like, I ran silence through it. And there, you know, you don’t get a result at all. But, you know, that’s totally expected.
And when I had talked to vendors before, you know, they told me, like, oh, you know, we’re always concerned about people with accents, people with speech impairments — what happens, you know, if the tool doesn’t fully, quote unquote, understand what they’re saying, or the transcription isn’t accurate. And the vendor would say, oh, you know, the tool would totally catch that — you have to rise to a certain level. So I was like, okay, let’s just try and speak to the tool in German. So I read the psychometrics article from Wikipedia in German — “Die Psychometrie ist…,” blah blah blah blah. I didn’t speak a word of English.
So I recorded it, you know, did it, sent it out. And I thought for sure I would get, like, an error message or something, because German is not a language it can predict upon. But to my surprise, I got a six out of nine competency level — again, saying I was competent in speaking English. And I was like, wait, what?
I didn’t speak a word of English. Like, I just spoke in German. And, you know, we did this fifty more times, and I talked to the developers. Like, you know, as a journalist I believe in balance, and in reaching out to folks and just being, like, hey, I did this test. Can you explain this to me?
Maybe I did something wrong? Maybe there’s an explanation for this. You know, when I talked to the developers, it was just, like — it was way too complex for me to understand. You know, apparently there’s, like, a five-D world where this all lives, and German is close to English, and then that quadrant —

Speaker 1: Are they making it up? Because German is not close to English — and I took some German in college. I’m not good at languages, and I found it very difficult. So I don’t see how close — I mean, it’s all relative.

Speaker 2: It’s so

Speaker 1: — unfair to German or something. Yeah. Good. But yeah.

Speaker 2: It’s probably very closely related. But, you know, we kept talking. You know, I was, like, lost in this five-D world that I couldn’t even imagine.

Speaker 1: Because I don’t know what that is — a five-D world.

Speaker 2: Yeah. I don’t know what that is. And, you know, we kept talking about it. And I was like, you know, what if you were in front of a judge and you had to explain why I got the six out of nine — could you just kindly explain that to me? And I got another five-D-world answer, you know, they’re adjacent, and — I was like, I’m not sure this developer knows what they’re predicting and how this tool exactly works.
Right? And that, to me, feels concerning — if we use tools where we don’t know how they predict or what exactly they do. And if a —

Speaker 1: Yeah. Yeah.

Speaker 2: — journalist who doesn’t know anything can, quote unquote, break these tools, right? We really have a lot more work to do. Yeah. So I think those are, like, real concerns.
And then, you know, the other big concern I highlight in the book — I’ve worked with folks who have disabilities, you know, they played some of the games, and an awful lot came up with that. Like, why do they — if they have a physical disability — why do they have to hit the space bar as fast as they can?

Speaker 1: They should get an accommodation, you know. I mean, usually, you have that opportunity, hopefully, to get an accommodation.

Speaker 2: You do have that opportunity, but I think a lot of people who have a disability don’t really wanna tell that to an employer. Right? Because they don’t wanna end up on, imagine, pile B. Right? And no one looks at pile B, because those are the people with the accommodations.
But I think a lot of people also — you know, when you start playing a game, you don’t know what you’re being asked to do. Like, how do you know, exactly —

Speaker 1: That’s true.

Speaker 2: — if you need an accommodation? Right? Like, that’s another question. And then I also talked to some vocational counselors who work with — in this case it was folks who are deaf and hard of hearing — who definitely needed accommodations for video interviews.
And they said they called the accommodation line, they sent emails, and they never heard back. And I thought that was really concerning — that at these very large companies, the accommodation pipelines don’t even seem to work. You know, and then there’s this whole statistical question: if you have a disability, it’s expressed very individually. Right? Like, I might be autistic, and you might also be autistic, but it’s expressed very differently. We’re probably not in the training data.
Right? Like, how on Earth can this ever be caught by an AI tool? So I think we have a lot of thinking to do about whether one size really fits all.

Speaker 1: Yeah. Well, that’s a hard thing too. Yeah. It is. Yeah.

Speaker 2: And I think we have to do a whole lot more skeptical questioning. And I wish — you know, I just really hope for the industry... What is really hard as a reporter is that, you know, obviously, a lot of vendors don’t wanna talk, don’t wanna give me access to the tools and, like, give me their technical reports. And I do feel like —

Speaker 1: You need that. You know what? Technical reports —

Speaker 2: maybe those technical reports should be published. Yeah.

Speaker 1: Yeah. Like,

Speaker 2: if this is how you, like, build your tool, these are your methods, this is how you validate

Speaker 1: We may get there. I mean, I think with some of the new regulations — which we could talk about.

Speaker 2: But I’m happy to talk about that.

Speaker 1: Yeah. Oh, we’re gonna. We’re gonna. But some thoughts on what you were just saying. Well, first of all — AI or not, I feel strongly about this —
you know, the uniform guidelines, even though they’re pretty old, you can still apply them to any type of tool. You know, there’s nothing about explainability of models in the uniform guidelines. It’s all about how it was constructed, is it job-related, and does it have adverse impact?
And that holds true still to this day. I’m sure there’ll be some modifications, but the framework is agnostic to the tool.

Speaker 2: But Charles, can I ask about that? Like, I do think that the uniform guidelines are important, and I think they should be updated. We really need to think about some of these things. But I also think, like, you know, the four-fifths rule, which so many folks in the industry really feel strongly about — like, we need to talk about that. And also, you know, it took me a long time as an outsider to understand that validity and discrimination are different things, and one doesn’t necessarily exclude the other.
So, you know, I remember talking to a legal counsel at the EEOC, and I just kept asking — I was like, well, we know that some of these tools don’t work. Like, why is nobody doing anything? I was really wondering. And the general counsel — you know, the interview is in the book, so I got clearance to use this — he was like, we don’t care if the tool works or not.
And I was like, what? We don’t care? Like, what?

Speaker 1: Well, you should care if it works. I mean, that’s the crazy thing. Like, I identified that in the New York City law. I’m like, well, wait a minute — there’s absolutely no mandate that the tool is valid.
But the uniform guidelines say that your predictor must be job-related. That is the ultimate litmus test. You can actually have adverse impact if the tool has been shown to be job-related — it’s just that people have, you know, an ethical responsibility not to do that. But, technically, it’s legal. So, you know, job-relatedness trumps every single thing.
We just know that it’s good for us to monitor that a little, and maybe that’s an update to the uniform guidelines. Yeah.

Speaker 2: Yeah. And I feel like, with the four-fifths rule, a lot of folks feel like, oh, if I just pass the four-fifths rule, I’m pretty safe — the EEOC is not gonna actually check whether I have a business necessity and look at the validity. Right? So you have sort of this game of, if it just passes the four-fifths rule, you’re probably in the clear. Mhmm.
And I do think we need a whole lot more transparency around some of these tools, because, sort of, in the old days, right, if I applied for a firefighter job, I knew, you know, I have to carry two hundred pounds from A to B, or whatever the test was. Right? And I could train for that, and I could challenge that test in court. Right? And women have, and other people have, and said, like, as a firefighter, do you actually have to do this, or is this a way to discriminate against women through the back door — because, you know, they’re often physically less able to do these things.
We’ve had those court battles. But the time we now live in, with these AI tools — most of us actually don’t know if we’re being assessed, or how we’re being assessed. And it’s almost impossible to challenge that. Right? Because I don’t know if a company is using an online resume screener, or how that thing works.
And, you know, I get rejected — I get rejected for a lot of jobs. Like, how am I supposed to know —

Speaker 1: Most people get rejected. Like, it’s not common that you’re gonna — you know, the odds are that you’re not gonna get selected. So that’s why you gotta apply to a lot of them. It’s true, and I think there’ll hopefully be some accountability that you have to show. I think, as far as adverse impact goes, the four-fifths rule is the main thing, but there are other things you look at, and they’re all somewhat flawed. Like — not to get too technical — the two standard deviation test, but that’s sample-size dependent.
So you could pass that with a giant sample, you know. I feel like there’s another layer to it, and I look at it very holistically: how was the assessment constructed? And that would even go to the training data. But I think, even in the more traditional sense — I had a really interesting experience one time that was super eye-opening. I developed a biodata predictor tool for a large national bank that was basically, you know, for tellers and personal bankers.
Right? And so my sponsor there did this amazing thing. He said, alright, you’ve written these questions. We’re gonna show these questions to a diversity review council of internal people at our organization
that not only represent the diversity function but are also diverse leaders. Right? So we gave about twenty people this tool. And I’m a conscientious item writer — I’m trying to be as neutral as possible.
They found at least ten or fifteen things that were so subtle in their cultural differences that I never would have thought of them. And they stuck out to those people, and we went back and revised this thing. Right? So who’s doing that kind of stuff? Because I’ve never had another client actually agree to do that. When I bring it up, it’s like, oh, that’s too complicated or whatever.
And minimally — it’s been a while — but there’s a lot you can do there.
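[Editor’s note: a minimal sketch of the two adverse-impact checks Charles mentions above — the four-fifths (80%) rule and the two-standard-deviation test — using made-up selection numbers. It illustrates his point about sample-size dependence: the same selection-rate ratio produces a very different z statistic at different sample sizes. This is an illustration, not a compliance tool.]

```python
# Compare two groups' selection rates with the four-fifths rule and a
# two-standard-deviation (z) test on the difference in rates.
from math import sqrt

def adverse_impact(hired_a, applied_a, hired_b, applied_b):
    """Return (impact ratio, z statistic) for group A vs. reference group B."""
    rate_a = hired_a / applied_a            # selection rate, group A
    rate_b = hired_b / applied_b            # selection rate, group B (reference)
    impact_ratio = rate_a / rate_b          # four-fifths rule flags ratios < 0.8

    # Two-standard-deviation test: pooled z-test on the rate difference.
    pooled = (hired_a + hired_b) / (applied_a + applied_b)
    se = sqrt(pooled * (1 - pooled) * (1 / applied_a + 1 / applied_b))
    z = (rate_a - rate_b) / se              # |z| > 2 is the usual flag
    return impact_ratio, z

# Same selection rates (40% vs. 50%) at two different sample sizes:
print(adverse_impact(20, 50, 25, 50))       # ratio 0.80, |z| ≈ 1.0 -> 2-SD test not triggered
print(adverse_impact(400, 1000, 500, 1000)) # ratio 0.80, |z| ≈ 4.5 -> 2-SD test triggered
```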

Speaker 2: But it requires a lot of thought, and a lot of time, and a lot of adjusting — yeah. I agree, but I think that’s not what a lot of companies are after. They have so many applicants. You know, I get it — if I got three million applicants — yeah.
Look, I would also want a technological solution to my problem. But what I’m trying to say is, let’s not repeat the errors of the past and just use, you know, problematic, biased data and put it in the tool. Right? Because then we replicate the problems, and then we bring in, on top of that, maybe other machine bias — you know, these wrong predictions based on first names and hobbies and stuff we have hopefully rooted out in humans.

Speaker 1: A soup of stuff. Yeah. Yeah. So I heard another thing — the CEO of one of the big HR tech companies, I heard or read an interview with him, and he was saying, well, you know, one of the important things is who’s coding this data, who’s building these algorithms — maybe we need to make sure diverse people are actually building the product, you know. I mean, that certainly couldn’t hurt, and, hey, you wanna have a workforce of diverse people no matter what, whatever they’re building. But that was something where I was like, oh, man, I never thought of that.
So there are a lot of layers to the end result of having a tool that you’re fielding. I wanna make sure that we get to this book. So why don’t you hold it up? I asked you —

Speaker 2: Oh, okay. Yeah.

Speaker 1: — in our pre-call, like, what’s the name of it, so I could mention it, and you were saying, well, it’s a really long name. So I can see it: “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now.” I would have put an exclamation point. Did your editor tell you no exclamation point? I’m liberal with punctuation — maybe a few emojis on your —

Speaker 2: I think that probably says more about your personality. There’s probably an AI tool that could figure that out based on your exclamation points.

Speaker 1: Oh, my God. I’m busted. Yeah. Yeah. I am an extrovert.
Yeah. So tell us about it. I mean, we’ve been talking about stuff in there. And I — that’s exactly like —

Speaker 2: what if these

Speaker 1: What’s your favorite thing you found in this book? And why should people read it? That’s a double question — I do that a lot. First, what’s the favorite thing you found —
the most lasting thing that is meaningful to you?

Speaker 2: Yeah. I mean, I think what I found, and what was really meaningful to me — and I think we as journalists have also sometimes overlooked people with disabilities — Mhmm — going through these tools and being exposed to them. And I think that was really eye-opening to me: to work with people who have a disability, play the games with them and other things, talk through them, and start understanding how difficult it is for them to navigate this world.
And it even starts with, like, you know, just getting to see their vocational counselor — you know, if they need child care, if they have to get a bus pass, it takes them forever to get there. And then a lot of these tools — some of their vocational counselors were like, these people apply for janitorial jobs, why do they need to take personality tests? Like, what does that have to do with anything? And some of the vocational counselors, especially for clients who are deaf and hard of hearing, were like, you know, all this stuff is timed. And for people who are deaf and hard of hearing, American Sign Language is considered a foreign language — it’s not the English that we speak and read.
So for them, it’s like — the vocational counselors said: I have to read the question, I have to sign it to them, they have to think about the answers, sign them back to me, I have to translate into English.
Sometimes they don’t understand things, because, you know, we say, like, “the car crashed into the wall,” and they just see, like, a car crashing into a wall — it’s a very different approach to the words.

Speaker 1: Right. Right. And they

Speaker 2: — were, like, you know, we already timed out. We made it to two questions out of the whole assessment. They’re like, this is really not working, and it’s not built for people other than, like, native English speakers — it’s built with people in mind who are a certain way, you know. And they felt so frustrated. And there’s no one to talk to anymore, because it’s just one screen after another screen.
So they really felt like this is putting up a new screen for folks with disabilities, especially those who can’t get through that barrier —

Speaker 1: Mhmm.

Speaker 2: — and they really felt like they’re being excluded from the job market. And we often only test for race and gender. Mhmm. And we don’t test for other things, or whether the tool works. And I think that’s really to the detriment of everyone, because we wanna make sure people are chosen on merit,
and because they have, like, the minimum qualifications to do the job. And the question is, are we really doing that? So my hope is, with this — you know, looking at these tools and finding out that many really probably do more harm than they do good — and I think that’s a sobering revelation — and looking at all of these tools, from video interviews, to online screenings of our data exhaust, to AI games, resume screeners, the stuff that is used on us at work — you know, predicting flight risk — to really look at all of these and sort of understand: how do they work?
How ubiquitous are they already? What is questionable about these approaches? And what should we do? That’s why I think we have to have this conversation, to think about, like, how should we do hiring? Like, what is a better way?
How can we not replicate the mistakes of the past and put them into technological tools? And I get it — you have to have technology help you. Like, we are overwhelmed by these applications. But let’s build better technology, and let’s be transparent. Right? Like, vendors don’t tell me — a lot of companies are afraid that they won’t have plausible deniability anymore if they say, well, we used this tool on millions of people.
You know, they’re really afraid of the backlash that would come out. And so I have some whistleblowers, especially employment lawyers, who got to test these tools and say, like, oh, yeah, we tested them. You know, one AI games vendor — they were like, we tested it every which way, and it was always discriminating against women. I was like, oh, yeah? And what happened to the vendor?
They’re like, oh, yeah, they’re still around. They totally grew. And I was like, I mean, I hope they changed their ways and learned from this, but we don’t actually know that. Right?
So I think we need to do better, and transparency is sort of, like, a really big step in that.

Speaker 1: Yeah. There’s so much to unpack there. I think you’re right — or I know you’re right. You know, the other interesting thing is, it’s volume. Like, I remember back when I was working for really the first company — maybe the second, it’s arguable, depending on who you ask — to put tests online.
You know? And during that time, it was right when job boards really exploded, too. Right? And there was not the technology on the other side of a job board to handle what was happening. So that’s where the ATS kinda came from.
And, you know, in some sense, it’s still that volume. Because if you look at executive assessment, managerial assessment — you know, you’re not using AI tools as much. It’s much more high touch. It takes a longer time.
You’re probably getting even better prediction. I mean, the more time you get to spend with somebody and understand them, with good tools, the more you understand how well they can probably do a job — as long as you understand the job well enough, you know, and make those connections. But in volume hiring, you’re just getting so many people, and now it’s so easy, and you can get bots, I’m sure, to apply for you. In fact, I read an article about that. Like, there was some bot — a guy had it submit five thousand applications, you know, in one shot or something.
So that creates even more noise and more noise, and that’s what’s making it hard. But —

Speaker 2: Yes. And I think, you know, that’s also kind of funny with generative AI, now that we have ChatGPT. Right?

Speaker 1: You opened the box — Pandora’s box.

Speaker 2: Yes, I did. So it’s, like, a great leveler of the playing field, I think a lot of job applicants feel. Right?
Because they felt — you know, look, hiring has always been unfair and tipped towards the employer. Right? Because they make the ultimate decisions. Traditionally, I could sort of curate my application and what I said and the references I would choose — I had, like, some level of agency as an applicant. But with all of these AI screens — and often, you know, maybe even used on my Twitter and things I didn’t necessarily consent to for the hiring process — I think a lot of people felt absolutely out of control.
They didn’t know how these AI screens worked. They had sort of a sense of it. But now with generative AI, right — like, ChatGPT can write my resume, can write a cover letter. So a lot of folks —

Speaker 1: They can answer interview questions for me if I have it

Speaker 2: right now.

Speaker 1: There. And it’s definitely a playing field leveler, you know, in some sense. You know, it’s interesting to watch. I think back a little while ago — I’m trying to remember when — but I wrote something about how it’s a ping-pong match. When there are fewer applicants for open positions, applicants can be picky, they can refuse to do things, they can ask for huge amounts of money. They have the power, because the employers need them.
It’s not typically that way. It’s typically that the employers have way more than they can handle to choose from, and they need tools or whatever, and then they can be kind of, you know, jerks about it if they want to — hopefully not. So there’s a balance, an equilibrium, back and forth. But to the points you’re making — and this is the paradox of AI — I feel optimistic and pessimistic at the same time. Because, optimistically, we have some really good regulations that are starting.
They’re never gonna be caught up to what’s happening, but at least they’re happening. Right? And I think vendor transparency and third-party auditing and a lot of that stuff — it’s gonna be a huge industry. That doesn’t mean that people won’t find ways to game the system or whatever. And I also think there are a lot of conscientious vendors who really are using AI responsibly. And look, AI is machine learning.
We’ve been using machine learning for a long time in more simplistic ways. It’s when we get to neural networks and stuff that’s been trained on, you know, open, bigger datasets, whatever — that’s where it gets a little bit crazy. But I’ll tell you, too, I’ve seen some really good uses. Like, we’re always talking about AI on what I call the predictor side — like, how are you measuring a person?
But there’s also really good AI being used to, you know, help score open-ended responses — really, training an AI on experts solving a problem in order to score it. Right? It’s not so much measuring things about people; it’s just saying, when a person does these things, that’s typically a good pathway, or this is a less good pathway, or whatnot. And, you know, if you’re a conscientious vendor, you’re making sure that what’s being done on that side of things works.
And I have seen that, and I think a lot of vendors are doing that, but they don’t talk about it as much. And look, the biggest problems are on the front end of this thing, like you’re talking about. And good for you for, you know, coming from the outside, not really having any preconceived notions, kind of experientially consuming this stuff and saying, whoa. Because you’ve seen this a lot more in depth than most in this area.
So —

Speaker 2: Yes — I got lucky there. Yeah. I’m a journalist, and some of the vendors were very, very generous to talk to me about their methods, and were also okay with me, you know, when I talk to IOs and other experts, asking them, is this a good idea? But I think we have to have a whole lot more of those conversations.
I’ve also reported on auditing — two of the early audits in hiring that were paid for by the vendors and done by third-party entities. And I don’t think those were actually good audits. In fact, there wasn’t actually an audit framework. We need —

Speaker 1: — to audit the auditors. You know, that’s the —

Speaker 2: Oh, oh my god. Absolutely. And, you know, some of them got criticized — like, one of the audits was published with the company’s own employees on the audit. Like, is that a conflict of interest? You know?
And the other one was basically a stakeholder discussion that didn’t even look at the algorithm and how things are done. So I think we have to be very careful that we are not whitewashing — yeah — you know, not-good technology with audits, and sort of giving it a stamp of approval because one company or vendor paid an outside party to do this kind of stuff.
And I’m not seeing — so, like, that’s why I’m a little bit skeptical about the New York City law, because it’s sort of —

Speaker 1: Oh, that law is watered down to almost nothing. I have a guy coming on, Matt Scherer — he wrote some really good stuff. I don’t know if you’ve read his stuff about how —

Speaker 2: Oh, totally.

Speaker 1: — the AI — I mean, the New York law is really kind of a shell for big corporate interests, to make it really easy. And his thing is that it’s actually the first step in trying to remove the uniform guidelines, which I think gets a little conspiracy-minded. But so, tell us about regulation. You mentioned New York City. Right?
We have a lot of regulations, the EU stuff. So tell us — what’s your take on the ability of the regulations that are happening to actually control, you know, some of this stuff?

Speaker 2: Yes. So I think regulation is important. But I think it’s not gonna be our savior. Right? And we see that in the New York City law.
And I was part of, sort of, you know, going along and documenting some of the process. You know, one problem is that politicians often really don’t know how this stuff works, and they need to be educated first, and they make decisions without knowing this stuff. But I do think that hiring needs to be in this high-stakes pool, which traditionally maybe hasn’t really been looked at. Right? It was facial recognition and how long people go to jail — but it matters who gets a job. Like, it matters how you get the job and whether it’s done on merit, because I really care about my job.
I love my job. I’m thankful that somebody chose me for this job. But, you know, I would be really pissed if I applied for my dream job and got rejected because of a faulty algorithm that did something wrong based on whatever, you know, correlation it found that day. If I was rejected on merit, I would totally take that.
But, sort of, you know, the New York City law — it’s not even really clear how to audit it. And I think the ACLU has looked at some of the audits. I think they counted seventeen audits that have been published in New York since the law has been in effect —

Speaker 1: There’s not many, because I’ve been trying to find people who have seen them. I’ve seen a couple done by some good shops; I’m able to do them. In fact, I’ve also had people — there’s so much confusion — a couple of people from overseas who said, oh, well, I need to find a certified New York City bias auditor.
Like, there is no such thing. There is a certification you can get from a nonprofit in New York, but it’s not mandatory by any means. So there’s just a lot of confusion. And as I talk to vendors and people — most everyone — I’m talking to a lot of IOs at big companies right now for a project, and they’re all like, yeah, you know, we’re just kinda waiting to see. We haven’t really done anything.
But as the California law and then the EU law and then some federal laws come along, there’s gonna be stuff that people are gonna have to comply with. And I believe one of the biggest things that’s gonna happen is vendor audit requirements. Right? I mean, now vendors can say, oh, I’ve got the Company X seal of approval, but that’s just nice to have. There’s no mandate that you have that.
And who knows what goes into that? And those companies are for-profit, you know, companies that rely on these audits. So, not to say they’re all bad — it’s helping solve things. But at the same time, like you said, my take is companies need to have their own policies and practices, and they need to follow those damn things. And they need to make them real, because this stuff matters and the penalties aren’t very strong. Even with the New York City law, you’re not gonna lose that much money.

Speaker 2: Exactly. Exactly. Yeah. No. And I think, I mean, that’s why we don’t see a lot of compliance with the law.
But I also think people have to get a whole lot more ethical and skeptical about these tools and really think through: would I use this on myself? Would I think it was fair if this tool were used on me? Because with auditing — you’ve seen this with the subprime mortgages: they were all rated — yes — triple-A by the auditors of those financial instruments. Right?
They were paid for by the companies. That is not a system that has proved itself very reliable. And we’re using a similar system here. Right? So I think we should really be careful about that.
So I think what might also be helpful, on top of that, is more transparency. Even more than the audits — like, actually have an audit framework. Maybe have — you know, some people have said we need a government agency to pre-license these tools, so people have to open the books and show: here is what we did. Here’s the validity. Here’s how we tested it.
And that we tested it on more than a hundred twenty-to-twenty-five-year-old college students if we want to use it on the general population. Right? Like, there are all kinds of things that could be done. That seems like a lot of work, though, and I don’t think the government is suited for, or wanting to do, that.

Speaker 1: No. But I

Speaker 2: — hope that, if we mandate transparency, we might actually be able to have some not-for-profit groups — I’m actually thinking maybe we need to build, like, a nonprofit group to do interdisciplinary testing. Not only journalists, but, you know, I’ve worked with sociologists, computer scientists — all kinds of interdisciplinary teams — with large sample sizes to test some of these algorithms, to actually, authoritatively, say: we have examined it. Here’s our scientific paper. Here are our methods. We can say this doesn’t work because of X.
And I think if we can test more of these tools and publish that — and actually maybe also build our own tools. Like, let’s see: can we build a resume screener that is not biased on some keywords or whatever, and doesn’t have a correlation-versus-causation problem? Well, if we can, let’s put it on GitHub and, like, pressure people to see, okay, this is how it can be done. Don’t use bad technology.
Use the technology that is proven to work and build on that, and steal what we made, because we made it in the public interest. Maybe that is the next step, because I feel like vendors don’t necessarily wanna disclose a whole lot of stuff, and, like, science is expensive. You know? Oh, the other thing I also wanted —

Speaker 1: — to experience?

Speaker 2: — would love to do, yeah. We would love to do long-term studies. Like, you know, open your company to us. Like, we work with institutional review boards. Like, we make sure that this is ethically sound.
And let us do a test over five years, to actually prove: does this work? Take the people you hired traditionally, the people the tool said are successful and not successful — hire them anyway and —

Speaker 1: You’re such an idealist. I don’t wanna rain on your parade, but I don’t know if a lot of that’s gonna happen.

Speaker 2: I think that would actually be helpful, having these long-term studies, because, you know, we sort of try to build these tools to find successful employees — like, we don’t even know what makes success —

Speaker 1: Also — and it’s important — it’s not just vendors. There are a lot of companies that build stuff like this internally, which makes it even harder for anybody to know, because they’re building their own stuff, and you see some things about that. And sometimes those things have been exposed, but you just never know, and the more digital exhaust is, you know, prevalent or whatever — I mean, it already is — the more you may not know.
And so it’s a cat-and-mouse game. Well, give us a ray of sunshine from what you’ve done. You know — and it’s harder to find rays of sunshine — other than that we know it is possible to field tools that are wholesome and good, and there are a lot of people who do that. And, you know, it’s also true — I’m diverting a little bit, but I have to get this in — that humans have biases too. Right?
So it’s not like we get a free lunch here.

Speaker 2: Oh, no. And I think that’s one of the problems I was struggling with, because — I mean, I’m not advocating we go back to human hiring. Like, human hiring is, like, super biased. Let’s not do that.

Speaker 1: Well, it’s a combination just like any AI. Right? It’s a combination.

Speaker 2: But let’s talk about the methods and what we do, and really think through: is AI, with more data, actually the solution? Or are we bringing in so many more proxies — that could actually be proxies for race or gender — that we’re making this worse? So I think there is, you know, a call for basic regression analysis — actually test these tools both ways. Right? It’s less sexy to use the old-school methods.
Right? The AI seems shiny and interesting. But, like, actually — is the regression analysis as good as the AI? And if it is, or even just a little worse — if you only have four parameters, that might be much more controllable than using all this data,
which has all these potentially problematic proxies. Well, then let’s do that. But then also the question is, you know, is hiring fair in general? Maybe the most fair thing is, like, we would have to hire people, have them do the job, and see if they can do it.
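[Editor’s note: a minimal sketch of the “test it both ways” idea above — fit a small, auditable regression with a handful of interpretable predictors next to a more opaque model and compare them before trusting the black box. The data here are synthetic, scikit-learn is assumed to be available, and this is an illustration, not anyone’s actual hiring model.]

```python
# Compare a four-parameter logistic regression baseline against a more
# opaque model on the same (synthetic) data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
# Four interpretable, job-related predictors (made up for illustration),
# e.g. work-sample score, structured-interview rating, etc.
X = rng.normal(size=(n, 4))
y = (X @ [0.8, 0.5, 0.3, 0.2] + rng.normal(scale=1.5, size=n) > 0).astype(int)

simple = LogisticRegression()             # four coefficients you can read and audit
black_box = GradientBoostingClassifier()  # stand-in for a more opaque model

print("logistic regression AUC:",
      cross_val_score(simple, X, y, cv=5, scoring="roc_auc").mean())
print("black-box model AUC:    ",
      cross_val_score(black_box, X, y, cv=5, scoring="roc_auc").mean())
# If the simple model is close, its coefficients are far easier to explain
# to a candidate, an auditor, or a judge.
```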

Speaker 1: I love that part. Right?

Speaker 2: But then you have to, like, lay off a whole lot of people who are not good at it. It’s like — well —

Speaker 1: You know what? Let’s talk about — and I feel like this could go two or three hours; we gotta start wrapping it up — but that’s why I love work samples and job simulations so much. And I think AI is gonna be able to help put people into virtual — like, look, text-to-video is gonna blow things wide open. In my opinion, being able to go from typed text to photorealistic video is gonna solve a lot of problems with simulations and work samples. You know, there’s still AI looking at what you’re doing, but if you put someone in a controlled environment and say, here’s the job, here’s your little avatar, here you are in VR, AR, whatever — do the damn job.
It’s been expensive and time consuming to do that, because you haven’t been able to brand it and make it locally specific, so you get these generic things that aren’t really tapping into the precision of what it takes to be successful at that company. So you have some control there — it’s job-related, and it could even be engaging. Or it should be engaging: realistic job previews. So if someone doesn’t think it’s a good job for them, they’ll leave, say —

Speaker 2: Yeah — it’s also helpful for me to understand, you know, as an applicant, what would the job entail? Right? Like, it’s actually helpful. I’d be like, oh, do I actually wanna do that?

Speaker 1: So AI can help that way. You know? And I think with GPT and everything — I feel like we should have another edition where we talk about large language models. But I’ll ask you this — we talked about it in the pre-call. You said that — because I figured writing the book is a long process —
I’ve been writing one for about five years now, and I haven’t gotten even close to the end. But I asked, well, did generative AI and, you know, ChatGPT make it into the book? And my guess is you probably went back and made sure to add it. But tell us how the book addresses that subject. What did you find about that?
What are your thoughts?

Speaker 2: Yeah. I mean, I think, you know, generative AI is pretty slow coming into the HR enterprise, into companies. We're still figuring that out. Can we have a chatbot answering questions? I think it's all, you know, interesting, and we'll probably see some interesting applications. But I think it's really blowing up on the applicant side. Right? Applicants feel like, wait, I now have AI to fight AI. Like, their resumes get screened by AI, so they let AI write the cover letter. Folks use it to prep for interview questions, like, you know, ChatGPT, give me interview questions, and also give me the answers.
Some people probably use it in real time during video interviews, you know. And I don't think there are a whole lot of company safeguards. Like, I've done video interviews where I wasn't in front of the camera, and I had a deepfake of my voice. I typed out the things that I wanted the answer to say, and I got a, you know, whatever, seventy five percent or something like that match to the job.
So we see this a lot, right, that people impersonate each other in these video interviews. And there's not a whole lot that the companies catch there, especially if we use AI to just predict on the words that somebody says. Well, if that is a deepfake

Speaker 1: Yeah.

Speaker 2: And totally made-up stuff, you might give me a job and it wasn't actually me. I think people are also concerned that for remote hiring, you give somebody a job, and, you know, maybe the wrong person shows up, and from day one they get access to your systems. So this could be a backdoor for hackers to steal company data. The FBI put out a warning about this not too long ago.
I don't know if a lot of employers have seen it, but they really need to be thinking about this.

Speaker 1: Yeah. Yeah. Then you can, you

Speaker 2: know, ChatGPT can be really helpful at generating value in general, but also to folks like this who want to spoof and fool the hiring process. I haven't seen a lot of safeguards against that. One vendor is, like, building in, you know, facial recognition that has, like, a liveness thing in there. Yeah. You can still

Speaker 1: fool that. You can still fake that. I mean, it's like an arms race.

Speaker 2: Yeah. It totally is a cat and mouse game. Right? Yep. The same thing goes for, like, monitoring at work.
People tried out mouse jigglers and things like that, and now the companies that track employees say, oh, we can find the mouse jigglers, and we can find this, and we can find this. So it's always a cat and mouse game. But I think that's something I hadn't heard before, that there's, like, actual work fraud that I think companies need to be aware of.

Speaker 1: Yeah. There are ethical things there, you know. And I think... go ahead.

Speaker 2: Yeah.

Speaker 1: Yeah. So I read an article about people being afraid to admit they use GPT on the job. Right? And I did a presentation about kind of the psychological safety of this. But in the article, one person said, well, you know, I work remotely. I can use ChatGPT to do everything I need for my day's work in about three hours. And the rest of the time, I just, you know, goof off. And I interviewed ChatGPT for a podcast and asked it about that, and its suggestion was, well, that person obviously needs more challenging work. They should be promoted.
But, I mean, is that person really gonna say anything? And you could do all of that on a separate computer that's not the company computer. So I view GPT, really, as almost like a consumer product, an individual product. Right?
For companies to bring it in, they need to do a lot of work to isolate it, or whatever, manage it with policy. Some do, some don't at this point. Eventually, everyone will. But that's the thing about it: it's a consumer-accessible power tool like nothing we've ever had. Honestly, the consumer has the power with this thing right now, I think.

Speaker 2: It's also a great copywriter. Right? Like, it's great for,

Speaker 1: like Amazing. Right?

Speaker 2: You know, for basic emails, basic writing, you know, like handbooks for companies. Like, don't write that from scratch. Get help with that. But, you know, obviously oversee it, because I've tried to, like, play around with it, you know, and I teach investigative reporting. So I was like, oh, write me a syllabus. And I was like, wow.
You know, it's a decent basic first draft. But then I looked at it and was like, oh, man, I didn't know that Mark wrote this book. So cool, you know, an investigative reporter. And then I looked it up, and, like, Mark didn't write that book. You gotta, like, check that stuff.
Right? Like, we know that. It's just a great thing to, like, write a first draft of very technical stuff. Right? It's very

Speaker 1: Yeah. It's no fun. I've written a lot of tech reports. It's no fun at all.

Speaker 2: So use it as a good, like, first draft. And then you use a human brain to go over it and make sure it's all correct. Right? It's always

Speaker 1: Always. That's the thing with all AI. We need to do both. It needs to be a harmonious partnership. So I'm gonna ask one last question as we play out here. True or false: within ten years, will people's individual AI agents be doing the interviewing? You know, will bots be interviewing bots on behalf of both parties in the hiring equation?

Speaker 2: I think we are pretty much already seeing that happening. We have, like

Speaker 1: That's kind of what we've just been talking about, right? A little bit. Right?

Speaker 2: Yes. And I think then we'll find new ways to interview people, because it's gonna be so unsatisfactory for the companies and for the applicants. It's like, I'll just have my avatar interview your avatar. Great. Like, I didn't learn anything. I just learned the mediocre answers that ChatGPT gives.

Speaker 1: And they both lie to each other about stuff, right? You know? So, alright. Well, good. We have gone even longer than I usually go, but I could keep going for

Speaker 2: a long time. It's really fascinating, and that's why I never stop thinking about it. So I love to, like, talk to folks. You know, I'm super

Speaker 1: Watch for the book. Watch for the book. Tell people how they can get in touch with you. I can't wait to read it.

Speaker 2: There's only one book of Schellmann's.

Speaker 1: Is it

Speaker 2: in touch with me?

Speaker 1: Yeah. Is it out now, available now? Or is it... Yeah.

Speaker 2: So it’s available starting January second twenty twenty four, which is, you know, super soon. It’s out there. You can preorder it. You can get it already.

Speaker 1: Yep. Oh, good.

Speaker 2: You get it on January second?

Speaker 1: I’m gonna do it.

Speaker 2: And let me know what you think. You know, I'm giving people feedback, so I also want feedback. Send me feedback.

Speaker 1: You're on LinkedIn, I know. I've noticed, because I'm really getting kind of addicted to LinkedIn right now, that when you posted something about it, you got, like, exponentially more comments and, you know, reactions than anything I've seen. So obviously, a lot of people are looking at what you're doing, listening to what you're doing, and that's amazing, because it's hard to get people's attention in this world right now. You know?

Speaker 2: And I think a lot of people care about this. They're job applicants themselves, or maybe parents who have kids who are looking for jobs. Right? And every company, you know, of a certain size has HR. We care about work, and we haven't really looked deeply into this.
So I was really lucky that I got a chance to spend some time on it and really look into it. And, like, you know, maybe also make it a little bit, you know, I'm a journalist, I try to make it entertaining.

Speaker 1: Well, yeah.

Speaker 2: Learning about these tools, like, we need to know how they work if we wanna change them. Right? Like, I can't change something if I don't know how it works. And I think we don't know enough, so that's my job here.

Speaker 1: Well, I think you've done a good job, a really good job, and I'll read the book, but I can already tell it's gonna be great. So thank you very much. Fascinating conversation, and I'll have you on again. I feel like we need

Speaker 2: a chance to... Thank you, Charles. Alright. But there will be a lot more to talk about.

Speaker 1: Alright, you guys. As we wind down today's episode, dear listeners, I want to remind you to check out our website, rockethire dot com, and learn more about our latest line of business, which is auditing and advising on AI-based hiring tools and talent assessment tools. Take a look at the site. There's a really awesome FAQ document around New York City Local Law 144 that should answer all your questions about that complex and untested piece of legislation. And guess what?
There’s gonna be more to come. So check us out. We’re here to help.
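For readers curious what the auditing side involves: bias audits under Local Law 144 center on impact ratios, that is, each group's selection or scoring rate relative to the highest-rated group. Here is a minimal sketch with made-up counts (the 0.8 cut-off shown is the familiar four-fifths rule of thumb from adverse-impact analysis, not a threshold the law itself sets):

```python
# Illustrative only: the impact-ratio arithmetic at the heart of a bias
# audit. The group names and counts below are invented for the example.
selected = {"group_a": 120, "group_b": 45}   # candidates advanced by the tool
assessed = {"group_a": 400, "group_b": 200}  # candidates the tool evaluated

rates = {group: selected[group] / assessed[group] for group in assessed}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    # 0.8 is the classic four-fifths rule of thumb, not a legal threshold.
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")
```

In this made-up example, group_b's impact ratio comes out at 0.75, which is the kind of result an audit would flag for closer review.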