“Making People Smarter about People”: The role of technology and ethics in building world-class hiring systems

Featuring: Christina Norris-Watts, Head of Assessment and People Practices at Johnson & Johnson

“We bring data and insights right from that data to our decision-makers… that’s how we make people smarter about people.” – Christina Norris-Watts

 

This episode offers a unique perspective on balancing innovation with ethics in the rapidly changing world of HR technology.

My co-pilot for this journey is Christina Norris-Watts, Head of Assessment and People Practices at Johnson & Johnson. We start the episode with some serendipity, both relaying that Band-Aids are our favorite J&J product. After sharing some fun stories, we dive deeply into the evolving landscape of talent assessment in HR, and the amazing work Christina has done to help create a best-in-class assessment center of excellence. We focus on the role of scientific methods in safely and ethically exploring the new frontiers of AI in the workplace. Of course, no podcast these days would be complete without some discussion of large language models. We’ve got that covered, as we wax poetic on how LLMs are revolutionizing HR practices.

But wait, there’s more!  Christina and I discuss the importance of data-driven decision-making, the ethical implications of AI in HR, and the future of talent management. 

 

Key Takeaways:

 

  • Integration of AI in HR: The episode highlights the growing importance of integrating AI and large language models in HR processes for better decision-making.
  • Ethical Considerations: Christina discusses the ethical aspects of using AI in HR, emphasizing transparency and fairness, noting their importance as a guiding star.
  • Importance of Data-Driven HR: The conversation underscores the need for HR to be data-driven, using insights from data to inform decisions about talent management.
  • Challenges in HR Innovations: The episode addresses practical challenges in implementing new technologies in HR, including system integration and maintaining the validity of assessments.
  • Future of HR Technology: Christina shares thoughts on the future of HR technology, particularly the potential shift towards more passive assessment methods.
  • Role of HR in Innovation: The importance of HR’s involvement in technological innovations within a company is discussed, particularly in terms of ethical considerations and understanding job changes.

 

This episode offers a comprehensive look into the future of HR technology and the critical role of data and ethics in shaping HR practices.

 

Full transcript:

 

Transcription for: S4H_Christina.mp3

Speaker 0: Welcome to Science for Hire, with your host, Dr. Charles Handler. Science for Hire provides thirty minutes of enlightenment on best practices and news from the front lines of the employment testing universe.

Speaker 1: Hello, and welcome to another edition of Science for Hire. I am your host, Dr. Charles Handler, and I have another in a series of amazing guests going back sixty-some episodes. Amazing guest, Christina Norris-Watts of Johnson & Johnson. She is the Head of Assessment and People Practices, which is a pretty cool title. And I love to let my guests introduce themselves, because who knows them better than them? So have at it, Christina.

Speaker 2: Thanks so much, Charles. I am so excited and thrilled to be here. I think you run a fabulous podcast, and you’ve had some of my favorite people on your podcast in the past. So it’s really just a thrill to be here. So I’m Christina Norris-Watts. Now, like you said, I lead our assessment and people practices team within global talent management at Johnson & Johnson. I’ve been at Johnson & Johnson for seven years now. Before that, I was in financial services, at a hedge fund, but before that, an investment bank, and I started my career at APT Metrics as a consultant.

Speaker 1: Me too. 

Speaker 2: There I was mainly doing litigation support for companies that had been sued for discrimination in hiring, pay, or performance. And it was really that work that made me want to go internal into a company. I thought maybe I could effect more long-lasting change internally. And it’s really my dream job at J&J. I love it. I get to oversee all of our selection assessment strategy, our development assessment strategy, our skills strategy, and succession planning also falls into my world. So yeah. So anything that has to do with what we measure people on, and then how we measure them on those things, either before we hire them or after, falls into my world. It’s awesome.

Speaker 1: Very cool. And I’m glad that we were able to make this happen, too. We tried, I don’t know, multiple times. We had some technical issues. So you don’t live near, like, a nuclear power plant or any power lines, or have a steel plate in your head or anything like that. Right?

Speaker 2: I mean, no steel plate. It is New Jersey, so I don’t think I live near a power plant, but I guess I can’t be totally

Speaker 1: certain. Oh, just kidding. So I can’t wait to dig into what you do. I mean, I also worked at APT, by the way, early in my career, doing the same thing. Were you in the Connecticut office?

Speaker 2: Yep. In Darien?

Speaker 1: Yeah. Yeah. Me too. I was there. Let’s see, ninety-nine to two thousand, something like that. So a good twenty-three years ago. But what a great education! I really learned to be a consultant there. They’re just so good at being polished consultants. So I learned my chops there. It was great. I value that time a lot. So Johnson & Johnson, let’s talk about that a little bit. What’s your favorite Johnson & Johnson product? Oh, that’s hard, to have a favorite. There’s so many.

Speaker 2: So many. That’s a tricky one now that we’ve spun off our consumer business. Right? So we recently spun off our consumer business, and that is now Kenvue, a totally separate entity. So, you know, but, I mean, it was the Band-Aid. That was the answer. It was the Band-Aid. And the reason was because that was the easiest thing to explain to my children when they asked me what I did. Right? I helped hire the people that made the Band-Aids. Right? And I helped make sure that they were good at their jobs. And so now, I help make sure people are good at their jobs who are creating, you know, robots for surgery and creating cancer drugs, which is very cool if you’re a grown-up. And robots are cool if you’re a kid, but robots for surgery... that leads to a lot more questions.

Speaker 1: Yeah, and the COVID vaccine, right? I mean, didn’t you guys come up with one of those? That’s pretty good. It helped a lot of people. So it’s really crazy. I’m not making this up. I went through a list of the consumer products right before the call to pick my favorite, and I also chose Band-Aid. And I chose Band-Aid, A, because of the commercial jingle: “I am stuck on Band-Aid brand, ’cause Band-Aid’s stuck on me.” You know, I remember that, because I watched a lot of TV. And also I just remember all the great memories of being cut and bruised and my mom putting a Band-Aid on me. And, you know, every kid puts Band-Aids on when they don’t even have a wound. At least I did, and my kid does.

Speaker 2: Oh, yeah. 

Speaker 1: There’s something about those Band-Aids.

Speaker 2: We put Band-Aids on their tummies for hurt feelings.

Speaker 1: Yeah. Yeah. Yeah. There you go. 

Speaker 2: And Band-Aids were on the very first shuttle, I believe, that went up into space.

Speaker 1: Oh, wow.

Speaker 2: It makes sense. Of course, if you’re an astronaut and you’re going into space, you might need a Band-Aid. But we’re pretty proud of that.

Speaker 1: Yeah, you have a space suit on, but, you know, if you got a cut, that would be disgusting, because in zero gravity you’d have blood just floating everywhere in the air. That’s crazy. So you would need a Band-Aid. I hope, and it seems like, they wouldn’t have to use those to fix anything in the actual spacecraft.

Speaker 2: You know, Apollo 13 style?

Speaker 1: Yeah. Exactly. Just stick a gauze pad or the little round ones on there. Well, cool. So we’re calibrated as to where we’ve worked and what our favorite product is. That’s a lot of fun. And there’s so much to talk about today. It’s very special when you have someone who is, you know, a good I-O psychologist who has the opportunity to run large-scale testing programs that are broad within the organization. It shows the organization really cares about it, because it is a differentiator, I can promise everybody, when you do it at that level and that scope, and it’s not just, “hey, we got a problem in our call center, come on in,” you know, because that’s typical. And I as a consultant typically end up in those situations where there isn’t an I-O and it’s usually a little less strategic. So when I get to talk about and hear about strategic programs, that’s amazing. So tell us a little bit about it. I mean, in your own words, just describe what the program has accomplished and how the business really views it, because obviously they see it as an asset.

Speaker 2: Yeah. I mean, I’m really lucky that our business sees it as an asset. I think that’s really such a benefit of working for such a scientific company: they value the science in all aspects. Right? Because HR gets, you know, a bad rap in the world for just being touchy-feely and just being the people people. Right?

Speaker 1: The police 

Speaker 2: The police. Right. Even worse. And, actually, strategic HR can bring so much insight into how people operate. Right? So I sit within a broader group called Decision Science at J&J.

Speaker 1: Oh, cool. 

Speaker 2: Yeah. Decision Science has people analytics, our performance management group, and myself. And our whole mission is to make people smarter about people. That’s what we go out with. Right? And so we bring

Speaker 1: Nice. 

Speaker 2: Data, and we don’t just bring data. We bring data and insights, right, from that data to our decision-makers. Right? So that decision-maker could be one hiring manager hiring a new financial analyst. Right? Or that decision-maker could be a CFO thinking about his or her pipeline, right, and his or her leadership team. Or that decision-maker could be a head of HR thinking about, well, what kind of talent are we gonna need to support this business strategy that the business has planned for twenty twenty-five? Like, what talent are we missing? Right. So those are all the decision-makers that can be so benefited by all the data we can bring, and sometimes they’re aware of what data we can bring, sometimes they’re not. Sometimes they think we have more data than we do, or that we’re gonna share more data than we will. So it runs the whole gamut. Right?

Speaker 1: Right. Right. 

Speaker 2: Right. How do we make sure that we’re really strategically employing our efforts and focusing on the right things? I’ve been spending the past three years now really thinking through our skills strategy at J&J and how we better understand the skills of our people. Yes, that’s where everybody starts, but I actually need that to be the second step, because the first step is to understand the skills needed for all of our jobs, of today and tomorrow. And that’s kind of the unsexy step. Right? I mean, maybe I-Os think it’s a lot

Speaker 1: of work 

Speaker 2: Because it’s... Yeah. I mean, we used to just call it a lot of work. But how do you do that? How do you do that at scale? Right? At such a diverse company, it’s such a fascinating puzzle to solve. And then you can really plan. Then you can be like, okay, these are the skills that we think we’re gonna need. Here are the skills we have. What’s the delta? Oh, and by the way, on that second piece, the skills we have: human beings change all the time. Not that skill level necessarily changes all the time, but

Speaker 1: It can. 

Speaker 2: It can. Right? And so how do you keep that up to date? It’s also a fascinating

Speaker 1: Yeah.

Speaker 2: question as well. So it is. It all fits together.

Speaker 1: I know. And you have to start there. I mean, look, the value of any predictive hiring tool, assessment, or whatever it is, is anchored in the fact that it’s job-related. Without the job-related part, you’re missing the point, essentially. Right? And if you go to HR Tech, boy, all the platforms now, it’s all about skills management and, you know, projecting skills forward and filling future roles. I mean, if there’s one, there’s ten, you know, at least, big platforms that are doing those things, with AI or otherwise. And in those, there’s gotta be some way to figure out what the skills are. I mean, I think there’s a lot of AI focus on that. But there’s also a lot of, hey, let’s bust out that O*NET, you know. Which, actually, the O*NET people, I learned at the Leading Edge Consortium, are connecting it to GPT. So there’ll be, you know, some really cool stuff there, and we’ll have our obligatory and fascinating GPT conversation in a minute. I wanna get more educated myself, and our audience, just on, you know, what it’s like to have such an awesome purview and influence in an organization that’s bought in. And, you know, the light bulb went off for me on the science part of it. Right? You’re a science-based company. I also found, like, financial services love assessments, because they’re numbers people. They’re predicting stuff. Right? And anybody who does that has a scientific mindset of, oh, we collect data and we make decisions on that data, duh, you know. And good data. It’s great. So do you all build your own assessments at all? Or do you use a multitude of vendors? Let’s not name any vendors, try to stay neutral here. But, like, tell us a little bit about the blend or the mix of the tools that you all use.

Speaker 2: For the most part, we use external vendors, strong scientific, psychometric partners

Speaker 1: in the assessment space.

Speaker 2: We have created our own assessments in the past. So a few years back, we had created a very, very early-on-in-the-selection-funnel Credo assessment. So the Johnson & Johnson Credo is foundational. It’s everything we do at Johnson & Johnson, if you’re not familiar with it. It’s four paragraphs that really guide our whole business. Right? So it talks about our responsibilities. Our first responsibility is to the doctors and nurses, the mothers and fathers, the patients, everyone who uses our products. The second responsibility, the second paragraph, is to our employees. So I spend a lot of time thinking about that second paragraph of the Credo, our responsibilities to our employees. We also have responsibilities to the communities that we serve, and we have responsibilities back to the shareholders to make a fair profit. So it really helps guide everything we do. And because it’s so foundational to every job, we actually did create a Credo assessment. We used it really early on in the funnel to screen out. What’s interesting about that is we were never able to get it fully global. There were so many country-specific differences

Speaker 1: Right. 

Speaker 2: That we had to navigate. And then, it was still working fine to really narrow the hiring funnel early on. But then, what continues to happen in the assessment world: J&J decided to switch the systems we were using from an applicant tracking perspective, and we couldn’t make it work. We couldn’t make the assessment work the way we wanted it to work. So this is a real challenge that happens all the time. Right? Because this isn’t something you learn in grad school. Right? Like, I’m all about the psychometrics. I’m all about the good assessment. But then I have some requirements for it. Right? Like, many people apply to J&J, apply to more than one job over time. Well, I didn’t want you taking this test every time. I wanted

Speaker 1: No. Not at all. 

Speaker 2: So then the system requirement was that we knew who you were, and if you applied again within a certain time frame, you didn’t have to take it. Your score just

Speaker 1: Yeah. 

Speaker 2: With you. Smart. Right? But it couldn’t necessarily always do that, because sometimes the score was associated with the req, the job req.

Speaker 1: The job req versus... Yeah. That’s how those things work.

Speaker 2: A person. Yeah. And, like, little things like that. So they’re like, oh, so, Christina, are you okay if they do it again every time? And I was like, no, I’m not okay. No. No. And so, you know, there’s that whole balance of what you’re okay with, what you can live with, what you can’t live with, and then how you’re prioritizing things. Right? Because you gotta prioritize the candidate experience, of course. You gotta prioritize, of course, the validity of the assessment. You gotta prioritize that this is gonna be useful. You gotta prioritize the cost of everything. So, long story short, but all that to say, we don’t have that assessment anymore. Not because it didn’t work as an assessment. It was great as an assessment. Because it didn’t work practically in our system ecosystem.

Speaker 1: Yeah. I mean, that’s an age-old story. So, you know, the overarching values assessment is something I’ve had a lot of experience with, helping build those and advising on those. And the interesting thing I find is that typically, when you get in there and you get the company values, they’re not very orthogonal, you know? They’re like, oh, there’s fifteen different things mixed into this description. And when you write an assessment on them, you gotta kinda tease it apart, but you can’t rewrite those things. So there’s challenges there. And I think those are good and meaningful; they certainly don’t tell the whole story, but if you’re just screening on the broad question of, is this person kind of a fit for how we see things, I think that works pretty well. And definitely the tech stuff. I mean, you know, it’s the reality. We’re working within a technology stack and system, and it doesn’t all fit together. I think that’s another one of the things you see with ATSs and stuff. They typically don’t have their own, you know, assessment engine built into them, and so they don’t think about it from our point of view. But, you know, they’re the big kid on the block, essentially, so we’ve got to be subservient. So that’s an issue you face. Right? And everybody faces that, and it’s just about compromises and choosing your battles. But what would you say? Because one of the things that I think is important for our listeners, who are both talent acquisition folks and also, you know, psychologists, and mostly selection people probably, is existing inside the bigger business, existing inside, you know, HR, as we talked about. Or, I guess, you transcend talent acquisition. But what’s the biggest challenge that you have operating underneath that umbrella, in terms of your freedom to make a difference? Or, you know, the headaches. What are you banging your head against the wall about that you would like to change if you could?

Speaker 2: Oh, what an interesting question! I’m gonna... I have a couple answers. I have a couple answers.

Speaker 1: Go ahead. Alright.

Speaker 2: The first one is a more external answer, about what’s happening externally in the field that drives me crazy. I don’t know why so many of the vendors that exist today think we don’t care about validity. So, of course, as an I-O, I care about validity. But just from a business perspective, I’m not gonna spend a million dollars on an assessment if it doesn’t predict job performance.

Speaker 1: Why would you? 

Speaker 2: Because I can flip a coin for free. Right? I can hire every tenth person for free.

Speaker 1: You know, a quarter or a dollar.

Speaker 2: That’s true. That’s true. I was gonna use a quarter because it’s easier. So I can flip a coin for twenty-five cents.

Speaker 1: That’s true. That’s true. 

Speaker 2: Right. And I have made the argument on very frustrating days. I have made the argument: I can save J&J millions of dollars right now. We’ll just hire every tenth person who applies. Done. Over time, I won’t have bias. Right? Over time, that should even out. Or a random number generator is gonna be better than every tenth. But, you know, over time, that’ll all wash out. So it’ll be fair.

Speaker 1: What about a chicken? A chicken might do it? You know how they have chickens pick stocks and stuff?

Speaker 2: Oh, yeah. Yeah. That’s good. With their beak? Yes. Right? But then people forget... no, no, you need the person to do the job. Oh, you need the person to do the job.

Speaker 1: Yeah. 

Speaker 2: Alright. So if you need the person 

Speaker 1: to

Speaker 2: do the job, then we need to understand what the job is and what the job requires, and then we’ll measure people on those things. And I don’t understand how that’s getting lost in the conversation, when it’s the most important thing.

Speaker 1: Yeah. Yeah. I know. Because it takes a lot of effort to get there. Right? And I think a lot of vendors, they’re not really on the hook for that, because the company never gets the data to show if it works or not. I mean, that’s one of the big things we get called in for: hey, is this assessment working? As a third party, come in and take a look at it. So that’s an important piece of it.

Speaker 2: Yeah. So that’s the first thing I bang my head against. And then it gets even more nuanced: some vendors get really sophisticated, and then they’re like, okay, I can prove that it predicts job performance. And I don’t know if you’ve seen this in any vendors you’ve looked at, but there was some vendor that was showing me that they had a point-eight correlation with

Speaker 1: Yeah. Yeah.

Speaker 2: Their assessment and job performance. And it’s, like, the point eight

Speaker 1: Yeah. What kind of correlation?

Speaker 2: Let’s talk about that. 

Speaker 1: Yeah. 

Speaker 2: Why? Because they’re like, isn’t this great? Isn’t this great? And so if someone says, look at this point-eight correlation, isn’t this great, that says to me that they either don’t understand what they’re doing, or they’re actually overfitting their model on us.

Speaker 1: Yeah. Or correcting it with all kinds of crazy corrections.

Speaker 2: They’re correcting. On purpose. And so I just am gonna give them the benefit of the doubt and believe that they don’t understand, because I’m gonna believe the best. But the overfitting... okay, so how did you get to that point-eight correlation? Because, cool. You know, at point eight, I can go home. We can all go home. We have this perfect test now.

Speaker 1: Oh, man. I mean, we’ve solved the world’s problems here with that. 

Speaker 2: Yeah. Right. And then you’re looking at, okay, let me look at how they measured job performance, and how they built the test. And lo and behold, it was the same thing. They just measured the same thing.

Speaker 1: Right. Right. Right. 

Speaker 2: Right. They did 

Speaker 1: Oh, yeah. Yeah. Sure. You would expect a correlation there. High.

Speaker 2: Yeah. So, and I wanna be careful. I don’t wanna bash all vendors. And I do wanna continue to encourage innovation. I want companies out there trying new things. Absolutely. Keep trying new things. That’s how science works. Keep trying, keep trying, keep trying. But science also works because we all think critically about it, and ask, like, hey, is that correct or not? Hey, what’s really going on? Hey, what problem is that solving?

Speaker 1: So as you talk about job performance, you know, sometimes random thoughts  come into my head. So what’s the worst job you ever had? 

Speaker 2: Oh, I’ve been very lucky. I have not had terrible jobs.

Speaker 1: Yeah.

Speaker 2: I have collected my favorite job titles over the years. 

Speaker 1: Oh, okay. What are those? 

Speaker 2: So one of the worst jobs that I do think about when I’m trying to motivate myself and I’m having a bad day: sewer grout service worker.

Speaker 1: Nice. That’s a good one. Like, where did you even see that? In the DOT?

Speaker 2: Jefferson County, Alabama. So in Jefferson County, Alabama, there’s a sewer grout service worker job.

Speaker 1: Uh-huh. 

Speaker 2: And it’s a county job, so it would have all the benefits and the good things. But that job is: you fix the grout in the sewers.

Speaker 1: Yeah. Yeah. You’re in there. 

Speaker 2: You’re in there? 

Speaker 1: Gotta do it. Somebody’s gotta do it, you know? 

Speaker 2: Wow. You know, someone applies for that job. There’s a list for that job. Someone gets that job, and hopefully it brings home a paycheck to feed their family, and that’s great. And I am so privileged that I don’t have to do that job.

Speaker 1: Yeah. Yeah. I think, you know, you probably just are able to switch it off, just like I think about a surgeon or something. Like, I don’t wanna cut people, but they just walk in there. It’s what they do. They’re able to compartmentalize it. And I guess the smell would be tough, but maybe you have a respirator, you got something going on.

Speaker 2: Even the boring jobs that I’ve had in my life, I think I’ve been able to have fun with them. So, like, in college, one of my jobs was I worked at the student employment center. And basically, the major job task was, when other students would drop off their time cards for the other jobs they worked, we would make sure their math was right and then initial their math. And so it was a lot of just reviewing time cards and initialing, reviewing time cards and initialing. But I’ll tell you, it was just such a break from school. It was such a break from classes. It was such a break. And it was one of those jobs where, at the end of it, you had a whole set of time cards that you had done. Right?

Speaker 1: And having that finished product, there’s a lot to that, you know. Yeah. There’s a lot to that detachment. Yeah. I would say, for me... I mean, you know, they’ve got that show Dirty Jobs. There’s all kind of stuff on there. I haven’t really watched it a lot. But my own personal one: I was an assistant housekeeper at a Holiday Inn.

Speaker 2: Really? 

Speaker 1: Uh-huh. And I found my way there. You know, I liked working hard and having those kinds of jobs coming up. I did a lot of, you know, blue-collar jobs as a teenager and stuff, but I got in a little bit of trouble. Let’s just say my dad knew the manager of the... it was the downtown one in Knoxville, where I’m from. It’s a big high-rise one, you know. A lot of rooms, a lot of stuff going through there. And he’s like, I’m gonna call my friend, and you know what? You’re gonna pay for what you just broke by working this job. And, you know, I was folding laundry, washing glasses, scrubbing the hallways down, stripping rooms. So I got to see a lot of rooms after people left them, and, you know, there’s all kind of stuff in there that you would find. It was a formative job for me. But every time I’m at any kind of lodging establishment, you know, I always have a lot of connection... I don’t know exactly what I would say... a lot of sympathy for folks that are doing that, because it’s not an easy job. So one thing that I would say is, I’ve seen vendors say, hey, we can predict success with ninety percent accuracy, eighty percent accuracy. So I’ve always told people, you know, that’s a giant red flag. That is impossible to do. I mean, unless you get super lucky, like a million monkeys with a million typewriters could write one piece or something. But I mean, that’s just human nature: being human, and predicting things humans will do, is nigh impossible at any kind of accuracy level like that. The good news is, even at the lower level of accuracy we achieve, we save millions of dollars. You know, we’re doing that.

Speaker 2: Yes. Yes. And I would argue people don’t actually want us to be able to do that. Right? Nobody actually wants you and I to be able to predict human behavior with eighty to ninety percent accuracy. That gets really concerning

Speaker 1: Oh. 

Speaker 2: If that’s 

Speaker 1: Yeah. 

Speaker 2: True. Yeah. Right? So, no, I don’t think we can do that. And then there’s also the question of: one day, if we are able to, should we?

Speaker 1: Yeah. Yeah. I mean... no. Yeah. I don’t know. It reminds me of kind of an existential conversation I had last night with my brother-in-law, where we were just talking about, you know, the afterlife and all that kind of stuff. And I was like, you know what? Life would suck if we actually knew what came next. I mean, maybe it is nirvana and it’s amazing, and there’s just fluffy marshmallows floating around and everything you love is there, you know, and there’s nothing wrong with that. But, I mean, what would that do? People would just commit suicide so they could get there, all the time. Right? I mean, there’s certain things like that we don’t want to know, even though we kinda have a quest for it. It’s just so interesting. And that ties into... I mean, here is just, like, a giant red carpet for a segue into the innovation and technology portion of our show, which is a mandatory, perfunctory thing that we have to do. So let’s just start the ball rolling there with innovation. What’s the innovation you’re most proud of that you’ve seen happen in your world? I’m sure J&J is doing all kind of crazy innovation, that’s amazing, but more about what you do. You know, how open is the door to innovation? And, you know, what are some things that you’re either proud of or maybe considering doing? I don’t know. Just talk to us about your job and innovation.

Speaker 2: Oh, that’s a great question. I’m really proud that we have been able to test things. J&J has been open to us trying things out. For example, I’m of the firm belief that assessments are gonna become more and more passive in the future. Right? Instead of having to take a test, we’re gonna be inferring things. We’ve tested out some inference assessments.

Speaker 1: Oh, very cool. Alright. Oh my gosh.

Speaker 2: Yeah. We have already tested them out with a pilot group. We are not using the results for decision-making. We’re still evaluating the results. In most cases, the accuracy is not where I need it to be yet. It’s just not there yet, but we haven’t stopped testing. And we haven’t stopped trying, and we’re gonna try another one soon. And we’re gonna keep trying. And I’ve been really proud that this organization lets us keep trying, and is open to us being like, hey, it not working is also a success.

Speaker 1: Yeah. Of course it is. Failure is nothing but a door to future success. That’s the way I look at it. So passive assessment is a passion of mine, and offline we should talk about that, because I’m curious. I’m working on, and I’m not trying to sell you at all here, but I’m sitting here looking at what comes next for our field, and at a product I might be working on. And I’m talking to people who are also working on passive assessment type things, and, you know, I believe in it. I call it stealth assessment. I guess there’s all kinds of stuff.

Speaker 2: Oh, I like stealth assessment.

Speaker 1: There’s all kinds of stuff around, like, does the person know they’re being assessed? We could have another million podcasts on that. But at some point, I’d love to catch up with you. And I think we’ve already started the ball rolling with some interviews of folks who do what we do, or do what you do in companies, to kind of learn a little bit about what they’re doing, etcetera. So we’ll connect on that. But...

Speaker 2: Wait. Can I can I respond to the ethical piece real quick? 

Speaker 1: Yeah. 

Speaker 2: Because I did get to go to The Institute for Workplace Equality conference last week, and the EEOC was talking about this a little bit, and I appreciated the way they were talking about it. Right? They talked a lot about transparency, transparency, transparency, and they were also taking an ADA perspective of: hold on, how will candidates or employees know to ask for accommodations if they don’t know they’re being assessed?

Speaker 1: Yeah. 

Speaker 2: A question 

Speaker 1: Yeah. 

Speaker 2: That Chair Burrows actually asked, so I can’t take credit for that. That was her wording, which was very good wording. And that’s exactly right. I think it’s exactly the right question to ask. And I think also, from an IO perspective, we have a chance to really lead from an ethical perspective as well as a regulatory one. Like, we understand the regulations and we’ll make sure we’re aligned with all the regulations too. But beyond the regulations, I think there’s also this ethical component that I would love to see our field play a bigger hand in

Speaker 1: I’m trying. 

Speaker 2: In terms of... yeah, in terms of, should we be doing this? And how do we do it the right way to be fair to the person? Not just fair as defined by the Uniform Guidelines, but fair. Feels fair.

Speaker 1: Yeah. Yeah. 

Speaker 2: To the person. So I’m a big fan of passive assessment, and I’m a huge fan of transparency and making sure you have consent and making sure the data is used appropriately. And I think those issues are challenging too, because there are cultural differences and country differences sometimes on what people think is appropriate to do with your data and not appropriate to do with your data, what you’re okay with your employer knowing and what you’re not okay with. And the line is not always clear. It gets real slippery. It’s a slippery slope real fast.

Speaker 1: It is. 

Speaker 2: Again, I’m still excited by all that innovation, and I think we bring the right lens to that conversation.

Speaker 1: Yeah. I mean, it’s a multidisciplinary conversation. I think about something, though, like digital exhaust. Right? It’s already happening, but at some point it’ll be codified. And hopefully people like us will get a chance to weigh in, or even be building it. But, you know, you go out there and look at all the particles you’ve left in the ether, and then people start to make judgments about you. I mean, do you need an accommodation for that? What if you’re not creating as much digital exhaust because you haven’t had the opportunity to create it? There’s a million questions like that, and, you know, we’ll never be able to answer them all. That’s another thing that is so fascinating about what we do: people are all different, and the circumstances are different, and now the lid’s been blown off of it. But it’s cool to see that. I remember going to the Leading Edge Consortium in, like, 2019, and you talked there about how you’re looking at innovative stuff. And, you know, innovative stuff is kind of my biggest passion area. I do a lot of stuff that IOs do, but that’s what I really like the most, so I try to pay attention as much as I can to that. I guess what I’m saying is there’s some precedent for this. And at a company your size, with as many applicants and as many employees and as much budget, let’s face it, there’s no reason you shouldn’t be trialing, testing, doing all that. What do you do with a drug? What do you do with a consumer product? You’ve gotta trial it. You know?

Speaker 2: Exactly. You try it. You test it. 

Speaker 1: Hopefully it doesn’t turn them into mutants. You know? I had a friend, a crazy guy, not even in the professional world at all, he’s like a DJ or something. He would participate in drug trials for money. That’s how he made a living; he would go to those things. And he told me a couple times they had to shut them down because, man, all kinds of weird stuff was happening to people. Anyway, it was funny listening to that. When you talk about bad jobs, I don’t think I would want that job either, at all. And I don’t think J&J makes Robitussin, but this guy would also drink a lot of Robitussin, so I don’t know how that confounded these medical experiments he was doing. But anyway, I’m sorry for that divergence there; thinking about that guy cracks me up. And of course, they weren’t J&J drug trials, I’m sure. So, innovation-wise, and this is how we opened and what we were originally gonna focus on, but these are fun conversations that can go in any direction, which is how it should be. Let’s talk a little bit, zooming over innovation. I’m curious, and again, I’m not gonna be able to get away with doing any content without mentioning, you know, large language models, GPT. One of the things I’m fascinated about is the spectrum of how companies are looking at using those tools, because they do create all kinds of privacy issues and all kinds of accuracy issues. But at the same time, I’m starting a thing where I am not afraid to admit that I use the hell out of those tools.

Speaker 2: I love them. 

Speaker 1: And I’m starting this thing on LinkedIn, you know, every Friday: how many hours did you save this week by using GPT? Let’s just admit that we do it because it’s making us better. Right?

Speaker 2: A hundred percent.

Speaker 1: Not that I just take what it spits out and use it verbatim; it’s a good starting point. But justifiably, companies worry. I mean, there’s the thing about Samsung, where some kind of sensitive stuff leaked out because somebody pasted it in. So, you know, there are consequences on a lot of levels, but benefits too. And that’s the paradox of AI, in my mind anyway. It’s like, it’s possibly gonna bring the end of the world, but, boy, are we efficient. We’re making more money, and we have more time to be busier with other stuff. Right? So there’s a duality there, and the die’s been cast, in my mind. Like, we passed the point of no return. This stuff owns us. I surrender to that. I’m not gonna not use it, because it’s awesome. So it’s winning. But how are you guys approaching that, in whatever way you can actually feel comfortable talking about?

Speaker 2: Sure. So we’re really trying to embrace large language models, so much so that we do have an internal tool, a large language model, that’s now available to all employees. Because the toothpaste is out of the tube. It’s not a cat out of the bag; you can put a cat back in the bag. You can’t put the toothpaste back. It’s done. You know when I knew it was done? It was interesting. I was sitting around at the end of last year with other swim parents, watching our kids swim. And there were folks from other companies there with their laptops, right, the working parents.

Speaker 1: Right. Right. 

Speaker 2: They’re doing their performance reviews. 

Speaker 1: Yeah. 

Speaker 2: They were all using ChatGPT to write the performance reviews.

Speaker 1: Yeah. Yeah. Yeah.

Speaker 2: Right? It saves so much time. Right? At a certain point, you’re just practical. You gotta get these done, so put it in there. Right? And I was like, oh, okay. It’s done then. Like, this is it. This is what we’re doing. I understand companies not wanting to put, and companies should not be putting, their private information in a public model that’s training on it.

Speaker 1: Sure. 

Speaker 2: And so we’re really fortunate that we were very quick to create our internal model. I lovingly call her Jenny. I know I’m anthropomorphizing the tool, I understand, but it’s called, you know, Jenny. I am sorry.

Speaker 1: I didn’t even think of that. 

Speaker 2: So I call her Jenny. 

Speaker 1: Right. 

Speaker 2: There is training available, you know, about how to write prompts and how to use it, how not to use it, what to put in and what not to put in, but it’s a secure place for people to go. We also very quickly spun up a generative AI community of practice board, where if you wanna do a larger gen AI project within J&J, you submit that idea to this body, and then folks can ask questions. So it has folks like legal on it. It has risk folks. It has technology folks. It has me. It has other roles looking at the ideas submitted and determining, like, hey, is this a risk or is this not a risk? Is this something we wanna pursue or not? There’s gonna need to be more and more structured governance like that, but at least that’s a start.

Speaker 1: Yeah. There does.

Speaker 2: And my plug for all your listeners is: HR needs to be a part of that. And sometimes the tech folks don’t see why. But there are so many examples of why, in terms of the projects that have come through. If you’re putting together a project with a large language model that then substantially changes the job that the person’s doing, we need to know that from an HR perspective. Is that a different job?

Speaker 1: Yeah. 

Speaker 2: Is it a different job? 

Speaker 1: Well, yeah. 

Speaker 2: Or have you created new jobs? Like, what does that mean? What are the consequences of that? Other questions in terms of, okay, what data are you putting in there? Right? Are you putting in data on people? There’s a lot of talk and thinking about using language models for coaching. I wanna be careful about different kinds of coaching. Like, there’s a difference between the "how should I phrase this" questions you can ask versus broader coaching. I think our learning and coaching professionals should have a really clear point of view on that, because I don’t know if you want a predictive text tool actually being your coach. Or maybe it is helpful if you need help immediately about how to phrase something; maybe that is a fine use of it. I’m not sure. I get really nervous about large language models for job descriptions, particularly when it gets to the minimum qualifications part of the job description and the model suggests that you need a bachelor’s degree for the job. Really? Based on what? Based on, oh, that’s the next logical word? Oh, okay. Yeah.

Speaker 1: Exactly.

Speaker 2: I don’t think that counts as validation. So...

Speaker 1: No. 

Speaker 2: But I’m really excited that we’re trying it, that we have that governance in place, and we have the internal tools. And our different data science organizations across the broader enterprise are able to use large language models in their own instances as well. So, for example, our people analytics team is using large language models on very confidential engagement survey data to look at our qualitative comments. We’re actually very good at J&J at answering our engagement survey every year, so we get a lot of qualitative comments, and we’ve been able to organize, categorize, and summarize those qualitative comments in a way that’s been phenomenal this year. And also, Charles, from an assessment perspective: if it gets that good at categorizing qualitative information, can we move away from quantitative Likert rating scales?

Speaker 1: Oh my gosh. Boom. I hadn’t even thought of that, but yes, we’re gonna be moving away from a lot of the tools we’ve been using to accomplish the same end results. I’ll tell you, I just wrote a little GPT. I haven’t really tested it much yet, but now you can just write your own little GPTs and string them together, and we do a lot of qualitative interviews too. I typically go through those things and code them, then put them into some kind of thing and crunch the numbers. And I’m like, wait a minute, can I just feed this in and get the themes out of this thing? So I built one and I’m gonna give it a try. I tried to do it manually with some of my team a while back and we just hit the wall. You know, I don’t have any serious AI-type people on the team, and I don’t wanna go pay for all that stuff. So anyway, we did it the old-fashioned way, but you’re right. And the idea of Likert scales going away. Wow. That’s cool. I think we are gonna see a lot of stuff going away. I was just in an interesting LinkedIn discussion about cheating using GPT. And I have this idea: what happens when we use GPT to write our assessments and candidates are using GPT to answer the assessments? Then GPT is assessing GPT, and the people...

Speaker 2: Oh, wow.
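[Editor’s note: for listeners curious what that "feed in the comments, get the themes out" workflow looks like, here is a minimal sketch. Everything in it is hypothetical illustration, not anything J&J or Rocket-Hire actually runs: the theme list, the sample comments, and `call_llm`, which is a toy keyword-matching stand-in where a real pipeline would send the prompt to an actual model.]

```python
# Minimal sketch of LLM-assisted theme coding for open-ended comments.
# Hypothetical throughout: themes, sample comments, and call_llm are
# illustrative stand-ins, not a real survey pipeline.

THEMES = ["career growth", "workload", "manager support"]

def build_coding_prompt(comment: str, themes: list[str]) -> str:
    """Assemble the instruction a model would receive for one comment."""
    return (
        "Assign the comment to exactly one of these themes: "
        + ", ".join(themes)
        + f"\nComment: {comment!r}\nTheme:"
    )

def call_llm(prompt: str) -> str:
    """Toy stand-in: keyword-match the comment instead of calling a model."""
    comment = prompt.split("Comment: ")[1].split("\n")[0].lower()
    if "promot" in comment or "career" in comment:
        return "career growth"
    if "hours" in comment or "overtime" in comment:
        return "workload"
    return "manager support"

def code_comments(comments: list[str]) -> dict[str, list[str]]:
    """Group each comment under the theme the (stand-in) model returns."""
    grouped: dict[str, list[str]] = {t: [] for t in THEMES}
    for c in comments:
        theme = call_llm(build_coding_prompt(c, THEMES))
        grouped.setdefault(theme, []).append(c)
    return grouped

comments = [
    "I never see a path to promotion here.",
    "The overtime hours this quarter were brutal.",
]
grouped = code_comments(comments)
```

Swapping `call_llm` for a real model call gives the workflow both speakers describe, and counting the comments under each theme yields the kind of quantitative summary that could eventually stand in for a rating scale.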

Speaker 2: A hundred percent. 

Speaker 1: And that’s the most insane thing. 

Speaker 2: A hundred percent. Have you seen the tools to help you with the interview? GPT will, like, listen to the interview questions and give you the answer, and you can just read it. Yeah. Absolutely. A hundred percent. Also, I do think very soon it’s gonna be GPT emailing another GPT emailing another GPT, because I’m testing Microsoft Copilot right now, which has a large language model. Right? So...

Speaker 1: Explain it for our listeners. Sorry. Yeah.

Speaker 2: Yeah. So it’s Microsoft’s large language model, you know, GPT...

Speaker 1: Yeah.

Speaker 2: Embedded within the Office suite. So Word, PowerPoint, Excel, and Outlook. And it can help you draft your emails. You say what you wanna say, and then you say, like, make this four paragraphs and make it sound really professional. Right. And it is a time saver, from a not-starting-with-a-blank-sheet-of-paper perspective. I have found that it hasn’t quite got my tone. I haven’t found it ever to be funny. I don’t know if you’ve tried to make it funny, but I’m like...

Speaker 1: Oh, absolutely. I’ve made it funny.

Speaker 2: You made it funny? Okay. You gotta teach us.

Speaker 1: Corny jokes. It’s called corny jokes. I interviewed it for my podcast, and I typed in the prompt. I said, respond in a witty and engaging way as a guest. And it would do that. And then I said, tell me a joke, and it told me a joke. And then I said, how did you choose that joke? And it said, the probability that you would laugh at that joke was high, so I put that joke in there. Absolutely. And I poked it, you know, made fun of it a little, and it accepted that, and it never made fun of me back. I don’t know if it’s programmed to do that. But, you know, if you have an hour or whatever, listen to that episode. It was my favorite one, besides this one.

Speaker 2: I will. I will. 

Speaker 1: It’s super entertaining. It’s super entertaining. So, yeah, it can do that stuff. 

Speaker 2: I mean, I think soon, you know, that’s gonna be drafting emails. So it’ll go to someone else, who’s gonna ask GPT to summarize that email and draft the response back, and it’ll just go back and forth. And then, and I do worry about this, when will we meet each other? Right? Like, if that’s how we communicate, will it just be face-to-face conversations or phone calls where we’re gonna be able to really interact?

Speaker 1: We’re gonna be living in our own metaverse. That’s the way I say it too. Our lives will be the metaverse, because we’ll have all these agents doing stuff. And the deepfake possibility, you know. GPT-5, which they’re working on, and with the reset or whatever is happening at OpenAI, this will continue no matter who’s doing it. GPT-5 is gonna be able to do video. So then you’re thinking, you know, I think there’ll be something where you gotta put digital stamps on it and stuff, but people will figure out how to circumvent that. So, yeah, we’re lucky that right now we’re having fun thinking about this stuff. It’s gonna get absolutely crazy, absolutely weird, and we won’t have any control over that, so we’re gonna have to just surf that wave. And all the things we talked about today, assessment, the core idea of predicting how well someone will do on a job or giving someone feedback to make them better at their job, those core ideas and universal truths about dealing with people on the job, those are not gonna change. The way we do it is gonna change. Like, your Likert scale is gonna look like a typewriter. You know? And that’s what’s gonna happen, and good, as long as we know the stuff works. So there’s that piece of it. And I think there’s a lot of hesitancy. The big companies I work with, while they may be doing innovative stuff, maybe not as much as you all, they don’t wanna try... you know, there’s a lot of fear in just putting these things out there, justifiably so. Right? So at what point are we gonna have confidence in this? That’s the thing. When are we gonna have the confidence to let go? To fully let go?

Speaker 2: To fully let go. I don’t know when we’ll have the confidence, but I agree there’s a lot of fear. I wish there was less fear and more just critical thinking and critical evaluation. Because you’re right, it’s not all working right now. We all know that it hallucinates. We all know that it’s not perfect yet. Right? And so we’re trying to understand more, but again, it should just be science. The way innovation works: we will try it, we will critically evaluate, we will test and try again, and we will repeat, and we will

Speaker 1: repeat. Exactly. 

Speaker 2: I am happy to see so many people trying to educate themselves so quickly on this, to get past the fear.

Speaker 1: Yeah. 

Speaker 2: To get to the critical evaluation stage. But you’re right, there’s definite harm that can come from these tools, a hundred percent. And we should be aware of those harms, and we should guard against them. And we should continue to innovate. Yeah.

Speaker 1: A Likert scale never hurt anybody, you know. I don’t think so. But yeah, no, I totally agree with that, and I think we have to get past that. You know, if you ask GPT those questions, it’s a little biased about itself. In other words, it’s gonna tell you not to fear anything and how you can trust it. I asked it on my thing, you know, why it hallucinates. And I asked it, you know, do you take LSD, and that’s why you hallucinate? No, I’m purely a digital model, I can’t take any substances. So that was pretty hilarious. Right? But...

Speaker 2: Let’s just say no, I guess.

Speaker 1: Yeah, just say no, but it won’t say no. It is...

Speaker 2: It was...

Speaker 1: Fine-tuned to please. And it said, well, you know, there are things where there are low probabilities, and I don’t know. And if I kept telling you that I don’t know, would you keep using me? You know, that kind of thing.

Speaker 2: Yeah. 

Speaker 1: There you have it. So, wow, we’re running out of time. What a great conversation. I feel like I could keep doing this for days. Just tell everybody... you know, I say this every time, and everybody says, yeah, follow me on LinkedIn. So how do people keep track of you and all that kind of stuff? You’ll probably say what I just said, but go ahead.

Speaker 2: Well, in addition to following me on LinkedIn, you know what I’m gonna say? Find me at SIOP. Find me at conferences. I’m so excited conferences are back.

Speaker 1: Yeah.

Speaker 2: I really appreciate in-person conversation with people. I mean, knock on wood, I will be at SIOP this coming year.

Speaker 1: Yeah. 

Speaker 2: Find me there. I’d love to talk to you. 

Speaker 1: Hello. 

Speaker 2: I was at LEC. You know, I try to make those.

Speaker 1: Me too. That’s so fun.

Speaker 2: We have such a phenomenal profession. Right? There are just amazing people who are doing amazing work and thinking through really hard problems in the world, and it’s fascinating. And so...

Speaker 1: It is. 

Speaker 2: Come say hi at SIOP.

Speaker 1: I’ll send my agent to talk to your agent, you know, and they’ll let us know what the results of the conversation were. Yeah. You know?

Speaker 2: They could just summarize it for me in three bullets. 

Speaker 1: Yeah. Well, sometimes, as much as I love it, that’s what I want. I like those lightning rounds where you get to the point right away. Yes. And, you know, anyway, we gotta close out; we’re closing in on an hour. So thank you so much for a great conversation. I’ve got notes I took of things I never thought of, which is my real secret goal of all this. So great job today. I appreciate it.

Speaker 2: Thank you so much for having me. This was such a fun conversation, Charles, and you know I’ll talk to you anytime.

Speaker 1: Yeah. Good deal. As we wind down today’s episode, to our listeners: I want to remind you to check out our website, rockethire.com, and learn more about our latest line of business, which is auditing and advising on AI-based hiring tools and talent assessment tools. Take a look at the site. There’s a really awesome FAQ document around New York City Local Law 144 that should answer all your questions about that complex and untested piece of legislation. And guess what? There’s gonna be more to come. So check us out. We’re here to help.