Building Digital Coaches Using AI and Psychology

Featuring: Dr. Matt Barney

“When GPT-3.5 came out, I was like, oh my gosh, we don’t need to train the AI anymore! We just need to prime it. It’s so similar to psychology! Instead of programming in a language, we’d do it in English.”

In this episode of Science 4-Hire, I chew the digital fat with Dr. Matt Barney, founder and CEO of XLNC, a company that is using AI to create the future of leadership coaching. 


Dr. Barney and I laud the transformative role of AI in talent development and assessment. With his deep expertise in IO psychology, AI, and leadership development, Dr. Barney shares insights from his journey integrating these domains. We share the same passion for “distributed” interventions such as assessment and coaching. By distributed I mean the on-demand, ongoing delivery of relevant information for measuring and improving performance. We talk about how, with this modality, the world we live in becomes a measurement and feedback session, liberating people practices from the confines of any one specific location or situation.


The discussion covers the evolution of AI from a tool requiring extensive training to one that can be effectively primed for diverse applications using plain old human speech or writing.  We look at this phenomenon through the lens of leadership development and coaching.


Dr. Barney articulates the challenges and breakthroughs in AI, particularly generative AI’s impact on talent development and coaching. He describes his innovative approaches, like his LeaderAmp platform and his recently created “CialdiniBot,” a digital persona of renowned social psychologist and master of the science of persuasion, Robert Cialdini. The CialdiniBot is a living example of what becomes possible when we blend psychological insights with AI for more effective and powerful leadership and persuasion training at scale.


The episode goes beyond technical discussions, delving into the humanistic and ethical considerations of AI application in talent development. Dr. Barney’s unique blend of IO psychology expertise and AI innovation offers listeners a glimpse into the future of talent development and assessment, where AI not only enhances efficiency but also ensures ethical and human-centric approaches.


Key Takeaways:


  • Listeners will gain a comprehensive understanding of the current state and future possibilities of AI in talent development, along with practical insights into the integration of AI with IO psychology principles.
  • Specifically, how to use LLMs that integrate AI with IO psychology to create effective leadership training and assessment at scale via coaching personas.
  • The importance of trustworthy and explainable AI in ethical talent development practices.
  • Future prospects of AI in transforming talent assessment and development processes, including how LLMs are making the future possible.


Follow Dr. Barney by visiting his website or by finding him on LinkedIn.


Full episode transcript


Speaker 0: Welcome to Science 4-Hire, with your host, Dr. Charles Handler. Science 4-Hire provides thirty minutes of enlightenment on best practices and news from the front lines of the employment testing universe.

Speaker 1: Hello, and welcome to the latest edition of Science 4-Hire. I am your host, Dr. Charles Handler, and I have a parade of really great guests. That’s why I do this. And my guest today is Dr. Matt Barney. He’s the founder and CEO of XLNC and the purveyor of TrueMind.ai, and he’s been at this stuff for a very long time.

He’s one of those people that every time I talk to him, there’s, like, smoke coming out of my ears. Not in the negative way, not because I’m mad, but because, like, so much is going on that I’m trying to process. And we’ve shared a love for some particular permutations of technology, let’s say, over the years. And that’s one of the reasons that I connected recently with him, to catch up with what he’s doing with all the generative AI stuff going on. I knew if there’s anybody who’s out there with this stuff, it would be doc Matt here. So I’m gonna let him introduce himself and, you know, tell us as we get going a little bit about what you’re doing right now and what exciting stuff you’re working on.

So have at it.

Speaker 2: Appreciate it. I’m thrilled to be here. As you know, I’ve always worked at kind of the intersection of tech, psychology, and business. I’ve done a bunch with tech throughout my career, even back to grad school days. A bunch of patents.

Almost all of those were the old-school, rules-based AI type stuff, or just regular tech innovations. But about ten years ago, I focused pretty much only on AI. And now, in the last year, I’ve overhauled all that with the new large language model stuff that’s explainable and trustworthy. And so there are lots of cool IO applications of this, and it’s liberating and exciting to talk to you today about it, because for so many years I couldn’t get stuff done that I can now.

Speaker 1: Oh, yeah. I mean, you probably have a more specific thing in mind when you say that, like accomplishing things. But for me, generative AI is about getting stuff done, man. I mean, I’m getting so much stuff done. And I’m still having these moments where all of a sudden I think, wait a minute, why am I not using ChatGPT? I was helping my son study for a fourth-grade math test last night, and he forgot his review book, and he’s also in French immersion. So I’m like, oh, man, I gotta figure this out.

Wait a minute. I don’t know French. What am I doing? I go to ChatGPT: generate me some two-part word problems with multiplication and long division in French.

Boom. Here we go. And then he was hopefully gonna do really well on this test today. So it’s little things like that, and then big things like, oh, how are we gonna change the world with this? How are we gonna change hiring?

With this? And what the hell is this thing gonna do to us? You know? So, and recently, when we were talking, you said, I know you’ve been longtime Bay Area, you said you were moving to, is it New Hampshire?

Speaker 2: You remember well. Yes. I have been fond of New Hampshire for many years. No income tax, no sales tax, high density of entrepreneurs. The opposite of the People’s Republic of California, where I’ve been since I left India about ten years ago.

Speaker 1: Yeah. Yeah. Interesting. Let’s see. And I’m trying to remember this this is really I was  just thinking, okay, what’s the capital of New Hampshire?  

I know Vermont. Well, 

Speaker 2: democrat guy. 

Speaker 1: Right? Democrat. 

Speaker 2: Packet. If you could do 

Speaker 1: that Yeah. 

Speaker 2: Mid New England accent. 

Speaker 1: Yeah. You got it down pretty well. You better get that down before you move over there. Well, you’re gonna have snow and Northeastern

Speaker 2: Johnson, Charles. So, like, that that doesn’t bother me. I grew up in Madison. So  yeah. It Oh, nice.  


Speaker 1: Yeah. Yeah. Cool. So let’s rewind a little and talk a little bit about, you know, LeaderAmp, if you don’t mind. Like, tell us a little bit about that product. When I saw that, I was like, wow, the possibilities. I know it’s not a selection product, but for me, my mind started spinning about what the future would look like. And this was ten years ago, maybe even a little more. So, yeah, talk about that journey, you know, that part of your journey, and what was really cool about it, what it did.

Speaker 2: Happy to. And actually, you’re right, it didn’t start out as a selection product, but it ended up being partially one in the app.

Speaker 1: Oh, cool. 

Speaker 2: You know, it came from scratching itches I had when I was mostly in multinationals, at Merck, at Motorola, at Sutter Health, before I ran the Infosys Leadership Institute. I just could never scale IO psychology that well with development. And it frustrated me. So at Sutter, I looked after eighty-two boards, and at Infosys, I looked after the senior-most leaders and all their successors around the world, but it was just a nightmare to coach them all. And, you know, I was in India, so it was a nightmare for me to get the team I needed.

And what really frustrated me was, I worked for these billionaires, the chairman of the company, and people wanted coaching like they wanted their Lamborghini or their Rolex. It was a status symbol. These are high-net-worth people in India, billionaires and millionaires in dollar terms. And so this was a way to de-risk coaching in between sessions, and the selection part of this was: how do I find people that aren’t gonna squander this investment? And I actually went back to them when I left; they liked my AI and LeaderAmp enough.

That I said, alright: give me your top five thousand senior female leaders that are good performers, and I’ll find you the ones that are highest potential, because they’re only allowed to come into the AI coaching process if they deliberately and consistently practice and get good scores on computer adaptive tests, including three-sixties. So it was sort of an action-learning internal selection of who gets the extra investment, because they’re not gonna squander it. So anyway, LeaderAmp was machine-learning-based and rules-based psychology AI, explainable, but it still had computer adaptive questions. And it’s just so hard to get the data for those machine learning models.

You know? Yeah. 

Speaker 1: The training data, you mean? 

Speaker 2: Exactly. I mean, what we consider large in psychology is ridiculously small in computer science. And, you know, I had samples of five thousand CEOs talking to investors, high stakes, and almost none of them were even good at persuasion. So one of the AIs won an award in twenty eighteen, when I was at LeaderAmp, from SIOP, the Bray-Howard Award. I could never finish it, because if you don’t have the data, you can’t teach the stupid things. And so that’s why I left LeaderAmp, almost two years ago now. My co-founder still runs it, but a year ago is when GPT-3.5 Turbo came out, and that really changed everything for me, because it solved all these problems. Yeah.

Speaker 1: Yeah. I know. I was just thinking in my mind, well, you wish you had, you know, GPT for that. But tell us a little bit, like, describe what it actually did. I interacted with it, you know.

And I’ll give my interpretation after you kind of lay it down for us, what I thought the possibilities in it were. But tell us a little bit about how it worked. Like, if I’m using this app, this application that you created, what am I doing?

Speaker 2: And you’re asking about LeaderAmp, the LeaderAmp stuff?

Speaker 1: Yeah. Yeah. We’ll get to the new stuff later. All that stuff.

Speaker 2: So the old-school computer adaptive assessments that I did, even at Infosys, are way shorter, way more precise. Right? Who wants a pain-in-the-ass, slow assessment? Who wants one that’s imprecise? But it was still a super big hassle to get the three-sixty surveys. And you can imagine at scale, in a multinational: twenty-five thousand raters in fifty countries. So instead of all that, what the app would do is let you do the self-assessment, and while you wait, you go nudge your stakeholders yourself.

Speaker 1: Uh-huh. 

Speaker 2: Like, so it’s not just email spam. It’s like, hey, I care about your feedback, Charles. Would you mind spending a few minutes? So, like, it just shortened that cycle time. So the app helped you manage the three-sixty.

But while you’re waiting for it, it lets you schedule artificially intelligent coaching. So we would calibrate psychologist-authored low, medium, and high coaching about what they ought to do in whatever it is they were trying to grow.

Speaker 1: Right.

Speaker 2: And they could schedule, at least once a week, exactly the day and time. The app would help you do that. And then they could practice, and the idea is they practice deliberately, consistently. If they have a coach, they also get reminded at the end of their day, after they practiced, to journal about it, and the coach can then see the journal entry. The journal entry was fed into machine-learning-based and rules-based AI for emotion.

So the coach could then intervene and see how they’re doing, you know, before they’re  unpleasantly surprised and the the client didn’t do squat the next time they’re together. So I the  ideal use case was high touch, high-tech, that the AI, that the app did assessment, and artificial  intelligence coaching, and nudge reminders. And then the human did did the other parts around  helping them, you know, in between sessions ideally. 

Speaker 1: Right. So, in other words, the three-sixty would inform, or the leader would inform, on things that I might need to know about or work on. And then the product would say, hey, if you need to strengthen, you know, your resilience, go run a marathon. I don’t know. You know, stuff like that.


Speaker 2: Yeah. Well, so it had both. The self-assessment dimensions are things that the individual is a better judge of, you know, like your optimism; your colleagues may not have any point of view about your identity, your grittiness, that kind of thing. Whereas the three-sixty is things like charisma. You know, it really doesn’t matter how awesome and exciting you think you are; it’s how everybody else responds to your charismatic behavior tactics, from John Antonakis. And if we just stick with charisma, John’s done seminal work on what’s easy and what’s hard: it’s not hard to talk about the moral purpose of the team; it’s damn hard to get exciting and tell a great story and use your nonverbals and your voice and so on. So it would be appropriately challenging. The psychometric calibration and the AI would only give them stuff that’s in their Goldilocks zone. Not too hard, not too easy, just in their sweet spot.

Speaker 1: Uh-huh. Wow. So I always thought charisma is something that you kinda either have or you don’t. You’re born with it or not. Or, yeah,

Speaker 2: let me tell you about it. It’s 

Speaker 1: how can I have more charisma? What? Just offhand, like, what do I need to do?

Speaker 2: John Antonakis has done the seminal work in this area. He’s the outgoing editor of The Leadership Quarterly. He’s literally done experiments. He’s Greek, so he read the original Greek and pulled that into his studies. But he literally randomly assigned people to experiments where he was able to show that not only did subordinates outperform when their leader was exciting,

but they were also seen as more leader-like. And the findings are so powerful they got published in Nature, which never happens in IO psychology. Right? I mean, he’s done it on economic variables too, where he’s literally counted how much money people donate, or how much money the people trying to get philanthropic donations get.

This is in the UK. So it’s things like three-part lists, and using nonverbal behaviors, and telling a great story, talking about the moral purpose of the team. Those things are totally teachable. And, you know, in LeaderAmp, I could get a leader, wherever they were on charisma, five to thirty percent better in one quarter, which is, like... it took me, like, a year to get that before.

Right? Because it’s hard to get people to practice and Exactly. And they’re Right. 

Speaker 1: Right. Ah, interesting. Yeah. I mean, I guess there’s a difference between how you’re operationally defining charisma here, which I believe is, you know, perfectly great. When I think about it, I immediately just think of, oh, there are people who, when they just walk in the room, all of a sudden, you know, they’ve got this light around them. And it happens a lot with really famous people. Just like everybody, in the travels of your life you’ve had a chance to be in the same room with some famous person, and I’m not gonna sit here and, you know, name names, you know... like that Ringo Starr. Excuse me.


Speaker 2: Let me, no, let me build on that. I mean, you know, there’s a psychology word for what you just described. It’s called the romance of leadership.

Right? There’s this flashiness. And so, to your point, I was working for these guys that created the entire software industry in India when I was at Infosys. And, like, literally, when the founder was retiring, at the last shareholder meeting, I had to be there on the stage with him. He literally had elderly women singing songs to him, serenading him.

They’d never met him. He’s such a rock star, and he’s got that aura that people attribute to him.

Speaker 1: And sometimes that can go too far. Like, I’m a big fan of Rush, you know, the band Rush, so I was watching this documentary. There are a bunch of different documentaries, but, you know, in one of them he was talking, he’s a very private guy anyway, but he was talking about, like, when they first had their first big commercial breakthrough, people coming to his house, you know, being on his lawn, trying to see him coming out the door. I mean, that kind of fanaticism, to me, man, that’s just not a good thing. But anyway, that’s people for you. They behave in all kinds of crazy ways, and we’re just trying to help people be better. I mean, that’s the way I look at it. Right. So, cool. Well, what I liked about that product was, you know, the distributed, kind of ongoing dialogue you’re having with technology and psychology at the same time. So you’re not just sitting down in one place for a coaching session. You’re not just, like, reading something offline, whatever.

It’s there when you need it and not there when you don’t, and it’s serving a purpose where the bits of information compile over time into a clearer picture that you can act on. I think about that for hiring. Although, you know, free-range hiring, where the predictor is just you moving around doing stuff and being evaluated on it... we’re not ready for that. I mean, the hard part of all that is really not necessarily even the technology, but the paradigm: assessing someone for a job usually happens in a tightly controlled window, where someone knows that it’s happening and has opted into it, and it happens and then ends. But this kind of thing, it’s like the paradigm is all busted, and there are reasons for that paradigm, legal reasons, privacy reasons, whatever. So it might be a pipe dream to get to that point, but you could simulate it, you know?

Speaker 2: I don’t think so. No. No. See, I think even right now, a part of your free-range vision is possible. So, like, TestGorilla.

Right? It’s got the ability to ask video questions, capture the data, do the transcription. Now, that’s still normal, kind of structured-interview type settings. So it’s not so weird. We’re not talking Twitter data.

The European Union just came out with a bunch of rules that make it even more difficult for us. But we could still... it’s not fully free-range, but just from purely what they say, we can now measure stuff with extreme precision with this new AI. Yeah.

Speaker 1: Oh, yeah. Yeah. I mean, that’s exactly right. And so, to me, again, I envision it as: you’ve got your phone, you’re doing stuff, maybe your regular job, maybe there’s even simulated stuff coming over your phone, like a call from somebody that’s part of this synthetic organization. You’ve gotta talk to them, and you’ve gotta, like, go online, check your email, get the information. So a little bit like an assessment center, but it’s not all sitting down in one place. And, you know, boy, if you could have AI-generated collaborators, coworkers, clients, whatever, and you’re interacting with those. Totally.

Speaker 2: And 

Speaker 1: that that might stretch over a couple days. Oh, man. I got a big presentation  tomorrow. I just got the stuff tonight. What am I gonna do?  

Again, there is also the typical versus maximum performance idea. Right? So if you’re if you’re  in a 

Speaker 2: That’s right. 

Speaker 1: Situation like that, you’re gonna potentially behave differently. But we know that; it’s something we’ve been dealing with for a very long time. So, new stuff then: XLNC, TrueMind AI. I mean, when we talked kind of pre-recording, again, you were kind of smoking my ears talking a little bit about, well, obviously, generative AI is part of this. So let’s start a little bit with: how are you using generative AI right now to build new things? Tell us a little bit about that.

Speaker 2: Sure. And actually, it piggybacks on what you just said. I’m already doing pieces of your free-range vision, not just in a pre-hire sense. And I think it’s less hostile in developmental work than it is in pre-hire. Right?

The litigation factors. The science is the same; it’s just that the litigation sensitivity, when it’s about my development, is much less than if it’s about me getting promoted or getting the job. Right?

Speaker 1: Yeah. There are no life-altering decisions being made based on that. Right? I mean, there could be exclusion, for sure, but you’re right. I mean, the lens is always on hiring.

Speaker 2: That’s right. But what I’m doing now... so when GPT-3.5 came out, I was like, oh my gosh, we don’t need to train the AI anymore. We just need to prime it. It’s so similar to psychology. Instead of programming in a language, we’d do it in English, and we’d frankly use psychology’s ways. And so what I’ve been able to do is take normal psychology of low, medium, and high for whatever dimension. I started with persuasion, because I worked with Bob Cialdini for many, many years, and there are lots of uses in organizations. Right? Leaders persuade; salespeople, lawyers,

right? They persuade for a living. And so... but they often get some of it wrong. So I’ve been able

Speaker 1: to selling. We’re all selling all the time. Doesn’t matter if you don’t think you’re selling anything; if you’re trying to navigate your way through a day, you’re selling to somebody.

Speaker 2: And you wanna do it in a wise way, right, in a way that floats all boats. You don’t wanna burn a relationship out with it. And that’s really Cialdini’s approach. So I built CialdiniBot to complement... with friends, I helped Bob Cialdini launch a new institute, called the Cialdini Institute, that’s got, like, a flipped-classroom type of approach to training and coaching. But even there, you know, what are people doing in between coaching sessions?

Or when they’re done with their coaching. We all have blind spots, and we often overlook Cialdini’s principles. So the AI can not only coach you but, when you’re ready, measure anything you’ve got. So, to your point about free range: right now I’ve got people using CialdiniBot with Zoom sessions, or they scrape a website and measure it all and get feedback. So it’s not as graceful as your vision of, like, I don’t have to do much and I get all this information and track it. But what I want is, before I go pitch the board, or before I go try and raise my capital with the VCs,

if I wanna see if my approach is good or if I’m missing something big, there’s nothing like this. Yeah. So that’s my exciting next step with AI. Because, like, anything you can dream up, what took me years at LeaderAmp, I can do in about a month now.
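The "prime it, don't train it" shift Dr. Barney keeps returning to can be shown in a few lines. This is a hedged sketch: the persona wording below is my own illustration (not the Cialdini Institute's materials), and the role/content message format simply follows the common chat-completion convention; no model is actually called here.

```python
# Instead of gathering training data for a bespoke model, the coach persona and
# the psychology are written in plain English and sent as a system prompt.

PRINCIPLES = ["reciprocity", "commitment", "social proof",
              "authority", "liking", "scarcity", "unity"]

def build_persuasion_coach_prompt(user_text: str) -> list[dict]:
    """Assemble chat messages that prime a generic LLM into a persuasion coach."""
    system = (
        "You are a persuasion coach grounded in Cialdini's principles: "
        + ", ".join(PRINCIPLES) + ". "
        "Identify which principles the user's draft already uses, which are "
        "missing, and suggest one concrete, ethical improvement for each gap."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

messages = build_persuasion_coach_prompt(
    "Here is my pitch to the board: we should fund this project because I say so."
)
```

The entire "training" step is the English paragraph in `system`; swapping in a different persona or construct means editing prose, not collecting data.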

Speaker 1: Yeah. Yeah. Well, I mean, let’s take a quick pause too, because when you talked about Bob Cialdini, I’m gonna be honest, I had no idea who Bob Cialdini was. So maybe a lot of people do, but tell us a little bit about that person and how you’re capturing his essence in your tool, you know? Like, what is he all about?


Speaker 2: Professor Cialdini is the world’s most cited living social psychologist in the area of persuasion. He’s like the pioneering researcher, but he’s not just a geeky academic. He is unique because even in the sixties, before you and I were born, Charles, he was surreptitiously taking jobs with salespeople, you know, at restaurants, doing cold calling, with lawyers. The Hare Krishnas, Hare Ramas, back in the seventies. He would study these people and create these very clever experiments

Speaker 1: at the airport. 

Speaker 2: Exactly. You know, the Hare Krishnas... for those who are old enough, I remember them too. But for those who don’t know, back in those days, the Hare Krishnas would be there. They’d give you a paper flower, people would give them money, and then they’d just throw the paper flower out. And these weren’t Hare Krishna supporters, but it was the rule of reciprocity, where you feel an obligation to give back. He wrote some consumer-grade New York Times bestsellers in the late eighties and then multiple others. And he’s at, like, World Economic Forum level, a rare psychologist, up there with Daniel Kahneman, who won the Nobel Prize. He’s at that level of psychology. But what’s very, very special about him is that, unlike most of us geeky IOs, he’s super good at tailoring his speech to his audience. That’s why he can get six figures for a one-hour talk with, you know, Fortune 50 types. I’ve worked with him for about twenty years, back to my Motorola days. I required all my Motorola Six Sigma Master Black Belts to become credentialed, you know, proficient in his stuff. My twenty eighteen award from SIOP was based on his stuff. I just never could finish it until now.

Right. But his approach is seven universal principles that move people in our direction in a totally ethical way. So they’re not hard. They’re superficially, deceptively easy to understand, but the devil’s in the proverbial details of how you use them fully.

Speaker 1: Yeah. Yeah. And you know what? You’ve persuaded me to like Cialdini quite a bit. And as a psychologist, man, I’m like, well, how did I not know about this guy? But I played around; you set me up with one.

Speaker 2: Yeah. Yeah. I know. That’s why. 

Speaker 1: Yeah. But, man, social psychologists have all the fun. Think about, like, the Stanford Prison Experiment. Or, I remember a really interesting social psychology study. It was about, and you don’t see this anymore either, people with gun racks in their pickup trucks.

Like, I grew up in Tennessee. Right? 

Speaker 2: And Oh, yeah. 

Speaker 1: I’m proudly... although my parents are from the East, and my family roots are from Eastern Europe, I grew up kind of a redneck a little bit. I had a gun rack, but no gun, because I bought a truck and it had one in there. But nowadays, boy, you can’t do that. And it makes intuitive sense, but people follow a vehicle with a gun rack at a much greater distance than they would one without a gun rack. Right? Just little fun stuff like that, where you’re like, well, I wonder how people react in this situation and why? It’s pretty cool. And there’s definitely, you know, bleed-over. But I played around with the CialdiniBot you set me up with. So I asked it, just goofy, you know, how do I persuade my wife to let me buy a new, expensive lawnmower? And it came up with some pretty good, clear suggestions. I mean, the dimensions are there, and then it says try this, try this, and try this. And that’s pretty good coaching. I don’t really need a lawnmower; I was just coming up with something random. But anyway, so I’ve interacted with that. So where is the generative AI? Like, how are you incorporating that?

Like you said, the high, medium, low you were telling me about. Right? One of the things, and I’ll just set this up too, is how I’m thinking about generative AI in products. Right?

I broke it down into kind of three things. There’s the building of the assessment. Right? What scenarios? What questions do we ask?

What signal? What are we doing to elicit signal from somebody? Then there’s the actual deployment of it, so they’re interacting with something and there’s an LLM in there that is part of that interaction. And then there’s the analysis: how are you looking at all this stream of data?

How are you training it to evaluate complex data and make a judgment about it? Right? So it’s scoring, essentially. So those are the three nodes where you can use this stuff in the assessment paradigm. But tell us a little bit... my point being, on that diversion, we were talking about the rating scales, or the performance levels, and how we don’t have to sit there writing them anymore. How many times have you... I mean, you’ve maybe been functioning at a less in-the-trenches level than me, but I know in my career I have written narrative feedback statements ad nauseam, where I’m just taking one, cloning it from low to medium, changing the language a little. It takes a hell of a lot of time and it’s no fun.

And every situation and client is different, so you can’t just use the same ones all the time. Anyway, so there you go.

Speaker 2: No, I’m with you. So there are kinda three ways I’m addressing those that really go beyond, and are liberating from, our classic hassles like you just described. One is kinda classic computer science with new large language models. I can talk about that.

The others are active and passive. They’re purely psychometric. But I start with: what are the job-related constructs I care about? Right? And Cialdini’s got very clear science. Okay, cool, we know what matters, what predicts. But we wanna make sure that damn LLM is gonna behave itself. So one of the things I did... I used to write them manually, but now I’ve got GPT-4 saying: give me a low, medium, and high item. And instead of an item, it’s a prompt.

Which is just an English description of what you want the LLM to do, and I’m basically writing it so that the large language model is a synthetic rater, like a human in our assessment center.
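To make the "prompt as item, LLM as synthetic rater" idea concrete, here is a minimal sketch. The low/medium/high rubric text is invented for illustration (the real prompts are psychologist-authored), and `stub_llm_rater` is a crude keyword stub standing in for an actual LLM call.

```python
# Each "item" is a rubric prompt telling an LLM rater what low, medium, and
# high responses look like; the model then returns a level, like a trained
# human assessor would. Rubric wording and keywords are illustrative only.

RUBRIC = {
    "low":    "No persuasion principle is used; the request is a bare demand.",
    "medium": "One principle (e.g. authority) is used, but without tailoring.",
    "high":   "Several principles are combined and tailored to the audience.",
}

def rubric_prompt(construct: str, text: str) -> str:
    """Assemble the rating instruction an LLM rater would receive."""
    levels = "\n".join(f"- {lvl}: {desc}" for lvl, desc in RUBRIC.items())
    return (f"Act as a trained assessment-center rater for '{construct}'.\n"
            f"Levels:\n{levels}\n"
            f"Rate the following text and answer with one level.\n---\n{text}")

def stub_llm_rater(prompt: str) -> str:
    """Stand-in for the LLM: rates the embedded text by crude keyword count."""
    text = prompt.rsplit("---\n", 1)[1].lower()
    hits = sum(kw in text for kw in ("research shows", "as you said",
                                     "people like you", "limited"))
    return "high" if hits >= 2 else "medium" if hits == 1 else "low"

score = stub_llm_rater(rubric_prompt(
    "persuasion", "Research shows this works, and people like you adopt it."))
```

Swapping the stub for a real model call leaves the item-writing workflow unchanged: the psychologist's craft moves from authoring Likert anchors to authoring rubric prompts.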

Speaker 1: Wow. So we need that. I mean, honestly, I’ve said it for a long time. Remember, back in... you never hear this term anymore, but when I was in the Bay Area in two thousand, it was “killer app.” Right?

The next killer app, the thing that’s gonna take us to the next level. And I’ve always thought, in what we do, boy, if you could get technology to replace a trained human rater making complex, synthesized judgments, not necessarily even “did you answer this SJT right,” which is, you know, kind of a baby version of that, but assessment centers, etcetera. Boy, that is an unlock, because then you can scale this stuff without losing the stuff that we psychologists are really so good at building and training people on. So, yeah, that’s what you’re talking about. Right?

Speaker 2: It is, and it's mandatory. I mean, if any of you have used Amazon Mechanical Turk or any of its competitors, they've always been a hassle because of a lot of bogus stuff, but it's much worse now because workers are using GPT-4 to generate their answers, so now it's garbage.

Speaker 1: Yeah. 

Speaker 2: And the beauty of it is, this stuff works so much better than humans, like early medical-grade levels of reliability, if you tune LLMs right with the right prompts.

Speaker 1: Yeah. Yeah. And we've seen machine learning, more simplified machine learning, do that, you know, just using a whole lot of outcomes as training data, right, and having experts do the same rating and looking for the consistency there, and then making sure the AI is trained to do that. But this is more than that. So how is this more than some of the simpler applications that we've seen so far, you know?

Speaker 2: That's right. And what's beautiful about the LLMs is, just like you'd write different items in a traditional test, you can have completely different paradigms for thinking about the latent trait. You know, Bob Hogan does a slightly different take on prudence than the NEO-PI did on conscientiousness.

So if you're making that kind of instrument, you can have those compete and literally tell the LLM to act as Bob Hogan versus Costa and McCrae, literally in different prompts, or at the same time. So that's at the assessment level. Then we make sure we calibrate both the LLMs and the prompts and the people on the same ruler with a technique from psychometrics, from Rasch measurement, called the Many-Facet Rasch Model. And then once we get it, we stick that into an inverted CAT. I invented this in twenty ten: a computer adaptive test, only for LLMs. So we already know what prompts are hard and easy. We know the biases of each of the LLMs being used. And as it ingests the text that you're exchanging with CialdiniBot, that's what it's using, step by step, for each of Cialdini's principles, to estimate where you are. And if you want more precision, of course, it takes longer than if you want something quick and dirty.
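As a rough sketch of the two ideas just mentioned, and assuming nothing about the actual implementation: the Many-Facet Rasch Model puts person ability, prompt difficulty, and rater (here, LLM) severity on one logit ruler, and the adaptive "inverted CAT" step then picks whichever calibrated prompt is most informative at the current ability estimate. All numbers and prompt names below are invented for illustration.

```python
import math

def mfrm_prob(theta: float, prompt_difficulty: float, rater_severity: float) -> float:
    """Many-Facet Rasch Model, dichotomous case: probability of success given
    person ability, prompt difficulty, and rater severity on one logit scale."""
    logit = theta - prompt_difficulty - rater_severity
    return 1.0 / (1.0 + math.exp(-logit))

# A harsher LLM rater lowers the expected score for the same person and prompt.
p_lenient = mfrm_prob(theta=1.0, prompt_difficulty=0.5, rater_severity=-0.5)
p_harsh = mfrm_prob(theta=1.0, prompt_difficulty=0.5, rater_severity=1.0)

def next_prompt(theta_hat: float, difficulties: dict) -> str:
    """Adaptive (CAT-style) step: the most informative prompt is the one whose
    calibrated difficulty sits closest to the current ability estimate."""
    return min(difficulties, key=lambda name: abs(difficulties[name] - theta_hat))
```

Because every facet lives on the same ruler, a known LLM bias can simply be entered as a severity term rather than retraining anything.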

Speaker 1: Inverted CAT, that sounds like a yoga pose or something. I'd never heard of that before. Wow. So, yes. Cool.

I'm just trying to think about how that all works. So when you said feed it, you know, is it Bob Hogan? Is it Costa and McCrae? Are you actually doing some kind of retrieval-augmented generation, like RAG? Are you just fine-tuning?

Speaker 2: Mhmm. 

Speaker 1: Are you feeding it anything? Are you feeding it papers, or what?

Speaker 2: Feeding it. I'm only feeding it the prompts, to make sure the construct and all sub-facets are completely nailed. And I'm doing it not just for the prompts, but here's the kicker. With traditional psychometrics, it always irritates us that we get more precision in the middle of the distribution than we do in the tails. Right?

Mhmm. It's kind of a bathtub curve of error information, in classical test theory and in IRT. So there's more noise in the tails. And, of course, that's where you really want to find high performers or, you know, low performers. So where we care the most, we have the least information. Why? Because of the normal distribution of our populations. We just don't have big samples in the tails, by definition. So what I've done with GPT-4 is also create samples specifically at those levels. Right. So I over-engineer the sample: I had real human samples plus these synthetic ones. And now I've got super small standard errors and excellent calibrations no matter what level I care about.
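The tail-precision point comes down to a few lines of Rasch arithmetic: the standard error at a given level is one over the square root of the information there, and information is only contributed by cases targeted near that level. So adding synthetic cases at an extreme level shrinks the error exactly where the tails are starved. A toy illustration (the same logic applies symmetrically to calibrating items from a person sample):

```python
import math

def p(theta: float, b: float) -> float:
    """Rasch probability of success for ability theta on difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def standard_error(theta: float, difficulties: list) -> float:
    """Standard error of measurement: 1 / sqrt(information), where each
    case contributes P * (1 - P) information at this ability level."""
    info = sum(p(theta, b) * (1 - p(theta, b)) for b in difficulties)
    return 1.0 / math.sqrt(info)

centered = [0.0] * 10                # a sample clustered in the middle hump
augmented = [0.0] * 10 + [2.0] * 5   # plus synthetic cases at the high tail

se_middle_only = standard_error(2.0, centered)   # noisy out in the tail
se_augmented = standard_error(2.0, augmented)    # much tighter there
```

With only middle-targeted cases, a person two logits out is measured mostly by cases that are too easy for them, so each contributes little information; the targeted synthetic cases restore it.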

Speaker 1: Interesting. See, it's interesting, because when I talk to people about assessment tools, I always say, yeah, well, you know, picking from the middle hump of a distribution is hard, because there's a lot of people lumped in there and they're pretty similar. But, man, if you can just hire the people on the right-hand side, you know, one, two standard deviations above, sure. And on the left-hand side, you can kick those people out.

You know, who cares about the middle as much? But what you're saying is the middle has got the precision, just because of the numbers. Right? Is that basically it?

Speaker 2: Your logic's absolutely sound. I'm just pointing out the confidence intervals. You've got a lot more noise at those high levels than you do in the middle, just because you've got a ton of information in the middle to calibrate those items.

Speaker 1: Interesting. Wow. So when you're talking about kind of feeding it the information, are you talking, like, you're anchoring it in an objective definition, saying, okay.

Speaker 2: Correct. 

Speaker 1: This is how Bob Hogan defines personality. Here's the specific stuff. Now use this. Right? You're telling it: now use this. You are now Bob Hogan in this situation. Exactly. From this. Yeah.

Speaker 2: Exactly. And the computer scientists are so far behind us psychologists in measurement. It's embarrassing. They treat our instruments like some kind of gold standard. They see what percent of the time their machines can

Speaker 1: Yeah. Their validation concept is very different from ours.

Speaker 2: It would never pass Psych 101, for us. But right. So what I did is I took their hypotheses and our normal psychology ideas, like behaviorally anchored rating scales. You know, BARS with examples outperform all the computer science stuff in assessment. And why? Because these are things trying to emulate humans, and we know something about humans. They don't know; they're dustbowl empirical. So just like you said, multiple examples. It turned out even content-relevant emojis helped the LLMs get better.

Speaker 1: I didn't understand that.

Speaker 2: I threw it in there because that's true in persuasion and priming, cognitive psych stuff, but it works with these too.

Speaker 1: So explain that real quickly. If you're throwing emojis in there, what exactly are they doing?

Speaker 2: It's just like with humans, where you prime them and then they behave a little differently because of what came before. It's the same thing here. Only, you know, it's obvious with text. This is about persuasion and the principle of reciprocity, you know, they're giving you gifts. It's not obvious, but it's trying to get the LLM to kind of activate its thinking in those areas, and it turns out content-relevant emojis do that.

Like, if I'm talking about giving a gift, something the other person cares about, little gift emojis, it helps them get a little bit better. Not a big effect, but it's enough that, alright, I'm throwing them in. Right.
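The emoji-priming trick takes only a few lines to sketch. The construct-to-emoji mapping below is invented purely for illustration; the real effect, as stated, was small and measured empirically per construct.

```python
# Content-relevant emoji priming: decorate a rating prompt with emojis that
# match the construct being scored, nudging the LLM's attention toward it.
CONSTRUCT_EMOJI = {
    "reciprocity": "🎁🤝",   # gift-giving, mutual exchange
    "scarcity": "⏳",        # limited time or supply
    "authority": "🎓",       # credible expertise
}

def prime_prompt(base_prompt: str, construct: str) -> str:
    """Wrap the prompt in construct-relevant emojis, if any are defined."""
    emoji = CONSTRUCT_EMOJI.get(construct, "")
    return f"{emoji} {base_prompt} {emoji}".strip() if emoji else base_prompt

primed = prime_prompt("Rate how well this message offers a genuine gift.",
                      "reciprocity")
```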

Speaker 1: What about the poop emoji? 

Speaker 2: I didn't try that one. Workplace-appropriate emojis only.

Speaker 1: Well, just a joke. But LLMs love emojis, man. They talk to you in emojis, no problem. And it's usually the right emoji to have in there, which is interesting.

Right? Because they see it. And I play around with DALL-E a lot.

Speaker 2: Yeah. 

Speaker 1: And now, you know, look, we're moving toward this multimodal situation, where that's just the next step to having your own personal agent that you just tell to do stuff. Right. Totally. You can accomplish multiple phases of one project within the walls of the LLM. Right?

So that's pretty interesting. But with DALL-E, what I think is really fascinating, and you may know more about this than me: when you actually download an image, or you look at it, it'll describe it. It'll say, this is doing this, this, and this. And I'm like, is that enough? Like, it's thinking in words and images at the same time. Some of that gets scrambled around pretty interestingly. But on the back end of this, is it really just that text describing the image that we see as the label, or is there more detail under there? Do you know what I'm talking about?

Speaker 2: I do. I do. The way I like to think of this is, kind of, think back to your Appalachian Tennessee days as a kid. Like, those kids wouldn't have the same worldview. They wouldn't be as worldly.

And if they happened to be one of the Klansmen, they would have a very biased worldview compared to you, growing up the way you did. Right? And so I think it's the same here. With DALL-E, or Midjourney, or GPT-4, we don't know exactly what data's in there, and they don't know why they work.

Just like, you know, you teach a kid a bunch of racist, sexist stuff from the KKK, like, duh, we would kind of predict that person might not come out as well as those of us who don't think and work that way. So same thing here. It matters. The beauty of what I'm offering to psychology types is, no matter how goofy that is, we just have to keep it in its cage. We keep it in a cage that we define with the science, and we don't let it out.

So it doesn't embarrass us, because, yeah, DALL-E 2 sometimes generates racist or sexist stuff, because it's got stereotypes, just like humans do.

Speaker 1: Yeah. Yeah. I haven't seen any of that, but it'll add extra limbs. I mean, if you really study the stuff, you can see these weird things. And the writing, I'm sure they'll get it dialed in, but the writing, where you ask it to write stuff, I was doing some color stuff with my kid and it came up with, like, "Girlpool," you know, as the name of a color. I mean, it's just not quite there. To me, it's pretty damn funny. You know? Right. There's a newsletter, something like Ridiculous AI or something like that.

It just shows you a bunch of images. Kind of like, you know, there used to be a site where you could look at mispronunciations and miswritings of English in different countries and stuff. It is funny. Or Damn You Auto Correct. I know it's not exactly the same thing, but did you ever read that? Damn You Auto Correct.

Speaker 2: Yeah. Those are good. 

Speaker 1: Oh my god, I was rolling on the floor laughing with some of those. So much inappropriate stuff coming back from autocorrect.

It's fascinating stuff. And, I mean, jeez. So you're describing solving a lot of problems, advancing things in a way that we haven't been able to do. I think the sky's the limit with the creativity people have in terms of how, right, so, like, I'm learning and thinking of new things.

So, like, fast forward five years with the same kind of stuff we've been talking about. What's it gonna look like? Five years, ten years, I don't know. Putting years around it is nuts, because with generative, it's just happening so fast.

It's true.

Speaker 2: It's totally true. Now, to your point, let me talk more about the feedback and what you do with measurement, because the measurement's already there. What I'm more excited about is, the multimodal gets you the measurement, like, all the damn time, which is so amazing. Instead of snapshots, your free-range thing, you get it all the time.

That's huge. That's huge for so many things, not just science, but practice: feedback. On the feedback side, we have parts of this today, but not multimodal, where we can make sure the feedback is appropriately challenging, it's in the Goldilocks zone, and it's using all the science about how you give feedback and convince them to use it. So I think it's gonna evolve beyond, like, a Grammarly for your job.

It's gonna be more like your personal assistant that's monitoring you and giving you feedback and helping you. The performance management stuff that's a classic bugbear for our field is gonna go away. It's gonna be super transparent. It's gonna be more about, well, where am I? Where's my team?

And what do we do to help achieve our shared goals? You know? So ProMES, Pritchard's work, I think it's a renaissance, and it's super liberating for those of us that are keen on working with it.
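The "Goldilocks zone" idea of appropriately challenging feedback can be sketched as a simple filter: recommend development goals that sit a little above the person's current measured level, neither trivial nor out of reach. The band width and goal names here are invented for illustration.

```python
def goldilocks_goals(theta_hat: float, goal_difficulties: dict,
                     low: float = 0.25, high: float = 1.0) -> list:
    """Pick development goals that sit a little above the person's current
    level (on the same logit ruler): challenging but reachable. The band
    [theta_hat + low, theta_hat + high] is an assumption for this sketch."""
    return sorted(
        name for name, difficulty in goal_difficulties.items()
        if theta_hat + low <= difficulty <= theta_hat + high
    )

goals = goldilocks_goals(
    theta_hat=0.0,
    goal_difficulties={"too_easy": -1.0, "stretch": 0.6, "too_hard": 2.5},
)
```

Because the same ruler already carries the person's ongoing estimate, this selection can update continuously rather than once per review cycle.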

Speaker 1: Yeah. Yeah. So thinking about that, I just remember, you know, anytime you talk to somebody who's gotta do their performance reviews: oh my god, I've got fifty performance reviews to do. It's that time of year.

And it's a slog, and then part of it is, okay, man, are you really able, with that workload, to give people your best? So imagine that. I think the downside of that is, well, there's no human touch. Do you just want a bot giving you your performance review? Or is it that the bot is equipping the performance reviewer with all the information to then personalize the feedback?

Speaker 2: No, I still have the human in there. It totally flips the relationship. Nobody likes performance evaluations, but instead of that, it's performance management, where the measurement's ubiquitous and the job of the manager is to coach you and help remove barriers. Right?

It’s not 

Speaker 1: all at once. Yeah. Yeah. And even just pulse surveys and feedback, you know, ongoing feedback. I mean, that's nothing new.

But my little thing I just wrote down in my notebook is, like, I think we're moving from snapshots to a movie. Right. Like, that's how this stuff works. And these moments in time give you some signal, but a continuous flow of movie data gives you a lot more signal, you know. It tells a richer story.

Speaker 2: Including your pulse surveys. Those, today, we could do, like, once a week, for the team, for the org, just from email, from Slack, from a Discord scrape. And no more pain-in-the-ass questionnaires, if you wanted.

Speaker 1: Yeah. Well, questionnaires, there's no doubt in my mind that they're going away. Like, I mean, we are going to see an era where, you know, a Likert scale will be in a museum next to the Tyrannosaurus Rex skeleton

Speaker 2: or something, 

Speaker 1: most likely. But I think we have a ways to go. I'll tell you, you know, in my travels, I talk to a lot of people who use assessment. And, like, I still work on projects which are straight-up knowledge-based, multiple-choice questions. Right?

So, and it's interesting to think about, it's just like electric cars replacing ICE, internal combustion, cars. Right? I mean, the electric car stuff's there. You know, we know how to use it. It's getting better and better, getting more user-friendly. Whether it saves the environment more is debatable, given the battery, you know, there's all kinds of things there I don't wanna get into.

But, you know, it's kind of the same thing. Like, we're still reliant on the old stuff that we know works and can deal with. But slowly, slowly, the new stuff's gonna take over, and on what timeline, we don't really know. I think there's a fear, and maybe justifiably so, that, like, if you can't touch the item or the thing that's measuring, can you trust it?

You know, that’s that’s something that I feel like is 

Speaker 2: Well, possibly. Let me touch on that. Two pieces to my answer: one science and one technical. There's massive work with the older AI from a developmental psychologist named Theo Dawson, of a company called Lectica.

She's done what I've just described to you, but the old-school way, with a lot of manual coding, with pure knowledge, the kind of knowledge items you're just describing, and purely based on text. So I'm optimistic there. But the touch-and-feel-the-item thing, it's a double-edged sword. You know, the downside is what you're saying: if people like that work, that's a challenge, you know, especially subject matter experts.

So that's their worldview. But on the other hand, SMEs often require you to destroy some of the best items because they don't like them for some stupid non-psychometric reason.

Speaker 1: I know. I know. 

Speaker 2: So that's hugely liberating, to take that out and say, you don't really like going through that pain-in-the-ass knowledge test? Well, give me the following samples and I don't need them anymore. And then you just make sure, and this is where the European Union and the new New York City AI legal rules require audits. I think that's where this is going. The psychometric audits will look at the prompts, make sure your psychometrics are good. But do non-psychologists really want the pain-in-the-ass questions? I don't think so. I think they just wanna trust that the information's good and it's painless.

Speaker 1: Yeah. Exactly. I mean, we're just taking a layer out using technology. But my point is, I think it's gonna be, even with the capabilities growing, which are always gonna grow faster than adoption, in the enterprise space, especially in the new era of intense regulation, which we're getting ready to go into. It's happening, and it's always gonna be behind the technology. That's nothing new. I mean, remember twenty-some years ago, the big brouhaha, uh-huh, oh, UIT, unproctored internet testing? Oh my god, we can't allow this, no way. Well, guess what? Allow it or not, it's happening, so you better get used to it. And, I mean, it's some of the same stuff, but regulation-wise, there wasn't anything there then. Now we've got a lot of regulation. We don't know how it's all gonna be, you know, exactly executed. The thing that's interesting to me about regulation, I just happened to be in this conversation yesterday: you might have a New York law, a couple of California laws, the No Robot Bosses Act or whatever, I think, you know, all these things. And then the EU law. Well, if you're trying to comply, you've got all these disparate things that might have different requirements. To me, though, you fall back on an ethical framework, because if you look at the ethics frameworks, they're all basically the same.

Right? The basic truths about how to be equitable. Right? And so it's really like, okay.

If you can check these boxes, you might have to run a dataset a different way or something to satisfy one of these things. But then also, the big thing we've never really had to deal with is that vendors and purveyors of these things are gonna have to be audited. And nobody knows what the hell that's gonna look like. And there's just, you know, so much opportunity even there for chicanery and crap. So, you know, just do it right.

Speaker 2: So I'm here to help with that. I mean, I'm excited, because I'm calling these guardrails, for the computer scientists who don't know anything about our stuff, or care. But to the point you're making, Cialdini has a very excellent model of ethics: whether the information is true, whether it's natural to the situation, not contrived, and whether it's wise. And I have three hundred sixty-six percent more precision in my measures of ethics, using Cialdini's framework in my AI, than physician credentialing tests.

And so you could see where you could have an active guardrail and make sure GPT-4 isn't allowed to send anything that's not ethical. Right?
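The active guardrail described here can be sketched as a gate in front of the generator: every draft is scored on Cialdini's three ethics checks (true, natural, wise) before it's allowed out. The `score` callable below is a stand-in for a calibrated LLM judge; the stub and threshold are invented for illustration.

```python
# Ethics guardrail sketch: block any draft that fails a check before sending.
ETHICS_CHECKS = ("true", "natural", "wise")

def guardrail(draft: str, score, threshold: float = 0.5):
    """Return (draft, []) if every ethics check clears the threshold;
    otherwise return (None, failed_checks) so the draft is never sent."""
    failures = [check for check in ETHICS_CHECKS
                if score(draft, check) < threshold]
    return (draft, []) if not failures else (None, failures)

# Stub judge: flags anything containing an obviously contrived scare claim.
stub = lambda draft, check: 0.1 if "act now or lose everything" in draft else 0.9

sent, failed = guardrail("Here is the evidence you asked for.", stub)
blocked, why = guardrail("act now or lose everything", stub)
```

Keeping the checks outside the model, rather than relying on the model's own alignment, is what makes the guarantee auditable.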

Speaker 1: Yeah. And it does that a little bit on its own, and then this is more of a

Speaker 2: hardcore quality guarantee. They can't guarantee it the way I can.

Speaker 1: Yeah. That's the other interesting thing: just like any security-based thing, there's people who figure out how to get around it, and then there's the how-do-we-patch-that, and then there's another worm that comes out, and, you know, we may always be dealing with that. I think in the early days of ChatGPT, at least public-facing, like, a little less than a year ago, there was a lot of, hey, we can get it to divulge this or that.

Speaker 2: Right. 

Speaker 1: You know, or whatever it is. You don't hear about that as much these days, because I'm sure they're patching stuff.

Speaker 2: That's exactly right. But it's a mixed blessing for us assessment types. That's why my current AI is way better than GPT-4, using open source, an ensemble of them. Because when GPT-4 did that to us in the earlier part of the year, they didn't tell us they were doing it. And we noticed the defect density was getting really bad, because, like, half the time it wouldn't spit out the integer that we want.

It wouldn't behave itself. With the open-source ones, you have so much more control than we do with them.

Speaker 1: Yeah. Yeah. Well, you do. I'm working with a friend of mine who has a company called Bookend AI that creates a secure wrapper for open-source models. Right? And then you can fine-tune them in there. And I'm really learning a lot. You know, first I thought, oh, you just build a product on ChatGPT. No. Because it's proprietary and owned. You can't get in there and change it. You can change it by your in and out, like, collectively a million people interacting with it can help fine-tune it on stuff. But in general, you don't own that thing. You can't make it serve your own purposes.

Speaker 2: And what I found is you don't even need to do the fine-tuning. You just need to do the fine priming, and then you want it predictable, so that it's gonna behave the same way in your validation study as it does in the future. And you can't guarantee that if you just use GPT-4.
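The "fine priming" idea, as described, amounts to freezing everything around the model instead of its weights: a pinned model version, a fixed system prompt, and deterministic decoding, so validation-time behavior matches production. A minimal sketch, with the model name and prompt invented for illustration:

```python
import hashlib

# Frozen priming configuration: everything but the user's text is fixed,
# so the same inputs always produce the same request to the model.
PRIMING = {
    "model": "an-open-source-model-v1.0",  # pinned, never silently upgraded
    "system_prompt": "You are a synthetic assessment rater. Reply with one integer.",
    "temperature": 0.0,                     # deterministic decoding
}

def primed_request(user_text: str) -> dict:
    """Build the exact request sent to the model; only the user text varies."""
    return {**PRIMING, "user": user_text}

def config_fingerprint() -> str:
    """Hash the frozen configuration so a validation study can later assert
    that the production setup has not drifted."""
    blob = "|".join(f"{key}={PRIMING[key]}" for key in sorted(PRIMING))
    return hashlib.sha256(blob.encode()).hexdigest()
```

This is exactly the guarantee a silently updated hosted model breaks: with the fingerprint recorded at validation time, any later drift in the setup is detectable.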

Speaker 1: Yeah. Yeah. Well, very cool. So I know we're getting up to time here. As we kind of wind out, tell everybody a little bit about TrueMind AI.

What do you all do? What reason would someone contact you for, and how can you help them?

Speaker 2: Yeah. Appreciate it. We pioneer this kind of trustworthy, explainable AI for any kind of purpose. Obviously, as an I-O psych guy like you, that's where my head usually goes, but we're also supporting the regular computer science community. I'm working with the International Coaching Federation to redefine their standards.

You could see where this would be a brilliant way to credential them, because, you know, a lot of coaching post-COVID happens through recordings. We work with leadership development specialists, several different ones, around more classic reports in some cases, in other cases assessment-type things and feedback. So we custom-create those. Our first we did with Professor Cialdini's company, the Cialdini Institute, so that's the CialdiniBot. But you can see where this is an all-purpose, unobtrusive assessment and coaching approach. And it could be at the individual, the team, or the organizational level. We're looking to collaborate with others who already have domain expertise and markets that we don't necessarily have, to custom-create that stuff and make it scale, because it's so easy to do compared to the old-school stuff.

Speaker 1: Right. And so you're taking, it sounds like, a combination of open-source LLMs and putting them in a kind of secure environment, where you're teaching them, and they're accessible within a product.

Speaker 2: That's exactly right. And in many cases, headless APIs. So the same fancy-pants science, but maybe it's the customer's UI/UX, their interface,

Speaker 1: not ours. Right. And are you going to, like, Hugging Face or something and looking for these models? Are you building them yourself?

Speaker 2: We're not building them ourselves. That's what's brilliant about this. The open-source LLMs, even the smaller ones, you don't have to use the big ones, the smaller ones are superhuman at emulating what an assessment rater would do.

So we've been extremely happy with those things. We didn't even fine-tune them. We just get them from Hugging Face or some other open-source spot.

Speaker 1: Yeah. Yeah, for people who don't know, and I just learned this, you know, you hear Hugging Face, interesting name, but it's a repository with all kinds of open-source models. And I think there's thousands of them.

Speaker 2: That’s right. Right. 

Speaker 1: But you gotta be careful when you pull those down, and that's where you gotta kind of do your own governance. Good deal. Well, we are up on time, and I always say this, I'm never gonna stop saying it, but honestly, tell people how they can keep track of you. And you're gonna say "I'm on LinkedIn," because everybody is. Is there anything beyond being on LinkedIn that people should know about following you?

Speaker 2: Yeah. My team's work is on TruMind, T-R-U-M-I-N-D dot A-I. And the company that owns trumind dot ai is xlnc dot co. Those are the spots you can find me.

Speaker 1: Good deal. Well, thank you so much for your time today, Dr. Matt Barney, prophet of this stuff.

Speaker 2: Thanks for having me. 

Speaker 1: As we wind down today's episode, to our listeners, I want to remind you to check out our website, rockethire dot com, and learn more about our latest line of business, which is auditing and advising on AI-based hiring tools and talent assessment tools. Take a look at the site. There's a really awesome FAQ document around New York City Local Law one forty-four that should answer all your questions about that complex and untested piece of legislation. And guess what? There's gonna be more to come.

So check us out. We’re here to help.