Balancing Responsible Innovation & AI-Based People Systems: w/ Eric Sydell, co-founder of Vero AI

“It’s about selling objectivity, not just science. We need to make sense of the vast data around us to help businesses make better decisions.”

-Eric Sydell

Summary:

In this episode of “Psych Tech @ Work,” my longtime friend Eric Sydell, I-O psychologist, genius, and co-founder of Vero AI, joins me to discuss the transformative potential of AI in the workplace and the importance of responsible innovation. This episode offers a deep dive into the practical applications of AI in HR tech, the necessity of ethical guidelines, and how businesses can implement AI responsibly to drive innovation while mitigating risks.

In our conversation we get into the brass tacks of responsible AI.

But first we share some stories about our respective career journeys and life as I-O psychologists here in 2024, and how we both find peace and harmony in this crazy world by working with our hands (he builds guitars, I work on old cars).

Besides talking about hobbies, my agenda for having Eric on the show was to learn more about what his company, Vero AI, is doing to help drive safe innovation with AI.

Eric delves into the technical aspects of how Vero AI leverages advanced analytics and AI tools to enhance decision-making processes.  He highlights the company’s unique approach to converting unstructured data into quantitative insights, enabling businesses to monitor and optimize their operations effectively.

Eric emphasizes the importance of continuous output monitoring to ensure AI tools are fair, unbiased, and effective. He explains how Vero AI’s platform uses a combination of AI and rigorous scientific methods to provide comprehensive analyses of algorithmic impact, compliance, and fairness. 

Our discussion also covers the evolving landscape of AI regulations, the importance of aligning with these regulations, and how Vero AI assists companies in navigating this complex terrain. Eric’s insights provide a detailed look at the practical applications of AI in HR tech, underscoring the balance between innovation and ethical responsibility.

Takeaways:

  • Sell Objectivity: Focus on using AI to make sense of vast data, helping businesses make better decisions based on rigorous scientific methods.
  • Monitor AI Outputs: Continuous output monitoring is crucial to ensure AI tools are fair, unbiased, and effective.
  • Responsible Innovation: Approach AI adoption with a rigorous, ethical mindset. Balance innovation with the responsibility to monitor and understand AI systems.
  • Regulation Awareness: Stay informed about evolving AI regulations and work towards compliance by maintaining transparent and accountable practices.
  • Leverage AI Thoughtfully: Use AI to enhance decision-making processes while being mindful of potential biases and ethical considerations.

No show would be complete without the “Take It or Leave It” segment.

In this episode, Eric and I discuss two interesting articles about hiring bias and regulation.

Articles Discussed in the “Take it or Leave it” Segment:

  1. “Colorado’s New Law on Regulating Brain Implants and Neurological Tech”
    • Summary: This article discusses Colorado’s new law aimed at regulating brain implants and other neurological technologies, focusing on data privacy and ethical concerns.
    • Discussion: Eric and Charles debate the necessity and timing of such regulations, considering the current state of the technology and the importance of data privacy.

 

  2. “Employers Ask, ‘What is AI?’ as Regulators Probe Hiring Biases”
    • Summary: This article explores the confusion among employers regarding the definition of AI and the importance of evaluating adverse impact in hiring decisions, regardless of the technology used.
    • Discussion: Eric emphasizes the need for clear definitions and the importance of focusing on outcomes rather than the specific tools used, while Charles discusses the practical implications for employers.

Full transcript:

 


Charles: Alright. Welcome, Eric. It’s really good to see you.  

Eric: Thanks, Charles. You too. Great to be here.  

Charles: Yeah. We’ve known each other for a long time, and we’re both kinda on the other side  of some things we’ve been into, and it’s pretty interesting time to be alive in this field, boy. Don’t  you think?  

Eric: You know, in any field, really? I mean, anything that touches technology, it’s crazy time to  be alive and to be working and to be thinking about all these disruptive things that are going on  around us. It’s a lot of fun.  

Charles: It is. You know, it’s super interesting to think about. I read a lot about this stuff. And, you know, just in the bigger context, I mean, there’s lots. We’ll talk about your background, but also what you’re into now related to regulation and ethics and all that, which is a super hot topic, super interesting, and you’ve got a great perspective on it. That’s good stuff. But I don’t know, people are probably just listening, but it looks like you got a Les Paul behind you right now. It looks like it’s coming out of the middle of your head.

Eric: Yeah. That’s right. That’s a Les Paul back there, and a National steel guitar back there on the other side too.

Charles: Oh, yeah.  

Eric: I saw that. That’s you know, I I I’m still holding out hope that I might make it as a rock star,  but the chances seem to be dwindling along with my hope that I would be a, you know, fighter  pilot at this point. I think it might be the ship might have sailed on these things, unfortunately, for  me.  

Charles: Yeah. Well, you’re a rock star in our field, so you got that going. Fighter pilot, I don’t know about that. Can’t remember how tall you are, but you gotta be pretty short. You gotta be Tom Cruise short, I think, to be a... Well,

Eric: there you go. Okay. So I’m not real tall. So that’s good. But I have glasses, so that’s bad  also.  

If I didn’t have glasses and I’d actually, you know, been able to become an an aviator in that  sense, I’d definitely be dead by now. So, you know, I think it worked out for the best.  

Charles: Yeah. The g’s... I get really motion sick pretty easily. So when they put you in that thing that whips you around to see if you can handle the g’s, I would be in really, really bad shape. And I’m foot two hundred anyway. So chopper pilot, though, you know, that might be more doable. I have a friend who did that.

I’ve kinda ridden in a couple of those in the shotgun seat. It’s pretty cool.

Eric: Mhmm. Oh, that’s fun.

Charles: Yeah. I like to keep my feet on the ground, though. So good stuff. Well, I don’t play any  musical instruments. I used to play the drums I’m not very good at that, but I do love music and  that’s really cool.  

I have certainly never heard you play guitar. There haven’t been a lot of, like... I know some I-O musicians for sure.

Eric: Yeah. There are a few out there, I think, definitely. So, yeah, you know, I mean, music is a great pastime. You’re in front of the computer all day, and I don’t know about you... well, actually, I know. Like, you have plenty of hobbies that involve physical things, cars and whatnot. And I mean, I’m the same way. It’s like you’re in front of the computer all day. You’re looking at a screen. It is so nice to just pick up something physical and do something physical. You know, I got different hobbies and things, but music is front and center for me. It’s just so nice. It’s like my zen. It’s my meditation. You know, pick up a guitar and just play something badly for a little while and it feels great.

Charles: One million percent. It’s the only time I feel like I can really find calm in my brain, I think, and I don’t think about work, and I have those, you know, Csikszentmihalyi flow experiences where I’m just kind of one with what I’m doing, and that feels really, really good. That’s when I do my best thinking, if I could just discipline myself to not try to be figuring out the problems of the world every minute, you know?

Eric: Well, that’s how it always is. Like, whenever I’m in the middle of doing something physical that doesn’t use my brain, then all of a sudden that’s when I get my best ideas, you know. I mean, I think that’s how it works a lot of the time. You just need a little distance from what you’re thinking about, and then all of a sudden, oh, here’s a solution, you know. Yeah.

Now, I mean, sometimes nine times out of ten it’s a completely horrible idea. But, you know, it’s the one out of ten that actually turns out to be good.

Charles: Yeah. It’s idea fluency, man. It’s good to come up with a lot and be able to trash some of them. Because if you dam it up, you might miss that really good one, you know. That’s right. I think... Yep. I think I have too many, and then I’m just trying to sort them all out. Well, you’ve done some good stuff with, you know, building some pretty cool things we’ll talk about a little bit. But I guess, you know, for our audience and those who may not know you, which could be a number of people, just give us a little bit of your own personal background, a quick abbreviated version of your story and, like, what your passion areas are. I think that’s good stuff.

Eric: Yeah, absolutely. Thanks. Appreciate it. So, you know, I came out of graduate school in around two thousand or so and worked in the Cleveland office of SHL at the time doing assessment-related consulting. So I’m an industrial psychologist like you by background.

But shortly after that, myself and a few other colleagues started Shaker Consulting Group together. It was about two thousand, two thousand and one or so that we did that. And there, we created the Virtual Job Tryout. So we did, you know, custom assessments and simulations for clients. For a long time we did that and built that business. And around two thousand seventeen, two thousand eighteen, we ended up being bought out by a private equity firm and then merged in with another company, an interview company called Montage at the time, and that combined company became Modern Hire. And then Modern Hire was acquired by HireVue, or the private equity that owned HireVue, last year. And so that whole thing was most of my career, just doing that sort of consulting work, assessment work. It was tons of statistics, tons of validation work, and trying to build good quality, rigorous, scientific tools that predict new hire success on the job and do so with very limited or low levels of, you know, bias, adverse impact, and things like that. So, you know, that’s most of my career. That’s what I did. But, you know, the big thing that has always driven me goes way, way back for me. Actually, I can trace it back to even, like, nineteen ninety-six. My birthday. My birthday in nineteen ninety-six, that is December twentieth.

And my parents gave me a book. They gave me a book by Carl Sagan called The Demon-Haunted World: Science as a Candle in the Dark. And Sagan’s, like, one of my scientific idols. And, you know, he just wrote so compellingly about science and about how we can use it to shine a light on darkness in the world, how we can use the scientific method to understand the world around us, to learn things, to accumulate knowledge, to make the world better, to fight disinformation, misinformation, to fight pseudoscience, you know, and all this sort of stuff. And so this was nineteen ninety-six. And if you go back and read that now, you’re like, why did he write that? Like, this week? Because it seems so current and so necessary even now. But, you know, he talks about in there that skepticism doesn’t sell. And I’ve always thought about that. I’ve kind of rephrased it a little bit in my brain: science doesn’t sell. And you and I are scientists.

So, like, in some sense, I think we’ve been trying to sell science, the scientific method, rigorous objective analysis and thinking, for our whole careers. And it’s hard. You know, it is hard because it’s complicated stuff. And so you have to explain it and you have to talk about it and socialize the ideas, and I think that’s my mantra: I don’t wanna sell science, I wanna sell objectivity. Now, the other point on this, an end cap to this story, is that Carl Sagan died on December twentieth, nineteen ninety-six.

My birthday the same day that my parents gave me this book. So that’s always stuck out in my  mind, you know, as like a formative thing for me, and I’ve carried that forward. So I think that’s  what drives me today is there’s all this data, there’s all this information in the world. We have to  make sense of it. To help the world, to help businesses make better decisions about people. 

And just in general, we want to put forth methods that can help people understand the world  around us to, you know, show what’s real and what’s not real. And that I mean, that to me is a  calling. That’s exciting. And, you know, my new company, Vero AI, is basically dedicated to  

doing that at scale and using different tools that are available to us, some of which are AI, to try  to make that a reality, and to try to scale objectivity, to scale the scientific method so that we can  study the huge sets of information all around us. And hopefully make better decisions as a result.  

Charles: Yeah. That’s good. So you know what else happened on December twentieth, but not in nineteen ninety-six? My kid was born. He really was born December twentieth. Yeah. Okay. Yeah. Maybe he’s, you know, Carl Sagan reincarnate. I don’t know. That might not be fair to Carl, to start throwing around his reincarnation like that. I’ve never read much of his stuff. He was the one that had the show Cosmos, I think, on PBS. Yeah. Billions and billions of times.

Eric: Yeah. That’s right.  

Charles: Molecules. Yeah.  

Eric: He was, like, the previous version of Neil deGrasse Tyson, you know. But he was, I mean, he was amazing. And, yeah, you know, he was a scientific genius and super smart and everything, but I think the thing that made him so unique was the way he wrote and spoke so compellingly about science and stuff.

Mhmm. There was his command of a space probe. I forget whether it was Voyager or Pioneer, but one of those probes was so far out in the solar system and about to escape, you know, communication with Earth, and from mission control, at his instruction, they sent out one last command, which was to turn itself about and look back home at Earth and take one final picture. And it did, and it sent this picture home, and it’s just a little pinprick, which is Earth, in a ray of light, and they call that picture the Pale Blue Dot.

That’s where that comes from. The idea of calling out the pale blue dot is from that instruction and from that story. And, you know, he did some powerful things and helped to show kind of our place in the universe and how, in some ways, insignificant and small we are, which puts things in perspective.

Charles: It really does. I think about that all the time, especially when I’m having a bad day. And I talk to my son about it a lot, just the grand scheme of things. It’s like, dinosaurs were three hundred million years ago, but our planet’s billions of years old. We’ve only had smartphones for twenty years out of that entire thing.

It’s like that’s that’s a subatomic particle of a major magnitude. Right? So Yeah. Pretty pretty  interesting pretty interesting stuff and, you know, that’s a really we talk a lot about AI and I read  a lot of philosophical stuff about it, you know. And there’s a lot of folks that say, if you start  thinking about artificial general intelligence and everything.  

So there are a lot of people who are like, look, large language models look amazing, but they’re a complete dead end. Like, they’re gonna hoover up all the resources to build them, and all the information is gonna be saturated at some point. They’re really natural language processing on hyper steroids, and really a dead end for the total overall evolution of AI, and there’s gonna be some new stuff. And that’s pretty interesting to think about.

I mean, it’s certainly easy to throw all kinds of speculation around because, living in the world of this stuff now, it just feels... I don’t know about you, but every day I use ChatGPT for something, and I still am just marveling in amazement that I could just type something and have all this stuff happen. I mean, it just feels otherworldly to me, even though we know exactly what’s happening in a lot of senses. We maybe don’t know how all the things are connecting down there, the synapses and neurons and so on, so to speak. But in general, it’s pretty mind-boggling. What do you think Carl Sagan would say about this stuff?

Eric: You know, I don’t know. I mean, hopefully, he’d have some grounded perspective though. I  mean, I think that was generally what he tried to do. You know, I think we live in a world of  hype. Right?  

Everything is fun and everything is just everything is overblown. Everything is always like that.  And I think that’s what gets eyeballs. That’s what people wanna see in the news media is  exciting, you know, updates and new stories and things. And that’s what investors want as  exciting opportunities to invest in greatest technologies and everything.  

You know, there’s no two ways around it. I mean, AI is transformative. It’s disruptive. It’s  amazing. Generative AI is amazing.  

Is it a dead end? I don’t know. Maybe maybe not. I mean, you know, people think it is. Some  people think it isn’t.  

I mean, I’m not, you know, an AI engineer, so I can’t credibly comment on it. I think probably there’s more hype than it deserves. I think right now we’re seeing a lot of information in the news media about how it hasn’t been very profitable for these companies that are...

Charles: Yeah.  

Eric: You know, not yet. They haven’t made a lot of money on it. There’s not a lot of great economic benefit yet to the companies that have been rolling them out. Will there be? I don’t know. I think probably, over time, there will be legit applications of these tools that make money. And, you know, I think the generative AI thing... like, generative AI is at the core of what we do at Vero AI. But we’re not really using it for the generative component of it. Right? The generative thing, you can make fun pictures, you can make it synthesize, you know, some paper into a shorter version.

You can, you know, make it give you ideas for things. You can do these things. And it’s largely just taking everything it’s vacuumed up from the Internet and spinning it back to you in some synthesized way. Right.

I mean, it’s fine. You know, and like you said, you use it every day. I use various AI engines every day as well, and they’re useful. But I don’t know that that’s the be-all end-all of, you know, AI development either. I mean, we’re using it in a way that leans more on the intelligence aspect of generative artificial intelligence than the generative aspect.

Yeah. And particularly, like, you know, what we’re doing is using its ability to understand human 

prompts. And we can tell it to do things. We can tell it to manipulate text and unstructured data.  In ways that are very interesting, and it does it really well.  

And we can use, you know, a certain corpus of data that we upload, so we’re not using the entire Internet. We’re just using a subset of information that we want to study. So we use a RAG (retrieval-augmented generation) approach, which is, you know, less subject to hallucinations. It works really well. So what we’re using it for is to take unstructured data and break it down and parse it and analyze what it means and actually attach a number to it.

So we’re changing qualitative information into quantitative information. And, you know, as scientists, we can really only do statistics on quantitative data, on number data. Right? Yeah. Well, eighty percent plus of the world around us is not number data, but it’s data. And now what we’re doing is making it into meaningful numbers that we can actually study at scale. So if you think about that and sleep on it a few nights... like, there’s not a day that goes by where I don’t have another idea about what we could study or do with this. Like, there’s so much information out there. So we’re using it in a way that I think is very legit and can actually help further science, further scientific study.
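To make that quantification idea concrete, here is a minimal sketch of scoring unstructured text with an LLM against a fixed rubric, keeping the model grounded in a supplied corpus in the RAG spirit Eric describes. This is not Vero AI’s implementation; the call_llm stub and the rubric are placeholders for whatever model and scoring scheme you actually use.

```python
# Hypothetical sketch of the "attach a number to unstructured text" step.
# call_llm() is a placeholder for whatever LLM API is actually in use, and the
# rubric is invented for illustration; this is not Vero AI's implementation.
import json
from statistics import mean

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call grounded in an uploaded corpus (RAG-style)."""
    raise NotImplementedError("wire this up to the model of your choice")

RUBRIC = (
    "Rate the passage below on how clearly it documents a fairness analysis, "
    "on a 1-5 scale. Reply with JSON only: {\"score\": <int>, \"evidence\": \"<quote>\"}."
)

def score_passage(passage: str) -> dict:
    # Restrict the model to the supplied text to limit hallucination.
    prompt = f"{RUBRIC}\n\nUse only this passage as evidence:\n---\n{passage}"
    return json.loads(call_llm(prompt))

def score_corpus(passages: list[str]) -> float:
    # Qualitative documents become numbers that can be aggregated, trended, and audited.
    return mean(score_passage(p)["score"] for p in passages)
```

The point is only that free text goes in and an auditable number comes out, which is what makes the downstream statistics possible.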

And I think that to me, that’s the exciting thing about it. And you are seeing various scientists,  various academics are using LLMs in similar ways. Like, I there’s a paper a recent paper where  they analyze financial statements of companies with  

Charles: Yeah. I saw that.  

Eric: Yeah. I mean, it’s it’s pretty cool. You know, they’re pulling some interesting information  out of there. So academics are exploring, you know, using LLMs in various interesting ways. So  to me, you know, are they the be all end all of AI development?  

Probably not. I don’t know. Probably not. But they are very powerful. I think the hype cycle needs to die down.

You know? Because we don’t approach it from, like, oh, we’re building a foundational model, we’re building technology. I approach it like, hey, we’re just trying to use whatever tools are out there to surface interesting insights in data.

I don’t care if it’s AI or if it’s anything else. If it’s good and useful, then, you know, we might use it in our engine. Mhmm.

Charles: So, yeah.  

Eric: You know, I feel like we’ve like, we’re moving past that. AI.  

Charles: So, man, that was a great talk track. I got like a whole stack of notes here. Right. So first of all, you know, I use it to make silly songs. You can go to that, you know, music generator thing. It’s incredible.

It makes up the lyrics. It does the style of the song. So you can do that or you can do research.  One of the things that I really like and I read a lot of these academic papers, it’s just they come in  my news feed. I’m not, you know, I’m not, like, going after them, but I’ve read some pretty cool  ones, but what really is cool about it is that the research cycle is so fast.  

Now they may not be peer reviewed or whatever, but they’re in that same format that we’re used 

to seeing articles in. And I feel like they’re pretty credible, but that stuff gets out fast. I mean, if you wanted to put something in Academy of Management or JAP or Personnel Psych related to this stuff, we’ll have, you know, artificial general intelligence before we even have our article published, like, our study published. It’s at least

Eric: a two  

Charles: or three-year cycle. So at least in this area, we’re getting, you know, stuff that’s not just a BuzzFeed or VentureBeat, you know, summary with no substance behind it. So, yeah, I think that part is really good. I feel like what we’re talking about is scale. So when you talk about science, good science, and you talk about business and how AI is working, the money is at scale. Creating scale for something that may not have been able to scale before. And we often lose something in the process of doing that. Over the years, you scale a good restaurant, it’s never as good. Right?

I mean, you lose some of the real ethos of what you’re doing when you try to turn it into SaaS. There’s no hands-on, and human judgment is extremely hard to replicate. We can still do a lot of stuff way better than ChatGPT or any AIs. But they get us there faster. I mean, one of the things I say a lot, and if you listen to this show, you’ve probably heard me say it:

But if ChatGPT-4, you know, GPT-4 Omni or Llama 3, whatever is out there now, never evolved any further,

they still have had a huge impact on what we can do. Right? So we’re in the infancy and we’re still finding value. Yeah. Does it do kooky things, or, you know, can you not trust it to just get a hundred percent of what you need? Sure.

I’m doing a lot of data analysis now with it, and it’s like, I gotta go back and do a bunch of hand coding because this thing can’t quite understand me. It’s still helping out, though. So if we keep that in perspective, then we put the hype cycle on top of it. But the money’s going into scaling it. And scaling some science may not be that hard, physical science, but there might even be people who listen to us talk and start lobbying for the idea that psychology is a pseudoscience or, you know, that psychometric measurement is a pseudoscience. I don’t think so. That’s for other people to debate. But at the end of the day, the stuff that we do is harder to scale, I think. When you’re trying to understand people and how they behave, it’s harder to scale that in a way that has a lot of accuracy.

I think that we’ll get there because, clearly, we’re learning and this stuff is evolving. So when we think about the future of things, the scale and the money and the hype all come together into, holy shit, we better have some guardrails on this thing because it might go, you know, Ray Kurzweil on us. And, I mean, he’s not necessarily a doomsday person, but he certainly is all about the singularity, right, and the time when we have, you know, machines being smarter than people in some way. Right?

So we look to the future and we think, boy, this could happen. Nobody can tell us that it won’t happen. And our sci-fi brains and our storytelling brains and our paranoid brains, living in the world that we do now, think that. But the reality on the ground is that’s not happening, other than massive privacy issues and massive bias issues, potentially.

Those are our Death Stars or, you know, Skynets or whatever of the day right now.

And we do have to control that stuff. And I know that’s a lot of what you you know, started out  with with Vero doing and still can do. But let’s talk about it a little bit. You know, we both have  some pretty strong takes on regulation.  

We’ve talked about it a lot in our conversations. Boy, I sure thought a year ago, not that the New York City law was gonna be the one dominant law, but it got so much airtime and hype and PR, because New York is a media town and everything that comes out of there is looked at across the world. And we all started thinking, oh, man, how do we react? Our clients, our customers in the corporate world, providers like us, everybody, how the hell do we react? How can we control this?

And we’ve just seen this complete potpourri, a shotgun blast of different levels of legislation and thoughts about legislation, and I have my opinions. I had Commissioner Sonderling on the show. He gave some real good opinions. But let’s talk about it. You have the floor here. Where are we at right now with this regulation? What should people be thinking about? What’s gonna happen, and what should be? Yeah.

Eric: Well, yeah. You know, it’s slow. I mean, actually everything’s happening at light speed, but it feels slow. Yeah. Because we thought, I know in our field we thought, there would be some legit regulations about this by now. And a lot of us thought that New York City’s Local Law 144 was gonna be the thing that really, you know, established the first credible, enforceable, well-designed legislation around the topic.

That did not turn out to be the case, right, as you know. I mean, very few companies are actually complying with it. It really just requires companies that hire in the city to post disparate impact calculations, and only in certain situations, if they’re actually using an automated employment decision tool.

It’s very easy to kind of wiggle out of and act like you’re not doing that. So it really has not had a good impact. Furthermore, a lot of companies, we’ve heard anecdotally, are just not using tools that are AEDTs, so they don’t have to comply with the law. And then what happens is, you know, you use humans to make these decisions, who are probably more biased. You’re sending it back into the dark ages of the hiring process a little bit. So it’s unfortunate that Local Law 144 turned out that way. And I think that there will be other, better laws in the future, and that over time, how much time I don’t know, these things will become more standard. Now, I’m generally in favor of regulation on AI things, especially for high-risk AI systems, which are ones that make decisions about people in particular. I do not necessarily have faith that companies and employers and everybody using these tools will, you know, do things the right way without some guidance on the topic. But here’s the thing. I mean, all of the legislation that is nascent, that’s out there, that’s being worked on, you know, there’s thousands of pieces globally. We’ve themed it. I mean, we’ve looked across all these different pieces of legislation to say, hey, are they all different?

Are they all looking at different things? Or is there a commonality here? And it turns out there’s a huge amount of commonality in

Charles: that. Absolutely. 

Eric: Now, generally speaking, they want there to be some type of impact analysis, but it’s not really clear what that means. So it’s pretty open, whatever you want, I guess, in some cases. You know, they want there to be some level of communication back to the candidates about the system, so it’s not invisible to the candidate. They certainly want there to be some level of, you know, fairness or bias analytics. You know, there are a few other things in there that a lot of these pieces of legislation want.

But not that much. And honestly, like, if you think about it, those are all pretty obvious. Like, those aren’t onerous things. That’s, like, the minimum that you should be doing. Almost none of these laws mandate that you need to actually have a tool that predicts anything. So if you think about it, you know, in a hiring context, you literally can flip a coin and decide who to hire and you’re okay.

That’s not gonna have bias against protected classes. It’s not gonna invade anybody’s data privacy. You’re good to go. Right. Now, of course, it’s completely useless as a way to hire somebody.

But, you know, legally, you can have a fancy AI tool that’s literally a coin flip, and you’re fine. It doesn’t necessarily do anything. You might be paying a lot of money for it. It might not be predictive at all, but you’re not violating any laws. So, you know, to me these regulations that everybody’s worried about,

I mean, they’re low bars. It’s not that hard. Yeah. To me, if you really wanna make sure that your process is working, whether it’s AI or anything else, you need to study it more than that. You know, I don’t know

Charles: how that works. Yeah. Yeah. I mean, the New York City law came out and I was like, okay, I need to come up with, you know, a model so I can do these kinds of audits. I feel pretty confident about it. It’s a ratio analysis. I’ve been doing those for years. There’s no need for validity or any kind of predictive accuracy. There’s no penalties. And if you post it, you don’t even have to go back and say it doesn’t have adverse impact. So it’s pretty weak in a lot of ways. It kinda got the ball rolling. But at the end of the day, one of the more interesting things I’ve seen...
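For readers who have not run this kind of audit, the “ratio analysis” referenced here is typically the adverse impact, or four-fifths, ratio described in the Uniform Guidelines: compare each group’s selection rate to the highest group’s rate and review anything under 80%. A minimal sketch, with invented group names and counts:

```python
# Minimal sketch of the "ratio analysis" mentioned above: selection rates by group
# and the four-fifths (80%) rule from the Uniform Guidelines. Group names and
# counts are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (selected, applicants); each group's rate is divided
    by the highest group's rate. Values under 0.8 suggest adverse impact."""
    rates = {g: selection_rate(s, n) for g, (s, n) in counts.items()}
    top = max(rates.values(), default=0.0)
    return {g: (r / top if top else 0.0) for g, r in rates.items()}

counts = {"group_a": (48, 120), "group_b": (22, 80)}  # hypothetical applicant data
for group, ratio in impact_ratios(counts).items():
    print(f"{group}: impact ratio {ratio:.2f} {'(review)' if ratio < 0.8 else '(ok)'}")
```

As discussed above, nothing in this calculation asks whether the tool predicts anything; it only checks whether outcomes differ across groups, which is exactly why a coin flip can pass it.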

Right? Well, let me back up. I mean, I think we talked about this the other day. If you go back to what now seems pretty stone age, the nineteen seventy-eight Uniform Guidelines, they still tell it all.

I mean, you know, and Mhmm. The themes there are really all about the outcomes as well. They don’t necessarily care about what went into things. Well, I mean, you should have content validity, but I think if you have criterion-related validity and no adverse impact, you’re good. Nobody’s gonna force you to have the content validity.

I think it’s all about the outcomes. Right? It’d be better to have a job analysis that’ll help defend you. But if you don’t have any bells and sirens going off because of what you’re doing, nobody probably cares if you did a job analysis or not. Right?

So it’s definitely a standard that I think is relatively immutable. Commissioner Sonderling was like, we’re not changing that anytime soon. That’s the government. It’s gonna keep standing on that. And it applies to AI. You know, even if we didn’t have all these other regulations, and all the states are subservient to that, it does a pretty damn good job of saying it doesn’t matter if

you’re using a coin flip, a horoscope, your shoe size.  

If it has adverse impact, it’s a problem. You can try to mitigate that certain ways, but we don’t care how the sausage is made, you know. So that’s pretty interesting. The big shift that I see, that the Uniform Guidelines do not do and the New York City law does not do, is hold providers accountable.

And I think when we see the EU AI Act, where anybody providing anything with AI is gonna have to be audited in some way, it might be the training data, it might be the outcomes, it might be both, I haven’t memorized it yet, but that’s a huge shift. Because what I’ve seen in the New York City stuff, and I’m sure you’ve seen the same thing, is providers going to get their New York City audit, which may be a great thing, that says, hey, we did all this work, we don’t have bias in these situations. But if someone’s called up on that, or someone’s called up on the Uniform Guidelines, it’s the user. It’s the person who bought the gun, not the gun manufacturer. And what the New York’s... I’m sorry.

What the EU Act is gonna say is, yeah, we’re gonna make the gun manufacturers accountable. That’s a new shift. That’s a big new shift that’s gonna have major repercussions for our industry and many others. You know?

Eric: Yeah. I don’t know the answer there, but certainly both the developer and the deployer, you know, bear some responsibility for what the tool is and how it works. The developer can’t totally monitor and understand what the client might be doing with that tool. So, you know, if a tool is being misused by a client in some way or not used properly, that maybe isn’t something the developer should be responsible for. But at the same time, you know, there are plenty of developer practices and methods and things where some could be better than others, and developers certainly need to be putting out good quality tools based on rigorous analysis and things. So I don’t know exactly what the answer is there, but I like, you know, the point that Sonderling made about the Uniform Guidelines and everything. Like, the systems are in place already and have been for decades to evaluate complicated algorithmic tools and how they’re used in hiring, in high-stakes decision making. To me, this whole AI thing and the AI legislation is, you know, almost not necessary, because we already have all this legislation in place that governs how these tools are used. You know, and to me, it’s not about AI. It’s about algorithms.

So everybody gets all up in arms and confused about whether something is AI or not. I mean, whether it’s generative AI or some other type of AI, or it’s just an algorithmic tool, you know, meaning a multiple-choice assessment that our field has been making for decades and decades, it still creates a score that can be used to hire somebody or whatever. Absolutely. The score is the output. It doesn’t really matter whether it was created with AI or with anything else.

That score can be studied and monitored to make sure that it’s working and that it’s not  discriminatory. So, you know, to me, it’s like, yeah, I mean, the uniform guidelines, it might be  old, and it might be nice if they were updated. But, you know, they still they still basically apply.  I think we need to stop thinking about it as AI and just think about algorithms in general. 

Charles: Yeah. That makes total sense. Yeah. Well, the Uniform Guidelines aren’t technically a law, though. So I guess they default to case law or whatever, and how you apply it. So it’s interpreted in a way that people are still held accountable for it, I guess. I’m not the attorney, so I don’t know necessarily the difference there. But yet, you know, if we build a biodata assessment, or any assessment, and it has clear, transparent language that, you know, could be biased, we can see the source of that going in. Like, the actual stimulus that an applicant is reacting to and engaging with, you know, has something that predisposes it to amplify some problems or differences that result in, you know, the score value.

I think where it gets a little trickier with the AI stuff, and where people go after it, is that in a neural network or something, you know, deep learning, we don’t know exactly why it’s making these decisions. We don’t know exactly what went into the training data. So it’s harder for us to remedy it on that side of things. We remedy it on the score side and say, oh, well, we can’t use this level, you know, we have to dial back the validity of this thing to get the ratio down, in technical terms.

It’s the stuff that you and I have done for a long time. But at the end of the day, we can’t always go in at the source. I’ve written a biodata thing that went through a review by a diversity board that said, oh, you know, these words, you might not know it, but this concept or whatever, this isn’t good, people can interpret that wrong. But with an AI tool, especially one where the individual doesn’t even do anything, it just looks at their digital exhaust.

It’s hard to go back in and change it at the source, and I think that’s gonna be an opportunity for us: to be able to go in and say, the outcome could be a lot better, and we would have to monkey with it less, if we could handle it on this side of the equation, you know. But that’s hard to do. And I think that’s where people get all, you know, riled up about it. Like,

Eric: Yep. Well, you know, my focus is on output monitoring. So that’s the level of results that are front and center, that are there, that you can see. And if you can see them, then we can monitor them. And yep.

You know, it is important to understand how the sausage is made and what goes on inside the black box, especially when we’re making decisions about people. We wanna know why decisions are made. We have to know why. We have to have some understanding of that. But, you know, our primary focus is on the output. So, you know, regardless of how that score was created, if it’s valid and if it’s fair, then you’re on pretty solid ground, you know, in terms of hiring: having a tool that actually leads to somebody being more likely to succeed on the job, and hiring them in a fair way. That’s a great thing. So, you know, to me, the key to all of this discussion about AI and algorithms and how these tools affect us is very simple. It’s output monitoring.

It’s continuous output monitoring. And it’s interesting, like, I think about this all the time, of course, because, you know, I feel so strongly about this that I created a company to do it. But typically, you know, the way it works in HR tech and in business is that a lot of times complicated tools are sold on hype and marketing, and people buy them thinking, oh, this is

gonna work. This looks very official. It looks amazing.  

This company has a lot of clients. It’s gotta work. They don’t even question whether it really works or not. You put it in place. You spend a lot of money and a lot of time. You put it in place. And then there’s no real great way to monitor whether or how it’s working. And that’s absurd to me. Like, that’s what we wanna change. We wanna put in place ways to make sure that complicated tools are being monitored and harnessed appropriately. And when you do that, what you find is that, oh, you know what? There’s some bias here that we didn’t know about. But guess what? Now that we know it, we can fix it. Or there’s a lack of effectiveness or validity in certain cases or certain regions or whatever.

And we can go in and we can look at why and we can adjust it and we can fix it. And that keeps  your business running optimally. That keeps the process running optimally. It keeps you on the  right side of the law. You know?  
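As an illustration of what continuous output monitoring can look like in practice, here is a small sketch that recomputes validity (score-to-outcome correlation) and fairness (the four-fifths ratio) from recent hiring data on a schedule and raises flags when either drifts. The thresholds, field names, and Record shape are assumptions made for illustration, not Vero AI’s actual metrics, and statistics.correlation requires Python 3.10+.

```python
# Illustrative sketch of continuous output monitoring: recompute validity
# (score-to-outcome correlation) and fairness (four-fifths ratio) from recent
# data and flag drift. Thresholds and fields are assumptions, not Vero AI's metrics.
from dataclasses import dataclass
from statistics import correlation  # Python 3.10+

@dataclass
class Record:
    score: float                  # the tool's output for a candidate
    group: str                    # demographic group used for fairness checks
    hired: bool
    outcome: float | None = None  # later job-performance metric, when known

def monitor(records: list[Record], min_validity: float = 0.15,
            min_ratio: float = 0.8) -> list[str]:
    flags: list[str] = []
    # Validity: do higher scores actually track better later performance?
    pairs = [(r.score, r.outcome) for r in records if r.outcome is not None]
    if len(pairs) >= 30:  # don't trust tiny samples
        r = correlation([s for s, _ in pairs], [o for _, o in pairs])
        if r < min_validity:
            flags.append(f"validity drift: r={r:.2f} below {min_validity}")
    # Fairness: each group's selection rate against the highest-rate group.
    tallies: dict[str, list[int]] = {}
    for rec in records:
        tally = tallies.setdefault(rec.group, [0, 0])
        tally[0] += int(rec.hired)
        tally[1] += 1
    rates = {g: sel / total for g, (sel, total) in tallies.items() if total}
    if rates:
        top = max(rates.values())
        for group, rate in rates.items():
            if top > 0 and rate / top < min_ratio:
                flags.append(f"adverse impact flag: {group} ratio {rate / top:.2f}")
    return flags  # run on a schedule and alert on any flags
```

The design point is the one Eric makes next: the checks are cheap compared to the cost of finding out in court, or never finding out at all, that a tool has drifted.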

So that’s the thing. As a society, if we wanna harness AI to make it work for humanity, that’s the key. Monitoring these systems is the key. I mean, full stop. And think about nuclear power plants. Like, nuclear power is a great green energy source.

But it’s dangerous. People are scared of it for obvious reasons. So would you build a new nuclear power plant and just, you know, skimp on the monitoring and maybe just have, like, your buddy Gary go check it out? Just have Homer drive by

Charles: or something. Man. Just hire Homer or something. He’s great at that.

Eric: There you go.

Charles: Right? It’s bad monitoring.  

Eric: Just have Homer go by and make sure it’s not smoking once a year or something. You wouldn’t do that. Right? I mean, same thing. Think about a newborn baby, with all of the potential, you know, that a baby has in this world. What do you mean, get it its own apartment and just have Homer go by and check on it once a week or something? No. You’re not gonna do that. These things are powerful and they can do a lot, but they require a certain level of care and monitoring and understanding. So, I mean, I feel like we’ve got a newborn baby here, and we just got it an apartment down the road.

Charles: Yeah. Yeah. Well, ops, you know, as I learn more about LLMs and work with people who have, kinda, LLM security platforms, like the one that I advise and work with. You know, that’s ops. So part of it is just ops, but the ops part is continually making changes to it, as opposed to just the monitoring.

But the monitoring piece of it is important. What you’re talking about also trips into a dream that I’ve had for a long time, because we know the output. We know the output of job performance, right, some kind of metrics of job performance. If we could continually stream that back into the system and understand how well it’s working and make adjustments. Right? So what you’re talking about is kinda similar.

You need some of that same data to look at adverse impact, right, or bias, I think. Yeah. It’s, what’s the product here?

Now, what the test is producing, I guess, is part of it. But a continual stream like that, if we could get that... because, you know, let’s not even talk about bias and stuff. Just talk about accuracy. How many times have you worked with companies where, you know, you’ve put something in place and then asked them to get you the outcome data so that you can show them

the lift they’re getting and the ROI they’re getting, and even though that’s what they talk about all the way into the project, it’s hard to make it happen. And without that, we’re still just kinda shooting in the dark.

And that’s also what I think helps a lot of companies spin the bullshit of, it works, it works great, and you’re never even gonna verify whether it works or not, so we’re good. That stuff is problematic. So the more output that we’re able to put into something that understands it, interprets it, and informs decisions going back the other way to make changes to the inputs, that’s important stuff.

Does that make sense? That’s what I’m saying.  

Eric: Oh, absolutely. Yeah. A hundred percent. Yeah. Kind of the closed-loop analytics idea, you know, of always being able to evaluate what’s going on and how these tools are working. That’s gold. I mean, that’s the way it should be. And another piece of that is fairness analytics, you know. In my history, in my career as an assessment consultant, I can’t tell you how many times we would go to clients and say, hey, can you send us some demographic data on your candidate pool so that we can run adverse impact statistics and see how this thing is working in terms of any bias against protected groups, because if there’s bias, we want to find it and fix it. And so many times, so many times, we got the response: no. We don’t wanna send you that data, because we don’t want you to do the analysis, because if you do, it’s discoverable, and if you find something bad, we don’t want anybody to know. And that’s a legal response. That’s a legalistic response, where the company made the determination that they could minimize risk by not looking at the data. And I’ll tell you what, that keeps me up at night.

That drives me crazy. It’s the very head-in-the-sand approach, and it’s very, very common out there in corporate America. And when you bury your head in the sand and you don’t look for the bias, guess what? I guarantee there’s bias there. Yeah.

Because bias is an onion. It’s always there. You peel back a layer, there’s another layer. We will never be done peeling back the layers of bias, because they’re everywhere. You know, whether you’re talking about protected classes or other groups, other biases, like, our brains are bias machines. We have to always keep working on that. So for me, you know, it’s: get the data, find out whether there’s bias, and if there is, tweak the scoring and fix it so that there isn’t moving forward. That’s the only way to go. So when a company thinks they’re minimizing risk by not looking at something, that’s actually, in my mind, maximizing risk, because they’re almost guaranteed to have a level of bias there, and that has all kinds of harms down the road.

Charles: Yeah. And look, if you go into court and you’ve shown that you’ve at least attempted to look at this over the years, seen that you have a problem, and done something to remedy it, that’s a hell of a lot better than going in with nothing and saying, you know, oh, well, we didn’t even realize. I feel like there’s a contextual thing around that: showing some kind of concerted effort and care hopefully goes a long way. Well, cool. So let’s take a little shift here.

We’re gonna do the “Take It or Leave It” segment, and we are essentially gonna look at a couple of articles here. We are. We’re gonna look at some articles. We’re gonna have some fun. And here’s the first article.

I’m gonna put it up on the screen, and this one’s a little bit more down to Earth. Couldn’t get rid of the ad for Danica Patrick here. But right. And what we’re showing here is Colorado. I had no idea of this until I was looking for the articles for this show, that Colorado, I’m blocking the image here, has apparently enacted a law that says we’ve got to regulate brain implants and other neurological-related technology.

Yeah. That’s a thing. I don’t know how I hadn’t heard about it. Actually, I thought it was an April Fools’ joke when I first saw it, but I’ll hold my opinion.

Yeah. What do you what do you think about this, Eric? What’s your take? Well, it’s interesting  subject.  

Eric: And my first reaction was, oh, we can do that? I didn’t, you know, quite know that it was to the point where it was actually providing, you know, therapeutic benefits to individuals. But if that’s the case, then I think that’s a pretty cool use of AI. The problem being, right, that the data that these companies are using is not protected or regulated, so they can sell it. They can do things with it that they probably shouldn’t be allowed to do. So I’m like, well, the core technology is awesome as a use of AI to improve a human life, but hey, hello, obviously you can’t just be selling that data to everybody every which way either. So it seems like there needs to be a balance there.

Charles: Yeah. I mean, definitely. I guess my take, I read it a little differently. I guess I just went alarmist on the, why the hell are you trying to regulate something that doesn’t truly exist? I mean, maybe it’s the Neuralink Elon Musk thing.

Right? I mean, he’s a hype machine in himself. Some things have come true. Ultimately, this is gonna happen in some way, and it’s good that we’re on the, you know, cutting edge here and putting some barriers in place about the data part of it. It’s really just an extension of all the other shit that’s happening with data privacy and who can and can’t sell data. And I don’t trust that our data is truly private from the people that are saying it is, quite honestly. And I’ve just accepted that, and I don’t do anything that’s nefarious anyway. So, yeah, take my data. Help me buy shirts that I like more, you know, and stuff like that. I don’t, I don’t...

Eric: do that.  

Charles: I’ve just surrendered to it. Otherwise, I’d be living in a doomsday prepper thing in the woods, and I wouldn’t be having much fun. So that’s my, you know, that’s my take on it.

So as far as what we’re reading about here, what this article says, you know, what’s your take on it? Are you a take it or a leave it on this one?

Eric: I I think I mean, are we talking about the article itself or the technology?  

Charles: Yeah. I feel like it’s the concept of the article. This changes a little. This has been  something with this show. Yeah.  

I’m still trying to figure it out, so I think it’s the concept in this one. This is a pretty neutral article. The concept that, you know, we need to regulate this, that this is an important enough issue now to start being regulated at this point in time. That’s the way I look at it.

Eric: Well, I’m gonna take it. I am gonna take it, because I do think that, you know, overall, we can’t just have companies sucking out thoughts or, you know, whatever. I don’t know what data that might be, but we don’t want, you know, our innermost thoughts, ideas, things like that just sucked up into some sort of vacuum that someone else can monetize. I don’t think that’s the world we wanna create with these tools, and it just seems to me that that’s the kind of thing that can very quickly get out of hand, where companies end up down the road having a huge amount of leverage over us as individuals because they know all this stuff.

Charles: Yeah. I think that’s a good one. My take originally was definitely a take of, come on, why the hell are we regulating this stuff now? Are we buying into Elon Musk’s hype? And so for that reason, I was kinda like, we don’t need this at this point, because it’s not a thing yet.

So maybe this state senator Chris Hansen is just trying to jump on the AI hype train and be like, hey, here’s something we can pick on. Colorado does have a hiring-related or other AI-related law happening that’s kind of getting bandied around that has some interesting stuff in it. So I gave it a thumbs down for that reason. The overall concept of, you know, do we need to defend against this stuff? Sure. It’s crazy. It’s scary and creepy.

So That’s Yeah.  

Eric: Maybe it’s too early. I get it. Yeah. I can go along with that.

Charles: Yeah. I mean, I think we’re in pretty good agreement on that. So here we go with the second article: “Employers Ask, ‘What is AI?’ as Regulators Probe Hiring Biases.” This is something we’ve talked about quite a bit over the years.

What’s the TL;DR here? Right? This one is really about the interplay between the EEOC and what employers are thinking about, and it really is about, hey, what the hell is artificial intelligence? You’re trying to regulate us on this stuff. Without a clear definition, how the hell do we know what we’re doing or what our risk is?

Right? And so we gotta have some kind of uniform definition that everybody agrees on. You know, without that, we’re gonna have a lot of problems. But at the same time, the article kinda brings up, well, we’ve been looking at the same stuff for a very long time. It doesn’t matter what the tool is. It matters what the outcome is. So why does it matter how we define AI or not? I mean, we were talking about that a little bit earlier. So that’s kind of the summary.

And now let’s have your take on that.  

Eric: Okay. Well, my take is I hate the term artificial intelligence. I mean, this is a term that was  created in, like, the fifties when researchers were trying to create little you know, simulations of  the way neurons work to calculate things. And so, you know, it seemed like a good idea to call  that artificial intelligence at the time. But it’s taken on a meaning that is completely not what it is  at this point.  

And so to me, you know, people don’t know what AI is. And no one reads definitions, no one. Like, out there in the population, most people just hear artificial intelligence and guess what it means by what it sounds like, and what it sounds like is a very Hollywood kind of thing. Right? Yeah. Not what it is.

I mean, to me, AI is just another type of statistics, a statistical technique that allows us to make sense of unstructured information, at the core. That’s what it is. And so it’s how you use it that matters, and it can be good or bad. So, you know, to me, it’s just algorithms. I wish we could move away from AI as a term altogether and just talk about algorithms. So I’m all for clarifying. I think it needs clarification. Gotcha.

Charles: Good luck with that, by the way. I feel you on that one. So again, I read it way more tactically, as the idea that, ultimately, why does it matter that much, because it is just about the outcomes? And if we just follow the idea that if it has bias or adverse impact, it’s bad, and if it doesn’t, well, then it’s not harming anyone.

At the same time, we have laws that are saying... there’s a disconnect here in my mind, but nobody’s thinking about it, which is, you know, what you just said. I mean, I agree with what you just said. And I can understand how employers are confused, but if employers would simply say, we need to evaluate the adverse impact of any decision-making tool we have in hiring, they would be okay, I believe, because they would know what the end result is. And hopefully, they would look at the accuracy too and say, boy, we want both of those things, because as you said before, a tool that delivers fairness but lacks any accuracy is a worthless tool.

You know? Yep. So there you go. So what is your take on this one, then? What do you think?

Eric: I’m gonna take it. I like it.

Charles: You’re gonna take it. And I’m gonna... this is so hard sometimes. I think the article’s doing a good job of sharing the pain of employers, but as a concept, I’m on the fence, because I think employers should be evaluating this stuff no matter what, whether it’s AI or not. Maybe that’s a little bit of a cop-out, you know, that they don’t need to do that. So... Well, I can

Eric: definitely agree with it.  

Charles: But yeah. I’m gonna give it a thumbs down, because, you know, I don’t wanna be contrarian, but there we are. Good stuff. Awesome. Yeah.

For playing, you get a copy of this fine “Take It or Leave It” t-shirt that I have made. That’s a distinction only guests of the show get. So I’ll be posting you one of those after the show. So let’s transition back and close our show. Alright.

Good. So we’re actually getting close to time. So as we wind down, Eric, lots of good stuff to talk about. I wanna make sure you got a chance to tell everybody about Vero.ai and what you all are doing. You’ve alluded to a lot of that, and a smart person could probably tack it all together and gain some understanding, but let’s hear it from the horse’s mouth, because I think you guys are doing some amazing stuff that hasn’t been done before by anybody, and that’s always a good spot to be in.

Eric: Yeah. Thank you. Appreciate it. So, you know, Vero AI is a new company. We have a platform, an analytical platform, that allows us to suck up data, not just number data, but unstructured information, and actually quantify it, make sense of it at scale. And what we can do with that is report on a variety of different things. So one thing we can report on is certainly algorithmic-auditing-related outcomes such as bias and fairness, but also bigger-picture things that would require the analysis of documentation. So we can read in tech manuals. We can read in legislation.

We can do all sorts of fun things to check what a large corpus of information, text, or numbers means, and what the impact of it is. And we’ve even got a model created called Violet, which is our impact model for algorithmic tools. We don’t have time to go through that, but you can check it out on our website. It’s sort of a holistic model that looks at algorithmic impact.

And we have tons of other exciting things we can do as well, like compliance-related things. ISO certifications or that type of thing can be automated and done much, much quicker with our technology than without. So anything that requires a large amount of analysis of information: you could think of procurement, vendor analysis, stuff with job descriptions we’re working on. There’s a whole bunch of ways that we can load in text information and score it and make sense of it at scale, in ways that weren’t possible before.

So anybody listening has thoughts or would like to learn more, please don’t hesitate to reach out.  

Charles: Yeah. Awesome. And so as we play it out, thanks. Always a great conversation with you. And we could go on for days and hours, but, you know, what words of wisdom on any of this stuff would you wanna leave our audience with as we kind of fade on out into the ether here?

Eric: I think when it comes to AI, don’t be afraid of it, but don’t just take the hype. You know, AI can be of value to your business, but you’ve got to approach it from a rigorous standpoint. You’ve got to monitor it and you’ve got to make sure you understand it. But the best companies, the ones that are succeeding and, you know, in some cases leapfrogging their competition, are the ones that are figuring this out. That’s my advice: use it, be an early adopter, but do it the right way, the rigorous way, and make sure you’re monitoring it.

Charles: Yeah. There you go. Responsible innovation. That’s what I keep saying. That’s the horn I keep blowing: responsible innovation.

It’s great to innovate. Just be responsible about it. And it is maybe kinda hard, but at the same time it’s not, and you’re gonna have to do it. So you might as well start learning and figuring it out. And I think that’s where a lot of end-user companies are, at least on the hiring side of things right now: holy shit.

How are we gonna develop our policies? How are we gonna handle this? And, like you said, a lot of it is just, let’s punt it. Let’s put it on hold, etcetera. Meanwhile, the number of vendors that are popping up with AI hiring solutions,

I track it, goes up every week. I find more. So, yeah, somebody out there is making some bets on this stuff.

So awesome.  

Eric: Nice. Thank you, Charles.

Charles: Have a good one.  

Eric: Thank you. Great talking with you.