“From the EEOC’s perspective, whether an employment action or decision is made by a human or an algorithm, liability is going to be the same for those companies.”
“AI tools really have the ability to prevent discrimination, but at the same time, they have the ability to discriminate more than any one individual human being.”
-EEOC Commissioner Keith Sonderling.
Before we begin: Commissioner Sonderling requested that I share a link to this important report (Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring). While the report focuses on the Americans with Disabilities Act, the ideas it puts forth apply directly to employment decision making, and it is an important summary of the government’s position on the relationship between AI and foundational regulation related to concepts such as the 4/5 rule and disparate impact.
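Since the 4/5 rule comes up throughout this conversation, here is a minimal sketch of the arithmetic behind it, using made-up applicant counts. Under the Uniform Guidelines, adverse impact is generally inferred when a group’s selection rate is less than four-fifths (80%) of the rate for the most-selected group.

```python
# A minimal sketch of the "4/5 rule" (four-fifths rule) from the Uniform
# Guidelines: adverse impact is generally inferred when a group's selection
# rate is less than 80% of the rate for the most-selected group.
# The applicant/hire counts below are made up for illustration.

def selection_rate(hired: int, applicants: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return hired / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose rate falls below 4/5 of the highest group's rate."""
    top = max(rates.values())
    return {group: (rate / top) < 0.8 for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(48, 100),  # 48% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}
flags = four_fifths_check(rates)
# group_b's ratio is 0.30 / 0.48 = 0.625, below the 0.8 threshold
print(flags)  # {'group_a': False, 'group_b': True}
```

Note that the four-fifths ratio is a rule of thumb rather than a bright line; in practice, statistical significance tests are typically run alongside it.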
Summary:
How lucky are we? My guest for this episode is none other than the grand poobah of employment regulations in the US, EEOC commissioner Keith Sonderling. The Commissioner has many great attributes that underlie his approach to the creation and enforcement of legislation critical to ensuring everyone gets a fair shake when it comes to employment opportunities. But I think one of his greatest attributes is his mission to make himself accessible to all channels of media and communication, including humble podcasters such as myself.
In some sense, my big takeaway from our discussion is the idea that the more things change, the more they stay the same. By this I mean that the central tenets of fair and equitable hiring practices are immutable. While the tools that support employment decision making have become, and will continue to become, infinitely more complex, ensuring that the signals used for hiring decisions are job related and free of systematic differences based on irrelevant factors is all that matters.
The Commissioner and I have a really awesome and enlightening conversation about the evolving landscape of government regulation on AI in hiring. We begin with a discussion about his career trajectory, his insights about the integration of AI within HR practices, and the critical balance needed between innovation and ethical considerations.
We have fun delving into the specifics of current regulatory frameworks, including the seminal Uniform Guidelines on Employee Selection Procedures and the recent developments in laws such as New York City’s Local Law 144. Commissioner Sonderling shares his perspective that the future of regulation will likely be driven by state initiatives rather than new federal legislation.
Takeaways:
- State-Led Initiatives: Commissioner Sonderling highlights that while the federal government may not introduce new legislation soon, states like New York and California are likely to lead the way in regulating AI in hiring. Employers should stay informed about state laws and consider adopting best practices from these regulations proactively.
- Navigating a Patchwork Regulatory Environment: With states potentially leading regulatory efforts, HR professionals must prepare to navigate a patchwork of regulations that may vary significantly from one state to another. This emphasizes the need for adaptable compliance strategies.
- Existing Federal Standards: Even in the absence of new federal legislation, existing laws and standards, such as the EEOC’s Uniform Guidelines on Employee Selection Procedures, still apply to AI-driven employment decisions. Organizations must ensure compliance with these standards to avoid legal pitfalls.
- Proactive Compliance through Audits: Commissioner Sonderling advises businesses to conduct regular audits of their AI systems to ensure compliance and prevent discrimination. These audits should be thorough and based on relevant data to identify and mitigate any biases in the system.
- Vendor Responsibility and Data Integrity: The discussion highlights the importance of holding vendors accountable for the AI tools they provide. Employers must ensure that their vendors comply with ethical and legal standards and provide the data necessary for compliance checks. We can expect vendors to be required to participate in third-party audits of their tools at some point in the near future.
Full transcript:
Commissioner Sonderling
Charles: Welcome, Commissioner Sonderling. How are you doing today?

Commissioner Sonderling: Good. Thank you for having me. I’m really looking forward to our conversation.
Charles: This conversation, I think, will be very enlightening. I have so many questions to ask you. I’m not even gonna be able to get through all of them, because I’ve been deeply entrenched in, you know, legal compliance for testing for twenty-some years. I’ve worked under consent decrees in the public sector, and I’ve done litigation support and advised my clients. And it’s not easy.

It’s never been easy. I don’t think it’s as easy now as it used to be, or at least that’s what we’re thinking about.
Commissioner Sonderling: No. I mean, think about how this is now being driven by technology. I mean, before, the work in your field was largely driven by getting people to go places and take tests with pencil and paper and Scantrons. And, you know, the democratization that technology is bringing to your field is like something we’ve never seen before. So an area that normally was reserved for, let’s say, knowledge workers in corporate settings once you reached a certain level, you know, as an executive, can now literally be used for your entire workforce. So thank you for being in this industry. Thank you for being a pioneer for a long time. But I think it’s really going to reach new heights that we haven’t seen before.
Charles: Yeah. I agree. And so before we get into it, you know, tell the audience here, and I’m curious as well. I mean, of course, I looked at your background. But what’s your career journey been?
And how did you end up sitting in that seat with the wonderful flags and official stuff behind you there?
Commissioner Sonderling: Well, I was a labor and employment attorney in Florida before joining the government, so I was defending corporations and HR departments when the EEOC or the Department of Labor, or private lawyers, would bring actions. So, you know, I first learned this industry by kind of coming in after the fact, when things had gone wrong, and having to defend those actions. So that was a really great experience for me to learn how HR operates, to learn how employment decisions are being made, lawfully sometimes and, unfortunately, sometimes unlawfully. Yeah.
And then in twenty seventeen, I completely left the beautiful state of Florida, where there’s a lot of sun and palm trees, to move to Washington, DC, where I joined the US Department of Labor’s Wage and Hour Division. The Wage and Hour Division deals with some pretty significant employment laws, such as, obviously, the Fair Labor Standards Act and how employees are paid, in addition to the Family Medical Leave Act, a lot of the payments related to agricultural workers and government contract workers, and, under the Immigration and Nationality Act, the wage provisions there. And then in twenty nineteen, I was nominated to be a commissioner on the EEOC. I had to go through the Senate confirmation process and was confirmed in twenty twenty.
So this is sort of the world I’ve been in, labor and employment, but obviously being here at the EEOC, especially for a labor and employment lawyer, is really a dream come true, because I always say that this is the pinnacle agency worldwide when it comes to civil rights in the workplace. You know, the EEOC is the governing body for human resources. It is the agency that protects all workers so they can enter and thrive in the workforce without being discriminated against, and makes sure equal employment opportunity is promoted. So this is an opportunity of a lifetime to be here, and I really have enjoyed it.
And one of the more important parts of my job, as you alluded to in the introduction, is getting out there and speaking to as many practitioners as I can in this role.
Charles: Yeah. That’s a huge thing, you know. I see you popping up all over the place, talking to a lot of friends of mine, or people whose opinions I really respect. And I think that’s just so important, you know, instead of just staying locked up there. I’m sure you have plenty of work to do.

Right? But being able to get out and talk about these issues, I feel like you’re gonna be learning a lot and educating. So, yeah, tell us a little bit about this. You know, what are some of the most interesting and impactful conversations that you’ve had? And how do you think those will influence, you know, all of us who are working so hard to make this stuff work properly?
Commissioner Sonderling: Well, you know, when I got to the EEOC, I wanted to be forward thinking, and I wanted to, you know, talk to people in the HR community and see what the big issues are. And at the EEOC, it is just so easy to allow the news, or what’s going on, to swing the direction of where we need to use both enforcement and compliance resources. So if you think historically, let’s look back at the last ten years. You know, the EEOC started to focus on issues like age discrimination. And then the Me Too movement happened.
We had to stop and shift all of our resources to the Me Too movement, cases, compliance related to that, and then COVID happened. And then it was all about, you know, early on, accommodations in the workplace. Can you work from home? Can you test your employees for COVID? Then the vaccine came out.

Can employees get exemptions related to COVID for religious reasons? And then George Floyd, and then racial discrimination and diversity, equity, inclusion became front of mind. So really, in this job, you can sort of go with the wind, which is important. We still do that as, like, what I call the day job in that sense. But I really wanted to say, well, okay.
Knowing all that is always gonna happen, how do we get ahead of the next large issue? And that’s when I found out that the biggest issue on chief human resources officers’ minds, on HR departments’ minds, was the whole issue of technology in HR, and artificial intelligence in the workplace, and how that was going to impact not only the HR practice itself, but also the employees and applicants being subject to these tools. And the amount of technology out there to make HR decisions faster, more efficient, but also more fair,

and to remove bias, can really, you know, get your head spinning with the amount of offerings and the amount of programs. And look, all of them promise one thing: to make better hiring decisions. To make hiring decisions without bias. And as you quickly learn when you start looking into HR technology, there’s a lot that goes into actually getting there. Yeah. So that’s sort of been my mission. How do we get there? How do we use these tools to actually help employers take a skills-based approach, which is, you know, a very big buzzword in HR, where a lot of people wanna get to, and actually have hiring be perceived as, and actually be, fair, and prevent discrimination from occurring? And, you know, AI tools really have the ability to do that. But at the same time, they have the ability to discriminate more than any one individual human being.
So a lot of it is now, okay, how do we discuss that? And how do we build that into the ecosystem of HR technology? And I found out there’s a lot of players involved.
Charles: Oh my gosh. It’s giant. In fact, I was talking to somebody the other day, an analyst, who was saying, you know, the HR tech industry itself has so much more going on than a lot of other industries when it comes to technology. There are so many pieces, and they’re often very disparate, you know, like the hiring systems not really talking to the internal mobility systems or the compensation systems.

There’s a lot of that going on. And, you know, technology is impacting everything. Hiring, you know, I’ve been on both sides of hiring, from when it was pieces of paper, to the first job boards, to where we are now. And it’s pretty incredible, and I agree.
I think it was a really interesting point you just made, that these systems scale. Right? A big piece of this is scale. So if you train a system that has bias, it’s going to scale that bias, you know, over and over, bigger and bigger, and amplify it. So we kinda gotta cut it off at the root, you know.
Commissioner Sonderling: And there’s a lot of different ways to look at that, and you brought up a really good point. So much of the analysis of the discrimination in these AI tools is, you know, looking at dataset discrimination. And what does dataset discrimination mean in terms you don’t need to be a computer scientist to understand? What is it for HR professionals? And, you know, kind of simplifying that: well, what is the data?

It’s your applicant flow. It’s, you know, where you put your job postings. Yeah. And then, you know, who applies for the job. Or, you know, a lot of these tools analyze your existing workforce, your current workforce. And, you know, the simplistic example of that is, you know, okay:

if your applicant pool is made up of one race, one gender, one national origin, you know, the computer may think that that is the most telling sign of what an ideal candidate is. And obviously, it’s a lot more sophisticated than that. But you brought up just now a really interesting point, that it’s a lot more than that. Right? That individual bias can be injected into not only the dataset, but also how it’s analyzed and how it’s run.
And I think that’s sort of the fear in all this: not just having your dataset proper and making sure that you’ve done what you’re required to do, whether you’re a federal contractor or otherwise, you know, with policies regarding hiring and advertising, but also, you know, access to these systems. If you have somebody with access to systems in HR, a hiring manager or otherwise, who now has more access to data than ever before, they can inject their own bias into the system without the system being able to recognize it, and you could eliminate qualified individuals from the applicant pool based upon their race, sex, national origin, religion, etcetera. Yeah. With a few clicks. And as you know, when things were on paper, it took a long time to discriminate. So you have a hiring manager who wants to look at physical resumes, and say you don’t wanna hire any women for the job. You have to look for female-sounding names. You have to look for characteristics associated with being female, with your eyes, and say, okay, male or female? Female? Trash. Oh, this is a close call. Let me look longer.

It takes a lot of time. Right? But these systems are so sophisticated that, whether through direct or adjacent characteristics, they’re able to pick that up. And with a few clicks, you know, it could be catastrophic to an organization, injecting bias in there. So that goes to the broader conversation about how so much needs to go into properly designing these tools on the vendors’ side, and carefully using them from the employer’s perspective.
And that’s what people are trying to figure out now. And I think that’s where a lot of people have different roles, whether it’s lawyers, whether it’s IO psychologists. I mean, we need to create that sort of universe of who’s involved in actually making sure that these tools are not only doing what they say they’re gonna do, but are also legally compliant and, you know, ethically compliant within the confines of your organization. And that is what I’m trying to figure out, and it is very complicated.
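The dataset-discrimination idea the Commissioner describes can be sketched with a toy example: a naive screener that scores candidates by similarity to the existing workforce will reproduce whatever skew that workforce has, even on job-irrelevant features. All of the data below is invented for illustration.

```python
# Toy illustration of "dataset discrimination": if the existing workforce is
# homogeneous, a naive similarity-based screener ranks lookalike candidates
# highest, reproducing the skew. All data here is invented.

from statistics import mean

# Existing workforce, encoded on two job-irrelevant features
# (say, 1.0 = attended the same handful of schools / lives in the same area).
workforce = [(1.0, 1.0), (1.0, 0.9), (0.9, 1.0)]  # homogeneous

# "Ideal candidate" learned as the workforce average.
ideal = tuple(mean(f) for f in zip(*workforce))

def score(candidate):
    """Higher = more similar to the workforce average (naive screening)."""
    return -sum((c - i) ** 2 for c, i in zip(candidate, ideal))

lookalike = (1.0, 1.0)  # resembles the existing workforce
outsider = (0.0, 0.0)   # equally qualified, different background

assert score(lookalike) > score(outsider)  # the skew is reproduced
```

Real screening models are far more sophisticated, but the failure mode is the same: when the target the model learns from encodes a skewed population, similarity to that population becomes the de facto hiring criterion.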
Charles: It’s the Gordian knot, if you will. But hopefully, we can conquer it. So, something that you were saying there, and a thought that I’ve had and have expressed a lot, but I can’t say it enough, and I’m very interested in your take too: I feel like, you know, hiring is a game of probability. You can’t hire a person if they’re not in your funnel. Right? It just doesn’t happen. So if there’s underrepresentation going into the funnel, it’s gonna be harder to have something equal, you know, come out the bottom. And I feel like maybe the great Satan, I would say, in a lot of the stuff we deal with, is who even sees a job ad. The algorithmic placement of a job ad: if I’m, you know, an African female IT professional or developer, and, you know, whatever algorithm doesn’t even show me a job opening, potentially I can’t apply for that job.

I’ve already been discriminated against. And then you have these recommendation engines. Right? Even before someone is a candidate, a recruiter or a sourcer says, oh, yeah, thumbs up on this person, thumbs down on this person. That trains the system as to what you want. And if there’s bias there, even before someone’s a candidate, you’re rejecting people that don’t fit. And you’re reaping the issues with that when you’re trying to be compliant with everything else.
Commissioner Sonderling: And you’re liable for it, and that’s the thing. You know, as you know, and as some people don’t know, job applications are federally protected. I mean, job descriptions and advertisements are all federally protected. And that’s before we even get to the content of what’s in those, where there’s generally a lot of issues. But to your point on the advertising side, I think this is where we saw some of the issues very early.
And especially the way big tech companies are designed, the way they make a lot of money is through advertisement and through placements, having what’s called, you know, these micro-targeted ads, to show you, based upon your demographics, which may be based upon

all your protected characteristics: hey, we wanna show you this product because you’re likely to buy it. And look, in advertising, you know, selling shoes or selling electronics, that’s completely legal. Yeah.

And that’s how they make a lot of money. But in the employment context, it’s completely different in that sense. So, yeah, if you’re going to say, I wanna only show this job advertisement to this group of people based upon all this demographic information I have from these tech companies, right? You are, like I say, going even further than pre-civil-rights-era discrimination, because at least before technology, you saw the job advertisement. And there are some crazy examples from pre-nineteen-sixties, pre-civil-rights job ads saying, you know, no women of childbearing years apply.
Charles: Yeah.
Commissioner Sonderling: You know, no wires in your pocket. Yeah.
Charles: Yeah. That’s a big one. Yeah.
Commissioner Sonderling: Yeah. Yeah. Google it. In black and white: no wires in your pocket. At least then, you knew you were being discriminated against because of protected characteristics. So once civil rights laws existed, you could at least say, hey, here is where I’m being discriminated against, in the job advertisement. You know, in the scenario you gave, you’re using those same tools, unrestricted, for a job advertisement. And there’s, you know, an example of this, a case brought by the ACLU, not by the EEOC, regarding Facebook, saying that a lot of employers could use the same advertising tools that they could otherwise use for products. And it was a state-based age discrimination case.
And they said that the employers went in there and limited the job advertisements to, let’s say, just twenty- to twenty-five-year-olds. As an example.
Charles: Yeah.
Commissioner Sonderling: Saying, you know, we want recent college graduates, or people in that age range. So everyone, you know, outside of that group didn’t see the job advertisement, because it was limited. If they were qualified for the job, they could potentially have a claim against that employer for age discrimination, you know, if they’re over forty. And that’s just, you know, one example, at scale. Right? Mhmm.

Having the very existence of the job withheld based upon your protected characteristics. So you could see it quickly, just on the job advertising side, just trying to find the right candidate: having those kinds of closed advertisements versus, you know, broadly putting it out there and letting the most qualified people apply, regardless of age, regardless of sex, etcetera.
Charles: Yeah. Yeah. And, you know, you mentioned selecting people in an age group, but I believe a lot of those tools are completely algorithmic, and you don’t even have as much choice as an employer as to where stuff shows up. But, you know, we’ve talked about problems, and we could talk about problems till the cows come home here, but let’s talk about the solutions to the problems, and specifically legislation, because we’re entering into, I would say, a new era of legislation, and AI is requiring that. And legislation does not come quickly.

There’s a lot that has to happen, you know, the bill on Capitol Hill, from Schoolhouse Rock. If you ever saw that. Maybe I’m a lot older than you with that. There were cartoons that helped teach kids about government, science and stuff, back in the seventies. There was one about the bill, and it walks its way through Capitol Hill. It’s very entertaining and important.
But let’s start with the biggest. We’re gonna talk about a couple of different significant pieces of legislation, based on geography, really. So let’s start with the biggest, broadest one: the EU AI Act just passed. Right? And my thoughts, I originally was thinking, well, you know, it’ll be just like GDPR.

It’ll be an adopted standard, because it makes sense and it’s well defined. But GDPR is simple in some sense. These are the requirements for your data. These are the things you have to do. There’s not a lot of gray area.

With the EU AI Act, and any AI act, the gray area becomes gigantic, you know. We are immersed in gray, a sea of gray sometimes. And we gotta hack our way out of it. So what do you think about that act? How is that gonna influence things here stateside?
Commissioner Sonderling: Well, I wanna take a step back, because it’s really important to remind everybody that there are laws in place that deal with algorithmic decision making just as they deal with human decision making in the United States. And, you know, from our perspective at the EEOC, whether an employment action or decision is made by a human or an algorithm, liability is going to be the same for those companies. So if we’re seeing what AI tools are doing, they’re making the employment decision, hiring, promotion, benefits, training, etcetera, or they’re assisting in it. Right? And that’s where you hear, like, a lot of the human-in-the-loop discussion. Either way, there’s going to be an employment action, and that’s what we’ve regulated at the EEOC since the nineteen sixties.

So before we get into the sometimes chaotic discussions of what foreign and state governments are doing, we can’t lose sight of the fact that the EEOC is going to ensure that every employment decision that is made does not have bias, including with algorithmic tools. So yes, I like to remind everyone of that, because, you know, there’s a lot of confusion over whether or not we need new laws, whether or not we need new agencies. And if that happens, you know, it will happen.

But in the meantime, you still have to comply with federal law.
Charles: Oh, yeah. And I have definitely reserved some time to talk about the Uniform Guidelines, I don’t know if now, and there’s other legislation too.
Commissioner Sonderling: Yeah, let’s pick up with that. I just have to give a disclaimer, because the technology is so new and because people don’t understand the algorithms in the technology. Right. There’s either the fear of using it, which is not good, or the idea that we can use it unchecked because, you know, the US hasn’t passed a global AI bill.

Right? They’re both misconceptions. So let’s start with Europe, because it’s very hot right now. It just passed, and it is really the first global standard, just like GDPR. And what happens when you have global standards is that they start to infiltrate other countries as well, countries that do business there. And for multinational employers that are going to be subject to one law, just from a compliance perspective,

sometimes it’s easier, in a sense, to federalize that law. Right? Yeah. There’s no federal standard here. So, you know, the EU AI Act has taken a much different approach than really anything we have in the United States,

where they’re saying, we’re going to assess AI based upon risk-based categories, from, yep, you know, low risk to unacceptable risk, and everything in between. And, you know, there is some confusion over this, but for conversational purposes, for the most part, using AI in HR is going to be considered higher risk.
Charles: Absolutely.
Commissioner Sonderling: And because you’re dealing with people’s jobs, you’re dealing with their livelihood. So, you know, there’s good reason they put that in. But then when you dig into it, there are different levels to some of it. Like, having a chatbot schedule an interview is obviously not gonna be high risk, so they don’t lump it all in that category. And what it comes with is a few things:

a lot more disclosure, a lot more testing, a lot more auditing, you know, additional consent requirements, and then really significant penalties. Including, and this is more on the law enforcement side, but it also touches into employment, banning AI that can deal with some of the emotional affect, emotional...
Charles: Facial recognition.
Commissioner Sonderling: Facial recognition, things like that. And also, you know, much different from the state of affairs in the United States, which we can also get into, they are saying that vendors are going to be liable for these tools. So, you know,

taking a step back: in the United States, only an employer can make an employment decision. And we essentially define that. Our jurisdiction is over employers, companies, unions and staffing agencies. Right? So that’s our world in the United States.

Those are the only groups that can make an employment decision, and they’re the only ones subject to our jurisdiction. You didn’t hear AI HR tech vendors anywhere in that list, yeah, from the nineteen sixties, right, when our laws were established. So the EU is saying that vendors are gonna be liable as well, which certainly puts more skin in the game for the vendors, which they don’t necessarily have here in the United States. So what does this mean? It means that, you know, to use these tools there, there’s gonna be an extra level of governance, an extra level of testing. And it’s similar to what we’re seeing in proposals here in the United States. You know, let’s just take a step back and look at how the United States has addressed AI in HR. It actually goes back to twenty twenty, when the state of Illinois was really the first to dive into this with their facial recognition video act, which essentially bans facial recognition interviews. You could do them, but there are so many consent requirements.
There’s so much to it that it’s almost impossible to use. And then New York City’s Local Law one forty-four was really the first all-encompassing one. And obviously, you know, one of the biggest and greatest cities in the world putting out a law specific to AI-automated employment decisions, calling them automated employment decision tools. But, you know, you look at it, and there has been a lot written and a lot of debate and discussion.
Charles: I think it’s weak.
Commissioner Sonderling: You know, that’s your own commentary, not, you know, the government official’s. Yeah. You know, I can point out that it only applies to hiring and promotion. And the way it defines what an AI tool is, as there have been a lot of articles on, you know, has allowed a lot of employers to say, well, our use of this AI tool doesn’t actually meet New York’s definition.

So then, if we don’t meet the definition of using an AI tool, whether it’s, you know, making a selection or ranking or, you know, whatever the ways out of it are, then we’re not gonna have to do this. But let’s take a step back and look at some of the similarities between that and the EU. So in New York City, they’re gonna require consent. They’re gonna require disclosure.

They’re gonna require a pre-deployment audit. They’re gonna require a yearly audit. They’re gonna require you to post those results on your website so that the public can see them as well. So those are common themes between the two. But, you know, if you look further at it: okay, well, it’s only gonna apply to hiring and promotions, and it’s only gonna apply to the EEO-1 categories of race, sex, ethnicity.
Not even age. Not even disability. Not even religion. All these other laws that the EEOC is going to say, well, you still need to comply with, to make sure your hiring assessment programs are not discriminating against certain religions, certain disabilities, certain ages, or based on genetic information. You can kinda go down our laundry list. You can have a false sense of security, almost being lulled into saying, well, look, I did a pre-deployment audit, I did a yearly audit for New York, I’m good. Right? Like, my tools are completely validated and fine. And then the EEOC comes in, because, of course, federal law still applies in New York, and says, okay, well, how about when it was used for compensation? How about when it was used for terminations?

How about the impact it had, you know, on this group you didn’t even test for? So you could see a lot of it is adding on additional requirements, but, you know, not everything. So just being in compliance in one place is not ensuring compliance. However, I do wanna say, what are the benefits of it? If you’re forced to do an audit, then you’re eliminating bias, even if it’s only on those categories that New York has deemed the most important.

Then, if you see a disparate impact, if you see discrimination, you could adjust the tool before it ever discriminates. And then you’re being compliant with our laws, which encourage employers to do assessment audits. So you can’t look at it as all negative. There’s some positive to it.
So that was a long-winded way of talking about some of the, you know, proposals and how they already mesh with existing law, and you’re starting to see some of the common threads. Like, there’s only so much they can come up with to require of employers before using this.
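To make the audit mechanics concrete, here is a rough sketch of the impact-ratio calculation at the heart of a Local Law 144-style bias audit: each category’s selection rate divided by the rate of the most-selected category. The category names and counts below are hypothetical illustration data, not from any real audit.

```python
# Rough sketch of a Local Law 144-style impact-ratio calculation:
# impact ratio = category selection rate / highest category selection rate.
# Category names and counts are hypothetical illustration data.

from dataclasses import dataclass

@dataclass
class Category:
    name: str
    applicants: int
    selected: int

    @property
    def rate(self) -> float:
        """Selection rate for this category."""
        return self.selected / self.applicants

def impact_ratios(categories: list["Category"]) -> dict[str, float]:
    """Each category's rate relative to the most-selected category."""
    top = max(c.rate for c in categories)
    return {c.name: c.rate / top for c in categories}

audit = [
    Category("male", 200, 100),   # selection rate 0.50
    Category("female", 200, 50),  # selection rate 0.25
]
ratios = impact_ratios(audit)
print(ratios)  # {'male': 1.0, 'female': 0.5}
```

As I understand the law, what must be published is a summary containing ratios like these for the covered sex and race/ethnicity categories; the statute itself does not attach a numeric pass/fail threshold to them.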
Charles: So I've got a couple of, I don't know if they're counterpoints, about New York. You're gonna have to see, I'm jumping off a bit. Is it a good thing? It's great to have these things started. What I don't like about the New York law is, a, there is no actual mandate of remedy. You have bias? You've gotta post that you have bias. But as far as I know, and I've studied it pretty closely, you don't actually have to change anything. Now, under the federal umbrella, you definitely would be identified as somebody who's got a problem, and then you would need to rectify it. Also, there's no requirement of validity. There's nothing that says that what you're using to hire somebody has to actually be related to the job. It just has to not have bias. So I don't like that part of it very much either. And the final point is, it doesn't hold the vendors accountable. The vendors should be able to provide their historical data to support that. But just like with the present state now, and different from the EU, the vendors are selling guns, but they're not necessarily on the hook if somebody shoots another person with a gun. They typically are going to want to help, of course, but it's not mandated. So those are a couple of the things on the New York side. And look, I've had guests on here, and people that I know, who say it's actually just big business influencing that New York law so that it doesn't have enough teeth to get people in trouble.
And maybe even so we can supplant things like the uniform guidelines. A little bit of a grassy-knoll conspiracy theory that may not be true. But you can see the controversy just with that one law, you know.
Commissioner Sonderling: Yeah. And, you know, you raise some good points there as well. Not to get too technical, but if you look at the pre-deployment audit side for New York, if you don't have enough of your own data, you can use generic data or mixed customer data. And in your career, as you know, that's unheard of. Right? That means absolutely nothing to us at the EEOC, because when we do a bias audit, we're only looking at how that tool was used on that specific job advertisement, on that applicant pool, and nothing else. And that is just gonna be a complete false sense of security that, you know, hey, these tools are in compliance.
Well, as you know, you could cherry-pick data from any kind of publicly available employment data and show no discrimination, but it's so specific to that one individual employer's use of it. But then I do wanna kind of counter also. Okay, so New York's requiring a bias audit. The EU is requiring a bias audit. What?
Let's get to the point. Well, how do you do a bias audit? Right? And you know what? If these state and local and foreign governments wanna take up this very complex topic of algorithmic discrimination and bias audits, okay, well, what's your solution? If you're creating these new laws, these new requirements on employers, and not doing it through the federal process, okay, what's the standard you're gonna use? Because there's been a standard at the EEOC since nineteen seventy-eight, which generations of I-O psychologists have been trained on.
And in the meantime, nobody's come up with anything different. Nobody's come up with a new standard that has been universally accepted. And the globally, universally accepted standard is not always used. Right? So then what happens?
How are you doing an audit? Well, look to the EEOC. Look to Title VII disparate impact testing. Right? So that's sort of like, okay, you ginned all this up, and then it comes falling back into our lap here. In a good way, because the I-O community knows how to do it, but they didn't come up with a new standard.
Charles: No, it's the four-fifths rule of ratio analysis, and nothing else. And you don't audit the algorithm. You're just auditing the output. So, you know what? It could be chickens playing on TikTok behind the scenes. In the New York thing, at least, nobody is actually accountable for that. They're just accountable for that ratio at the end of the tunnel. And sometimes, and I've done a lot of adverse impact analyses over the years, sometimes you've got empty cells, a lot of empty cells, where you don't actually have the substance to show the ratio, you know? And that can be problematic too.
Commissioner Sonderling: And the other part, too, is that a lot of the vendors and a lot of the HR buyers in this world are now suddenly walking into the employment assessment testing world, because that's essentially what these tools are doing, just modernizing it. And they have never dealt with how you test for disparate impact. And as you know, everyone points to the EEOC's nineteen seventy-eight Uniform Guidelines on Employee Selection Procedures, the four-fifths rule, as the global standard, as the get-out-of-jail-free card. Right? That's terminology that a lot of people like to use. But as you know, in nineteen seventy-nine, the EEOC put out frequently asked questions related to the nineteen seventy-eight guidance, which are still on our website in twenty twenty-four, that say the four-fifths rule is one of many ways to test for disparate impact, and that the EEOC will use it sometimes, and sometimes will use another standard.
Charles: So Yep.
Commissioner Sonderling: The reality of it is, there is no universally accepted standard. And what happens is, when these cases actually get to law enforcement, when they go to court, they hire experts, and it becomes a battle of statistical analysis. And that part of the equation, which I-O psychologists have been dealing with for a long time and data scientists understand, hasn't surfaced yet: oh, well, this is a lot more complex than doing a generic audit under the four-fifths rule.
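To make the mechanics the Commissioner is describing concrete: the four-fifths rule compares each group's selection rate to the rate of the most-selected group and flags any ratio below 0.8. A minimal sketch in Python; the group names and counts here are invented purely for illustration:

```python
# Four-fifths (80%) rule sketch: compare each group's selection rate
# to the highest group's rate. Hypothetical applicant-flow numbers.

def selection_rate(hired, applicants):
    return hired / applicants

groups = {
    "group_a": selection_rate(48, 100),   # 0.48
    "group_b": selection_rate(30, 100),   # 0.30
}

highest = max(groups.values())
for name, rate in groups.items():
    ratio = rate / highest
    flagged = ratio < 0.8   # below four-fifths of the top group's rate
    print(f"{name}: rate={rate:.2f}, impact ratio={ratio:.3f}, flagged={flagged}")
```

As the nineteen seventy-nine FAQs note, this ratio is only one of several accepted tests, and on its own it says nothing about statistical significance or sample size.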
Charles: Yeah, absolutely. And I'm glad you're talking about the uniform guidelines; obviously, they're pretty important. And in my experience, definitely, when we do this, we look at a two-standard-deviation test, which is sample dependent, so that can be skewed, or, you know, Cohen's d for the size of the effect. So there are other conventions. Right? And I also think, well, I don't have this dialed into my equation here, but it's also looking at the general demographic composition of the labor market in the geography that you're working in, and seeing how consistent you are with that. To me, there are other pieces of evidence.
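The point about sample-dependent significance testing can also be made concrete. Adverse impact analyses commonly use a two-sample z-test on the difference in selection rates, the "two standard deviations" convention; a sketch using only the standard library, with hypothetical counts:

```python
import math

# Two-standard-deviation test sketch: z-statistic for the difference
# between two groups' selection rates, using a pooled rate.
def two_sd_test(hired_a, n_a, hired_b, n_b):
    p_a, p_b = hired_a / n_a, hired_b / n_b
    p_pool = (hired_a + hired_b) / (n_a + n_b)              # pooled selection rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, abs(z) > 2.0   # |z| > 2 is the conventional flag

z, flagged = two_sd_test(48, 100, 30, 100)   # hypothetical counts
print(f"z = {z:.2f}, flagged = {flagged}")
```

Because the standard error shrinks as counts grow, the same pair of selection rates can be flagged at a large sample and pass at a small one, which is the sample dependence being described here.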
Commissioner Sonderling: Regional. Yeah. Regional. Okay. And it's not only that, it's per job description within that area.
Yeah. Which is different than other areas of the country for the same job.
Charles: Yeah. I know. That's why we sample. When I do sampling for our job analysis or validation, we make sure we hit every geography. Right? I mean, it's really important to have that representation. So for me, in practice here, I always advise my clients to do a bunch of little things that send a signal that you actually care as an employer. Right? It's not just, do you have a four-fifths ratio problem. It's, what are all the other things you're doing? And then, in my experience doing litigation support, there's a lot of opportunity preemptively to say, alright, let's have a third-party firm, like the one I work for, audit what you're doing and come out with some recommendations. And if you comply with those and show that you're making some changes, you're not gonna have to go to court or, you know, deal with a lot of these other things. So there are opportunities for people to get it right once it's been flagged, you know.
Commissioner Sonderling: I think it's a combination of everything you said, and that's what I've been trying to raise awareness of, kind of going full circle here: there's no one-size-fits-all approach to HR technology, just like there's no one-size-fits-all approach to anything related to employment, whether it's accommodations, whether it's testing. And in addition to investing in the infrastructure of actually buying these expensive HR tools, which do have a lot of promise to eliminate bias if carefully designed and properly used, and I always make sure to qualify it that way, you have to build a governance structure around them. And what does that mean? That means working with the vendor to make sure that all the testing is done only on your data, that only people within your organization who have proper EEO training have access to this, so they don't inject bias like we talked about before. But also doing that pre-deployment testing, doing that yearly bias audit testing as the data changes, as the job requirements change. And we encourage that at the EEOC, because that prevents employment discrimination. Doing that at least gives you some certainty that if there are issues, you can change the tool in advance, before discrimination occurs. And that's where we need to get with this. But look, that's a cost, just like buying the tool is a cost. It's not like other software where you can just let it go. You're dealing with civil rights, and you have to ensure that it's being used properly, or you're gonna be liable for really harming your own workers, which I believe no employer actually wants to do. There's just a lot more to it.
And on your point, which I think is really important, you're right. We have to deal with, let's say, almost every employer in the United States. We also have jurisdiction over state and local and federal government as well. Right?
Charles: Right.
Commissioner Sonderling: So we have a very, very big portfolio. And like most government agencies, we have limited resources. So say we get two cases of employment discrimination, and our investigators show up. One company says, here's what we did, here's what we asked the vendor, here's why we chose this vendor, here's the dataset we used, here are the skills we asked it to assess for, which are based upon not only the industry standard but our own business necessity within our organization, here's the testing we did, and so on, everything we've just been talking about. Versus another company, where we show up and they say, I don't know, this vendor promised us diversity, equity, inclusion, and better hiring, and said we could replace our talent acquisition department by buying this. Who's gonna be in a better position?
Right?
Charles: Yeah. Exactly. You know, I came up with this term, bias washing, too. I audit a lot of websites of, I call them now, predictive hiring tools, not just assessments, because, as you noted earlier, there's a lot of stuff that's not traditionally seen as an assessment, because it's below the surface, and sometimes completely inapparent, or based on just, you know, NLP, whatever it is. And you look at the websites and it's all, no bias, no bias, no bias. It's so easy to say that. It's absolutely so easy to say that. And that's why I think one of the cool things that I feel is gonna happen at some point, based on commonalities with the EU and just common sense and substance, is some kind of auditing of vendors. Probably not local to the specific use case, but, you know, let's look at how you developed this tool. Let's look at what it looks like across all of your samples. Have you had a board or somebody come in that's diverse and looked at the language that you're using? There's the surface-level stuff, and there's the algorithmic stuff. So do you see that happening here in the US at some point? I don't know that California has a law yet, but there are a couple in California that are working their way to the capitol. So what do you think about that?
Commissioner Sonderling: Look, this goes back to our earlier conversation about following what the federal government here in Washington, D.C. is doing. As you know, it takes us a long time just to pass a budget to keep the lights on in these agencies. So, you know, there is a lot of noise on Capitol Hill now about AI, but a lot of that is around generative AI and copyright issues, privacy issues, and certainly some on employment, but not as aggressive as what we're seeing in the states. I think you're really going to have to pay attention to what these larger states like California and New York are doing, and what themes they're picking up in this legislation. Because at some point, as we saw with pay transparency laws in this space, you know, Colorado, then California, then New York, then Massachusetts all mandated pay transparency. And at that point, national employers were just saying, you know what, it's gonna be too difficult to try to slice and dice this between the states, and we're just going to do this across the US. So I think that's what employers need to be looking out for now. What are those changes? What are those potential requirements? And look, complying with them in advance, creating an advisory board, doing the audits, getting consent, whatever it is that these state legislators are pushing for, you can do now. And the more you do now, the more compliant you're going to be with the federal government, which is truly, at the end of the day, going to be everyone's fear, just because of the impact we have, not only from a nationwide perspective but a global perspective.
Charles: Definitely. I couldn't agree more, in my advising, writing, whatever. It's internal governance. We know what the right things to do are, for goodness sake. So you don't have to have the government forcing you to do those right things. You can opt to do those right things. And they're not just arbitrary things. They're actually very meaningful things, right? To accomplish the equity we've been talking about.
Commissioner Sonderling: Right.
Charles: Yeah. So I'm interested in, so let's talk a little bit about the uniform guidelines, and let's talk about their relevance for, let's say, really opaque, black-box, neural network algorithms. So I've been waiting to use this analogy for a really long time, and I hope it makes sense. Right? So I'm really into cars. That's my hobby. I'm a huge gearhead. So I have a truck. My truck was built in nineteen seventy-eight. So I always think, well, the uniform guidelines were built when my truck was built. It's a pretty crude machine. Right? It's a pretty crude machine.
It's very understandable how it works. Well, I upgraded it. I put digital fuel injection on there. Right? So I have gauges in there, and all the parameters, the temperature and the pressures and everything, are monitored by gauges. So it's the same gauges that I have, and it's the same engine that I have, but a carburetor is an analog metering device, and now I've got a computer on there. And I don't know exactly what the hell it does. It's somewhat transparent. There are maps of different parameters, and you can adjust those; the computer does it automatically. The gauges that I'm reading are saying the same stuff, but what's going on behind the scenes to give me those readings is very different. Right? If I'm making sense here. So the analogy, which may be a stretch, but I haven't talked about it much because I love it, is that the uniform guidelines are looking for those signals on the gauges, and the inputs, the engine, are basically the same in terms of the hiring process, but now you've injected a digital thing in the middle. And some of those digital things are not easy to understand.
So how do the uniform guidelines handle that? If I remember right, it's a lot about construction and validation. I don't know that they get to the level of, what does your algorithm look like, but they do wanna know how you built it and what the outputs are. So talk a little bit about the continued relevance of the uniform guidelines to these more sophisticated tools. I think with the simpler machine learning stuff, maybe it's pretty apparent, but we get into a territory, especially now with generative AI, where it's supernatural. We don't have any idea what the hell is going on in there. Am I making sense? Oh, we lost your mic.
Commissioner Sonderling: You can edit that out.
Charles: Yeah. Yeah. Now you're there. Okay. Good. Yeah. So, am I making sense?
Commissioner Sonderling: Totally making sense. Also in the sense that we're constrained here in the executive branch. Right? We can't make new laws. We have to just continue applying, let's say, older laws to this new technology. But it works, in a sense. And, you know, we actually put out guidance on this, and I'll make sure you can link it in the notes to the podcast, about how Title VII and disparate impact testing under the four-fifths rule interact with this new technology. And look, we are not technologists here. We don't know how to regulate the technology. We know how to regulate employment decisions. And, to simplify it, we just have to look at the results.
And look at what was put into those inputs. And I don't wanna say ignore the middle part, what the algorithm actually did, but again, it doesn't matter, because it doesn't matter what the human brain did either. Right? It's gonna measure what the result is. And that's what we keep beating the drum on: when you use one of these AI tools to make an assessment for a hiring decision, it's gonna be the same inputs, the applicant pool, the skills you thought were relevant for the job. And then the output is going to be, well, what was the selection rate of candidates versus the highly rated candidates, without knowing what the crazy algorithm, machine learning, natural language processing, all those AI buzzwords, actually did. And we can't have that be a distraction, because if we do, then we're never gonna be able to perform these audits. So my plea, and what we've said, is: take what the inputs were, what the metrics were, I need ten years of this skill, five years of that skill, and ask whether it had a disparate impact. And if it had a disparate impact, then, okay, can you justify it? Was it job related? What was the business necessity? So it's going through that same analysis, I don't wanna say skipping over the algorithm,
Charles: but Right.
Commissioner Sonderling: but you're gonna be liable for what the algorithm does.
Charles: Yep. Absolutely. Yeah. And that's the take I've had. But as things get so sophisticated, I think people look at nineteen seventy-eight and say a lot has changed between then and now. Let me ask you this. You may or may not know this, but it relates to what we were just talking about. Let's say we're in court, and I'm the plaintiff, and I'm grilling somebody about the stuff they used. Could I say, tell me about the training data that was used to create this AI algorithm, can you produce where that came from? Or would the judge throw that out and say, you know what, that's not relevant here, because the uniform guidelines don't necessarily talk about that? Because that's sometimes the root of the problems with this stuff. Right? So is it meaningful in court?
Commissioner Sonderling: Yeah. I mean, you’re gonna have to be able to produce that.
Commissioner Sonderling: And I think that gets to an even more difficult issue, which is, who owns the data? A lot of the time, when you apply and take an employment AI assessment, you're being rerouted to the vendor's website. The vendor owns the data. But guess what?
From the EEOC's perspective, from an evidentiary perspective in court, none of that matters. As the employer, you are liable.
Charles: Yeah. You don't have the data.
Commissioner Sonderling: You don't have the data. Does it matter? You're liable. So when we ask you for it, there's gonna be no defense in saying, well, we don't have it, go get it from the vendor. No. You need to have it. And that's to your point about how the vendors in this space will rise to the top. The vendors who will be able to answer this, who will provide that data, who will store that data just like any other kind of employment record, will be able to meet that challenge, in a sense that has been going on for a long time.
Charles: Yeah. One hundred percent. So let's kind of widen out here. There has definitely been a lot of good stuff coming out of the federal government, like the No Robot Bosses Act that's kind of working its way through. So there's a bunch of these things. Right? And then you have all these different state laws, even some city laws. Right? So do you feel like, can we talk about the uniform guidelines still being the canon and still being very, very important and useful? Or do you feel like what's coming is potentially a federal law that knits together all the different things, like the No Robot Bosses Act and, I know, other proposals? There are different federal things that are saying a lot of the same stuff and even coloring it in with more information, say, on the AI side. Is that gonna be all tied up nicely for us in some kind of package, or is it just, hey, the uniform guidelines are still relevant to all this, we don't need anything new? What are your thoughts on that?
Commissioner Sonderling: This is just my personal opinion, of course, and it doesn't bind the EEOC or the federal government. I don't think any of that's gonna happen.
Charles: Right.
Commissioner Sonderling: I think it's gonna be a patchwork of state and local laws, still essentially relying on the EEOC, in a way, sort of saying, here are the basics. I think, as we've discussed at length, there are gonna be add-on requirements, but I don't think you're gonna have that consensus from the federal government. Maybe, and my prediction is it's because of generative AI, people really understand the issues a lot more with copyright, with the likenesses of actors and actresses, images, and musicians. Maybe they do something there, because it's understandable on the copyright protection side. But I just can't imagine anything else, just like with privacy laws: there's GDPR, California has one, and states are just starting to do it themselves. So, yeah, I just don't think there's gonna be any universal action. But look, through these bills, you're getting a peek into what potentially powerful senators or legislators are going to expect from employers using AI. Like, you brought up the No Robot Bosses Act. And look, if they're saying they don't want employees being subject to an algorithm for their performance reviews or their management, that's a pretty good sign of at least what their personal
Charles: Yeah.
Commissioner Sonderling: views are. And, in a way, you know, you should take that as a warning. If very powerful senators are saying that these are their concerns, you can judge your own practices around that.
Charles: Yeah. And look, it's like environmental stewardship. That has been a big one, thank goodness, in the last decade or so. And there's federal stuff, but companies have done a really good job of having their own internal governance, their own moral compass, if you will, on this stuff. And with all the things we've talked about today, I just continue to think that companies can be so proactive and do so many meaningful things under their own power, because they know what the issues are. And they have the ability to make their own standards to supplement and help make sure they're compliant with the federal, you know, global, planetary, whatever it is. So, as we've got the last couple of minutes here, I've started to do this with a lot of my guests: I'd like you to fast forward ten years, to twenty thirty-four. Jeez. Describe what you think, in your mind, the hiring process looks like, and how these things are governed, if in any different way than now.
Commissioner Sonderling: I don't think the civil rights laws are going to be amended, because they haven't been in a really long time. I don't know if there's gonna be a new standard relating to employment assessment testing, because there hasn't been one in a long time.
Charles: Right.
Commissioner Sonderling: What I do think is, you're gonna see just a general awareness, whether it's through litigation or government enforcement, of actually having cases decided in court, and maybe even the courts setting some of the guardrails or parameters for what these laws mean related to AI, which we haven't really seen yet, of course. But I think you'll see more general adoption across the board. And at some point, employers have to spend money on the infrastructure to use it properly, because, you know, I always say it's no longer a question of are you going to use AI in HR, it's how are you gonna use it at this point, and for what purpose?
And how are you gonna comply with current law or any future law? So I think that's where it's all going. And I think at some point there's gonna be just a general awareness or understanding of the baseline obligations, of what you need to do.
Charles: Yeah. So, as an employer... well, excuse me. Can you form a vision in your head of what an assessment or a hiring tool is gonna look like ten years from now? Jeez. Right or wrong.
Commissioner Sonderling: I can't even imagine. I couldn't have dreamed up the technology that's out there already, which has some pretty wild stuff about how to assess employees, the gamified assessments. That is way beyond my simple understanding as a labor and employment lawyer.
Charles: Gotcha. Well, a lot of the stuff that you deal with is way beyond my simple understanding too. So thank you so very much.
I thank you for being so accessible and really getting out there and sharing important messages with the people who are practicing this stuff, building the tools, consuming the tools. We have an obligation to be fair and treat people like we'd wanna be treated. I think it all goes back to the golden rule. That's the biggest standard we all have to live by. And, you know, it's complicated, but that's gonna be life here in the age of significant AI. So thanks so much for your time. I know you're busy. This has been awesome.
Commissioner Sonderling: Thank you so much.