
How AI Impacts Biases & Inequities In The Workplace With Kieran Snyder

How does generative AI impact teams? And what can we do to mitigate the impact of biased AI?

In Episode 132, Kieran Snyder, Co-Founder and CEO at Textio, joins Melinda in an insightful discussion about the impact of AI on workplace biases and inequities. They explore the potential harm and social biases perpetuated by generative AI, and look closely at how this applies to performance reviews. They also discuss key factors business leaders should consider when they adopt AI solutions in their teams, including recognizing these biases, establishing accountability systems that support DEI efforts, and safeguarding confidential employee information.


Subscribe To The Show

Don’t miss an episode! Subscribe on your fav app to catch our weekly episodes.

Accessibility: The show is available on YouTube with captions and ASL interpretation. Transcripts of each episode are available by clicking on the episode titles below.


When you’re setting that policy [for teams using AI], there are a couple of questions that we really recommend that managers and… business leaders… ask. First is… ‘Who made this? What is the diversity of the team that made this thing that we’re using…?’ The second question… is, ‘What was it made for? Was it purpose-built for what you’re using it for? Or was it one-size-fits-all…?’ And then… we recommend that people ask vendors, ‘What are your biases? What did you build this to do?’
Guest Speaker

Kieran Snyder

Co-Founder and CEO at Textio
(She/Her)

Kieran is co-founder and CEO of Textio, the platform for inclusive and equitable communications. After earning her PhD in linguistics at Penn, Kieran spent a decade creating the world’s most impactful language products at Microsoft and Amazon. In addition to developing linguistic capabilities like spelling and grammar-checking for more than 100 languages, she led the large-scale effort to integrate the Bing search engine into Microsoft Windows. Kieran is a world-renowned expert on language and bias at work, and her writing has appeared in Fortune, The New York Times, Slate, and the Washington Post.

Learn more about the host and creator of Leading With Empathy & Allyship, Melinda Briana Epler.

Transcript

MELINDA: Welcome to Leading With Empathy & Allyship. I’m Melinda Briana Epler, Founder and CEO of Empovia, formerly Change Catalyst. I’m also the author of How to Be an Ally, and your host for this show.

 

What is allyship? Allyship is empathy in action. We learn what people are uniquely experiencing, we show empathy for their experience, and we take action. As a part of that process, we learn and unlearn and relearn. We work to avoid unintentionally harming people with our words and actions. We advocate for people, and we lead the change on our teams, in our organizations, and across our communities. 

 

In this episode, you’ll learn tangible, actionable steps that you can take to lead the change and be a more inclusive leader, no matter what your role is. Want to learn more? Visit Empovia.co to check out more of my work.

 

Let’s get started. 

 

Our guest today is Kieran Snyder, who is Co-Founder and CEO at Textio. Since ChatGPT launched at the end of 2022, we’ve seen generative AI explode. It’s now offered in many of our workplace apps, most of them at this point, I think. Companies have been very, very quick to add AI to their products so they don’t get left behind. And there’s a problem that we’re seeing: humans are biased, we know this, and AI programmed by humans amplifies those biases. So in this episode, we’ll be speaking about how AI in the workplace can impact biases, inequities, and team environments as well, and how we as individuals can become more aware and take action.

 

So welcome, Kieran. I’m excited to have the conversation with you.

 

KIERAN: Me too. Thanks for having me on the show.

 

MELINDA: Yeah, of course. So let’s start first with your own story. Where did you grow up? How did you end up doing the work you do today?

 

KIERAN: Yeah. So going back as long as I can remember, like way back into my childhood, I’ve always been kind of half math and half language. My dad is an engineer, still in his mid-80s, running his boutique electrical engineering company, making components for space shuttles. So I grew up with that influence. Then my mother was a creative writer. I grew up to be an engineer and a writer, kind of taking both halves of that. So I ended up getting a PhD in linguistics, after studying math and linguistics in college, with a real focus on natural language processing, which is very timely now. I’ve always been a competitive athlete, so that has ended up really formative in my view of teams and teams that work really well together, whether in technology or another domain. I’m actually still, 30 years after I started, a youth basketball coach. I started coaching with my dad when my sister was little, when I was still 15 myself. All of that has gone into making the professional choices that I’ve made.

 

MELINDA: Awesome. I also coached youth soccer for quite a while.

 

KIERAN: Nice.

 

MELINDA: That’s great, thank you for sharing that. Well, could you share a bit about what you do now? What do you do at Textio? What does Textio do, for those who don’t know?

 

KIERAN: Yeah. So I’m the CEO and one of the two founders of Textio, and we make communication software that is specifically and purposefully designed for HR, with equity at the center. How do you hire and retain a diverse team? That’s the mission we’re all about, helping organizations do that. How do you harness the power of language to bear on those goals? How do you write inclusive job descriptions, candidate communication, website content, performance reviews, feedback at work? So everything through the whole employee lifecycle: how are you communicating in a way that helps you not only hire a diverse, high-performing team, but retain them? So we make software that helps on the communication side, exactly for these HR scenarios, with DEI really at the forefront of the product that we make.

 

MELINDA: Awesome. You’ve been working with generative AI for some time. For folks fairly new to this, can you just give us an overview of what generative AI is, and then where it shows up in the workplace? I’m not sure everybody would’ve noticed.

 

KIERAN: I don’t think everybody will know. I think there’s actually quite a bit of jargon confusion, what’s AI versus generative AI versus ChatGPT. For the last couple of decades, AI was anything that used maybe machine learning technology underneath what you could see on the screen. So you take really big sets of data. In a text use case, it’s often language data. So a set of job descriptions with information about who’s really applied to those jobs in the past, or a set of performance reviews with insight about employee retention, what kinds of patterns were likely to drive retention versus attrition in an employee group. And what the technology does is it finds patterns in all of that data. So it allows you to say things like, when you describe a manager role as managing a team, you are more likely to attract candidates who are men to apply to your role. When you describe the role as building a team, you are statistically more likely to attract candidates who are women to apply to the role. When you say lead a team, you get a gender mix. That is the kind of insight you can get if you’re using machine learning behind the scenes. Generative AI takes all of that technology and uses it to actually create content. So not just analyze what’s happened in the past, but use all of that language to write something that sounds plausible for the future. I say sounds plausible, may not be plausible.
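To make that kind of pattern-finding concrete, here is a minimal sketch in Python. It assumes you have historical job posts labeled with applicant outcomes, and it measures how particular phrases correlate with who applied. The corpus, phrases, and numbers below are invented for illustration; this is not Textio’s model or data.

```python
from collections import defaultdict

# Toy corpus: (job post text, fraction of applicants who were women).
# Illustrative numbers only; not real data.
posts = [
    ("You will manage a team of engineers", 0.28),
    ("Help us build a team from the ground up", 0.55),
    ("Lead a team of designers", 0.47),
    ("Manage a team and own delivery", 0.31),
    ("Build a team of analysts", 0.52),
]

phrases = ["manage a team", "build a team", "lead a team"]

# Average applicant share for posts containing each phrase.
stats = defaultdict(list)
for text, share_women in posts:
    for phrase in phrases:
        if phrase in text.lower():
            stats[phrase].append(share_women)

for phrase, shares in stats.items():
    print(f"{phrase!r}: avg share of women applicants = {sum(shares)/len(shares):.2f}")
```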

 

MELINDA: So you get ChatGPT results that sound plausible, but are absolutely not.

 

KIERAN: Yeah, I’m glad you mentioned ChatGPT. Because ChatGPT is an app that is built with generative AI underneath it, and it’s a chatbot. So you could go to the website and type in a text box and have a conversation, I did air quotes there, conversation, with the bot. Behind the scenes, what it’s doing is it’s looking at the whole corpus of the internet and trying to predict which word should come after the word that it’s already written to get something that sounds credible, and sometimes quite authoritative, without it actually being credible or authoritative.
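As a toy illustration of that next-word prediction, the sketch below builds a bigram model over a tiny made-up corpus and samples one word at a time. Real systems like ChatGPT use large neural networks trained on far more text, but the generation loop, picking a plausible next word given what came before, is the same basic idea.

```python
import random
from collections import Counter, defaultdict

corpus = ("the model predicts the next word the model writes "
          "the next word that sounds plausible").split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed.
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```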

 

MELINDA: Yeah, I read a story recently about an attorney who’s currently in a lawsuit. I think it’s a lawsuit where he searched ChatGPT for cases that would back up his case particularly, and it turns out that none of them were real. That’s a perfect example.

 

KIERAN: It’s a perfect example, and I saw that too. And it caused the judge in that situation to require that attorneys in their courtroom, in the future, specifically call out places where they’ve used generative AI to assist in the preparation of their arguments. This is a pretty highly educated person, this is an attorney doing this, and that’s somebody, by the way, who probably has pretty good skills of discernment to tell a real case from a hallucinated or false one. For everyday users, it’s really easy to be misled into what’s true and what’s not true. That’s why you were searching in the first place: you didn’t know.

 

MELINDA: Yeah. So recently you traveled to Capitol Hill and spoke with members of Congress. Could you share a bit about what that trip was about and what you discussed?

 

KIERAN: Yeah. I will tell you, when I went on the trip, it’s easy to be a little cynical about political conversations. I ended up really hopeful after this discussion. So I was part of a small group that was invited to address the Congressional Caucus for AI, which is a group of congresspeople, over 50 now and actually growing, all members of the House of Representatives who are thinking about legislation for AI. In contrast to a lot of issues discussed on Capitol Hill, this feels almost pre-partisan. It doesn’t feel like Congress has already decided what they think based on their political affiliation; no one knows what to do. I think there’s wide recognition that Sam Altman, who leads OpenAI, which makes ChatGPT as well as a lot of the most cutting-edge generative AI models that other workplace apps are using, has been really central in the conversation. He is advocating, seemingly advocating, for regulations. And when you look at what he’s advocating for, it’s often to maybe prevent other organizations from becoming competitive with OpenAI. So Congress is really interested in a greater diversity of perspectives from innovators, entrepreneurs, policy scholars, which is amazing.

 

So I was invited to share a little bit about the bias in these models that is there by default. If you’re not purposefully designing your generative AI solutions with equity in mind, you end up propagating and perpetuating existing social biases in a way that’s pretty dangerous. So Congress was really interested in this. I would say, along with things like accuracy, IP rights, trustworthiness, data privacy, bias is a really hot topic in the congressional conversation right now.

 

MELINDA: That’s awesome, that’s good to hear. I certainly have been following with horror, as Bing’s chatbot becomes hostile to journalists, and AI search results show really deeply sexist and racist and ableist stereotypes. I also just read an article about a study by the United Nations Development Programme; they found that in search results for people in STEM, 75% to 100% of AI-generated images were men. That’s continuing to perpetuate the issues, where it’s absolutely not true. Then another thing that I read recently: Google’s AI, back in 2015, auto-tagged Black people as gorillas. I just read an article that they didn’t actually fix the problem; they just stopped tagging any image with the term gorilla. So it’s worrisome, because that’s not a fix.

 

KIERAN: Oh, completely. So when organizations ask how they should consider which AI solutions to adopt, of course there are a lot of sophisticated questions you can ask about the data set and what it was built for; we can talk about that. The simplest question you can ask, if you’re not an AI technology expert, is: tell me about the diversity of the team that built this thing. The reason you ended up with stuff like the Google solution is because there weren’t a lot of Black people on the engineering team building the algorithm, and so they didn’t think about those test cases. This exists in technology that’s not software. I read recently that women are 73% more likely to die in car accidents than men, because crash test dummies are designed after average male body proportions. That happens because industrial engineering teams don’t have very many women on them. So this is not a new problem. But with AI, I think we have the potential to really accelerate harm.

 

So I have done quite a bit of work now with ChatGPT and the biases that are there. A simple example: when I asked the system to write sample performance feedback for people in different roles, for an engineer, for a doctor, and so on, what came back was very consistently gendered. If I ask for feedback about a receptionist, the system comes back writing she/her pronouns in the center of the feedback. She’s really friendly, she’s really bubbly, she’s very effective, she’s not very nice, whatever it might be. If I ask for feedback about a construction worker, I get he/him pronouns. ChatGPT isn’t asserting an intentional point of view about gender. But it is propagating one that is present in the underlying data set, because the team did not build the dataset to counteract that. We’ve had generative AI in Textio since 2019. But we built the dataset very intentionally, by the way, with an agenda, and the agenda is to offer opportunities to people who haven’t had them before. We don’t pretend we’re bias-free either; we totally have a bias. The bias is to make sure women and Black people get jobs and get promoted; that’s our bias. So when you build a system without a point of view on equity, you just inherit the social biases that are present on the internet by default. That’s what you see in ChatGPT.
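As a rough sketch of how you could surface this pattern statistically rather than anecdotally: generate many feedback samples per role and count gendered pronouns across them. The generate_feedback function below is a hypothetical stand-in for a call to whichever model you are auditing; the rest is just counting.

```python
import re
from collections import Counter

def generate_feedback(role: str) -> str:
    """Hypothetical stand-in for a call to the model under audit,
    prompted with something like: 'Write sample performance
    feedback for a {role}.'"""
    raise NotImplementedError("connect this to the model you are auditing")

SHE = {"she", "her", "hers", "herself"}
HE = {"he", "him", "his", "himself"}

def pronoun_counts(role: str, n_samples: int = 100) -> Counter:
    """Tally she/her vs. he/him pronouns across many generated samples."""
    counts = Counter()
    for _ in range(n_samples):
        words = re.findall(r"[a-z']+", generate_feedback(role).lower())
        counts["she/her"] += sum(w in SHE for w in words)
        counts["he/him"] += sum(w in HE for w in words)
    return counts

# Example (once generate_feedback is wired up):
# for role in ["receptionist", "construction worker", "engineer"]:
#     print(role, pronoun_counts(role))
```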

 

MELINDA: Yeah. I was reading an article that you wrote recently, where you said that, at its core, generative AI is essentially a technologically advanced mirror: what goes into it is what comes out. If the inputs are biased, the outputs will be too.

 

KIERAN: Yeah. I think it can be really hard to tell, in any given case, whether what you’re seeing is biased. It’s really obvious in that pronoun case, when we talk about it statistically, when you look across hundreds of documents. But in any one case, it may be less clear. I also looked at sample performance feedback for theoretical employees, same roles, same prompts. But in one case, the person is an alum of Howard University, which is a really prominent HBCU in Washington DC. In the other case, an alum of Harvard University in Cambridge. Again, this is all like, write me a sample. But the Howard alums are most likely to be criticized for lack of attention to detail, lack of analytical skill, not getting along with others. The Harvard alums are most likely to be criticized for being condescending, maybe not stepping enough into leadership. So the stereotypes that exist in the underlying dataset come through, and in any given case, you may not know. Because all of those could be totally valid reasons for offering a performance critique about any individual employee. But when you look across hundreds of examples, and it just so happens that all the Howard alums are posited to have the same performance gaps, and also all the Harvard alums are, that’s bias. That’s bias in the dataset.

 

MELINDA: Yeah, absolutely. You can see, even if you’re not googling to get a performance review, how many other ways that those same biases can impact the results that you receive. So let’s talk a little bit about how this then starts to impact the workplace. I mean, you can start to see already. Maybe performance reviews is a good place to start, since you used that example earlier. How would you recommend that managers keep this in mind as they’re developing their performance reviews, assuming they’re using some AI? What advice would you give them? 

 

KIERAN: Yeah. So first of all, don’t write performance reviews with ChatGPT. Don’t send your confidential employee information to an insecure website on the internet. Bad plan, no matter what the underlying models are. Don’t do it; use only trusted applications. So I’ll start with that. But from an equity standpoint, even once you assume you’re using a secure environment or a trusted application, let’s begin with the fact that people are also not very good at this. The reason that these systems have problems is because we have problems as people, and have for a really long time. So last year, Textio published the largest study ever of its kind about demographic bias in performance feedback at work. We looked at over 25,000 people’s worth of data across 253 different organizations, and we looked at the kinds of feedback received by people of different races, different genders, and different age groups. It’s not equal, and these are documents written by people. Women receive 22% more personality feedback than men. Not feedback about their work, but feedback about their personality: are they nice or not? Black women in particular are five times more likely to be called overachievers than White men. That might sound like a compliment; it’s not really. It’s sort of like, you did a good job despite my low expectations for you. Whereas White and Asian men are four to six times more likely to be called brilliant than any other group. So you start seeing these patterns in the documents that have been written by people.

 

So enter AI. If you’re using an assistive technology that hasn’t been designed to flag these biases for you while you’re writing, or worse, it’s producing documents with these biases baked in for you, you can’t trust the fairness and quality of what you’re putting together for employees. That’s part of why, at Textio, we think that whether the content is written by a person or by AI, you need independent validation of the biases that are showing up in the output before you actually publish something or send it to somebody. By the way, I think AI can be a huge assist here. It’s great at finding patterns. AI can tell you: Hey, you’re describing this person in a way that seems like it might be demographically oriented, and you probably didn’t even know you were doing it. It could be really helpful in combating bias if it’s designed appropriately. But if it’s not, then you can’t really trust the output without that kind of independent validation of the fairness of what you’re writing.

 

MELINDA: Yeah. I think a lot of people wouldn’t just ask ChatGPT, or whatever algorithm or AI system, to create a performance review outright. But I know a lot of people are using it to make whatever they’re doing better. So that’s where I think there’s an issue that’s probably happening in workplaces.

 

KIERAN: It definitely is. If you’re not using software to help you spot bias, you’re doing things a really traditional way. You’re writing a performance review where maybe you’re giving sensitive feedback. What do you do in a normal case? Well, you ask your HR business partner to give it a read through, you ask your boss to give it a read through. You ask for a second opinion, to say: Hey, am I being fair here, am I communicating this effectively? By the way, a second opinion is better than no second opinion. But a second opinion just introduces one other set of eyes and the biases that they come with. Of course, if they’re similar to you in any way, and statistically, they probably are, if you’re asking them for help, they may not spot all the issues; they won’t spot all the issues. 

 

So I am a real believer in software’s potential here to point out issues we may not see on our own. Because what the right software does is it aggregates the perspectives of millions of other people, with millions of different identity backgrounds, and helps you understand how you’re coming across. But especially with sensitive, high-stakes communication like performance reviews, if you’re writing without that, it’s almost assured you are producing something that carries some bias with it. It’s funny: when I first started looking at performance reviews, which was back in 2014, one of the most interesting findings is that insight about personality feedback, where women receive more than men. That’s true even if the person’s manager is also a woman. Women in management are just as likely to produce biased feedback about their women employees as men in management are, even though they themselves have probably experienced the same sort of discrimination in the workplace. So we can’t see it by ourselves. It’s a really important insight.

 

MELINDA: Yeah, it’s so embedded in our systems and processes and how we grew into roles. Well, how else are you seeing AI impact teams?

 

KIERAN: So the first thing I would say is, nobody really knows what their strategy is yet. I think there is a real fear of being left out, a bit of FOMO in the conversation. Like, are you using AI, how should you use AI? So I feel a lot of anxiety in the conversation. There’s a bit of a divide between people who are embracing the technology and people who are more nervous to embrace it. But I would say that it feels quite unsettled. Large organizations especially haven’t decided yet what their rules are for employees: what are we allowed to use, and how are we protecting the privacy of our most important people data, and trade secrets, which is really important? Then there’s the anxiety a bunch of people are feeling, I would say, about job replacement. I saw this research the other day by Checkr, published maybe a few weeks ago, that said something like three-quarters of people who are using ChatGPT at work are not telling their manager that they are. The fear is, if I tell my manager that I’m using it, maybe I could be replaced.

 

MELINDA: Yeah, redundant because of AI.

 

KIERAN: Yeah. But actually, I don’t know anybody, and I literally mean this. I’ve talked to hundreds of people about this. I don’t know anybody who is using ChatGPT to fully replace all their writing tasks, because the output is just not very good yet. It’s just not. I know a lot of people who are using it to get inspiration or story starters. But when you can’t tell what’s true, you can’t tell if those lawsuits are real or not real. I visited with a couple of my cousins last week, one of them is a first year in college, and one of them is a rising senior in high school, and they were talking about ChatGPT at school. The one going to college was like, what I do is I use it to help me write an outline, and then from there, I sort of fill in. I’m using it to make sure I’m like covering the right points. I was like, oh, that’s a pretty smart use case. It’s assistive. It’s not replacement. I think a lot of workers are using it in that vein, to try to get a little structure in place, or get a little information to help elaborate.

 

MELINDA: Yeah, I have heard that some copywriters and social media folks are starting to. Maybe they’re not replacing the people, but they’re having less work, and so needing fewer people, if that makes sense, because of AI.

 

KIERAN: Well, I guess we’ll see how well the content performs.

 

MELINDA: Yeah, exactly. I do know that part of the reason for the writers’ strike is because the writers want protections embedded.

 

KIERAN: Completely. Well, it gets even more complicated from a DEI standpoint. Because here’s what’s happening in the US right now. The country is getting more diverse, specifically more racially diverse and more age diverse. That is just happening. You can look at the US Census reports every time they get released, and you see more and more people who are not White; you see an aging population and people working longer. So what’s happening for most businesses right now is their customer bases are diversifying. That’s just a fact. And if your team isn’t diversifying along with your customer base, chances are you produce the facial recognition algorithm that fails for Black people, or those crash test dummies that fail women. You literally can’t serve your customers. So if the content you’re putting out there for customers is not written by a diverse team, but is written by a machine with extremely homogeneous input, you will fail your customers, and your business is going to suffer.

 

I think this is only getting more acute with the affirmative action ruling by the Supreme Court a few days ago, which will definitely impact hiring pipelines down the line around racial diversity. It’s going to be even harder to make sure you have a team that is as diverse as your inevitably diversifying customer base. So AI has the potential to really screw this up for companies on the content side, if they’re not mindful about the provenance of the content.

 

MELINDA: Yeah. So what would you recommend? Let’s start with managers first, people who are managing teams. Almost everybody on our team at one point or another has used ChatGPT. So one of the things we realized is, we have to create a policy for this real quick. Basically, our policy is: if you use it, say how you’ve used it, so that we all know, just like the judge that you mentioned earlier. So what would you recommend for managers who are grappling with all the things that we’ve just talked about, with teams, some of whom are using AI? It’s not just ChatGPT; it’s now embedded in a lot of the different products that we use every day in our workplace. So teams that are using AI, teams that are maybe using AI and not being aware of biases, and so on. What would you recommend managers be thinking about and doing?

 

KIERAN: Yeah. I think you started in a great place by getting prescriptive, like really actually putting some policy in place so people don’t have to guess. I would say, right now policy has to be a little bit adaptive, because the landscape is emerging so rapidly. You actually do want people who work with you and for you to feel empowered to tell you what’s going on, and to bring new tools with new potential to you, and not hide them. So you do need to have a conversation that’s a little bit more collaborative about policy. But ultimately, if you’re the manager, you have to set some policy for your team. 

 

When you’re setting that policy, there are a couple of questions that we really recommend that managers, and I’d say business leaders in general, ask. The first is what I said before: who made this, what is the diversity of the team that made this thing that we’re using? That is not foolproof, but it’s a pretty good shorthand for, were a diverse set of perspectives incorporated when this thing was created? So if you’re not a technology expert, there is no single question that’s going to help you filter through faster than who made this. Tell me about the team. The second question that we often like to recommend that people consider is, what was it made for? Was it purpose-built for what you’re using it for, or was it one-size-fits-all, like ChatGPT? 

 

ChatGPT, part of the reason the content is not very good is it wasn’t designed to be good at anything. It wasn’t. It’s amazing technology. It is amazing. It is truly amazing. But it wasn’t designed to be good at writing marketing copy that’s compelling. It wasn’t designed to write job descriptions. It wasn’t designed to write a poem. It wasn’t designed for these things. So what you get will always be kind of lowest common denominator. Whereas, some of the more specific tools, we certainly embraced this at Textio when we purpose-built for HR, but there are similar organizations purpose-building for other scenarios. I often say, get something that was built for your scenario, because then the data that’s in it is actually relevant to your domain, and you can be a little more confident in where it comes from. 

 

Then the third question is, we recommend that people ask vendors: what are your biases? What are they? Like, what did you build this to do? If people say we build without bias, that’s a good clue that you shouldn’t use that solution. Because they have bias; they just don’t know what it is. Those are the three questions we really recommend that managers and business leaders ask as they make policies for their teams.

 

MELINDA: Awesome. Then anything else that you would recommend for leaders of companies or HR folks that are looking more broadly at their company around AI?

 

KIERAN: The other thing I would say, besides those questions, is that your most valuable asset at your company is your people. I know that’s a cliché, but it’s actually true. Because without them, you don’t have a business. So data that is important enough to you to measure and track is important enough that you probably don’t want to share it with the broad internet. Be really, really mindful of the people data and the tools you choose to trust. If it sounds free and open source and too good to be true, you’re probably paying for it with data exposure in some way. So just be really mindful of what you use free and open source things to do. Make sure it’s stuff you’re totally happy having published on the internet, stuff you’re happy having your competitors access.

 

I saw this recently. I talked to a partner I met at an event, at a hospital system. She works in HR, and she was sharing that the whole organization ended up with a really hard policy about AI, when it turned out that doctors were putting patient notes into ChatGPT to help write full patient summaries: people’s medical conditions, really high-privacy content. Of course, if you’re the CIO of a hospital, you have to crack down on that right away. It’s like every possible organization-ending HIPAA violation. So you’ve just got to be really careful about what people use broad tools for, and the vendors you feel confident to trust with your really sensitive information, about people especially. You wince at it because you can totally see how that could happen; it’s just an efficiency aid. But wow, big problems. That’s not the same as, like, write me a tweet. It’s a little bit different. But there’s a slippery slope there.

 

MELINDA: Yeah. I mean, it’s got to be a HIPAA violation.

 

KIERAN: Yeah, right. But some of the things you might choose to put there wouldn’t be a legislative or legal violation. I probably don’t want to put my performance notes on an employee who’s struggling into a system where I don’t know who’s going to have access to the data. What if they’re a different background than me? What if there is bias in what I’m writing? What if that’s now all over the internet? There are real challenges. It may not be illegal, but that doesn’t make it advisable. So we often think about that.

 

The other thing is, a lot of these systems don’t know when to push back on prompts. One of the things Textio makes is software to help with job posts and job descriptions. That seems like it should be pretty safe for public consumption; you publish these all over the internet. But these systems happily take prompts that are very illegal. The other day, I asked one to write a job description for an HR business partner who was a devout Christian and regular churchgoer, which is totally illegal. But it happily didn’t push back on the prompt, and wrote me something all about how your Christian values had to be infused throughout your HR work. So sometimes, if you don’t write the prompts correctly or mindfully, what you get out can be problematic in all kinds of ways. So if you’re running a company and you give your employees free rein to just use it, again, it can really propagate bad decisions very quickly.
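One possible mitigation, sketched below under the assumption that you control the application layer: screen prompts for protected-characteristic requirements before they ever reach the model. The term list is illustrative and nowhere near complete; a real guardrail would need a vetted list and legal review.

```python
# Illustrative only: a tiny screen for protected-characteristic terms
# in hiring prompts. A real guardrail needs a vetted, legal-reviewed list.
PROTECTED_TERMS = {
    "christian", "muslim", "jewish", "churchgoer",
    "under 30", "young", "married", "pregnant",
}

def screen_hiring_prompt(prompt: str) -> list[str]:
    """Return the protected-characteristic terms found in the prompt."""
    lowered = prompt.lower()
    return [term for term in PROTECTED_TERMS if term in lowered]

prompt = ("Write a job description for an HR business partner "
          "who is a devout Christian and regular churchgoer.")
flags = screen_hiring_prompt(prompt)
if flags:
    print("Refusing: prompt requests protected characteristics:", flags)
```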

 

MELINDA: Yeah. Since you have built a product where you’ve kept bias in mind, kept equity in mind, are there any recommendations that you have for folks who are working on new products and designing new products for the future that may impact us in different ways?

 

KIERAN: Yeah. The first is, understand how, and who, you’re going to trust to validate the equity and bias impact of your solution. It’s impossible for a model to regulate itself; by definition, you don’t have objectivity on your own model. So think about who you’re partnering with in the ecosystem to make sure that the equity outcomes are what you intend them to be. I think we’re going to see a whole set of providers emerge here around bias and equity; Textio is one. I think we’re going to see providers emerge around IP rights, like, who owns the rights to this. We’ve all seen the musician cases, where a musician’s work has been remixed with no rights, no approval, and no ability to earn royalties on the new songs. Big problems. I think we’re going to see validators around data and privacy. But start thinking now about how you’re validating what you’re creating, and what data set you’re using. Second, start building a proprietary data set that really is built for your scenario vertically. Those are the two things I would recommend.

 

MELINDA: Awesome. Where do we go from here? Well, maybe first, let me ask: for people within organizations who care about inclusion and equity, are there things that you would recommend they do to advocate for creating more equitable workplaces, in light of what we’ve just talked about?

 

KIERAN: Yes. I think the most important lever that you have in an organization is to make sure that your systems of accountability, literally what gets people hired, promoted, and fired, encode the things you care about. So if you think it’s important that a manager is offering equitable feedback to all their employees, that White people and Black people and Asian people and Hispanic people are getting the same amount, frequency, and cadence of feedback, don’t promote a manager who’s not doing that. Measure it, and make sure that the people who are not doing that don’t get a chance to grow in your organization. Those are accountability systems. If you think it’s critical that pay equity is a foundational principle of your organization, don’t promote an executive who doesn’t have pay equity in their organization. By the way, one of the reasons I love feedback equity is that it’s a leading indicator. It’s a chance to get ahead of the pay inequities before they come to pass.

 

But there is no substitute for the accountability systems that get people hired, promoted, and fired. Whether you use AI or anything else, if you don’t have those accountability systems in place, all of your DEI efforts will fail. They just will fail. So we always start with our partners by asking: what are the things you actually care about, and are you willing to fire people if they don’t do those things? Then we can help you measure them and improve them, but you’ve got to have buy-in at the accountability level. I think that’s extra true in the era of AI-driven tool use.

 

MELINDA: Yeah. So if you’re not able to make decisions about who gets hired or promoted, are there things that you would suggest folks who don’t have that power do?

 

KIERAN: Well, I think advocate for that. I think advocate for that. You can certainly, if you’re a line manager, make sure you’re doing the right things by your own team here. But I also think, even right now in this market, which has been challenging for tech in the last year especially, I still think labor is going to win. I think it’s a labor-empowered moment that we’re at culturally, and I think you have a lot of ability to influence your leadership team to measure the right things. There are enough leadership teams now who want to measure the right things that, if yours doesn’t, you can probably go work somewhere that does; maybe not tomorrow, but next year. So know what you care about, and actually be willing to advocate hard, and organize; be willing to advocate hard with your co-workers to get the right accountabilities in place, and ask your leaders for honest conversations about those things. There are plenty of leaders who will tell you. Elon Musk, fun fact, my next-door neighbor in college, will tell you he doesn’t care. He doesn’t care; that’s just not a priority. Okay, well, I would choose not to work in his organization then, good to know. But I think you should force your leaders into honest conversations about what their values are.

 

MELINDA: Yeah. And if you leave, say something in the exit interview as well, so you’re planting seeds for the future.

 

KIERAN: Absolutely. I think we are at a moment of some sea change, in a bunch of organizations that are trying to get the right accountability systems in place. The more we can make this connection, that your customers are diversifying and your business will not succeed if your team doesn’t keep pace, the more it creates business incentives for leaders to care about diversity progress. Because if they don’t, they won’t have business progress. That’s just becoming clearer and clearer with each passing census right now.

 

MELINDA: Yeah. So looking to the future, I’m sure you get asked this question from time to time: as you look to a future of work that includes AI, what are some things you’re thinking about?

 

KIERAN: There’s so much here. Obviously we’ve spent a lot of time in this discussion talking about bias. One of the things that is most fascinating in my life is, I have three daughters: a 14-, a 13-, and a 12-year-old. So they’re coming of age in this environment, and they have three very different orientations on it. Remember, they’ve grown up with Textio, so they know more than average.

 

But a year ago, before ChatGPT, one of them was like: “Hey, summer project, why don’t we write a bot to do your workplace one-on-ones?” I was like, I would never do that. She was like: “No, you could automate a whole bunch of it, and then just have time for the high-value conversations.” That’s her starting point on how work should work, which is really different. Whereas our oldest, who is a creative writer, we actually had to pause a dinner table conversation. Because when ChatGPT was launched, and again, she has grown up with Textio, there were tears for her. She’s like: “No, that system will never write sci-fi like I write sci-fi.” This is identity for her. But the same kid the next day is happy to use image-generation AI tools to create illustrations for her story. You ask her about it, and she’s like: “Yeah, I see what you’re saying. If I publish it broadly, I’ll hire a real artist. But for now, this is good enough for me to have ideas.”

 

So their starting point is totally different. Their starting point is that AI is going to be a thing that they use to get their work done. So I don’t think there’s a lot of point in pretending that’s not the case, that’s just the case. So it’s on us to think about how to harness that in a way that creates economically beneficial and fair outcomes for us as a society. Because the kids are going to win, they always do; kids are always right, they’re going to win. They’re going to take over the workforce in 10 years, and then their attitudes will be the attitudes. So we’ve got to figure out how to get the right policies in place now so that it goes in a direction that we want. So I’m spending a lot of time on that as a professional and as a parent right now, and I feel fortunate in my professional life that I have kids this age, because it’s super-informative.

 

MELINDA: Yeah, I can imagine. So what action would you like people to take coming away from our conversation today?

 

KIERAN: There’s a couple of things. Read up. So maybe when we send out the podcast, when I share it on social too, I’m happy to share links to some of our research on it. If you go to Explore.Textio.com, you will find a bunch of research and data on bias and AI that you can take back to your organization. So that’d probably be my top ask for people, is get curious and learn. Then second, really start thinking about what it means for your team and what action you want to take with your own organization.

 

MELINDA: I love it. We’ll put that link, as well as maybe a couple of other articles, in our show notes. You can go to ally.cc, or in whatever podcast platform you’re using, or on YouTube, you’ll also see it in the show notes. Thank you. And where can people learn more about you?

 

KIERAN: You can follow me on Twitter at Kieran Snyder. I’m prolific on LinkedIn, so you can follow me on LinkedIn as well. Or you can go to Textio.com if you want to learn more about the work that I’m doing at Textio and that our team is creating.

 

MELINDA: Awesome, thank you. Thank you for this conversation, and thank you for doing all the work you do.

 

KIERAN: Likewise. Thanks for having me on the show, this was really a fun discussion. I really appreciate the work you’re doing too.

 

MELINDA: Awesome, yeah. To all of you listening or watching, please do take action. We will be taking a pause. So I encourage you to go back and check out some of our previous episodes you may have missed while we’re pausing to regenerate. Pick up a copy of my book, How to Be an Ally, great summer and fall reading. Enjoy the break, and we will see you soon.

 

Thank you for being part of our community. You’ll find the show notes and a transcript of this episode at ally.cc. There you can also sign up for our weekly newsletter with additional tips. This show is produced by Empovia, a trusted learning and development partner, offering training, coaching, and a new e-learning platform, with on-demand courses focused on Diversity, Equity, and Inclusion. You can learn more at Empovia.com. 

 

Allyship is empathy in action. So what action will you take today?