Lydia X. Z. Brown is an advocate, organizer, attorney, strategist, and writer whose work focuses on interpersonal and state violence against disabled people at the intersections of race, class, gender, sexuality, faith, language, and nation. Lydia is Policy Counsel for Privacy & Data at the Center for Democracy & Technology, focused on algorithmic discrimination and disability; Director of Policy, Advocacy, & External Affairs at the Autistic Women & Nonbinary Network; and founding executive director of the Autistic People of Color Fund, a project of collective care, redistributive justice, and mutual aid. Lydia is an adjunct lecturer in the Women’s and Gender Studies Program and the Disability Studies Program at Georgetown University, as well as the Self-Advocacy Discipline Coordinator for the Leadership Education in Neurodevelopmental Disabilities Fellowship program. They are also an adjunct professorial lecturer in American Studies in the Department of Critical Race, Gender, and Cultural Studies at American University. They are co-president of the Disability Rights Bar Association, a commissioner on the American Bar Association’s Commission on Disability Rights, and Disability Justice Committee representative on the National Lawyers Guild board. Lydia is currently creating the Disability Justice Wisdom Tarot. Often, their most important work has no title, job description, or funding, and probably never will.
The Impact Of Surveillance Tech On Marginalized Populations With Lydia X. Z. Brown
In Episode 92, Lydia X. Z. Brown, Policy Counsel for Privacy & Data at the Center for Democracy & Technology, joins Melinda in an enlightening discussion around the impact of surveillance tech on marginalized populations. They address the importance of understanding the threats of surveillance in our daily lives brought on by algorithmic technologies used in education, policing, healthcare, and the workplace, and they discuss how this tech can be disproportionately damaging to people of color and people with disabilities. Lydia also shares what actions are needed to protect health data following the overturning of Roe v. Wade and how individuals and organizations should approach data privacy to protect everyone’s rights and advocate for marginalized communities who are harmed by surveillance technologies.
Additional Resources
- Learn more about Lydia X. Z. Brown
- Learn more about Lydia’s work at the Center for Democracy & Technology
- Read CDT’s report, “Ableism And Disability Discrimination In New Surveillance Technologies: How New Surveillance Technologies in Education, Policing, Health Care, and the Workplace disproportionately harm disabled people”
- Read CDT’s report, “Surveillance Tech Discriminates Against Disabled People”
- Read CDT’s report, “Following the Overturning of Roe v Wade, Action is Needed to Protect Health Data”
- Read CDT’s report, “Warning: Bossware May Be Hazardous to Your Health”
This videocast is made accessible thanks to Interpreter-Now. Learn more about our show sponsor Interpreter-Now at www.interpreter-now.com.
Subscribe To The Show
Don’t miss an episode! Subscribe on your fav app to catch our weekly episodes.
Accessibility: The show is available on YouTube with captions and ASL interpretation. Transcripts of each episode are available by clicking on the episode titles below.
Learn more about the host and creator of Leading With Empathy & Allyship, Melinda Briana Epler.
Transcript
MELINDA: Welcome to Leading With Empathy & Allyship, where we have deep real conversations to build empathy for one another, and to take action to be more inclusive, and to lead the change in our workplaces and communities.
I’m Melinda Briana Epler, founder and CEO of Change Catalyst and author of How to Be an Ally. I’m a Diversity, Equity, and Inclusion speaker, advocate, and advisor. You can learn more about my work and sign up to join us for a live recording at ally.cc.
All right. Let’s dive in.
Welcome, everyone. Today our guest is Lydia X. Z. Brown, Policy Counsel for Privacy and Data at the Center for Democracy & Technology. We’ll be discussing surveillance tech in education, policing, healthcare, and the workplace, and how these technologies can specifically harm disabled people across multiple intersections, including race. We’ll also talk about how these same technologies can be a threat and cause harm following the overturning of Roe versus Wade. We’ll talk about what these technologies are, who they impact and how, and what we can do as colleagues, managers, and humans wanting to be better allies.
Welcome, Lydia.
LYDIA: Thank you so much for inviting me, Melinda. I’m excited for this conversation.
MELINDA: Yeah, me too. Lydia, can you please share with us a bit about you, where you grew up, how you ended up coming to do the work that you do today?
LYDIA: I grew up in Massachusetts, in a small town outside of Boston that used to be 97% White. Then when my sister and I came to the town, as I joke with people, the two of us managed to decrease the percentage of White people in the town, and now where I grew up has diversified so that White people comprise 91% of the population. Growing up as a gender non-conforming person of color who was perceived as neurodivergent, even before I was officially diagnosed as such, I always knew that I was different from my peers, and different in ways that, according to society, meant that there was something wrong with me, that I was the problem. That I was a problem that was supposed to be fixed somehow, either by making myself small and trying to blend in and assimilate into dominant cultures, or by being somehow disappeared or silenced. But I don’t listen to those kinds of expectations. I never really did.
As I grew older, I developed a very, very keen sense, not just of right and wrong, but an unwavering focus on justice as a principle. I always had an innate sense, an innate conviction, that things in the world we live in are often wrong. That things in the world we live in are often profoundly damaging, dangerous, or even deadly for people treated unjustly by their neighbors, by their communities, by their government, or by society. As I grew older and ended up in high school meeting young activists and older adults who’d been where I was at the time, I learned that there were whole movements and communities of people who were dedicated to fighting injustice, to challenging oppression.
I believed then, as I do now, that everybody has to use whatever resources we have available to us to challenge oppression in all of its forms. But how I’ve understood what that means has changed over time. In high school, I thought a lot about the violence and harms of our government in response to the terrorist attacks of September 11th. I thought a lot about the violence of police, state, local, and federal law enforcement. I thought a lot about the horrors and the devastation of war, all of which are areas where many advocates are working on the ground every single day to challenge those types of violence. Where I am now as a community organizer and advocate and a lawyer, I focus on a range of issues involving different forms of state and interpersonal violence that harm marginalized people, and in particular, disabled people who are multiply marginalized.
That focus, and the areas I’ve worked on within that realm, have changed over the years. But that theme has run through my work for 15 years now, ranging from situations where I advocated for students and their families, where disabled students in elementary and middle school were arrested and charged as adults because of disability discrimination, even as a response to disabled students acting out because they were overwhelmed, because they were physically attacked, restrained, and held to the floor by adults who were multiple times their size. That also includes work that I do now to address harms that happen at scale, when systems, policies, and practices disenfranchise disabled people, exploit us, deprive us of access to necessary resources, burden us, and even target us for violence or death.
MELINDA: Amazing! One of the things I’m noticing is you didn’t mention all the different things that you do, which I think is really important for people to know. That you’re even now tackling the issues that you’re talking about from multiple different angles, doing multiple different things with multiple different hats. Can you just say a little bit about all the work that you do?
LYDIA: Sure, I’d be happy to. But actually, you pointing that out reminded me that I frequently do not introduce myself in terms of the official or formal roles that I hold, or my organizational affiliations. Part of that, I think, is very much a resistance to the expectation, rooted in ableism, classism, and racism, that our worth, our value, our credibility is rooted in what our institutional affiliation is, what our job title is, where we are in a social or economic hierarchy. I very much reject the notion that I should be valued or afforded respect based upon my connection to formalized recognition of my labor. That my job title means I deserve respect, whereas somebody who has been struggling to find employment does not deserve respect. Of course you know this. But in so many subtle ways societally, we reinforce the notion that your job title, your degrees, your credentials, your current position or affiliation determine what level of respect you should be afforded. Or that that’s the reason why somebody should be deferred to, why someone should be considered an expert.
Whereas for me, I think often that my expertise draws first and foremost from community experience and transmission of knowledge. That a lot of what I do, and a lot of the education I have, didn’t come from classrooms or textbooks. It came from mentors, who were community organizers and advocates in a range of spaces, whether working on policy, or working at the grassroots, doing mutual aid work, doing work that sometimes has been criminalized, and other times doing work that is completely within the realm of legitimized legal and political institutions. That kind of education can’t be captured in the degrees that I hold.
But you are right that I do work in a lot of different spaces, and that too is not often typical, except it is for those of us who are working on the frontlines as advocates. Many of us, by necessity, are working across spaces and across contexts. My work has found me working as a community organizer, and also as an adjunct professor, and also as a practicing lawyer at times. Currently, I do work as a policy expert, and I also do work in a range of ways to support other disabled people. I’m engaged in cultural work that isn’t necessarily as easily commodified, and that work has spanned multiple organizations, current and past, and multiple educational institutions. Oftentimes it looks like a formal position or a project that I am involved with, and at other times, it is impossible to describe the work that I’m doing within the bounds of a project as might be recognized within academia or nonprofits.
MELINDA: Fantastic, thank you for that! I was going to add, and then I realized you said exactly what needed to be said. So I want to talk specifically about some of the work that you’ve done recently around AI surveillance. You released a report called Ableism And Disability Discrimination In New Surveillance Technologies, as well as the very timely article, Following the Overturning of Roe v. Wade, Action is Needed to Protect Health Data, and we’ll share the links to both of those for our audience in our show notes. Can we talk first about how your work around surveillance is important to all of the activist and advocacy work that you do? Why is it so important to focus on surveillance?
LYDIA: Surveillance is ubiquitous in every aspect of modern life. Some of that surveillance, people might be forgiven for mistakenly thinking of as a little creepy but functionally benign. Think about when you have had a casual conversation with a friend and one of you casually mentions Dolly Parton; it just comes up in conversation. Then a couple of days later, you start noticing ads for Dolly Parton concerts and memorabilia popping up on every app you’re visiting, on your browser, on your phone, on your computer. You didn’t even search for anything related to Dolly Parton; it was just mentioned casually in conversation as a topic. You weren’t even planning to go to one of her concerts, you or a friend just happened to mention her name, and now you’re seeing all these ads.
There are few of us living in the 21st century who haven’t had an experience like that. If you’re anything like me and most of my friends, your reaction to that was probably: that’s kind of creepy, I feel like someone’s watching me or listening to me. The reality is both a lot less scary and a lot more insidious than we might think, at the same time. Because your phone might not be using its microphone to listen to your conversations, and in fact, it probably isn’t. But even if you in this scenario hadn’t searched for Dolly Parton’s name anywhere on your devices, your friend might have, or somebody else in the household might have. Someone near you who hangs out with your social group might have looked up something related to Dolly Parton because they heard her name mentioned.
So a lot of the apps and websites that we use rely on increasingly sophisticated and undisclosed algorithms to attempt to create profiles of their users, and some of those profiles connect geolocation data with our searches and with the searches and profiles of people who might be in physical proximity to us. So even if you hadn’t looked up Dolly Parton in that scenario but someone else near you had, your devices were in close proximity around the same time, and perhaps have been in close proximity frequently, suggesting some form of social connection between you and the person who actually did look up Dolly Parton. So now you’re the one getting served ads related to her concerts and her memorabilia. Sidenote: I kind of want to know when her next concerts are, and I am certain that we will get an ad at some point soon telling us when those next concerts will be, just because I’ve used that as an example.
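To make the example above concrete, here is a minimal, hypothetical sketch of how an ad system might link devices that are frequently co-located and let one device’s searches shape the ads another device sees. Every data structure, threshold, and function name here is an illustrative assumption, not a description of any real ad platform’s logic.

```python
# Hypothetical sketch of proximity-based ad profiling: two devices that show up
# in the same place at the same time repeatedly get linked, and one device's
# searches start shaping the ads served to the other.
from collections import defaultdict
from itertools import combinations

# (device_id, time_bucket, location_cell) pings passively collected by apps
location_pings = [
    ("device_A", "2022-07-01T19", "cell_42"),
    ("device_B", "2022-07-01T19", "cell_42"),
    ("device_A", "2022-07-03T20", "cell_42"),
    ("device_B", "2022-07-03T20", "cell_42"),
]

# Search history collected per device (device_A never searched for Dolly Parton)
searches = {"device_B": ["dolly parton tour dates"], "device_A": []}

def infer_social_links(pings, min_colocations=2):
    """Link devices that appear in the same place at the same time repeatedly."""
    present = defaultdict(set)        # (time, cell) -> devices seen there
    for device, time, cell in pings:
        present[(time, cell)].add(device)
    counts = defaultdict(int)         # (device_a, device_b) -> co-location count
    for devices in present.values():
        for a, b in combinations(sorted(devices), 2):
            counts[(a, b)] += 1
    return {pair for pair, n in counts.items() if n >= min_colocations}

def build_ad_profile(device, links, searches):
    """A device inherits inferred interests from the devices it is linked to."""
    interests = set(searches.get(device, []))
    for a, b in links:
        if device == a:
            interests.update(searches.get(b, []))
        elif device == b:
            interests.update(searches.get(a, []))
    return interests

links = infer_social_links(location_pings)
# device_A never searched for Dolly Parton, but now gets served those ads anyway
print(build_ad_profile("device_A", links, searches))
```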
But here’s the deal. Most surveillance is a lot more sinister and dangerous than some companies that sell concert tickets trying to profit by getting you to impulse-buy a front row seat ticket for Dolly Parton’s concert. Surveillance tools and technology are now commonly deployed by our employers, by our schools in K to 12 and higher ed, in our communities and neighborhoods, by police and other state and local government agencies, and even by private companies that make critical decisions like access to housing and credit.
Those surveillance technologies, like all types of algorithmic technologies, have to work by using certain data, whether hypothetical data sets, training data sets, or pre-existing data, in order to make assessments, predictions, or decisions about whatever they’re trying to decide. Whether that’s software trying to guess which students taking a test remotely are cheating, which people on the premises of a school are not allowed to be there, which tenants applying for rental housing are worthy, which neighborhoods police should be focusing their time on and perhaps targeting more people in for arrest and charges, or which people within a workplace or school ecosystem are considered to be possible threats to themselves or others. The data that the algorithmic technologies use to make those assessments, predictions, and decisions has to be based on something. In the world where we live, where we know and can witness and experience the real-time impact of systematic racism, misogyny, religious discrimination, class discrimination, and ableism, among other forms of systematic discrimination and oppression, those values and past patterns of discrimination and marginalization will inevitably shape what the algorithms are taught to do, reproducing, perpetuating, exacerbating, and ultimately accelerating the harms of our existing social policies and practices.
MELINDA: So can you give some examples of how those technologies potentially impact people with disabilities or disabled people?
LYDIA: Let’s take the example of automated test proctoring software. As most of us know, whether we are students ourselves, have children in school, or know somebody who is, over the last two years virtually everybody in an educational programme of some kind has had to go through online learning for at least some, if not most, of that time. When schools in K to 12 and higher ed made the shift to online learning, the use of remote test proctoring software exponentially increased. Already in use before the COVID-19 pandemic broke out in 2020, remote proctoring software got an enormous boost because of the shift, in many cases, to mandatory online learning, which was necessary as a life-saving public health measure. But when classes went remote, so too did the test-taking, whether a seventh grader’s science test, a college freshman’s survey European history course final exam, or the bar exam for law graduates, and everything in between.
Now there are two types of remote proctoring software, both of which I personally find quite creepy. The first one doesn’t necessarily involve an AI programme or artificial intelligence, but requires a remote proctor who is watching you and listening to you through the camera and microphone on your computer, to make sure that you are taking the test and not cheating, using unauthorized material, or doing anything else that would be breaking the rules.
The second kind, however, is almost more insidious in its creep factor. The second type of remote proctoring software, automated software, operates by using an algorithm to assess whether the way a test-taker exists in the test-taking environment is potentially indicative of unauthorized activity. In real-people language, what that means is that if you, the test-taker, are taking some exam on your computer, the software programme might be using the webcam to monitor your surroundings and your bodily movements. It might monitor your eye gaze. It might monitor your facial expressions. It might use the microphone to listen to the noise in the room or space where you are taking the test. It might be using recording software to record what is on your screen, what you have up on the screen. It might record your keystrokes. It might record your mouse movements. It might record how long it takes you to deal with any of the questions. So by definition, because it’s an automated programme and there’s not a human watching you in this scenario, the programme is designed to detect abnormalities. Now, what is defined or recognized as an abnormality must be based upon an assumption about how a “normal” person takes a test.
So by definition, disabled people with a range of disabilities fall outside those norms. The test-taking software can, and already has, flagged as potentially suspicious test-takers who have Tourette’s syndrome and have vocal or motor tics. Test-takers who have personal care attendants who need to assist them physically. Test-takers who are poor and don’t have a private or quiet space to take their test. Test-takers who are blind or autistic and have atypical eye gaze movements. Test-takers with cerebral palsy, or any form of paraplegia, quadriplegia, or tetraplegia, who have atypical body movements. Test-takers who might have ADHD, or anxiety, or obsessive compulsive disorder, who might be mumbling or humming under their breath. Test-takers with Crohn’s disease, colitis, IBD, or IBS, who need to use the bathroom more frequently. Test-takers who need to get up and walk around the room. Test-takers who have underlying anxiety, depression, post-traumatic stress disorder, or panic disorders, who already enter the testing environment with elevated anxiety that is only exacerbated by the knowledge of the presence of the automated proctoring software.
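A rough, hypothetical illustration of the pattern described above: automated proctoring defines “suspicious” as deviation from an assumed norm, so test-takers whose bodies or circumstances differ from that norm get flagged without any cheating taking place. The features, weights, and threshold below are invented for illustration; real products differ and their logic is generally undisclosed.

```python
# Hypothetical anomaly-flagging logic: suspicion is measured as weighted
# deviation from an assumed "normal" test-taker, so tics and bathroom breaks
# raise the score even though nothing dishonest is happening.
from dataclasses import dataclass

@dataclass
class TestSession:
    name: str
    gaze_away_seconds: float   # time spent not looking at the screen
    audio_events: int          # sounds picked up by the microphone
    times_left_seat: int       # detected absences from the webcam frame

# Baseline built from an assumed "typical" test-taker
BASELINE = TestSession("baseline", gaze_away_seconds=30, audio_events=2, times_left_seat=0)
WEIGHTS = {"gaze_away_seconds": 0.02, "audio_events": 0.5, "times_left_seat": 2.0}
FLAG_THRESHOLD = 3.0

def suspicion_score(session: TestSession) -> float:
    """Sum of weighted deviations above the assumed norm."""
    return sum(
        weight * max(0.0, getattr(session, field) - getattr(BASELINE, field))
        for field, weight in WEIGHTS.items()
    )

sessions = [
    TestSession("non-disabled test-taker", 35, 3, 0),
    TestSession("test-taker with vocal tics", 40, 25, 0),       # flagged for tics
    TestSession("test-taker with Crohn's disease", 30, 2, 2),   # flagged for bathroom breaks
]
for session in sessions:
    score = suspicion_score(session)
    print(f"{session.name}: score={score:.1f} flagged={score >= FLAG_THRESHOLD}")
```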
Now this places disabled test-takers in a bind. In order to avoid the possibility of being automatically adjudicated as cheating and then having to appeal that determination, a disabled test-taker who knows that automated software might be used has the burden of affirmatively and proactively disclosing and requesting accommodations; a situation that, for many disabled people, is not safe or reliable, or even necessarily going to result in approval of accommodations. Or the disabled test-taker—whether out of ignorance of the software being used, how it functions, or the mere fact of its existence, or by choosing not to disclose or not to request accommodations, or simply not knowing which accommodations to request, if any—forgoes requesting the accommodations, and risks heightened scrutiny and potential investigation into their conduct and the integrity of their test, simply by virtue of being disabled; a cloud of suspicion that non-disabled test-takers do not have to contend with.
Now all of this doesn’t even include the ways in which these concerns are amplified by the fact that disabled students also belong to a whole host of other groups, both marginalized and privileged, in society. Some automated test-taking programmes even use facial detection and facial recognition software to determine if the person taking the test is the correct person to be taking it. Researchers and advocates have already shown that facial detection software is substantially less accurate at recognizing darker-skinned people, and particularly women of color. Facial recognition programmes also tend to be less accurate for women in general, but especially for women of color, even compared to White women, and for transgender people of all genders. Then there are disabled people who have disabilities that affect their facial movements and facial appearance. They might have craniofacial disabilities, they might have absent body parts, they might have a particular experience of being born blind, or other aspects of their disabilities that impact how their face and their body are detected by that software.
So even from the outset, automated proctoring software assumes that all people taking tests can be assessed in the same way. Because it assumes that the default is a cisgender, White, masculine-presenting, able-bodied, neurotypical person, and that all others should bear the burden of requesting some accommodation as to how they are perceived and responded to by the software. Now this might strike people as: well, that seems annoying, but is it really that big of a deal? Well, if your graduation, and therefore your job prospects, are at stake, then yes, those tests do matter a great deal indeed. Those software inequities, inaccuracies, and other unreliabilities help to illustrate why reliance on algorithmic technologies cannot, in and of itself, occur outside of the systems and values that we already operate with. They don’t necessarily eliminate bias. If bias already exists, the systems will amplify that bias at scale.
MELINDA: Yeah, I’m sure listeners and viewers can see and understand how this applies to similar technologies in our criminal justice system. There are two things that I think are really important to talk about. One is employment, because so many folks listening to or watching our show are thinking about how this can apply to their work. So maybe we could talk a little bit about employment, and then also reserve time specifically to talk about healthcare and the Dobbs versus Jackson Women’s Health Organization ruling that just came down, too.
LYDIA: Oh man, we could spend hours and hours just talking about all of this! But we don’t have hours, and we promise we’re not going to subject you to four hours of unedited time, we won’t do it. So our report talks about surveillance at work in two contexts primarily. But you should know, and many of you probably already do, that algorithmic technologies are routinely deployed now by both public and private employers, large and small, in a range of areas, from recruitment, to hiring, to worker management, to worker surveillance. The report we recently published addressed two particular ways in which algorithmic technologies impact workers in a range of employment contexts, one of which is more obviously problematic and potentially harmful on its face, and the other of which is ostensibly beneficial or meant to promote a positive outcome for the workers and not just the employers.
Now, the first one that we talk about is algorithmic management and surveillance technologies. We also explore it in a report that was mostly written by one of my colleagues, Matthew Scherer, about what he calls the rise of bossware, i.e. software programmes whose sole purpose is to monitor the work activities of employees in a range of jobs, blue collar and white collar. Jobs performed onsite, like at a warehouse or an office. Jobs performed on the road, like for gig workers doing deliveries. Or jobs that allow for remote work from home. That worker surveillance technology is now pervasive and nearly ubiquitous, and it’s growing. Whether those are technologies that monitor a delivery driver’s vehicle, that try to optimize the routes that an Uber driver is taking, that attempt to incentivize warehouse workers at Amazon or other large companies to pack and load and ship as quickly as possible while taking as few breaks and moments of rest or pause as possible. Or software that watches a worker who is working remotely from a home office to determine when they get up from their computer and walk out of the room, ultimately penalizing workers for taking bathroom breaks or walking outside, and deducting that time, in some cases, from their pay on the clock.
These technologies, like the example I gave about test-taking technology, rely upon an assumption, not just that all workers operate in a similar manner, but that all workers fundamentally do not have human needs. There’s a deep-seated ableism embedded in these technologies. The idea that we don’t need rest, we don’t need breaks. That a better worker is one that works harder and faster, and that any moment of rest—and listen to the language that we often use, it’s called stolen rest—stolen rest is somehow thieving from a company’s time, from time that is owed to the company. There are whole histories and legacies of enslavement, servitude, and class oppression, wrapped up in the idea that a worker’s time belongs to the employer, and that rest is stolen.
So if an ideal worker, in our current system, is expected to minimize, reduce, or even eliminate the possibility of rest, then the software programme is doing its job by creating routes, pathways, or task lists that prevent the worker from being able to take a pause, to slow down, to pace the work in a way that works for them and their body. So what this results in, particularly given the lack of really clear guidelines from the relevant regulatory authorities about algorithmic technologies at work, are software programmes that, again, employers in a wide range of contexts are using, that functionally incentivize or coerce workers into working longer and harder with fewer breaks, which ultimately increases the risk of workplace injuries and illnesses, including those that can turn into chronic long-term disabilities and chronic illnesses, both physical and mental.
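As a sketch of the bossware pattern described above, the snippet below treats any gap in activity beyond a fixed threshold as deductible “idle” time, with no way to distinguish a bathroom break or a moment of rest from anything else. The log format and threshold are assumptions made for illustration, not any real product’s behavior.

```python
# Hypothetical "bossware" idle-time deduction: every gap between activity
# events longer than a fixed threshold is subtracted from paid time, treating
# rest, bathroom breaks, and walks identically.
from datetime import datetime, timedelta

IDLE_THRESHOLD = timedelta(minutes=5)   # anything longer counts against the worker

def deducted_idle_time(activity_timestamps):
    """Sum the gaps between activity events that exceed the idle threshold."""
    deducted = timedelta()
    for previous, current in zip(activity_timestamps, activity_timestamps[1:]):
        gap = current - previous
        if gap > IDLE_THRESHOLD:
            deducted += gap             # bathroom break, rest, or a walk: all penalized alike
    return deducted

activity_log = [
    datetime(2022, 7, 1, 9, 0),
    datetime(2022, 7, 1, 9, 4),
    datetime(2022, 7, 1, 9, 16),   # a 12-minute break gets deducted
    datetime(2022, 7, 1, 9, 18),
]
print(deducted_idle_time(activity_log))  # 0:12:00 deducted from paid time
```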
MELINDA: Thinking about that scenario in particular, are there ways that you’ve thought about how colleagues or managers can approach companies that are using this software, or their co-workers, with empathy and allyship? Are there any ways that we can activate our allyship in support of people with disabilities, and people of color in particular, who are most marginalized or most likely to be harmed by this software and surveillance?
LYDIA: One thing that more and more employers really need to consider is questioning what the purchase or acquisition of a particular software programme is for, and asking the people who have the most to lose and the most at risk from deployment of a new software programme. Now, that might not mean the employees of the company itself, because employees might not feel safe disclosing what their concerns might be. But it could mean asking experts and community leaders who represent affected communities: Have you heard anything about this type of technology? Have you ever heard of this software programme? Do you have any concerns about it? Do you think there are better alternatives to it, whether other programmes to use or not using software at all? Are there different practices or policies that we could be implementing? Ask those questions and truly be prepared to listen to the answers.
Now, if you are in the process of procuring particular software programmes for use in the workplace, then it is incumbent on you to ask the developer and the distributor of the software programme how it works, who was consulted in its design or development, how it has been audited, and whether any of that auditing has taken place externally or by third parties, or whether it has been entirely internal. It is also incumbent upon you to perform an audit of the software that you use, not just in-house, but externally, with experts who represent, and can therefore accurately relay, feedback from marginalized communities. Are you able to determine, on an ongoing basis, what kinds of impact your software programmes might have upon people from marginalized communities within your company or organization, as well as those your company or organization is serving, your clients or customers?
Those questions, we should be clear, won’t necessarily guarantee that you will not purchase and use a software programme that has deleterious or harmful effects. But they can curtail the likelihood of inadvertently falling for the marketing propaganda that a lot of programmes will use, whether, again, in an automated test proctoring context or in a hiring AI context, that says these types of programmes help make work more efficient and will reduce human bias by being more objective because they’re computerized. Remember, computer programmes don’t exist in isolation; they exist in social and cultural context.
So you have a responsibility to ensure that the software programmes you are acquiring not only meet policy objectives or end outcomes that are desirable for your organization or your company’s purported values, but also that they actually operate in such a way as to not contravene those stated values. If one of those values is promoting accessibility, but you have a software programme that penalizes workers for getting up to use the bathroom too frequently, then you are not, in fact, prioritizing accessibility. If you have as a value embracing people from a range of racial, ethnic, cultural, religious, or caste backgrounds, but you have a software programme in place that screens out resumes based upon assumptions about what makes someone a successful candidate or a successful employee, you’re probably going to be screening out people who come from caste-oppressed or minoritized racial, ethnic, or cultural groups, or marginalized religious groups. Because people from those communities are going to be less likely to have had access to the same markers of conventional success as people who had more access to privilege and power in society. So you might not know what that programme is doing without both carefully questioning its developers and distributors, and also asking for third-party auditing from people who come from directly impacted communities.
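To illustrate the screening problem described above, here is a minimal, hypothetical example of a résumé filter built on proxies for conventional success, an “elite” school and an unbroken employment history, which screens out a more qualified candidate from a less privileged background. The rules and candidate data are invented; no real hiring product is being described.

```python
# Hypothetical résumé screen: "success" is defined by markers that track
# privilege, so the filter reproduces past hiring patterns at scale.
ELITE_SCHOOLS = {"Prestige University"}   # proxy inferred from past hires

def passes_screen(candidate):
    """Screen on proxies for conventional success rather than on skills."""
    return (
        candidate["school"] in ELITE_SCHOOLS
        and candidate["employment_gap_months"] == 0
    )

candidates = [
    {"name": "A", "school": "Prestige University", "employment_gap_months": 0,
     "skills": {"python"}},
    # Qualified, but took time off for caregiving and attended a state school
    {"name": "B", "school": "State College", "employment_gap_months": 8,
     "skills": {"python", "sql", "accessibility auditing"}},
]
for candidate in candidates:
    print(candidate["name"], "advances" if passes_screen(candidate) else "screened out")
```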
MELINDA: Thank you for that. Thank you for sharing; there are so many things in there. I think that, in general, as we move through this world, through our lives and through our work, really paying attention to how surveillance is affecting us and affecting our colleagues is so important, and we should continue to ask those questions when we vote and as we go through our lives.
I want to talk specifically about what happened after Roe vs. Wade was overturned with the Dobbs versus Jackson Women’s Health Organization ruling: more than half the population lost our rights to privacy, and our uteruses are now governed by our governments. Many are worried that it doesn’t stop here, that this ruling also jeopardizes several other rights that we hold dear: the right to contraception, the right to marry or even have sex with people of the same sex or gender, the right to marry people of different races, and potentially many other rights. I know you’ve done some thinking around this. How does surveillance matter here, and is there anything that we can do for each other, to support each other and our rights as allies?
LYDIA: Surveillance is ubiquitous in every part of modern life. Data on us, all of us, is constantly being collected, passively and directly, by a wide swath of private corporations and government entities. Even the data collected by private corporations can be, and often is, turned over to law enforcement and prosecutors, sometimes even in the absence of a subpoena or a warrant. Many people don’t realize that a wide range of websites, social media companies, and apps will turn over data if a prosecutor or law enforcement officer simply asks for it, without needing to be served with a warrant or subpoena in order to turn over that data. That should be scary to many people for a range of reasons.
The Dobbs decision is only the latest in a long string of reasons why advocates on the front lines have long talked about the importance of privacy and security practices and policies. People who protest racial injustice, people who are concerned about supporting immigrants and refugees, people who use drugs, or do sex work, have all faced the terrifying ramifications of a surveillance regime, in which huge amounts of often deeply intrusive personal data—down to information about our health, predictions about our mental health, information about our exact geolocation, the contents of our messages and web searches—is already collected and made available, for profit and for prosecution.
MELINDA: Are there any ways that we can protect that privacy, protect our rights and the rights of our colleagues and team members, whether as managers, companies, or employees? Are there any ways that we can really activate our own allyship and advocacy to support that right to privacy?
LYDIA: There are many guides out there, from a range of organizations and grassroots collectives, for individual people about how to better protect your own digital privacy, as well as suggestions and guides for organizations on their own digital security. From the company’s perspective, for those of you who might be working for companies that collect or hold personal information about people—again, whether that’s a marketing practice where you are passively collecting data, or whether clients or customers provide you with data that you then hold as part of your regular practices—it is incumbent upon you not only to adopt really strong encryption, but also to be clear about how you will, going forward if you haven’t already, institute strong privacy protections for your clients and customers, and for your employees. Do you have to collect certain data, or can you refrain from collecting certain information and still carry out your work?
If you do collect data that could be used to extrapolate information about a person’s reproductive healthcare, or their status as potentially pregnant or pregnant, or what might have happened in the course of a pregnancy, then what are you doing to minimize that data and its collection? What are you doing to delete it as soon as it is no longer needed? What are you doing to enable the customer or the client to have full control, or as much control as they can have, over that data? What are you doing to protect that data against unwanted and unconsented transfer or sharing? That includes making a decision as a company about what you will do if someone comes asking for that data, and whether it matters if that person is a private individual attempting to profit off of the bounty provision of Texas’s anti-abortion law, for example, or whether that person is law enforcement. What will you do in order to try to protect your clients and customers as much as you possibly can? Sometimes that begins with thinking about whether you really need to collect certain data at all.
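The questions posed above map onto a simple data-minimization pattern: collect only the fields a workflow actually needs, attach a retention period, and delete on schedule, so there is less to hand over later. The sketch below is a hypothetical illustration; the field names and retention window are assumptions, not a prescription for any specific company.

```python
# Hypothetical data-minimization and retention sketch: strip fields the
# workflow does not need, timestamp what is kept, and purge it on schedule.
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"payment_token", "shipping_region"}   # what the service truly needs
RETENTION = timedelta(days=30)

def minimize(record: dict) -> dict:
    """Drop everything the workflow does not need (e.g., precise location, search history)."""
    return {key: value for key, value in record.items() if key in REQUIRED_FIELDS}

def purge_expired(store: list, now: datetime) -> list:
    """Delete records once they are older than the retention window."""
    return [r for r in store if now - r["stored_at"] <= RETENTION]

raw = {"payment_token": "tok_123", "shipping_region": "TX",
       "precise_location": "30.2672,-97.7431", "search_history": ["clinic near me"]}
record = minimize(raw)
record["stored_at"] = datetime.now(timezone.utc)
store = purge_expired([record], datetime.now(timezone.utc))
print(record)   # no location or search history retained; nothing to hand over later
```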
MELINDA: Even thinking about some of the companies that are doing the right thing by providing services, grants, or other financial support for people who need to go out of state to access basic reproductive healthcare services, companies are starting to develop policies around this. But what they also need to be thinking about, I think, and correct me if I’m wrong or add to this, is what data they’re collecting along with that. Are you actually asking people to go through certain channels where there’s a written trail for doing things like that? The data you’re collecting there, I think, is a huge part of developing that policy. What does it look like to really protect the data while you are providing that service?
LYDIA: Right. That’s something where companies can learn a lot from grassroots activists and organizers, and also from sex workers. Sex workers have long been organizing to keep themselves and their communities safer in the face of increasingly terrifying surveillance regimes targeting sex workers and intensifying criminalization. I want to be really clear: when I say sex workers, I mean people who are doing sex work because they’re choosing to; I’m not talking about people who are being trafficked.
MELINDA: Absolutely. Thank you for all of this. Thank you for the work you do, and the research that you do, and also for sharing some of that here with all of us. We always end with a call to action. So I want to ask you, what action would you like people to take following our conversation?
LYDIA: I really encourage each and every one of you to look up, take some time to look up, what kinds of apps and software programmes you personally use, and your company is using. What kinds of information are they collecting, and how can that information be used to passively or actively discriminate against, or harm, vulnerable or marginalized people? If your company is using the app or the software programme, then you can be an advocate to ask and demand and fight for your company to stop using that programme, to stop engaging in that potentially predatory or dangerous data practice that can put people’s lives, freedom, or health at risk.
MELINDA: Fantastic. Where can people learn more about you and your work?
LYDIA: You can reach out to CDT via our website and read our reports there at www.CDT.org. You can also read about me and my work at my homepage, www.LydiaXZBrown.com.
MELINDA: Awesome, thank you! Thank you all for listening and watching and please do take action. See you next week.
To learn more about this episode’s topic, visit ally.cc.
Allyship is a journey. It’s a journey of self-exploration, learning, unlearning, healing, and taking consistent action. And the more we take action, the more we grow as leaders and transform our communities. So, what action will you take today?
Please share your actions and learning with us by emailing podcast@ChangeCatalyst.co or on social media because we’d love to hear from you. And thank you for listening. Please subscribe to the podcast and the YouTube channel and share this. Let’s keep building allies around the world.
Leading With Empathy & Allyship is an original show by Change Catalyst, where we build inclusive innovation through training, consulting, and events. I appreciate you listening to our show and taking action as an ally. See you next week.