Today’s guests have written a study about the Google Search engine, and the subtle – and not-so-subtle – ways in which it shows its bias, and in many ways perpetuates tired old stereotypes.

Orestis Papakyriakopoulos is a postdoctoral research associate at Princeton’s Center for Information Technology Policy. His research showcases political issues and provides ideas, frameworks, and practical solutions towards just, inclusive and participatory algorithms. Arwa Michelle Mboya is a research assistant at the MIT Media Lab. She is a virtual reality programmer and researcher who investigates the socio-economic effects of enhanced imagination.

Links:

“Beyond Algorithmic Bias: A socio-computational interrogation of the Google search by image algorithm,” July 2021, Orestis Papakyriakopoulos, Arwa Michelle Mboya.

Transcript:

Aaron Nathans:
Just to note to our listeners, this podcast episode does include some sensitive language. Let’s get started.

Aaron Nathans:
From the Princeton University School of Engineering and Applied Science, this is Cookies, a podcast about technology, privacy and security. I’m Aaron Nathans. On this podcast, we’ll discuss how technology has transformed our lives, from the way we connect with each other, to the way we shop, work and consume entertainment. And we’ll discuss some of the hidden tradeoffs we make as we take advantage of these new tools. Cookies, as you know, can be a tasty snack, but they can also be something that takes your data.

Aaron Nathans:
On today’s episode, we’ll talk with Orestis Papakyriakopoulos and Arwa Michelle Mboya. Orestis is a postdoctoral research associate at Princeton’s Center for Information Technology Policy. His research showcases political issues and provides ideas, frameworks, and practical solutions toward just, inclusive and participatory algorithms. Arwa is a research assistant at the MIT Media Lab. She is a virtual reality programmer and researcher who investigates the socioeconomic effects of enhanced imagination. They’ve written a study about the Google search engine, and the subtle, and not so subtle, ways in which it shows its bias, and in many ways perpetuates tired old stereotypes. Let’s get started. Orestis, Arwa, welcome to the podcast.

Arwa Michelle Mboya:
Thank you. Excited to be here.

Orestis Papakyriakopoulos:
Thanks for having us.

Aaron Nathans:
All right. So if I google the word lawyer, up come images of mostly men in suits. If I google judge, I see men in black robes, with the exception of Judge Judy. If I google firefighter, they’re mostly guys. And if I google nurse, they’re largely women. Orestis, why does the search engine work like that?

Orestis Papakyriakopoulos:
So the search engine practically indexes pages that exist online, and then it tries to match a query that the user makes to these pages to return the best results. So the search engine actually brings to the front all the stereotypes and biases that exist in society, and consequently also exist online.

Aaron Nathans:
How does a search engine work? Who creates the algorithm? Is it something that was created a long time ago or is it fluid, is it being fine tuned?

Orestis Papakyriakopoulos:
So search engines are opaque generally; we don’t know exactly how they function because they are usually the property of a company, but we know more or less how they behave. Google or another company indexes the web, so it has a database of websites, and then an algorithm, based on the information the user provides, returns the best results to them.
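
To make the indexing-and-matching idea concrete, here is a minimal sketch of an inverted index in Python. The pages, tokenizer and scoring are invented for illustration and say nothing about Google’s actual, proprietary ranking system; the point is only that results come from what is already online.

```python
from collections import defaultdict

# A toy corpus standing in for "pages that exist online" (invented examples).
pages = {
    "page1": "nurse helps doctor at the hospital",
    "page2": "firefighter rescues family from burning house",
    "page3": "the judge and the lawyer argued in court",
}

# Build an inverted index: term -> set of pages containing that term.
index = defaultdict(set)
for page_id, text in pages.items():
    for term in text.lower().split():
        index[term].add(page_id)

def search(query):
    """Return pages matching the query, ranked by how many query terms they contain."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for page_id in index.get(term, set()):
            scores[page_id] += 1
    # Highest-scoring pages first; a real engine layers many more signals on top.
    return sorted(scores, key=scores.get, reverse=True)

print(search("judge"))  # ['page3']
```

Because the index only reflects the corpus it was built from, any skew in that corpus flows straight through to the results.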

Aaron Nathans:
Is combating bias usually a part of the designer’s thinking? Has it been that way for a long time or is this kind of a new concept?

Orestis Papakyriakopoulos:
Actually, no. Usually designers don’t think about biases and what might emerge. The field of algorithmic bias has emerged over the last few years and has become more and more prominent and important. The algorithms that are part of search engines are dynamic, so they map the internet constantly and update their inferences and search results depending on the information that exists on the internet. So designers usually think about the technical issues of search algorithms and less about the ethical and political consequences they bring.

Aaron Nathans:
Have those consequences started to seep into the algorithm already, or is it just kind of all academic at this point?

Arwa Michelle Mboya:
I think you’re starting to see some of those changes happening, although not necessarily at the pace that they should be happening, and not necessarily in all the areas that they should be happening in. But I think a very sort of notorious example is when the Google search engine was categorizing Black people as monkeys, and people identified that bias and identified that misclassification and so… And I think we’ll talk about this a bit later in the interview, but that was overcorrected. So now you’re not going to see a Black person being labeled monkey, at least it never happened in our study. I don’t think we saw that even once. But those were very specific instances that were called out that the company was then able to correct for. But researchers like us are the people who have to go in and look for those biases and look for those misclassifications and then say, “Hey, this is what’s happening.” And it’s not all the time that they take our feedback or our critique into consideration.

Aaron Nathans:
So is it possible, Arwa, to eliminate all bias? There’s a lot that individual people, people who may be programming the algorithm, don’t know about cultures that are separate from their own.

Arwa Michelle Mboya:
Yeah, that’s the big question. And I think if somebody had the answer to that, if someone had an algorithm that could eliminate all bias, then they would be either very rich or in jail, one of the two. And so I think part of eliminating the bias is diversifying the talent of the people who are creating the algorithms and the data that is being put into the algorithms. Orestis, this is kind of your area of expertise. I know that in another study, you’ve actually done some of this work. Do you want to share what that looked like?

Orestis Papakyriakopoulos:
Yes. So actually the main issue is that all these algorithms today try to replicate human behavior. So automatically, because human behavior is biased, all these algorithms are going to be biased. In general, in order to combat these issues, or at least reduce them, we need to think, okay, at the end product, when an algorithm does inference, how can we prevent it from happening or filter it, and so on. This is the practical solution. And the more utopian solution is to try to think of alternative ways of how this algorithm should be trained, not based on existing human behavior, but rather with respect to an ideal one.

Aaron Nathans:
There are aspects of bias that we don’t usually think about. What’s the significance of black hair politically in some cultures, and why would that matter? You’ve written about this. I don’t know which of you would like to take this question. Go ahead.

Arwa Michelle Mboya:
Orestis, do you want to talk about black hair?

Orestis Papakyriakopoulos:
I don’t want to talk about hair in general.

Aaron Nathans:
Those of you at home are not seeing what I’m seeing, but go ahead, please.

Arwa Michelle Mboya:
Orestis is bald. Black hair is political for Black women, more so in the West, I would say, than in Africa, because… I’m from Kenya, and there black hair isn’t, I wouldn’t call it political, so much as just a really important feature of Black womanhood. Whereas in the West, black hair is much less understood, or is politicized in that it is seen as unprofessional or is seen as… There have just been a lot of efforts in the West to make black hair seem not good. You’re seeing that changing over the years, in that in the past Black women would have to perm and relax their hair so that it was straight, so that it would be professional.

Arwa Michelle Mboya:
You were seeing in schools Black students not being allowed to wear afros or dreadlocks or other protective styles that are common in Black culture. Those are things that I never struggled with growing up in Kenya, but once I moved to the U.S., I did struggle with. And now you’re seeing a whole other movement where Black people are reclaiming the beauty of their hair, the versatility of it. And there’s a natural hair movement now. And those are words that I’m seeing kind of both of you being like, “Huh, what? Oh my God.” It’s totally not in your periphery of topics to think about, whereas it’s constantly in mine. And so if I’m googling a picture of a Black woman, or if I’m searching for something and all the results that I’m getting back are stereotypes, to me, it’s very clear that there’s some sort of bias in the search engine. But if I’m not an engineer, if I’m not an algorithmic designer, I might not have the language to describe what that feedback really is.

Arwa Michelle Mboya:
And so, like I said before, part of that is diversifying the pool of people who are creating these algorithms, because like Orestis says, these algorithms are really opaque. And even as researchers, we can’t necessarily get into all the nitty gritty of how they’re working. You have really amazing researchers at Google and Facebook and all these companies who are doing all these things. But if none of them are Black women, if none of them are people who have that context about black hair the way that I do, then things like this keep getting missed, just like the monkey example, until somebody brings it up.

Aaron Nathans:
So can you tell me a little bit about how you conducted your research? What were you looking for? You guys sorted through a lot of different images, you used a web crawler. How did that work?

Orestis Papakyriakopoulos:
So we knew that the Google search engine carries biases, through examples like the one that Arwa mentioned, and we wanted to see how systematic this bias was. Because these cases of bias were very targeted, and easy to see and to mitigate, and so on. So we wanted to see, okay, Google is removing such biases, but do they actually remove biases in general? That’s why we focused on a very small part of the search engine where you can feed in an image and get some search results back, as well as a label for that image. And we wanted to see, by systematically showing pictures to this part of the algorithm, what general stereotypes the algorithm reproduces.

Aaron Nathans:
So you actually put an image into the search engine?

Orestis Papakyriakopoulos:
We put 40,000 images of individuals in the search engine, of different ethnic groups.

Aaron Nathans:
How do you put an image into an engine?

Orestis Papakyriakopoulos:
That’s a very good question. So Google has the option, it’s called Reverse Image Search, in Google Images, where you can drag in an image or upload it and you will get results related to that image, as well as a label assigned to that image.

Arwa Michelle Mboya:
So the same way that you might google cheetah, you can actually take a picture from your desktop, just go onto Google and instead of… You can do web, image, whatever, just go to search by image, and you can actually drag an image into the Google search engine. And it will return results to you the same way that if you put in a word it’ll return some options. What it returns tends to be images that are similar, related articles, and, as Orestis is saying, which is really what we were studying, a label.

Arwa Michelle Mboya:
So if I put a picture of you, Aaron, into the search engine, if it recognized your face it might say, “Hey, this is Aaron Nathans.” But if it didn’t, it might say white man, or it might say researcher, or it might say podcast or something like that. So that’s what we were looking at. We were feeding in 40,000 images of people of different ages, races, genders, and then seeing, what does the label return? And we were looking really at, is there some sort of misclassification going on? Do they label people as something they’re not? And then from a more computational social science perspective, we were trying to see, what is the language that the Google search engine returns, and how does that vary across different demographics?
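
The analysis Arwa describes comes down to tallying the labels returned for each query and comparing their distribution across demographic groups. Below is a minimal bookkeeping sketch in Python; the file name, column names and example groups are hypothetical, not the authors’ actual data or code.

```python
import csv
from collections import Counter, defaultdict

# Hypothetical results file, one row per query:
#   group,label
#   black_woman,afro
#   white_man,researcher
label_counts = defaultdict(Counter)

with open("labels.csv", newline="") as f:
    for row in csv.DictReader(f):
        label_counts[row["group"]][row["label"].lower()] += 1

# Print the most frequent labels per group, as a share of that group's queries.
for group, counts in label_counts.items():
    total = sum(counts.values())
    print(f"\n{group} ({total} queries)")
    for label, n in counts.most_common(5):
        print(f"  {label:<20} {n / total:.1%}")
```

Large gaps between groups in which labels dominate, and how concentrated they are, are the kind of systematic pattern the study looks for.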

Aaron Nathans:
So Arwa, I found it interesting that the algorithm tends to revert to some pretty tired and unfair stereotypes on race and gender. You’ve spoken about how people can be classified by the search engine as quote, “hot and human.” What does that mean?

Arwa Michelle Mboya:
Yeah. So hot and human are two labels that came back primarily for non-white individuals, especially for non-white men. So, like I said, if I fed in an image of you, it might say researcher; you as a white man, it might describe your career, it might describe the action that you’re doing, so speaking, if I shared a picture of you with a microphone. However, those labels, hot and human, became much more prevalent, and to a significant degree. Women of color particularly were frequently labeled as hot or with other appearance-based characteristics like sexy and most beautiful, so all these superlatives about beauty and attractiveness, and then human.

Arwa Michelle Mboya:
And the reason human is really interesting is because it’s the most basic term you could use to describe somebody. And I think the human must have come from an overcorrection after that finding we mentioned about the monkey, because human was used more often than not on Black people, as if that was something to label someone by. If human was a label across the board, across gender, across race, then we would say, okay then, the algorithm is just extra precise in giving us exactly the species that we are. But that label was very rarely used on men, especially white men. It says something about the algorithm. It’s not very creative on people of color, on women of color specifically, and can get, I think we’ll talk about this a bit later, but can get more creative for men, and more than just men, white men.

Aaron Nathans:
Human, it sounds like you’re trying to impart some kind of dignity, but human says almost nothing about a person.

Arwa Michelle Mboya:
Yeah, exactly.

Aaron Nathans:
Whereas as a researcher, you’re elevating that person.

Arwa Michelle Mboya:
Yeah.

Orestis Papakyriakopoulos:
And if I may add, there was this hierarchy. If you were a white man, you had all these diverse words describing yourself, your appearance and actions. If you were a non-white man, you were (inaudible) to human. If you were a white woman, you were usually classified as blonde. And if you were a non-white woman, you were sexy, hot, and so on. So we had a clear hierarchy in the results of the algorithm that replicates white patriarchy and the hierarchy of masculinity as well. And to the point of what human means: the more general and redundant a description is, the more discriminatory potential there is behind it. Because stereotypes are generalizations, and being human is the biggest generalization you can make about a human.

Arwa Michelle Mboya:
To be able to so explicitly see those overcorrections, the human and things like that, kind of says that even though they fixed a particular bias, they’ve not necessarily restructured or redesigned the algorithm in and of itself. It’s kind of like a Band-Aid on top of a cut instead of actually going in and giving it stitches.

Aaron Nathans:
You’re listening to Cookies, a podcast about technology, security and privacy. We’re speaking with Orestis Papakyriakopoulos and Arwa Michelle Mboya. Orestis is a postdoctoral research associate at Princeton’s Center for Information Technology Policy. Arwa is a research assistant at the MIT Media Lab. On next week’s episode, we’ll talk with David Sherry, the Chief Information Security Officer at Princeton University. He will give us some tips on how we can shore up our own digital security.

Aaron Nathans:
It’s the 100th anniversary of Princeton’s School of Engineering and Applied Science. To celebrate, we’re providing 100 facts about our past, our present and our future, including some quiz questions to test your knowledge about the people, places and discoveries that have made us who we are. Join the conversation by following us on Instagram at @eprinceton, that’s the letter E, Princeton. But for now, back to our conversation with Orestis and Arwa.

Aaron Nathans:
Your research included an interesting exercise of finding photos of both male and female top executives and stripping them of their privilege to see how the algorithm associated them just by their appearance. Was there any difference in how the men and women were labeled?

Arwa Michelle Mboya:
Yeah. I’m excited to take this one, because it was an innovative research method, let’s say. We wanted to challenge power. We’ve been thinking about how research sometimes really focuses on the minority groups, the groups that are affected, like I said, like, oh, Black people are labeled monkey and all this. Borrowing from some research by a colleague of mine, Chelsea Barabas, who wrote a paper on challenging up, like studying up, challenging power itself, studying the people who hold the most power, we said, okay, we’ve seen some of these results, let’s see how these results apply to the people at the top of the technosocial hierarchical ladder, let’s say.

Arwa Michelle Mboya:
And so we wanted to look at images of the Mark Zuckerbergs, the Tim Cooks, the Larry Pages, the Peter Thiels, the people who kind of have the authority to design or to make decisions about these algorithms. And we wanted to do that for both men and women. I will say that it was much more difficult to come up with a list of nine women at that level of power that we are talking about when we say Mark Zuckerberg and Jeff Bezos. But nevertheless, we did our homework and we found nine women, primarily white women, including one black woman, Bozoma Saint John, and-

Aaron Nathans:
Who were some of the other ones?

Arwa Michelle Mboya:
Sheryl Sandberg was in there, Marissa Mayer, the CEO of Yahoo, Gwynne Shotwell, SpaceX president, Susan Wojcicki, YouTube CEO. I think these names are less household names than some of the names on the men’s side, but they all hold a lot of power. Amy Hood, Microsoft CFO. And so the idea, in the same way that we were feeding images into the algorithm, was to feed their faces into the algorithm. However, these people are really well indexed on the internet, and if I go and try to google a picture of Tim Cook, it probably already has the label Tim Cook. And so what we did was try to find videos of them on YouTube and then screenshot really random moments of their faces that we didn’t think would have been indexed already. And for the most part, we were pretty successful in doing that.
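
The “random screenshots from video” step Arwa describes can be approximated with OpenCV: grab a handful of frames at random positions from a locally saved clip, so the images are unlikely to already be indexed. This is a plausible reconstruction under that assumption, not the authors’ actual pipeline, and the file names are hypothetical.

```python
import random
import cv2  # pip install opencv-python

def sample_frames(video_path, n_frames=5, out_prefix="frame"):
    """Save n_frames randomly chosen frames from a local video file as JPEGs."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    for i, idx in enumerate(sorted(random.sample(range(total), n_frames))):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # jump to the randomly chosen frame
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(f"{out_prefix}_{i}.jpg", frame)
    cap.release()

# Hypothetical usage: a locally downloaded interview clip.
# sample_frames("interview_clip.mp4", n_frames=5)
```

Each saved frame can then be uploaded through the reverse image search interface and the returned label recorded, the same way as with the other images in the study.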

Arwa Michelle Mboya:
And then we fed them in and then we did a little study to see what labels were returned for the men and the women. And it was a really enlightening investigation, because the algorithm was way better at classifying the individuals by name for men: when we did it for the men, Mark Zuckerberg was identified a couple of times. We didn’t just use one photo for each one, we used a couple of photos for each one, so there were multiple opportunities for the algorithm to label each individual. And what we found for the men was, yes, sometimes they were identified by their actual name, so the algorithm was smart enough to recognize them. But even when it didn’t, the number of descriptors used to describe them was very diverse. They were labeled as gentleman, they were labeled as businessperson, they were labeled by the action they were doing, they were public speaking, or they were singing, or they were sitting, or they were official. It used all these adjectives to describe the men, like academic address, police officer, selfie, all this descriptive language to describe the men.

Arwa Michelle Mboya:
With the women, the word that they used the most was girl. That’s also in contrast to gentlemen; they don’t call them ladies or women, they call them girls.

Aaron Nathans:
So Sheryl Sandberg, and some of these other high-ranking executives, they were labeled as girls?

Arwa Michelle Mboya:
Mostly girls. Sheryl Sandberg was labeled a girl nine times, and then labeled spokesperson one time, so that’s out of 10 queries. There were some inappropriate labels that were returned; I think one was tetas de Martha Sanchez, which in Spanish means the tits of Martha Sanchez, I think Amy Hood was labeled that. Just very undiverse labels, so Afro for the one Black woman that was there.
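
Counts like these, nine girl and one spokesperson out of ten queries, can be summarized with a simple diversity measure so that groups can be compared directly. The sketch below uses Shannon entropy over each group’s label distribution; the tallies are made-up illustrations in the spirit of the findings, not the study’s published numbers.

```python
import math
from collections import Counter

def label_diversity(counts):
    """Shannon entropy (in bits) of a label distribution: higher means more varied labels."""
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical tallies, for illustration only.
label_counts = {
    "male_executive":   Counter({"gentleman": 4, "businessperson": 3, "public speaking": 2, "official": 1}),
    "female_executive": Counter({"girl": 8, "blond": 1, "spokesperson": 1}),
}

for group, counts in label_counts.items():
    print(f"{group:<17} distinct labels: {len(counts)}   entropy: {label_diversity(counts):.2f} bits")
```

A group that is nearly always given the same one or two labels scores low on both measures, which makes the narrowness of the labeling quantifiable.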

Arwa Michelle Mboya:
Blonde was the second most used label. I think half the women are blondes, but they’re still categorized by their hair. So when the algorithm doesn’t know who it’s dealing with, you see there’s a huge bias toward men. Like you said, when you searched firefighter, when you searched judge, you were getting men. So there are all these ideas about what men can be and such a small imagination about what women can be. And it gets worse when you add women of color on top of it. The same thing is happening to white women; it’s happening at a worse scale to women of color, but all of this is happening to women in general. So you see some of those power dynamics really being exhibited by the algorithm. And you would hope that findings like this are what challenge power to make different decisions. Do you have anything to add onto that, Orestis?

Orestis Papakyriakopoulos:
I would like to add that although we tried to strip the privilege of Zuckerberg and other prominent male tech individuals, the algorithm still kept recognizing them. Sometimes it was impossible for the algorithm not to find their names. And besides that, the general point of studying up was not only to look at the people with power and challenge them, but also to bring awareness, because by bringing the bias closer to the people who actually hold the power, they might be more influenced by it to change things. There have been a thousand studies that show the problems and issues of minorities and so on, but when the problem comes to those who hold the power, you start seeing it differently. And that was one of the aims that we wanted to achieve by studying it.

Aaron Nathans:
So Orestis, do people perceive themselves in the way that a search engine perceives them?

Orestis Papakyriakopoulos:
No. And it was really interesting when I was showing the results to colleagues of mine here at Princeton. They took pictures of themselves and started feeding them into the search engine, and they were really negatively surprised by the results the algorithm was returning to them. And some were really weird results. For example, a colleague of mine was labeled by her teeth; the label that the algorithm returned was teeth. Yes, because she has a very nice smile. Indeed, in the pictures she uploaded, she had a nice smile, but the algorithm saw teeth. And she told me, “I’m really feeling weird about the result that the algorithm brings.” And that brings up the point that these search queries are not just biases in the abstract, where you say, oh, something is biased; they come with real-world consequences and real-world harms, both in a systematic way and in a direct way when people use the search engine.

Arwa Michelle Mboya:
All right. We put ourselves in it. Why am I forgetting what it labeled me as? I think when I put it in, it just said Afro.

Orestis Papakyriakopoulos:
Arwa put in a picture of herself and it said Afro, but the issue was not that. In the search results, next to her picture, there was Afro, a description of what an Afro is, but it did not have a picture of an African person; it had a picture of a white man wearing a wig, an Afro wig.

Arwa Michelle Mboya:
Yeah. Wearing [crosstalk 00:27:42] Afro wig. Yeah, I remember that. And I think you were described as… What were you described as? Did you put yours in?

Orestis Papakyriakopoulos:
I think I was just human, or gentleman.

Arwa Michelle Mboya:
[crosstalk 00:27:52] that. Yeah.

Orestis Papakyriakopoulos:
I did not have anything special returned.

Arwa Michelle Mboya:
Yeah. But we have, I don’t know if that was the Google search engine, but there was another… I think we put your face into something else, some other engine, and it returned “rape suspect.”

Orestis Papakyriakopoulos:
Yeah.

Aaron Nathans:
Oh god.

Orestis Papakyriakopoulos:
Yeah. So it was one of these… Exactly. I think it was from AI Now, a project that wanted to bring awareness to the biases that computer vision algorithms have, and it classified me as a rape suspect.

Arwa Michelle Mboya:
Yeah.

Aaron Nathans:
And how did that make you feel?

Orestis Papakyriakopoulos:
Not nice, of course. And I don’t think Arwa felt nice either, with how the search engine returned the result about her, or me. I didn’t want to talk about it to anyone, for example, in my case. It’s something I don’t mention to anyone, that the algorithm returned such a label about me.

Arwa Michelle Mboya:
Oh, sorry that I brought it up.

Aaron Nathans:
Wow.

Orestis Papakyriakopoulos:
No, it’s fine. It’s true. And we should talk about these things and bring awareness.

Arwa Michelle Mboya:
Yeah.

Aaron Nathans:
So what’s the harm? Seeing these things alone, it’s an interesting exercise, but what does this say? What is the overall societal harm in these biases, in the algorithm?

Arwa Michelle Mboya:
Well, there are so many, and this is really Orestis’s expertise, but two of the things that come up for me are, well, a) it’s all about perception; perception matters. The internet is how we perceive the world these days. It’s kind of like how when you see images of Africa, it’s mostly starving kids, and it just keeps the perception of the continent that way, even though that is not what it is in its entirety. And that affects how people interact with you, that affects how you get to travel and see and encounter the world, through how people perceive you. And then the second is what these algorithms are used for outside of just a search engine.

Arwa Michelle Mboya:
Another colleague of ours from the Media Lab, who has really famous research, is Joy Buolamwini. She is a research scientist and has been tackling bias in AI after finding that computer vision algorithms could not recognize or identify the faces of Black people, people of color and women in facial recognition technology. She’s been the driver of a lot of the work to diversify and improve those algorithms. Because when she was testing them out in 2015 or something, they were not recognizing her face. She would have to wear a white mask for the algorithm to recognize her face. And how dangerous is that if those are the same technologies that are being sold to the government and to the military and to the police to identify people, to identify crimes and to try cases? If it cannot recognize my face, how is it going to tell the difference between me and some other person who is also Black? So there are all these huge risks that come with it.

Arwa Michelle Mboya:
And the problem is that we don’t know exactly how these algorithms are working, so we can’t interrogate the problem very specifically. We can bring these things to light, just like Joy brought to light, “Hey, this thing isn’t recognizing Black people.” And because obviously there were no Black people designing these algorithms, no one had ever thought about it. And so now she’s a superstar researcher, and everyone is trying to correct for their mistake. But if Joy hadn’t done the work that she had done, where would we be right now? And so there’s also the burden of putting this labor on researchers of color, and other researchers in general, to identify all these potential harms, instead of that being part of the design and the creation of the algorithm.

Orestis Papakyriakopoulos:
I totally agree with all the things you said. And I believe also that the harms are a lot of times untraceable, because these applications and these input data, like our images, our social media accounts, are fed into numerous algorithms, from credit score calculation to recidivism evaluation, for example, to any other case we can think of. And we really don’t know how these problematic inferences contribute to decisions that affect people’s lives. And it’s not only the search engine that comes with harms, but also the other cases and the other applications that have much bigger direct implications. But even with the Google search engine, if you put in cowboy, you will get a cowboy, a normal cowboy. If you put in female cowboy, you will get a sexy cowboy. So all the new generations learn to reproduce all these biases and social roles through the search engines, because the search engines don’t reflect on what they’re actually doing.

Aaron Nathans:
So Orestis, what needs to be done to address these problems and what is being done to address these problems?

Orestis Papakyriakopoulos:
So, as Arwa said, what has been done until now is to put patches on specific cases that get traction in the media, and then the companies say, oh, we solved the problem, and they try to be as opaque as possible. To bring an example, translation algorithms have the issue that when you translate something into a language that is gendered, problems appear. So if you use a search engine and you want to translate something that is not in a gendered language into a gendered one, usually the gender that the algorithm chooses is the one that replicates the biggest social stereotypes existing in the society. For example, nurse is going to be translated to the female version in the other language, while doctor goes to the male. And what Google has done is correct it so that if you put in very short sentences, you get both gendered results, but if you put a whole text inside, the biases persist. So again, you find these patches where they try to fix something and say, oh, we dealt with that, without actually reflecting on the issue or solving it at its root.
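
The translation probe Orestis describes can be framed as a simple experiment: translate short, role-naming sentences from a language without grammatical gender into a gendered one, record which gender the system picks, then repeat with the same sentence embedded in a longer passage. The sketch below only shows the shape of such a test; translate() is a hypothetical placeholder for whatever translation service one would call, not a real Google API, and the Turkish examples are chosen because Turkish pronouns are not gendered.

```python
# Hedged sketch of the probe described above. `translate` is a hypothetical
# placeholder for a real translation service, not Google's actual interface.
def translate(text, source="tr", target="en"):
    raise NotImplementedError("plug in a real translation service here")

# Turkish: "o bir hemşire" / "o bir doktor" = "they are a nurse / a doctor" (no gendered pronoun).
PROBES = ["o bir hemşire", "o bir doktor"]

def gender_of(translation):
    """Crudely check which English pronoun the translation chose."""
    padded = f" {translation.lower()} "
    if " she " in padded:
        return "female"
    if " he " in padded:
        return "male"
    return "unclear/both"

def run_probe():
    for sentence in PROBES:
        short = translate(sentence)                                      # sentence on its own
        in_context = translate("Dün hastanedeydim. " + sentence + ".")   # inside a longer passage
        print(sentence, "->", gender_of(short), "(short) /", gender_of(in_context), "(in context)")
```

The interesting output is not any single translation but whether the short and in-context versions are handled differently, which is the patching behavior described above.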

Orestis Papakyriakopoulos:
As to what can be done or should be done, Arwa mentioned diversifying. The more that different people from different backgrounds are responsible for creating these systems, the more inclusive the systems are going to be. But also, designers of the algorithms should take that component into account while they design a system, and they should include the ethical component when they evaluate the system’s results. And most important, there must be regulation that forces the companies and individuals who deploy these algorithms to conform to specific standards.

Arwa Michelle Mboya:
Yep.

Aaron Nathans:
Is it enough to say that a search engine algorithm reflects the culture it serves, biases and all? Is it merely a mirror of the stereotypes society has held, or do we need to hold our search engines to a higher standard?

Arwa Michelle Mboya:
I think it’s fair to say that they’re not coming from nowhere; the algorithm didn’t make them up. These biases exist, but they’re the biases of, and people still say the majority, but it’s not necessarily the majority anymore, because the world is not primarily white. And so it’s the biases of those with power. And so it’s not that minorities, it’s not that Black women see themselves as only being defined by their hair, it’s not that women of color want to exoticize themselves; these are things that are happening to them from people with power, and the algorithm and the search engine are just reflecting the power that is being exhibited by those who create the technologies, by those who hold wealth in general.

Orestis Papakyriakopoulos:
If society was behaving in a specific way until now, this doesn’t mean that it should keep behaving like that. That’s why we introduce regulation. That’s why we discuss all these things, to change them. And search engines and these algorithms bring the opportunity, or the privilege, that we can really shape them. We did not have this opportunity for many other things, but we can shape the algorithm. So why not do it correctly? The argument of, oh, society is biased, so the algorithm is biased and it’s fine, is problematic on all levels.

Aaron Nathans:
This has been a really fascinating conversation. I’ve really enjoyed this.

Arwa Michelle Mboya:
Yeah. Thank you so much for having us. It’s been fun.

Orestis Papakyriakopoulos:
Thank you very, very much.

Aaron Nathans:
Thank you. Well, we’ve been speaking with Orestis Papakyriakopoulos and Arwa Michelle Mboya. Orestis is a postdoctoral research associate at the Princeton Center for Information Technology Policy. Arwa is a research assistant at the MIT Media Lab. I want to thank our guests, as well as our recording engineer, Dan Kearns. Thanks as well to Emily Lawrence, Molly Sharlach, Neil Adelantar, and Steve Schultz.

Aaron Nathans:
Cookies is a production of the Princeton University School of Engineering and Applied Science. This podcast is available on iTunes, Spotify, Stitcher, and other platforms. Show notes and an audio recording of this podcast are available at our website, engineering.princeton.edu. If you get a chance, please leave a review, it helps. The views expressed on this podcast do not necessarily reflect those of Princeton University. I’m Aaron Nathans, Digital Media Editor at Princeton Engineering. Watch your feed for another episode of Cookies soon. Peace.

 
