This is the second half of our conversation with Arvind Narayanan, associate professor of computer science here at the Princeton University School of Engineering and Applied Science.

He is a widely recognized expert in the area of information privacy and fairness in machine learning, with a huge Twitter following and a knack for explaining tech privacy matters in terms anyone can understand.


Arvind Narayanan
In this half of our conversation, he talks about why he’s so active on Twitter, but not the Facebook platforms. He talks about his research into “over-the-top” set-top devices like Roku and Amazon Fire TV, and how they provide content that looks like television content but takes your data like the Internet apps they are. He has critical things to say about Zoom, the platform so many of us are using to work from home. And he discusses one group of people who have seen their privacy actually improve as a result of social media. If you missed the first part of the conversation, you can hear it at this link.

Subscribe: iTunes | Spotify | Google Podcasts | Stitcher | RSS Feed


Arvind Narayanan’s Twitter account (with 73k followers).

“Watching You Watch: The Tracking Ecosystem of Over-the-Top TV Streaming Devices,” Freedom to Tinker, Sept. 18, 2019.

“Zoom’s security and privacy problems are snowballing,” Business Insider, April 1, 2020.


Aaron Nathans:

From the Princeton University School of Engineering and Applied Science, this is Cookies, a podcast about technology security and privacy. On this podcast, we’ll discuss how technology has transformed our lives from the way we connect with each other, to the way we shop, work, and consume entertainment. We’ll discuss some of the hidden trade-offs we make as we take advantage of these new tools. Cookies, as you know, can be a tasty snack, but they can also be something that takes your data. This is part two of our conversation with Arvind Narayanan, associate professor of computer science here at Princeton. He’s a widely recognized expert in the area of information privacy and fairness in machine learning. Let’s jump in.

Aaron Nathans:

A lot of us these days are spending a lot of time on social media. I see that you’re active on Twitter. I’ve followed you for a while and it’s a great Twitter account. I’d suggest if anybody out there is on Twitter, that they follow you. What’s your Twitter handle?

Arvind Narayanan:

It’s @random_walker, random underscore walker, W-A-L-K-E-R.

Aaron Nathans:

You’re active there, but you’re not active on the Facebook platforms. Why is that?

Arvind Narayanan:

I think different platforms have different pros and cons. Something that I find Twitter to be very useful for is quickly getting information out and commentary out to a lot of people and similarly listening to other people’s commentary and acquiring information. I don’t think Twitter is very good for debate. A lot of people try to use it that way and then it just turns into battles between entrenched opposing interests.

But if somebody is a member of the research community and wants to follow all the research that’s going on, I think Twitter is a really good platform for that. What it allows you to do is very quickly see in a tweet what is happening and then file that away in the back of your brain, or sometimes in a document that I keep for tracking what is going on in various areas of interest to me, and then later go back to it when I want to explore it in more detail. It allows me to very quickly consume a large amount of information without a huge amount of depth. Twitter is not, of course, the be-all and end-all of online interaction, but it’s very good for that one particular thing.

Aaron Nathans:

How about from a privacy point of view? Is Twitter any more benign than the Facebook platforms?

Arvind Narayanan:

I think what’s very good about Twitter is that it’s not designed for private communication in the first place. Sure, there is a protected tweets feature, there is a direct messaging feature, but by and large, the kind of activity that happens on Twitter is intended to be public, and not only to be public but to be publicized. Again, I know I’m making a generalization, but as generalizations go, I think this one is relatively useful, and just because of how Twitter is designed, I think it has resisted some of the privacy mishaps that other social media platforms have found themselves repeatedly embroiled in.

Aaron Nathans:

They say of Facebook that, again, we are the product. What do people mean by that, and is that the case with non-Facebook social media?

Arvind Narayanan:

I think Facebook, perhaps more than other social media platforms, has been designed in a way that tries to maximize how often people keep coming back, how much time they spend on it, and just because this kind of design is very core to how the platform functions, it has repeatedly made decisions that are not very friendly for privacy.

It’s not just that. I think Facebook has had a culture of repeatedly rolling out privacy-infringing features, waiting for the inevitable outcry and pushback, then rolling the feature back only partly, gradually moving in a more and more privacy-eroding direction over time. Something I find particularly concerning about Facebook is that there is a lot of basic housekeeping they should be doing when it comes to how third parties might be exfiltrating data, how their targeted advertising features are being misused, or content moderation. Over and over again, Facebook has been, let’s say, negligent in doing this and has left it up to third parties and concerned outsiders to bring these problems to the company’s attention, taking action only when absolutely necessary. For a company that is so profitable and has the resources to invest in better moderation, better privacy protection on their platform, and better protection against misuse by third parties, I think this is simply inexcusable.

Aaron Nathans:

You were on a team that conducted the first large-scale study of privacy practices of over-the-top streaming channels. Can you tell us about this research and tell us what is an over-the-top streaming channel?

Arvind Narayanan:

Sure, yeah. This is a study that we did last year, several of us here at Princeton, including graduate students, postdocs, and some of my faculty colleagues. What we were looking at is a trend that many or most of us are now part of, where we have cut the cord, so to speak, and the so-called television that we watch is really streamed over the Internet. We turn our internet connection into something that goes on our television for a 10-foot viewing experience, as they call it, instead of a two-foot viewing experience on our laptop. Many of us use products like Roku; Amazon has one called Fire TV; there’s also Apple TV and others. These devices help turn internet content into a format that looks more like television channels.

But as we might expect for anything that comes over the Internet, these channels are basically apps, and being apps, they are completely full of trackers. A typical channel that we might watch on a Roku TV, for example, might have as many as 50 trackers: many dozens of companies that are collecting records of our watching behavior, that is, what a particular consumer is interested in watching. One immediate reaction that comes to mind is, “This is very creepy.” I was not fully aware of the extent of this before we started doing our research, and mentally, we often tend to think of the home as still a sanctum, a bastion of privacy. That’s not quite the case anymore once we let the Internet into our homes, so that’s one reaction.

But a question that might immediately arise is, “What’s so bad about this? What can go wrong?” One way to ask that question is, “What harm can come to me as an individual because of this tracking?” But a better question is, “What harm can come to society collectively as a result of these products being architected this way?” I think that is a better thing to focus on, and it has a very clear answer.

Let’s put it this way: products like Roku TV are things that we pay for. The consumer didn’t have to be the product, and yet that’s how these companies have chosen to design these platforms. What that means is that their revenue depends on the amount of time that we spend plonked on the couch watching television, and so these software platforms are inevitably going to be designed in a way that maximizes addiction, with the same dynamics that we see on YouTube, those rabbit holes of polarizing content that we keep hearing about in the media, so that users are watching TV for as long as the app developers can get them to keep watching. Ad revenue is proportional to the amount of time that we spend watching television, whereas if we’re simply paying for a hardware product, then the revenue does not depend on how long we spend watching TV.

What this means is that the way in which app developers are going to nudge and shape consumers’ viewing habits is not going to be aligned with the way in which we want these devices to work for us. That, I think, is the fundamental concern here, and that is what I find so problematic about the pervasiveness of trackers on these devices and the fact that they make most of their money through targeted advertising.

Aaron Nathans:

These days, we’ve been spending a lot of time on Zoom, and I know you’ve been critical of Zoom for its privacy and security implications. Why is that?

Arvind Narayanan:

I would say there are two or three main ways in which Zoom has made privacy missteps. One is making it too easy for third parties to violate the privacy of Zoom users, for example through Zoom-bombing, which I think most of us have heard of. That’s the first one.

A second one is the ability that Zoom provides for some users to track other users. Specifically, I mean features like attention tracking, which allow bosses, for example, to track their employees, and in some cases, perhaps, professors to track their students. What is going on here is Zoom putting all of the control in the hands of the administrators, the managers, the bosses, the people who make the corporate decisions about Zoom, and selling out the individuals, the workers, the employees, and others, putting no control in their hands. What attention tracking does is, if you’re in a Zoom meeting and you click away from the Zoom window for more than 30 seconds, it sends a notification to the meeting host, who is typically a boss or an employer or someone like that. This is one example of the kind of creepy tracking feature that Zoom has, and there are many others.

A third way in which I think Zoom has fallen short is using very intrusive techniques to get onto people’s computers and stay there. I think the reason they have these misfeatures is relatively benign: they don’t want users to run into installation trouble and then not be able to use Zoom when it’s time for a meeting. But the way they’ve chosen to go about it exploits features of operating systems in ways similar to how malware operates, and I think it is very concerning that Zoom is willing to cut these corners and potentially compromise the security of the computers it runs on.

These are some examples of privacy and security mishaps that Zoom has had. This is not so different, perhaps, from some other apps that have also gotten into hot water, but I think what makes it particularly problematic in the case of Zoom is that many of us, myself included, have no option but to use Zoom, because it’s a product that depends so strongly on network effects. Especially during the pandemic, as most of us find ourselves working from home, we are essentially forced to use Zoom because our colleagues or our professors or the organizations that we’re part of have made a collective decision to use it. As individuals, we don’t have the ability to say, “No, I don’t want this on my computer. I don’t agree with the privacy and security implications of installing this app.”

I think when an app is in a position like that, when it benefits from strong network effects, there is an additional onus on them to do the right thing when it comes to privacy and security and I think Zoom has, for the most part, not done that. Since the most recent outcry, a couple of months ago, I think they’ve been moving in a more positive direction and so I’m willing to give them the benefit of the doubt for a little bit longer and see if they continue to make privacy and security improvements, but I think it’s a long road and I think they have damaged their reputation quite a bit and the burden of proof is on them to make improvements and show the world that they have done something.

Aaron Nathans:

You’re listening to Cookies, a podcast about technology security and privacy brought to you by the School of Engineering and Applied Science at Princeton University. This is part two of our conversation with Arvind Narayanan, associate professor of computer science. In our next episode, we’ll talk election security with Andrew Appel, a well-known expert in election technology here at Princeton, but for now, let’s listen to the end of our conversation with Arvind Narayanan in which we’ll discuss one group of people who have actually seen social media help their level of personal privacy.

Aaron Nathans:

Overall, is there anything that’s given you heart about privacy and security and technology?

Arvind Narayanan:

Despite the specific privacy and security failures that we’ve discussed over the last half hour or so, listeners might be surprised to hear this, but overall, I’m an optimist when it comes to privacy. The reason for that is history: looking at what happened long before we had these privacy worries with digital technologies.

In fact, as far as I’ve been able to trace, for the last 150 years, every time there was a new technology, every generation thought that they were uniquely the ones witnessing the end of privacy because of the latest technological development. Going all the way back to the invention of photography, people thought that that was the end of privacy because there would be photographers constantly capturing everybody’s every move in public, and of course, we still continue to struggle with those kinds of privacy debates, especially when it comes to facial recognition and the most recent iteration of some of these technological developments: What does it mean for privacy in public spaces? But I think we can take heart from the fact that it’s been 150 years of photography and we have not completely lost privacy in public spaces and we’ve mostly learned to manage that and I’ll come to the specifics of how we’ve learned to do that.

But going back to history again: when the X-ray machine was invented, there was a moral panic around it too. In Victorian England, modest women went into baths fully clothed because they imagined a peeping tom on every street corner with an X-ray machine in his hands, trying to peer through walls. That certainly didn’t happen.

Later on, when the computer was invented, and I’m talking about these bulky devices, these gigantic machines that filled up entire rooms, there were huge concerns about computers as the end of privacy because they were seen as instruments of the state, specifically of a socialist state. That was the big worry at the time, in the ’50s and ’60s: that the state would use computers to catalog and profile its citizens.

Of course, all of these worries are with us today to some extent, but none of them has produced a dystopia, and by and large, we have managed to reap the benefits of computing technology without completely giving up our privacy, managing to put boundaries around it. How have we done that? I think it’s also very instructive to look at that.

One thing that’s happened is that the law has responded to these developments, albeit slowly, maybe not as fast as many of us would like, but nonetheless, I think the law has been an important source of privacy protection. I referred to the worries around photography; it was in fact precisely the concerns around photography in public that led Warren and Brandeis, more than a century ago, to write their seminal article about the right to privacy, which has since influenced court decisions as well as legislation in the United States. A lot of modern privacy law comes from that reaction to the privacy worries that people had at the time.

What are some other examples? We’ve talked about social media a lot, but actually, social media in one very important way has been very helpful for the right to privacy in public, and that is when it comes to paparazzi. Paparazzi have long been very interested in tracking every detail of celebrities’ lives, and that’s not only a violation of privacy but also comes with many other negative aspects, the most famous example of which, of course, is Princess Diana’s car crash. The way in which celebrities have responded is to use social media to take control of their own public persona and to remove, to a large extent, the market for the paparazzi business, because now celebrities can be in direct communication with their fans and don’t need paparazzi to provide that inside look into their lives. In that sense, social media has been a huge benefit for the privacy of at least one group.

What we’ve seen is social norms evolving, we’ve seen markets evolving, we’ve seen the law evolving, and we’ve seen, certainly, many technological privacy protections as well. Today, users have many powerful options, things like the Tor browser that they can install on their computer with a little bit of technical know-how in order to really minimize the number of footprints that they leave online.

Because all of these forces mutually reinforce each other, what we’re seeing over the course of centuries is not a loss of privacy, not an erosion of privacy, but a constant renegotiation of the boundaries between individuals and their peers, governments, and other institutions, and a renegotiation of the features of technology: what are we comfortable with, what do we want to regulate, where should we draw the lines. I think we’re in a constant process of seeing that with digital technology and social media as well. We have not yet reached a new equilibrium, but I think we will reach one that we will be reasonably comfortable with, and not too unhappy with, where we can derive a lot of the benefits of online technology while minimizing the harms.

Aaron Nathans:

Well, this has been really fascinating. Arvind, I appreciate you taking a moment to chat with us about all this fascinating material.

Arvind Narayanan:

This has been really fun. Thank you, Aaron.

Aaron Nathans:

Well, we’ve been speaking with Arvind Narayanan, associate professor of computer science at Princeton. I want to thank Arvind as well as our recording engineer, Dan Kearns. Cookies is a production of the Princeton University School of Engineering and Applied Science. This podcast is available on iTunes and other platforms. Show notes are available at our website. If you get a chance, please leave a review. It helps. The views expressed on this podcast do not necessarily reflect those of Princeton University. I’m Aaron Nathans, digital media editor at Princeton Engineering. Watch your feed for another episode of Cookies soon, when we’ll discuss another aspect of tech security and privacy. Peace.



Related Departments and Centers

  • Computer Science


  • Center for Information Technology Policy