While we’re using electronic gadgets, apps, platforms and websites, they are often using us as well, including tracking our personal data. The premiere episode of our new podcast features Arvind Narayanan, associate professor of computer science here at the Princeton University School of Engineering and Applied Science.

He is a widely recognized expert in the area of information privacy and fairness in machine learning. This conversation was so good, we split it into two episodes. This is the first half of our conversation.  

Arvind Narayanan

In this half, he discusses “cross-device tracking,” in which one electronic device (say, your work laptop) sends you ads based on your browsing activity on another device (say, your mobile phone). He talks about which web browsers are more likely to allow third-party trackers to record your activity. And he talks about steps you can take to protect yourself against these trackers.  To listen to the second part of the conversation, click this link.

Subscribe: iTunes · Spotify · Google Podcasts · Stitcher · RSS Feed

Links: 

“Online Tracking: A 1-Million-Site Measurement and Analysis,” Princeton Web Census. 

“Deconstructing Google’s Excuses on Tracking Protection,” Freedom to Tinker, August 23, 2019. 

Arvind Narayanan’s Princeton homepage.

Transcript:

Aaron Nathans (host): 

From the Princeton University School of Engineering and Applied Science, this is Cookies, a podcast about technology security and privacy. On this podcast, we’ll discuss how technology has transformed our lives, from the way we connect with each other, to the way we shop, work, and consume entertainment, and we’ll discuss some of the hidden trade-offs we make, as we take advantage of these new tools. Cookies, as you know, can be a tasty snack, but they can also be something that takes your data. 

Our first guest, Arvind Narayanan, knows these trade-offs well. He’s an Associate Professor of Computer Science here at Princeton, and a widely recognized expert in the area of information privacy, and fairness in machine learning. This is part one of our conversation. Let’s get started. Arvind, welcome to the podcast. 

Arvind Narayanan: 

Thank you, Aaron. It’s great to be on the podcast. Thank you for having me. 

Aaron: 

I’ve heard it said that when we consume free content online, whether it’s on social media or a news site, it’s free because the product is us. Is that true? 

Arvind Narayanan: 

Huh, I would say that that’s mostly true. Here’s how I would put it. Here’s how consumer technology has evolved, and this might sound a little bit cynical, as if there is some evil genius pulling the strings, but I don’t think that’s true. I think what we’re seeing is companies going in the direction that their business incentives are taking them. So I think there are two or three important things to understand. One is that tech products are designed to provide addictive experiences, and the reason for this is to maximize the amount of time that users spend on these products. So if you go on YouTube, you’re going to get recommendations of videos that are designed in such a way that you want to keep watching one more video, to maximize the amount of time that you spend on YouTube. Every other social media platform or tech product is going to be designed in a similar way. So that’s number one. 

Number two is the tracking of our personal data. There is a philosophy in Silicon Valley of, “Collect their data first, and then we’ll figure out what to do with it later.” So maximizing the amount of tracking, the amount of information collected about users, turns out to be useful in a number of ways, some of which (are) anticipated in advance by these companies, some of which, not necessarily. 

And the third aspect, and this is probably the most problematic one, is behavioral manipulation. And there’s a good source for understanding this in much more detail, the book, The Age of Surveillance Capitalism, by Shoshana Zuboff. But to put it very briefly, here’s the reason why manipulation is so effective. Companies will often tell you that they are catering their ads, in order to give you what you want. If you’re looking for a vacation, for example, then you will get tailored vacation packages that might be customized to your needs and preferences. 

However, if you think about what makes most commercial sense from the point of view of an advertiser, from the point of view of a tech platform, it’s actually far more profitable to influence our tastes, and desires, and preferences in such a way that over the long term, we adjust our shopping patterns, our consumption patterns in a way that the long-term revenue extracted from a consumer is maximized. And here’s where it gets most dangerous. The way to maximize long-term revenue extraction might not be in a way that simply caters to a user’s existing needs, it might be in a way that pushes their behavior in a way that is potentially harmful to them. 

So I think that’s the gist of what is happening when people say the product is us. It’s the cycle of addictive experiences, of harvesting of our personal data, and the use of that personal data for manipulation of our online shopping behavior, let’s say. And I said this is mostly true, and this is mostly true in the sense that it’s particularly true of the free products that we consume, but it is also true of the paid products we consume because just because you are subscribing to a product, doesn’t mean that the service provider cannot also play all of these tricks on you in order to manipulate your online viewing and shopping behaviors, in such a way as to maximize long-term revenue extraction. So that is the somewhat dystopian reality that I think we find ourselves in. 

Aaron: 

I remember recently being on line at a Burger King, literally on line, waiting for my turn at the register, and I was reading my phone, and up popped an ad for Burger King. That was odd. Sometimes I’m watching TV, and up comes an ad for the very thing that I was shopping for online an hour ago, on a different device. Sometimes I’m in my car listening to satellite radio and based on the ads, the radio seems to know me all too well. These can’t all be coincidences, right? 

Arvind Narayanan: 

That’s right. For the most part, these are not coincidences. So let me perhaps talk about some of the technologies that make these types of targeted ads possible. One is when you’re shopping in a store, it could be plain old geolocation technology on your phone, or perhaps something like Bluetooth beacons that allow apps on your phone to figure out not just where you are roughly, but also exactly which store you might be shopping in. That’s one. 

Another is called cross-device tracking. Cross-device tracking is about connecting the profiles of users between the different devices that they might use, a phone that you might have, a laptop that you might use, a computer that you use at work, and even perhaps a television that you just think of as a device that receives data, but is also in fact, transmitting data. So putting together all of these devices allows marketing companies to create much more comprehensive profiles, much more comprehensive dossiers, if you will, of users. And the way cross-device tracking works, sometimes it might be in a straightforward way, if you’ve logged into your Google account on both your phone and your laptop, then of course, Google can link the two, but sometimes it might be in much more subtle ways. It might be because a user is visiting a similar set of websites during similar periods of time, on two different devices. So statistically speaking, that creates a link between those two devices. 
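
To make the statistical linking Arvind describes concrete, here is a minimal sketch of how a tracker might score whether two devices belong to the same person by comparing which sites they visit at which hours. The domains, timestamps, and similarity measure are made up for illustration; this is not any company’s actual system.

```python
# Illustrative sketch only: scores how similar two devices' browsing patterns
# are, the statistical signal described for cross-device tracking.
# Real tracking systems are far more elaborate than this.
from datetime import datetime

def browsing_signature(events):
    """Reduce a device's browsing log to a set of (domain, hour-of-day) pairs."""
    return {(domain, timestamp.hour) for domain, timestamp in events}

def link_score(events_a, events_b):
    """Jaccard similarity of the two signatures; higher suggests the same user."""
    a, b = browsing_signature(events_a), browsing_signature(events_b)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical logs: same sites at the same times of day on two devices.
phone = [("news.example", datetime(2020, 3, 2, 8)),
         ("recipes.example", datetime(2020, 3, 2, 21))]
laptop = [("news.example", datetime(2020, 3, 3, 8)),
          ("recipes.example", datetime(2020, 3, 3, 21))]

print(link_score(phone, laptop))  # 1.0 -> strong hint these are the same person
```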

What else? There are even more intrusive technologies being developed, or at least trialed, in order to create these links. One of the most interesting ones… And I’ve got to say, sometimes when I hear about these, I have to sit back and chuckle at the ingenuity of the people who came up with them, but they’re also deeply worrying, I think. So we have things like ultrasound beacons, as they’re called, being embedded in television advertisements. And this one, happily, is not widely deployed yet, but there is technology for it. And how this works is, you might be listening to a Burger King ad on the television, but what you might not realize is that the audio of that ad contains ultrasound signals that are not audible to the human ear, perhaps they’re audible to dogs or other pets, I don’t know, high-frequency sounds. And there might be an app on your phone that’s listening for a particular pattern from the background, and specifically from the ad that shows up on your television. 

And so, what that allows your phone app to do is to recognize that you viewed a particular ad on the television, and that allows these advertising companies to do more accurate analytics by understanding which consumer has watched which ad, and perhaps also use that to beef up the profile of the user, the dossier of the user. So that’s the purpose for which they’ve created these technologies, but really, at what cost? What sorts of ultrasound and other technologies for tracking our behaviors are we going to allow into our homes? 
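
Here is a minimal sketch of the idea behind ultrasound beacons: a near-ultrasonic tone is mixed quietly into an ad’s audio, and an app checks whether that frequency stands out in what the microphone picks up. The 19 kHz frequency, amplitudes, and detection threshold below are assumptions for illustration, not any vendor’s actual implementation.

```python
# Illustrative sketch only: detecting a near-ultrasonic beacon hidden in
# an ad's audio. All frequencies and thresholds are assumptions.
import numpy as np

SAMPLE_RATE = 44100        # samples per second, typical for consumer audio
BEACON_FREQ = 19000        # Hz: inaudible to most adults, within speaker range

def make_ad_audio(duration=1.0, with_beacon=True):
    """Simulate one second of ad audio: background noise plus an optional quiet beacon."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    audio = 0.1 * np.random.randn(t.size)               # stand-in for ordinary ad sound
    if with_beacon:
        audio += 0.02 * np.sin(2 * np.pi * BEACON_FREQ * t)
    return audio

def beacon_present(audio, threshold=5.0):
    """Check whether energy near BEACON_FREQ stands out from the noise floor."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(audio.size, d=1.0 / SAMPLE_RATE)
    band = (freqs > BEACON_FREQ - 100) & (freqs < BEACON_FREQ + 100)
    return spectrum[band].max() > threshold * np.median(spectrum)

print(beacon_present(make_ad_audio(with_beacon=True)))   # True
print(beacon_present(make_ad_audio(with_beacon=False)))  # False
```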

Aaron: 

But we allow it anyway. I mean, at a certain level, we sort of know that they’re watching us, right? I mean, we’ve sort of made a trade-off. 

Arvind Narayanan: 

At a certain level, we know that they’re watching us, but I think the specifics change so quickly, it’s really hard for anyone, including so-called experts like me, to keep up with what is going on in terms of these tracking technologies. Yes, you can say we allow it in a sense, but there is a way in which technically, from a legal perspective, we might have control, but only in the sense that we clicked on an “I Agree” button when we signed up for an app. So in practice, it’s unclear if we have much control at all, and that’s a deep topic that we can perhaps get into a little bit later on this podcast. 

Aaron: 

So, why do these companies want to track our information? After they collect it, what’s happening to it? 

Arvind Narayanan: 

Huh, I think researchers are still trying to figure that out, researchers, and investigative journalists, and others. But I think at a high level, we have a pretty good understanding of why companies are tracking us and what’s happening to that information. One type of use to which this information could be put, is genuinely to improve these technology products and to make them work better for us. There are certain personalized features that might make Netflix work better for you or me, so that’s one reason to track our information. So here, it’s important to make a distinction between our information being tracked by the very company that’s providing us a service. For example, when you’re watching Netflix, again, to use the same example, it’s obvious that Netflix is tracking what we’re watching, and personalizing our recommendations. That seems relatively unobjectionable, partly because it’s so transparent that that’s what’s going on. 

But another type of tracking that is much more concerning, and which is what I personally am interested in studying a lot more, is who are the companies that we’re not even aware of interacting with, that are collecting our information? And so, when it comes to those companies, one reason could be targeted advertising. And again, we’ve talked a little bit about how companies will frame targeted advertising as something that is pro-consumer, and sometimes it can be, but other times, it can be deeply problematic, it can be manipulative, it can trigger addictive behaviors in people, for example, gambling, alcohol, various other things. It can be problematic on a political level, because of the ability for political actors to hijack those targeted advertising systems for ends that we may not be able to easily anticipate. 

So targeted advertising is one category, and then there is a variety of other uses that are less well understood. There are companies using this kind of tracking information for providing reputation and scoring services. We hear a lot of this coming out of, for example, China and the social credit system, but there are smaller versions of it happening in the United States and other countries as well, where you might apply for a loan, and one of the sources of information that your bank uses is your behavior on social media, or your insurance premiums might be adjusted based on this information, et cetera. I don’t think this is very pervasive, but it is deeply problematic, and we don’t have a lot of transparency into when and where this is happening. 

Aaron: 

You’re listening to Cookies, a podcast about technology security and privacy, brought to you by the School of Engineering and Applied Science at Princeton University. Our guest is Arvind Narayanan, Associate Professor of Computer Science. This is part one of our conversation. We’ll bring you part two in a future episode of the podcast. Let’s get back to our conversation, in which Arvind will walk us through how to take some specific steps to guard our privacy. 

Aaron: 

So is there anything we, as consumers, can do to protect ourselves against these practices? 

Arvind Narayanan: 

Oh, for sure. There are things that we can do as individuals, and there are things we can do collectively as a society, and those are both important, I think, and those two responses are going to be different, so let’s talk about both of those. As individuals, let’s start with the fact that most technology products that you use do give you quite a bit of control over the amount of tracking that happens. It does require a time investment, and it’s not equally easy for everybody. Some of us have more of the luxury of time, as well as the technical savvy, to be able to figure it out and adjust these controls accordingly, but I think all of us can try to work this into our schedules. 

One thing that I try to do for myself, and that might be an interesting strategy for many listeners to try out, is to set aside half an hour or an hour on a monthly basis, just like cleaning the house, to take a look at all of the tech products, and apps, and websites and other things that we’re using. Many of these services have privacy checkup or similar features these days, so you can take a quick look at what is being tracked, what the policies around the use of that information are, and how you can use the privacy controls that these apps provide to lock down some of that tracking. So that’s one thing we can do. 

A second thing we can do, is just be more judicious in our selection of software, and hardware, and apps, and other products. We’re already used to the idea that when we’re buying a new product, we look up reviews, or when we’re downloading an app, we look up reviews about how well it works, how fast it is, et cetera, and I think understanding the privacy implications should also be a part of that app selection process. 

For example, if we look at web browsers, which are among the most prominent pieces of software that we all use, there are major differences in the privacy protections that various browsers offer. Chrome, for example, is built by Google, and perhaps unsurprisingly, given that Google is probably the number one online tracking company, Chrome is, I think, the only major browser that doesn’t make any effort to block these third-party trackers, at least out of the box. There are some extensions that you can use on Chrome, but there have also been some concerning developments about Chrome taking away some of the ability of those extensions to block third-party tracking, whereas Firefox, Safari, and Edge have a better track record when it comes to blocking third-party trackers, and there are even browsers like Brave that make privacy and security a core part of their pitch to users. So I think that’s a useful piece of information to have, something that listeners might want to keep in mind the next time they are considering switching browsers, and a similar thing goes for every kind of app or product that we use. 

So those are all things we can do individually, there are also things I think we can do collectively. There are a number of nonprofit organizations that fight for our privacy, starting from very general ones like the American Civil Liberties Union, that have a number of things that they fight for, privacy being one of them, and more specific ones like the Electronic Frontier Foundation, that are more focused on tech issues. I think donating to some of these organizations can be a good way of pushing back against the huge power of the technology industry, and the lobbying that they do against privacy regulation. 

And finally, being politically active: supporting political causes, movements, and specific candidates that are aligned with privacy, and pressuring your lawmakers for more privacy-friendly regulation and legislation. Those are all things that we can do collectively, and I think those are necessary in these interesting times, when digital privacy and democracy seem to be becoming more and more linked with each other. 

Aaron: 

So could you walk us through a specific example of a way that you can take a specific action on a specific app or website to increase your privacy? 

Arvind Narayanan: 

Oh, for sure. Let’s do this in real time. Let me pull up my phone here, it’s an Android phone, and Firefox is an app that I use a lot. And really, you can do a similar thing for virtually any app. So I’m looking at Firefox, I click on the context menu on the top right, and I see Settings. Usually, for almost any app, you’re going to see an item called Settings, or Preferences, or Privacy, something like that. So I click on Settings, and under Settings, I have a number of categories: General, Search, and then after that is Privacy. So when I go to Privacy, I have several specific privacy options here. I have something called Do Not Track, I have something called Tracking Protection, I have something called Cookies. And it takes only a minute, I would say, to click on those options, understand what they do, and figure out which options are more privacy protecting. For example, I have enabled Tracking Protection here. There are also other options: tracking protection could be disabled, or it could be enabled only in private browsing mode. 

And then under Cookies, what I’ve done is select the setting “Enabled, excluding tracking cookies.” So normal cookies will be enabled, so it won’t break the functionality of most websites, but cookies that are used for tracking our searches and other activities from one website to another, and for compiling these databases of our online activities, those cookies will be disabled by Firefox. So those are the privacy settings in Firefox, and you may have similar options in most of the apps that you use. 
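
For a sense of what a setting like “Enabled, excluding tracking cookies” amounts to, here is a minimal sketch of the underlying policy: ordinary first-party cookies are kept so sites keep working, while cookies set by known trackers in third-party contexts are rejected. The tracker list and domains below are made up for illustration; this is not Firefox’s actual code.

```python
# Illustrative sketch only (not Firefox's implementation): the policy behind an
# "Enabled, excluding tracking cookies" setting. The tracker list is a made-up
# stand-in for the real blocklists browsers use.
KNOWN_TRACKERS = {"tracker.example", "ads.example"}   # assumption for the example

def accept_cookie(page_domain, cookie_domain):
    """Decide whether to store a cookie from cookie_domain while visiting page_domain."""
    first_party = cookie_domain == page_domain
    if first_party:
        return True                                   # normal site functionality keeps working
    return cookie_domain not in KNOWN_TRACKERS        # reject third-party tracking cookies

print(accept_cookie("news.example", "news.example"))      # True: first-party cookie
print(accept_cookie("news.example", "tracker.example"))   # False: known tracking cookie
print(accept_cookie("news.example", "cdn.example"))       # True: third-party, but not a known tracker
```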

Aaron: 

You can do that on your desktop as well? 

Arvind Narayanan: 

I can do that on my desktop as well, exactly. And usually on desktop apps, you’re going to have perhaps a few more options than you have in mobile apps, just because, user-interface-wise, it’s easier to give users more choices on desktop apps. 

Aaron: 

Is there any government regulation in this area now? What kind of regulation is necessary to protect consumers, that we don’t currently have? 

Arvind Narayanan: 

Sure. The main piece of government regulation when it comes to online privacy and tracking is the European General Data Protection Regulation, and it’s very comprehensive, it does a lot of things, but let’s perhaps not go into too much detail. When it comes to the U.S., we don’t really have a comprehensive federal data privacy law. We have sectoral laws like HIPAA, the health privacy law, and others for banking and other specific sectors, but those laws don’t do much to prevent the kind of tracking we’re talking about, or even give users many options about it, where it’s almost every app that you use on your phone or your computer. And so for that, there are some state-level privacy laws. California has a couple, Illinois has a very specific Biometric Information Privacy Act, but for the most part, U.S. privacy regulation is very much lagging behind. 

I would say, what do we need here? So here are a couple of things. This is not some sort of perfect, comprehensively thought-out plan, but there is definitely some low-hanging fruit that can be picked with well-designed privacy legislation and regulation. One is that, currently, privacy law, at least in the U.S., works on the principle of informed consent. So if an app can track me, it’s because I clicked on something when I installed the app or at some other point, or because, implicitly, by using the app I’m signaling my understanding and acceptance of the app’s privacy policy. 

This is, of course, a legal fiction. This is not how things work in reality, we all know that. I think this is fixable. I think we can insist on more affirmative notions of consent. And similarly, when apps are using deceptive techniques to make it harder for us to understand the privacy implications, I think regulators can crack down on that. Some of this will require Congress to act, but some of this will only require enforcement agencies, like the Federal Trade Commission, to step up and go after some of these manipulative and deceptive apps. 

So those are things that could happen. Then I think we need more transparency. Currently, it requires Ph.D.s. I’ve had many phenomenal Ph.D. students who’ve done a lot of the research that it took to really expose some of these tracking activities. There is a whole subfield of computer science and academia that looks at some of these privacy violations and seeks to bring them to light. There are many investigative journalists who also do this, and other nonprofit organizations. It shouldn’t require that. I think there should be more laws compelling companies to be more transparent about what happens to our data, and when our personal data is used to make a decision about us, whether it’s about loans, insurance, or even something relatively frivolous like online advertising, I think companies should be required to give much clearer explanations about how that decision was made, and how our personal data was used for that purpose. 

Aaron: 

Are there companies that are better actors than others when it comes to the use of our information? 

Arvind Narayanan: 

That’s certainly true. There are many types of differences between companies. Some companies are, for example, more careful about data breaches and getting hacked, and so on, but there’s another important difference, which is simply how well the company’s business model aligns with what we might want from a privacy perspective. For a company whose business model is primarily based on targeted advertising, I think there is only so much they can do, even if the individuals working in those companies want to be good actors. And I think this is the critical difference between some companies and others, and it’s been really good to see that Apple, for example, is perhaps somewhat unique among the tech giants in not deriving most of its revenue from targeted advertising, and has therefore found privacy to be well-aligned with its business model. And also, I should give them credit. It’s not simply companies following their incentives; it also takes some amount of leadership from the CEO and others at a company. I should give them credit for making privacy a key part of their product design. 

And there are other smaller companies. Again, we’ve talked about Firefox, which is made by Mozilla, a nonprofit whose mission also aligns well with protecting privacy. So I think that’s the key difference I would look for: how well does the business model, the fundamental way in which the company makes money, align with not collecting and monetizing consumers’ data? 

Aaron: 

Well, we’ve been speaking with Arvind Narayanan, Associate Professor of Computer Science at Princeton. We’ll bring you part two in a future episode of the podcast. I want to thank Arvind, as well as our recording engineer, Dan Kearns. 

Cookies is a production of the Princeton University School of Engineering and Applied Science. This podcast is available on iTunes and other platforms. Show notes are available at our website, engineering.princeton.edu. If you get a chance, please leave a review, it helps. 

The views expressed on this podcast do not necessarily reflect those of Princeton University. I’m Aaron Nathans, Digital Media Editor at Princeton Engineering. Watch your feed for another episode of Cookies soon. Peace. 

 
