Princeton’s Center for Information Technology Policy (CITP), a joint venture of the School of Engineering and Applied Science and the School of Public and International Affairs, is marking 15 years of addressing society’s use of digital technologies through interdisciplinary research, teaching, and engagement with policymakers.

This year, Professor of Sociology Matthew Salganik began a term as the center’s director after serving in the role in an interim capacity. Salganik, who joined the Princeton faculty in 2007, has pioneered uses of big data and digital technologies in social research — including co-leading a recent mass collaboration that pointed to the limits of machine learning for predicting people’s life outcomes.

Here, Salganik discusses CITP’s priorities, the progress of its new tech policy clinic, and opportunities for students from a variety of backgrounds to take on challenges at the intersection of technology and society. 

Q. What do you see as the main goals of CITP?

A. Our work focuses on understanding and improving the relationship between technology and society. We take both of those verbs very seriously. In terms of understanding, we do research that advances the state of the art. In terms of improving, we do a lot of direct engagement with policymakers — taking our research out into the world, and bringing the problems of the world into the center so that our research can help address important policy challenges.

Our research covers topics like privacy and security, cryptocurrency and blockchain, online misinformation, and fairness in machine learning. The impact of technology on society is something that changes a lot and is multidimensional, so we also think it’s very important for our center to be interdisciplinary. CITP was started by a computer scientist, Ed Felten, and it comes with a lot of technical expertise, but we also engage a lot with social scientists and, increasingly, people in the humanities.

Q. How is CITP engaging with policymakers?

A. Our biggest new program is the tech policy clinic, which launched in spring 2019. It’s led by Mihir Kshirsagar, who worked as a lawyer and then as a prosecutor at the New York State Attorney General’s office. The clinic is loosely modeled after a law school clinic. It takes our research, ideas and people, and gets them out into the world in the right way, at the right place, at the right time. Many of the engagements that the clinic creates actually become sparks for teaching and research.

I think the best example is the workshop we did last fall with the Federal Trade Commission. Several members of the CITP community, including faculty members Jonathan Mayer and Arvind Narayanan, have done research on online dark patterns. These are potentially deceptive web design practices, found mainly on shopping websites. Journalists had reported on them, but there wasn’t yet a clear sense of how widespread these patterns were.

Several CITP researchers were able to partially automate the process of looking for dark patterns, which allowed them to study the phenomenon at a scale that hadn’t been possible before. Mihir has connections with people at the Federal Trade Commission, and we held a convening with them at Princeton. We discussed how the research at CITP and elsewhere could help the FTC fulfill its consumer protection mission. Critically, the people from the FTC also told us about problems that they’re seeing out in the world, which is incredibly generative for our research.
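
As an illustration of what "partially automating" that search can look like, here is a minimal, hypothetical sketch: fetch a page and flag text that matches a few common dark-pattern phrasings, such as false urgency or countdown timers. This is not the CITP team's actual pipeline; the signal list, the example URL and the flag_dark_patterns helper are all illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the CITP researchers' method):
# fetch a shopping page and flag text matching common dark-pattern phrasings.
import re
import requests  # assumes the requests library is installed

# Hypothetical signals; real studies use far richer detection methods.
DARK_PATTERN_SIGNALS = {
    "false urgency": re.compile(r"only \d+ left", re.IGNORECASE),
    "countdown timer": re.compile(r"offer ends in \d+:\d+", re.IGNORECASE),
    "social proof nudge": re.compile(r"\d+ people are viewing", re.IGNORECASE),
}

def flag_dark_patterns(url: str) -> list[str]:
    """Fetch a page and return the names of any signals found in its HTML."""
    html = requests.get(url, timeout=10).text
    return [name for name, pattern in DARK_PATTERN_SIGNALS.items()
            if pattern.search(html)]

if __name__ == "__main__":
    # Example usage with a placeholder URL.
    print(flag_dark_patterns("https://example.com/product/123"))
```

Running a detector like this over many crawled pages is what makes it possible to estimate how widespread such practices are, rather than relying on individual reports.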

Q. What else are you doing through the clinic?

A. We also started a case studies series, where various outside groups come to CITP and present their problems to us [virtually since March 2020]. Our researchers ask questions and work on the problems, and then a week later present some ideas back to them, which we hope will get put into practice. Last year we had guests from the Wikimedia Foundation, The New York Times and the Federal Election Commission; one of the FEC’s commissioners came to us to explore how to combat disinformation and dark patterns related to voting.

These case study events are open to students, which can give them a chance to find projects to work on. Participants have included undergrads, grad students, postdocs, faculty and staff. And these problems require a mix of skills and a mix of perspectives, so we do have computer science students, we have students from the social sciences, and we have students from the humanities as well.

Q. Are there other opportunities for undergraduates who want to get involved in tech policy?

A. Absolutely. This summer we started a new internship program called the Public Interest Technology Summer Fellowship, which is open to Princeton students [rising juniors and seniors] and students at other universities. The goal is to match undergraduate technologists with government agencies that want to hire technologists.

A lot of students are interested in exploring this kind of work, but they have trouble finding the opportunities that do exist, because unlike big tech companies, many government agencies don’t have sophisticated recruiting operations. Likewise, many of these agencies want to find these students but don’t really know how. So we try to solve that matchmaking problem.

This year, despite the pandemic, we were able to place nine students to work remotely at three organizations: the Consumer Financial Protection Bureau, the Federal Trade Commission, and the New York City Mayor’s office. This program is a great opportunity for us to serve the larger tech policy community, and also expand the diversity of people working in tech policy.

Q. What about teaching? How do CITP courses integrate teaching and research?

A. Teaching and research often go hand-in-hand at CITP. One example of this is a new interdisciplinary graduate seminar that I’m teaching this fall with computer scientist Arvind Narayanan. Our course, which is called “Limits to Prediction,” is motivated by the belief, held in some parts of computer science and some parts of the policy world, that with enough data and the right algorithms, everything becomes predictable. Many policymakers, I think with good intentions, want to improve decision-making by taking advantage of big data and algorithms. But I think there are a lot of deep, open questions about the extent to which things are predictable in social systems, natural systems and engineered systems.

This course grew directly out of the research that Arvind and I are each doing. For me, the course builds on a project I co-organized called the Fragile Families Challenge, a scientific mass collaboration involving hundreds of researchers that was focused on measuring the predictability of life outcomes. Arvind was involved in that project too, and we’ve spent a lot of time puzzling over the results. Arvind has also been thinking about similar issues, but from a computer science perspective, as part of his research on fairness in machine learning. Over time, we both realized that these questions about the limits to predictability come up in different ways in the social sciences and in computer science, and we thought that these approaches might be fruitfully combined.
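
As a concrete illustration of what "measuring the predictability of life outcomes" can mean, here is a minimal sketch using synthetic data rather than the actual Fragile Families data: fit a simple model and compare its holdout R² against a baseline that always predicts the average. The features, outcome and model choice below are illustrative assumptions, not the Challenge's methodology.

```python
# Minimal sketch of quantifying predictability (synthetic data, illustrative only):
# compare a simple model's holdout R^2 against a predict-the-mean baseline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_features = 2000, 50

# Synthetic features and a noisy outcome standing in for survey data.
X = rng.normal(size=(n_people, n_features))
true_weights = rng.normal(size=n_features) * 0.1
y = X @ true_weights + rng.normal(scale=1.0, size=n_people)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)
model_r2 = r2_score(y_test, model.predict(X_test))

# Baseline: predict the training-set mean for everyone.
baseline_r2 = r2_score(y_test, np.full_like(y_test, y_train.mean()))

print(f"model R^2:    {model_r2:.3f}")
print(f"baseline R^2: {baseline_r2:.3f}")
# A small gap between model and baseline suggests the outcome is hard to
# predict from these features: one way to talk about limits to prediction.
```

The interesting scientific question is not the score itself but why the gap between model and baseline is as small (or large) as it is, which is the kind of question the course takes up across domains.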

For the class, which is mostly graduate students in computer science and sociology, we have readings covering a wide range of domains, from individual life trajectories, to computer vision, to weather, to armed conflict. Through this series of case studies, we hope to build a better understanding of real, fundamental limits to prediction and where they come from. Ultimately, I hope this will help change the way sociologists think about social systems, and I hope that we can provide guidance to policymakers. It may turn out that the use of algorithms is quite beneficial in certain kinds of settings with certain kinds of safeguards, in which case it’s a challenge for us as researchers to figure out: What are those settings and what are those safeguards?

This course is a good example of the kind of thing CITP does. It’s interdisciplinary, it’s deeply technical, it’s motivated by real problems in the world, and it’s part of training a new generation of people to go out in the world and find the right balance between technological capabilities and social goals.

 
