When Brad Smith ’81, the president of Microsoft, spoke on campus this spring, he posed provocative and important questions: “Will we ensure that machines remain accountable to people? Will we ensure that the people who design these machines remain accountable to other people?”
Brad was speaking in the context of artificial intelligence (AI) and its many promises and potential pitfalls, but his questions apply equally to most computing and communications technologies. How do we ensure that the devices, networks, and algorithms we create truly serve humanity? How do we build in, at the very core, safeguards for privacy, security, and fairness, so that even users with no technical background have assurances of safety and control?
These are giant questions, far broader than any single institution can answer. But many Princeton faculty members and students are focused on these problems, and we feature a sampling in this magazine. Among many examples, Princeton’s Center for Information Technology Policy recently teamed up with the University Center for Human Values to create “Princeton Dialogues on AI and Ethics,” a series of conferences and case studies to help guide practitioners and policymakers.
In 2014, Supreme Court Justice Sonia Sotomayor ’76 gave a speech on campus that inspired Princeton to change its informal motto to “In the nation’s service and the service of humanity.” Her emphasis on serving people, rather than national (or corporate) constructs, seems particularly relevant in addressing the sweeping impacts of technology. I believe we must bring together diverse strengths and expertise — from engineers to humanists, from many personal backgrounds — to ensure that technology serves people.
What do you see as the greatest opportunities and risks that emerging technologies bring to society? Please write us at email@example.com or join us on social media.
Gerhard R. Andlinger Professor in Energy and the Environment
Professor of Mechanical and Aerospace Engineering and Applied and Computational Mathematics