In crowd wisdom, ‘surprisingly popular’ may be more accurate than most popular

Portrait of Sebastian Seung. Photo by nadelant.

Crowd wisdom, such as that gathered through online voting, is popularly assumed to provide better answers than any one person could by aggregating multiple perspectives. Democratic methods, however, tend to favor the most popular information, not necessarily the most correct. The ignorance of the masses can cancel out a knowledgeable minority with specialized information about a topic, so that the wrong answer becomes the most widely accepted one.

To give more weight to correct information that may not be widely known, researchers from Princeton University and the Massachusetts Institute of Technology have developed what they call the “surprisingly popular” algorithm. Reported in the journal Nature Jan. 25, the technique hinges on asking people two things about a given question: What do they think the right answer is, and how popular do they think each answer will be?

The correct answer, the researchers report, is the one that proves more popular than people predicted. The technique could refine wisdom-of-crowds surveys, which are used in political and economic forecasting, as well as in many other collective activities, from pricing artwork to grading scientific research proposals.

The researchers tested their algorithm through multiple surveys conducted on various populations. In one test, they asked people a yes-or-no question: “Is Philadelphia the capital of Pennsylvania?” Respondents also were asked to predict the prevalence of “yes” votes. Because Philadelphia is a “large, historically significant city,” most people in the group thought that, yes, it is the capital of Pennsylvania, even though Harrisburg is in fact the state’s capital. In addition, the people who mistakenly thought Philadelphia is the state capital also predicted that a very high percentage of people would answer “yes.”

Meanwhile, a certain number of respondents knew that the correct answer is “no.” But these people also anticipated that many others would incorrectly think the capital is Philadelphia, so they, too, expected a very high percentage of “yes” answers. Thus, almost everyone expected other people to answer “yes,” but the actual percentage who did was significantly lower. “No” was the surprisingly popular answer because it was given far more often than the crowd predicted.
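The decision rule described above can be summarized in a short sketch. The Python snippet below is only an illustration, with made-up vote counts and predictions for the Philadelphia question; it is not the authors’ code or data.

```python
# Minimal sketch of the "surprisingly popular" rule for a yes/no question.
# The numbers below are hypothetical, not taken from the study.

def surprisingly_popular(answers, predicted_yes_shares):
    """Pick the answer whose actual vote share most exceeds its predicted share.

    answers              -- list of "yes"/"no" responses
    predicted_yes_shares -- each respondent's forecast of the fraction answering "yes"
    """
    actual_yes = sum(a == "yes" for a in answers) / len(answers)
    predicted_yes = sum(predicted_yes_shares) / len(predicted_yes_shares)

    # "Surprise" for each option: how much its actual share beats its predicted share.
    surprise_yes = actual_yes - predicted_yes
    surprise_no = (1 - actual_yes) - (1 - predicted_yes)
    return "yes" if surprise_yes > surprise_no else "no"

# Hypothetical poll on "Is Philadelphia the capital of Pennsylvania?":
# 65 of 100 people answer "yes", but nearly everyone predicts about 80% "yes" votes.
answers = ["yes"] * 65 + ["no"] * 35
predictions = [0.80] * 100

print(surprisingly_popular(answers, predictions))  # prints "no", the surprisingly popular answer
```

In this hypothetical poll, only 65 percent answer “yes” against an expected 80 percent, so “no” exceeds expectations and wins under the SP rule even though it is the minority answer.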

Sebastian Seung, Princeton’s Evnin Professor in Neuroscience and a professor of computer science and the Princeton Neuroscience Institute, said that the surprisingly popular, or SP, method is still democratic because it makes no assumption about who holds specialized information, only that the information exists somewhere in the crowd. Seung added that the researchers’ work was published 110 years after Nature published the seminal paper on crowd wisdom, Sir Francis Galton’s 1907 study “Vox Populi.”

“The SP method is elitist in the sense that it tries to identify those who have expert knowledge,” Seung said. “However, it is democratic in the sense that potentially anyone could be identified as an expert. The method does not look at anyone’s resume or academic degrees.”

The researchers developed their method mathematically, then applied it through surveys of multiple groups of people on topics including U.S. state capitals, general knowledge, medical diagnoses and art auction estimates.

Across all topics, the researchers found that the “surprisingly popular” algorithm reduced errors by 21.3 percent compared to simple majority votes, and by 24.2 percent compared to basic confidence-weighted votes (where people express how confident they are in their answers). It also reduced errors by 22.2 percent compared to answers with the highest average confidence levels. On the 50 test questions related to state capitals – such as the Harrisburg-Philadelphia question – the SP method reduced incorrect decisions by 48 percent compared to the majority vote.
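To make those baselines concrete, the sketch below tallies a simple majority vote and a basic confidence-weighted vote alongside the SP rule for a single hypothetical yes/no question. All vote counts, confidences and predictions are invented for illustration; they are not figures from the paper.

```python
# Illustrative comparison of three aggregation rules on one yes/no question.
# All data are invented; this does not reproduce the paper's experiments or results.

def majority_vote(answers):
    """Pick whichever answer was given most often."""
    yes = sum(a == "yes" for a in answers)
    return "yes" if yes > len(answers) - yes else "no"

def confidence_weighted_vote(answers, confidences):
    """Weight each vote by the respondent's self-reported confidence (0 to 1)."""
    yes_weight = sum(c for a, c in zip(answers, confidences) if a == "yes")
    no_weight = sum(c for a, c in zip(answers, confidences) if a == "no")
    return "yes" if yes_weight > no_weight else "no"

def surprisingly_popular(answers, predicted_yes_shares):
    """SP rule from the earlier sketch: pick the answer that beats its predicted share."""
    actual_yes = sum(a == "yes" for a in answers) / len(answers)
    predicted_yes = sum(predicted_yes_shares) / len(predicted_yes_shares)
    return "yes" if actual_yes > predicted_yes else "no"

# Hypothetical poll: a misinformed majority is fairly confident, while a
# knowledgeable minority is very confident but expects to be outvoted.
answers = ["yes"] * 65 + ["no"] * 35
confidences = [0.7] * 65 + [0.9] * 35   # self-reported confidence per respondent
predictions = [0.80] * 100              # nearly everyone expects ~80% "yes" votes

print(majority_vote(answers))                          # "yes" (wrong answer wins)
print(confidence_weighted_vote(answers, confidences))  # "yes" (45.5 vs. 31.5, still wrong)
print(surprisingly_popular(answers, predictions))      # "no"  (SP recovers the correct answer)
```

In this contrived case the majority and confidence-weighted rules both follow the misinformed crowd, while the SP rule sides with the answer that outperforms the crowd’s own expectations.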

“The argument in this paper, in a very rough sense, is that people who expect to be in the minority deserve some extra attention,” said co-author Drazen Prelec, a professor at the MIT Sloan School of Management as well as of economics and brain and cognitive sciences. “In situations where there is enough information in the crowd to determine the correct answer to a question, that answer will be the one [that] most outperforms expectations.”

Aurelien Baillon, a professor of economics at Erasmus University in Rotterdam who is familiar with the new paper but had no role in it, said that the researchers’ work “opens up completely new ways to think about an old problem.” The paper is persuasive because it contains both theoretical arguments “and empirical evidence that it works well,” Baillon said.

The paper, “A solution to the single-question crowd wisdom problem,” was published Jan. 25 by Nature. The work was supported by the National Science Foundation (grant no. SES-0519141); the Institute for Advanced Study; and the U.S. Department of the Interior National Business Center’s Intelligence Advanced Research Projects Activity (IARPA) (contract no. D11PC20058).

Morgan Kelly, Princeton Office of Communications, and Peter Dizikes, MIT News Office, contributed to this story.
