Everyone knows that ideas get better when you share them, so last week Collin Sullivan and I took the opportunity to attend the 3rd Global Conference on Genocide at San Francisco State University. The four-day event, organized by the International Network of Genocide Scholars, brought together many of the world’s leading scholars studying the issue of genocide. Together with the biennial conference of the International Association of Genocide Scholars, this event offers a regular opportunity for people from a wide range of disciplines to share their latest work in history, law, sociology, and various other fields related to genocide. Curiously, these conferences still tend to remain very academic with only a small amount of content focused directly on genocide prevention. That’s why Collin and I decided to make an appearance not only in the audience but also on the stage last week by sharing a new idea for predicting genocide more accurately.
The specific problem that we’re aiming to solve is the tension between quantitative and qualitative methods of assessing the risk of genocide. This longstanding gap between proponents of statistical modeling and advocates of analysis by subject matter experts is preventing the genocide prevention field from combining what we at the Sentinel Project see as complementary approaches. Perhaps the biggest issue here is the valid criticism that expert analysis is ultimately subjective and that individual experts are as susceptible to bias and error as anyone else. Furthermore, opinions are difficult to measure, and qualitative assessments usually rest on intangible factors such as ethnic tensions or the exclusionary nature of certain ideologies, which are themselves difficult to quantify. Statistical modeling has the advantage of producing values which can be ranked and which theoretically highlight the most urgent cases of potential genocide, whereas expert assessments are difficult to compare to one another.
If it were possible to “quantify the unquantifiable,” as Philip Tetlock has put it, then this problem might be at least partially solved. Our proposal, then, is to create a system for aggregating expert opinions and thereby take advantage of the “wisdom of crowds” effect popularized by James Surowiecki in his book of the same name. Several studies have shown that a large group of people essentially guessing can collectively provide more accurate answers to certain questions than any one member of the group can alone, and potentially even more accurate answers than external experts. The key element is developing the right aggregation mechanism, which in our case will be a relatively simple expert survey.
Participants will be selected from amongst people with expertise in a variety of fields related to genocide, to ensure that they have both a basic understanding of the issue (which would not be found consistently in the general population) and a diversity of knowledge. Next, they will be provided with standardized information packages containing the risk profile of a given country, after which each participant will provide a numerical rating for the perceived genocidal risk level in that country. Additional questions may solicit information about participant confidence levels, prior knowledge of the country in question, and any external information used, all of which may be factored in for weighting purposes. Theoretically, aggregating large numbers of such responses will lead to more accurate assessments of the risk of genocide.
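To make the aggregation step above concrete, here is a minimal sketch of one way the weighting could work: each expert submits a numerical risk rating along with a self-reported confidence level, and the crowd estimate is the confidence-weighted average of the ratings. The 0–10 rating scale, the 0–1 confidence weights, and the function itself are illustrative assumptions for this post, not the Sentinel Project’s final survey design.

```python
def aggregate_risk(responses):
    """Return the confidence-weighted mean of expert risk ratings.

    responses: list of (rating, confidence) pairs, where rating is a
    hypothetical 0-10 risk score and confidence is a 0-1 weight drawn
    from the participant's self-reported confidence level.
    """
    total_weight = sum(confidence for _, confidence in responses)
    if total_weight == 0:
        raise ValueError("no usable responses to aggregate")
    weighted_sum = sum(rating * confidence for rating, confidence in responses)
    return weighted_sum / total_weight

# Example: three experts rate the same country, with varying confidence.
survey = [(7, 0.9), (5, 0.4), (8, 0.7)]
print(round(aggregate_risk(survey), 2))  # prints 6.95
```

More sophisticated schemes (discounting participants with little prior knowledge of the country, for instance) would slot into the same structure by adjusting how each response’s weight is computed.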
To read a more complete description of the idea, check out our brief concept paper Tapping into the “Smart Crowd” to Predict Genocide: Leveraging Group Intelligence for Risk Assessment. This idea has a lot of exciting potential, but there are still challenges to overcome and plenty of experimentation to be done before it can be used routinely. That means that we need participants, so if you’ve got a couple of hours per month to spare and are interested in joining our smart crowd, just email me at [email protected] to volunteer.