Timothy Quinn recently became The Sentinel Project’s Director of Technology. Tim is a hands-on technology leader with 15 years’ experience deploying software for web, mobile and social media. He’s taught at both New York University and the City University of New York, developed The Sentinel Project’s Hatebase and WikiRumours software, and has been closely involved with our organization’s efforts in Kenya’s Tana Delta.
When I was a kid, a personal computer wasn’t particularly personal. It sat in the family room or basement, plugged into the mahogany console TV, waiting for you to key in machine code from the back pages of COMPUTE!’s Gazette. It was a step beyond pecking FORTRAN into the local community college mainframe, but it was still basically a one-way relationship.
These days, it’s hard to imagine an impersonal computer. Whether you squirrel it away in your pocket or bag, anchor it to your wrist, perch it on the bridge of your nose or back it out of your parking garage, your personal computer has figured out where you are and understands at least some of what you say. Yet strangely, it’s still not unfashionable to ask whether technology should augment real-world efforts to mitigate mass atrocity, as if technology were both optional and not quite palatable, like the seafood rangoon at Manchu Wok.
This conceit of extrinsic technology means that you don’t often hear technology and crimes against humanity mentioned in the same breath. Although organizations such as ours are fertile ground for innovation, technology is still sometimes treated within the broader international development community as modular or extraneous, and reinforced with opaque, outdated acronyms like “ICT” 1 — or, worse, as a threat to direct marketing and dinnertime phone solicitation. “Everything digital activists do is meticulously monitored and analysed,” complains Micah White in The Guardian. They “damage every genuine political movement they touch… [they] unfairly compete with legitimate local organisations… [they] silence underfunded radical voices.”
Few other sectors of the economy are as conflicted when it comes to technology. Healthcare, long reliant on clipboards and color-coded file folders, has finally embraced a twenty-first-century approach to long-term care by digitizing patient records (14% increased adoption per year), employing algorithmic diagnostic tools (10% increased adoption per year), fingerprinting and mapping pathogen vectors, and using genetic sequencing to revolutionize pharmacology. Manufacturing, while popularly associated with acetylene torches and assembly line hydraulics, has become a proving ground for handheld CAD, virtual materials design and real-time supply chain management. Yet the bleeding edge of international development remains mired in straw man accusations of technofetishism and ethnocentricity. Why?
To give armchair cynics their due, there’ve been some notable missteps on the road to global sustainability. Stories of electricity-generating soccer balls and texting livestock are fodder for columnists, bloggers and commenters still grumbling about NASA’s alleged million-dollar space pen and other taxpayer-funded indulgences. (The punchline that the Soviet Union bested us with a pencil is, unfortunately, apocryphal.)
As humanitarian technologists, we also do ourselves a disservice when we fail to implement technology with reasonable standards of rigor or transparency, when we’re protectionist of our intellectual property, when we release analysis and withhold raw data, and when we hide weak controls or flawed processes. I don’t blink anymore when a potential partner organization from which I’ve requested intake data sends me, several weeks later, a two-page PDF containing bar graphs. That’s just how many NGOs seem to roll.
I believe organizations like The Sentinel Project have an obligation to their partners and donors to adopt a responsible technology policy that furthers the organization’s core goals and encourages a culture of collaboration. Our technology policy can be summarized as follows:
- Technology is intrinsic to our core goal of mitigating the risk of mass atrocity. We use technology to support our internal and external operations, research and track our current situations of concern, and incubate online products and datasets for the broader international development community. There are no facets of our organization in which technology does not play a role.
- Our technology products (e.g. ThreatWiki, Hatebase, WikiRumours) are optimally designed for the environments in which they’re deployed. We don’t believe in building technologies or workflows that are unsustainable in the real world.
- Our research and intake data are, by default, open source and available to all, except where we feel security considerations warrant obfuscation. While the urgency of humanitarian crises may occasionally take priority over full diligence, our activities are inherently driven by data rather than intuition or ideology.
Transparency and rigor are not themselves a panacea for humanitarian technology. You also need people.
Volunteerism is taken for granted within the open source community, yet for some reason few technologists cross over into the world of international development 2, perhaps because NGOs — if they have visibility at all in the technology world (and the scarcity of nonprofit sponsors at many technology conferences suggests otherwise) — are perceived by technologists as staid, ineffectual and overly bureaucratic. Do a LinkedIn search for software development jobs in the nonprofit sector and most of the results concern fundraising, not innovation. Unsurprisingly, many well-meaning technologists end up in trickle-down philanthropy: let’s disrupt the first world, and sooner or later the benefits will be felt in Kinshasa.
In fact, the international development community is overdue for technological disruption, and thus an exciting area of opportunity for both new and seasoned technologists. At The Sentinel Project, our goal is to push the boundaries of what technology can accomplish in conflict zones. Our Hatebase software has amassed the world’s largest open dataset of hate speech vocabulary and sightings, and is designed to be easily interoperable with other software platforms (for instance, our ThreatWiki platform for managing conflict events and precursors). We’re also working to develop affordable humanitarian UAV technology while simultaneously field-testing Una Hakika?, an instance of our WikiRumours software to counter the spread of misinformation in Kenya.
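To make the interoperability idea concrete, here is a minimal sketch of what cross-referencing a hate-speech lexicon (in the spirit of Hatebase) against incoming community reports (in the spirit of ThreatWiki) might look like. All field names, sample records and the matching logic are hypothetical illustrations, not the actual Hatebase or ThreatWiki schemas or APIs.

```python
# Hypothetical sketch: flag community reports that contain lexicon terms.
# Field names and data are invented for illustration only.

lexicon = [
    {"term": "cockroach", "language": "eng", "offensiveness": 90},
    {"term": "vermin", "language": "eng", "offensiveness": 75},
]

reports = [
    {"id": 1, "region": "Tana Delta", "text": "Market reopens tomorrow."},
    {"id": 2, "region": "Tana Delta", "text": "They called us cockroaches again."},
]

def flag_sightings(reports, lexicon, threshold=50):
    """Return reports containing lexicon terms at or above an offensiveness threshold."""
    terms = {entry["term"] for entry in lexicon if entry["offensiveness"] >= threshold}
    flagged = []
    for report in reports:
        words = {w.strip(".,!?").lower() for w in report["text"].split()}
        # Naive singular/plural matching, purely for demonstration.
        hits = {t for t in terms if t in words or t + "s" in words or t + "es" in words}
        if hits:
            flagged.append({"id": report["id"], "region": report["region"], "terms": sorted(hits)})
    return flagged

print(flag_sightings(reports, lexicon))
```

In practice this kind of join would happen over each platform’s real API and datasets rather than in-memory lists, but the shape of the workflow — a shared vocabulary feeding a conflict-monitoring pipeline — is the point of designing the tools to interoperate.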
Conflict data and humanitarian aerospace are my favorite examples when I’m asked whether technology has a role to play in the prevention of mass atrocity, but a better answer would be this: aid organizations that fail to effectively leverage technology at every stage of their intervention risk losing relevance within the communities where they operate, and that hurts everyone: participants, advocates, colleagues, investors, donors and taxpayers.