3 April 2016

In human rights reporting, the perils of too much information


Burundian refugees attend a rally addressed by Tanzania Prime Minister Kassim Majaliwa, at Nduta refugee camp in Kigoma, Tanzania in December 2015. (AP Photo)

Editor’s Note: This post was produced as part of a graduate course on media writing and storytelling taught by the editors of Columbia Journalism Review.

Last month, the human rights organization Amnesty International revealed the exact location of a mass burial site on the outskirts of Bujumbura, Burundi. It allegedly held the bodies of at least 50 people killed in political violence in December of last year. International media outlets like The New York Times, Reuters, and Foreign Policy were quick to report on the site’s significance, saying it adds to the growing evidence of atrocities, including murder and gang rape, committed by Burundian security forces.

Amnesty’s evidence is important for other reasons, too. It shows how the use of open-source intelligence is becoming a common, yet underexamined, approach to human rights reporting in uncovering crimes against humanity. “Open-source intelligence” refers to a broad array of information generally available to the public, including Google Earth’s satellite imagery, content from social media sites like Twitter and Facebook, online videos and images, and geo-referenced field documentation.

Amnesty is not the only organization using new technology to document human rights abuses. In South London, Forensic Architecture collects data from ordinary people’s mobile phones, while Bellingcat, a site where citizen journalists investigate current events using open-source intelligence, has examined the Syrian Civil War and the downing of Malaysia Airlines Flight 17 over Ukraine.

“I actually think that maybe it’s wrong to call some of this stuff open-source intelligence, because what we’re really talking about is just journalism,” says Keith Hiatt, Director of the Human Rights and Technology Program at the UC Berkeley School of Law. “There’s nothing new about it; it’s just how you do journalism, but with new types of documents.”

There’s little doubt that such data is playing a growing role in reporting on conflicts and human rights abuses. For journalists covering a humanitarian crisis, these sources make it possible to verify facts and gather information quickly and, more often than not, reliably. But alongside new and exciting ways to uncover human rights stories come new challenges. As human rights reporting has shifted in the digital age from old-fashioned street reporting to work far more reliant on sophisticated data, journalists are still learning to use these technologies in a smart, safe way.

For one, the increasing reliance on open-source materials means that journalists do not always use witness testimony or put boots on the ground, especially in high-conflict areas like Syria and Ukraine. Instead, compelling evidence compiled by human rights organizations, along with citizen footage posted online, is often treated as sufficient.

“These are good tools, but they have to be accompanied by narratives on the ground, because otherwise you don’t know what you’re looking for,” says Siobhán O’Grady, staff writer at Foreign Policy. “It’s hard to navigate, because it’s a really strange thing where you’re saying this looks like proof, but no one is actually on the ground.”

O’Grady recalls writing about the attacks by Boko Haram militants in the towns of Baga and Doron Baga using satellite imagery released by Amnesty: “When I confronted Nigerian officials with reference to the Amnesty International reports, they mocked it and countered [the evidence], and it was interesting to me just how defensive the Nigerian government got,” she says. “There’s a lot of fear from governments, as you don’t need permission to get a satellite photo, and that scares them.”

In Burundi’s case, confirmation of the site came down to data: Amnesty’s researchers cracked the case of the mass graves by analyzing the content of mobile-phone footage that had emerged and using it to determine the location.



Satellite imagery by Amnesty International

“The initial request I got was if we can use satellite imagery to look for the mass graves,” says Christoph Koettl, a senior analyst at Amnesty International and founder of Citizen Evidence Lab. “I said we could in theory, but it’s not feasible if we don’t know where to look.” After getting some clues about the geographic location in Bujumbura from the human source who shot the footage, Koettl analyzed the site using Google Earth’s satellite imagery. The images also “strongly enhanced” eyewitness testimony on the ground, according to Amnesty.

Reliability is another obstacle. Since open-source evidence is almost always circumstantial, inferential steps are required to piece together what the evidence can prove. Koettl concedes that in many cases, satellite imagery can only tell so much. “When you see a satellite image of a building, what does that mean? The image itself might not tell you everything; you always need other information to corroborate the image and verify the source,” he says.

Assuming the evidence is authentic, an expert witness may be required to explain the context, such as what the photographs or video show. High-quality satellite imagery can also be expensive and require people with expertise, Koettl says, which means journalists will either have to become more familiar with source verification in a digital environment, or accept those limitations and rely more on human rights organizations like Amnesty to verify sources.

“As someone who works for a human rights organization here at Berkeley, I absolutely think that journalists should not uncritically use our materials,” Hiatt says. The reason, he adds, is that human rights organizations simply have different objectives and obligations that may not be journalistically appropriate. “A journalist has an obligation to fact-check our materials, and also to do their own analysis of whether that report should be used.”

O’Grady agrees. “It’s important for journalists to use this information, but to let the readers decide what they’re going to think about it. It’s important to always say, ‘according to Amnesty’ or ‘according to [Amnesty’s] analysis,’ because I’m not a satellite image expert,” she says. 

Another challenge with open-source intelligence is that it increases the level of risk faced by citizens who gather the information firsthand—the uploader of an incriminating video, the bystander in a photo, or the person who tweeted her geolocation. In some situations, the tools used in human rights reporting in a digital age can also be helpful to an authoritarian regime.

“I think journalists sometimes get really excited about these technologies, because when they observe something in a refugee context or a human rights crisis, they want to call attention to the issue,” says Hiatt. “In so many contexts that is exactly right, but in a conflict zone, calling attention to someone’s suffering may get them killed, as it can inadvertently give someone away.”

Hiatt points to photos of refugees fleeing as an example: “I’m always conscious of the fact that I’m not the only person seeing those photos, and that there’s somebody out there who wants to hurt them who is also seeing those photos. If I can figure out where that person is, so can their adversaries.”

Claire Wardle, who co-authored The Verification Handbook and has worked with Storyful on social newsgathering and verification, says that while journalists and human rights organizations have different objectives, both need to use the same open-source verification techniques, whether it’s “Amnesty releasing a report about Burundi, or The New York Times writing about ISIS dropping bombs in Syria.”

O’Grady agrees that in many ways, human rights organizations have the same instincts and do the same job as reporters, but with a different mission. “A lot of what they’re doing is reporting, but they’re just not reporting for an unbiased publication; they’re reporting with different intent,” she says.

“You have both journalists using that material to tell stories, as well as human rights organizations using that same material to capture atrocities and to bring people to justice,” Wardle says. “So, while you’ve got the materials being used in different contexts, the protocol that you have to go through is the same.” 

This can include checking provenance: the source, date, and location of every piece of social media data, and whether the video or image is the original. And while open-source evidence is useful in cases where journalists cannot put boots on the ground, where possible it should supplement, not replace, on-the-ground reporting.
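The protocol Wardle describes amounts to a checklist applied to every item of citizen media. As an illustration only, the Python sketch below (the field names and the example clip are hypothetical, not any newsroom’s actual tooling) records which provenance questions remain open for a given piece of evidence:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EvidenceItem:
    """A piece of citizen media and the provenance facts established so far."""
    url: str
    source: Optional[str] = None        # who uploaded or filmed it
    capture_date: Optional[str] = None  # when it was recorded, if confirmed
    location: Optional[str] = None      # where it was shot (e.g. geolocated)
    is_original: bool = False           # original upload, not a re-post

def provenance_gaps(item: EvidenceItem) -> List[str]:
    """Return the provenance questions still unanswered for this item."""
    gaps = []
    if item.source is None:
        gaps.append("source unverified")
    if item.capture_date is None:
        gaps.append("date unconfirmed")
    if item.location is None:
        gaps.append("location not established")
    if not item.is_original:
        gaps.append("originality not confirmed")
    return gaps

# A clip whose uploader and date are known, but not its location or originality:
clip = EvidenceItem(url="https://example.com/clip",
                    source="known uploader", capture_date="2015-12-11")
print(provenance_gaps(clip))
# prints ['location not established', 'originality not confirmed']
```

In practice each of these checks is a reporting task in itself (contacting the uploader, geolocating landmarks, reverse-searching the image), but tracking them explicitly is what keeps “this looks like proof” honest.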

“At the end of the day, the publications the journalists work for are making money off this content, so they’re getting value out of this content,” Hiatt says. “As they collect images to post them and to report on them, they need to think about what risks are attached to it in a human rights crisis. It’s just a thorny, tricky situation.”

Astha Rajvanshi is a student in a course on media writing and storytelling, taught by the editors of Columbia Journalism Review.
