Using Big Data to Predict Terrorist Acts Amid Privacy Concerns

Before Ahmad Khan Rahami planted bombs in New York and New Jersey, he bought bomb-making materials on eBay, linked to jihad-related videos from his public social-media account and was looked into by law enforcement agents, according to the Federal Bureau of Investigation.

If only the authorities had connected the dots.

That challenge — mining billions of bits of information and crunching the data to find crucial clues — is behind a push by U.S. intelligence and law enforcement agencies to harness “big data” to predict crimes, terrorist acts and social upheaval before they happen. The market for such “predictive analytics” technology is estimated to reach $9.2 billion by 2020, up from $3 billion in 2015, according to research firm MarketsandMarkets.

It’s the stuff of a science-fiction movie like “Minority Report,” in which Tom Cruise played a Washington cop who used technology to arrest people before they carried out crimes. It’s also a red flag for privacy advocates already fighting U.S. spy programs exposed by Edward Snowden and the FBI’s demands that Apple Inc. help it hack into encrypted mobile phones.

The idea is to make sense of the vast and disparate streams of data from sources including social media, GPS devices, video feeds from street cameras and license-plate readers, travel and credit-card records and the news media, as well as government and proprietary systems.

“Data is going to be the fundamental fuel for national security in this century,” William Roper, director of the Defense Department’s strategic capabilities office, said at a conference in Washington last month.

For the first time, the White House released a strategic plan on Wednesday to advance research and development of artificial intelligence technology, including its use to predict incidents that may endanger public safety.

Weeks before Rahami allegedly carried out the attacks in September, he bought circuit boards, electric igniters and ball bearings — all of which are known bomb-making materials, according to charging documents from the FBI.

In previous years, he was flagged by U.S. Customs and Border Protection and the FBI after he made trips to Pakistan and after his father told police he was a terrorist, before recanting the remark.

Law enforcement agents could have been tipped off that Rahami was moving toward an attack had all of those data points been pulled together in one place, said Mark Testoni, chief executive officer and president of SAP National Security Services Inc., a U.S.-based subsidiary of German software company SAP SE.

“This is a big data world now,” said Testoni. He said his company has developed a computer platform for doing predictive analytics that is being used in a limited way by a Defense Department agency and by a national security agency. He declined to name the government customers or specify what they are doing.

The technology to predict events is only in its infancy, Testoni said. National security and law enforcement agencies also have different rules when it comes to obtaining and using data, meaning there are walls between what can be accessed and shared, he said. U.S. law enforcement agencies, for example, need a court warrant to access most data.
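The kind of cross-source correlation Testoni describes can be sketched in miniature. The example below is purely illustrative and assumes nothing about SAP's or any agency's actual platform: it treats each record as a (person, source, signal) tuple and flags for human review anyone whose signals span several independent data sources. The names `flag_persons` and `min_sources` are hypothetical.

```python
from collections import defaultdict

# Illustrative only: each record is (person, data_source, signal).
# A person whose signals come from several independent sources is
# surfaced for human review, not automatically judged.
def flag_persons(records, min_sources=3):
    sources_by_person = defaultdict(set)
    for person, source, _signal in records:
        sources_by_person[person].add(source)
    return sorted(p for p, s in sources_by_person.items()
                  if len(s) >= min_sources)

records = [
    ("subject_a", "retail", "bought electric igniters"),
    ("subject_a", "social_media", "linked to extremist videos"),
    ("subject_a", "border", "flagged after foreign travel"),
    ("subject_b", "retail", "bought ball bearings"),
]
print(flag_persons(records))  # ['subject_a']
```

Even this toy version shows why the legal walls Testoni mentions matter: the retail, social-media and border records here would, in practice, sit with different agencies under different access rules.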

Privacy advocates express concern about the “Big Brother” implications of such massive data-gathering, calling for more information and public debate about how predictive technology will be used.

“There’s often very little transparency into what’s being brought into the systems or how it’s being crunched and used,” said Rachel Levinson-Waldman, senior counsel to the National Security Program at the Brennan Center for Justice at New York University School of Law. “That also makes it very hard to go back and challenge information that might be incorrect.”

Computer algorithms also fail to understand the context of data, such as whether someone commenting on social media is joking or serious, Levinson-Waldman said.
