Does the CIA want your geopolitical input? Yes, it actually does!

Intelligence agencies are considering open innovation to better predict geopolitical trends and to better respond to the new face of terrorism. Is this a great idea or a misguided attempt at forecasting the future?

In the United States, the Intelligence Community provides valuable information to the US government and military branches to improve national security. The accuracy and reliability of that information are of key concern, and the community is constantly seeking to advance its information-gathering capabilities. One area of exploration is Open Innovation. Often the most critical issues facing intelligence agencies involve assessing human behavior. Either they must find and evaluate pertinent information that is otherwise hidden from the general public, such as identifying a potential “lone-wolf” terror attack [1] [2], or they must interpret a broad social intuition that is not evident at the individual scale, a “wisdom of the crowd,” such as estimating refugee flows in Syria [3]. Each of these situations represents an extremely difficult challenge for a single expert assigned to the task: either the expert is searching for a needle in a haystack or they cannot see the forest for the trees. Crowdsourcing, or Open Innovation, in which numerous human perspectives are gathered to construct a cohesive theory, offers a potential solution. Many new perspectives mean many new eyes, making rare but critical information more likely to be found while also providing a better sense of broader geopolitical trends that any one individual might miss. IARPA is an organization seeking to fill this gap [4] [5].

The Intelligence Advanced Research Projects Activity (IARPA) is a government organization that researches new technology and processes for the United States Intelligence Community. In 2018, it conducted a $200,000 contest to demonstrate geopolitical forecasting through open-source information gathering [6]. The premise was that, despite employing extremely well-educated and trained agents who have analyzed geopolitical trends for years, intelligence agencies’ ability to predict future outcomes was simply worse than that of a collective of untrained, average citizens. In an unexpected outcome, assembling the relatively uninformed opinions of a broad collection of people produced more accurate conclusions than relying on experts alone. This proof of concept could enhance the predictive capability of intelligence agencies almost immediately.
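The statistical intuition behind this result is worth making concrete. Below is a minimal sketch of the “wisdom of the crowd” effect; it assumes nothing about IARPA’s actual methodology beyond the widely used Brier score for probabilistic forecasts, and every number in it is invented purely for illustration.

```python
# Minimal sketch (not IARPA's actual method): averaging many noisy probability
# forecasts cancels out individual errors, so the aggregate forecast typically
# scores better than the average individual forecaster. Scoring uses the Brier
# score (squared error between forecast and outcome; lower is better).
import random

random.seed(42)
outcome = 1.0  # suppose the geopolitical event in question did occur

def brier(p, y):
    """Squared error between a probability forecast and the 0/1 outcome."""
    return (p - y) ** 2

# 500 noisy "citizen" forecasts scattered around 0.6 (illustrative values only).
crowd = [min(1.0, max(0.0, random.gauss(0.6, 0.25))) for _ in range(500)]

avg_individual_score = sum(brier(p, outcome) for p in crowd) / len(crowd)
aggregate_forecast = sum(crowd) / len(crowd)
aggregate_score = brier(aggregate_forecast, outcome)

print(f"Average individual Brier score: {avg_individual_score:.3f}")
print(f"Crowd-average Brier score:      {aggregate_score:.3f}")  # never worse than the line above
```

Real forecasting tournaments layer more sophisticated aggregation on top of this, such as weighting forecasters by their track records, but simple error-cancelling is the core mechanism behind the contest results described above.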

Another area where IARPA has considered open innovation is counter-terrorism [7]. The modern face of terrorism has moved away from traditional organizations that plot and organize attacks as a cohesive unit. It has instead been replaced by so-called “lone-wolf” terrorism, where the attacker has no direct interaction with the terrorist organization itself but sympathizes with its goals and ideologies, prompting an independent attack [8]. This new form of terrorism is significantly more difficult to find and trace, as attacks appear sporadically and are distributed throughout the nation. There may, however, be warning signs of this behavior as it develops over time at an individual level. Crowdsourcing this information to a national reporting channel could increase awareness of possible threats for further assessment. Similar to the Waze app [9], where map editors report traffic or construction concerns on the roadways, concerned citizens could report suspicious behavior to help assess potential threats to national security. This application of crowdsourcing, however, is unlikely to be adopted in the immediate future, for the reasons discussed below.

Each of these applications of open innovation presents several challenges that the Intelligence Community must address to achieve successful results. In both applications, there is a question of how to motivate a collective of individuals to participate in these crowdsourcing activities. With the Waze app, there is a direct product (navigation capability) that benefits from user input and in turn benefits the user directly. With a public service like national security, this relationship is much less direct. What incentive does a citizen have to commit time to these crowdsourcing activities when the effects are not immediately observed? This incentive problem must be addressed, perhaps by formalizing the structure of these activities through direct financial compensation for forecasting efforts. To improve participation in the Crime Stoppers-like [10] crowdsourced counter-terrorism initiative, a points system has been proposed in which participants are rewarded for actively contributing to the proposed information-sharing platform. But what are the implications of such a rewards system? From a cynical view, one might imagine a collective of child spies, like those in 1984 [11], seeking to report any suspicious activity to earn points. In this context, a lack of trust in government agencies has also eroded the likelihood of adoption for this counter-terrorism application [12]. Perhaps this is not the best point in history for this innovation to succeed, and a longer timeline for adoption should be pursued.

What other public services could be improved with crowdsourced information gathering? What concerns do you have with intelligence agencies using crowdsourced data?

(Word Count: 793)

[1] [7] [8] [12] Coultas, Bryan T. “Crowdsourcing Intelligence to Combat Terrorism: Harnessing Bottom-up Collection to Prevent Lone-Wolf Terror Attacks.” Thesis, Naval Postgraduate School, 2015, http://www.dtic.mil/dtic/tr/fulltext/u2/a620622.pdf

[2] Vallone, Julie. “Crowdsourcing Could Predict Terror Strikes, Gasoline Prices; Netflix Was Early Adopter; Researchers Test System for Gathering Opinions on World Events and Trends.” Investor’s Business Daily, Investor’s Business Daily, Inc., 29 Aug. 2011, http://search.proquest.com.ezp-prod1.hul.harvard.edu/docview/915163452?accountid=11311

[3] Spiegel, Alix. “So You Think You’re Smarter Than A CIA Agent.” NPR, NPR, 2 Apr. 2014, www.npr.org/sections/parallels/2014/04/02/297839429/-so-you-think-youre-smarter-than-a-cia-agent

[4] Weinberger, Sharon. “Future – Intelligence Agencies Turn to Crowdsourcing.” BBC News, BBC, 18 Nov. 2014, www.bbc.com/future/story/20121009-for-all-of-our-eyes-only

[5] Hershkovitz, Shay. “The Future of Crowdsourcing: Integrating Humans with Machines.” TheHill, The Hill, 20 Mar. 2017, thehill.com/blogs/pundits-blog/technology/324807-the-future-of-crowdsourcing-integrating-humans-with-machines

[6] “Intelligence Community Looking At Crowdsourcing For Predicting Geopolitical Events.” NPR, NPR, 26 Jan. 2018, www.npr.org/2018/01/26/581142439/intelligence-community-looking-at-crowdsourcing-for-predicting-geopolitical-even

[9] Del-Colle, Andrew. “Inside Waze’s Volunteer Workforce.” Popular Mechanics, Popular Mechanics, 14 Nov. 2017, www.popularmechanics.com/technology/a15624/waze-volunteer-work-force/

[10] “Crime Stoppers Text-A-Tip Program.” Bpdnews.com, http://bpdnews.com/crime-stoppers-text-a-tip-program/

[11] Orwell, George. 1984. Susan Brawtley, 2014.

[Image] Saul Loeb, AFP, Getty Images


Student comments on Does the CIA want your geopolitical input? Yes, it actually does!

  1. Matt – I think you pose a very interesting debate. In my mind, there are two sides to this market. First is the public participating in the open innovation. On the other side is the government agency who must trust that the crowdsourcing public, a group that has self-selected to participate in the platform, is acting in the best interest of the country. Finding the right incentives to motivate the public to participate and to act as good actors may not be an easy task.

    Furthermore, adopting and admitting that crowdsourcing can better predict geopolitical events may be a bit scary for government officials – does this put them out of a job? Where does this leave them?

  2. I am wrestling with the tradeoff you allude to in this piece – namely, how can the US government incentivize accurate, appropriate use of crowd-sourcing anti-terrorism technology while limiting potential abuse? You also raise an interesting point regarding compensation. Rewarding individuals for reporting suspicious activity feels like an activity that rewards the general public by producing a positive externality (safety for all). Is it safe to assume that, with the right messaging (e.g., “if you see something, say something”), concerned citizens would act when they sensed a serious threat?

    Christie also raises an interesting point above. It seems like we have two data points: 1) the past performance of government operatives, and 2) the wisdom of the uneducated masses. Is there a third data point we should be trying to measure — namely, how much more (or less) effective are intelligence agents when armed with open-sourced concerns?

  3. Interesting to think about what other public services could be improved with crowd sourced information gathering. Your parallel with Waze made me think of public transport, especially in large metropolitan areas with extensive public transport networks such as New York or London. I’m imagining crowd sourcing information through the “Transport for London” mobile app on how congested certain services are, cleanliness and maintenance issues picked up by customers. Transport users could drop tags, or highlight certain buses or train cars for identifying services and areas that need to be improved. Since public transport systems are often a target of terrorist attacks, this could be a way of creating a behavior of vigilance and reporting that could pave the way for crowd sourcing security ideas.

    My major concern for using crowd sourcing for national security is that the resultant data would likely be extremely noisy and full of cultural bias and racial prejudice.

    1. Love this idea for public transport networks. Makes a lot of sense to me, as it would help both commuters get better information for their journeys and allow public services to react more quickly to issues.

      And yes, the bias potential here is one of my fears as well.

  4. Great post, Matt, and I think this is a highly interesting debate. Just a quick clarification, the term “agent” in this context typically refers to someone who enters into a contractual agreement with CIA to provide information, usually in a clandestine manner. CIA officers manage these agents while CIA analysts are the ones who make intelligence assessments which you discuss here. As a former human intelligence officer, I am very excited about the idea of crowdsourcing leads which solves a “top of the funnel” issue with counterterrorism efforts. However, I think effective agents, i.e., sources with direct access to the actual terrorists and their operations, will remain the critical aspect of successfully mitigating this threat. Crowdsourcing tips is certainly helpful and can provide useful context, especially utilizing new technology like Palantir to process this information, but it will not replace the value of validated information collected through clandestine operations.

    1. Hill, thank you for the clarification of the correct terminology here. This is something that many of the articles appear to get wrong, so I appreciate the insight. I also agree with your point that this will never fully replace human involvement. My takeaway is similar to that from the IBM Watson case, where I view this innovation as potentially assisting/augmenting human decision making, not replacing it.

  5. Great post! I also find myself wrestling with Hill’s point above – ultimately, the value of validated information collected through clandestine operations will likely reign supreme in the Intelligence Community. Similar to how machine-learning-augmented human decision making works, then, I find myself wondering what the best applications for Open Innovation are in a space so dominated by a need for clandestine action and response. In the case of IARPA’s contest to identify emerging geopolitical trends from the masses, for instance, at what point do you draw the line between “this information is good to share with the masses” and “this information, when shared publicly, poses an additional threat to national security?” Is there an additional risk that, by making trendspotting an open-source effort, we run the risk of providing additional motivation or information to so-called “lone-wolf” operators, thereby undermining the end-goal of the Intelligence Community? I also find myself struggling to see how the IC can overcome Andrew’s point above re: racial and cultural biases in reporting suspicious activity. Are we, as a society, prepared for unregulated reporting from the masses?

    1. Great comment, because this is where I find a little irony in the situation. The CIA is typically perceived as a secretive organization with closely guarded information that is strategically hidden from the public. Does it even make sense for this type of organization to be using open innovation, when dealing with information so sensitive? Are you giving more information about your process to the same people who you may be worried about?

  6. Matt – very interesting topic. As others have mentioned, I have concerns around (i) how to incentivize people and (ii) how to protect against potential cultural biases. On the one hand, incentivizing people to submit as much data as possible will likely provide useful information that may prevent an attack. On the flip side, we need to weigh that against all of the “false positives” that this system will cause, especially if people are rewarded for providing information. Furthermore, this may lead to security concerns among ordinary citizens who are looking to report information on others for their own financial gain.

    Anything we can do to prevent future attacks is immensely valuable. That said, we will need to decide as a society whether the trade-offs to our personal security are worth it.

  7. Thank you for writing about this – fascinating topic and assessment of the potential problems at hand!

    I agree with those who have commented above on the major risks associated with relying too heavily on crowdsourced intelligence information to take real actions in the world of security. I agree with Hill that this information could serve as a great component of the top-of-the-funnel; however, I do worry that even this would ultimately result in intelligence officers having to weed through more irrelevant data, resulting in less time spent on actual pieces of relevant information. Additionally, I worry about civilians being able to fully internalize what relevant intelligence information could even be – especially given our global political climate, would paranoia drive civilians to start reporting on random acts that they witness? Further, similar to other open innovation feedback loops, I am concerned with how the public would react if their input was not acted upon – would that discourage them from participating in the future? Unlike Waze, there would likely be no immediate gratification and I imagine that that would create some dissatisfied participants. Overall, I’m really curious to see how organizations like IARPA continue to grow this capability.

  8. This is a really interesting post. I definitely agree that there is a risk to being able to make accusations against someone without negative repercussions for false accusations. I also worry that the sheer volume of data you’d be putting into the funnel would be insurmountable. It seems that with these sorts of attacks we already often hear that a family member or someone close to the attacker had reported suspicious behavior or mental illness before the attack took place, but there either wasn’t sufficient evidence or sufficient bandwidth to address the situation.

    The idea of the “wisdom of the crowd” being better than that of trained experts is very interesting to me as well. It reminds me of ensemble methods in machine learning where you train multiple models to achieve the same task and then let the models “vote” to determine a final output.

  9. Haha maybe you could introduce ranks to the reward system for spying on your neighbors. Like you start out as an “espionage intern” and later you can be promoted to “Head of the People’s CIA”.

    I really appreciated this post, and I agree the risks are enormous. My gut is that the incentives have to be pure, e.g. making your country a safer place.

  10. The author asks whether there are other public services that could be improved with crowdsourced information gathering. Definitely yes. Pretty much all the mobility services, education and healthcare could benefit greatly from having access to the enormous data pools that tech companies currently have. The options are limitless!

  11. Matt, this is such a thoughtful and thought-provoking post – thank you!

    My thoughts are similar to Brian R’s. Seems that in all forms of communication, there is a familiar cyclical pattern of one medium becoming too noisy, so we introduce a new one to elevate signal from noise. Ironically, adoption of the new medium eventually creates too much information again, and the cycle continues.

    Given that challenge, one potential way this technology might help intelligence experts is connecting the dots between pieces of information they are actively investigating, rather than adding more leads to the (already noisy) funnel.

  12. Matt, very interesting piece on how open innovation is being used by intelligence to crowd-source information. I too am concerned about the potential for abuse by the public and the volume of information entering the funnel. As some have touched upon, I wonder what role this information will play in investigations by intelligence agencies. Will it be used as supplemental information in an already-targeted investigation or as a screening tool for new, potential security threats? In either case, this will create new challenges for agencies as they will have to read through massive amounts of information (some potentially falsified) and decide which data merits more attention and priority than others. As I would imagine, if this platform exists for reporting public concerns, everything on that database should represent some degree of public threat and it will be up to the interpreter of that data to decide what is important and what is not. Perhaps this is an area where machine learning can be used to screen and make decisions or alerts based on a historical database of security threats.

  13. Thanks for the read, Matt. Very interesting to hear the CIA is experimenting with crowdsourcing. One concern I have is the viability of crowdsourcing as a prevention mechanism. In many “lone wolf” scenarios, the perpetrator was already on the authorities’ radar and in some cases the authorities had even talked with the person. Even in the case where crowdsourcing can identify an at-risk individual, what steps can be taken beyond an initial reach out? The authorities cannot arrest someone for sketchy activity online. I think of the movie Minority Report, where a prediction system enables the police to arrest individuals before they commit the crime (called “pre-crime”). Does employing crowdsourcing lead us down a weirdly similar path? In the event crowdsourcing is successful in this use case, I think there are several ethical questions to consider before it could ever be put to use.

    1. Great point here. How authorities respond once they have uncovered the information invokes an entirely new set of challenges. Protocols for addressing these at-risk individuals is definitely something that should be considered, balancing how aggressive the response should be.

  14. Matt, this is really thought-provoking. Applying crowd-sourcing to the business of national security is almost like empowering everybody to become a vigilante of some sort. However, two generalizable observations greatly inhibit the usefulness of this approach:

    1. People’s fears are easily driven or influenced by cultural/racial biases.

    2. Lone wolves, like many regular people, are localized and thus interact with only a small portion of the population daily.

    Thus, of the, say, 30 people that a lone wolf interacts with in a week, 25 of them will probably be repeat interactions. Of these 25, only a few will ever become suspicious enough to raise an eyebrow at the lone wolf’s tell-tale behaviors. Of those who become suspicious, only one or two will think it significant enough to warrant making a report. When the report hits the FBI analyst’s desk, the analyst, who is probably aware that the report may be motivated by biases, reviews it to try to assess if it’s a credible lead. Uh oh. Hold on. There are a million other lone wolf reports coming in from other localized situations, each of which is backed by only one or two “concerned citizens’” suspicions, the analyst realizes. Not enough data points per report to even out the idiosyncratic noise associated with each report, the analyst concludes. Maybe machine learning algorithms that can process humongous amounts of data can then be deployed to scour the online presences of the reported individuals for validating trends. But imagine the violation of privacy that will be associated with this, given that a huge portion of the reports are likely to be spurious. I don’t know man. Looks super challenging and leaves me wondering if it’ll only lead to another social problem of privacy invasion and victimization.

    1. This is a valid and well-reasoned criticism of the proposed counter-terrorism application. The risks you have presented are significant challenges to the viability for this project.

  15. Awesome article Matt. I am very interested in the power of prediction markets and especially interested in which environments they fail. I think the idea of crowd-sourcing large-scale issues such as terrorism can be very tricky because the awareness of the prediction market itself can cause people to change their interpretation of reality. So for example, if there were some public mechanism in which people could report their beliefs about a potential terror attack, my guess is that if those results were public that would actually affect people’s perceptions about the likelihood of an attack. So you sometimes get weird effects like that. One thing that I think would be really interesting is an internal prediction market within the CIA or broader intelligence agency network. I think Google and many other tech firms do similar things with their employees and they incentivize it by reporting on who is most accurate in their predictions.

  16. Great choice of subject matter, Matt! “Open innovation in government security” sounds like a great title to a dystopian novel, but if done correctly, the program could potentially be quite helpful. A few thoughts on your questions:

    1. My primary concern with the incentives is that citizens are most likely to report their neighbors out of fear, which could lead to politicians ushering in the next wave of McCarthyism. By the same token, the government might be able to improve screening and reach a point where most of the red flags are legitimate.

    2. Another application of crowdsourcing could be infrastructure improvement. Governments would have an easier time addressing public needs if they understood where to fix roads or add a new stoplight.

    1. Your analogy to McCarthyism here is very interesting. You are right that a look back at history might be called for before this initiative is ever adopted.
