Throughout human history, we have been terrified of what the future might hold and have devised all kinds of ways of trying to divine it. But today, with the spread of the scientific method, we have far more reliable ways of trying to predict it.
Or do we?
In a fascinating opening keynote at the Ethnographic Praxis in Industry Conference, Tricia Wang addresses this very question in a talk entitled "The Conceit of Oracles." She describes how, in ancient Greece, the Pythia, the oracle at Delphi, would inhale fumes rising from a natural fissure and, over the course of a few days, reveal prophecies that would guide the actions of kings. As random as that may seem, however, there was in fact a method to how those messages were interpreted.
Fast-forward a few millennia to the inaugural Christmas lecture at the Royal Institution of Great Britain. In 1825, Michael Faraday demonstrated electricity to a transfixed audience in the very same spot where Tricia Wang was standing. It heralded the beginning of an era in which science would trump religion as the most reliable way to make predictions. Quantification, not divination, would provide answers to the question, "What comes next?"
Yet Wang argues that faith in the infallibility of the scientific method is misplaced, because it assumes that the answers can be found in numbers alone. Sure, we might be getting very good at producing data, but that doesn't mean we can predict the future any better than the oracle could. In fact, according to the physicist David Deutsch, the biggest problem we face today is not predicting the future, but coming to terms with the fact that we can't.
The big problem with "big" data, Wang claims, is that we treat it as a reflection of truth when it really requires interpretation. And we can get our interpretations spectacularly wrong. She presents the case of an American family that was visited by an anti-terrorism task force after the Boston Marathon bombing: the wife had googled "pressure cooker" and her husband had searched for "backpack."
It's not just the government that can't see the context forest for the data trees. Wang reports that Kodak filed for bankruptcy despite having been an early player in the digital camera market. The problem was that it assumed customers would use digital cameras in exactly the same way as analog cameras. But sharing and printing practices changed completely as cameras were integrated into a polymedia landscape.
Wang uses these divergent cases to argue that ethnography gives data context: all numbers need interpretation and analysis.
Curiously, some parties are trying to quantify the qualitative. Apparently, Unilever now requires all ethnographers who work with the company to be accredited by Unilever itself, and has invented a host of metrics to determine what counts as good ethnography. It is nothing if not ironic to use quantification to judge a method whose whole purpose is to solve problems that quantification can't handle.
In sum, there is a whole lot of "big data" out there that presents exciting possibilities for future research. But it doesn't replace the "thick data" of ethnography. In fact, we need ethnography more than ever before, precisely because there is so much quantitative data that needs to be put into context.
What we need to make this happen is not the magic of the oracle, but the confidence to take our ethnographic skills to the table, to sit with the "big data" makers, and to develop research models that are integrative and agile. We may never be able to say with certainty what will come next, but at least we might be able to generate a clearer picture of what is happening now.