Lessons from The Economist on the US Election 2020
In short: you can't predict the future from past performance, and some people aren't interested in being surveyed.
The Economist bravely published two articles this week about how its forecasting models failed to predict the outcome of the US Election. These lessons are useful for those of us using research to design customer strategies.
Don’t assume the past will dictate the future
Research data is based on things that have already happened (voting or purchasing behaviour) or on opinions about what people might do in the future (intention to vote, intention to purchase, likelihood to recommend). The problem is that behaviour changes and opinions shift. The US Election results showed us that in highly volatile environments, the research assumptions we normally make may not hold.
Our errors may reflect a general weakness of quantitative models: they try to predict the future by extrapolating from the past. Perhaps this election, held in the midst of a pandemic and a volatile economy, stretched this assumption too far.
[The Economist 2020]
Don’t assume you have all perspectives
Is there a segment of the population you want to understand that doesn't normally talk to researchers? Think of demographics like age, job type and available time. Personality factors, including introversion, can affect the data you gather. Are the people you interview or survey the people you need to hear from?
Researchers for the recent US Election missed an entire cohort of the population who weren’t motivated to be ‘researched’. The Economist raises this problem:
One worrying possibility is that surveys again did not accurately gauge the share of working-class whites who supported Mr Trump. Before the election, polling showed that they had shifted towards Mr Biden. But preliminary election returns indicate that counties with lots of white working-class voters actually swung further towards Mr Trump. This suggests that Trump-supporting working-class whites were less likely to respond to pollsters in the first place. Should that theory prove true, it would present a very serious problem for the polling industry to solve.
Customer strategy researchers can take heed of this. It is an apt reminder that whom we gather data from must be carefully considered. Those who don't contribute may also be a powerful source of the knowledge we are missing, so thinking about how to collect their views in a different way is important. This is where automation and an understanding of online interactions can be a powerful ally in building deeper understanding. A caveat: sentiment analyses are useful for seeing what is happening around your brand, but they should be interpreted in context. Be aware that there might be a 'squeaky wheel' part of your customer base that makes proportionately more noise but may not help you make good strategy decisions.
Good research is a foundation of successful strategic design, but as The Economist has shown, it is not foolproof. Strategic competitive advantage comes from combining innovation with testing in real environments. Tools like wargaming can help. Perhaps if US Election researchers had also been able to develop prototypes to test across the population, they might have gained a more accurate understanding of what was happening on the ground and known what to scale. Fortunately, in business settings, this is possible.
Sarah Daly is undertaking a PhD at the Queensland University of Technology, investigating the role of trust in the adoption and diffusion of AI-based innovation, particularly in the healthcare sector. She is also the Operations Director of CapFeather, a customer strategy and innovation consulting firm.