The public’s perception of cycling risk typically ‘talks up’ the risks that cyclists pose to others, relative to the other risks surrounding us. Cycling organisations have had little success in getting across the objectively low risk rating that the activity of cycling enjoys. What is it about risk that makes us see red, sense or none of it?
Risk? Simple! Explained by Russian roulette. But risk perception? Or comparing the risks of different actions and activities? These are propositions that prove difficult for the human psyche to handle.
The human mind has difficulty quantifying, conceptualising and qualifying matters of risk. This passage from Paul Slovic’s seminal 1987 paper shows how interwoven risk perception is with ‘life and everything else’. Risk perception is not really about risk at all; it is as much about personal experience, social conditioning, norms and anything else you could possibly shake a stick at:
A major development in [psychological research on risk perception] has been the discovery of a set of mental strategies, or heuristics, that people employ in order to make sense out of an uncertain world (1). Although these rules are valid in some circumstances, in others they lead to large and persistent biases, with serious implications for risk assessment. In particular, laboratory research on basic perceptions and cognitions has shown that difficulties in understanding probabilistic processes, biased media coverage, misleading personal experiences, and the anxieties generated by life’s gambles cause uncertainty to be denied, risks to be misjudged (sometimes overestimated and sometimes underestimated), and judgments of fact to be held with unwarranted confidence.
Experts’ judgments appear to be prone to many of the same biases as those of the general public, particularly when experts are forced to go beyond the limits of available data and rely on intuition (1, 2).
Research further indicates that disagreements about risk should not be expected to evaporate in the presence of evidence. Strong initial views are resistant to change because they influence the way that subsequent information is interpreted. New evidence appears reliable and informative if it is consistent with one’s initial beliefs; contrary evidence tends to be dismissed as unreliable, erroneous, or unrepresentative (3). When people lack strong prior opinions, the opposite situation exists – they are at the mercy of the problem formulation. Presenting the same information about risk in different ways (for example, mortality rates as opposed to survival rates) alters people’s perspectives and actions (4).
Whereas psychometric research implies that risk debates are not merely about risk statistics, some sociological and anthropological research implies that some of these debates may not even be about risk (5, 6). Risk concerns may provide a rationale for actions taken on other grounds or they may be a surrogate for other social or ideological concerns. When this is the case, communication about risk is simply irrelevant to the discussion. Hidden agendas need to be brought to the surface for discussion (7).
Perhaps the most important message from this research is that there is wisdom as well as error in public attitudes and perceptions. Lay people sometimes lack certain information about hazards. However, their basic conceptualization of risk is much richer than that of the experts and reflects legitimate concerns that are typically omitted from expert risk assessments. As a result, risk communication and risk management efforts are destined to fail unless they are structured as a two-way process. Each side, expert and public, has something valid to contribute. Each side must respect the insights and intelligence of the other.
1. D. Kahneman, P. Slovic, A. Tversky, Eds., Judgment Under Uncertainty: Heuristics and Biases (Cambridge Univ. Press, New York, 1982).
2. M. Henrion and B. Fischhoff, Am. J. Phys., in press
3. R. Nisbett and L. Ross, Human Inference: Strategies and Shortcomings of Social Judgment (Prentice-Hall, Englewood Cliffs, NJ, 1980)
4. A. Tversky and D. Kahneman, Science 211, 453 (1981).
5. J. F. Short, Jr., Am. Sociol. Rev. 49, 711 (1984)
6. M. Douglas and A. Wildavsky, Risk and Culture (Univ. of California Press, Berkeley, 1982).
7. W. Edwards and D. von Winterfeldt, Risk Anal., in press
We quite literally have no idea when it comes to risk. In some instances we are risk-blind; in others we amplify risk, chance or scale far beyond any statistically or numerically established level.
This has also been confirmed by the ever-growing catalogue of cognitive biases identified over the decades, some of which relate directly to risk and risk perception. Tversky and Kahneman [pdf] began thinking and writing about this in the 1970s.
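The framing effect quoted above (mortality rates as opposed to survival rates) comes down to simple arithmetic: the two framings describe exactly the same fact, and a tiny per-event risk compounds over repeated exposure. A minimal sketch of that arithmetic, using purely illustrative numbers rather than any real cycling statistics:

```python
# Illustrative only: the probabilities below are invented to show the
# arithmetic of framing, not actual cycling risk figures.

def survival_rate(mortality_rate: float) -> float:
    """The 'survival' framing of the very same fact as the 'mortality' framing."""
    return 1.0 - mortality_rate

def cumulative_risk(per_event_risk: float, n_events: int) -> float:
    """Chance of at least one bad outcome over n independent exposures."""
    return 1.0 - (1.0 - per_event_risk) ** n_events

# The same fact, framed two ways:
print(f"mortality 10% == survival {survival_rate(0.10):.0%}")

# A hypothetical 1-in-a-million risk per trip, compounded over 100,000 trips:
print(f"cumulative risk: {cumulative_risk(1e-6, 100_000):.1%}")
```

The point is that ‘90% survival’ and ‘10% mortality’ are numerically identical, yet research shows they alter people’s perspectives and actions; and that a negligible-sounding per-trip figure can still accumulate into something that feels very different.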
Talking risk publicly is a minefield of hidden mind traps, pitfalls and trapdoors that swing open when you least expect them to. The important thing seems to be awareness that we are human and that we err. These very human errors warrant a much deeper and more humane discussion to resolve differences in angle, position, scale, standpoint and suitability.
Talking risk in our (increasingly) litigious society sets off a few more explosions. Perhaps, as an upshot, it is insurance companies that know more about ‘real risk’ than anyone else. Knowing the above, however, should alter the strategies that (cycling) organisations employ to communicate about cycling and risk. One such strategy would be to establish a more interactive and sympathetic process with the public: listening to why people do and say the things they do.