How to Fucking Love Science

In my previous post, I argued that, thanks to our propensity for bullshit and the unreliability of our introspections, designers can appeal to science in deceptive and unhelpful ways. Needless to say, I don’t think this is a good state of affairs, and I want to suggest a few ways we can avoid it.

The sciences certainly can inform our work, from cognitive psychology to anthropology to Human-Computer Interaction. My concern here isn’t that science should only be conducted by some sort of initiated cognoscenti; indeed, remember that Nick Brown was only a lowly postgraduate student himself. It’s great when people engage with science that can helpfully inform their design work, but to do that one must be at least minimally concerned with the truth. Too often, we’re enamoured with a “fuck yeah, science!” mentality, of which Ian Bogost (and before him, John Skylar) writes:

“…most people don’t f*cking love science, they f*cking love photography–pretty images of fairy armadillos and renowned physicists. The pleasure derived from these pictures obviates the public’s need to understand how science actually gets done–slowly and methodically, with little acknowledgement and modest pay in unseen laboratories and research facilities.”

This issue is wider than UX or interaction design, but it should inform how we treat “science” and empirical claims more broadly. The mere fact that a paper has been published suggesting that x is true is not, in and of itself, enough to show that x is true. I’m not saying that we shouldn’t cite academic research, but that we shouldn’t do so glibly. We should read original papers and judge their quality, as well as the body of work in which they sit (the same is also true of user research). If we don’t, and we show no such basic concern with the truth, we are peddling bullshit. Moreover, we should encourage our peers to do the same, and not let them get away with bullshit either.

What we need, then, is engagement with research, and to hold ourselves to higher standards even when internally consistent bullshit might be good enough. It won’t be possible to fisk every scientific claim you hear or repeat, but you owe it to your clients and to your colleagues to check any scientific claim that you use to guide your design work. I’m not suggesting that every claim you make should be scientific; indeed, I’m sure many of the claims in this post have no empirical basis. It may even be for the best that certain choices have no empirical basis, because such “evidence” may involve post-hoc rationalisations that lead you to worse decisions via the introspection illusion.

Where you do make scientific claims, however, you should be rigorous. As such, I’d humbly suggest the following checklist for UX and interaction designers who want to explore how a piece of academic research can inform their work:

What is the source of the research that you are citing?

Peer-reviewed academic research is generally the best bet, followed by industry whitepapers (often these are the only source for usage statistics, but their methods can be opaque or suspect). Press-release-driven churnalism, regurgitated in tawdry right-wing papers, is rarely a good source of truth. Popular science books aren’t much better – the case presented in the original research is often far more complicated.

Does the research really add anything to your work or design, or is it just lacquer?

The best test is to ask yourself whether you would have designed the thing the same way had you never seen the research. If you would, it’s disingenuous to claim that the research informs your work; its truth value is irrelevant to what you’re doing. To me, at least, that’s bullshitting.

Have you read the original research?

Something being published in an academic paper doesn’t mean it’s true, and even if it is, second-hand accounts can distort it. You need to read – and, on some level at least, understand – the primary research before you can claim it is informing your work. Engaging with research also means weighing its strengths and weaknesses. Was the sample too small? Does it all sound too good to be true? Some great guides to reading research papers can be found here and here.

Is the research WEIRD?

There are a number of broad methodological issues in psychology and neuroscience that are only now being confronted. These include publication bias (psychological research is about five times more likely to report a positive result than space science (Fanelli, 2010)); insufficient statistical power due to small sample sizes (which may mean that small effects in neuroscience are missed, or that significant findings do not represent real effects (Button et al., 2013)); and a bias towards WEIRD subjects – Western, Educated, Industrialised, Rich and Democratic (Henrich, Heine and Norenzayan, 2010). Most research is conducted on western undergraduates, who represent neither global samples nor even the populations of their own countries.

Science is self-correcting, and these issues will be resolved in time. For now, however, you need to be cautious when reading research papers – a lot of research will be skewed or false because of issues such as these. At the very least, check whether research conducted only on western undergraduates is being generalised to describe the whole of humanity.
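
To make the point about statistical power concrete, here is a minimal sketch (my own illustration, assuming a standard two-group comparison; it is not taken from any of the papers cited) that approximates the power of a two-sided test to detect a medium effect (Cohen’s d = 0.5) at a few sample sizes:

```python
# Rough, illustrative power calculation (a sketch, not from the cited papers):
# approximate the power of a two-sided, two-sample comparison using a normal
# approximation, for a "medium" standardised effect size of Cohen's d = 0.5.
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    """Probability of detecting a true effect of the given size at significance alpha."""
    z_crit = norm.ppf(1 - alpha / 2)              # critical value for a two-sided test
    ncp = effect_size * (n_per_group / 2) ** 0.5  # expected size of the test statistic
    # Chance that the observed statistic lands beyond either critical value
    return (1 - norm.cdf(z_crit - ncp)) + norm.cdf(-z_crit - ncp)

for n in (10, 20, 50, 200):
    print(f"n = {n:3d} per group -> power = {approx_power(0.5, n):.2f}")
```

With 20 participants per group, a genuine medium-sized effect would be detected only around a third of the time; that is the sort of power failure Button et al. (2013) warn about, and it is worth remembering whenever a study with a handful of participants is offered as proof of anything.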

Would you know how to replicate the research?

If you’re not sure, you haven’t read or understood the methods section. To really understand a piece of research, you should (in theory!) be able to replicate it yourself. This won’t always be possible with neuroscientific studies (which is precisely why you should be cautious when appealing to neuroscience as a layperson), but it should be for most psychological studies. The idea isn’t that you will actually replicate the study, but that understanding the methods helps you to grasp the strengths and weaknesses of the research. If the researchers haven’t shared how they did things, that should be a big red flag!

Have you read their citations? Are they used correctly?

Reading around a topic is harder still, and you won’t be able to read every citation of every piece of research you encounter (I certainly haven’t for this post). But for research that is meant to underpin your design philosophy, you should try to read at least a few of the citations and check that the original paper used them correctly. At the very least you’ll gain a better understanding of the topic; you may even find the original paper used them in misleading ways.

Do you understand the research and any maths being used?

A basic understanding of statistics will help you judge whether papers have used maths correctly, and stop you being bamboozled as the readers of the Losada (1999) research were. Some good resources can be found here.
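
As a small illustration of the kind of check a basic grasp of statistics makes possible, the sketch below recomputes a two-sample t statistic, p value and effect size from nothing more than the group means, standard deviations and sample sizes a paper reports (the figures here are entirely hypothetical, and the equal-variance t-test is just one of many analyses you might need to check):

```python
# Toy sanity check (hypothetical numbers, my own sketch): recompute a reported
# two-sample t-test from its summary statistics to see whether the figures add up.
from math import sqrt
from scipy import stats

def check_t_test(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Equal-variance two-sample t-test and Cohen's d from summary statistics."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    se = sqrt(pooled_var * (1 / n_a + 1 / n_b))   # standard error of the mean difference
    t = (mean_a - mean_b) / se
    p = 2 * stats.t.sf(abs(t), df=n_a + n_b - 2)  # two-sided p value
    d = (mean_a - mean_b) / sqrt(pooled_var)      # Cohen's d (standardised effect size)
    return t, p, d

# Purely illustrative figures, not taken from any real study:
t, p, d = check_t_test(mean_a=5.2, sd_a=1.1, n_a=24, mean_b=4.6, sd_b=1.0, n_b=26)
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```

If your recomputed figures differ wildly from the reported ones, something has gone wrong, either in the paper or in your understanding of it, and either way it shouldn’t be guiding your design decisions yet.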

Is the finding novel, tentative or broadly accepted by the scientific establishment?

Plenty of well-accepted theories began their lives as controversies, but treating a controversial finding from a tentative new paper as established fact isn’t entirely honest. You’re on much more solid ground if there is a body of research behind a finding, rather than one new result heavily hyped by the media. This isn’t a sufficient condition – the Fredrickson and Losada (2005) paper was heavily cited and admired – but it is a useful heuristic.

What’s good for the goose is good for the gander

It’s worth mentioning what’s known as the “bias blind spot” (Pronin, 2009): we view ourselves as much less susceptible to biases than we view others. This is certainly true of skeptics too. Many people (myself included) pointed to Weisberg et al. (2008) as a piece of research showing how the way neuroscience is presented in the media skews our reasoning. The paper purported to show that participants rated explanations of behaviour as more compelling if they included logically irrelevant imagery of brains from fMRI studies. Later research, summarised in Farah and Hook (2013), found that this effect disappeared when the study was replicated and its methodological issues corrected. The truth is probably more complicated (as ever) than either paper suggests, but this does show that skeptics (in this case, of how neuroscience is used in public) can be just as susceptible to confirmation bias as those they criticise.

The same is true of my thinking here – for instance, I don’t fully understand the maths used in the Brown, Sokal and Friedman (2013) research, so that’s at least one black mark against me. The goal isn’t simply to call out nonsense, biases and the abuse of science when we see them, but to accept that everyone can be guilty of these things, and to critique each other when necessary. As a community, we need to be less glib in our use of science, less prone to post-hoc rationalisation and more engaged with the research we appeal to. This will require us to be honest, as much with ourselves as with others. Remember, we don’t have to appeal to science, but when we do, it should actually add to our design work and inform our decisions. The rest is bullshit.


References

Brown, N. J. L., Sokal, A. D., & Friedman, H. L. (2013). The Complex Dynamics of Wishful Thinking: The Critical Positivity Ratio. American Psychologist. Advance online publication. doi: 10.1037/a0032850

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafo, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365–376.

Fanelli, D. (2010). Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLoS ONE 5(4): e10271. doi:10.1371/journal.pone.0010271

Farah, M. J., & Hook, C. J. (2013). The seductive allure of “seductive allure”. Perspectives on Psychological Science 8, 88–90. doi:10.1177/1745691612469035

Faulkner, X. & Hayton, C. (2011). When left might not be right. Journal of Usability Studies, 6, 245-256.

Fredrickson, B. L., & Losada, M. F. (2005). Positive affect and the complex dynamics of human flourishing. American Psychologist, 60, 678–686.

Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2–3), 61–83.

Kalbach, J., & Bosenick, T. (2006). Web page layout: A comparison between left- and right-justified site navigation menus. Journal of Digital Information, 4(1).

Losada, M. (1999). The complex dynamics of high performance teams. Mathematical and Computer Modelling, 30(9–10), 179–192. doi: 10.1016/S0895-7177(99)00189-2

Pronin, E. (2009). The introspection illusion. In M. P. Zanna (Ed.), Advances in Experimental Social Psychology (Vol. 41, pp. 1-67).

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20, 470–477.
