Paper Review: The i-frame and the s-frame
How focusing on individual-level solutions has led behavioral public policy astray
Upon the publication of the book Nudge (2008) — which showed feasible and cheap ways to alter people’s behavior at scale — governments and private consultancies across the world jumped on the opportunity:
“Wait, we can make people consume less energy simply by showing them how much energy their neighbors consume? We can also make people save for retirement by opting them in automatically? We can do all that without introducing a carbon tax or overhauling how retirement works completely? Aces!”
With these prospects in hand, more than a decade of massive behavior-science application has ensued. Today, we are realizing that the effects of behavior-change interventions are:
at best, super heterogeneous (meaning some probably work wonderfully, while others are, well, shit);
at worst, negative, because they actively undermine support for the big-ticket structural changes that actually move the needle on societal problems.
This is the gist of Chater & Loewenstein’s paper.
Let’s get to it.
I.
Behavior sciences tend to single out the individual and his behaviors — what Chater & Loewenstein call the i-frame — rather than the system the individual operates in: the laws, the rules, the physical structures (the s-frame). This happens naturally: the brain, having been trained in some way, tries to use what it learned to interact with the outside world. Where an engineer sees a problem that can be solved by a system or a tool, a behavioral scientist sees how it can be solved by a behavior. You can call it professional deformation.
Anyway, the question the authors try to answer is this: what are the (unintended) consequences of such framing? Here’s an example.
Imagine the job of a coal miner. Even without knowing anything about it, you can picture how shitty it is — cave-ins, respiratory problems, death by the age of 50 (if a cave-in doesn’t get you first). Let’s say you’re a behavior scientist and you’ve been hired by the mining company to improve working conditions. After sifting through the literature, mapping the context, and interviewing and observing the workers, you come to several conclusions.
First, you notice that people do silly things because they can’t assess danger properly. They constantly underestimate the probability of a cave-in happening. They don’t wear their masks and protective helmets. Thus, you’ve identified (the lack of) risk perception as one relevant variable.
Next, observing their work, you also notice that some workers use shabby equipment — half-broken pickaxes and wobbly shovels. Not only is this woefully uneconomical, it’s also dangerous. A loose pickaxe handle might mean the metal head dislodges right above the worker’s head, causing him to stumble and comically pinball into a nearby shaft. Thus, you’ve identified tools and protective equipment as another important factor.
Finally, you observe that certain parts of the mine are simply dangerous. There are rock formations poised to crumble on top of people and ledges they can fall over (this isn’t a German mine). Thus, you’ve identified the physical features of the environment as your last factor.
Armed with such knowledge, you begin to craft an intervention. To tackle risk perception, you might propose the company invest in a simulation workshop where workers learn — using cutting-edge methods like AI, gamification, and VR — the consequences of their actions in a safe environment. Shabby equipment is easy to solve — you recommend the company get new, quality stuff. You also propose making it mandatory to wear helmets, filtration masks, etc., with financial consequences for not wearing a helmet. Heck — in a stroke of brilliance — you propose the company reward workers for wearing their helmets! Finally, to avoid silly and not-at-all cartoonish-looking accidents and premature deaths, you recommend the company place railings in dangerous spots and “DANGER!!!!!1!” signs in front of abandoned shafts.
The company is happy with this: the workers are safer, more efficient, and their farts smell like roses. You as a consultant are happy too — you’ve improved workers’ lives and got rich(~ish) in the process. Life’s good.
Then, inevitably, a cave-in happens.
An investigation finds that Gary disregarded the danger signs, didn’t wear his protective equipment, and skipped the danger-simulation VR workshop.
The question of who’s responsible is thus easy to answer — Gary. After all, we had this fancy intervention with signs and helmets and so on.
Alas, very few people ever stop to ask what the underlying conditions of this accident were, irrespective of any intervention:
brittle and volatile geological area;
inadequate techniques used to make the shafts safe;
lax laws governing work in mines.
In other words, the coal-mine i-frame intervention:
shifted responsibility from the system (company, laws, geology) to the individual;
shifted where we place our attention and look for remedies (individual instead of the system);
improved workers’ circumstances but did not change the reality of coal-mining being the shittiest job imaginable.
Returning to the main argument and speaking more generally: framing societal ills within the i-frame — which is what behavior science does — shifts how the problem is approached. Instead of large-scale, difficult, and controversial s-frame interventions — interventions that change the status quo (laws, regulations, taxes, prohibitions, bans, etc.) — we get i-frame interventions (colloquially called “nudges”) that help individuals navigate the existing system better. By teaching people how to make decisions, overcome biases, deal with emotions, etc., i-frame interventions shift the responsibility for societal problems — caused, and most likely remedied, by big-ticket actors such as large companies and governments — onto individuals.
Here you might argue: “but Marek, don’t we need both? We need a better system, sure, but we also need people to navigate it better!” Until recently, I’d have wholeheartedly (and not at all out of self-interest) agreed with this argument. Sadly, the evidence doesn’t support it:
A series of experiments by Hagmann, Ho, and Loewenstein (2019; see also Werfel, 2017) shows that merely alerting people (including policymakers, in one study) to the potential of implementing an i-frame intervention (a green energy nudge: defaulting residential consumers into a renewable energy plan) reduces support for more substantive policies (a carbon tax). The research further shows that the green energy nudge crowds out support for a carbon tax by providing false hope that the problem of climate change can be addressed without imposing costlier, but immeasurably more effective, policies.
It seems we have some sort of mental checkbox that either an i-frame or an s-frame intervention can tick, whichever comes first. The problem is that once that box is checked, we move on to something else.
The paper goes on to mention several examples where this pattern of “responsibilization” of societal problems through i-framing occurs:
Obesity
Financial literacy
Plastic waste
US healthcare system
And many more. Below, I discuss the example of obesity (feel free to look into the paper for others).
II.
The status quo is dependent on people being obese.
Coke wants to sell liquid sugar, but not pay for the diabetes treatment, plastic waste, or the thousand other second-order problems caused by overconsumption of their highly addictive product.
McDonald’s wants you to be lovin’ your increasingly rotund figure without paying for the bypass you’ll need if you continue your habit.
It’s easy for these large companies to fund research that — even with the best of intentions on the researcher’s part, mind you! — identifies what the individual can do in order not to become, or stay, fat.
As behavioral researchers, we might identify people’s eating habits as the culprit and craft an intervention to change them. We might identify the lack of willpower as a factor and try to address that. We might identify emotional traumas, stress levels, cognitive biases, and literally hundreds of other psychological constructs that play a role. All situated within the individual or the community.
The second-order effect of such an i-frame intervention is that behavior science plays the unwitting role of status-quo keeper. Although it is very well known that the Western environment is obesogenic[1] — simply moving into such an environment makes one fat through the availability of high-calorie food[2] — i-frame interventions imply that it’s the person’s fault that they’re fat. Worse still, i-frame interventions skew our perception of where we need to take action. Instead of a structural s-frame intervention limiting, for instance, the consumption of sugary beverages, we’re supposed to count steps and whatever else people who struggle with weight do.
In other words, the system matters more than the individual. Sadly, we tend to not see that. Why?
III.
The paper goes on to use the i-frame lens (somewhat ironically) to explain why we’re sold on i-frame narratives:
The credulous mind: Insensitivity to conflicts of interest
Even if we know that an actor might be compromised — say he’s on the advisory board of Coke’s thousandth sister company — we disregard, or aren’t aware of, the full extent to which this influences his decision-making.
Fundamental attribution error
Humans typically focus on the individual causes of behavior rather than the environmental ones. We are more likely to ascribe a person’s girth to their personal failings, choices, and decisions than to the fact that they live within a few steps of five fast-food joints and their local supermarket offers thousands upon thousands of culinary super-stimuli.
Framing
When we hold one frame in our minds (the i-frame, for instance) we are unable (or it’s very difficult for us) to consider other frames. We can put on the i-frame hat or the s-frame hat, but not both at the same time.
Underestimation of adaptation
We have trouble adjusting our mental models to account for adaptation. We grow over time and become, quite literally, different people, with different likes and dislikes. Yet if asked upfront, we are unable to guess with any accuracy how much pleasure or pain we’ll experience when a certain event occurs[3]. Thus, we end up under- or over-estimating how these events will impact us. Death of a loved one? Surely a disaster of unparalleled proportions. Yet most people bounce back. A promotion? Surely a reason to feel ecstatic. Yet, in a few weeks or months, most people are back at baseline.
In short, there are psychological mechanisms that make us prone to overweigh(t) the individual and disregard his circumstances.
IV.
Interestingly, Chater & Loewenstein don’t recommend anything. No “we need x, y, z” or “we could do a, b, c” — none of the usual stuff you find in discussion sections. Instead, the authors reiterate their points and conclude:
Although today we see s-frame interventions as the path forward for behavioral public policy, we, and many other behavioral scientists, previously had a very different picture in mind: that, even where s-frame reform was required, a focus on additional i-frame interventions could only help. But if the right s-frame solutions were available but not implemented all along, it is likely that behavioral scientists’ enthusiasm for the i-frame has actively reduced attention to, and support for, systemic reform, as corporations interested in blocking change intend. We have been unwitting accomplices to forces opposed to helping create a better society.
If I were to be charitable — and also hopeful to have a job in the future — I’d say there’s a delicate balance in using i- and s-frame interventions in concert to achieve the highest impact. Where an s-frame change is not feasible for some reason, an i-frame intervention might be the only tool available (but remember the green energy nudge caveat above).
If I were to be less charitable, which I’m wont to do, I could claim that this entire paper is a solid argument against using behavioral sciences in solving large societal problems.
As much as I want to see Gary work safely, with a safe helmet, safe respirator, and a safe pickaxe, I’d much rather see him not work in a coal mine at all. But for that, we’d need an s-frame intervention taxing or banning the use of coal. Similarly, instead of all the exercise programs, fad diets, and calorie counting, maybe it’d be a thousand times more effective to tax sugar.
Maybe, instead of yet another feel-good-yet-ultimately-ineffectual i-frame intervention lightly nudging people in the desired direction (in their best interest, of course!), we need a proper s-frame intervention that makes them unhappy at first, but ultimately healthier, financially more stable, and maybe happier later on[4].
We don’t need to do just something. We need to do better. The question is: should we look past behavior science to do the job?
[1] If you can’t wrap your head around the abstract “obesogenic environment” I mentioned, I find it easier to imagine the principle on a more human scale — at home. You can’t get fat if you fill up your fridge with veggies instead of ice cream. Similarly, you can’t get fat if the supermarkets and restaurants you frequent don’t have high-calorie, easily digestible processed food readily available. Although the picture of human weight is incredibly complicated — not least because it’s tied up with people’s self-worth — the basic principle of calories in versus calories out still applies. Sadly, this common sense doesn’t sell well.
[2] “But there is very good evidence that people who migrate often take on the obesity characteristics of the locality to which they move (Schulz et al., 2006).”
[3] For an entire book on this, read Stumbling on Happiness by Dan Gilbert. Brilliant, witty, and so true.
[4] Happiness is overrated anyway — viva la depression!!
Thanks for the comment! This is broadly in line with what I read today:
https://behavioralscientist.org/making-sense-of-the-do-nudges-work-debate/?mc_cid=82b02b49e7&mc_eid=d793caf8f0 (full of useful links for further reading, recommended)
Besides mentioning the idea that behavior sciences can still be useful in designing and implementing public policies, the article also mentions an internal meta-analysis (https://onlinelibrary.wiley.com/doi/epdf/10.3982/ECTA18709) of two huge nudge units (UK and US). Since it includes all the RCTs these units have performed over the years, publication bias is out of the question. The results show that while the effects of nudging are much, much smaller than what the academic literature reports, they are at least robust and positive. And, of course, they depend on the context and the type of nudge employed.
Regarding your final question: this really is key. One answer might be that behavioral sciences should lay the groundwork for the design of s-frames — i.e., of the environments and contexts (be they built structures, digital media interfaces, or large regulatory frameworks) in which individual behavior will take place (or by which it will be restricted). Nonetheless, they can only ever be one part of the puzzle.