Incenting supramarginally in large groups

To change how a group behaves is to change how the people in it behave. There are two ways to do this: change how the people currently in the group behave—the intensive margin—or change the mix of who is in the group—the extensive margin. Let’s call the former “influence” and the latter “selection”. The distinction between these notions can be somewhat artificial: who you select to include in a group influences who chooses to stay, and the influences on those in place affect who a group selects. But in general, all else equal, the distinction can be useful as a guide to analysis or action.1

Suppose you have authority over resource allocation in a large group that is strongly habituated to behaviors you don’t like. Scalable influence is ideal in this situation: if you can elicit different behaviors from the people you already have, you won’t lose time bringing new people up to speed or dealing with the loss of undocumented knowledge. Sometimes this works: a “bonus for good behaviors” program may only need a small cash prize for the best-behaved members in order to incent appreciably better behavior on average. On the other hand, this is a bit like saying “a lot of money is an ideal solution to the problem of not having enough money”. Realistically, you’ll need to figure out a mix of influence and selection to get the behaviors you want from the group.

Suppose you do figure out a decent mix. Over time the mapping between incentives and behaviors will drift. The old incentives may not have been fully replaced; some people may remain strongly habituated to the old behaviors; the forces (people) behind introducing new incentives may not last. Whatever the reason, it’s worth thinking about how to ensure that (1) new entrants expect to be practicing the behaviors you want to see, and (2) existing members are happy to stick with them and enforce their practice. Basically, people in the group have to find it more worthwhile to continue practicing desirable behaviors than to revert to undesirable ones.

While scalable influence is hard, scalably selecting for different kinds of behaviors within the context of the existing incentives and behaviors can be even harder.2 Selection only changes the population of behaviors in a group if being selected also appreciably improves the behavior’s fitness in the group. The fitness of a behavior tends to depend on how others in the group see it—in other words, on the social returns to good behavior.
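To make that concrete, here is a deliberately crude toy model (every number, name, and functional form below is my own illustrative choice, not an estimate of anything): entrants are selected for a desired behavior, and members keep or pick up the behavior depending on how much the rest of the group rewards it.

```python
import random

def long_run_share(social_return, turnover=0.05, n=1000, periods=400, seed=0):
    """Toy model (hypothetical parameters throughout): each period a `turnover`
    fraction of the group is replaced by entrants selected for the desired
    behavior; members then keep or pick up the behavior with probabilities
    that rise with how visibly the group rewards it."""
    rng = random.Random(seed)
    group = [False] * n  # True = currently practices the desired behavior
    for _ in range(periods):
        # Selection margin: entrants arrive already practicing the behavior.
        for i in rng.sample(range(n), int(turnover * n)):
            group[i] = True
        share = sum(group) / n
        # The social return to the behavior scales with its visibility in the group.
        p_keep = min(1.0, 0.30 + social_return * share)          # keep it if you have it
        p_adopt = min(1.0, 0.02 + 0.1 * social_return * share)   # pick it up if you don't
        group = [rng.random() < (p_keep if g else p_adopt) for g in group]
    return sum(group) / n

# Same selection pressure, different social returns to the selected behavior.
for r in (0.0, 0.5, 1.5):
    print(f"social_return={r}: long-run share of the behavior ≈ {long_run_share(r):.2f}")
```

Under these assumptions the turnover rate is held fixed, yet the long-run share of the behavior barely moves when the group doesn’t reward it and tips toward near-universal practice when it does; that gap is what “being selected also appreciably improves the behavior’s fitness” is doing in the paragraph above.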

Mechanism designers are people too

In my experience with large groups, social returns are much harder to change than economic returns. One reason is that larger groups tend to have larger surface areas in contact with society at large, creating more vectors through which behaviors can be socially rewarded or punished from outside the group. The more things the mechanism designer has to think about, the harder it is to design suitably thoughtful incentives. Another reason is that larger groups tend to have larger internal volumes in which behaviors can find or create niches. Further, any group has a mix of formally and informally documented processes and patterns, and the larger the group the more group-specific informal knowledge tends to accumulate. Because the knowledge is group-specific, new entrants won’t have it; because it’s informal, exiting incumbents will take it with them. These combine to leave the resource planner with a worse map of the territory, increasing the likelihood that supramarginal changes produce less-fit behaviors.

Because social returns involve people looking at other people, there is often an inherently positional aspect to them. It is tempting to lean into this; stack ranking enjoys recurrent popularity in large organizations concerned with scale. Yet the rivalrousness inherent to ranks limits their scalability as an incentive. Positional contests can sometimes elicit good behaviors even if the ranks that reward them are scarce. This tends to require sufficient and sufficiently broad turnover in rank-possession so that desirable ranks are seen as contestable even if not consistently attainable. To the extent this kind of contestability is less natural an outcome than the ossifying accumulation of rents, it requires active effort to maintain. The downside of getting it wrong is zero- or negative-sum competition within the group.

People all the way down

It’s natural, then, to wonder what kinds of social returns are scalable, non-rival, and have the potential to be sticky. This is a backwards way to approach the issue; mechanisms are only as good as the people who implement them. A better approach starts somewhere like “what kinds of people can capture and generate scalable, non-rival, and sticky social returns to particular behaviors?” A basic observation here is that there are significant social returns to being seen as someone who is not moved by social returns.3 To the extent people recognize their own responsiveness to social returns, they are often not enamored of it. It’s less discomfiting to justify behaviors by appealing to principles more fundamental than “because I want to fit in”. Following someone who seems to be chasing social returns is just another way to chase social returns yourself; if you wanted to do this you might as well cut out the middleman.4

“Virtues” are in the category of “motivations that are plausibly insensitive to social returns”. If you believe this story so far, it would seem that incenting and sustaining big changes in large groups is easier if the driving forces (people) are seen as virtuous by those whose behavior is to be changed. Being part of a supramarginal shift is a kind of stag hunt, and credible signals of virtue help people coordinate on new behaviors. If you want me to make big changes, I need to know that you have a durable compass that can steer us through the wilderness of change. It helps if the people who propagate these virtues have a decent hit rate in some objective empirical sense. But let’s not kid ourselves—most social prediction problems are underdetermined to the point where if enough people like the vibes you’re selling, they’ll find latent variables to adjust in your favor.
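A minimal sketch of the stag-hunt framing, with purely illustrative payoffs (none of these numbers come from anywhere; they just make the threshold visible): adopting the new behavior pays off only if enough others adopt it too, so each member’s choice turns on their belief about everyone else.

```python
# Hypothetical stag-hunt payoffs for one member deciding whether to adopt
# the new behavior; the numbers are illustrative only.
PAYOFF_NEW_IF_OTHERS_SWITCH = 3.0   # new behavior, and enough others switch too
PAYOFF_NEW_IF_OTHERS_DONT = 0.0     # new behavior, but the group reverts around you
PAYOFF_OLD = 1.0                    # stick with the old behavior regardless

def switch_is_worthwhile(p_others_switch: float) -> bool:
    """Expected-value comparison for a single member, given their belief that
    enough of the group will also make the switch."""
    expected_new = (p_others_switch * PAYOFF_NEW_IF_OTHERS_SWITCH
                    + (1 - p_others_switch) * PAYOFF_NEW_IF_OTHERS_DONT)
    return expected_new > PAYOFF_OLD

# With these payoffs the tipping belief is 1/3; a credible "durable compass"
# matters because it can move everyone's estimate past that threshold at once.
for belief in (0.2, 0.34, 0.6):
    print(f"belief={belief}: switching worthwhile? {switch_is_worthwhile(belief)}")
```

In this framing the virtue signal isn’t a payoff in itself; it’s evidence that shifts the shared belief past the tipping point so that switching becomes individually worthwhile.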

There’s more than a bit of “they’re virtuous people who propagate good behaviors, Michael, how elusive can they be” here. Creating sticky new virtues (or finding people who can propagate them effectively) in a large group is a nontrivial task. Practically, the set of virtues that can be instilled in an organization will tend to be variants on virtues already circulating in society at large. And what is society if not a collection of ill-defined groups in a trenchcoat?

  1. Particularly for supramarginal changes in large groups. With marginal changes in a large group or supramarginal changes in a small group, the difference between encouraging one person to do better and replacing them may not be terribly significant. But supramarginal changes in large groups require attention to scale in two dimensions, so small gaps in the relative effectiveness of the margins add up. “Large” here is roughly “a scale at which one human charged with allocating resources can’t see the full picture without heavily relying on lossy abstractions from the details”. By this definition a typical academic department or few-person firm is generally a small group, while an academic field or an industry is generally a large group. Recent discourse inspired me to actually finish and post this, but I didn’t really have scientific (or, for that matter, any particular) institutions in mind; I think these points apply generally to making big changes in large groups.

  2. “Who does HR for the HR department” feels a bit too cute, but directionally right. 

  3. Also known as the “cool guys don’t look at explosions” principle. 

  4. On the other hand, some people do want to follow social returns and find an effective dowsing rod convenient. This does not imply public opinion dowsing rods are cool.