Arguments for and against moral advocacy

Spreading beneficial values is a promising intervention for effective altruists. We may hope that our advocacy has a lasting influence on our society (or its successor), thus having an indirect impact on the far future. Alternatively, we may view moral advocacy as a form of movement building: it may inspire additional people to work on the issues we care most about.

This post analyses key strategic questions on moral advocacy, such as:

  • What does moral advocacy look like in practice? Which values should we spread, and how?
  • How effective is moral advocacy compared to other interventions such as directly influencing new technologies?
  • What are the most important arguments for and against focusing on moral advocacy?

What do I mean by moral advocacy?

The first association many people have with moral advocacy is to “go out there and spread the word”. But advocating for one’s values can take many, often indirect forms. For example, Eliezer Yudkowsky’s work on LessWrong and on AI safety is not moral advocacy in the narrow sense, but it did spread his values in the community at least to a certain extent.

Similarly, when I talk about moral advocacy, I don’t mean blunt repetition of our values, but rather having something to show for it. In practice, moral advocacy is closely related to high-quality object-level work to demonstrate what the values are about.

I use the terms “values spreading” and “moral advocacy” interchangeably. In comparison to “movement building” or “community building”, moral advocacy refers to activities that influence people’s values rather than just providing services to the community – though both can and should go hand-in-hand.

Which values could we spread?

Of course, the effectiveness of moral advocacy depends on which values we try to spread. I think the following are plausible candidates:

  • Expanding the moral circle. This means promoting moral concern for all sentient beings. Given the current moral circle of most people, we could advocate for antispeciesism, that is, taking animal suffering seriously. Alternatively, we could spread ideas like the importance of wild animal suffering and digital sentience.
  • Promoting consequentialism. In contrast to other values, this has an explicitly philosophical bent. We can view that as an advantage if we think that philosophical clarity and reflectiveness are essential to have an impact, but it may also be problematic because we reach fewer people that way.
  • Promoting specific ethical views, especially on controversial points such as population ethics. For instance, if one endorses suffering-focused ethics (as I do), it could be worthwhile to write and disseminate more texts on it. One could try to refute common objections or write about intuition pumps in favor of the view.
  • Spreading generic (effective) altruism. Unless combined with other values, this can be neutral with respect to other dimensions like population ethics or moral circle expansion.
  • Spreading the idea that we should reduce s-risks. This involves not only values but also empirical arguments for why s-risks may not be as improbable as they seem.

Figuring out which of these values contributes most to reducing suffering in the future would need much more research. What matters is not only how much the value helps in “foreseeable” future scenarios, but also how it influences unknown unknowns and how robustly it improves the future rather than making it worse.

An ideal analysis of the advantages and drawbacks of moral advocacy would consider all these values separately. Despite this, the rest of this post will talk about how promising moral advocacy is in general, and the term will refer to a combination of the above values. I think this approach makes sense for several reasons:

  • The most plausible form of moral advocacy in practice is that we would (at least to some extent) advocate for all of these values at the same time, though it’s possible to emphasize certain parts over others.
  • The values overlap significantly. For instance, explicit consequentialism seems to correlate with a larger moral circle (at least in the EA community), even though one does not imply the other in any way. We can, therefore, expect strong flow-through effects. If someone learns about effective altruism because of our work, it’s more likely that the person will also read about consequentialism or vice versa.
  • Because of this strong correlation, I think it makes sense to talk about “spreading compassion”, despite the vagueness of that term.

Arguments for moral advocacy

Fairly little concern for suffering might go a long way

People often view moral advocacy as the attempt to change the values of society at large – which is fairly hard – but I don’t think we necessarily need to aim for this. Convincing a small minority may already yield a significant fraction of the benefits.

This is because low-hanging fruit will likely allow this minority to effectively reduce suffering. For instance, in the case of factory farming, stunning animals before slaughter and basic welfare laws prevent a large fraction of the suffering that would otherwise happen, and society implements such measures despite the fairly low level of concern for animals. Of course, factory farming is still horrific, so this does not get us all the way. But it’s at least plausible that increased concern for suffering has diminishing marginal returns.

Similarly, we might hope that fairly little concern for suffering would go a long way towards mitigating the “incidental” harm caused by egoistic or economic forces in the future. The point may apply even more strongly if advanced future technology facilitates the fulfillment of many values at once – like cultured meat in the factory farming analogy.

The analogy also highlights the role of consistency. Most people are compassionate in that they care about certain animals like dogs and cats, but they do nothing to help farm animals. The situation is even worse if we consider wild animal suffering and digital sentience. So, little concern for suffering only goes a long way if it’s consistent, that is, includes all sentient beings.

Robustness

At first blush, convincing more people of our value system (or parts thereof) is robustly positive because they will pursue interventions that are positive for this value system in expectation (unless they are highly irrational). Better values also reduce expected suffering from unknown unknowns.

That said, moral advocacy may not be as robust as it seems, e.g. because of agential s-risks and the reasons outlined here. A plausible mechanism for how very bad futures could occur is the escalation of a high-stakes conflict, where one or both parties start making threats of the form “if you do / don’t do X, I’ll do [insert horrible thing here]”. With more altruistic values, it may become more likely that (the execution of) such threats would lead to large amounts of suffering. (I suggest surrogate goals as a potential solution to this issue.)

Moral advocacy is disjunctive

Many ways to have an impact depend on uncertain and often speculative predictions of how the future will unfold. For example, working on the risks of advanced AI hinges on the assumption that AI will be a pivotal technology. In fact, we arguably need fairly detailed scenarios to work effectively on the topic.

But we have every reason to expect that our analyses may be flawed, given the intrinsic difficulty of predicting the (distant) future and the lack of robust evidence and data. This reduces both the expected magnitude and the robustness of any intervention that requires such predictions.1

In contrast, a causal chain of the form “moral advocacy leads to better values in the future, which (in expectation) reduce suffering” is fairly disjunctive in that it does not require specific predictions of the future. It does, however, require the following implicit assumptions:

  1. Advocating for a value system increases the number of people with these values. The main reason why this is not obvious, especially in the long run, is that our advocacy might arouse opposition that successfully advocates the opposite value. But I’m optimistic that the assumption holds for smart efforts to spread compassionate values (which will likely not be controversial).
  2. Better values lead to better actions (in expectation). I think this is quite plausible, but a possible counterexample is that many animal rights advocates support wilderness preservation.
  3. Human values have non-negligible influence over the future. This also seems likely to me, but it’s conceivable that e.g. Darwinian competition or uncontrolled AI renders moral advocacy futile.

Note that the impact of moral advocacy is not necessarily based on hoping that better values now directly translate into better values in the far future. It only assumes that additional altruistic people somehow do something valuable. For example, they may contribute to shaping advanced AI, in which case moral advocacy is relevant even if the long-term distribution of values reverts to an equilibrium.

Plausibly high magnitude of impact

Moral advocacy can be highly influential. Individuals such as Peter Singer or Brian Tomasik, whose advocacy has inspired many others, are the most obvious evidence for this claim. More generally, the differences in values between communities (e.g. LessWrong, effective altruism in the English-speaking area, and effective altruism in the German-speaking area) can often be traced back to a few key individuals, which suggests that spreading values can multiply your impact.

It also suggests that the effectiveness strongly depends on how common the values already are, and on how good you are at spreading them. Peter Singer and Brian Tomasik had a large impact because they pioneered their respective causes (animal liberation, wild animal suffering). Similarly, we may prefer to advocate “neglected” values such as concern for digital sentience, the importance of s-risks, or suffering-focused ethics over more common values like concern for animals.

While moral advocacy is plausibly high-impact, it is possible that focused attempts to shape new technologies are an even more powerful lever. But it’s also comparatively more difficult and risky to try to find such interventions.

Objections

Moral advocacy does not have a lasting impact

We may question the long-term impact of spreading values based on the following cluster of arguments:

  • The evolution of human values is a complex dynamical system which is hard to reliably and sustainably influence by tweaking some of the parameters.
  • The evolution of values may be comparable to an ecosystem which will revert to an equilibrium over time, rendering attempts to influence it futile. A plausible mechanism for this is that values spreading provokes opposing forces that push the overall distribution of values back to an equilibrium. For example, the anti-LGBT advocacy of the Westboro Baptist Church may lead to more LGBT activists or cause a public backlash.
  • People advocated for a plethora of positions in the past, most of which have become irrelevant by now.

I agree that these are reasonable points that cast doubt on the idea that we can influence the values of the far future by spreading our values. Still, I’d like to offer a few replies:

  • As I’ve argued above, the goal of moral advocacy is not only to improve the values of the far future – which is indeed questionable. Moral advocacy creates more effective altruists who can work to shape pivotal events in the coming decades and centuries.
  • I agree that the evolution of human values is a chaotic system. However, this does not imply that the counterfactual difference in values created by moral advocacy decreases over time. It might even grow over time if new activists do moral advocacy themselves. Overall, it’s not obvious to me whether the difference becomes larger or smaller in expectation.
  • A difference between our moral advocacy and the Westboro Baptist Church example is that our ideas are less controversial and less likely to spark strong opposition. There is no anti-EA movement, no anti-antispeciesism movement, and so on. Of course, many people disagree with these ideas, but they don’t view their spread as a (big) harm to their values. There’s a difference between just disagreeing with an idea and starting to fight for its precise opposite.
  • How one advocates values is crucial. Moral advocacy is only valuable if it’s done in a smart way, and can backfire otherwise. But this is not a general argument against moral advocacy. Instead, it suggests that the groups with a comparative advantage in moral advocacy should do it.
  • We can point to several examples of large-scale changes in societal values in human history, such as the rise of democracy, feminism, or anti-racism. These changes happened at least in part because of advocacy efforts, which is evidence against the idea that values revert to a stable equilibrium.

Deterministic view of history

In a deterministic view of history, technological developments are the driving force behind changes in people’s values, not moral advocacy. Did slavery end because of the abolitionist movement’s advocacy or because industrialization made slaves obsolete? If we think that changing circumstances – like the emergence of new technologies – determine societal values, then advocacy may be pointless.

However, it is just as possible that the causation goes the other way, that is, values influence technological developments. For instance, concern for animals hastens the development of clean meat, and in human history, nationalism may have accelerated the invention of new forms of warfare. The interplay between values and technology is complex; most likely, the causal effects go both ways. (More research on this would be valuable.)

In a similar vein, theories that link values to pathogen prevalence suggest that value differences are rooted in arbitrary factors such as the local frequency of diseases, not in advocacy or moral reflection. More generally, one might argue that most people aren’t perfect agents and are not that interested in philosophical reflection, which may reduce the impact of moral advocacy.

However, even if this is true, advocacy still nudges people in better directions. Also, as argued earlier, we don’t need to convince everybody – it may be sufficient to have a minority that cares about reducing suffering.

Shaping technology is more effective

Efforts to shape powerful new technologies may be more focused than moral advocacy. Also, few people work on the technical aspects of new technologies like AI, while one needs relatively many people to make moral advocacy effective. In fact, moral advocacy may require more resources than it might naively seem, because of tricky questions of impact attribution, especially if moral advocacy is done over many generations.

I tentatively agree that directly influencing technology is a better lever if we can identify targeted interventions that are both robustly positive and tractable. Work on the risks of advanced artificial intelligence – especially fail-safe AI – may or may not fill this role. But trying to shape technology is also riskier in that it’s more likely that our efforts will be wasted, for example because we are mistaken about which technologies are most consequential. Also, moral advocacy is a way to shape technology, too – by having more people work on it when the time comes.

All told, I’m fairly agnostic about the relative effectiveness of moral advocacy and directly working on technology.

Values spreading is crowded

Max Daniel argues that “values spreading, in general, is pretty crowded, a lot of religious, political, and philosophical groups are trying to spread values and compete for attention”.2

I agree that lots of groups try to spread their values. But the more relevant question is how many groups spread values that are in direct competition with your own, that is, appeal to the same class of people. If we assume that people have innate intuitions for or against particular moral views, and that this determines whether they adopt a value system upon reflection, then the advocacy of other value systems does not make our own advocacy less effective.

Of course, this assumption is idealized, so the argument does not quite work. In particular, people’s values are often determined by the values they first come into contact with, or the values held by respected individuals. In other words, we observe strong path dependencies regarding which values someone endorses after reflection.

But I would still maintain that the largest obstacle for spreading values is usually not the moral advocacy of other groups. Rather, it’s that relatively few people are strongly altruistic, reflect a lot about philosophy, share my suffering-focused intuitions about population ethics, and so on. Of course, this may in and of itself reduce the value of moral advocacy or lead to strongly diminishing marginal returns at some point.

More people work on broader forms of moral advocacy such as spreading generic altruism or expanding the moral circle, but the numbers are still fairly small on an absolute scale. 80,000 Hours also thinks advocacy is neglected because “there’s usually no commercial incentive to spread socially important ideas”.

Moral advocacy is zero-sum

In his essay “Against moral advocacy”, Paul Christiano argues that trying to spread one’s values is often a zero-sum game that we should abstain from. The idea is that if we spread our values, other groups will spread opposing values, leading to no net change.

This is an interesting argument, but I don’t quite agree with the conclusion, at least for the kind of moral advocacy I have in mind. This is for the following reasons:

  • Many forms of values spreading, such as effective altruism or expanding humanity’s moral circle, do not harm anyone (in expectation). Many people will disagree with the ideas, but even for those, it’s probably more or less neutral if more people are effective altruists. This is particularly true if the counterfactual is egoism.
  • Values – especially altruistic values – are almost never exactly opposite to each other. For example, nobody wants to minimize aggregate welfare.
  • Moral advocacy contributes to philosophical reflection, which is prima facie positive for all value systems. We can split the effects of advocacy into a “moral reflection part” that gives arguments for both sides and leads people towards the position they would endorse upon reflection and another part that’s just about promoting one’s values. Only the latter is problematic on cooperation grounds.
  • Even pure promotion of one’s values is positive-sum unless the value systems are completely opposite. In addition, I think it’s valuable to have a diverse “offer” of plausible value systems out there and let people choose which of these they endorse. In this sense, even pure promotion of one’s own values can contribute to moral reflection.
  • If something is a zero-sum game or a cooperation problem, this does not strictly imply that you should opt for the cooperative action. It can be rational to defect. That said, this Prisoner’s dilemma is iterated, so cooperation is plausibly a good strategy (see the sketch after this list). In general, there are many reasons to be nice to other value systems.
  • The “common-sense method” of handling differences in values, for instance in politics, is that everyone follows their own priorities – including spreading one’s values – and later the competing groups strike an explicit compromise (e.g. in a parliament). This isn’t the theoretical optimum, but it is quite robust and seems to work to at least some extent. (Of course, politics has many other issues.)
  • The argument is similar to the idea that controversial values generate opposition, so we have to avoid double-counting the objection.
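
To illustrate the point about iteration, here is a minimal toy simulation of a repeated Prisoner’s dilemma (in Python). The payoff matrix and the strategies tit_for_tat and always_defect are standard textbook assumptions introduced only for this sketch; the argument above does not depend on them. The simulation simply shows that a reciprocating strategy sustains mutual cooperation and thereby earns far more over many rounds than mutual defection does.

```python
# Toy iterated Prisoner's dilemma (illustrative assumption: standard textbook payoffs).
# (my move, their move) -> my payoff; "C" = cooperate, "D" = defect.
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total payoff of each strategy over a repeated game."""
    history_a, history_b = [], []  # each entry: (own move, opponent's move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104): defecting barely pays against a reciprocator
```

In a single round, defecting against a cooperator yields the highest payoff, but over repeated interactions the cooperating pair does far better than the defecting pair – which is the game-theoretic reason why, in an iterated setting, being nice to other value systems is plausibly the better strategy.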

Conclusion

Moral advocacy is plausibly high-impact, especially if we believe that a fairly low level of concern for suffering will be enough to pick the low-hanging fruit. I think many of the objections are reasonable points, but not decisive, which is why I’m still mildly optimistic.

That said, the relative effectiveness of advocacy compared to other interventions like shaping new technology is highly uncertain. I tentatively think that the latter can be even more impactful in the best case, but it’s more difficult to find a good lever. A promising approach is to combine both by shaping the values of the people who shape technology.

Acknowledgements

I am indebted to Max Daniel, Lukas Gloor, Brian Tomasik, and David Althaus for valuable comments and discussions.

Footnotes

  1. One might argue that our analyses can be flawed in both directions and that this consideration therefore doesn’t change the expected value. But I think there are good reasons why it is probably asymmetric. If we choose an intervention based on an analysis, it would be a striking coincidence if the intervention were equally (or more) effective even when the analysis turns out to be wrong.
  2. Source: Internal conversation.
