Wednesday, October 7, 2015

The *other* reasons I'm into EA


Every so often, someone asks me why I’m into effective altruism. My response is usually something like,
“I am very aware of how lucky I am to live such a comfortable and enjoyable life. I’ve also realized that my privilege gives me an incredible ability to improve others’ lives—and I want to commit a large part of my life to doing so. When I think about how to take action, I realize that there are many different things I could do, which means I’ll have to choose—whether deliberately or not—between them. It’s intuitive to me that between two possible actions, I’d rather take the one that causes more people to be empowered to make more informed choices between more attractive sets of options, regardless of who those people are, where they live, and what they believe. And I care that outcomes really occur—not just that it feels nice to have acted toward them, or to think of myself as the kind of person who would act toward them. When you combine my desire to improve the world with my consequentialist and maximization-oriented ethics, you more or less get effective altruism.”
This is not a dishonest response; I really do feel the way the quote above suggests. But it’s only a partial answer to the question. Suppose someone asked me why the Lakers won a game and I only told them about some buzzer beater that Kobe made; I would be overlooking that the buzzer beater only mattered because all the Lakers’ other baskets had gotten them within one point. The way I usually explain my interest in EA has the same problem. That is, I would probably not be an EA if that explanation were a lie, but the same is true of many other facts about me. It is a necessary but not sufficient condition.

Some of the other reasons I’m into EA are uninteresting. For example, I wouldn’t be interested in EA if I had never heard of it, which might not have happened if I had gone to school somewhere else, or if I had been born in a different country or a different century.

But I also have particular personality traits and personal experiences that predispose me to effective altruism. They, I think, are interesting and yet usually overlooked. Why do I overlook them? For one, they’re less consciously operative; I don’t directly experience having a personality trait the way I directly experience the emotions and reasoning from which I act. But it’s not only that—I think I also overlook them in part because acknowledging them would prevent me from believing in an idealized version of myself, one who acts from deep moral convictions rather than from self-interest, ignorance, and randomness. Sometimes I am tempted to believe in the cleaner, perhaps more admirable, and (need I say it?) less “human” version of John.

This blog post explores these other reasons why I’m into effective altruism.

Two quick disclaimers:
First, this post reads a lot into my feelings and behavior, inserting sociological explanations which—while they certainly touch on the truth—are worth taking with several grains of salt. I’ve decided to state them at full strength for the sake of clarity rather than aiming at more tempered descriptions. Second, this post paints an overwhelmingly negative picture of me and my involvement in EA, partly for the reason given in the first disclaimer but also partly because it covers only the stuff I usually don’t talk about. The good stuff gets enough of its own airtime. Anyhow, please don’t mistake this for the full story and think I’m the devil.


Rationalizing other emotional drives

Rationalizing hard work
I grew up in a family and in a larger culture that emphasized the “value of hard work.” Not only did my parents, teachers, etc. emphasize that time and effort are important to achieving many goals, but they also rewarded hard work itself in order to encourage the trait. Even now, when someone tells me “you’re such a hard worker”—whether it’s a compliment or a complaint—I feel a sense of validation. There’s some kind of cultural understanding that “hard workers” are high up on the social hierarchy; they possess a prized virtue.

As a result, I sometimes work hard in order to see myself as virtuous rather than to achieve any particular goal. It’s as though I’m a Calvinist, trying to convince myself I’m a member of the elect (if you’re interested in this connection, read Max Weber’s The Protestant Ethic). I can feel like I need some kind of excuse to justify spending time just enjoying things.

On one hand, it’s difficult for me to act as though I don’t accept hard work as an important moral virtue. On the other hand, I am acutely aware that I feel the urge to work hard more than I really value hard work, so it’s also dissonant for me to behave as though I do accept it as one.

Effective altruism offers me a way out: it gives me an external reason to work hard at things; once we consider that my goal is to maximize someone else’s welfare, it’s no longer irrational to work harder than is optimal for my own. That is, EA allows me to rationalize a deeply ingrained behavioral tendency that would otherwise bother me.

Rationalizing the pursuit of prestige
As much as I might wish otherwise, some weak version of the “excellent sheep” paradigm applies to me. I’ve grown up with people telling me how smart and impressive I was, and they’ve given these compliments in a tone that suggests my talents are a reason to care about me or to think I am important. The result of this conditioning is that my sense of self-worth (like that of many Ivy Leaguers) is tied to my “achievements”—and the validation that accompanies them—more than I wish it were. On top of that, I suspect I have some innate drive to seek high social status (for evolutionary-psychology reasons).

The need for validation is not a socially acceptable reason to pursue prestige, nor is it one that I myself find particularly appealing. And so, while it would be hard on my sense of self-worth to forgo prestige, it would also be dissonant to pursue it without some other justification. As with hard work, EA provides that justification, in this case by reminding me that prestigious people often have the most power to do good. This reasoning allows me to rationalize actions that would otherwise be motivated only by my emotional need for validation.

EA also has a secondary prestige effect. It tells people that if they behave a certain way, they are doing something “important,” and it often stresses that what an ordinary EA can do (say, donating 10% of their income) is more important than what most people do in their whole lives. That is to say, EA is kind of self-congratulatory, which makes me feel good because, again, I’ve been conditioned to want validations of my self-worth.


Few conflicts with other interests

Minimal group identities and group indebtedness
Many people identify strongly with groups that share their nationality, hometown, cultural background, race, gender, sexual orientation, religion, etc. When people identify more with some groups than others, they often care more about (or feel more responsible to) members of those groups than members of others. This can be enhanced by the feeling that a particular group has done something for oneself and one should return the favor or “pay it forward” within the group.

Among people I do not know, the extent to which I identify with some individuals more than others is pretty weak (I think this has a lot to do with the fact that I’ve never spent much time somewhere I felt marginalized). The people to whom I really do feel responsible—my family, my close friends—are in positions of wealth or social status where I can’t do anything to dramatically improve their wellbeing.

Being financially comfortable
I am not seriously concerned about finances day to day, and I didn’t grow up witnessing others’ concern about finances day to day. I’m used to thinking that I’ll probably never really have to worry about whether I can support myself or my family. (I also have cheap tastes, which makes this a bit easier.)

When someone suggests that I donate some of my money, my mind doesn’t jump to worrying about whether I can afford to do so. I certainly think about it, but the worry isn’t triggered automatically; I suspect it would be if I weren’t used to being financially comfortable. It’s easier for me to be an EA because of both the absence of this trigger and the understanding that I really will be financially comfortable, EA or not.


Explanations and understanding, for myself and for others

Desire to understand myself a certain way
I care a lot about having fleshed-out models that explain why I feel and behave the ways that I do. I dislike the feeling that I’m acting arbitrarily; I prefer to do things for reasons. And since I must accept the lowest level of reasons—my axioms—for no reason at all, I prefer to use as few of them as possible. My preferences between models are especially important when I’m modeling myself, because—since I have some control over my feelings and behavior—I can change myself to conform to the kind of model I like, rather than fitting a model to some fixed version of myself.

One exceptionally simple (though not particularly well-fitting) model of me is this: “given his uncertainty, John takes the actions that maximize the expected utility of all sentient beings over all future events—and he enjoys life in proportion to the size of his recent actions’ impact.” In theory, this could specify how I act and (roughly) feel at every moment. And it’s so simple. The idea that I might conform to such a nice model really appeals to some part of me, and it makes EA an appealing life philosophy. In reality, of course, I can only get some of the way there, but I can cause my actions to make somewhat more sense, and I can change my feelings (or at least suppress them) to make myself more comfortable doing this.

At this point, my attempts to derive actions from first principles—rather than, say, acting on intuition—have extended to almost everything I do (often to my great frustration). I’m so accustomed to this approach that it’s hard for me to understand that there are other options. And as the supposed alternatives become increasingly inaccessible to me, EA’s relative appeal increases.

Wanting everyone to think I’m nice
I care about what other people think of me, and in particular I hate thinking that someone considers me unkind. When friends of mine come into personal conflict, my instinct is to avoid taking a side. When someone thinks I’ve mistreated them, I feel awful until they understand my perspective and (hopefully) forgive me. I really want everyone to think I’m acting in a kind and reasonable way. 

This has two effects. First, it motivates altruistic behavior; I (simplistically) reason that the more altruistically I act, the less likely anyone is to think I’m a jerk. Second, it reinforces my desire to derive my actions from very simple and widely held assumptions; if others can understand where I’m coming from, they’ll probably respond more kindly.

EA's appeal to certainty-seekers
This last section is not about me. But there’s another trend along the same lines that I see in some other EAs, so I thought I’d mention it.

I’ll state my claim now and then work back up to it slowly. The claim: I think some people find EA appealing because it offers a fleshed-out worldview which, once they accept it, allows them to avoid the tedious and sometimes terrifying process of questioning their fundamental assumptions.

Consider that you may be the only sentient being; everyone else is a philosophical zombie. Consider that you may be living in a simulation, and, further, that the world from which you’re being simulated isn’t anything like our own. Consider that when you die, you may wake up and realize it had all been a dream; and that when you wake up from this level you realize it too had been a dream; and so on indefinitely.

These ideas point to a kind of ultimate ignorance—not just uncertainty about the parameters of a model, but also uncertainty about the model itself, about everything we might be uncertain about, and about whether there are even objective questions about which to be uncertain.

For some people, contemplating ultimate ignorance is freeing, magical, and profound. For others, it’s frustrating and terrifying—like you’re hurtling through some total darkness and there’s nothing to grab onto.

For many in this second camp, religion offers a way out of the philosophical terror. Most religions insist on a particular metaphysics—the way things are—and on a particular morality—the way we should act. If one can become convinced of a religion’s worldview, it’s easier to tune out the existential uncertainty.

Most EAs are not religious in the formal sense—but they do tend to dislike uncertainty. Pretty much all EAs want to reduce uncertainty about empirical questions (e.g. what is the causal effect of some intervention). Many EAs are interested in reducing “moral uncertainty” (e.g. how much should we care about animal suffering)—and some do this from a moral realist perspective where they believe there is some absolute moral fact to discover. There are also a bunch of EAs who believe we can get at answers to the ultimate questions through Occam’s-Razor-like approaches related to informational complexity.

Now, effective altruism isn’t about answering existential questions, but—especially because of the homogeneity of many EA sub-communities—it certainly promotes a particular worldview: a physicalist one without a god, in which our world is all there is (though there’s some chance we’re being simulated by a similar world) and utilitarianism is the objective moral truth.

I suspect that some people—particularly the most uncertainty-averse EAs—find EA appealing in large part because it, by establishing a well-defined worldview, allows them to avoid the terror of uncertainty that can result from really questioning one’s worldview. I suspect that they like being able to have answers available and seemingly absolute, over and above the extent to which they think those answers are reasonable or justified or true.


Conclusion
  • EA lets me rationalize emotional drives which otherwise cause dissonance
    • My drive to work hard
    • My drive to pursue prestige
  • Various privileges have led to a general absence of conflicts between EA and other things
    • I don’t identify with or feel responsible to groups of people I don’t know
    • I don’t worry much about my finances
  • Explaining John to myself and others
    • I like to act in ways that I can understand through simple models
    • I want others to think I’m nice and reasonable
  • I think some people are drawn to EA because it offers a fleshed-out worldview; this, like religious worldviews, helps people avoid existential terror


Thanks
To Erica, for some clarifying comments
