Monday, April 18, 2016

What's the Expected Marginal Impact of Voting?

I've always felt conflicted about voting. On one hand, I like the idea of participating in democracy. On the other hand, there is almost no chance of an election being so close that my vote, one out of millions, will break a tie and have a marginal impact.

In the past, I have argued it is so unlikely my vote will matter that voting is not worthwhile, even if I altruistically account for the vast number of people whom elections affect. That is, I've argued that the expected outcome of my vote (the chance it will swing the election, times the impact that would have on each person, times the number of people the election impacts) is next to nothing. The expected causal effect of voting is so small, I claimed, that it would be more altruistic for me to write a nice letter to my grandmother than to vote.

Last week I finally thought through it carefully... and it turns out I was wrong.

I now think the expected net impact of one vote in a typical US presidential election is on the same order of magnitude as the impact of the election on one eligible voter. So if you care about the way your vote will affect the rest of the country and the world (and you think you know what effect it will have!), voting may be a very valuable use of your time.

In this post, I'll explain my old argument against voting, show why it was wrong, and, with a minimal amount of math, work out a rough ballpark estimate of the expected marginal impact of a vote. (If you're not interested in the fallacious argument, just skip ahead.)

Disclaimers

  1. This whole post works within the framework of a plurality (whoever gets the most votes wins) election between two parties. The electoral college is somewhat more complicated, with aggregation of votes happening at more local levels. For the purpose of the ballpark arguments in this post, I don't think these details matter, but I'm happy to discuss in the comments if anyone disagrees.
  2. I'm also ignoring the possibility that the election will be tied after I vote, and I'm ignoring the fact that very close elections are decided by complicated political processes I don't understand. Again, I don't think these matter to first order, but I'm happy to discuss in the comments.
  3. There are lots of arguments for (and against) voting, and their omission does not represent a stance on any of them. I am simply focusing on causal impact.
  4. None of the reasoning or math in this post is particularly sophisticated. I write it not because it is interesting but because it is important! And because I owe it to everyone to whom I've made the wrong argument.


My old, fallacious argument against voting

My thinking went like this: If we anonymize all of the $N$ ballots in an election and then consider them one by one, we can think of each as an identical random variable $X_i$. This random variable can take the value $-1$ (Democrat), $0$ (didn't vote), or $1$ (Republican). Since I don't know who voter $i$ is, and since he or she might make a mistake or forget to vote, $X_i$ could equal any of $-1, 0,$ or $1$, so it has some variance $\sigma^2$. Further, there is some bias to the votes (i.e., people on average slightly prefer one candidate to the other), so $X_i$ has some non-zero expected value $b$.

What is the chance that I swing the election? It's the same as the chance that the election was tied before I voted. The probability of a tie is just
$$P\left(\sum X_i = 0\right) = P\left(\sum X_i \leq 0\right) - P\left(\sum X_i \leq -1\right) \\
= P\left(\sum (X_i - b) \leq -bN\right) - P\left(\sum (X_i - b) \leq -bN - 1\right) \\
= P\left(\frac{1}{\sigma \sqrt{N}} \sum (X_i - b) \leq \frac{-b \sqrt{N}}{\sigma}\right) - P\left(\frac{1}{\sigma \sqrt{N}} \sum (X_i - b) \leq \frac{-b \sqrt{N} - 1/\sqrt{N}}{\sigma}\right)$$
Now we apply the central limit theorem, which tells us that the sum of many independent, identically distributed random variables, centered and scaled as above, is approximately a standard normal $Z$. Approximating the difference of the two normal probabilities by the density $\phi$ times the width $\frac{1}{\sigma\sqrt{N}}$ of the interval between them, the probability of a tie is
$$\approx P\left(Z \leq \frac{-b \sqrt{N}}{\sigma}\right) - P\left(Z \leq \frac{-b \sqrt{N} - 1/\sqrt{N}}{\sigma}\right) \\
\approx \frac{1}{\sigma \sqrt{N}} \cdot \phi\left(\frac{-b \sqrt{N}}{\sigma}\right) = \frac{1}{\sigma \sqrt{2 \pi N}} e^{-\frac{b^2 N}{2 \sigma^2}}$$
This is absurdly small for large $N$ (many voters). But more importantly, even when we compute my expected impact, multiplying the chance of swinging the election by the number of voters $N$ and the impact $I$ of the election on each voter, it is still basically zero. For example, if we suppose that the election has a whopping $\$$100,000 effect on each voter, that there are only one million voters, and that voters are biased toward the Democrats by only half a percent (so that $b = -0.01$; only $b^2$ enters the formula), the expected impact of my vote is
$$\text{(number of voters)} \cdot \text{(impact per voter)} \cdot \text{(probability of impact)} \\
= N \cdot I \cdot \frac{1}{\sigma \sqrt{2 \pi N}} e^{-\frac{b^2 N}{2 \sigma^2}}
= \frac{I \sqrt{N}}{\sqrt{2 \pi}\, \sigma}\, e^{-\frac{b^2 N}{2 \sigma^2}} \\
\approx \frac{10^5 \cdot \sqrt{10^6}}{\sqrt{2 \pi} \cdot 1}\, e^{-\frac{10^{-4} \cdot 10^6}{2 \cdot 1}}
\approx \$0.0000000000000077$$
If that didn't make much sense to you, here's the basic intuition: A million votes being cast is like a million coins being flipped. If each of the coins is weighted a bit toward heads, then after enough flips, more than half will be heads. The chance that there are comparably many heads and tails is exceptionally small; in fact, it falls off exponentially as the number of flips increases. Similarly, when the average voter is a little biased toward the Democrats, the chance that all of the random factors (mis-checked boxes, people forgetting to vote after work) favor the Republicans declines exponentially as the number of voters increases. Meanwhile, the impact of the election's outcome grows only proportionally to the number of voters. And when the numbers are big, an exponential decline always dominates linear growth.
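To make the numbers concrete, here's a minimal sketch in Python (using NumPy; the variable names are mine) that plugs the example values above into the closed-form estimate:

```python
import numpy as np

# Example values from above: a $100,000 impact per voter, one million
# voters, a half-percent bias toward the Democrats, per-vote std dev of 1.
I, N, b, sigma = 1e5, 1e6, -0.01, 1.0

# Normal-approximation probability that the votes sum to exactly zero.
tie_prob = np.exp(-b**2 * N / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi * N))

expected_impact = N * I * tie_prob
print(f"P(tie) ~ {tie_prob:.1e}")                   # ~7.7e-26
print(f"expected impact ~ ${expected_impact:.1e}")  # ~$7.7e-15
```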

Why that argument was wrong

In the argument above, I said "there is some bias to the votes, i.e., people on average slightly prefer one candidate to the other" and interpreted this by giving each vote some non-zero mean $b$. While it's true that the votes will almost certainly have some bias one way or the other, it was suspicious of me to treat this bias as a fixed, known number, because I am uncertain of its value. Rather, I should model $b$ as another random variable. Of course, I can assign very little probability to $b$ being near $0$, but even a tiny chance of that may matter after I multiply it by $N$, the large number of voters whom the election affects.

Another way to understand the fallacy in my argument is to see that I misapplied the central limit theorem. If we interpret $b$ not as the actual bias of each vote, but rather as my best guess of the bias of each vote, then the $X_i$ are not independent: if the first hundred votes I observe have a sample mean that is, say, less than $b$, then I will suspect that $b$ was an overestimate and expect the one-hundred-and-first vote to be less than $b$ as well. Since the $X_i$ aren't independent, I can't apply the (standard) central limit theorem.

The (actual) expected impact of voting 

Rather than separately considering many sources of uncertainty (what the net bias of the population is, how many people will accidentally check the wrong box, who will forget to show up), we can model them all simultaneously by thinking about my subjective probability distribution on the sum of the votes.

So, on the day of a presidential election, what does my subjective distribution of $\sum X_i$ look like? A quick Google search suggests that on the day of an election, betting markets typically reflect about 90% odds in favor of one candidate. If I knew better than the betting markets, I could be making a lot of money, so it's reasonable to assume my beliefs are similar to theirs. That means I assign a 10% chance to the predicted-to-lose candidate getting 50% or more of the vote, and, since there's basically no chance that the predicted loser gets more than 60% of the vote, we can say I assign a 10% chance that the predicted loser gets between 50% and 60% of the vote.

So how likely is it that there is a tie? We'd expect that if the predicted loser wins, it will be by the skin of their teeth. But to be very conservative, let's say it's as likely that the predicted loser gets 50.0% as it is they get 50.1%, as it is they get 50.2%, ..., all the way to 60%. That is, conditional on the predicted loser receiving between 50% and 60% of the vote, each number of votes they might get is equally likely. Receiving 50% to 60% of the vote corresponds to receiving between $\frac{5N}{10}$ and $\frac{6N}{10}$ actual votes, so there are $\frac{N}{10}$ possible vote totals that candidate might receive. If each such total is equally likely, then there is a $10\% / (\frac{N}{10}) = 1/N$ chance that they get exactly 50% of the votes.

If there is a $1/N$ chance that one candidate gets exactly half the votes, then there is a $1/N$ chance that I swing the election. So the expected impact of my vote is just
$$\text{(number of voters)} \cdot \text{(impact per voter)} \cdot \text{(probability of impact)} = N \cdot I \cdot \frac{1}{N} = I$$
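As a numerical sanity check (a sketch of the reasoning, with an illustrative prior chosen purely for this example), we can average the fixed-bias tie probability from the fallacious argument over a subjective distribution on $b$. The tie probability then comes out to roughly the prior's density at $b = 0$ divided by $N$, i.e., on the order of $1/N$ rather than exponentially small:

```python
import numpy as np

N = 1_000_000   # voters (illustrative)
sigma = 1.0     # per-vote standard deviation, as in the earlier section

def tie_prob(b):
    """Normal-approximation P(votes sum to zero) when the bias b is known."""
    return np.exp(-b**2 * N / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi * N))

# Fixed-bias model: b is known exactly, and a tie is essentially impossible.
print(tie_prob(-0.01))   # ~7.7e-26

# Uncertain-bias model: average the same formula over a subjective prior on
# b -- here uniform on [-0.05, 0.05], an assumption chosen for illustration.
bs, db = np.linspace(-0.05, 0.05, 200_001, retstep=True)
prior_density = 1.0 / 0.1   # uniform density over an interval of width 0.1
print(np.sum(prior_density * tie_prob(bs)) * db)   # ~1e-5 = prior_density / N
```

With this prior, the chance of a tie is about $10/N = 10^{-5}$, some twenty orders of magnitude larger than the fixed-bias model suggests.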

Wait, really?

How can this be? No US presidential election has ever come within one vote. Is it really reasonable to think this might happen? 

These questions are tempting, but ultimately misguided. We've never seen a tie before, and, since there is only a $1/N \approx 0.0000003\%$ chance of it happening in each presidential election, we shouldn't expect that we ever will. But on the off-chance that there is a tie, each vote will have a marginal impact whose magnitude is as large as the off-chance is small. Since our brains are bad at understanding both tiny probabilities and huge impacts, and since this problem requires us to weigh the two against each other, we shouldn't really expect this to be intuitive.

Loose ends

So far, I've left all estimates in terms of $I$, which I've called the average impact of the election on a voter. By this, I mean the expected difference in outcomes for an average person if your preferred candidate is selected instead of the other one.

It's important to be aware that $I$ may be negative; you might choose a candidate who will actually do a lot of harm to other voters (not to mention the rest of the world!). If you are really humble, you might think that you have no better idea of what is good for people than does anyone else, in which case your $I$ is about zero, and you'll need to find some other reasons to vote.

However, you might also think that you are better informed or better educated than other voters, or that your values are "better" than theirs in some moral sense. In this case, $I$ could be quite large, since different presidents have significantly different priorities. I'll guess that I'd put my own $I$ in the range of thousands or tens of thousands of dollars (though I cringe at the idea of trying to monetize outcomes across such a wide swath of topics, as well as at having to put a number on something about which I'm so uncertain). That is a huge expected impact, considering that voting will take me at most a few hours.

tl;dr

In case you weren't going to already, you should really voteand you should make an informed decision about whom to support. It is unfathomably unlikely that you will swing the election, but if you do, you will impact an unfathomably large number of people.

Thanks

To Margaret, for having a conversation about voting that finally prompted me to formalize these arguments. To Jake, for some helpful edits and comments.

Wednesday, October 7, 2015

The *other* reasons I'm into EA


Every so often, someone asks me why I’m into effective altruism. My response is usually something like,
“I am very aware of how lucky I am to live such a comfortable and enjoyable life. I’ve also realized that my privilege gives me an incredible ability to improve others’ lives—and I want to commit a large part of my life to doing so. When I think about how to take action, I realize that there are many different things I could do, which means I’ll have to choose—whether deliberately or not—between them. It’s intuitive to me that between two possible actions, I’d rather take the one that causes more people to be empowered to make more informed choices between more attractive sets of options, regardless of who those people are, where they live, and what they believe. And I care that outcomes really occur—not just that it feels nice to have acted toward them, or to think of myself as the kind of person who would act toward them. When you combine my desire to improve the world with my consequentialist and maximization-oriented ethics, you more or less get effective altruism.”
This is not a dishonest response; I really do feel the way the above quote suggests. But it’s only a partial answer to the question. Suppose someone asked me why the Lakers won a game and I only told them about some buzzer beater that Kobe made; I would be overlooking that the buzzer beater only mattered because all the Lakers’ other baskets had gotten them within one point. The way I usually explain my interest in EA has the same problem. That is, I would probably not be an EA if that explanation were false, but the same is true of many other facts about me. It is a necessary but not sufficient condition.

Some of the other reasons I’m into EA are uninteresting. For example, I wouldn’t be interested in EA if I had never heard of it, which might not have happened if I went to school somewhere else or if I were born in a different country or different century.

But I also have particular personality traits and personal experiences that predispose me to effective altruism. They, I think, are interesting and yet usually overlooked. Why do I overlook them? For one, they’re less consciously operative; I don’t directly experience having a personality trait the way I do directly experience the emotions and reasoning from which I act. But it’s not only that—I think I also overlook them in part because they prevent me from believing in an idealized version of myself, one who acts from deep moral convictions rather than from self-interest, ignorance, and randomness. Sometimes I am tempted to believe in the cleaner, perhaps more admirable, and (need I say it?) less “human” version of John.

This blog post explores these other reasons why I’m into effective altruism.

Two quick disclaimers:
First, this post reads a lot into my feelings and behavior, inserting sociological explanations which—while they certainly touch on the truth—are worth taking with several grains of salt. I’ve decided to state them at full strength for the sake of clarity rather than aiming at more tempered descriptions. Second, this post paints an overwhelmingly negative picture of me and my involvement in EA, partly for the reason given in the first disclaimer but also partly because it’s only about the stuff I usually don’t talk about. The good stuff gets enough of its own airtime. Anyhow, please don’t mistake this for the full story and think I’m the devil.


Rationalizing other emotional drives

Rationalizing hard work
I grew up in a family and in a larger culture that emphasized the “value of hard work.” Not only did my parents, teachers, etc. emphasize that time and effort are important to achieving many goals, but also they rewarded hard work itself in order to encourage the trait. Even now, when someone tells me “you’re such a hard worker”—whether it’s a compliment or a complaint—I feel a sense of validation. There’s some kind of cultural understanding that “hard workers” are high up on the social hierarchy; they possess a prized virtue.

As a result, I sometimes work hard in order to see myself as virtuous rather than to achieve any particular goal. It’s as though I’m a Calvinist, trying to convince myself I’m a member of the elect (if you’re interested in this connection, read Max Weber’s The Protestant Ethic). I can feel like I need some kind of excuse in order to justify spending time just enjoying things.

On one hand, it’s difficult for me to act as though I don’t accept hard work as an important moral virtue. But I am also actively aware that I feel the urge to work hard more than I really value hard work. So on the other hand, it’s dissonant for me to behave as though I do accept hard work as an important moral virtue.

Effective altruism offers me a way out: It gives me an external reason to work hard at things; once we consider that my goal is to maximize someone else’s welfare, it’s no longer irrational to work harder than is optimal for my own. That is, EA allows me to rationalize a deeply ingrained behavioral tendency which would otherwise bother me.

Rationalizing the pursuit of prestige
As much as I might wish otherwise, some weak version of the “excellent sheep” paradigm applies to me. I’ve grown up with people telling me how smart and impressive I was, and they’ve given these compliments in a tone that suggests my talents are a reason to care about me or to think I am important. The result of this conditioning is that my sense of self-worth (like that of many Ivy Leaguers) is tied to my “achievements”—and the validation that accompanies them—more than I wish it were. On top of that, I suspect I have some innate drive to seek high social status (for evolutionary-psychology reasons).

The need for validation is not a socially acceptable reason to pursue prestige, nor is it one which I myself find particularly appealing. And so, while it would be hard on my sense of self-worth to forgo prestige, it would also be dissonant to pursue it without some other justification. As with hard work, EA provides that justification, in this case by reminding me that prestigious people often have the most power to do good. This reasoning allows me to rationalize actions that would otherwise be motivated by only my emotional need for validation.

EA also has a secondary prestige effect. It tells people that if they behave a certain way, they are doing something that is “important,” and often stresses that what an ordinary EA can do (say donating 10%) is more important than what most people do in their whole lives. That is to say, EA is kind of self-congratulatory, which makes me feel good because, again, I’ve been conditioned to want validations of my self-worth.


Few conflicts with other interests

Minimal group identities and group indebtedness
Many people identify strongly with groups that share their nationality, hometown, cultural background, race, gender, sexual orientation, religion, etc. When people identify more with some groups than others, they often care more about (or feel more responsible to) members of those groups than members of others. This can be enhanced by the feeling that a particular group has done something for oneself and one should return the favor or “pay it forward” within the group.

Among people I do not know, the extent to which I identify with some individuals more than others is pretty weak (I think this has a lot to do with the fact that I’ve never spent much time somewhere I feel marginalized). The people to whom I really do feel responsible—my family, my close friends—are in positions of wealth or social status where I can’t do anything to dramatically improve their wellbeing.

Being financially comfortable
I am not seriously concerned about finances day to day, and I didn’t grow up witnessing others’ concern about finances day to day. I’m used to thinking that I’ll probably never really have to worry about whether or not I can support myself or my family. (I also have cheap tastes, which makes this a bit easier.)

When someone suggests to me that I donate some of my money, my mind doesn’t jump to worrying about whether or not I can afford to do so. I certainly think about it, but the worry isn’t triggered automatically; I suspect it would be if I weren’t used to being financially comfortable. It’s easier for me to be an EA because of both the absence of this trigger and the understanding that I really will be financially comfortable, EA or not.


Explanations and understanding, for myself and for others

Desire to understand myself a certain way
I care a lot about having fleshed-out models that explain why I feel and behave the ways that I do. I dislike the feeling that I’m acting arbitrarily; I prefer to do things for reasons. And since I must accept the lowest level of reasons—my axioms—for no reason at all, I prefer to use as few of them as possible. My preferences between models are especially important when I’m modeling myself, because—since I have some control over my feelings and behavior—I can change myself in order to conform to the kind of model I like, rather than fitting a model to some fixed version of myself.

One exceptionally simple (though not particularly well-fitting) model of me is this: “given his uncertainty, John takes the actions that maximize the expected utility of all sentient beings over all future events—and he enjoys life proportionally to the size of his recent actions’ impact.” In theory, this could specify how I act and (roughly) feel at every moment. And it’s so simple. The idea that I might cohere to such a nice model really appeals to some part of me, and it makes EA an appealing life philosophy. In reality, of course, I can only get some of the way there, but I can cause my actions to make somewhat more sense, and I can change my feelings (or at least suppress them) to make me more comfortable doing this.

At this point, my attempts to derive actions from first principles—rather than, say, acting on intuition—have extended to almost everything I do (often to my great frustration). I’m so accustomed to this approach that it’s hard for me to understand that there are other options. And as the supposed alternatives become increasingly inaccessible to me, EA’s relative appeal increases.

Wanting everyone to think I’m nice
I care about what other people think of me, and in particular I hate thinking that someone considers me unkind. When friends of mine come into personal conflict, my instinct is to avoid taking a side. When someone thinks I’ve mistreated them, I feel awful until they understand my perspective and (hopefully) forgive me. I really want everyone to think I’m acting in a kind and reasonable way. 

This has two effects. First, it motivates altruistic behavior; I (simplistically) reason that the more altruistically I act, the less likely it is for anyone to think I’m a jerk. Second, it reinforces my desire to derive my actions from very simple and widely-held assumptions; if others can understand where I’m coming from, they’ll probably respond more kindly.

EA's appeal to certainty-seekers
This last section is not about me. But there’s another trend along the same lines that I see in some other EAs, so I thought I’d mention it.

I’ll state my claim now and then work back up to it slowly. The claim: I think some people find EA appealing because it offers a fleshed-out worldview which, once accepted, allows them to avoid the tedious, sometimes terrifying process of questioning their fundamental assumptions.

Consider that you may be the only sentient being; everyone else is a philosophical zombie. Consider that you may be living in a simulation, and, further, that the world from which you’re being simulated isn’t anything like our own. Consider that when you die, you may wake up and realize it had all been a dream; and that when you wake up from this level you realize it too had been a dream; and so on indefinitely.

These ideas point to a kind of ultimate ignorance—not just uncertainty about the parameters of a model, but also uncertainty about the model itself, about what all we are uncertain about, and about whether there are even objective questions about which to be uncertain.

For some people, contemplating ultimate ignorance is freeing, magical, and profound. To others, it’s frustrating and terrifying—like you’re hurtling through some total darkness and there’s nothing to grab onto.

For many in this second camp, religion offers a way out of the philosophical terror. Most religions insist on a particular metaphysics—the way things are—and on a particular morality—the way we should act.  If one can become convinced of a religion’s worldview, it’s easier to tune out the existential uncertainty.

Most EAs are not religious in the formal sense—but they do tend to dislike uncertainty. Pretty much all EAs want to reduce uncertainty about empirical questions (e.g. what is the causal effect of some intervention). Many EAs are interested in reducing “moral uncertainty” (e.g. how much should we care about animal suffering)—and some do this from a moral realist perspective where they believe there is some absolute moral fact to discover. There are also a bunch of EAs who believe we can get at answers to the ultimate questions through Occam’s-Razor-like approaches related to informational complexity.

Now, effective altruism isn’t about answering existential questions, but—especially because of the homogeneity of many EA sub-communities—it certainly pushes a certain worldview. This worldview is a physicalist one without a god, where our world is all there is (though there’s some chance we’re being simulated by a similar world), and in which utilitarianism is the objective moral truth.

I suspect that some people—particularly the most uncertainty-averse EAs—find EA appealing in large part because it, by establishing a well-defined worldview, allows them to avoid the terror of uncertainty that can result from really questioning one’s worldview. I suspect that they like being able to have answers available and seemingly absolute, over and above the extent to which they think those answers are reasonable or justified or true.


Conclusion
  • EA lets me rationalize emotional drives which otherwise cause dissonance
    • My drive to work hard
    • My drive to pursue prestige
  • Various privileges have led to a general absence of conflicts between EA and other things
    • I don’t identify with or feel responsible to groups of people I don’t know
    • I don’t worry much about my finances
  • Explaining John to myself and others
    • I like to act in ways that I can understand through simple models
    • I want others to think I’m nice and reasonable
  • I think some people are drawn to EA because it offers a fleshed-out worldview; this, like religious worldviews, helps people avoid existential terror


Thanks
To Erica, for some clarifying comments

Tuesday, September 8, 2015

An attempted taxonomy of everything

There is some sense in which the peach I’m eating is a different kind of thing than the thoughts I’m thinking, and there is some sense in which the last book I held in my hands is a different kind of thing than the story it contained. I suspect you know what I mean, but let’s not stop there. Is there some systematic way to organize all of the different “kinds of things” I can think of?

People with different worldviews might categorize the concepts they know of in different ways. But we mostly share the same basic picture: there are actions and there are things, and both actions and things have properties.  This common perspective shouldn’t be surprising, since these mental categories are built into our languages. In English, at least, there are four main types of words: verbs (for actions), nouns (for things), adverbs (for properties of actions), and adjectives (for properties of things). However, my goal is not to write a taxonomy of the English language; rather, I want to taxonomize all the concepts that live in my head. The English language has such an intimate relationship with the concepts in my head that its structure will surely shape the way I categorize them, but there is not a one-to-one map between words I know and concepts that live in my head. Some examples: I don’t have independent mental notions of “shiny” and “shininess”; “to” doesn’t really mean anything at all to me until it’s used in a phrase, like “to the park”; some concepts, like the one referred to by “dialectical materialism,” take many words to express.

Before I start, I’d like to give a big disclaimer.  Although there are probably linguists who have worked out much better versions of what I’m trying to do here, I decided it would be fun to try this on my own. It has been fun, but it has also been super confusing and difficult, and I’ve made decisions I’m not sure about. So this taxonomy can likely be improved in at least three ways. First, I might be missing some categories entirely—are there concepts you know of which don’t fit into my framework at all? Second, there might be ways to organize the concepts that I have included in a way which is just objectively more clear. Third, it may be that there is no objectively “best” taxonomy; you and I might just have different perceptions of which concepts are similar and which are dissimilar, in which case there may be better taxonomies for you but not for me. Feel free to comment on and criticize the way I've categorized things! I'd love to see how someone else might do this differently.


The taxonomy
First, I’ll take the basic picture we have from the English language: there are actions and things, and both actions and things have properties.  I’m going to give the sub-taxonomies of properties, actions, and things, in that order—one of increasing depth and interest.


Properties
Some properties describe actions and some describe things. Some action properties—like those expressed by “quickly” or “blindly”—are those the English language calls adverbs. Prepositional phrases (e.g. “to the park”) can also be used to express properties of actions. Similarly, properties of things are either expressed by adjectives or by prepositional phrases (note that certain prepositional phrases that can apply to some actions, like “to the park,” cannot apply to things).

I keep writing things along the lines of “those concepts expressed by words like” as a reminder that this is a taxonomy of concepts in my head and not one of words. I stop including these reminders later in the post, for the sake of brevity.


Actions
I think actions are best categorized along three dimensions, rather than split into groups which are each split into subgroups, etc.

Dimension 1: degree of activity
Actions vary in how much engagement or focus-in-the-moment they require from the doer. For example, throwing is a little more active than breathing, which is a little more active than believing, which is a little more active than existing. There is a whole continuum along which actions lie. To be clear, the action that a word like “breathing” expresses can be done at many different levels of activity (e.g. if meditating and focusing on one’s breathing); in the previous sentence, I was referring to the level of activity of the actions those words most often express.

Dimension 2: internal-external
This dimension is a binary one. Although—at least from my physicalist perspective—any mental action corresponds to a physical event which could in principle be observed externally, we tend to think of actions like pondering as taking place within the head while actions like surfing take place outside of it. The clearest way I can say this is: internal actions are those the actor could not do were it a philosophical zombie version of itself, whereas external actions could be done just as well by philosophical zombie equivalents. The same physical process can be thought of as either external or internal; it depends upon whether we focus on what happens physically or on what the physical process was like to experience.

Dimension 3: tense
There are a handful of different tenses that verbs can come in, and these correspond to different concepts of what is being done. I mean something different when I say that “I ate a carrot” than when I say that “I was eating a carrot” or  “I would eat a carrot,” etc.

Some examples and two important difficulties
When I use the word “spat,” I’m referring to a concept that is highly active, external, and in the simple past tense. “Would believe” is low-activity, internal, and subjunctive. “Am considering” is medium-active, internal, and present tense.

But how would we categorize the concept I’m referring to when I say, “I looked around the room”? Is the action here internal or external? Both, I think; “I looked around the room” is a sentence which expresses two different ideas at the same time—first the external action of moving my eyes around and second the internal action of systematically perceiving my surroundings. This idea—that one sentence (or even one word!) can express multiple concepts at the same time—is an important one which will come up again.

A second challenge: It’s tempting to say that the tense expressed by sentences like “I went running” is just simple past tense. However, some would say that the concept expressed by the sentence is not that the event John-goes-running actually occurred but rather that I currently remember having gone running. My response here is that, again, the sentence can express multiple meanings at once. I think most speakers would intend to express both that they remember going running and that the event did actually happen. How strongly their statement intends to express each of these components depends on who they are (and probably how much Descartes they have read).


Things
I’ll divide all the things I can think of into five categories and further subdivide some of these groups.

Action-derived things
Given any verb, we can construct a noun (a gerund) by adding “ing.” We can sometimes add other endings too; for example, “to erode” can become “eroding” or “erosion.” I’m not sure whether my concepts of thing-ified actions are in general distinct from their source actions; the thing that is “running” seems no different in my head from the action expressed by “running,” but the idea of a “diversion” does at least seem to have some connotations that aren’t captured well by “divert.” Anyhow, I’ve decided to tentatively include this category.
The internal structure of action-derived things can be carried over from the actions they are sourced from (just leave out the tense dimension).

Property-derived things
As with verbs, we can convert adverbs and adjectives into nouns. “Wise” becomes “wisdom” and “chaotic” becomes “chaos.” Again, I’m unsure whether I understand thing-ified properties in ways distinct from their source properties. “Shiny” feels the same as “shininess” but “wisdom” feels like it may capture a little more than does “wise.” I’ve decided to tentatively include this category, too.
We can give a little bit of structure to property-derived things by thinking about what the properties were properties of. Some property-derived things, like those expressed by “chaos” or “synchronization,” describe a system—we will call these states. Others, like those expressed by “redness” or “wit,” describe a singular entity—we will call these qualities.

External things
Having dispensed with things that are derivative of actions and properties, we’re left with things that feel more thing-like. Of these, some (like emotions) depend on the mind in order to exist, and some do not. We will use the category external things to describe things that exist with or without sentient beings. I'll break external things into two subgroups.

Physical objects
Ah, finally something we understand! Physical objects are all the things made up of fundamental particles. They’re the stuff we can touch, the stuff that’s too small for us to touch, and the rest of the basic entities to which physics applies. Most physical objects we can either think of as conglomerates—groups of individual parts—or as singular wholes.

Facts and Rules
There are some facts about how the world is and rules about how it works. These are what science (and social science, and all sorts of other endeavors) attempts to learn about and describe. Some of these facts and rules are absolutes, like that the sun is larger than the earth and that (to our knowledge) the laws of physics work however it is that they work. Others—like laws of human behavior—are more like rules of thumb which apply in some probabilistic or approximate sense.

It’s important to distinguish the way things are from the way we think they are: the rule that protons and electrons attract is a different thing from the claim, written in some textbook, that protons and electrons attract. The latter is science and the former is the actual phenomenon that science seeks to describe. The former is how things work, whether or not they are being studied (at least this is the view most people take most of the time), whereas the latter is something constructed by humans.

Mental things
This is the category for things that—while they also have physical incarnations in our minds—we think of as existing within our minds. We can break them into three sub-sub-categories. First, there are things that we just experience raw: sense-data, moods, and emotions. Then there are mental objects related to representing the world: worldviews (assignments of structure onto our experiences) and beliefs attached to notions of certainty or probability. Third, there are the things which we understand to help us determine our actions: these are desires and values. Intuitions are a final kind of mental object, which could either be considered part of our raw experience or part of our system of beliefs.

Mind-dependent, individual-independent things
What kind of thing is the story of Little Red Riding Hood? What kind of thing is the idea of a university? What kind of thing is the claim that markets seldom exist in equilibrium? These things are clearly not external the way objects are—I can’t touch them, for sure—and they also aren’t human-independent facts about the world. But they are more universal than what we’ve called “mental things”; whereas my belief that it’s Friday isn’t the same thing as your belief that it’s Friday (simply because it’s your belief), we think of the story of Little Red Riding Hood as being the same story whether it’s in my head or yours (so long as we know the same version). That is to say, the substrate in which it exists is not relevant to its identity. And yet, if there is no mind thinking about some story, or, further, if there is no mind that will ever (or even could ever) think about some story, does the story still exist? I don’t really think so. There may be physical objects (books) capable of causing the story to exist in a mind, but in the absence of minds to hold the stories, I think it makes more sense to say they don’t exist.

Anyhow, we now have a final category of things: those whose existence requires instantiation (or at least the future possibility of it) in some sentient mind, but whose identities are constant across the minds in which they are conceived (assuming they are fully understood). We’ll break these into three subgroups.

Pointers
There are some things whose functions are to gesture toward other things. For instance, the symbol “<” is generally thought of not in terms of its shape or color, but in terms of the mathematical concept we understand it to represent. And yet we have a notion of the symbol “<” itself, distinct from the concept it points to. We’ll call these things—those whose function is just to point to other things—“pointers,” and we’ll divide them into two subcategories.

The first subcategory of pointers is those that are part of a formal language. We include in this category: letters, which point to particular sounds; letter-composed words (or those formed from something else, e.g. Chinese characters), which point to particular things, actions, properties, etc.; and word-composed phrases or sentences, which point to more complicated concepts or strings of concepts. It’s worth noting that mathematics is a formal language, too. Math has symbols and rules about what symbols make sense in which combinations, and strings of mathematical symbols point to ideas which we (usually) understand as something above and beyond the symbols themselves.

That leaves the second subcategory of pointers to contain the ones that are not incorporated into any formal language. For instance, (at least in Western literature) fires tend to symbolize passion and mirrors tend to suggest some kind of self-reflection. However, I don’t know of any formal language that, say, has fire-shaped and mirror-shaped characters for passion and self-reflection, respectively.

Compared to those in formal language, pointers outside of formal language tend to be things that we can also conceive of outside of their roles as pointers; fires and mirrors are things in and of themselves which have had pointer-meanings attached to them, whereas “<” has little meaning outside of the concept it represents. This brings us back to the idea that a single linguistic word can have multiple meanings at the same time. If I point at some wood burning and use the word “fire,” I am simultaneously referring both to fire the physical object and to fire the non-formal-language pointer.

Truth-related / proposition-based constructs
In logic, a proposition is the kind of statement to which one can assign a truth-value (true or false or some degree of uncertainty). In English, declarative sentences (as compared with, say, questions or exclamations) can be thought of as expressing propositions, because we can imagine how they might be true or false. While one can string together concepts we’ve already categorized in order to form propositional statements, we wouldn’t yet be able to be explicit about whether the sentences are intended to express some truth-related content; the sentence “I ate dinner” might just intend to express the idea of me having eaten dinner rather than the claim that I did, in fact, do it. This category is my attempt to capture the notion that statements can have some semantic truth-value rather than just expressing the idea of things.

We can subcategorize a bit: First we just have propositions, which are statements of the proper form for us to consider attaching truth-values to them. Then we have claims, which are propositions with truth-values attached (e.g. “the sun will almost definitely rise tomorrow,” or “it’s possible that I’ll go blind”). Things like hypotheses, conjectures, and theorems fit under the umbrella of claims. Finally, we are familiar with groupings of various claims (with varying degrees of certainty); we call these groupings things like theories or frameworks.
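If it helps, here is a tiny sketch of this three-level structure as Python data types (the class and field names are my own hypothetical choices, not terminology from anywhere else):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Proposition:
    statement: str        # a statement we can consider attaching a truth-value to

@dataclass
class Claim:
    proposition: Proposition
    credence: float       # the attached truth-value / degree of certainty, 0 to 1

@dataclass
class Theory:
    name: str
    claims: List[Claim]   # a grouping of claims with varying degrees of certainty

sunrise = Claim(Proposition("the sun will rise tomorrow"), credence=0.9999)
blindness = Claim(Proposition("I will go blind"), credence=0.01)
everyday_expectations = Theory("everyday expectations", [sunrise, blindness])
```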

This category bears some similarity to the subcategory of external things that we called “facts and rules”—both are about instances of objective truth. These categories are, however, different in an important way. Propositions, claims, theories, etc. are statements about the way things are, whereas facts and rules just are the way things are. Members of the former category attempt to describe members of the latter.

(Non-truth-related) Notions
In the last section I brought up that sometimes we just think about “the idea of things,” without considering any claims. This subcategory contains notions: those mind-dependent, individual-independent things that we just consider, not as things that might be true or false but merely as thoughts. This allows us to capture things like the idea of a university and things we express in phrases like “that I went to the store.”

This subcategory also captures stories; the story of Little Red Riding Hood doesn’t make any particular claims about the world (it’s not saying that the story ever happened), but rather just gives us a series of hypothetical events to consider. What about stories like the one in the Bible, which sometimes are understood as telling us that particular things actually happened? I’d say—as I seem to be saying about all the confusing concepts—that we conceive of the Bible as doing two different things at once. It is both a story (something that just encourages us to consider some hypothetical series of events) and a series of claims about some events in the past. Some would say that it’s a third thing too: a series of claims about abstract concepts, which one has to read through the Bible’s informal language of symbols in order to understand. And of course, “the Bible” also refers to a group of physical objects that fit some description.

I’d like to elaborate on an important subgroup of notions, namely the kind involved in plural nouns. When I say “dogs tend to smell bad,” the concept I’m expressing with “dogs” is not a list of all dogs, since that’s far too cumbersome for my mind to handle. Rather, “dogs” has more to do with “the idea of dogs”—it means something more like “things which have certain properties which are shared by each thing I’ve been told is a dog.”  To say that more generally: I conceive of plural nouns as functions from things to truth values; each plural-function takes in some thing, evaluates whether it is, say, a dog, and then returns “yes” or “no” (or sometimes something in between). Most of these plural-functions seem to work internally by evaluating whether some particular thing satisfies most or all of a list of properties which I use to describe the group. Sometimes, like in the case of “members of the Beatles,” my plural-function works more simply, by just comparing the thing in question to the things written on some list (“Is it John Lennon? Is it Paul McCartney…”).
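To make that concrete, here is a minimal sketch in Python of the two kinds of plural-functions just described (the property names are hypothetical placeholders):

```python
# Property-based plural-function: "dogs" checks whether a thing satisfies
# most or all of a list of properties I associate with dogs.
DOG_PROPERTIES = ["has_fur", "barks", "has_four_legs"]

def is_dog(thing: dict) -> bool:
    satisfied = sum(1 for p in DOG_PROPERTIES if thing.get(p, False))
    return satisfied >= 2   # "most" of the properties

# List-based plural-function: "members of the Beatles" just compares the
# thing in question against an explicit list.
BEATLES = {"John Lennon", "Paul McCartney", "George Harrison", "Ringo Starr"}

def is_beatle(name: str) -> bool:
    return name in BEATLES

print(is_dog({"has_fur": True, "barks": True}))   # True
print(is_beatle("Mick Jagger"))                   # False
```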



Summary
I'll give two different kinds of summaries: one in bullet points and the other in a picture.

Bullet-point summary
  • Properties
    • Of Actions
      • “Adverbs”
      • Prepositional phrases
    • Of Things
      • “Adjectives”
      • Prepositional phrases
  • Actions
    • Dimension 1: degree of activity (or awareness of action while doing it) (continuous)
    • Dimension 2: internal-or-external (binary) (“is it something I could do if I were a philosophical zombie?”)
    • Dimension 3: tense (finite)
  • Things
    • Action-sourced things (if they’re really distinct from actions)
      • Organized with internal-or-external binary and degree of activity
    • Property-sourced things (if they’re really distinct from properties)
      • System-describing (states)
      • Individual-describing (qualities)
    • External things
      • Physical objects
      • Facts and rules (about what there is and how things work)
    • Mental things
      • Raw-experienced things
        • Sense-data
        • Moods
        • Emotions
      • Representations
        • Worldviews
        • Beliefs and knowledge, notions of likelihood or probability
      • Internal choice-determinants
        • Desires
        • Values
    • Mind-dependent, individual-independent things
      • Pointers
        • Formal language
          •  Letter-symbols
          • Words (combinations of letter-symbols)
          • Combinations of words
        • Informal symbols
      • Truth-related / proposition-based constructs
        • Propositions
        • Claims (propositions with truth values assigned)
          • Hypotheses
          • Conjectures
          • Theorems, etc.
        • Groupings of the above types
          • Theories
          • Frameworks
      • (Non-truth-related) Notions (just the idea of things)
        • Stories (or the story-like component of more claim-based accounts)
        • Plurals (functions that take in things and say whether they fit in some group)
        • Others

Visual summary



Thank you
To Margaret, for talking about this and pointing out all kinds of conceptual difficulties.