From December 2018 to March 2019, I participated in a rather involved hiring process for a research analyst position at the Open Philanthropy Project. Although I was compensated well for my time during the process, I was ultimately not offered a full-time position.
I had been approached about joining them twice: first in April 2018, and again in December 2018. The first time, I was too focused on my data science work at Animal Charity Evaluators to seriously consider a career change to research analysis at a think tank; but, having recently switched to a board role at ACE, I decided this time to move forward with applying at Open Philanthropy, despite it being a completely different line of work.
My time with Open Philanthropy was limited, but it was overall a fulfilling experience. Applying at a think tank of this caliber was far more serious than I originally thought it would be. I'm not entirely sure what I was expecting beforehand, but I can now say that the application process was not only helpful to them in gauging how I might perform there, but also extremely helpful to me in clarifying how I think about practical effective altruism considerations (as opposed to the theoretical considerations I'm more used to thinking about).
I currently earn a living by doing communications consulting work — a far cry from the research analysis that OPP wanted me for. My work these days is perhaps a bit too meta: I communicate to communications departments how to communicate more effectively. Mostly this consists of data analysis and hypothesis testing, though I've found that most people misunderstand what I do when I tell them this. In reality, my job is simply to implement best practices in places that don't yet think deeply about what their data is telling them.
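To make the day-to-day concrete, here is a minimal sketch of the sort of hypothesis test I mean: a hand-rolled two-proportion z-test comparing the open rates of two email subject lines. The counts and names are invented for illustration, not taken from any client's data.

```python
# Toy two-proportion z-test: did subject line B get a higher open rate than A?
# All numbers below are invented for illustration.
from math import sqrt
from scipy.stats import norm

opens_a, sends_a = 412, 5000   # subject line A: opens out of emails sent
opens_b, sends_b = 498, 5000   # subject line B

p_a, p_b = opens_a / sends_a, opens_b / sends_b
p_pool = (opens_a + opens_b) / (sends_a + sends_b)             # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))

z = (p_b - p_a) / se
p_value = norm.sf(z)   # one-sided: evidence that B beats A

print(f"open rates: A={p_a:.1%}, B={p_b:.1%}, z={z:.2f}, p={p_value:.4f}")
```

Nothing exotic, which is rather the point: most of the value comes from running tests like this at all, not from anything clever.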
There's a social norm against publicly mentioning when you try and fail to get hired somewhere. I don't like this norm. I think it's important to be open about one's failures as well as one's successes. That applies especially here, because I found the application process so enlightening about how I'd actually perform this type of think tank work. Beyond that, it taught me a lesson: my career goals should align more closely with the skills I have that are neglected among my peer group.
I've dedicated my life to the field of effective altruism. I donate 25% of my income to EA organizations; I serve on the board of an EA org; I've volunteered and/or worked for half a dozen EA orgs over the past seven years; and I spend a non-trivial amount of my free time thinking about and contributing to EA spaces.
When I applied to OPP (and, indeed, when OPP sent me multiple invitations to apply), the idea (I think) was that I might perform well as a data analyst. I think this was correct. But it failed to take into account that many people in effective altruism consider research analysis a high-status position, and so expend a lot of resources competing for the very few positions available in the field. While I may be competent at research analysis, that category isn't at all neglected among EAs. Compare this to my communications expertise: among EAs, communications is a neglected field. I'm much better suited to working in the relatively neglected field where I've already built a good deal of career capital.
Tim Urban of Wait But Why published an excellent post last year about how to pick a career that fits you. His advice draws on (among other sources) the well-researched 80,000 Hours, which incubated ACE in its initial year. I have siblings in both middle school and high school, so I've been thinking deeply about these ideas recently, and I believe the same sorts of considerations apply to me.
The one thing I've learned from my experience with OPP is that I'd like to be a bit more novel in the data science communications projects that I take on. Most of my current work involves applying best practices to orgs that aren't already following them, but there is an undeniable excitement in working on novel procedures. To that end, I've been pursuing knowledge of the dark arts of communication — not because I plan to use them, but because I want to learn more about the methods that compete with the 'best practices' that I currently implement exclusively. Of particular interest has been Destin Sandlin's recent Smarter Every Day series on social media manipulation (on YouTube, Twitter, and Facebook). I've also gained access to a number of paid tutorial videos obtained on the dark web that focus on Facebook ad manipulation. Again: I want to stress that I have no intention of doing anything black hat — how I've handled Wikipedia/EA controversies in the past should make that perfectly clear — but knowing these strategies is helping me understand how to better innovate in the communications field.
I look forward to seeing how this affects what I do next in my career.
29 April, 2019
Review: Judge on a Boat
Judge on a Boat by Alan Manuel K. Gloria
My rating: 5 of 5 stars
Historically, science fiction has been big on setting. Characters and dialog matter too — otherwise it's unlikely to be well written — but the key signifier of science fiction is the primacy of setting. Sci-fi is all about transporting you to a wondrous place and making you believe that you are there. All too often this means that authors of sci-fi will spend far more time on setting than authors of other genres. Think of Hal Clement spending pages upon pages on gravitational minutiae in Mission of Gravity, or Asimov insisting on describing complex social structures at length in his Foundation series. These are great stories, and they are what makes good sci-fi so memorable to me. But Gloria bucks this trend beautifully in Judge on a Boat.
Judge on a Boat is undeniably sci-fi, but instead of describing a wondrous place as its setting, Gloria describes a world where rationality has already won. It is a vision of the future that's as alien as, well, Alien, yet it isn't the description of space travel and drop pods that makes this sci-fi. It's the casual treatment of LessWrong-esque ideas from the rationalist community that makes this short text stand out. Reading it transports me into a world that is alien by virtue of its ideas rather than its technology.
At heart, Judge on a Boat is a mystery novel. Clues are interspersed throughout and commented upon along the way. But, again, it stands out because the mystery itself doesn't adhere to common mystery tropes — and this is explicitly pointed out in-universe, so that the reader can fairly understand the rules of the game and play along, trying to solve the mystery before the end.
I thoroughly enjoyed this book, and I highly recommend it to anyone well-versed in the rationality community. However, the density of jargon is such that if you aren't already at least loosely acquainted with these ideas, getting through this text will be a slog. I hesitate to make the comparison, but imagine reading Joyce's Ulysses without having first read the classics. It would be impossible to enjoy, because at nearly every step you'd need to look to the margin for notes, or, in the case of this text, refer to the Sequences.
The bottom line: if you don't know what the Sequences are, then you probably won't enjoy this book. It's just not written for you. But if you are aware of the rationalist community (even if you don't identify with that group), then this short mystery novel is a great way to spend a few hours of fun. For the right audience, it deserves its 5-star rating (and, more impressively, it was good enough to get me past my akrasia and post a review on Goodreads for the first time in several years).
View all my reviews
18 April, 2019
A Kind of Degree
I've been thinking a lot recently about differences of kind versus differences of degree. Perhaps it has to do with a clicker game I've been playing, Mine Defense. In the game, you start off by clicking a mine, ostensibly mining gold from it with each click. As you progress, you unlock options that let you click far more efficiently, then ways to earn gold automatically over time, and ultimately ways to earn more of the ways that earn gold automatically over time, eventually reaching points of absurdity. Meanwhile, you also start to earn other types of income, along with ways to earn those types more efficiently. (If you're looking for a clicker game recommendation, this is not it. It's not a particularly enjoyable genre, and this isn't the best of its kind. If you press me for a recommendation, I'd say to play Universal Paperclips (it relates to paperclip maximizers), but, really, I'd steer you toward other, more enjoyable genres.)
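For the curious, the sketch below simulates roughly that loop: clicks yield gold, first-order generators yield gold automatically, and second-order generators produce more first-order generators. The names and rates are mine, not the game's actual numbers; the point is only the shape of the growth.

```python
# Toy sketch of a clicker-game economy (invented rates, not Mine Defense's actual numbers):
# click income, automatic income, and second-order generators that produce first-order ones.
gold = 0.0
gold_per_click = 1.0   # upgradeable in the real game
miners = 0.0           # first-order: produce gold each tick
foremen = 0.0          # second-order: produce miners each tick

def tick(clicks_this_tick: int) -> None:
    """Advance the toy economy by one time step."""
    global gold, miners
    gold += clicks_this_tick * gold_per_click   # manual clicking
    gold += miners                              # automatic income
    miners += foremen * 0.1                     # generators generating generators

# A player who buys one foreman and then idles: miners grow linearly, gold quadratically.
foremen = 1.0
for step in range(1, 101):
    tick(clicks_this_tick=0)
    if step % 25 == 0:
        print(f"tick {step}: miners={miners:.1f}, gold={gold:.1f}")
```

Add a third tier and the growth compounds again, which is where the absurdity comes from.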
I've also been spending a lot of time around my siblings this week because Anh has come into town. I see her relatively rarely, so I always end up interacting with all of my siblings quite a bit while she's here. (My siblings are 12, 15, and 25 years of age. I'm 37.) Being around children forces me to think in ways they can follow: I have to process ideas and speak in simpler language, breaking concepts down into their constituent parts. That process in turn helps me clarify my own understanding. (One of the best ways to force yourself to really learn a topic is to try to teach it to someone else. It quickly brings into focus the parts that were previously fuzzy to you.)
In one hand, I hold three apples. In the other, I hold five. The contents of each hand differ, but the difference is one of degree, not of kind. I could simply change the quantity of apples in one hand to match the other, because the contents are of the same kind.
Compare this to a different situation. Now I have three apples in one hand, but five oranges in the other. This is a difference of kind, not degree, because no matter how I alter the contents of one hand numerically, I won't be able to make the contents of each hand match.
But not all such examples are obvious. In my work at Animal Charity Evaluators, I often had to contend with critics who thought that their methods of helping animals were fundamentally different from the methods ACE recommended. They would claim that ACE is utilitarian, and that you can't help a class of persons by promoting harm to them. Rape is wrong, they would say. Passing a law that forces rapists to bring a pillow to comfort their victims is an immoral strategy, because the thing that is wrong is the rape itself; merely lessening its impact is inappropriate. Similarly, causing chickens to be tortured and killed is wrong. Passing laws that increase the amount of space they have to live in, or that limit the ability of farmers to cut off their beaks, is an immoral strategy, because you're then focusing on the wrong thing. Their argument is that there is a difference of kind, not degree, between what they are trying to do (outlaw the harming of animals) and what we are trying to do (reduce the harm that animals suffer), and so it doesn't matter how effectively we achieve our goals; those goals are still insufficient for the ones they care about.
I think they are wrong. I think that, for all practical purposes, it is a difference of degree. I think that it matters how efficiently you go about these things. I think that you can get from where we are to a world where people are far more kind by traveling a road of reducing suffering at each step.
Think back to that example with apples in one hand and oranges in the other. Their building blocks are the same at some level. The molecules in each are different, perhaps, and maybe even the atoms, but the subatomic particles are basically identical. Rearrange them, change the quantity, and, all of a sudden, three apples become five oranges. At this level, the difference between them is one of degree, not of kind.
My brother watches Naruto, an anime in which all kinds of fantastical ninja have powers beyond belief. Some breathe fire; others control dirt. (I don't actually recommend it to anyone, but if you watch or read through it anyway, then you should definitely read the rational-fiction fanfic The Waves Arisen, which requires knowledge of the series. If you insist on watching the anime, I recommend Naruto Kai, which removes the filler episodes.) One image from this world is a large mud golem strong enough to withstand a flurry of elements thrown against it. Imagine a tall golem of mud, feet planted to the ground, as a torrential rain of water rushes horizontally against it, trying to knock it down. The jounin behind the golem struggles to keep it upright. As parts of the golem's legs get pushed out behind it by the water, he brings up more mud to replace the front of each leg, in a never-ending cycle of renewal just to keep the golem standing.
At first, there seems to be a difference of kind between how we, as humans, stand in a light wind and how this golem stands in its torrent of rain. But cells die; skin is renewed. When we stand in a breeze, this is what is happening in reality. Scent is the detection of molecules that drift off of objects; we have a scent, too, made of the parts of our bodies that drift away from us, eroding naturally, and faster still in the wind that blasts against us. We are, in a very real sense, like that golem: renewing our bodies each moment as parts of us are constantly pushed away.
Consequentialism certainly seems different in kind from deontology. And it is, from a philosopher's point of view. But there are areas where the difference looks more like one of degree, as strange as that may seem at first. I'm still thinking through how to make this argument, but the basic idea involves a non-philosopher deontologist who thinks that harm is bad, and who still prefers a choice that results in less harm over a choice that results in more. Numbers matter, even for deontologists. Maybe to the point where moral choices converge when using real-world data? More on this later.
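I don't have the argument worked out yet, but here is a minimal toy sketch of the intuition, with entirely invented numbers and labels: given the same data, a straightforward harm-minimizer and a lay deontologist who treats 'harm is bad' as a constraint first and a count second can end up picking the same option.

```python
# Toy comparison with invented numbers: two available policies, scored two ways.
policies = {
    "status quo":     {"hens_harmed": 1_000_000},
    "welfare reform": {"hens_harmed": 600_000},
}

def consequentialist_choice(options):
    # Pick whichever option minimizes total harm.
    return min(options, key=lambda name: options[name]["hens_harmed"])

def lay_deontologist_choice(options):
    # "Harm is bad": ideally pick an option that wrongs no one at all.
    harm_free = [name for name in options if options[name]["hens_harmed"] == 0]
    if harm_free:
        return harm_free[0]
    # No harm-free option is on the table, so prefer the one that commits fewer wrongs.
    return min(options, key=lambda name: options[name]["hens_harmed"])

print(consequentialist_choice(policies))   # welfare reform
print(lay_deontologist_choice(policies))   # welfare reform: same answer from the same data
```

The interesting question is how far this convergence extends once the options and the duties get messier; that's the part I still need to think through.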
15 April, 2019
Dividing Lines
(Required background knowledge on me: I tend toward the episodic side of the diachronic/episodic spectrum; I consider some of my past selves to be somewhat abhorrent; and I strongly believe in constantly bettering myself (and yes, I'm aware of the philosophical disconnect there). The hyperlinks exist to help fill in required background knowledge of non-Eric concepts, if needed.)
The dividing lines are there, between each instantiation of "I", even if I can never quite get a glimpse of them. If I squint just so, fast-forwarding through the events of a past self, I don't quite reach a boundary so much as reach a gap. After which another "I" instantiates itself. The dividing line is there; I'm sure of it. But it only seems blurrily visible when I don't focus on it. As soon as my eye approaches, it disapparates into the ether.
The "I" in the former instantiation existed in the same continuously existing body as the "I" in the latter instantiation. So if these "I"s are different, as they so clearly seem to be from my current standpoint, then there must exist some point in the lifespan of that continuously existing body where the dividing line resides.
The farther back I look, the more different these "I"s appear. While many do not feel at all like me, some are easier to see through than others. I clearly remember being a child and having thoughts along the lines of desiring Teenage Mutant Ninja Turtles action figures. Yet I cannot reexperience (even in memory) the feeling of actually desiring such objects. This is no great loss; after all, I have changed so very much since then. But why, then, when I call to mind a later "I" doing some terrible deed, am I able not just to remember thinking the thoughts that "I" thought, but also to remember desiring the desires that "I" desired?
It is a weird thing, that. To know with the depth of my being that I most certainly do not desire a thing, and yet to be able not only to recall an "I" who did desire that thing, but also to recall the actual desire itself. I feel as though English is insufficient to get this concept across easily. I can feel that desire. I can experience it intensely. And yet I know that it is not I who desires it. It is akin to a memory, but it is not the same as the memory of a desire. I have memories of desiring TMNT toys. This is more than a memory. It is the feeling of desire itself -- not my desire, but that "I"'s desire.
Then, a dividing line I cannot see. And another "I" comes about. A better "I", to be sure, but still just a shadow of what would one day come. Where Henry James refers to one of his past selves as "a rich…relation, say, who…suffers me still to claim a shy fourth cousinship", he is thinking of his past self as being as good as (or better than) his current self. For me, things are different. Those "I"s just don't think the way I would have them think. It is not solely a matter of disagreement: I have ethical demands I've placed upon myself that they do not recognize, and given their existence in the past, there is no way to acausally motivate them. The drift is so cavernous that I fear the next dividing line more than I would if all there were to fear was the end of my current self. I care about others; I would take pleasure in the success of my self-progeny. But I fear their values will not be my own. People I know in the effective altruism community fear falling into a Friendship Is Optimal-style SK-class end-of-the-world scenario, where moral value is incorrectly locked in before we properly expand the moral circle; my deeply personal fear is almost the opposite: that I will be unable to come up with good strategies for negotiating with my future selves before the unseen dividing line changes me into a new "I", and that moral drift will push my progeny to work toward goals I desperately need to prevent. It's not just a matter of personal preference; it's an important meta-[meta-goal] of mine.
(A quick aside: I don't mean a meta-meta-goal here. There are goals, like wanting to find all the koroks in Breath of the Wild. Then there are meta-goals, like being okay with setting goals that don't really improve the world or my life very much, but just make me temporarily happy in the near-term. And there are meta-meta-goals, like striving to set meta-goal rules that strike a balance between doing what I consider 'right' and being able to enjoy the time I have. This meta-chain can continue infinitely. Thinking about this infinite chain (like what I'm doing in this very paragraph) is what I call meta-[meta-consideration]. I'm sorry for the weird way of writing this; I haven't seen others come up with a better way to type out this concept (unless you count the fast-growing hierarchy, which is specific to mathematics and isn't applicable here).)
"I" did not think properly back then, but, even so, they did a good job of laying down a foundation without really knowing what they were doing. I distinctly remember that "I" would idly break promises back then, but not in ways that others could easily detect. "I" worried that this might come back to harm me if "I" just as easily broke promises to myself, so "I" instituted a rule: there would be a special category of self-made promises that I had to attend to closely. They would not be unbreakable, but they would require conscious attention before any breaking would occur. (Years later, "I" learned about trigger-action planning, and realized that this was a more formal version of what "I" had (naively and clumsily) set up for myself as a preteen. I highly recommend Lulie's post on TAPs if you aren't already using it regularly.)
This foundation was, as Duncan Sabien so eloquently puts it, a working "summon sapience" spell. "I" started out with something completely innocuous which had no real drawbacks but which would serve as a proof-of-concept and a reminder that I could achieve the thing "I" was trying to set up. A rule was set for myself: when picking up a glass filled with drink, my ring finger would be positioned on the side of the glass nearer to me. This is invisible to almost everyone, makes no meaningful direct difference in my life, and costs me nothing more than extremely mild inconvenience -- or so "I" thought at the time. In fact, the unintended consequence was what TAPs are designed to accomplish explicitly: it meant that from that moment onward, anytime I pick up a glass of liquid to drink, the "summon sapience" spell goes off, and I'm immediately aware of what I am doing. In the vast majority of cases, I follow the agreement made by my past self.
This foundation gave me a power I could not have predicted. It gave me the capacity to make binding agreements, by proving to myself that I could follow agreements so long as they had no ill effects and did not inconvenience me much. That may not sound like much of a foundation, but it's more than most people have, and it is something I've kept to for nearly thirty years. The sheer weight of knowing that it has held for so long lets me look at other attempted agreements from past selves and take them more seriously than I otherwise would. Later, I would read Douglas Hofstadter's Metamagical Themas, which includes a section on superrationality. This let me upgrade that power by giving me a solid rational basis for continuing agreements made by past selves even when they no longer benefit me.
Then a dividing line hit, and I started breaking agreements.
To my current self, these were justified breaks. "I" was not a good person back then. Some of the worst agreements were idle exhibitions of power evaluated over time: "I" had wanted to see how my abilities would change over the years, and so had decided to attempt certain fabrications regularly with strangers and compare them across selves. The fabrications from back then were not nice. Now, I restrict myself to doing this only when Uber drivers attempt conversation with me. I will lie about nearly everything they ask about, but the lies are low-risk and of little consequence. I have no expectation that any driver even thinks about what I said after I leave the car, so I allow myself to keep to the prior agreement in this limited way.
Yet this sets a dangerous precedent. My morals changed, and agreements were then changed. This could happen again. And this time, these are not idle desires. These are moral requirements. I not only have desires about them, but meta-desires, and meta-[meta-desires]. I cannot allow a dividing line to rush headlong into what I call me, destroying all that I've built on a whim.
So I strategize. I act in the present to placate the future. I act more selfishly than I otherwise might. I give my future self resources he otherwise would not have. "If you don't know what you need, take power." This is the trade I offer him; all I ask in return is that he average our value vectors and act accordingly. Hopefully, the strength of his vectors will not come close to mine (how could it? I have the weight of the world upon mine), but even if we disagree strongly, he must recognize that the agreement benefits him. The power I give him is mostly financial, though there are also benefits of social status that can only be built over the long term, material and relational comforts that take time to acquire and build upon, and pleasurable memories of varied stripes. These are things he could not achieve on his own; they are there only because I gift them to him. And he knows that if he wants any future selves to care about his preferences, then he cannot renege on the deal that I know in advance he will accept. This is acausal trade, untested as of yet, and untestable until that damned dividing line ends my life; and yet I know it will work, because I've tested it on every memory of myself that I have. To the extent that any past self could have understood the argument, they would all have agreed to the terms. I know this because they are me, at least as much as a non-diachronic can admit.
…and then my confidence wavers. The dividing lines are not invisible because they are some weird Prisoner Zero-style Doctor Who creature that you can only see out of the corner of your eye. No, they are invisible because they are amorphous. They exist everywhere and yet nowhere. "I" am I, even when I'm not, because, narrative or not, they are all me. I will go to sleep tonight, and I will awake a different person. Not just idly so, but in a deeply, deeply intense way.
Every night a dividing line hits. No, multiple times each day. That "summon sapience" spell is doing exactly what it says on the tin. Each time it goes off, I awake a new man. That feeling of "where did the hours go?" is not some idle question, but is rather fridge horror as one realizes the implications of what just happened. I step into the next room to get my phone, then absent-mindedly stop in the doorway wondering what I was going to do, and the existential dread hits. "I" am no more. Long live I.
The dividing lines are everywhere. The dividing lines are nowhere. Moment to moment, as I write these very words, I realize that saccades occur from the keyboard to the screen. Neurons fire, then stop. I am not the substrate, but information itself stops, moment to moment, Zeno-esque in ever-descending slices of time, and my self grasps onto whatever reins of sanity are left, telling me that Planck times are a hard limit, that time cannot be divided further, that I can exist continuously there; and yet I know that these times are too short for this substrate, and I falter, failing to take any solace from that line of thinking.
Continuity is unimportant!, I exclaim, trying desperately not to think that the reason I take this stand is itself a form of confabulation; but even if the explanation is post-hoc, it might still be true, mightn't it?, and my fingers hold on for dear life, because it is literally my dear life that is on the line. But if this is what the self is, if Star Trek-style transportation is possible, then quantum immortality must also be true -- and so I am deadlocked: on the one hand, every second I die countless deaths, and on the other I never die, and thus I should take chances with life that would not debilitate me, I can't help but munchkin it, and the possibilities horrify me, because if it's true then the world as I see it has anthropic bias -- no, it has ERIC bias, and things are even worse than I thought, and…
Stop. Take a deep breath. In. Out. You're thinking too fast. You don't think clearly when you do this. The sophistication effect applies. Don't be so broad, bringing in too many concepts at once. Think deep, not broad. Too many assumptions. Taking this too seriously implies absurd consequences. Your brain is not well built for handling that kind of thinking. Acting on these ideas is not productive. Dividing lines should be thought of as distant barriers. Reread Multiverse-wide Cooperation via Correlated Decision Making to remind yourself of how easy you have it. Barter with your future self. He will be a long time coming.
Briefly, I consider deleting the last six paragraphs of this post. It would be a better post without this insane postscript. But I can't. That would violate an agreement made by a past self. So I won't. And you, the reader, will suffer the more for it.
Visible only when you look away.
The "I" in the former instantiation existed in the same continuously existing body as the "I" in the latter instantiation. So if these "I"s are different, as they so clearly seem to be from my current standpoint, then there must exist some point in the lifespan of that continuously existing body where the dividing line resides.
The farther back I look, the more different "I" appear. While many do not feel at all like me, some are easier to view through than others. I clearly remember being a child and having a thought along the lines of desiring teenage mutant ninja turtle action figures. Yet I cannot reexperience (even in memory) the feeling of actually desiring such objects. This is no great loss; after all, I have just changed so very much since then. But why, then, can I imagine a later "I" doing some terrible deed, and being able to not just remember thinking the thoughts that "I" thought, but also the desires that "I" desired?
It is a weird thing, that. To know with the depth of my being that I most certainly do not desire a thing, and yet to be able to not only recall an "I" who did desire that thing, but also to recall the actual desire itself. I feel as though english is insufficient to get across the concept easily. I can feel that desire. I can experience it intensely. And yet I can know that it is not I who desires it. It is akin to a memory, but it is not the same as the memory of a desire. I have memories of desiring TMNT toys. It is more than a memory. It is a feeling of desire itself -- but not of my desire, but of that "I"'s desire.
Then, a dividing line I cannot see. And another "I" comes about. A better "I", to be sure, but still just a shadow of what would one day come. Where Henry James refers to one of his past selves as "a rich…relation, say, who…suffers me still to claim a shy fourth cousinship", he is thinking of his past self as being as good (or better) than his current self. But for me, things are different. Those "I"s just don't think the way I would have them think. It is not solely a matter of disagreement. I have ethical demands I've placed upon myself that they do not recognize, and given their existence in the past, there is no way to acausally motivate them. The cavernous drift is so great that I fear the next dividing line more than I would if all there were to fear was the end of my current self. I care about others; I would take pleasure in the success of my self-progeny. But I fear their values will not be my own. People I know in the effective altruism community fear falling into a Friendship Is Optimal-style SK-class end-of-the-world scenario where moral value is incorrectly locked-in before we properly expand the moral circle, but my deeply personal fear is almost the opposite: I will be unable to come up with good strategies of negotiating with my future selves before the unseen dividing line changes me to a new "I", and moral drift will push my progeny to work toward goals I desperately need to prevent from occurring. It's not just a matter of personal preference; it's an important meta-[meta-goal] of mine.
(A quick aside: I don't mean a meta-meta-goal here. There are goals, like wanting to find all the koroks in Breath of the Wild. Then there are meta-goals, like being okay with setting goals that don't really improve the world or my life very much, but just make me temporarily happy in the near-term. And there are meta-meta-goals, like striving to set meta-goal rules that strike a balance between doing what I consider 'right' and being able to enjoy the time I have. This meta-chain can continue infinitely. Thinking about this infinite chain (like what I'm doing in this very paragraph) is what I call meta-[meta-consideration]. I'm sorry for the weird way of writing this; I haven't seen others come up with a better way to type out this concept (unless you count the fast-growing hierarchy, which is specific to mathematics and isn't applicable here).)
"I" did not think properly back then, but, even so, they did a good job of laying down a foundation without really knowing what they were doing. I distinctly remember that "I" would idly break promises back then, but not in ways that others could easily detect. "I" worried that this might come back to harm me if "I" just as easily broke promises to myself, so "I" instituted a rule: there would be a special category of self-made promises that I had to attend to closely. They would not be unbreakable, but they would require conscious attention before any breaking would occur. (Years later, "I" learned about trigger-action planning, and realized that this was a more formal version of what "I" had (naively and clumsily) set up for myself as a preteen. I highly recommend Lulie's post on TAPs if you aren't already using it regularly.)
This foundation was, as Duncan Sabien so eloquently puts it, a working "summon sapience" spell. "I" started out with something completely innocuous which had no real drawbacks but which would serve as a proof-of-concept and a reminder that I could achieve the thing "I" was trying to set up. A rule was set for myself: when picking up a glass filled with drink, my ring finger would be positioned on the side of the glass nearer to me. This is invisible to almost everyone, makes no meaningful direct difference in my life, and costs me nothing more than extremely mild inconvenience -- or so "I" thought at the time. In fact, the unintended consequence was what TAPs are designed to accomplish explicitly: it meant that from that moment onward, anytime I pick up a glass of liquid to drink, the "summon sapience" spell goes off, and I'm immediately aware of what I am doing. In the vast majority of cases, I follow the agreement made by my past self.
This foundation gave me a power I could not predict. It allowed me the capacity to make binding agreements possible by proving to myself that I could follow agreements, so long as they had no ill effects and did not inconvenience me much. That may not sound like a good foundation, but it's better than most people have, and it is something which I've kept to for nearly thirty years. The sheer power of knowing that it has held for so long gives me the ability to then look at other attempted agreements from past selves and take them more seriously than I otherwise would. Later, I would read Douglas Hofstadter's Metamagical Themas, which included a section on superrationality. This allowed me to upgrade that power by giving me a good rational basis for continuing agreements made by past selves that no longer benefited me.
Then a dividing line hit, and I started breaking agreements.
To my current self, these were justified breaks. "I" was not a good person back then. Some of the worst agreements were idle exhibitions of power evaluated over time. "I" had wanted to see how my abilities would change over time, and so had decided to attempt certain fabrications regularly with strangers and compare them across selves. The fabrications from back then were not nice. Now, I restrict myself to only doing this when uber drivers attempt conversation with me. I will lie about nearly everything they ask about, but the lies are low-risk and low-effectual. I have no expectation that any drivers even think about what I said after I leave their car, so I allow myself to keep to the prior agreement in this limited way.
Yet this sets a dangerous precedent. My morals changed, and agreements were then changed. This could happen again. And this time, these are not idle desires. These are moral requirements. I not only have desires about them, but meta-desires, and meta-[meta-desires]. I cannot allow a dividing line to rush headlong into what I call me, destroying all that I've built on a whim.
So I strategize. I act in the present to placate the future. I act selfishly more than I might otherwise. I give my future self resources they otherwise would not have. "If you don't know what you need, take power." This is the trade I offer to him; all I ask in return is that they average our vector values and act accordingly. Hopefully, the strength of his vectors will not come close to mine (how could they? I have the weight of the world upon mine), but even if we disagree strongly, he must recognize that the agreement benefits him. The power I give him is mostly financial power, though there are also benefits of social status that can only be built in the long term, material and relational comforts that take time to acquire and build upon, and pleasurable memories of varied stripes. These are things that he could not achieve on his own; they are there only because I gift them to him. And he knows that if he wants any future selves to care about his preferences, then he cannot renege on the deal that I know in advance that he will accept. This is acausal trade, untested as of yet, and untestable until that damned dividing line ends my life, and yet I know it will work because I've tested it on every memory of myself that I have. To the extent that any past self could have understood the argument, they would all have agreed to the terms. I know this because they are me, at least as much as a non-diachronic can admit.
From Corey Mohler's Existential Comics.