31 January, 2016

Review: A Girl Corrupted by the Internet is the Summoned Hero?!

A Girl Corrupted by the Internet is the Summoned Hero?! by Eliezer Yudkowsky
My rating: 5 of 5 stars

The intersection of decision theory and sexual depravity is not my usual go-to place for light reading, but in this case the combination works beautifully.

Although the book is woefully short, the author has managed to set up a rational hard fantasy premise that successfully thrusts the reader from beginning to climax. (Puns intended.)

The book will be more enjoyable if you have some familiarity with Japanese light novels, tropes concerning B theory time travel paradoxes (or, more specifically, consistency re: decision theory), and ...advanced... pornography. But it's readable even without these prerequisites.

I'm giving it five stars because it was really good; but at the same time I'm quite disappointed at the extremely short length. I feel teased by how short it lasted!


25 January, 2016

Review: The Northern Caves

The Northern Caves by nostalgebraist
My rating: 4 of 5 stars

Postmodern moral philosophy fiction is somewhat hard to come by, so when this story was recommended to me, I immediately found myself reading it in its entirety.

I was thoroughly riveted by each successive chapter. The more I read, the more I wanted to read. I especially loved the setting, a series of forum posts from the early 2000s. The setting felt true-to-life; I vividly remember participating in phpBB forums just like this when I was young. And the story performs beautifully in the beginning and middle, continually rising toward a climax that I don't want to talk about in this review, for fear of spoilers.

But I will say that the final chapter hits hardest if you stop and think through the entirety of the story afterward. For me, it took several minutes and a slight reread before I fully integrated the final chapter and its place among the rest of the book.

Despite the glowing recommendation above, there are a few problems with the story that keep it from getting five stars. I'll try to avoid big spoilers, but if you haven't read the book yet, you probably shouldn't read the rest of this review.

The Notes XV - XVI chapters bring a tragedy that seems somewhat out of line with the concept of Mundum; it just doesn't seem to fit. (Mundum is anti-causal, yet causes the event at the restaurant? I understood Mundum to necessitate, but not cause, tragedy.) Saying that this story point might not have to fit for postmodern reasons is insufficient in my opinion. Also, Salby pointedly avoids writing about sex, commenters bring this up specifically, and then nostalgebraist writes in sexual content anyway -- this just seems like the author playing with the readers. Finally, that last chapter, even though it makes a really good point about the entirety of the story, nevertheless feels deflationary. Sure, it was a brilliant ending in one sense because it made me think, and it brought me around to thinking something very different from what I was thinking just one chapter prior, but at the same time, it just made me feel bad. While I was reading this story, I got more and more invested, and I feel almost like the author cheated me out of the resolution I wanted. Instead, I got a 'resolution' that made me almost regret the way I thought about previous chapters. This was a good thing, I think -- it is certainly a novel way of reading a story like this, and I'm glad to have been exposed to it -- but I almost feel like I've been rick-rolled. Sure, it's great and fun this time, but if I ever read another book that does this to me again, I think I'll throw it down in disgust.


22 January, 2016

Kissing, Self-Modification, & CEV

I have a strange relationship with kissing. I don't consider myself a good kisser, I don't particularly enjoy kissing, and I tend to feel a bit squicky about kissing.

This is odd. I don't really know anyone else who shares similar feelings about kissing on all these fronts. I've mostly dealt with it by avoiding kissing when appropriate, and 'doing my duty' when needed. Most people either don't notice or at least pretend not to notice, but there have been a few who have questioned me about it.

It started when I was young -- younger, I think, than I can reliably remember. I think that my mother held her hand in front of my eyes whenever people would kiss on television, and this gradually turned into a habit of averting my eyes whenever an onscreen kiss would occur. To this day, if I am watching television and two actors kiss, I am immediately taken out of the story and have to consciously use willpower to not turn my head away from the screen. For me, kissing ruins otherwise good stories, though I do look past it when I can. (Similar to how I notice and am annoyed by superheroes that ignore Newton's laws of motion or space battles with audible explosions that should be silent; I look past these as well and try to engross myself in the story whenever possible, but these things always take me out of the storyline at least temporarily.)

This is especially weird when watching porn. I have no problem seeing many sex acts, but as soon as the actors kiss, I feel squicky. I suppose this is because I was trained to look away from kisses, but I never had a parent make me look away from actual sex acts, since I never had a parent in the room when those sex acts occurred on screen.

And then there was N—. She wasn't the first I'd kissed, but was still one of my first. She broke up with me for one reason or another, as young people sometimes will, and I asked her why. Looking back, it wasn't a particularly good question to ask, because she was annoyed with me at the time, and was very likely to lie. But for some reason, when she said it was because I was a terrible kisser, I believed her. I was barely a teenager at the time. Now, I know better. She was just being mean, or at the very least rude. But the thought stuck in my head anyway, and never really went away.

Of course, there's also the issue of my teeth. Today, I like my teeth. I like their distinctiveness and I have grown quite fond of the shape of the hole I make when biting into an apple. But it may be easy to understand why I haven't always felt this way. Those who know me in person will no doubt have noticed that one of my front teeth skews forward at a slight angle. It's significant enough to not be easily missed by anyone who talks with me in person, let alone any who kiss me. For a long time, I felt embarrassed by it, and even though I now like it for its many benefits (distinctive whistling sounds, ease of dental identification should I die in a fire), it is nonetheless something I consciously think of whenever I kiss someone, and that's not really a good feeling.

So today, whenever I kiss someone on the mouth, it is quite a conscious experience. It is never 'in the moment'. Like with TV, if I kiss someone lips to lips, I am very much aware of what I am doing, how I am doing it, and what the other person might be thinking of the experience. It is not sexy, nor romantic, nor in any way a positive experience for me. It takes me out of the experience and very much turns me off. Nevertheless, I usually just soldier through it, which isn't particularly difficult to do. I do kiss; I just don't really enjoy it.

What makes all of this even more strange is that I'm the sort of person who will kiss new people I meet on the cheek. It's a kind of greeting that I've inherited from my family, for whom kissing is the most appropriate way to greet any person you're saying hello or goodbye to. But as this is the cheek, not the mouth, it doesn't bother me at all.

In fact, most kissing does not bother me. I enjoy kissing others, and being kissed in return, just as much as I love close contact with friends and family. So long as it is not on the mouth, I'm very much in favor of kissing, whether it is with a partner or a family member. But the thought of kissing someone I'm romantically involved with on the mouth.... Even as I wrote that last sentence, I found myself shuddering involuntarily (though only slightly).

It's not really a rational preference. I get that. I'm sure that with practice and a little self-reflection, it's the kind of thing that I could 'fix'. But I've never really felt a desire to fix it, just like I have never really felt a desire to 'fix' my distaste of brussels sprouts, or the fact that I'm sapioromantic rather than someone who feels romantic attraction to others for more physical qualities. It's never really been a problem -- at least it hasn't been in the past.

But, for some people, kissing is important. Important enough that my enjoyment of kissing (on the mouth) would be required for them to enjoy any kind of romantic contact. So the question arises: what level of brain modification am I okay with?

When I first learned about the horrors of industrial agriculture, I felt compelled to abstain from eating meat. When I fully realized the impact I could make through effective altruism, I began donating a significant amount of my income. When I learned about my own invisible privilege, I took steps to try to make that privilege more visible so that I could act more appropriately. Each of these was a beneficial type of information hazard that spurred me to action once I learned the underlying truth of reality. In each case, I felt it was appropriate to modify the normal behavior of my own brain so that I could become a better person. I anticipate making many more such changes in the future, and a large part of my idle thoughts go towards predicting what my coherent extrapolated volition might be once I become aware of more beneficial information hazards. (Infohazards are quite well named, given that they demand immediate self-modification when viewed, even if that change is 'beneficial', since, from the point of view of the pre-changed mind, that change is, by its very definition, hazardous.)

Yet this is a peculiar situation. This is no beneficial infohazard. This case is more like someone asking me to self-modify to enjoy the taste of brussels sprouts. It is a lateral change; not a positive one (from my perspective). Sure, were I to self-modify to enjoy kissing, it would give the other person utility -- and, in a way, I'd gain utility by creating a new way for me to enjoy reality (plus, I'd gain the utility from enjoying being with this person) -- but if I were to accept this kind of self-modification as being acceptable, then I should also be okay with self-modifying to like brussels sprouts.

In Douglas Adams' Restaurant at the End of the Universe, there is a cow that wants to be eaten. Much has been written about the idea of a rational being that places utility in others doing something to it that we would otherwise consider harmful, but I'd like to focus on the part where this being was effectively made to desire something that we would ordinarily expect it to not desire at all. In the book, others made the cow to be born with such a desire -- but imagine, instead, that it self-modified to have such a desire.

If I were to be taken as a slave, and had access to an oracle AI that informed me that I'd be a slave for the remainder of my life, then would it be rational for me to self-modify my utility function to desire being enslaved? If yes, then surely I should also be willing to self-modify to enjoy brussels sprouts or enjoy kissing. But I think the answer is no, which doesn't necessarily mean I should or shouldn't self-modify for lateral utility changes.

I'm a consequentialist, but I have no desire to permanently enter Nozick's experience machine, mostly because I place some value on being hierarchically higher when choosing between a simulation and reality (or between two simulations). (Friendship is Optimal is a horror story, no matter what anyone else tells you. It is most definitely not in my CEV.) So if entering the experience machine is bad, then doesn't that imply that self-modifying to enjoy brussels sprouts would also be bad?

I don't know. I'm not sure I'm really thinking straight about this, because I'm tempted to think that maybe self-modifying to be vegetarian would be bad from my own point of view, and is only justified because of others' points of view (like the harmed animals). But if that is the justification for why it is good to become vegetarian, then shouldn't I also make any self-modification that would cause more good overall? Like maybe undergoing plastic surgery, or losing weight, or having less extreme political views, or even wearing orange less often. Let's not bring up gay conversion therapy, which is much more serious than these other ideas. Yet even these other ideas seem terrible to me. They seem obviously wrong, and so something has surely gone awry in my thinking on this topic.

(In any case, I should point out here that none of the above thinking applies to the situation of getting children to try vegetables they don't at first like. From what I understand, humans evolved to have children find sweet things pleasurable and bitter things unpleasurable at a first taste, but to grow to like bitter things after repeated eatings, so that parents can get children to eat the farmed vegetables while still having them avoid poisoning themselves on wild plants. On this theory, children are primed to learn to eat new bitter tastes after a few tastings, even though adults have a much more difficult time of learning to like the taste of something they previously disliked.)

07 January, 2016

The Double-Crux Game

One of the greatest joys I've personally experienced is that feeling you get when you genuinely change your mind. It's especially rewarding when you can feel the dominoes falling as each step in a logical sequence causes you to change your mind on increasingly complex lemmas after a basic premise's truth value switches.

In everyday life, this kind of thing is somewhat of a rare occurrence. But it doesn't have to be.

The double-crux game originated at CFAR.

I first learned about the Double-Crux game when McKenzie Amodei and Andrew Critch of the Center for Applied Rationality taught a workshop on it at EA Global 2015 in Mountain View. The idea is to get you and someone you disagree with into a situation such that hopefully one of you will be able to change your mind. It is genuinely one of the most awesome experiences I've ever had, and it is well worth the amount of effort that is involved in really and truly thinking it through.
  1. First, you need a friend who's willing to do this with you. It only works if you're both intellectually honest, willing to change your mind if presented with sufficient evidence, and excited about the progress of getting closer to truth, even if that means that one's current view is incorrect.
  2. Second, you need to identify an intellectual disagreement that you have between the two of you that you'd like to focus on. The process doesn't always work, but if it does, one of you will be changing your mind on this issue. The first time you try this, it will work best with a binary proposition of the form A or ~A. But once you get the hang of it, you can also use it for non-binary disagreements, like one of you believing A with 90% confidence and the other with 60% confidence.
  3. Third, you might want some paper to write down your thoughts. It's not strictly necessary if you have a good memory, but it's definitely helpful at least the first time you attempt it.
To start, write out statement A, for which the two of you disagree. For your first attempt at this game, one of you should think A is true; the other should think A is false.

[Image from Richard Acton.]
Now you must find a double-crux of A. Let's say that you think A is true. Then you need to find a crux B of A such that you also think B is true, but if you could be persuaded that B were false, then it would also cause you to change your mind about A. Meanwhile, your partner will be trying to find their own crux: they believe B to be false, but if they could be persuaded that B were true, then they'd change their mind about A. Your goal is to find the exact same statement B that serves as a crux for both you and your partner, hence the term "double-crux".

If you've done this correctly, then you will find that there are now two statements that the two of you disagree on. You think both A and B are true, while your partner thinks both A and B are false. Further, you honestly believe that, if B were false, then A would be false, too -- and your partner believes that, if B were true, then A would be true as well. This means that if either of you were to change your mind about B, then that person would also change their mind about A.

Now we can recursively go through the above steps, replacing A with B. The two of you find a double crux for B which will be called C. Then a double crux of C called D. And so on. At each step, write down the double crux statement. Each time you do so, make sure it is a genuine double crux. This doesn't work if you grudgingly accept a statement as a crux; it only works if you honestly believe that the truth value of the crux is causally related to the truth value of the former statement.

Eventually, the two of you may reach a double crux statement Z for which data can be looked up. This is the goal of the cooperative game. It should be an issue that is easy for the two of you to agree upon, perhaps because you can look it up directly on Wikipedia, compute it in Wolfram Alpha, or simulate it in Guesstimate. Once this happens, the two of you will agree on statement Z. For one of you, this means that you have effectively changed your mind on Z.

Because Z was a crux of Y, that person should also change their mind on Y. And X, W, V, U, etc., all the way to A.

If you are both intellectually honest about what really constitutes a double crux each time, then once you reach the end condition of finding a checkable statement Z, one of you will correspondingly change your mind about A.
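The way agreement on Z propagates back up the chain can be sketched as a toy model (the names and structure here are hypothetical illustrations; the real game happens in conversation, not code):

```python
# Toy model of the double-crux chain: each statement's truth value is,
# by construction, tied to the truth value of its crux (the next
# statement in the chain), so settling Z settles everything above it.

def resolve_chain(chain, z_truth):
    """chain: statements ordered from A down to the checkable Z.
    z_truth: the looked-up truth value of Z.
    Returns the truth value each statement must take once Z is known,
    assuming every adjacent pair really is a genuine double crux."""
    return {statement: z_truth for statement in chain}

# You believe everything in the chain is true; your partner believes
# everything is false. Looking up Z resolves the whole disagreement.
beliefs = resolve_chain(["A", "B", "C", "Z"], z_truth=False)
print(beliefs["A"])  # Z turned out false, so A falls with it: False
```

The model is only as good as its assumption: if any link is a grudging rather than genuine crux, the propagation breaks at that link, which is exactly the failure mode described below.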

The experience of changing one's mind through this method is absolutely breathtaking. Being able to watch as your brain goes through each step, legitimately changing your mind on increasingly complex lemmas until you reach the main statement of disagreement, is genuinely awe-inducing. It is one of my favorite feelings, and I highly recommend trying it out for yourself.

Unfortunately, it doesn't always work. I've often gotten into situations where my partner and I can each find a crux, but not a double crux. And sometimes when working with people new to the double crux game, we run into the issue of someone claiming something is a crux when it actually turns out not to be. But even in these "failures", I've found the double crux game to be an immensely rewarding experience, because it usually helps both sides to understand the perspectives of each other much more.

  • It can be easier to look for your individual single cruxes first, then to check with your partner to see if it might be a double crux with them.
  • You can find your single crux by asking "what things could, in principle, change my mind on this topic?"
  • Cruxes are better when you make the claim concrete and specific. Vague claims might be legitimate cruxes, but it will be very difficult to recursively work with something too vague. Be specific; quantify your claim; put a percent likelihood on your predictions. That way you won't get stuck, unable to find a double crux for a too-vague statement.
  • Once you understand how to do this with a boolean statement, feel free to move on to more complex disagreements. If you believe A is 90% likely while your partner believes it is 60% likely, that disagreement can be resolved by the double crux game as well.
From Regex: The game flows more smoothly if both people are familiar with Bayesian priors and posteriors, and if every statement has a degree of belief listed; but I've successfully played the game with non-Bayesians using Boolean statements with only 100%/0% degrees of belief, and it has nevertheless worked well.

If you're having trouble finding double cruxes, Regex has come up with something called the Duct Tape Contention technique that helps to identify a double crux by explicitly graphing out the reasons for each of your beliefs. It takes a long time to do it this way, but it can be helpful if you find yourself stuck without knowing how to find a double crux. You may want to check out this sample duct tape graph that Regex provides to show how the technique can identify double cruxes.

Richard Acton has created a presentation that gives a great example of two players playing the double-crux game. It's short, but it can be helpful to see a real life example of finding a double crux from a statement of disagreement. [EDIT (July 2018): Acton now has an updated prezi that's formatted better.]

Once you're familiar with the double crux game, PeteMichaud points out that you can use the game to resolve cognitive dissonance by examining your internal epistemic rationality, or even to build deep, sustainable caring by playing between the parts of you that think some part of the world matters and the parts of you that are afraid to look in that direction. Of course, this requires that you first know how to notice and parse out these parts of yourself, and it might not be helpful to people who believe they have an intuitive grasp of value. But for those of us who are moral anti-realists that wish to construct value anyway, this seems like a promising technique to get the entirety of your brain on board with a consistent ethical position.

If you want to learn more about the double-crux game, or if you want to experience it first hand in a setting with others that are similarly interested in better navigating intellectual disagreement, you may be interested in going to one of the Center for Applied Rationality's workshops. Or, you might want to invite me to work through a mutual disagreement. Regardless, I heartily recommend playing the Double-Crux game. It's well worth the effort needed to work through each level with a trusted friend.

EDIT (Dec 2016): Duncan Sabien has posted a high quality introduction to Double Crux on LessWrong. I highly recommend that people not only read Duncan's post, but also the several high quality comments left by users of LW.

EDIT (July 2018): Richard J. Acton has commented below with a link to an updated prezi he's created that has some additional rationality techniques added. On LessWrong, deluks917 has posted a concrete multi-step variant of double crux. Both are worth checking out.

04 January, 2016

Time for Work

Wake up. We don't want to be late for work.

Groggily, I stir. No music or podcast to help me wake this morning because I left my phone downstairs. I try to open my eyes. It doesn't work. It's the first day of work after a long holiday vacation, and I'm still not in the right headspace for getting up this early. But I manage anyway, mostly through a combination of masturbation and pointing a fan directly into my face.

Within the hour, I'm stepping out of the car and walking into Panera. I order a spinach and artichoke soufflé and a cup of iced tea. Four refills later, I'm finally ready for the metro.

Usually I like the metro. Living in Germantown, MD, and commuting twice each week to Alexandria, VA, really isn't that bad when you have podcasts and a comfortable pillow. But today is the first cold day of winter, despite it being January 4, and I left my phone at home. So I pull out my 3DS and play an old copy of Link's Awakening for the entirety of the trip.

Once at the office, I feel like I can finally relax. I know exactly what I need to accomplish today, and I'm anxious to get started. Long breaks have their ups and downs; when I'm away from work for too long, I generally start to worry. What if the automation I set up breaks? What if the tracking codes I set up aren't tracking correctly? What would ordinarily be a mild inconvenience can turn into a moderate disaster when I'm away from the office for a few weeks at a time. But now, I am back, and more is right with the world.

So now: the problem solving.

  • First, a new keycard system has been implemented. My current keycard won't let me onto my floor, so I need to get a new one. But that's several floors away, so first, a coworker buzzes me in and I see what else I need to do before getting a new keycard.
  • Ah, a form needs to be filled out. This happens at the beginning of every month, but for some reason I always forget until my calendar reminds me. This needs to be submitted to another floor as well; I should fill this out and submit immediately after getting my new keycard.
  • Hm... I have uncashed checks? These emails always throw me for a loop. I have two checks from the company that I apparently never cashed; one back in January of last year and one this past June. For a moment, I scold myself. Am I really so bad about money that I can just forget to cash two checks from my employer like this? The email is asking if I need the checks re-issued. .:sigh:. I guess I should look and see if I have the checks first. The last time I misplaced a check it was because I used it as a bookmark, thinking that there'd be no way to lose it, since of course I'd come back to the book and find it. Except it took me several months before I opened that book again. /c:
  • Next is a reminder that I'm supposed to take a lunch each day. I didn't in my last two work days, because there was so much to do at the end of the year and I went without lunch. I need to remember not to do that kind of thing. I still remember the time in a previous job when, to hit a deadline, I worked all night long, literally sleeping at my desk that night so I could finish on time. Back then, I thought it said something positive about my work ethic. Now I realize it said something negative about my ability to plan ahead. I haven't done anything that bad since, but skipping lunch two days last month is a move in that direction, and I don't like it. I'll definitely take a lunch today.
  • Next up is the monthly report. I only work two days each week at this place, so compiling a monthly report takes up way more time than it should. I really need to code up an automated version, but I won't have time to do that today. Maybe next month.

Okay. The rest is analysis, which I need dedicated time to spend on, so I should fill out this form and grab my keycard first. When I get back, I can begin checking up on how well the store performed over the holiday season.

But first, I should probably publish this blog entry.