23 December, 2021

Moral Cooperation with a Colleague

Oesterheld on Multiverse-wide cooperation
I’ve been thinking a lot lately about the various ways we deal with others whose values aren't aligned with our own. When Aumann's agreement theorem doesn't apply because the disagreement is over object-level values rather than beliefs, what’s the best way to proceed? We can't just double-crux at that point. Self-modifying value handshakes? dath ilan-style Pareto-optimal deals?
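
To make the "Pareto-optimal deal" idea concrete, here's a toy sketch. Everything in it is invented for illustration: the outcomes, the utilities, and the no-deal point. The shape of the procedure is what matters: keep only the outcomes neither party would jointly walk back, then, among those that leave neither side worse off than no deal, pick the one maximizing the product of gains over the no-deal point (the Nash bargaining solution).

```python
# Toy illustration: two agents with mismatched object-level values pick a
# deal. Every outcome, utility, and the no-deal point below is invented.

# Joint outcomes scored as (my_utility, their_utility).
outcomes = {
    "no_deal":         (3, 1),
    "my_way":          (6, 0),
    "their_way":       (1, 7),
    "partial_deal":    (4, 4),
    "full_compromise": (2, 6),
    "bad_deal":        (1, 1),  # dominated: partial_deal is better for both
}

def pareto_frontier(outs):
    """Drop any outcome that some other outcome weakly dominates."""
    return {
        name: (u1, u2)
        for name, (u1, u2) in outs.items()
        if not any(
            v1 >= u1 and v2 >= u2 and (v1, v2) != (u1, u2)
            for v1, v2 in outs.values()
        )
    }

# Nash bargaining: among Pareto-optimal outcomes that leave neither of us
# worse off than no deal, maximize the product of gains over the no-deal
# point.
d1, d2 = outcomes["no_deal"]
frontier = pareto_frontier(outcomes)
candidates = {n: (u1, u2) for n, (u1, u2) in frontier.items()
              if u1 >= d1 and u2 >= d2}
deal = max(candidates, key=lambda n: (candidates[n][0] - d1)
                                   * (candidates[n][1] - d2))

print(frontier)  # the outcomes neither of us would jointly walk back
print(deal)      # -> "partial_deal", with these made-up numbers
```

Of course, choosing the payoffs is where all the real work hides.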

What about when I have the upper hand? It’s a contingent upper hand, not a necessary one, so maybe I need to make decisions that benefit all potential alternate versions of me? (In what ways is this different from benefiting them-as-an-alternate-of-me?) Is this the main purpose of being gracious? I want to do the right thing at the meta level, taking into account the probability that I'm just wrong; does that mean I should compromise object-level values even when there appears to be no game-theoretic reason to do so?
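
A concrete (and entirely made-up) version of the "alternate versions of me" test: score each policy as if I didn't know which side of the disagreement I'd have landed on, weighting each side by the probability I assign to having ended up there. The numbers below are invented; only the shape of the calculation is the point.

```python
# Toy "veil" calculation: evaluate each policy as if I didn't know which
# side of the value disagreement I'd have ended up on. All numbers invented.

# My credence that a version of me lands on each side of the disagreement.
p_side = {"mine": 0.6, "theirs": 0.4}

# Hypothetical payoff to whoever holds each side, under each policy.
policies = {
    "press_my_advantage": {"mine": 9, "theirs": 0},
    "compromise":         {"mine": 6, "theirs": 6},
}

for name, payoff in policies.items():
    veil_score = sum(p_side[s] * payoff[s] for s in p_side)
    print(f"{name}: {veil_score:.1f}")
# press_my_advantage: 5.4
# compromise: 6.0  -> the gracious move wins, under these invented numbers
```

Whether this is the right decision theory for the situation, or just a rationalization of graciousness, is exactly what I'm unsure about.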

I have a person in my life who has a serious difference in object-level values with me, and I’m in a position where I don’t have to compromise, even though our interactions on the issues those values touch are ongoing and unavoidable, and they care a great deal about the difference. I'm considering compromising despite not needing to, but I'm also wary of setting up a perverse incentive for my future dealings.

I'm still thinking deeply about this. About the supposed value of graciousness. About when meta-level values should take priority over object-level ones. About how I'd feel if I were on the other end of this situation. (Badly, I'd expect. And powerless.) I really don't want to fall into the trope of someone who doesn't update properly.

I really need to continue thinking about this.
