More thoughts on Utility Calculations

A short while back I wrote Utility Calculations in Practice so that I could get some ideas out of my head. Unfortunately, said ideas did not leave me alone and I still find myself scribbling little follow up notes in the margins of my scratch paper instead of being able to concentrate on other work. Here are three loosely structured continuations, two short then one long, and then maybe I can have peace.

For reference, below are the principal equations from the previous article.

z, a principal actor
U_z, the set of universes with z as principal actor
A_z, the set of actions available to z
\rho : U_z \times A_z \rightarrow \{ f:U_z \rightarrow \mathbf{R}, f \text{ a probability measure} \}, a probabilistic state of the world after an action
\nu : U_z \rightarrow \mathbf{R}, a universal value function
\mu :=(\int_u \nu(u)\rho(w,a)(u))-\nu(w), the expected value of the action (utility function)
d : U_z \rightarrow A_z where d(w) satisfies \emph{max}_{a\in A_z} \mu(w,a), the best action to take

\displaystyle \mu_z := (\sum_{\rho_z(c_z(e_z(w)),a)(u)>0} \nu_z(u)\rho_z(c_z(e_z(w)),a)(u)) - \nu_z(w), calculable utility of action
d_z : U_z \rightarrow \{ a \text{ s.t. } \mu_z(w,a) > s_z(w) - \epsilon\} \subset A_z where \epsilon is some error term, satisficing formula for best action

Looking at it now, it seems too compact to be at all useful to the casual reader so feel free to comment asking after anything that’s unclear.

First, a note on something that shows up in the original equations but that I did not bring to the reader’s attention at the time. I normalized the expected value of the action by subtracting the value of the current world but never explained why. Since integrating \rho over the space gives a value of 1, we can pull \nu(w) inside the integral, which lets us reference only the difference between current and future worlds when calculating the value function. This drastically reduces the inherent complexity of the operation and is likely the only way the calculation could be feasible.

\displaystyle \mu :=[\int_u \nu(u)\rho(w,a)(u)]-\nu(w) = \int_u [\nu(u)-\nu(w)]\rho(w,a)(u)
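The identity is easy to check numerically on a finite set of worlds. Here is a minimal sketch in Python; all names and numbers (the three worlds, the values, the forecast) are purely illustrative:

```python
# Numeric check of the normalization identity on a finite world set.
# All names (worlds, nu, rho) and numbers are illustrative.

worlds = ["w0", "w1", "w2"]
nu = {"w0": 1.0, "w1": 4.0, "w2": -2.0}   # value of each world
rho = {"w0": 0.2, "w1": 0.5, "w2": 0.3}   # P(next world | w, a); sums to 1

w = "w0"  # current world

# Left-hand side: expected future value minus current value.
lhs = sum(nu[u] * rho[u] for u in worlds) - nu[w]

# Right-hand side: expected *delta* in value, with nu(w) pulled inside.
rhs = sum((nu[u] - nu[w]) * rho[u] for u in worlds)

assert abs(lhs - rhs) < 1e-12  # both equal 0.6 on this data
```

The right-hand side only ever needs the difference \nu(u)-\nu(w), never an absolute valuation of an entire universe.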

Second point of note: it’s tempting to address some of the concerns raised in the original article by including a term for the value of the action itself in our calculation. This idea stems from the conviction that many actions are inappropriate for most worlds (e.g. murdering everyone around you might sometimes be the right thing to do, but the likelihood that you’re in one of those situations is vanishingly small) and that by rejecting them outright, we cut down on the search space for the correct action. Since calculation is hard for us, this makes sense as a piece of practical morality.

How do we dial in the value of the action relative to the customary world? A linear combination is the most straightforward approach though, as will be shown later, not without its complications.

\bar{\nu}:A\rightarrow \mathbf{R}, static value of action
d : U_z \rightarrow A_z where d(w) satisfies \emph{max}_{a\in A_z} c_1\bar{\nu}(a) + c_2\int_u (\nu(u)-\nu(w))\rho(w,a)(u), the best action to take (with correction for value)
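As a sketch of how this decision rule might look in code — the function names, actions, and numbers below are my own invention, not part of the formalism:

```python
# Hypothetical sketch of the corrected decision rule:
#   score(a) = c1 * static_value(a) + c2 * expected_delta(w, a)
# All names and numbers are illustrative.

def best_action(actions, static_value, expected_delta, w, c1, c2):
    """Pick the action maximizing the linear combination of the action's
    intrinsic value and its expected change in world value."""
    return max(actions,
               key=lambda a: c1 * static_value[a] + c2 * expected_delta(w, a))

static_value = {"help": 1.0, "steal": -5.0}         # \bar{\nu}: value of the act itself
deltas = {("w", "help"): 0.5, ("w", "steal"): 2.0}  # a toy world where stealing pays
expected_delta = lambda w, a: deltas[(w, a)]

print(best_action(["help", "steal"], static_value, expected_delta, "w", c1=0, c2=1))  # steal
print(best_action(["help", "steal"], static_value, expected_delta, "w", c1=1, c2=0))  # help
```

With c_1 = 0 the rule picks the profitable theft; with c_2 = 0 it refuses regardless of payoff.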

‘Pure’ virtue ethics might be an example of this model with c_1 set to 1 and c_2 set to 0 while ‘pure’ utilitarianism is the opposite.

This approach is obviously counterproductive from a utilitarian perspective and a ripe opportunity for more subjective value to sneak in. The wrangling over \bar{\nu} will get just as political as the wrangling over \nu and possibly worse since this new function is simpler in application and so more widely accessible to opinion. Proponents of the extension will argue that (in contrast to the world value function) it’s actually feasible to calculate value of action type and that, if done well, this approach gives an answer within the satisficing set with a high frequency. Their opponents say that no one is providing accountability on this claim.

Third and finally, consider the poor soul who wants to balance the concerns of multiple value functions. This is a natural enough desire for many reasons. Most value functions are too simple to capture everything we feel is important. Many accompany group membership and are required to remain in high standing. Some have very obvious flaws and it’s tempting to patch over them by applying another system.

The naive method is to use a weighted arithmetic mean (or any linear combination) of utility functions, as we did above with the ‘action value’. However, there’s a flaw: this collapses to a single value function because you can pull all the terms through the integral.

\sum_i c_i\mu_i(w,a)
= \displaystyle \sum_i c_i\int_u \nu_i(u)\rho(w,a)(u)
= \displaystyle \int_u [\sum_i c_i\nu_i(u)]\rho(w,a)(u)
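A quick numeric check of the collapse, with illustrative names and numbers: the weighted sum of expected utilities is exactly the expected utility of one blended value function.

```python
# Demonstration that a linear combination of utility functions collapses
# to a single value function. All names and numbers are illustrative.

worlds = ["u0", "u1", "u2"]
rho = {"u0": 0.1, "u1": 0.6, "u2": 0.3}     # P(u | w, a)
nus = [{"u0": 1.0, "u1": 2.0, "u2": 3.0},   # nu_1
       {"u0": 5.0, "u1": 0.0, "u2": -1.0}]  # nu_2
c = [0.7, 0.3]

# Combine-then-average: sum_i c_i * E[nu_i]
lhs = sum(ci * sum(nu[u] * rho[u] for u in worlds)
          for ci, nu in zip(c, nus))

# Average a single blended function: E[sum_i c_i * nu_i]
blended = {u: sum(ci * nu[u] for ci, nu in zip(c, nus)) for u in worlds}
rhs = sum(blended[u] * rho[u] for u in worlds)

assert abs(lhs - rhs) < 1e-12  # linearity: the two are always identical
```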

The worst example of this is a weight function parameterized by initial world state (i.e. c_i that is really a function c_i(w)) since that is just asking for self-motivated leakage.

You’ll notice this method also grants as premise that the output numbers from each different utility function are directly comparable. This is a very strong statement; ask a Christian how they would compare number of souls saved to number of children. We have no reason to believe this idea of direct comparison is true or to believe that normalizing will be simple. The outputs could involve different scales (or units) or be shifted from each other by some large constant. Some could respond exponentially to small shifts while others remain largely constant.

There are alternate schemes that don’t suffer the “pass through the integral” problem. If we assume no negative values then it’s possible to use the geometric mean. This differs from the common arithmetic mean by combining its components as a product (weighted by exponents) instead of a weighted sum. The geometric mean is nice because it’s useful in situations where normalization is a problem.

\displaystyle \prod_i [\int_u \nu_i(u)\rho(w,a)(u)]^{c_i}

or, since the logarithm is monotone, the rank-equivalent form

\displaystyle \sum_i c_i\log[\int_u \nu_i(u)\rho(w,a)(u)]
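A sketch of the computation on illustrative data: the product and log forms agree, and (unlike the arithmetic case) no single blended value function reproduces the result.

```python
import math

# Weighted geometric mean of expected utilities, assuming every expectation
# is positive. All names and numbers are illustrative.

worlds = ["u0", "u1"]
rho_a = {"u0": 0.5, "u1": 0.5}                         # distribution after action a
nus = [{"u0": 1.0, "u1": 9.0}, {"u0": 4.0, "u1": 4.0}]
c = [0.5, 0.5]

def expected(nu):
    return sum(nu[u] * rho_a[u] for u in worlds)

# Product form: prod_i E[nu_i]^{c_i}
geo = math.prod(expected(nu) ** ci for nu, ci in zip(nus, c))

# Log form: sum_i c_i * log E[nu_i] -- same ranking, cheaper to compute
log_geo = sum(ci * math.log(expected(nu)) for nu, ci in zip(nus, c))
assert abs(geo - math.exp(log_geo)) < 1e-9

# It does NOT collapse: averaging a single blended function
# prod_i nu_i^{c_i} gives a different number.
collapsed = sum(math.prod(nu[u] ** ci for nu, ci in zip(nus, c)) * rho_a[u]
                for u in worlds)
assert abs(geo - collapsed) > 0.1   # ~4.47 vs 4.0 on this data
```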

One way to visualize the difference between the two means is to picture a multi-axis space where each axis corresponds to one utility function, and note that solving for each individual utility gives us a tuple in this space. The arithmetic mean corresponds to the L_1 norm, the sum of the components. The geometric mean corresponds to the volume of the box formed when taking the tuple as one corner and the origin as the other. (As an aside, consider the L_2 norm. Euclidean distance sidesteps the “pass through the integral” problem as well.)

Even if the structure of the space is “really” additive in some sense, the geometric mean will still act as an approximation of the arithmetic mean, albeit one with a bias toward balanced tuples.

Non-negative values are a tough sell because we base our value calculations on the delta for simplicity, and deltas are frequently negative. This method also asks a lot of its user by requiring one to either multiply many large numbers or take the log of each argument before aggregating. I guarantee that no one is using this approach in matters of practical or immediate ethics.

I don’t have much to conclude other than that I remain skeptical of any scheme that claims to aggregate or combine disparate utility functions unless supported by very clean and explicit math.

Utility Calculations in Practice

I have issues with utilitarianism as usually stated. It encourages poor calculation and, as a tool, is not a good fit for the problem it is trying to solve. The topic is complicated and this post is not meant to stand against any argument in particular, merely to capture the particulars of the situation as they appear in my mind.

By my understanding, utilitarianism is about the following calculations.

z, a principal actor
U_z, the set of universes with z as principal actor
A_z, the set of actions available to z
\rho : U_z \times A_z \rightarrow \{ f:U_z \rightarrow \mathbf{R}, f \text{ a probability measure} \}, a probabilistic state of the world after an action
\nu : U_z \rightarrow \mathbf{R}, a universal value function
\mu :=(\int_u \nu(u)\rho(w,a)(u))-\nu(w), the expected value of the action
d : U_z \rightarrow A_z where d(w) satisfies \emph{max}_{a\in A_z} \mu(w,a), the best action to take

Taking this framing as correct, there are four categories of objection that I end up playing out.

Objections to the mathematical structure. Can we apply a suitable measure to U_z or is \rho incoherent for this reason? Can we integrate over U_z for the purposes of \mu? These are the weakest objections because they hinge on my math skills (and those have become quite rusty) but for completeness’ sake we cannot presume to use the machinery of analysis without first satisfying its prerequisites.

Objections from physics. Are U_z, A_z well-defined? Does \rho place requirements on cause-and-effect that are realistic or do we require more variables to capture the set of things that could happen?

Objections from humanity. Is \nu well-defined? There’s been no end of argument to resolve the question “What is the good?” and any attempt to get a utility framework off the ground has to bootstrap off some answer here.

Objections from computer science. Is d computable? Is \mu computable? Is \rho computable? Are U_z,A_z encodable? Our string of constructs is of little use even as a thought experiment if it cannot be converted into a procedure for people to resolve emergent moral quandaries.

Going from that last set of objections, practicality forces humans to accept a few constraints when approximating this ideal. The simple structure above is better rendered with the following.

e_z: U_z \rightarrow U_z', epistemology, worlds we know
c_z: U_z' \rightarrow U_z'', reductive encoding of a world into a model
i_z: U_z'' \rightarrow \{\text{actions } z \text{ can imagine taking}\}\subset A_z, imagination function for generating actions
\rho_z : U_z'' \times A_z \rightarrow \{ f:U_z'\rightarrow \mathbf{R}, f \text{ a probability measure}, f \text{ non-zero on a finite set} \}, a set of probabilities to hold in your head (replace finite with finite parameterization if that bothers you)
\nu_z value function of z, surely z has a utility function
\displaystyle \mu_z :=  (\sum_{\rho_z(c_z(e_z(w)),a)(u)>0} \nu_z(u)\rho_z(c_z(e_z(w)),a)(u)) - \nu_z(w), calculable value of action
s_z(w) := \emph{max}_{a\in i_z(A_z)} \mu_z(w,a), the best change in world value

put it together to get the best action

d_z : U_z \rightarrow A_z where d_z(w) satisfies s_z i.e. \mu_z(w,d_z(w))=s_z(w)

or the satisficing version

d_z : U_z \rightarrow \{ a \text{ s.t. } \mu_z(w,a) > s_z(w) - \epsilon\} \subset A_z where \epsilon is some error term.
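To make the pipeline concrete, here is a toy sketch in Python. Everything beyond the symbols defined above — the integer worlds, the two actions, the particular \epsilon — is invented for illustration:

```python
# Toy sketch of the human decision pipeline: perceive (e_z), model (c_z),
# imagine actions (i_z), forecast with finite support (rho_z), then satisfice.
# All concrete worlds, actions, and numbers are illustrative.

def satisficing_actions(w, e, c, i, rho, nu, epsilon):
    model = c(e(w))                     # reduce the perceived world to a model
    actions = i(model)                  # only the actions we can imagine
    def mu(a):                          # calculable utility: finite-support sum
        return sum(nu(u) * p for u, p in rho(model, a).items()) - nu(w)
    s = max(mu(a) for a in actions)     # s_z(w), the best achievable change
    return [a for a in actions if mu(a) > s - epsilon]

# Toy instantiation: integers stand in for universes, nu is the number itself.
e = lambda w: w                         # perfect perception
c = lambda w: w                         # trivial model
i = lambda model: ["wait", "work"]      # two imaginable actions
rho = lambda model, a: {"wait": {0: 0.9, 1: 0.1},
                        "work": {1: 0.5, 2: 0.5}}[a]
nu = lambda u: float(u)

print(satisficing_actions(0, e, c, i, rho, nu, epsilon=0.5))  # ['work']
```

A larger \epsilon admits more mediocre actions into the satisficing set, which is the whole point: the agent stops searching once anything close enough to the best is found.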

Notice how many invocations of z now appear. So many opportunities for subjective judgment, or simply errors in judgment, to sneak in; for a program that aims to provide an objective measure, that should be a problem.

Different people deal with the subjectiveness in different ways. Life hackers focus on expanding and optimizing i_z. A lot of arguments arise in what makes an appropriate c_z. There’s much academic and theological agonizing over the size of \{ \nu_z \} and the apparent difficulty of extracting an agreeable \nu. Similar agonizing over the optimal amount of self-interest in each \nu_z. Human forecasting (dubious profession that it is) tries to refine \rho_z. The bedrock of science is refining e_z. This last is my particular favorite though strictly speaking it is only a small portion of the final equation.

All of these are genuine and noble responses to the muddiness of the equations and I don’t want to detract from that urge. It’s just that the muddiness is inherent to the project and the rhetoric around it does not appear to grant this. Comparison of ethical systems is best served by acknowledging that they’re all heuristics and then examining which distortions each structure is alternately vulnerable to or guarded against.


Debt Owed to The Last Psychiatrist

Go read The Last Psychiatrist if you haven’t already. Go ahead, this page can wait, I’ll still be here. The author is not a shy writer, whichever article you picked probably gave you a clear sense of the blog, the author’s mission (did any keywords “aspiration”, “alienation”, or “narcissism” show up?), and hopefully was an entertaining ride. Before the hiatus, the updates were never very frequent but they were always worth the wait and the community of the comment sections was strong. After reading, I’d always go away vaguely anxious and convinced that I learned more than I did and, consequently, I’d always return to try to piece together what ideas in the prose activated that resonance in me. There may not be substance below every bit of rhetoric but there were some lessons learned and I’m here to acknowledge my debt to TLP.

The media is dangerous and it is not on your side. Amply represented on the blog by the twin catchphrases “If you’re watching this, you’ve already lost” and “If you’re watching it, it’s for you”, this is arguably the principal message of the later series of posts. The idea is that all media is some form of advertising. Authors want to be read, talk shows want you to watch, and for all of them there is no such thing as negative name recognition. If you know their name then you are engaged and once the audience is engaged, they can be manipulated. The chief form of manipulation is the aspirational image. It’s fashionable to have a lifestyle and lifestyles change like fashion. Cigarette ads push an aesthetic but so do car commercials and so does cable news. Whatever box you’ve drawn around media, it isn’t big enough. There’s always another factor to mind because media is the tool of power. And as the tool of power, nothing dangerous happens there; it is what you’ve been allowed to see. (TLP might be a bit cynical.)

We’re all lazy. Acquiring the symbol of a thing is always cheaper and doesn’t require as much work as acquiring the thing itself. It’s easier to buy a sports jersey than to play professionally, it’s easier to sit in a coffee shop reading blogs on your Mac than to be a freelance writer, and it’s easier to complain in the comments section than to engage in political activism. This is why we pursue symbols of success so fervently – they provide a sense of progress while distracting from any push for real change. In fact, we have plentiful defense mechanisms against change. We don’t need more reasons not to change. If you’re not consciously aware that you’re changing (and you will be aware, because it will feel unpleasant) then you’re probably not changing in any way. Sweat is the most useful indicator that you’re doing something meaningful, both the sweat of work and that of fear. So go and sweat for your goals. The truth is that we only have limited time and there will always be more chores to do, so you’re compelled to be brutal or despair. Don’t talk, do – you can’t give advice, you must live your advice.

Intellectually too. We’re bad at understanding our emotions and our impulses. Hate, for example. Despite their rhetoric, most people don’t hate the Yankees, they actually hate Yankees fans. This serves a useful process of othering (and by extension, channeling the aspirational impulse) but is difficult to justify on socially accepted grounds. So it’s best understood as a polite lie that everyone ignores about themselves. Another example, the view that fighting is inherently bad. TLP would say that people are not frightened of the violence of a fight but by the fact of it. Violence has become so rare (so gauche) that we’d rather not deal with it at all. Considering that zero tolerance policies are not about creating justice, this means that this view implicitly furthers injustice.

Intangibles are dangerous. Some intangibles are really valuable! I personally get a great deal of mental peace out of visiting the ocean and will fight to stay close to one for the rest of my life. But you have to be able to cash out on your intangibles in some way (does that just make them tangible goods?). Usually, they’re a means to control you. Abandon hope – it’s an intangible that prices you for less. Promises of good things are not good things and you can’t treat them like they are. Beware receiving confidence, they’re playing a long game. TLP would have you trust only what you see and not wait for what you want.

Finally, I was most grateful for the idea that national news stories in the days of 24-hour media are “just local crime stories blown up nationally.” This let me classify and discard a lot of ephemera that I had been paying attention to. Companies have no incentive to present statistically representative stories to you so, if it’s truth you’re after, you have to look elsewhere.

Alexandr Chakroff

If you’re interested in understanding human conceptions of morality, you owe yourself some time to read Alexandr Chakroff’s dissertation. I know, no one actually reads dissertations any more and you’re a very busy person, so I’ll do my best to summarize the key points below, but I think that you’ll enjoy the read more than you expect for two reasons. One, Chakroff is a student of Joshua Greene. Greene is… well, I’m not entirely a fan but he’s become fairly famous for his work in moral judgment. At some point I’ll write up my thoughts on his engaging book, Moral Tribes. Two, Chakroff is more principled about designing experiments than Jonathan Haidt. There are places where Haidt inserts himself by “thinking hard” about a problem and then pronouncing an answer (e.g. the six categories in his moral foundations theory). Chakroff avoids this as much as possible by taking a data scientist’s view. What he loses in detail, he gains in clarity.

To demonstrate this principled approach, let me use as an example the first experiment from his dissertation. The purpose of the experiment is to explore the structure of the moral domain across a diverse sample of humanity, and it was structured as follows.

  1. Generate a cross-cultural list of sample moral violations. Participants were recruited through Mechanical Turk and individually asked to brainstorm as many moral violations as possible. The set of participants was almost gender balanced, represented a wide range of political orientations, and was drawn from North America and India. Duplicate or vague responses were removed, leaving 550 items in the list.
  2. Generate a cross-cultural set of moral judgments on these violations. Again, Mechanical Turk was used and participants were asked to rate the severity of 90 violations drawn randomly from the above list.
  3. Principal Components analysis on the resulting rankings. The algorithm extracted two principal components of significance.
  4. Intuition gathering. Participants from Mechanical Turk were presented with violations showing either high rankings in the first component or high rankings in the second and asked questions about those violations and the hypothetical agents involved.
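Step 3 is ordinary principal components analysis on a raters-by-violations severity matrix. A schematic with random stand-in data (toy sizes; the real study drew from 550 violations, 90 per rater) might look like:

```python
import numpy as np

# Schematic of step 3: PCA on a participants-by-violations severity matrix.
# The data here is random noise standing in for real severity ratings.

rng = np.random.default_rng(0)
ratings = rng.random((40, 12))          # 40 raters x 12 violations (toy sizes)

centered = ratings - ratings.mean(axis=0)        # center each violation column
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)              # variance explained per component
components = vt[:2]                              # the two leading components

print(f"variance explained by first two components: {explained[:2].sum():.2f}")
```

On real data one would then inspect which violations load most heavily on each of the two leading components, which is what feeds step 4.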

The two components that Chakroff extracts do reinforce earlier results of Haidt. There was a well publicized result showing differences in moral reasoning between American liberals and conservatives, particularly in the categories that Haidt related to purity judgments. Chakroff shows a similar drift in his two categories.

Category one consists of harmful acts. These are “acts that directly and negatively impact others’ welfare” and corresponds to the individualizing grouping in moral foundations theory. Judgements of violations in this category depend on the situational context of a given problem and on the intent of the agents involved. Those guilty of category one violations are not trusted to refrain from further such violations (though they can be forgiven) but are not anticipated to commit category two violations.

Category two consists of impure acts. These are “acts that deviate from normative behavior, without necessarily hurting others” and corresponds to the binding grouping in moral foundations theory. Violations in this category are attributed to internal factors of the violating agent. While similar themes (food, sex) appear across different cultures, the particulars of the rules are local to each culture. Those guilty of category two violations are judged defective and not trusted to refrain from either category two or category one violations. Once you’ve violated a normative rule, you’ve shown yourself to be an unpredictable agent and all bets are off.


Just got an answer to my question from the other day.

“Is some other purpose of the university that subsumes the need for credentials, that doesn’t conflict with the agreed restrictions on publicly funded organizations (e.g. the BBC), and that allows for a principled discussion on what is too extreme to be allowed?”

Marduk neatly shows why the second constraint is inappropriate and why my follow up question comparing universities to city governments is a category error.

Q. “And what is a public university if not a state entity?”
A. An autonomous charity for the furtherance of knowledge and education.

Universities may receive money from governments but that does not make them sub-departments of those governments. Rather, they fall into the same grouping as foreign aid. Government grants can become instruments of policy, and the oversight bureaucracy attached to them grows over time. This does not prevent us from rejecting the claim that universities are directly responsible to the government, and by extension the public at large.

Also from the same thread, StillGjenganger gives a succinct summary of the liberal view on speech in a pluralistic society that’s worth reading.


A reputable casino succeeds by, among other things, offering scrupulously honest games of chance, the payout odds of which are unfair.

The games are honest in that the dice are not “loaded,” the wheel is not “fixed.”

The odds are unfair in that the expected value of the game to the two parties (the house and the player) are not equal.

– Bob Lince

The above quote gives a neat example of a distinction I hadn’t considered with any rigor. One can be the most honest business person in the world without anyone you interact with having their happiness increased. Nothing feels immoral about this individual, as honesty is the chief virtue of commerce, yet something feels off. It is intuitive that people in the same circumstances and context ought to have similar access to the same outcomes, and two people engaging in a game together feels like a symmetric context. How does morality encode the distinction between the player and the house?

Fairness seems linked to proportion, so only one of Fiske’s frameworks supports it. This is transactional, bourgeois morality. I’m not sure that the morality of the military and the nobility, of duties and obligations, carries a sense of fairness. There it may be more a matter of treating your subordinates equally for similar performance. The two might be roughly comparable if we consider them the same pattern only concerned with economic capital and social capital, respectively. Fairness on the commune is pretty tricky and is hard to distinguish from a simple ‘no shirking’ rule.

Ultimately, fairness in outcome is difficult to define and hard to measure when we want to take into account different contributions of input. So, if we lived in a world where we could set up our measurements on the input and automatically get fairness of outcome then we’d be very happy. However, the example given suggests that is not always possible and we’re forced to conclude again that the world is broken and/or evil with respect to our desired ideal.


When you sign a contract and accept employment, do you owe your employer a duty to act within the ideal role for that position or should you continue to act as yourself and trust that they can choose someone else if you fail to meet their requirements?

Consider choosing who to vote into office. The ideal politician has several skills that are unsavory. They may be called on to lie and bluff convincingly, to be flexible with promises, to betray prior loyalties, and to make choices that cut against the immediate desires of their constituents. They may be required to make choices against their own interests. We can consider three candidates. One has all of these qualities as a matter of character and indeed acts as our ideal politician out of unconscious habit. One is personally more admirable but is flexible enough in character to perform the role of the politician in all official contexts. A third is personally more admirable but inflexible. They may not make a superb politician by the standards above but they have complete personal integrity and are likely to do at least a mediocre job in office. It’s not clear who is strictly superior.

If you value most the job getting done then one or two would be best. One will be more practiced at the desired traits so will be the more desirable. If you are interested in your leader being a good person (having certain limits) then two or three is best. If you split between these two paths then you’ll end up with a contest between two and a one pretending to be a two. If the one is any good, it will be impossible to distinguish in this case. If you value authenticity above all else, the choice is between one and three and you’ll need to use different criteria (possibly the above) to break the tie.

That’s assuming it is even possible for us to truly inhabit a role. Here’s an article in the New York Times about a tattoo and the belief in human universals. There’s a leap that’s not obvious from some person did feel X to all people could feel X. It’s a leap I used to be comfortable taking but not any more. I think humanity is more complicated than that and you might need to change J, K, and L before you’re capable of X.

There’s another point specific to the politician example. How the leader acts will cause the team to follow. If you make it about the work, it will be about the work to the exclusion of the personal. If you make it about loyalty, it will be about loyalty to the leader even when the leader is blatantly and obviously wrong. If you make it about ambition, everyone will be aiming for a promotion. That’s the only way to get one but running that race you’re constrained to be a piece of the current system (and probabilistically won’t get what you want). I don’t know if revolution can occur entirely within a system but a priori this seems like a contradiction.

Let me leave you with a song about a narrow gate from The Wire, a show that utterly lives up to expectations.

The Onion

The writers at the Onion are great at comedy. I treasure that my college had free copies of their latest on stands all across campus. Respect to the college students who gave us the Hobo-Merman wars but there’s a reason that the Onion is so persistent and so respected (and, no, it’s not just Peter Rosenthal). It’s because they’re good writers.

(yes, this does tell you more about me than it does about the Onion. I don’t care, you should read it if you want and make up your own mind anyway).

Here’s an article that I truly hate. I have mixed feelings about sharing it – mostly because I’m new to this sort of thing. You don’t have to click through, instead you could accept this as a launching point for things that seem to need saying.

“Myself included”. As a rhetorical gesture to indicate you have an insider’s perspective and the bravery to use it (potentially) against yourself, this might be useful. I think it’s a cheap way out of including the type of rigor that doesn’t require disclaimers.

“Hyper-whiteness”. What the fuck. Please describe the cluster of traits that you believe this entails. Because it means very little to me and you seem to think it means something, and you seem to think it means something bad, and you won’t explain. This fails to pass a certain universal test in which the same thing might be described as hyper-sub-saharan-african or hyper-indian and we could draw similarly meaningful conclusions.

Is a sense of entitlement unique to this subculture you’re trying to capture? Perhaps it’s a human universal or at least something that appears in other contexts. You’re invoking an opposition to normality that may not exist.

Let’s play with definitions around the word contributor. Don’t ask me to defend rigid boundaries of insider/outsider but do forgive me for pointing out that your pedantic word play is unnecessary.

Nothing like cherry-picked speculative history.

“[Code is] the one true way”. This almost feels like a reasonable idea that you’re exaggerating so you can score points for your article.

At this point, I’m done. There are some poisonous open source communities. There are some personalities involved in open source that I truly hate. But this article does nothing to expose that truth. If this is the kind of writing accepted under Model View Culture then I find that I have little use for it. Perhaps you can take this as evidence that you can drop the publication if you have misgivings after reading it.