A short while back I wrote Utility Calculations in Practice so that I could get some ideas out of my head. Unfortunately, said ideas did not leave me alone and I still find myself scribbling little follow up notes in the margins of my scratch paper instead of being able to concentrate on other work. Here are three loosely structured continuations, two short then one long, and then maybe I can have peace.
For reference, below are the principal equations from the previous article.
$a$, a principal actor
$W_a$, the set of universes with $a$ as principal actor
$X_a$, the set of actions available to $a$
$P(w' \mid w, x)$, a probabilistic state of the world after an action $x$ taken in world $w$
$V : W_a \to \mathbb{R}$, a universal value function
$U(x) = \int_{W_a} P(w' \mid w, x)\, V(w')\, dw' - V(w)$, the expected value of the action (utility function)
$x^*$, where $x^*$ satisfies $U(x^*) = \max_{x \in X_a} U(x)$, the best action to take
$\hat{U}(x)$, calculable utility of action $x$
$\hat{U}(x) \geq U(x^*) - \epsilon$, where $\epsilon$ is some error term, the satisficing formula for best action
Looking at it now, it seems too compact to be at all useful to the casual reader so feel free to comment asking after anything that’s unclear.
First note is on something that shows up in the original equations but that I did not bring to the reader's attention at the time. I normalized the expected value of the action by subtracting the value of the current world, $V(w)$, but never explained why. Since integrating $P(w' \mid w, x)$ over the space of worlds gives a value of 1, we can pull $V(w)$ inside the integral, which lets you reference only the difference between the current and future worlds when calculating the value function: $U(x) = \int_{W_a} P(w' \mid w, x) \left[ V(w') - V(w) \right] dw'$. This severely decreases the inherent complexity of the operation and is likely the only way the calculation could be feasible.
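As a sanity check, here is a minimal sketch of that equivalence on a discrete set of successor worlds. All probabilities and values are invented for illustration:

```python
# Toy check that subtracting the current world's value outside the sum
# equals summing the per-world differences, since the transition
# probabilities sum to 1. All numbers are invented.

probs = [0.2, 0.5, 0.3]            # P(w' | w, x) over three successor worlds
future_values = [10.0, 4.0, -2.0]  # V(w') for each successor world
current_value = 3.0                # V(w), value of the current world

# Form 1: expected future value, minus current value outside the sum
u1 = sum(p * v for p, v in zip(probs, future_values)) - current_value

# Form 2: expected difference, with V(w) pulled inside the sum
u2 = sum(p * (v - current_value) for p, v in zip(probs, future_values))

assert abs(u1 - u2) < 1e-12
```

The payoff of the second form is that the value function only ever needs to score a *change*, never an entire world from scratch.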
Second point of note: it's tempting to address some of the concerns raised in the original article by including a term for the value of the action itself in our calculation. This idea stems from the conviction that many actions are inappropriate for most worlds (e.g. murdering everyone around you might sometimes be the right thing to do, but the likelihood that you're in one of those situations is vanishingly small), and by rejecting them outright we cut down on the search space for the correct action. Since calculation is hard for us, this makes sense as a piece of practical morality.
How to dial in the value of the action relative to the customary world? A linear combination is the most straightforward though, as will be shown later, not without its complications.
$A(x)$, the static value of action $x$
$x^*$, where $x^*$ satisfies $\alpha A(x^*) + \beta U(x^*) = \max_{x \in X_a} \left[ \alpha A(x) + \beta U(x) \right]$, the best action to take (with correction for value)
‘Pure’ virtue ethics might be an example of this model with $\alpha$ set to 1 and $\beta$ set to 0, while ‘pure’ utilitarianism is the opposite.
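A minimal sketch of the linear correction, with hypothetical actions and invented numbers standing in for the static action values and expected world-value deltas:

```python
# score(x) = alpha * A(x) + beta * U(x), with both value tables invented.
# A: static "value of the action type"; U: expected change in world value.

def best_action(actions, A, U, alpha, beta):
    """Pick the action maximizing alpha*A(x) + beta*U(x)."""
    return max(actions, key=lambda x: alpha * A[x] + beta * U[x])

actions = ["tell_truth", "lie"]
A = {"tell_truth": 1.0, "lie": -1.0}  # virtue-like static scores
U = {"tell_truth": -0.5, "lie": 2.0}  # expected world-value deltas

print(best_action(actions, A, U, alpha=1, beta=0))  # tell_truth
print(best_action(actions, A, U, alpha=0, beta=1))  # lie
```

The two extreme weightings reproduce the ‘pure’ positions; anything in between is a negotiated compromise, which is exactly where the trouble starts.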
This approach is obviously counterproductive from a utilitarian perspective and a ripe opportunity for more subjective values to sneak in. The wrangling over $A$ will get just as political as the wrangling over $V$, and possibly worse, since this new function is simpler in application and so more widely accessible to opinion. Proponents of the extension will argue that (in contrast to the world value function) it's actually feasible to calculate the value of an action type and that, if done well, this approach gives an answer within the satisficing set with high frequency. Their opponents say that no one is providing accountability on this claim.
Third and finally, consider the poor soul who wants to balance the concerns of multiple value functions. This is a natural enough desire for many reasons. Most value functions are too simple to capture everything we feel is important. Many accompany group membership and are required to remain in high standing. Some have very obvious flaws and it's tempting to patch over them by applying another system.
The naive method is to use a weighted arithmetic mean (or any linear combination) of utility functions, as we did above with the ‘action value’. However, there's a flaw: this collapses to a single value function, because you can pull all the weights through the integral.
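A small numerical illustration of the collapse, with invented worlds, weights, and value functions:

```python
# Demonstration that a weighted sum of utilities equals the utility
# computed under a single combined value function. All numbers invented.

probs = [0.25, 0.75]                      # P(w' | w, x) over two futures
futures = ["w1", "w2"]
V1 = {"now": 1.0, "w1": 3.0, "w2": 0.0}   # first value function
V2 = {"now": 2.0, "w1": 1.0, "w2": 5.0}   # second value function
c1, c2 = 0.4, 0.6                         # fixed, world-independent weights

def utility(V):
    # Expected value of the delta against the current world
    return sum(p * (V[w] - V["now"]) for p, w in zip(probs, futures))

# Left: weighted combination of the two utilities
lhs = c1 * utility(V1) + c2 * utility(V2)

# Right: utility under the single blended value function c1*V1 + c2*V2
V_combined = {w: c1 * V1[w] + c2 * V2[w] for w in V1}
rhs = utility(V_combined)

assert abs(lhs - rhs) < 1e-12
```

So the "balanced" agent is indistinguishable from an agent with one (blended) value function; the combination bought nothing structural.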
The worst example of this is a weight function parameterized by the initial world state (i.e. a weight $c_i$ that is really a function $c_i(w)$), since that is just asking for self-motivated leakage.
You'll also notice this method grants as premise that the output numbers from each different utility function are directly comparable. This is a very strong statement: ask a Christian how they would compare number of souls saved to number of children. We have no reason to believe this idea of direct comparison is true, or to believe that normalizing will be simple. The outputs could involve different scales (or units) or be shifted from each other by some large constant. Some could respond exponentially to small shifts while others remain largely constant.
There are alternate schemes that don't suffer the “pass through integral” problem. If we assume no negative values then it's possible to use the geometric mean. This differs from the common arithmetic mean by being the $n$-th root of the product of its components instead of their averaged sum. The geometric mean is nice because it's useful in situations where normalization is a problem.
$\left( \prod_{i=1}^{n} U_i(x) \right)^{1/n}$, or its equivalent form $\exp\left( \frac{1}{n} \sum_{i=1}^{n} \ln U_i(x) \right)$
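Both forms, checked numerically on invented utility values (all strictly positive here so the log form is defined):

```python
import math

# The two equivalent forms of the geometric mean. Inputs are invented
# and strictly positive, per the non-negativity assumption in the text.

utilities = [2.0, 8.0, 4.0]
n = len(utilities)

gm_product = math.prod(utilities) ** (1 / n)                  # n-th root of product
gm_logs = math.exp(sum(math.log(u) for u in utilities) / n)   # exp of mean log

assert abs(gm_product - gm_logs) < 1e-9
```

The log form is what you would actually compute in practice, since the raw product overflows quickly as the number of component utilities grows.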
One way to visualize the difference between the two means is to picture a multi-axis space where each axis corresponds to a utility function, and note that solving for each individual utility gives us a tuple in this space. The arithmetic mean corresponds to the $L_1$ norm, the sum of the components. The geometric mean corresponds to the volume of the box formed when taking the tuple as one corner and the origin as the other. (As an aside, consider the $L_2$ norm: Euclidean distance sidesteps the “pass through integral” problem as well.)
Even if the structure of the space is “really” additive in some sense, the geometric mean will still act as an approximation of the arithmetic mean, albeit one with a bias toward balanced tuples.
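A quick numerical illustration of that bias, comparing two invented tuples that share the same arithmetic mean:

```python
import math

# AM vs GM on invented tuples: the geometric mean penalizes imbalance
# (by the AM-GM inequality, GM <= AM, with equality only when all
# components are equal).

def am(t):
    return sum(t) / len(t)

def gm(t):
    return math.prod(t) ** (1 / len(t))

balanced = (5.0, 5.0)   # evenly spread utility
lopsided = (1.0, 9.0)   # same total, concentrated in one axis

print(am(balanced), gm(balanced))  # 5.0 5.0
print(am(lopsided), gm(lopsided))  # 5.0 3.0
```

The lopsided tuple loses 40% of its score under the geometric mean, which is exactly the "bias toward balanced tuples" in action.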
Non-negative values are a tough sell, because we base our value calculations on the delta $V(w') - V(w)$ for simplicity, and differences go negative easily. This method also asks a lot of its user by requiring one to either multiply many large numbers or take the log of each argument before aggregating. I guarantee that no one is using this approach in matters of practical or immediate ethics.
Don’t have much to conclude other than I remain skeptical of any scheme that claims to aggregate or combine disparate utility functions unless supported by very clean and explicit math.