My current views on ethics

This page represents my current views on ethics as of 2022-07-06.

My flavor of hard utilitarianism

Traditional variants of hard utilitarianism have inherent problems. These range from edge cases in traditional maximization-based utility functions (e.g. the St. Petersburg game and Pascal's Mugging) to problems that come up in earnest in normal life (e.g. the Surgeon's Dilemma). These problems, however, can be solved by treating utilitarian pleasure as what it actually is: temporary. The amount of 'pleasure' in existence changes over time. Thus, a more realistic model of utilitarianism can be defined, where the function to maximize is not the current level of pleasure, but rather the total amount of pleasure over time. I term this 'predictive utilitarianism'.

To be more mathematically specific, a perfect predictive utilitarian agent wants to maximize the value

\int_{-\infty}^{\infty} P \, \mathrm{d}t

where $P$ represents[1] the total amount of pleasure in existence at time $t$.

Of course, it is impossible to affect the past, so a more specific goal is in order. The universe at any time $t$ can be represented by the tuple $\mathbf{\Omega}$, which has components yet unknown, but suffices for our purposes. Define the total amount of pleasure at time $t$ with universal conditions $\mathbf{\Omega}$ as $P_{\mathbf{\Omega}}$. Then, at time $t$, a perfect predictive utilitarian agent wants to affect the value of $\frac{\mathrm{d}\mathbf{\Omega}}{\mathrm{d}t}$ such that the value of

\int_{t}^{\infty} P_{\mathbf{\Omega}} \, \mathrm{d}t

is maximized.[2]

The interesting thing about this, and what ultimately lets it evade a lot of the problems with traditional utilitarianism, is that the agent itself is represented in the $\mathbf{\Omega}$ term. When trying to maximize the total amount of future pleasure, the agent has to take its own actions into account as well. This leads to an interesting structure which solves a lot of the main pain points of traditional utilitarianism.
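
To make this concrete, here is a minimal computational sketch of the objective. Everything named below is a hypothetical stand-in (a universe object with `copy`, `step`, and `total_pleasure` methods, a set of candidate policies, and a finite horizon approximating the improper integral); none of it comes from the essay itself, and it is only meant to show the shape of the computation.

    # Minimal sketch of a predictive utilitarian agent's choice procedure.
    # The universe model, its methods, and the candidate policies are
    # hypothetical stand-ins, not part of the essay's formalism.

    def total_future_pleasure(policy, universe, horizon=1000.0, dt=0.1):
        """Approximate the integral of P_Omega from now onwards with a
        finite-horizon Riemann sum, rolling the universe forward under `policy`."""
        state = universe.copy()
        total = 0.0
        steps = int(horizon / dt)
        for _ in range(steps):
            action = policy(state)          # the agent predicts its own behaviour
            state = state.step(action, dt)  # how Omega evolves under that action
            total += state.total_pleasure() * dt
        return total

    def choose_policy(candidate_policies, universe):
        """Pick the policy whose predicted future contains the most total pleasure."""
        return max(candidate_policies,
                   key=lambda policy: total_future_pleasure(policy, universe))

The structural point is that `policy` appears inside the simulation itself: the agent's own future behaviour is part of the $\mathbf{\Omega}$ being rolled forward.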

The Expectation of Future Behaviour

This is perhaps the only important point on this entire page, so take note. Predictive utilitarian agents must predict their own actions in order to make informed moral decisions. In fact, they must not just predict themselves: they must predict everyone, including those who are also predictive utilitarians. In essence, this means we have an additional criterion beyond the purely utilitarian one:
If any course of action is rational, then it must also be rational when done by any other predictive utilitarian agent. Not any agent, not only predictive utilitarian agents in the present, not only predictive utilitarian agents on Earth, but predictive utilitarian agents in all times and places. You may notice that this is exceedingly similar to the Kantian categorical imperative. I do not believe that this is a coincidence; in fact, I believe the similarity is the main reason the Kantian categorical imperative is able to produce results that resemble anything like morality. (Although this may just speak to my dislike of non-consequentialist thinking in general.[3])
If you're questioning how this statement helps at all, see the "Surgeon's Dilemma" section below.
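
In the meantime, here is a rough computational reading of the criterion, with all parameter names invented for illustration: an act is scored not as a one-off decision, but as a rule adopted by every predictive utilitarian agent who ever faces the same situation.

    # Sketch of the universalization test. All parameter names are invented;
    # the essay itself only states the criterion in prose.

    def universalized_value(n_agents, per_act_gain, p_bad_outcome, harm_if_bad):
        """Expected change in total pleasure if every predictive utilitarian
        agent who faces this situation performs the act."""
        expected_harm_per_act = p_bad_outcome * harm_if_bad
        return n_agents * (per_act_gain - expected_harm_per_act)

    def act_is_rational(n_agents, per_act_gain, p_bad_outcome, harm_if_bad):
        """The act must still look good when treated as a universal rule."""
        return universalized_value(n_agents, per_act_gain,
                                   p_bad_outcome, harm_if_bad) > 0

The key effect is that a "very low" per-act probability of a bad outcome stops doing any work once it is multiplied across every agent who will ever face the choice.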

Resolving Problems

Okay, this is great, but how does this help us to solve any problems whatsoever?

Let's take a look at the specific cases which plague traditional utilitarianism.

The Surgeon's Dilemma

This is perhaps the most pertinent problem with traditional utilitarianism. The scenario goes a little like this:

A surgeon has five patients that are in desperate need of organ donation: two of them need a kidney, one of them needs a heart, and two of them need a lung. If they do not receive their organs, they will die; and there is no organ stock for the foreseeable future. The surgeon knows that in another department, a healthy patient has just come in with no friends, no family, and fully functioning organs that are compatible with the patients who need them. The surgeon considers: should he take the healthy patient's life, dissect him for organs, and explain away his death, to save the lives of the other five patients? He knows a way to do it with a very low probability of getting caught. Should he do it?

This is the classic 'kill-one-to-save-five' story that we see in the trolley problem and elsewhere, where traditional utilitarianism would say "kill the healthy patient". Yet it seems wrong for the surgeon to take the life of the healthy patient. Where in our brains does our moral instinct disagree with utilitarianism?

The answer is that our gut knows instinctively that this is a bad move for the surgeon, because our gut is very good at figuring out social consequences. A predictive utilitarian surgeon would stop and consider:

"If I was to kill this healthy patient, then every other predictive utilitarian surgeon around the world, and in the future, would also do this. If all predictive utiltiarian agents, when confronted with this situation, would perform the act, then the low probability of getting caught amplifies to a high expectation value of the number of surgeons caught, and these surgeons would be stripped of their medical licenses, would have the total amount of utility they could add decrease significantly, and would be replaced by non-predictive-utilitarian surgeons who themselves are less ethical and less likely to maximize the total amount of utility."

Predictive utilitarianism is all about thinking not just about the consequences of your actions on the physical world, but also about their consequences for the future decision-making processes that others can be expected to go through.
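
To put rough numbers on the surgeon's reasoning above, here is the same calculation as a back-of-the-envelope script. Every figure is invented purely for illustration; only the shape of the argument comes from the essay.

    # Illustrative numbers only: the universalized surgeon's dilemma.
    n_surgeons   = 10_000   # predictive utilitarian surgeons who will ever face this choice
    p_caught     = 0.02     # the "very low" per-act probability of getting caught
    lives_saved  = 4        # five recipients saved minus one healthy patient killed
    career_value = 1_000    # future utility of one ethical surgeon's career (lives-equivalent)
    trust_cost   = 500      # utility lost per scandal as trust in hospitals erodes

    universalized_gain = n_surgeons * lives_saved                             # 40,000
    universalized_loss = n_surgeons * p_caught * (career_value + trust_cost)  # 300,000

    # Universalized, the act reduces total future pleasure, so it fails the test.
    print(universalized_gain < universalized_loss)   # True

With these made-up figures, the one-off calculation ("four net lives saved, almost certainly undetected") flips sign as soon as it is treated as a rule that every such surgeon follows.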

'Utility Monsters' (Super-Beneficiaries)

Another thing about utilitarianism that people tend to have trouble with is the idea of a 'utility monster' (a better name: super-beneficiary): some kind of entity with so much more capacity for emotion than others that utilitarian ethical systems promote its happiness even at the cost of everyone else's downfall.

I don't think this is a problem. I sense that the entire super-beneficiary argument is based on a reductive imagination: it conjures one shambling blob of flesh which is somehow 'better' at happiness than ten humans, protests at the image, and completely ignores the emotional state of the entity in question. In the real world, a super-beneficiary would probably not actually manifest like this.

"That's easy to say", you might reply. "You've never actually seen a super-beneficiary." Well, we already have real-life examples of super-beneficiaries. There already exist real-life, living entities which have twice the capacity of a normal human to have emotional reactions. Here's a list of them.

Can you seriously look at the above list and tell me that they aren't capable of twice the amount of happiness of a normal human?

How about [X logical paradox]?

There are several other things I want to cover that you might imagine could trick a predictive utilitarian:

These things, however, are all ultimately solved by the integration of a continuous utility function rather than a discrete one, so in the future I will write an article about why continuous utility functions do a better job of estimating the world than discrete ones do. Feel free to think through the arguments yourself.

Okay, all this is great, but what are you actually planning to do to maximize utility?

My main plan for the future, once I get a stable job and income, is to participate in effective altruism.

Footnotes

  1. Of course, this would in reality be the expectation value of the integral, rather than the integral itself, since we have uncertainty in $P(t)$. However, this doesn't do anything other than wrap an $\mathbb{E}$ around some of our equations, so I have left it out for brevity.
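     Written out, the objective from the main text then becomes

     \mathbb{E}\!\left[ \int_{t}^{\infty} P_{\mathbf{\Omega}} \, \mathrm{d}t \right]

     with the maximization otherwise unchanged.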

  2. The value of 'maximized' here deserves some clarification. If you are any good at your job as a utilitarian agent, then you should expect the total amount of pleasure in the future to be infinite (e.g. if entropy is somehow reversed and the human population keeps growing exponentially). But one human living in a room forever and scraping the dregs of happiness every day, which evaluates to infinite total pleasure, is clearly worse than billions of people living deeply fulfilling infinite lives, which also evaluates to infinite total pleasure. Hence, we can define a backup ordering among infinite values by the following construction: if $\int_a^b f(t) \, \mathrm{d}t$ and $\int_a^b g(t) \, \mathrm{d}t$ both evaluate to the same value (in particular, when both are $+\infty$), then we consider $f(t)$ better than $g(t)$ if and only if $\int_a^b (f(t) - g(t)) \, \mathrm{d}t$ is above zero. Of course, some of the more pedantic people in the audience may point out that it is possible to choose $f(t)$ and $g(t)$ such that the corresponding "comparison integral" diverges towards both $+\infty$ and $-\infty$, such as when $f(t) - g(t) = t \sin t$, in which case I say "just choose whichever function you want, it doesn't really matter".
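     As a toy instance of this backup ordering (with $f$ and $g$ chosen purely for illustration), take $f(t) = 1 + e^{-t}$ and $g(t) = 1$ on $[0, \infty)$. Both raw integrals diverge to $+\infty$, but the comparison integral is finite and positive:

     \int_0^\infty \left( f(t) - g(t) \right) \mathrm{d}t = \int_0^\infty e^{-t} \, \mathrm{d}t = 1 > 0,

     so $f$ is ranked above $g$ even though the raw integrals tie at infinity.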

  3. I think my main dislike of non-consequentialist thinking stems from the fact that when we actually consider ethical and unethical acts, we think about the consequences first, rather than about any axiomatic justification of perfect conduct. Human emotion springs into action when we sympathize with the victims or champions of a situation, not when considering what One Should Do.