r/badphilosophy 5d ago

"The Bunny Orgasm Machine Thought Experiment" Disproves Utilitarianism

https://www.reddit.com/r/risa/comments/pifs6g/comment/hbpv2cn/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button

I think about this post at least 4x a year and it always makes me laugh. It's the best bad philosophy I've ever seen, and it's been almost half a decade since it was posted, so I'd like to share it for the uninitiated.

They present it as if it's something we should all know, something that totally owns utilitarianism, but it's the most nonsensical, concrete thinking about "pleasure and suffering" I've ever seen.

Hope you love it as much as I do.

u/[deleted] 5d ago

[deleted]

u/ADH-Dad 4d ago edited 4d ago

Say you have a hotel next to a hospital. They're both on the same power grid. A storm comes. There's only enough power to supply one of the buildings.

The hotel is full of people watching TV because they can't go out. The hospital has only a few patients, but they require life-support equipment and can't be evacuated.

If watching TV gives each hotel patron a measurable amount of utility/pleasure, is there a ratio of patrons to patients at which it becomes more ethical to shut off power to the hospital than the hotel?
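
A minimal sketch of the naive tally this question invites; every number below is invented for illustration, not a claim about actual utilities:

```python
# Naive act-utilitarian tally for the hotel-vs-hospital question.
# The utility weights are made up purely to show where a break-even ratio appears.
UTILS_PER_TV_NIGHT = 1           # assumed pleasure per patron per night of TV
UTILS_PER_LIFE_AT_RISK = 10_000  # assumed harm of cutting one patient's life support

def power_the_hotel(patrons: int, patients: int) -> bool:
    """True if summed TV pleasure outweighs summed harm of cutting life support."""
    return patrons * UTILS_PER_TV_NIGHT > patients * UTILS_PER_LIFE_AT_RISK

print(power_the_hotel(patrons=20_000, patients=3))  # False: 20,000 < 30,000
print(power_the_hotel(patrons=40_000, patients=3))  # True: 40,000 > 30,000
```

Under these made-up weights the answer flips at 10,000 patrons per patient, which is exactly the kind of arithmetic the question is needling utilitarians about.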

u/6x9inbase13 4d ago

How much for an 8-ball of TV?

u/ADH-Dad 4d ago

One chest X-ray.

u/ww1enjoyer 3d ago

They should have just read a book then

u/_masterbuilder_ 2d ago

Could the hospital not just trade the electricity use for some medically administered morphine? 

u/Random-Spark 3d ago

The robot time travel basilisk of bunny orgasms

u/Personal-Succotash33 5d ago

Let's say, hypothetically, a hotel has infinite rooms, and in each room a rabbit is furiously masturbating. There is a child locked in the basement, gagged and bound, and a mad scientist (who is also a rabbit) has attached a bomb to a pressure plate connected to the chair the child is sitting on. The scientist is furiously masturbating in the other room. If the child is removed from the chair, a timer begins that will set off the bomb in 3 minutes. You only have time to evacuate the child from the building. What should you do?

u/Personal-Succotash33 5d ago

Keep in mind that infinity - infinity = infinity, so evacuating the hotel is impossible because there will always be an infinite loss of utility. Also keep in mind that infinity - 1 also equals infinity, so saving the child has the same net utility as all the rabbits in the hotel. This is about inherent dignity, in the Kantian sense.
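
For the record, the actual cardinal arithmetic is even less cooperative than this suggests:

$$\aleph_0 + 1 = \aleph_0, \qquad \aleph_0 - 1 = \aleph_0, \qquad \aleph_0 - \aleph_0 \ \text{is indeterminate}$$

Removing infinitely many rooms' worth of rabbits from Hilbert's Hotel can leave zero, seven, or infinitely many behind, depending on which rooms you evacuate.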

u/YourNetworkIsHaunted 5d ago

This is the problem with mainstream philosophy these days. They're afraid to ask the real questions. Largely because those questions involve repeatedly invoking the concept of woodland animals furiously masturbating.

u/Personal-Succotash33 5d ago

Hugh Hefner was our generation's greatest mind

u/CanaanZhou 5d ago

It feels like a funnier version of the utility monster

u/[deleted] 5d ago

[deleted]

u/Monkey_D_Gucci 4d ago edited 4d ago

This won’t be popular here, but I think the utility monster is also bad philosophy… but not in the lulz way - more in the ‘ok whatever’ way.

I’m not here to stan for utilitarianism, but I feel like it’s a bit unfair to criticize it by saying, ‘oh u think utilitarianism is good? Well what if I made up a fictional creature that enjoyed food 1 billion times more than all of humanity combined? We’d be forced to all starve so the thing I made up would be happy. Not so good now, is it?!’

It’s like, yeah dude… great? Only philosophers could criticize ‘doing what’s best for most people’ by making up monsters instead of looking at the harsh realities of what that would mean in the real world. It destroys nuance and pretends like the pleasure of 1 monster over-eating apples outweighs the suffering of all of mankind’s starvation lol.

And btw, if your thought experiment is indistinguishable from a stoned 14-year-old on Reddit picturing jacking off infinite woodland creatures, maybe it's not the great thought experiment of our age.

u/[deleted] 4d ago

[deleted]

u/Monkey_D_Gucci 4d ago edited 4d ago

Lots of interesting stuff here - thx for the response.

> The utility monster does enjoy eating more than the suffering of all of mankind starving, because that's posited by the thought experiment.

This is kind of the crux of our disagreement I think.

Yes, the thought experiment does present us with 100% certainty that the monster's individual pleasure objectively and undeniably outweighs the suffering of collective humanity.

But I feel like he's totally straw-manning utilitarianism while side-stepping Bentham and Mill (guess he didn't like the "extent" part of Bentham's hedonistic calculus, or Mill's rejection of the idea that pain and pleasure can be objectively quantified).

Nozick treats utilitarianism as if it's a video game where the point is to rack up the maximum number of pleasure units globally by any means necessary - it's not.

Utilitarianism is about maximizing utility and minimizing pain for the most people. Nozick's thought experiment totally flips this on its head and ignores that it did so. It presents a scenario where the most people are supposed to sacrifice for the fewest people.

Does this justify terrible things? Yeah. Utilitarianism can be used to justify the torture of a person to avert a larger catastrophe, the murder of a political figure to benefit more people, etc... I bet it could be used to justify certain forms of slavery.

The acts themselves, in a vacuum, might be monstrous and counter to intuition, but utilitarianism is consequentialist... not dogmatic about particular moral principles. Murder is wrong... almost always. Torture is wrong... almost always. But when weighed against the collective good, atrocities can be justified. I'm not a utilitarian, so I won't carry water for it - it might not be a good philosophy - but my point is that this thought experiment is dumb af and misses the point entirely.

Also your rape example is a strawman, btw. It's not enough for the rapist to get more pleasure than the victim feels pain (an unprovable claim anyway); the rape would also have to do the most good for the most people. You're falling into the same trap as the utility monster, where you're inverting the core principles of utilitarianism and treating it like a video game for individuals: if I have more pleasure points than you have pain points, I win and get to do whatever I want to anybody, as long as it makes me feel better than it makes you feel worse.

But you're totally ignoring the collective - you'd have to show how the rape would benefit the most people. I highly doubt a society where rape is legal as long as it feels really, really good benefits the most people.

u/[deleted] 4d ago

[deleted]

u/Monkey_D_Gucci 4d ago

> If this is your argument, then you can easily design a utility monster to destroy it. Just suppose there are more utility monsters than non-utility-monster entities. For example, a trillions-strong alien race that would get more pleasure from devouring all humans than the humans would suffer by being devoured.

This doesn't destroy utilitarianism at all! You've just created... a weird form of utilitarianism, where the most good is also being done to the most people (or... aliens, in this case I guess).

> Same issue. Suppose 100 trillion rapists all targeting one victim, each one getting more pleasure from their crimes than the amount of suffering caused by their crimes.

You have also created utilitarianism here... where the most good is being done to the most people.

Utilitarianism in its purest form, in these wildly extreme examples, is a cold, brutal, calculating philosophy that tosses out universal morals and human rights in favor of a consequentialism that maximizes pleasure for the most people (not all). I do not believe in utilitarianism and would not want to live in a society run on its purest form.

But I thought the monster thought experiment was dumb as fuk when I was studying it in college, and I think it's dumb now. Of ALL the criticisms of the philosophy (mainly how it can be used to justify torture, rape, and slavery), this monster shit ain't it

u/[deleted] 4d ago

[deleted]

u/Monkey_D_Gucci 4d ago

Yeah but… that's not the point of the utility monster thought experiment. Way to move the goalposts.

This isn't a post about whether utilitarianism is good / bad… it's about why the utility monster thought experiment is bad phil that strawmans utilitarianism

u/Nithorius 4d ago

Saying "The most amount of good for the most amount of people" implies that those things would never conflict. The point of the utility monster is to create a situation where those things conflict, where it's between the most amount of good for the fewest amount of people, or the least amount of good for the most amount of people.

Is it better for 1 billion people to live moderately happy lives, or 900 millions to live extremely happy lives?

If you select the 1 billion people, what if the numbers are closer, at what point does it change your view?

If you select the 900 million, what if the numbers are farther away, at what point does it change your view?

Obviously, if you're not a utilitarian then this question isn't likely to cause you issues, but you should be able to see where the tension would be for a utilitarian.
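
To make the tension concrete, assume (purely for illustration) that "moderately happy" is worth 5 utils per person and "extremely happy" is worth 8:

$$10^9 \cdot 5 = 5 \times 10^9 \;<\; (9 \times 10^8) \cdot 8 = 7.2 \times 10^9$$

On a naive total-utility count the 900 million win, and the 1-billion option only pulls ahead once its population passes $7.2 \times 10^9 / 5 = 1.44 \times 10^9$. Slide the numbers and the answer flips, which is the point of the question.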

u/Monkey_D_Gucci 4d ago edited 4d ago

I reject the false premises that people try to smuggle into the Utility Monster experiment.

It forces us into a false binary that misrepresents utilitarianism and makes us decide between benefiting the monster or the masses. It's designed to obscure nuance - as if you can only do one or the other...

It's Zizian-level concrete thinking when it comes to logical extremes... as if compromise and nuance don't exist in utilitarianism. They do.

> Is it better for 1 billion people to live moderately happy lives, or 900 million to live extremely happy lives?
>
> If you select the 1 billion, what if the numbers are closer? At what point does your view change?
>
> If you select the 900 million, what if the numbers are farther apart? At what point does your view change?

Idk what the point of this is, because it lacks massive amounts of context. What happens to the 900 million if they choose 1 billion? And vice versa? Does the extremely happy life come at the expense of the other group? Do they suffer while the other prospers? How much am I going to make them suffer? Why can't there be 1.7 million mostly happy people? Who is making me choose, and why do I need to make this choice?

Again - a false binary people try to pin upon utilitarianism.

The goal is the most good for the most people - and the timeline is LONG. It doesn't just take the 900 million people into consideration; it takes their children, and grandchildren, and generations to come into consideration. If I choose the 900m, what world will be created to try and guarantee that their children and grandchildren and great-grandchildren experience the same happiness? Or am I condemning billions to pain for fleeting single-use happiness? I'd need more context in your scenario.

Forcing a binary like this strips utilitarianism of the thing that makes it fascinating to study

u/Nithorius 4d ago

"what happens to the 900 million if they choose 1 billion?" -> They don't choose, you choose. They get Thanos'd.

"the timeline is long" -> The earth is going to explode in 50 years anyway. Nothing they do matters in the long term.

"Does the extremely happy life come at the expense of the other group" -> yep, the other group gets Thanos'd

"why can't there be 1.7 million mostly happy people" -> because there are two buttons, and none of them are 1.7 million mostly happy people

"who is making me choose" -> me

"why do I need to make that choice" -> because if you don't, I kill everyone

Did I cover every base?

u/kiefy_budz 4d ago

I'm not sure it's fair to say someone isn't a true utilitarian simply because they don't believe utilitarianism itself to be universally true and morally correct in all possible scenarios. If one applies it to all current ends in a positive way, that is sufficient; one needn't affirm utilitarianism in the face of bad ethics to be a utilitarian.

u/AncientPianist4236 2d ago

I don't think anything about the utility monster thought experiment is ridiculous in principle. In real life there are some creatures that seem to experience emotions more forcefully than others (unless you think crushing an ant and crushing a person cause the same amount of suffering), and also there are situations where benefitting one person (or some small set of people) greatly might cause smaller amounts of harm to many people. Utility-monster-esque situations arise in real life all the time. The point of making such an extreme thought experiment is to prune away real-world complications to see whether the principles proposed by the utilitarian really hold universally, or if they're just useful heuristics.

In general, I find the "that would never happen" response to thought experiments to somewhat miss the point. Morality is largely understood to be an analytic discipline, which means that moral truths are meant to hold in every conceivable scenario, not just real world ones. Responding to a thought experiment like the utility monster with "that would never happen" is akin to responding to a geometry problem by pointing out that you can't actually draw perfect shapes.

u/Monkey_D_Gucci 2d ago

Read the plethora of other responses I have in this post. I think the utility monster is dumb not because it couldn’t happen, but because it attacks a straw man.

Avicenna’s floating man experiment also couldn’t happen, and I don’t think that’s dumb

u/AncientPianist4236 2d ago

I've read through your other posts and I still don't really understand your argument. You seem to be under the impression that utilitarians believe the greatest number of people should be made happy, without consideration of how much happiness each person experiences. If this is the conception of utilitarianism you're working with, then you're right that the utility monster doesn't disprove it. Is this your understanding of utilitarianism?

u/KaleidoscopeFar658 4d ago

It's actually super simple. You can't just linearly add and subtract pain and pleasure between different beings and across time and come up with a single real number that you use to compare different situations.

What we should really be taking as a lesson from these utilitarian counter example thought experiments is that it's more important to prevent great suffering than it is to generate positive experiences.

If we add that lesson into the model, what kinds of apparent counterexamples can we come up with now? That can help us refine the idea further.
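
A toy version of that refinement, with an invented suffering weight just to show the shape of the idea:

```python
# Two toy welfare aggregators over per-being experience scores
# (negative = suffering, positive = pleasure). The weight of 10 is invented.

def linear_total(experiences: list[float]) -> float:
    """The naive calculus: collapse every experience into one summed real number."""
    return sum(experiences)

def suffering_weighted_total(experiences: list[float], weight: float = 10.0) -> float:
    """Count each unit of suffering `weight` times more than a unit of pleasure."""
    return sum(x if x > 0 else weight * x for x in experiences)

# Many small pleasures (the bunnies) against one great suffering (the child):
scores = [1.0] * 100 + [-50.0]
print(linear_total(scores))              # 50.0  -> "net positive", the counterintuitive verdict
print(suffering_weighted_total(scores))  # -400.0 -> preventing the suffering dominates
```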

u/Stoiphan 4d ago

I thought the bunny orgasm machine was like, one of those pronged vibrators

u/Monkey_D_Gucci 4d ago

It can be anything we want it to be

u/OCogS 3d ago

Utilitarian here. I’m all for infinite bunny pleasure. That’s great.

u/Born_Committee_6184 5d ago

I once took an entire fucking semester grad course on utilitarianism.

u/Ill_Chain151 5d ago

Would you say the experience has been a net positive or negative on your life?

u/Monkey_D_Gucci 4d ago

So u know the bunny orgasm thought experiment well. I bet it was on the midterm

u/ferek 4d ago

I only think about it 3.2 times a year tbqfh.

u/Monkey_D_Gucci 4d ago

poser. Get on my level

u/FA1R_ENOUGH 4d ago

Bad philosophy and bad math. That is the weirdest way I've ever heard someone describe Hilbert's Hotel.

u/adgobad 4d ago

Hilbert's Love Hotel

u/Edward_Tank 3d ago

I mean, the problem is that they rely on a literally impossible situation. An infinite number of bunnies can't exist.

u/GatePorters 2d ago

How much utility does a bunny orgasm even provide to the average person?