Glad my post was helpful to you in writing yours. :) Quick note--
> Caroline Ellison—the prominent effective altruist and now convicted felon—notoriously advocated for romantic arrangements resembling an "imperial Chinese harem."
As a longtime Tumblr mutual of Caroline Ellison's, I can tell you this was a shitpost on Tumblr, not a serious expression of her viewpoint. It's an understandable mistake-- a lot of her posts got pulled out of their original context, and EAs definitely believe a lot of stuff that an outside observer would assume is obviously a shitpost-- but in fact not even Ellison thought you should have an imperial Chinese harem. And her joke actually reflected her *rejection* of polyamorous, egalitarian effective altruist norms: Ellison was, unusually for EAs, sympathetic to trad and redpill views. (She was also, FWIW, monogamous by inclination.)
Yes, tbh I suspected the Time article was taking her words out of context. I'll pin your comment.
Yes. I was going to say something about this. I never had the luck of interacting with Caroline Ellison before the FTX deluge, but after reading her blog with interest and some empathy, what comes through is a very intelligent, moral, and good person with just the right amount of shitposting and girlboss-performing, and I would frown on any cheap dunking on her, whatever mistakes she might have committed.
Amazing piece. Think of how this correlates with the modern workplace: the rise of software to quantify nearly everything, the turning of that data into KPIs for a growing managerial class, and the way surveillance and tracking have increased with the digitization of forms and the horror that is Teams.
Tell me about it. I grow weary of hearing that all our business decisions must be "data-driven." What about intuition? What about vision?
I liked this post a lot. I have interacted quite a bit with EAs online (and would even describe myself as somewhat EA-adjacent): they mostly come across as really nice, well-intentioned, math-savvy, socially awkward nerds. They are also pretty consistent with their starting axioms, which is always to be admired. I just can't bring myself to accept those axioms (maximizing well-being/minimizing pain as the core and/or only measure of moral goodness, and applying the criterion impartially to all sentient beings). Like, I obviously dislike pain and like pleasure, but it feels equally obvious that there are things I value much more than that: truth, freedom, agency, connection. Your pod experiment is a good rehash of Nozick's experience machine argument, for which I've never heard a convincing utilitarian answer.
In a way, I feel Utilitarians are trying to recreate morality in a godless world by following to the letter the Biblical 'your eyes will be opened, and you will be like God, knowing good and evil'. More than that, they try to assume this impersonal view of 'thinking like a state' (the coldest of all cold monsters, as Nietzsche said) and treat all humans as non-differentiated cogs in a machine. Which I can imagine makes sense at *some* level (i.e., up to a point, at the level of the state) but definitely not at the level of the individual or of small communities. Being human includes thriving on interpersonal relationships with other humans, and on those other humans becoming unique and valuable *to you* in a way that no moral system can condemn or pretend to equalize.
Left brain tyranny indeed
The excelbrains focus on material metrics, but these metrics in themselves carry little to no moral significance and do little to inspire human beings. Endless dopamine will cause degeneration of the brain; endless pleasures will cause atrophy of the muscles. Excess is bad, and that's a pitfall that utilitarians fall into time and again. Maximising pleasure forgoes everything about the material world that impacts humanity, and it is actually an evolutionary negative: it leaves us exposed to hardships and risks that we simply cannot calculate in, and utilitarian approaches also fail to provide systemic redundancies.
Spiritually, I don't even need to make an argument: as you stated, we all have gut reactions to these scenarios.
Excellent summary, thank you. I think you’re nailing something important here. Wilde’s aphorism regarding people who know the price of everything and the value of nothing readily occurs to me.
As does Iain McGilchrist’s book “The Master and His Emissary”, the central thesis of which (as I apprehended it at least) holds that the right hemisphere assimilates reality in a subjective, global sense, but lacks language/speech. It therefore passes that information to the (language/speech-capable) left hemisphere for evaluation and analysis, then subsequently receives back a “digest” or summary, which it compares to its own more subjective inferences. Hence the “right brain” is the Master and the left… well, in essence the argument runs that the analytical approach of the left hemisphere has become over-emphasised since the Industrial Revolution, and that the dehumanised thought processes which have become more common probably shouldn’t be a surprise. McGilchrist (an All Souls fellow and philosopher turned medic and psychiatrist) draws interesting parallels from case reports of autistic and schizophrenic patients (who have diminished right-hemisphere communication) and people who’ve experienced right-hemisphere strokes. I suspect he may be onto some important things; his Substack is great, too. (I expect you’re familiar with all this, but hey, for anyone who isn’t…)
I think Mary Harrington is also really interesting on the post-birth control world and the ascendancy of what she describes as the “meat lego” approach to living (I expect you’re already familiar with her posts, but thought I’d mention her).
Thanks again: great piece.
I haven't actually read anything by Mary Harrington - her book sounds interesting.
Yeah, Mary blogs on Substack too, as “Reactionary Feminist”, I think. I’ve always found her provocative and interesting. McGilchrist’s book is excellent, too - though not light reading! Very accessible all the same.
Great piece. It resonates quite nicely with some of Bifo’s reflections on autism: “This symptomatology could be read not as a diagnosis of an anomic behaviour but culturally as the description of a novus homo.”
I’ve been diagnosed as a sperg, but I’ve increasingly come to doubt it. I went through the same thought experiment you did, and came to the same conclusion.
I can’t tell you how much your university experience mirrors mine. I’m currently studying philosophy at a top-50 university for the subject, but it’s just senseless thought experiment after thought experiment that never maps onto the real world (Derek Parfit is the best example of this). I’ve literally gone through the same phases as well: vegetarian to vegan (influenced by the effective altruist camp) to eating meat again. I’m really regretting my decision to study philosophy at university and am looking forward to when it’s over so I can read actual philosophy books, not the dross of meaningless articles on meaningless topics that are thrown at me. Not to mention the smuggling in of feminism et al. into every topic and it just being assumed that you are a leftist; one professor even said judging cultures is racist (maybe it is, but it certainly should be done)… It’s a shame how the subject is taught, because it could definitely be the most interesting subject if done correctly.
Sucks being born a human in a world that wasn’t made for us. The System is defined by its inhumanity.
I find this sort of thing a natural area of cognitive dissonance for me. For example, I agree in principle with the notion that all humans have equal worth and that I should care just as much about the welfare of a stranger in some distant country as I do about my own friends and family. But no amount of agreeing in principle is actually going to make me live my life in a way that is consistent with this. In practice I am always going to care about my friends more than a stranger. In fact, isn't this the very definition of a friend?
In other words, much like everyone else, you don't actually agree, even in principle, with the notion in your second sentence.
Another utilitarian Best Possible World is one in which human overpopulation has saturated the entire Earth with biomass. 9 trillion people multiplied by even a single pleasure unit (out of, say, 100) trumps 7 billion people at 100/100 units. I can't wait for our dead to be crushed by trash compactors into biocubes once burial space runs out!
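For anyone who wants the arithmetic spelled out, here is the comparison the comment is gesturing at, using the commenter's own illustrative population and hedon figures (not real estimates):

$$\underbrace{9 \times 10^{12}\ \text{people} \times 1\ \text{hedon}}_{=\,9{,}000\ \text{billion hedons}} \;>\; \underbrace{7 \times 10^{9}\ \text{people} \times 100\ \text{hedons}}_{=\,700\ \text{billion hedons}}$$

The barely-worth-living trillions come out ahead by more than an order of magnitude, which is essentially Parfit's repugnant conclusion in numbers.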
The issue is reducing utility to maximizing pleasure/reducing pain.
I'm sure there is a more sophisticated way to quantify aesthetic value, but to your point it gets sidestepped in favor of KPIs that are easier to collect and improve materially.
I'm not sure I believe it's 100% impossible to quantify the aesthetic experience, but I might be an excel brainlet.
I figure we'll just let them have their pods, until there's no one left who wants one, then unceremoniously unplug them and go fishing.
I think this is a great critique of Effective Altruism, but it doesn't supplant Utilitarianism's role as the "best" philosophical model for modern times.
I won't touch on the moral heft of a utilitarian state of the world, but I'll agree that there's a rising population of anti-modernists who find themselves incompatible with the hedon-optimizing world we live in today. There is a variety of unquantifiable goods (beauty, for one) that people yearn for and that escape the utilitarian worldview.
Without reconciling that difference, there will eventually be a regression to philosophies that overvalue these unquantifiables. This makes utilitarianism inherently unsustainable: until it reconciles those desires, the system will never achieve its intended purpose of maximizing happiness, because we're missing some features and we're also building an unsustainable practice.
However, I don't think Utilitarianism is "wrong". Consider locality: sure, I'd likely save my mom over three random strangers, but how could you argue for spending money to save one person in your city over saving ten people in Somalia? It doesn't make sense, and part of it is that locality means so much less nowadays. More of our relationships are virtual, and through the internet we can have conversations with those very people. The delta between our neighbour and someone around the world is so small now that locality is almost negligible.
I read an article that said it's actually worth spending $100 on a video game instead of donating it to charity, because the game brings you happiness, which keeps you employed and motivates you to work harder, which increases your income, which increases your capacity to donate, perhaps by more than that $100. I think this is a much more holistic, if unmeasurable, and more accurate view of utilitarianism, one which accounts for the human factor while maximizing hedons. We should contribute to our local communities, not because it's better, but because as humans we wouldn't have it any other way.
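One rough way to write down the trade-off that article is claiming (the symbols are just shorthand for the comment's chain of reasoning, not figures from the article itself):

$$u_{\text{fun}} \;+\; u\big(\Delta\,\text{future donations from higher motivation and income}\big) \;\stackrel{?}{>}\; u\big(\$100\ \text{donated today}\big)$$

The article's bet is that the left-hand side can come out ahead, though every term on that side is exactly the kind of thing the excel brain can't actually measure.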
Totally mind opening
Thank you