Thursday, June 28, 2007

cooked up thought experiments and the viciousness of ethics

I've been working on these thoughts for a while, but I hope they will be particularly relevant given the recent discussions of the so-called problem of ethics professors.

There is something that has always bothered me about the thought experiments / intuition pumps that are the ultimate data for ethics according to many people, and for even more people, the rhetorical device driving ethical writing. Consider the following typical case (drawn from a pretty much random source):

You are out in a small boat and come upon fifty people in danger of drowning. Your boat can only accommodate ten people, and there is not time to make more than one trip to shore. Furthermore, there are no other boats in the area, and no one close enough to aid in any other way. No matter what you do, at least forty of the fifty will die.

My immediate inclination in these cases is always to try to find some way out of these situations. This, of course, is cheating, the ethicist will say. If you press them, they will add further constraints that frustrate your creative solutions, until you are left with the "hard choice" that the ethicist wanted to probe your intuition about.

I suspect my annoyance at this procedure is pretty universally understood as a sign that I am not serious about ethics, that I'm being annoying, or that I am too thick-headed to see the point. I also suspect that there is a strong pressure to stop suggesting the creative solutions, and to "play along" when these kinds of cases are brought up, so that the ethics discourse can continue. And certainly, taking for granted a certain conception of what ethics is supposed to do, this would be a totally reasonable way to go.

But I've still got a nagging bit of dissatisfaction, here, that I want to probe. Why do people react this way to thought experiments in ethics in particular? No one tries to "weasel out" of thought experiments in physics (do they?). I think my worries begin with a strong feeling that ethics ought to be tied to helping people make concrete moral decisions. I know this isn't universally accepted; it isn't uncommon to make a distinction between decision procedures and standards of value, to focus ethical theory on the latter, and to ignore any impracticalities of it, because those only matter for the former. I don't buy it; I think ethics has to be strongly concerned with both, and that they probably aren't totally separable. For a criterion in ethics to be true, it seems to me that a minimal condition is that it ought to be usable in (at least some) concrete situations.

Here's how I analyze my dissatisfaction. It seems like what I, or the undergrad ethics student, or others who cause this kind of trouble are trying to do is actually a sign of a good moral sensibility. It's not that we're not serious, or annoying, or thick-headed. It's that we're trying to do what most people try to do, what one ought to do when faced with a moral problem: we're trying to find a solution that maximizes all the types of value at hand. This kind of problem-solving activity uses a variety of resources, including background knowledge from any relevant area, to try to come up with a better solution than the immediately obvious ones.

Take the example above. As I read, I have in mind a picture of what my boat looks like (smallish wooden boat with oars), and I start using my background knowledge to come up with ideas about how we could save more people. Perhaps a few people could use the oars for flotation? Perhaps we could break up the boat and everyone could have a large enough piece to stay afloat? What if, instead of just letting 10 people in the boat, I let as many people as can manage grab on to the side to stay afloat? Etc.

And I start to think about reasons to take certain people over others. I should pick women and children, and probably I could fit more. I'd pick people who look weak or like poor swimmers, and try to come back for the others. Maybe I'd pick one or two stronger looking fellas who could paddle the boat faster than I could. And so on.

None of these considerations are supposed to be allowed, of course, because the point is that we want to theoretically explore the case when we can only save 10 out of 50. We want to ask something about our obligations, or their claims, or whether there is moral remainder, or whatever. They don't want the creative solution because it prevents us from facing genuine conflicts of value, which is one of the things they are interested in exploring. So the ethicist will keep adding stipulations until you are forced to deal with the issue that interests them.

And, given enough examples with this, you're probably going to get driven out of ethics, or you're going to fall in line. This actually strikes me as quite a vicious result! One of the results of studying ethics, then, is to squash one's ability to engage in ethical deliberation! No wonder that it's said: "nothing makes a man so much of a scoundrel as a prolonged study of ethics"[1]. In this case, we should be recommending to students who might hope to become more skilled at solving moral problems, or to become more reflective about right behavior, to stay away from Ethics courses at all costs!

A further worry goes to the heart of the enterprise. I think the most sensible way to understand the point of intuition-pumping must be that it is supposed to make explicit the wisdom implicit in our practices of ethical choice-making, or, a bit more opaquely, to find out our core commitments or basic beliefs at the level of particular judgments. There is a worry, though, that since, in the course of intuition-pumping, we actually end up pretty far away from normal practices of ethical problem-solving, whatever the thought experiments produce is going to be loosely if at all related to any actual implicit wisdom we have or practical judgments that we would make.

[1] It is quite difficult to find the original source of this saying. I've seen it alternatively attributed to C.S. Peirce and G.K. Chesterton.


Evan said...

I think the throwaway comparison with physics bears further analysis, particularly because physicists streamline their thought experiments in even more outlandish ways than ethicists do. And yet it is perfectly well understood that persistently asking questions like "but what if the collisions aren't elastic? what if I want to do this with a non-spherical object? what about friction?" and so forth, far from being a hallmark of one's ability to engage in physical deliberation, can in fact severely hinder one's progress. Moreover, when physicists make these idealizations, they have not somehow forgotten about the existence of friction, nor failed to understand its role in the real world, and indeed are quite good at applying their conclusions to practical cases, removed though their deliberations may be from the real world of inclined planes and normal-sized objects, along with the "implicit wisdom" that applies to it.

dd0031 said...

Yikes. Are you serious here? It strikes me, Matt, that this post is an exercise in uncharitableness. I will respond to one point, however (only one, because I'm in Canada). First point, not all "ethicists" make a distinction between criteria of right action and decision procedures. That's more common for those of the consequentialist stripe. But even those who make the distinction are interested in decision procedures. It's how we determine which decision procedures are best that requires a criterion of right action. Second point, ok I can't help myself, I don't know what the heck this is supposed to say about ethics professors. Are you saying that if Judith Jarvis Thomson were on a boat, and she could save more by working some different plan, she wouldn't? Of course she would. The reason ethicists are interested in outlandish and unlikely scenarios is that we're interested in particular factors, and their relationship to value or rightness. Refusing to play the game means not evaluating that factor, and it means a less well-developed understanding of the factors involved in moral reasoning. You say: "we're trying to find a solution that maximizes all the types of value at hand". But what is that? What maximizes value? What is valuable? All these questions need intuition pumps. Your refusing to engage in them means you can't possibly know, at any time, what would maximize value.

Ok, three points. I have no reason to believe the following:

"And, given enough examples with this, you're probably going to get driven out of ethics, or you're going to fall in line. This actually strikes me as quite a vicious result! One of the results of studying ethics, then, is to squash one's ability to engage in ethical deliberation!" This is a howler of epic proportions. You're suggesting that if one thinks enough about ethics and various topics of value, this will make one "fall in line". And "falling in line" is bad because it quells "one's ability to engage in ethical deliberation". Huh??? Where did that come from? So you are suggesting that JJT wouldn't try to be creative if actually faced with that situation? To quote Gob Bluth, come on. There is NO REASON AT ALL to believe your conclusion, based on your premises.

Theron said...

I think you are probably right to say that ethics should be concerned with both decision procedures as well as standards of value (or a criterion of rightness or worthwhileness). You are probably also right to emphasize the importance of having a richer moral imagination and richer problem-solving abilities than most thought experiments leave room for. I am unsure, though, what the argument for your "usability constraint" on a putative criterion of rightness/worthwhileness is supposed to be. Could there not be legitimate answers to the question, "All things considered, what ought I to do?" that one is systematically incapable of recognizing, being motivated by, or using? (And if this could be the case for an individual, why not for an entire species?) Indeed, ethics should strive to guide action and be usable. But this alone does not entail the non-existence or illegitimacy of an unusable criterion of rightness/worthwhileness, right? (After all, the most transcendental of realists can agree with your point that ethics should strive to guide actions).

dd0031 said...

Ok, four points. Let's examine your last paragraph.

"A further worry goes to the heart of the enterprise. I think the most sensible way to understand the point of intuition-pumping must be that it is supposed to make explicit the wisdom implicit in our practices of ethical choice-making, or, a bit more opaquely, to find out our core commitments or basic beliefs at the level of particular judgments."

These are not identical goals. The "wisdom implicit in our practices" seems tied to our actual practices - which, it seems to me, carry with them on all too many occasions relatively little wisdom at all. The goal of examples, at least when I use them, is the latter, which you label more "opaque", although I'm not sure why it's at all opaque. We're interested in the beliefs people hold strongly about permissible, impermissible, and required moral action. These categories are tied to particular factors, and in order to isolate those factors you need these sorts of examples. Not exclusively, but you do need them. But you go on:

"There is a worry, though, that since, in the course of intuition-pumping, we actually end up pretty far away from normal practices of ethical problem-solving," Ok, stop right here. Why should that be so? Normal practices of ethical problem solving, as you say above, seek to maximize value. But that's why moral theorists design examples that test our intuitions against isolating examples - to determine what factors help establish value. But you go on: "whatever the thought experiments produce is going to be loosely if at all related to any actual implicit wisdom we have or practical judgments that we would make."

Ok, let's say I just give you that. But I don't know what the conclusion is. If the conclusion that's being lamented is that moral theory strays from our moral practice, we should expect it to! If it didn't, moral theory would end up much more like the sort of bourgeois pseudo-morality that's too often displayed in developed, capitalist nations. You'd end up with a moral theory that says: do what you were going to do anyway. Or, rather, a moral theory that's, in the words of Elijah Millgram, all but incapable of critique of existing practices. What's interesting about moral theory and moral epistemology is that it can use our coherent, considered beliefs about moral action to critique our practice. But for some reason I don't yet understand you want to strip moral theory of its ability to do that. But if your claim is that examples stray from people's beliefs about morally relevant factors, seems to me there's no reason to believe it.

Matt Brown said...

Evan: I see a possible point of agreement and a possible point of dissent in your comment. On the one hand, perhaps you're suggesting that physicists place careful limits on their thought experiments, and this reins them in from excess. Hard to see how anything is reining in Peter Unger. Of course, the limits here are based in the real world that physics is meant to cope with, which (according to many), ethics doesn't have at its disposal. On the other hand, you might just be saying that ethics, like physics, may well be able to idealize without going off the rails... in which case, I can use this disanalogy b/w physics and ethics to raise doubts.

Theron: You're right to bring up realists in your response. It seems to me that this is precisely the failing of certain brands of realism: no physicist would ever accept a theory in physics that was too unwieldy to yield concrete predictions. Now either this theory could be true, and truth does not guide / track the acceptance or rejection of theories (bad news for realism), or such theories cannot be true (worse news for realism). In either case, I think we can see a strong epistemic norm that will apply widely across various fields of inquiry, and we have good reasons for accepting it also in ethics. All the more so, since plausibly ethics is more a tool for helping us figure out how to live together, rather than a theory of some feature of the world.

DD: I will have to get back to you later, but two quick points that I will elaborate soon: first, I'm forwarding a largely psychological hypothesis about the effect on moral imagination (to use Theron's phrase) of repeated use of the method of intuition-pumping in ethics, and attempting to make it plausible. Of course, it is totally an empirical claim, but don't cast me as saying that someone like JJT wouldn't try to be morally creative, when I'm rather saying that they would have a diminished ability to be morally creative.

Also, I worry that your defense of the method assumes the determinacy of something rather more ambiguous, a fallacy of misplaced concreteness, if you will. More on these later...

dd0031 said...

Ok. I understand you are trying to make an empirical point. Fair enough. But I think there is little reason to believe that the empirical point holds. The so-called "problem of ethics professors" seems to me to indicate what we all thought before, i.e., that ethicists are neither better nor worse than any other philosophers, morally speaking. This should not be a surprise. We shouldn't expect that epistemologists would beat all other members of philosophy departments at Jeopardy!. Moral philosophers are interested in moral knowledge, moral rightness, etc. Moral philosophers are not moral educators. Moral philosophers are not moralists. And there appears to be no empirical reason to believe that moral philosophy hinders whatever moral education you have already, but rather allows you to approach your moral education critically, which is a good thing.

I don't know what a fallacy of misplaced concreteness is, but it seems to me that moral philosophical inquiry gets whatever concreteness there is to be had in the object that it studies: our moral beliefs and moral commitments. You've given me no reason at all to believe otherwise.

Theron said...

Matt, my apologies for straying from your initial points about the psychological effects of moral philosophizing on one's moral imagination and entering a discussion about realism. I think I see where you are coming from, but let me try to make a few remarks on behalf of realism (both scientific realism and moral/value realism). We can agree that there are plenty of pragmatic virtues that are not mere pragmatic virtues, but also epistemic virtues. I here have in mind virtues like parsimony, coherence, conceptual unity etc. Such virtues are pragmatic virtues; they are also epistemic virtues in that, among other things, they steer theories away from the genuine epistemic vice of being ad hoc. However, the pragmatic vice you cite (pragmatic unwieldiness) does not automatically strike me as also an epistemic vice.

Now, there might be a point to arguing for a "hypothetical usability constraint" on theories. After all, true theories are supposed to bear information about, or in some sense track the structure of, the domain they purport to portray (whether that domain is what exists in the universe or what value consists in). And it seems like if that is what theories do, then their usability derives from their ability to provide us with information about some domain. But I would want to suitably modalize a usability constraint in order to avoid ruling out theories on the basis of epistemic, cognitive, or technological flaws on the part of cognizers. Less sophisticated cognizers cannot use Einstein's theory of gravity; this putatively does not redound to the epistemic credentials of Einstein's theory. Analogous points can be made for the relationship between a community of cognizers and some epistemically virtuous theory. Thus, I would want to modalize any usability constraint to be, when confronted with a case of pragmatic unwieldiness, ambiguous between a fault in some theory and a fault in some cognizer(s). Again, such a hypothetical usability constraint on theories (e.g., "a theory should be usable insofar as the cognizers attempting to use it are cognitively, technologically, and otherwise implementationally flawless") would avoid ruling out theories on the basis of problems with us, not the theory.

You say that if a theory is pragmatically unwieldy, then this makes it unacceptable. I guess it would help to know what is meant by "acceptable." Does accepting a theory mean accepting that the theory is true, does it mean putting it to use via specific actions, using particular instruments, or...? You also say that if a theory is true but practically unusable, then this is bad news for realism. But this is precisely what I was questioning to begin with -- why should usability be a constraint on which theories (moral or otherwise) are true? In other words, why is it bad news for realism? Some might even take this as good news for realism -- to find a theory that is less constrained by the epistemic, biological, technological, economic etc. idiosyncrasies of the human condition might be regarded as, among other things, more objective.

Matt Brown said...

Theron: By acceptance, here, I just mean that the relevant community (physicists for theories in physics, etc.) more or less accepts it. (Meta-linguistic features of English might guarantee that they also accept it as true?) I feel like you can only say the sort of things about usability constraints being hypothetical by ignoring the context of discovery, though. All theories, to be accepted, must at least be usable in some context, the context of the problems they were created to solve. Science usually requires even stricter conditions on usability in order to count a theory successful. Truly, Einstein's theory is not usable by every cognizer. But if it wasn't usable by the relevant areas of the physics community, then it wouldn't be a successful theory of physics, accepted and used by physicists.

Why would such a scenario be bad news for realism? Because scientific realism has largely supported itself on the basis of an inference from success to truth. But here we're breaking the evidential link between the two, suggesting that the actually true theory might be persistently unsuccessful.

Matt Brown said...

DD: The empirical reasons behind my speculative hypothesis are, I had thought, fairly commonplace features of educational psychology. This particular method gets you in a habit of doing a certain kind of thing in reasoning about fictional cases. I'm suggesting that the way of reasoning would carry over to reasoning about actual cases. Isn't this how a lot of education works? And the problem is, in this case, that the kind of reasoning involves taking a certain set of alternatives as given, and actively squelching the urge to develop creative "workaround" solutions. Are you suggesting that there is some reason to think that the normal ways of ingraining habits wouldn't apply in the case at hand?

In terms of the fallacy of misplaced concreteness, I guess what I'm worried about is that ethics assumes that there is a determinate structure of "moral beliefs and commitments" to get at. Surely, ethical reasoning produces a determinate set of moral propositions, but the assumption that these exist prior to the process that produces them is troublesome. I suspect that the basic moral capacities are practical rather than theoretical, know-how rather than knowledge-that, abilities of moral imagination and problem-solving rather than judgments. If this were true, one would need to turn a skeptical eye towards results produced by pushing the moral reasoner into extreme situations (just as many other human reasoning heuristics, which work quite well in normal circumstances, break down in extreme cases, as in Kahneman and Tversky's research on "illogical" psychological behavior).

Theron said...

Hi Matt, thanks for your comments. They help, but I have a couple more questions/comments:

Would such a scenario (a true but systematically unusable theory) be bad news for brands of realism that are not based (or at least, not strictly based) on inferences from success to truth? It is arguably the case that brands of realism appealing to other epistemic virtues (not success per se, but, e.g., coherence and conceptual unity) are what at least some notable moral/value realists tend to be concerned with anyhow.

But even for brands of realism that are based on inferences from success to truth, I fail to see what the bad news amounts to, at least with respect to our initial back-and-forth concerning a usability constraint on theories. It seems to me that the fact that success is used as evidence for the truth of a theory does not straightforwardly imply that there cannot be true theories that are systematically unsuccessful (where 'unsuccessful' means something along the lines of 'pragmatically unwieldy' or 'unusable'), right?

Maybe it is unnecessary to present a thought experiment here (and perhaps doing so is antithetical to your original post about thought experiments!), but I will go ahead and construct one in an attempt to elucidate my earlier point about remaining ambiguous between a fault in some theory and a fault in some cognizer(s) when confronted with a case of pragmatic unwieldiness (or "unsuccess"):

Imagine a community of scientists that accept theory T. T boasts a variety of epistemic virtues, including success -- it would be irrational not to accept T. The community of scientists author millions of publications that offer a detailed description of T. A few months after libraries, computer databases, and archives have safely filed away the scientists' complete and clear characterization of T, a virus (appropriately named the T140 virus) kills all and only those with an IQ of 140 and higher. However, in order to understand the conceptual intricacies of T (which is required to put T to use), one must have an IQ of at least 140. There is no hope of stopping the T140 virus, and there is thus no chance of someone on earth having an IQ of 140 or higher. Now T has become a systematically unusable theory. Is T just as true or epistemically virtuous as it was before the outbreak of the T140 virus? Presumably T tracks the features of the domain it purports to portray just as well as it did before the T140 virus, and the unusability of T is not T's fault, but results from a lack of sophistication on the part of cognizers.

A usability constraint that has a built-in prejudice against theories (and for cognizers) strikes me as at worst implausible and at best in need of a careful defense. A hypothetical formulation of the usability constraint, as I mentioned earlier, avoids such prejudices.

Theron said...

Matt and Dale: regarding the empirical dispute, you might check out Sunstein's BBS paper entitled "Moral Heuristics." Jonathan Haidt has also done interesting work on moral intuitions and heuristics.

David Hunter said...

Interesting discussion. I'm inclined to think that philosophers have an obligation to point out that what we are mostly interested in is the theoretical issues, and that in terms of action guidance the theories, though non-ideal, do the job in most circumstances.

There is a relevant discussion going on over here:
In regards to teaching philosophy

Michele said...

Dear Matt, I left some comments about your post in my personal blog.

Brian Berkey said...

Since Matt's "random source" was actually a post of mine, and since I think it's often quite useful to use the kinds of examples that he would like to eliminate from moral philosophical discourse, I suppose I should weigh in and attempt to defend it.

First, DD's point that refraining from using such examples would seriously undermine our ability to critique existing practices is an important one. By focusing in on certain factors involved in cases with carefully stipulated constraints, we're able to recognize (often serious) inconsistencies in prevailing thought. Sometimes even simple examples can help in this respect (e.g. Singer's case of the child drowning in the shallow pond, when considered alongside a purported requirement to donate a great deal to OXFAM), but in order to bring out deeper inconsistencies (e.g. the various ways in which morally arbitrary aspects of the factual status quo affect our moral thinking) we must employ more complicated cases, in the way that philosophers like Unger do. If we're unwilling to think about such cases, it's unclear to me how we could possibly challenge prevailing views in an effective way.

Also, I think that even if we accept Matt's claim that "the basic moral capacities are practical rather than theoretical, know-how rather than knowledge-that, abilities of moral imagination and problem-solving rather than judgments", we still must be committed to doing the kind of theoretical work that he's argued against. If the basic moral capacities are practical, and consist in, say, skill in moral reasoning and dispositions to think and act in certain ways, in particular in ways that morality requires, we still have to consciously work to develop those skills and dispositions. And in order to consciously develop the appropriate skills and dispositions, we need to know what our use of those skills and dispositions ought to be aiming at. And in order to know that we have to do more than look at existing practices; we have to do moral philosophy, and decide what it is that morality requires of us, so that we can use that knowledge to develop the necessary skills/dispositions. Otherwise we're just developing them blindly, and, in all probability, simply conforming to the behavioral status quo.

Matt Brown said...

DAVID: Thanks for the link. Note, I'm not attacking theories per se (though I do express some doubts about the current state of ethical theory). My target here is one method of generating/testing/justifying ethical theories.

BRIAN: Thank you for your reply! Whether and how such examples are really needed depends on what you think they are telling you. I'd be curious if anyone knew of a systematic defense of intuition-pumping in terms of what it is supposed to be evidence for. It might be revelatory of basic judgments, or about inconsistencies in prevailing thought. On the other hand, if you think that our actual beliefs in the types of highly unusual cases that I'm worried about are just indeterminate, and pushing someone to make a determination just results in confabulation, then it isn't so revelatory.

Another question I have is how thinking about such cases can "challenge prevailing views in an effective way," if we read "effective" as "useful for resolving moral perplexities we encounter." If, as you say, it can help us recognize inconsistencies, how can we resolve them?

Finally, I'd like to say again that I'm not against normative, systematic ethical theorizing, only against one method for it popular in contemporary philosophy. (I don't recall Kant, Hegel, Mill, Jane Addams, or John Dewey, to name a few ethicists I admire, using such a method in developing and defending their ethical theories.)

Further, you say that in order to develop our moral skills, we need to know what they ought to be aiming at. Yes and no. If the answer to what we ought to be aiming at is a determinate set of results, I say no. When training scientists, we don't know what discoveries they are aiming at. When training philosophers, we don't (or shouldn't) know what positions they are aiming at, whether they will be idealists or realists, Kantians or Utilitarians, etc. Likewise, when training moral actors, we don't know what moral actions they are aiming at. But in all these cases, we do need to know the proper telos of the activity, and we need to know something about effective and ineffective methods of training people to be good at that activity.

MICHELE: Thank you for your kind words. I like a lot of what you're saying here.

In fairness to my opponents, I believe that they would insist that we must appeal to some evidence to justify ethical theories, or to answer ethical problems, and since there are, unlike in science, no experiments or observations that can answer these questions for us, we can only look to our basic judgments on ethical cases. I think you're right, that this source of evidence is faulty, and that there are other sources, but one can't fault them for seeking evidence.

Second, I'm not sure that I would want to distance ethics from science completely, but I'd want to expand the notion of science away from a value-neutral positivism towards a value-laden, creative problem-solving activity. John Dewey thought that we needed to make morals more scientific or more intelligent, but that we also needed to make science and intelligence more moral.

Theron said...

Hey Matt,

Two things:

(1) I wanted to point out that my last response was perhaps more radical than it needed to be. I argued that a theory can be true (or epistemically virtuous) independent of our ability to comprehend it, and (thus) independent of its usability. As was discovered in a conversation with Dan, in order to argue against a strong usability constraint and for a weak or hypothetical usability constraint, I do not need to go this far. (After all, some might reasonably object that if there are no cognizers present to comprehend the theory, then there is no theory.) So imagine roughly the same thought experiment as before, only this time the virus does not prevent cognizers from recognizing the truth of the theory; it only blocks them from ever putting it to use (it also blocks the theory's ability to generate usable theories). Apologies if you are put off by the exoticness of the thought experiment -- I can try to make it more realistic or detailed if you want.

(2) Regarding moral theorizing, moral psychology, and moral action, here's another potentially interesting thought: that to be most excellent at any particular aspect of morality (theoretical or practical), one needs to be morally excellent OVERALL. And in order to be morally excellent overall, one needs to be psychologically or intellectually excellent overall. It is sort of like the idea of general intelligence. Focusing on being excellent at a variety of things (A, B, C, X, Y, Z, P, and Q), or just focusing on being excellent in general, can make one more excellent at Z and Q than someone who merely focuses on being excellent at Z and Q. John mentioned this to me in a conversation about the super-virtuous Paul Farmer (check out "Mountains Beyond Mountains"). While I do not agree with Susan Wolf's disparaging remarks about moral saints, she might be onto something insofar as it is empirically the case that being a moral saint requires being virtuous overall (which includes, for example, having a great sense of humor, being knowledgeable about a variety of topics, and knowing how to talk to and take the perspectives of people from drastically different backgrounds). Claims about general (moral) intelligence are empirical claims, and to the extent that they are plausible, they seem relevant to your discussion here.

Sarah A. said...

Matt, are you concerned that 1) the kind of general rules that “cooked up thought experiments” lead to will be warped by the strangeness of such cases, or 2) that ethicists, by refusing to participate in creative problem-solving of moral dilemmas, are in fact causing that once-innate ability to dissolve an apparent moral stalemate to atrophy? I have been thinking about these concerns.

Regarding (1), if the theories of fundamental values or ethical principles are “discovered” only by way of considering somewhat deviant (and almost always unfamiliar) thought experiments, the content, shape, bent of these principles may be less an accurate reflection of our actual values and more a result of what the human mind does when backed into an ethical corner. Ok, ok. So moral rules are supposed to be universal and consistent in all cases but I was just considering the following: the behavior of a wild animal being chased into an inescapable dead-end, or perhaps the behavior of, say, the characters in the movie “The 300” in their final battle. I can imagine an interesting analogy to this last image. Notice that the movie-Spartans presumably realized that wars can and do occur and, based on this information, devised principles to live by. The result: a vicious community of hostile, belligerent masochists. Perhaps the method of discovering the fundamental principles of right and wrong or value by means of the extreme cases proposed by most ethical thought experiments has or will wreak similar havoc on the ethical domain; ethicists become moral Spartans (Picture Judith adorned in helmet, crouching in phalanx formation as she confronts the anti-abortionists, a sneer of battle lust twisting her lips… sorry). I am not sure what the psychological or substantive result of such extreme theorizing is but one might take pause and consider.

If (2) I wonder what the empirical evidence will ultimately suggest. I will say that, since I began studying moral theory, I have become a bit of a misanthrope, especially when considering the right action in a moral case. My downfall is obviously not evidence of a general trend but I wonder if it could be the result of the continuous backing into ethical corners. Perhaps the change occurred just because I have neglected my problem-solving muscle for the sake of more abstract ethical thought experiments (if you don’t use it, you lose it, right?). I am ultimately inclined to think that the problem (I do regard it as a problem) is the result of a bit of both.

At any rate, one might take refuge in the fact that Applied Ethics is the place to offer guidance in most actual ethical decisions. But these thinkers have been influenced, no doubt, by the ethical-thought-experiment smackdown just like all the other theoretical ethicists. I am beginning to wonder if I really want to trust the principled ethical prescriptions of a bunch of battle-worn moral philosophers (a party to which I have aspired to be a member). For now, the alternative may be far worse in instances where actual moral dilemmas arise. What might the moral animal do when she is pressed to make hard choices (and the paddles just won’t keep everyone afloat)? I have a hard time believing that the person who has not contemplated such a scenario and attempted to understand the principle underlying right action will have much luck determining what action to take and why this action is the best one. Experience with such cases, as well as examined responses to them, seems somewhat indispensable.

Matt Brown said...


Thanks for your response! I'm concerned not only with 1 and 2, but also with 2 as applied not only to ethicists but to those who have been educated by ethicists as well.

I take your comments as mostly friendly, and appreciate them. Where you point to places where ultimately it is an empirical matter, I agree, and I'm merely speculating on likely results.

On your last remark, you may well be right that ethical training of the sort I've criticized might be useful in the case of so-called "tragic dilemmas," where both choices are bad and there are no easy ways out. Of course, such cases must exist. So it would behoove me or anyone else, if we were to attempt to provide a replacement for how we do & teach ethics, to attempt to recapture this ability to some degree.

Anonymous said...
This comment has been removed by a blog administrator.
Timothy Griffy said...

I am very familiar with the tactic of adding "further constraints that frustrate your creative solutions" to get at what the interrogator wants, though from a very different angle. I am a pacifist, forbidden to use violence in any situation short of a clear and present danger to the life of someone else. So often enough, I will get questions about what I would do in this or that circumstance. I think the goal is to trap me into a situation where my abstaining from violence allows them to claim moral superiority for themselves. Or it becomes such.

In most cases, these are not situations I had not thought about already, and for which I had figured out ways to get through without using violence. That's where the further constraints start coming in. Eventually, what started out as a fairly realistic scenario devolves into something so fantastically removed from reality that there really isn't any point in even trying to answer.

I can see the general utility of ethical thought experiments in getting us to think about the decisions we would make for the situation. However, my own experiences have made me wary about outlandish thought experiments designed to force a conclusion (of which the ticking time bomb scenario is an exemplar) or artificially suggest inconsistencies where none really exist (e.g., Unger's sedan and envelope experiments).