Usually when I discover (or, more rarely, think up) a thought experiment about a moral point, and discuss it with an arbitrary person whom I will (for convenience) call Kim, the conversation goes like this:
Me: <Interesting scenario> - what do you think?
Kim: I would just <avoids point of scenario by nitpicking>
Me: You know what I meant. <applies easy fix to scenario to prevent nitpick>
Kim: Well then, I’d <avoids point of scenario by raising unrelated moral issue>
Me: That’s not the point. The point is <point> - let’s say I constructed the scenario to make <moral issue> not an issue.
Kim: Hmm. <avoids point again>
And so on, and so on.
Now I have a platform on which to present the prerequisites for using hypothetical situations as aids to moral understanding.
I have read two excellent pieces about logical rudeness - one by a Peter Suber [link to WebCite, because the page looks suspiciously like it will vanish soon], and one on LessWrong. Logical rudeness is a term used to denote a whole variety of techniques used to appear to win arguments, rather than to address the issues at hand. I can’t offhand think of a way to improve Eliezer Yudkowsky’s explanation on the LessWrong page I linked, so I won’t elaborate on it.
The main way people are logically rude with moral dilemmas [I suffered a little dilemma here myself, wondering whether to sound pretentious by pluralising as “dilemmata”] is in working out lots of ways in which your hypothetical situation could, in fact, not be about the point you want it to be about. A paraphrased example that actually happened to me:
Me: <explains the torture vs. dust specks moral problem>
Kim: But how can you possibly even contemplate torturing a person! You’re an evil person!
Me: I would contemplate torturing a person if it would avert some greater harm, yes. That’s not to say I would torture a person.
Kim: But torture! Evil!
This example shows Kim latching on to an emotional part of the hypothetical situation, and using it to launch an ad hominem. This is not only logically rude (I could have outlined any scenario at all, included the word “torture”, and gotten the same result; Kim ignores the effort I put into the explanation) but also verges on the socially rude. (In the actual situation in which this happened, I lost my temper, I am ashamed to say; the discussion, which was between about ten people, quickly turned into what was essentially a shouting match, which was only dissolved when some of us insisted on watching the latest episode of Doctor Who.)
The key way to avoid this is to make sure that you never stop yourself from considering something, and never condemn others for considering something. It’s a moral dilemma - you’re meant to feel uncomfortable while thinking about it. You shouldn’t be afraid merely to think something, though it takes some time and effort to learn not to avoid uncomfortable thoughts. (Obviously, speaking those uncomfortable thoughts aloud is something to consider avoiding.)
The Least Convenient Possible World
The other major way people avoid grappling with moral dilemmas is to say, “But your hypothetical situation doesn’t actually work, because of <this objection>.” It’s a very natural thing to do. My major inspiration on this is the LessWrong post on considering the least convenient possible world during debates. (As an aside, I’m not sure whether to use the word “argument”, “debate” or “discussion” - an argument is a pointless thing, while a debate is something you enter with the aim of winning. Neither of these is what I am actually talking about, but the word “discussion” is becoming a little monotonous.)
The usual situation: it’s perfectly obvious to you (or at least would become so after five minutes of thought) what the flaws are in the presentation of the hypothetical situation, and abundantly clear that those flaws could be fixed; but because you want to win the argument rather than address the moral issue, you point out the flaws and waste the time of all concerned.
However, the aim of a moral discussion is not to prove yourself to be a better arguer, but to discover what your thoughts are on an issue you’ve never really seen before. If you are going to point out the flaws in the given situation, at least do so while presenting a solution. My usual tactic when someone (let’s make it Kim again) presents me with a moral dilemma is to begin the discussion with something like:
I presume we can ignore <this flaw>? I could fix it with <very brief explanation of fix>.
Invariably Kim will reply with something along the lines of “Yeah, that’s what I meant” - and that is the signal for “I am trying to discuss a moral problem, not to construct a watertight scenario.” If Kim were instead to respond with “Hmm, I hadn’t considered that…”, that would indicate that ey was looking for the implementation flaws in the situation ey had outlined. Then and only then would I generate more such flaws.
I’m not holding myself up as a paragon of hypothetical-considerators, but I like to think that I’m a bit better at it than most people are. My overarching rule is:
If either party in a discussion has become angry, you have failed.
Of course, some people just enter into arguments in order to make you or them angry (after all, it’s quite fun to be angry about something that doesn’t matter) - but if you actually want a fruitful discussion, avoid inflaming people.