When a President is Charged with Rape

After several months of investigation, President Moshe Katsav was indicted this week for the rape and sexual harassment of four former employees. Today, former justice minister Haim Ramon was found guilty of sexual harassment. And last week, an investigation was opened against Prime Minister Ehud Olmert over suspicions of abuse of power during the privatization of Bank Leumi (as a side note, I should mention that Bank Leumi was founded by my great-great-grandfather, Zalman David Levontin, in a very different era of Israeli history).

What does it mean for a country when several of its leaders are charged with serious crimes at the same time? Should the focus be on the number of criminal cases, or rather on the fact that the legal system functions autonomously and that charges against leaders are not overlooked? Charged with the most serious crime for which an Israeli leader has ever been indicted, Katsav has said that he is the victim of a conspiracy by ‘political enemies.’ Haim Ramon, who was found guilty of kissing a young soldier against her will on July 12, the day the Lebanon war erupted, has announced that he will appeal the verdict, repeatedly insisting that the allegations are false and designed to condemn him politically. The three-judge panel wrote unanimously and clearly: all the elements of a sex offense were present; the victim spoke the truth, while Ramon did not. The Attorney General’s office stated, “It is not an easy day when a minister is convicted, but the verdict is a confirmation of the enforcement of legal norms of autonomy and the dignity of women [in the country]”. The debate over the significance of these events is heating up, between those who see sheer corruption among the country’s leaders and those who see the independent, unbiased operation of a legal system in which all are equal before the law.

Posted by Orly Lobel on January 31, 2007 at 12:55 PM

» Alleged Rape by Moshe Katsav from Sex Crimes I normally don’t blog about international sexual violence issues because it is hard enough to cover domestic issues. However, since Orly Lobel at Prawfsblawg has commented on the story, I figured I should do a post as well. This is [Read More]

Tracked on Feb 1, 2007 4:36:23 PM

Comments

Orly, my own sense is that I’m glad the legal system in Israel is capable of bringing charges against its leaders; this is a much better sign of “normalyut” and a healthy system able to detect and extrude corruption or vice. Of course, if the charges *are* trumped up and the result of corruption, vice or caprice, then Israel really is in trouble…

Posted by: Dan Markel | Jan 31, 2007 4:39:13 PM

Teaching students to be “real lawyers”

It’s an old and common complaint: Law schools don’t teach students how to be “real lawyers.” Cameron Stracher returns to the theme in this Wall Street Journal opinion piece. He reports, “[t]here appears to be an emerging consensus that although law schools may teach students how to ‘think like a lawyer,’ they don’t really teach them how to be a lawyer.” “One of the biggest problems with the current state of legal education,” he writes, “is its emphasis on books rather than people.” In Stracher’s view, part of the answer to the problems with legal education is a thoroughgoing shift toward clinical education; law schools, he thinks, should not be content with a few discrete, limited, short-term clinical classes, but should instead be more like medical schools, and provide “sustained clinical experience,” “rotations,” etc.

Now, there’s certainly something to Stracher’s charges. That said . . .

I want to push back a bit when Stracher writes:

By giving students the false idea that being a lawyer is all about intellectual debate, we also drive the wrong students to law school in the first place. The hordes of English majors who fill our classes might think twice if they knew that economics and mathematics–with their emphasis on problem-solving–are the best preparation for a career in law. Flowery prose is seldom valued by an overburdened judiciary.

It is not (I hope!) philosophy-major / law-prof defensiveness to think that, while warnings about “flowery prose” are certainly appropriate, law remains a deeply humanistic, “liberal artsy” enterprise, as well as an arena for problem solving. And then there’s this:

Law is not brain surgery. It is a skill that can be acquired through practice and repetition. This is perhaps the most interesting lesson from Brian Valery, the over-ambitious paralegal: He fooled those around him who ought to have known best. In the late 1990s, I litigated against another paralegal who later pleaded no contest to five criminal misdemeanor charges of unlicensed law practice. What struck me about him at the time was how good he was at his job. He blustered, bluffed, threatened and cajoled with the best of them. He knew the law and argued it capably. But then again, he learned his trade the old-fashioned way: He practiced it.

I hope it isn’t — I don’t think it is — merely snobbish or anachronistic to cling to the view that, actually, Valery didn’t learn the law, and didn’t really practice it. We can roll our eyes, as Stracher does, about the Ivory Tower, and about law profs who know their Aristotle but not their county courthouse, but, in my experience, the best lawyers — the ones who had judgment, who really understood a case and the people and issues involved — were those who also understood that, for all the “bluster[], bluff[ing], threat[s] and cajol[ing],” the practice of law is, at least in part, an intellectual enterprise and, in many ways, moral philosophy at the retail level. Stracher says that Valery “knew the law” and “learned his trade.” But isn’t there something to wrestling with the competing arguments whose resolution is reflected in the “law” that Valery knew, to understanding the context out of which that “law” emerges, and the goals that “law” is trying to accomplish?

Posted by Rick Garnett on January 30, 2007 at 10:51 AM

» Retail and Wholesale Philosophy from Legal Profession Blog Posted by Jeff Lipshaw Rick Garnett (Notre Dame, left) has a neat post over at PrawfsBlawg reacting to a Wall Street Journal op-ed piece about the purported failures of legal education in training students to be problem-solvers. The part to [Read More]

Tracked on Jan 30, 2007 12:21:17 PM

Comments

As a law professor who takes a rather practical and concrete approach to teaching about the law, who continues to practice law on the side, and who regularly writes for a practitioner audience as well as for academic venues, I nonetheless am convinced that the primary purpose of a legal education is not to convey the nuts-and-bolts of practice but to immerse students in the fabric of the law. They will have decades in which to develop the skills of nitty-gritty practice, but most will never again have the time to come to appreciate the simultaneously integrated and complexly varied web of doctrine and theory that underlies the law. A shift to even more skills-oriented teaching undoubtedly would enhance the basic practicing ability of lawyers in their first few years of practice, by moving the unavoidable apprenticeship back into the law school years. But the accompanying loss of study devoted to the nature of law and legal theory ultimately would impoverish the profession, leaving lawyers less able to draw upon the richness of legal doctrine and theory to represent clients well in anything other than routine matters.

Posted by: Greg | Feb 1, 2007 4:42:10 PM

I think theatre is the best preparation for law school, because every day in class I have to act like I care.

*Just kidding!*

I do think Stracher is on to something, even if I don’t quite agree with everything he says. My background is theatre *and* computer science. (Don’t ask.) If you look at various computer science programs in the U.S. there are two camps: the theoretical camp–where you’re likely to learn a great deal about the structure of programming languages and become a Scheme master, dreaming about recursion all day; and the practical camp, where you’re just as likely to have classes on advanced SQL database administration.

The point being, law school is biting off more than it *should* chew. In an effort to protect their academic good standing in spite of their under-education (I think law profs should have PhDs, like most other profs), law professors wax philosophic about “law”. Meanwhile, practicing lawyers, jealous because they aren’t law professors, lament the fact that law schools don’t provide enough practical education. Where are the law students? Yawning in the back row.

Why do all law schools have to do both? Why not have schools with a reputation for theory (Yale?) versus schools that bang out top notch practicing attorneys (Your School Here?). Obviously, no school should be exclusively one or the other, but I don’t see anything wrong with having schools *focus* on one versus the other, and letting students decide what they would like their focus to be, either.

It works in *many* other fields–why not legal education?

Posted by: Dave! | Jan 31, 2007 2:46:56 PM

Scott’s example that he was able to become a pretty good brief writer is inapt. That’s mostly because brief writing is the one practical task you do in law school – at least 2-3 briefs at my law school. But that aside, I think a “practical skill” class that would be useful is a drafting class – contracts, wills/trusts, etc. A second practical skills class that would be useful (at least for those students who are going to practice at a large law firm for some part of their careers) is a class on financial markets, corporate structures, capital formation, bond issuances and the like – a sort of survey course on how corporations are structured and how they raise funds. I think this type of knowledge would be beneficial for both corporate attorneys and litigators.

Rick:

I think Stracher is correct that law schools could better emphasize that tight, analytically sound argumentation is – as a practical matter – more useful than “flowery prose.” Law is sometimes an intellectual enterprise, especially for those – like yourself – who have the luxury of practicing as an academic specialist rather than an advocate. But with the exception of the rarefied practice of constitutional appellate litigation, the day-to-day practice of law does not afford much in the way of starry-eyed intellectual debate. A client presents a problem – and that problem must be solved. And whether law schools prepare students for that type of experience is not clear.

Posted by: Alex | Jan 31, 2007 2:04:23 PM

John, And, not surprisingly, NYLS faculty seem to think that their students will never reach their full potential as members of that influential class. So, the dean says they will never get their ticket taken at the gate of the influential class (despite having paid for it), and one professor seems to conclude that since his students are not making it, we should just give up on the whole thing and learn how to file stuff.

Posted by: S.cotus | Jan 31, 2007 12:29:07 PM

There’s also a broader point to be made here about the role of lawyers in a democratic society.

Assuming that lawyers constitute an influential class with an effect on government in great disproportion to their actual numbers, and assuming that lawyers have a disproportionate impact on the accepted realm of legal possibilities in a society (i.e. the set of possible legal reforms considered feasible by politicians and the public), the importance of educating lawyers to think broadly and deeply about the law, and the values implicated by it, becomes obvious.

Put differently, the importance of the legal profession extends beyond the practice of law, and therefore so too should legal education.

Posted by: JohnQ | Jan 31, 2007 12:23:12 PM

Mark just said what we all know. Law schools teach about the “fabric” of the law. These are the arguments that real lawyers make in court. By the time students graduate law school, they should be able to understand complex bodies of law, make arguments, and be ready to tackle other areas. Sure, some “practical” experience helps, because it gives people little “practice tips,” but it does little else.

Then, someone lies about going to law school. A firm doesn’t catch it, because, perhaps, he never had to act as a lawyer or perhaps clients were unable to detect his lack of intellectual discipline.

The reaction from some quarters is strange: it is “blame the law schools.” From NYLS (which, as I said above, is filled with bitter people who know that many of their grads, at best, will be doing document review after they graduate) comes the word that law schools should be less intellectual – and more practical – because, it seems, someone “got away” with not going to law school because he worked as a paralegal.

While his firm has returned a lot of money, it may very well be that he committed malpractice as well. But in his field it may not be immediately obvious whether he did, and since his firm is likely simply to repair any damage done, court decisions are unlikely.

Posted by: S.cotus | Jan 31, 2007 12:19:51 PM

I think I pretty much agree with Scott’s substantive post. I’m really not that persuaded that law school should be changed radically to introduce a lot more “practical” skills. I think it’s a good idea to try to incorporate some practical focus within classes you’re already teaching – I have my students write memos to the senior partners or opinion letters to their clients, for example – but I think focusing on how to take a deposition or produce documents would do a serious disservice to our students. We could train them to be very comfortable making objections to form, but without the “intellectual” or “academic” training, how would they know what questions to ask? The significance of the questions derives entirely from substantive law. And what about cases that raise novel issues? In my experience, it was pretty common to get a case that presented an issue for which you couldn’t find a good answer. And “winning” on those issues is about understanding how to analogize to favorable situations and distinguishing unfavorable ones. I don’t know how one would know how to do that by going through a practically focused curriculum.

I’m not suggesting that one could not learn to read cases or to fit broad pieces together on her own. I suspect some people could do that – just like some people could learn any discipline on their own. But learning it in a structured environment is much more efficient. Moreover, the argument that someone could learn it on their own is hardly an argument to focus more on practical skills. Between the two – practical skills or “academic” skills – I think it’s fair to say that practical skills are much more easily acquired outside of school.

Posted by: Mark McKenna | Jan 31, 2007 11:32:24 AM

Patrick, As law dean, one is responsible for most everything at the law school, including admissions. The law school tells prospective students that they *should* attend the law school. Yet, somehow this dean believed that most (if not all) of the admitted students should not have attended.

We lawyers think that practicing law is the highest form of existence. To say that someone “shouldn’t” do it is an insult to that person’s soul.

Posted by: S.cotus | Jan 31, 2007 10:59:00 AM

I don’t think that questioning whether a student should in fact be attending law school, and whether attending law school is really in their best interest, counts as being condescending towards the student. I may not be law faculty, but in my time in law school I certainly knew a few students who probably shouldn’t have been there. Some of them even came to the same conclusion prior to graduation.

Posted by: Patrick | Jan 31, 2007 9:38:46 AM

Okay, so I am not going crazy. That attitude is, indeed, present at NYLS, even on a dean-to-dean level.

Posted by: S.cotus | Jan 31, 2007 2:10:25 AM

FWIW, about a decade ago, the dean at NYLS mentioned to me that he questioned whether his students should in fact be attending law school. Specifically, he thought that, from the standpoint of their own best welfare, it was a poor choice for many of them.

It sticks out in my mind because I thought it a very odd thing for a dean to say to an outsider.

Posted by: bill | Jan 30, 2007 11:52:50 PM

Scott, I don’t care if you were offended or not. I am offended by lots of things. Perhaps I should not have spoken in absolutes, because I have not had private conversations with all members of the faculty and all the students. But the ones I have had, without exception, have expressed the same thoughts about the school.

I find it hard to believe, on the other hand, that you had no idea what discovery was. You took civil procedure (usually required). You also said you took litigation courses. Heck, I only took two, and I knew quite a bit about discovery. Moreover, most summer associate positions involve some involvement with discovery.

To suggest that someone “won” a motion they shouldn’t have because of the writing is vague. While the difference between legal style and substance has been of interest to me, people have not been able to explain how style will change “the law” as it impacts one litigant. You probably should post a copy of the briefs and indicate the error, so we can see what you were talking about. Sure, it is nice to tell war stories, but very often it seems like people are just complaining about a result they don’t like.

Congrats on telling us that you can take a good deposition.

Posted by: S.cotus | Jan 30, 2007 11:08:44 PM

That ugliness aside (couldn’t help myself), onto the merits:

Stracher has a point about many law schools neglecting practical skills; I graduated having absolutely no idea what discovery was, for example, even though I took every litigation course I could find — and I think that reflects a major problem in my legal education. But he goes too far in bashing the intellectual side of legal education, for three reasons:

(1) For much legal practice — e.g., anything involving briefwriting — intellectual depth of research and writing ability matters a huge amount. I’ve seen plenty of lawyers win motions they shouldn’t because they were better briefwriters.

(2) Even if practical skills matter a lot, a law school’s comparative advantage is in teaching the intellectual side, which you can’t learn nearly as easily from practice. Despite learning zero re depositions in law school, I ended up being able to take a pretty good deposition in practice after watching a few; I wouldn’t have been able to write a good brief in practice after just looking over a few briefs.

(3) I’m unpersuaded by Stracher’s “Turing machine” argument that if someone can fake it effectively, then legal education is faulty. I know a lot of parents who could fake being a pediatrician and succeed wildly except in the 0.1% of cases in which a child has something rare. Does that mean Stracher or I should be OK taking our kids to Ima Faker, M.D., rather than to a real doctor?

Posted by: Scott Moss | Jan 30, 2007 10:33:45 PM

S.cotus: “EVERY member of the faculty and EVERY student that attends there has indicated that the professors have an abnormally low amount of respect for the students”

(capitals added)

I think we don’t have to take seriously an anonymous poster who makes absurd, patently false (not to mention offensive) assertions that “every” student and faculty member of a school shares the same gripe. I’ll consider believing something he says if he comes out of hiding and tells us how he “learned” this information. My guess is that he “learned” it the same way he “learned” his info re Stracher (who’s a really decent and thoughtful guy, from my limited exposure to him): he made it up (i.e., the absurd assertion, “Starcher didn’t “know the law” or “learn” his trade. He bluffed his way though.”)

Posted by: Scott Moss | Jan 30, 2007 10:27:40 PM

James,

I don’t have a personal grudge against NYLS, but every member of the faculty and every student that attends there has indicated that the professors have an abnormally low amount of respect for the students. After all, the faculty pretty much all attended Ivies, and NYLS has comparatively low admissions standards. NYLS faculty advise students to transfer! That speaks volumes about the quality of NYLS.

Secondly, it is rare, but not unheard of, for a partner at a large firm to become a professor. Most of the time, professors are drawn from people with less than five years of firm experience.

Finally, LSKS, his old firm, is not that large a firm. Because it is so small, I checked the bios of all of their partners and associates. Not a single one went to NYLS!

Now, of course, people should judge a law school on its merits, but most people do not want to send their kids to NYLS. The professors look down upon the students who “settled” for NYLS and, in turn, think that the practice of law is some kind of vocation, devoid of analysis, philosophical inquiry, or thinking. So, he concludes that a person who lied, cheated, and defrauded people was really just “practicing” by some loose definition of the word, and therefore that not as great a harm was done.

Posted by: S.cotus | Jan 30, 2007 5:18:41 PM

James, at least as the line was quoted, I think Stracher might want to study creative writing a little more if the remark was intended to be self-deprecating. It seemed serious, if deliberately provocative, to me at any rate. Maybe an econ major is around to help us with the interpretive question…

Posted by: JohnQ | Jan 30, 2007 2:14:28 PM

S. cotus, what did NYLS ever do to you that you bear it such a grudge? Still smarting over a rejection letter? In any event, a quick check of his bio page would have shown you that Stracher has years of private practice experience, including stints in-house at CBS and as a law firm partner. Also, his time studying creative writing might have been a tip-off that the “flowery prose” line was self-deprecating.

Posted by: James Grimmelmann | Jan 30, 2007 1:43:27 PM

John! Ha! I think art is!

Whatever the case, I am actually surprised at Stracher’s ignorance, though the atmosphere at NYLS might explain it. I mean, where the hell did he get off saying that English majors do nothing but write “flowery prose”? Does he even know what any lawyers — besides temps doing document review — do?

Posted by: S.cotus | Jan 30, 2007 1:24:27 PM

The lurking lemma in your argument, Rick, is that one does not learn about the philosophical and moral underpinnings of the law through the practice of law, and that the classroom is the only place one can become critically familiar with such aspects.

But that assumption is so obviously false that to say it aloud practically suffices to refute it.

As far as the WSJ’s opinion that economics is better preparation than English, I agree that that’s nonsense. The notion that English students learn to write “flowery” prose, i.e., verbose and purple, while economics students learn to write well-crafted, efficient arguments without flourish, reflects more ignorance about what most English professors consider good writing than awareness of what will help in law school. Has he never read Strunk & White? Also, Stracher apparently doesn’t realize that English majors occasionally interpret the texts they read, and argue about which interpretation better fits, which sounds kind of similar to what lawyers supposedly do.

Of course the best preparation is a philosophy degree. Everyone knows that.

Posted by: JohnQ | Jan 30, 2007 12:40:21 PM

I think it is a bit of an oversimplification to say that legal culture focuses on “winning.” Usually it focuses on advancing a position, which in many cases doesn’t require a definitive legal victory and a definitive loss for the opposing party.

It is hard to know what Valery was actually doing. My guess is that he was bluffing a lot. But, perhaps if we had specifics, we would know better.

Unless we think that the government can take care of everyone’s needs perfectly, the practice of law will always involve advocacy for the interests of an individual.

Posted by: S.cotus | Jan 30, 2007 12:33:31 PM

Rick,

I share your desire to see the law as more than merely practical sophistry. Yet in a legal culture that values winning above all, isn’t it hard to argue that Valery wasn’t actually practicing law? Instead of dismissing the Valery example, therefore, I would suggest that his ability to successfully “practice” law is something of an immanent critique of the practice of law itself, a reminder that when law becomes nothing more than a vehicle for advancing partisan interests, “trained” lawyers do indeed become increasingly expendable.

Posted by: Kevin Heller | Jan 30, 2007 11:48:38 AM

Of course Stracher teaches at NYLS, which has an inferiority complex because most of its students go on to become “contract attorneys” and are looked down upon by 1) other lawyers; and 2) NYLS faculty. It doesn’t help matters that Stracher went to Harvard, whereas NYLS grads rarely (if ever) become faculty members anywhere.

What we are seeing is nothing more than shallow elitism, which the WSJ swallows hook, line, and sinker.

People with backgrounds in English do make good lawyers. They might not be able to write flowery prose, but they can 1) read prose; and 2) analyze it.

Let’s just call it what it was: Valery committed a crime. He hurt some people. It is just as bad as someone falsely claiming to be a doctor, airline pilot, or policeman. Starcher didn’t “know the law” or “learn” his trade. He bluffed his way though. The reason he looked respectable is that no self-respecting large law firm will allow its associates to NOT look respectable. So, they gave him a paper trail, but a pretty flimsy one at that.

Posted by: S.cotus | Jan 30, 2007 11:31:54 AM

It is quite obvious that someone who pleaded no contest to five criminal misdemeanor charges of unlicensed law practice lacked an internal view of the law.

Posted by: Snarky Snark | Jan 30, 2007 11:21:16 AM

Solum on “natural justice”

Larry Solum has posted a new paper, “Natural Justice.” Here is the abstract:

Justice is a natural virtue. Well-functioning humans are just, as are well-ordered human societies. Roughly, this means that in a well-ordered society, just humans internalize the laws and social norms (the nomoi) – they internalize lawfulness as a disposition that guides the way they relate to other humans. In societies that are mostly well-ordered, with isolated zones of substantial dysfunction, the nomoi are limited to those norms that are not clearly inconsistent with the function of law – to create the conditions for human flourishing. In a radically dysfunctional society, humans are thrown back on their own resources – doing the best they can in circumstances that may require great practical wisdom to avoid evil and achieve good. Justice is naturally good for humans – it is part and partial of human flourishing. All of these are natural ethical facts. . . .

More from the abstract, and a few questions about Solum’s project, after the jump . . .

Here is the rest of the abstract:

Natural Justice develops these claims in four stages. Part I contextualizes the claim that justice is a natural virtue in relationship to Hume’s famous argument about deriving ought from is, Moore’s open-question argument, and the so-called fact-value distinction. The upshot of the discussion in Part I is the claim that there are no clearly decisive objections to existence of natural ethical facts.

Part II traces the movement from neo-Aristotelian virtue ethics to virtue jurisprudence by articulating a theory of the judicial virtues. Among these are the virtues of practical wisdom and of justice. Practical wisdom or phronesis is best understood on the model of moral vision, which in the context of law is legal vision or situation sense. The virtue of justice is best understood as lawfulness. Just humans are law-abiding or nomimos – in that they internalize the widely shared and deeply held social norms of their social groups. This part concludes with the claim that a legally correct decision is the decision that characteristically would be rendered by a fully virtuous judge under the circumstances of the case.

Part III argues that natural justice can be understood on the model of natural goodness as articulated in the work of Philippa Foot and Michael Thompson. The intuitive idea is that justice as lawfulness is naturally good for reason – using social creatures in human circumstances. This part also articulates and responds to a variety of objections.

Part IV concludes by articulating the sense in which an aretaic theory of law that incorporates a natural virtue of justice as lawfulness can be viewed as an expression of the natural law tradition. The natural law idea that an unjust enactment is not a true law corresponds to two senses in which positive laws can fail to be nomoi (in the technical sense specified by virtue jurisprudence). First, a given enactment may contravene deeply held and widely shared social norms. Second, such enactments may be fundamentally inconsistent with the purpose of law – the promotion of human flourishing.

I’m attracted to Professor Solum’s (and others’) efforts regarding aretaic jurisprudence, although I’m painfully aware that my undergraduate engagement with Aristotle, Aquinas, and MacIntyre hardly qualifies me to evaluate them. So, these questions will probably accomplish little more than to highlight my lack of training, but here goes: First, does “virtue jurisprudence” require a theory of the state, i.e., a theory that explains why it is (assuming that it is) the case that the state may employ coercion (i.e., the law) to push citizens toward virtue? Second, does “virtue jurisprudence” require a teleological account of the person, in order to pour content into the idea of “human flourishing”, the promotion of which is said to be the purpose of law?

Posted by Rick Garnett on January 29, 2007 at 11:09 AM

Comments

a-train:

Sorry for the belated response:

The mature, self-responsible, self-actualizing individual is first and foremost self-governing. The art and science of self-government is for virtue ethical theory the paradigm of good government (or governance). We might imagine, therefore, that the primary task of good government is to assure that the opportunities and occasions of such self-governance are generalized throughout society. Owing to the human condition and reflective of our natural sociability, not all of the preconditions of self-directed individuality can be self-supplied by individuals. It follows that if we are to hold individuals morally accountable for self-discovery and self-actualization, they are entitled to the necessary conditions of same. In short, some of the necessary (yet not sufficient) conditions of eudaimonistic moral aspiration are best thought of as social and political conditions, and thus the responsibility for which is everyone’s: ‘To say that all are responsible is not necessarily to say that each is responsible, though. Still less is it to say that each is necessarily responsible for attempting to do whatever must be done himself.[….] [W]e typically—and rightly—suppose that, when responsibilities have not been allocated to anyone in particular within a group, the most that can be said is that each of them has an imperfect duty to perform at least some (but not necessarily all) of the acts that we might ideally wish be performed. The same general principle gives rise to much stronger implications at the level of the group as a whole, however. When no one in particular bears responsibility for performing some morally desirable actions, everyone collectively has a strong, perfect duty to see to that those things are done, within the limits of the capacities of the group as a whole to so without undue sacrifice. [….] [The requirements of strong collective responsibility are, from the perspective of individual action, a coordination problem.] 
[T]he solution to such coordination problems is, of necessity, a responsibility peculiar to the group as a whole’ (Robert E. Goodin).

In our case, it is the polis or city-state (or, in today’s terms, the State) that provides (through its ‘constitution’) a solution to the coordination problem represented by the generalization of the opportunities and occasions for human flourishing. The state bears ‘ultimate responsibility for providing the coordination that is required in order for people to be able to do the right thing’ (Goodin). For virtue ethics, individual moral responsibilities give rise to collective moral responsibilities that cannot be self-supplied by individuals: ‘Where shared collective responsibilities are concerned, it is—by definition—everyone’s business what everyone else does. And this tautology is far from an empty one. It is everyone’s business, first and most simply, because it is a responsibility that everyone shares with everyone else. It is everyone’s business, second and more importantly, because, for anyone else’s contribution to be efficacious, each agent must usually play his part under the scheme that has been collectively instituted for discharging that shared responsibility. [….] Failure to discharge shared, collective responsibilities…undermin[es] in certain crucial respects other people’s moral agency itself. [….] That is what justifies us, pace libertarian principles, in forcing people to play their part in collective moral enterprises—so that others may play their part in them too’ (Goodin).

Posted by: Patrick S. O’Donnell | Feb 10, 2007 9:58:54 AM

“I would think one would need an argument, not unlike J.S. Mill in Considerations on Representative Government, which endeavors to explain why a democratic State–representative government–is most conducive to human flourishing”

Right. But what Prof. Garnett asks is whether VJ needs a theory to justify coercion of individuals by the State towards virtue. The problem with the question is that, as I understand virtue, one cannot be coerced toward it. It seems to me that the apparent goal of VJ, a truly healthy and just community, is one that mere force or coercion can never hope to attain.

The problem is always the same: there is no unchallengeable evaluative system or authoritative normative standard of right and wrong outside of ourselves. So all we can do to create or move towards this ideal community is engage in a (tentative) dialogue with each other, seeking to genuinely persuade and be persuaded.

Posted by: a-train | Jan 30, 2007 12:50:53 PM

Sticking largely to the abstract: Are states ‘well-ordered’ societies? Are states embodiments of ‘natural justice?’ How do we know the laws internalized are ‘just laws’? How can we be confident that states are creating and not subverting or corroding the conditions for human flourishing? How can we have an ‘aretaic account of the nature of law’ while ignoring where law comes from? [I’m assuming here that the Athenian city-state and the modern nation-state are rather different.] The Liberal theory of the state is not in any obvious way devoted to realizing an aretaic conception of law or the virtues of ‘virtue jurisprudence.’ I would think one would need an argument, not unlike J.S. Mill in Considerations on Representative Government, which endeavors to explain why a democratic State–representative government–is most conducive to human flourishing (or simply individual moral development in which our innate differences allow for the introduction of distinctive values into the world: Mill notoriously assigned too great a role to political participation to facilitate such moral self-development). A virtue-centered account of jurisprudence is parasitic or depends on virtue ethics in general, and thus there is the question of what political form(s) are conducive to human flourishing. We might consider the extent to which the social norms and laws of any given society may fall far short of the ethical ideals and values of virtue ethics (as the Stoics often demonstrated in theory and practice) such that ‘internalization’ may in some measure be thwarting the end(s) of natural justice from a virtue ethical perspective. Your second comment/question about value seems to indirectly make this selfsame point.

Posted by: Patrick S. O’Donnell | Jan 30, 2007 10:09:01 AM

Patrick, interesting comment but it doesn’t really answer the question. Depending on what you mean by “jurisprudence” in the phrase “virtue jurisprudence,” I don’t see why virtue jurisprudence *needs* a theory of state (or a teleological account of the person).

What it needs is a plausible theory of value. In other words, how is it that values come to be widely shared/deeply held? (Solum: “Just humans are law-abiding or nomimos – in that they internalize the widely shared and deeply held social norms of their social groups.”) I.e.: Things are important because they are meaningful. Things are meaningful because they are important.

But how does a norm (etc.) enter the loop?

Posted by: a-train | Jan 30, 2007 9:27:20 AM

I have yet to read Larry’s paper, but I would answer ‘yes’ to the first question re: the state and its coercive powers and recapitulate and quote from an argument made by the late David L. Norton in his Democracy and Moral Development: A Politics of Virtue (1991), a work I find suggestive and provocative in parts but somewhat disappointing in its entirety. Anyway, Norton notes that it is commonly argued or assumed that the idea of individual self-development [his phrase for eudaimonia or human flourishing] is contradicted by the idea of government, owing to its coercive power over individuals while self-development entails or implies the ‘voluntary initiative of individuals and therefore cannot be coerced. But this argument mistakenly supposes that whatever characterizes self-development must likewise characterize its conditions. To say that self-development is voluntary is to say that it is optional. If it has necessary conditions, then self-development is an option only when those conditions prevail. And this is to say that for the option of self-development to exist, supply of its necessary conditions is mandatory. To be sure, supply of the necessary conditions that are to be self-supplied by individuals falls with the option of self-development and is not mandatory. But conditions that must be furnished to individuals by external agencies do not partake of the voluntary character of self-development. Recognition that their presence is mandatory commensurates the provision of them with the coercive nature of government, while respecting the voluntary nature of self-development: individuals remain free to avail themselves, or not, of the provided conditions. 
It is mandatory, of course, that individuals contribute (notably through taxes) to the government that provides the necessary conditions that individuals cannot self-supply, but this is a different issue, namely the balancing of liberty with autonomy, where “liberty” is understood “negatively,” as freedom from interference, but “autonomy,” as “self-direction,” entails positive conditions of enablement.’ There’s of course much more to the argument in Norton’s book.

As to the second question, much depends here on just how one is intending the notion of teleology. I would agree with Julia Annas that Aristotle does not have a ‘universal teleology’ ‘and the teleology that he does have is not a theory about human lives.’ Moreover, while the Hellenistic schools of virtue ethics appealed to nature (NOT in the modern sense of ‘naturalism’), they had ‘the most diverse views on teleology [with Epicurus, for instance, rejecting teleology in nature].’ It is the appeals to nature that I find most intriguing, and which in some measure can be addressed apart from the teleological question. Of course ancient ethics is unavoidably teleological in the rather limited and basic sense that we have a concept of the agent’s final end and final good that emerges from her reflective deliberations. See Annas’ The Morality of Happiness (1993) [the title is unfortunate (though perhaps not from a marketing standpoint!) insofar as what is being referred to is eudaimonia, a term that is misleadingly translated as ‘happiness,’ as Annas herself notes!]

Anyway, these are both necessary and important questions and are helpfully addressed in the literature.

Posted by: Patrick S. O’Donnell | Jan 29, 2007 8:33:07 PM

The Situationist

No, it’s not the title of a recurring sketch on SNL. The Situationist is a new blog run by a polyglot crew of academics concerned with “law and mind sciences.” Its co-creator is Jon Hanson of Harvard Law School, who has been writing extensively and at length (no exaggeration — check out some of the articles) in this area for the past few years. Its contributors include the psychology prof Philip Zimbardo, who is most famous for the 1971 Stanford Prison Experiment. From the blog:

Part of a larger effort, including the Project on Law and Mind Sciences at Harvard Law School (website forthcoming), this blog will provide commentary by social psychologists, law professors, policy analysts, practicing attorneys, and others connected to law and mind sciences. Our posts . . . will address current events and law and policy debates, informed by what social scientists are discovering to be the causally significant features around us and within us that we believe are irrelevant or don’t even notice in explaining human behavior, that is “the situation.”

“Situationism” represents a striking contrast to the dominant conception of the human animal as a rational, or at least reasonable, preference-driven chooser, whose behavior reflects stable preferences, moderated by information processing and will, but little else. Different versions of the rational actor model have served as the basis for most laws, policies, and mainstream legal theories, at the same time that social psychology and related social scientific fields have discovered many ways in which that model is wrong.

The Situationist, then, will be a venue in which the powerful, influential, but incorrect conceptions of the human animal come up against more accurate, if surprising and unsettling, realizations about who we are and what the law is and ought to be. Its content will reflect an emerging interdisciplinary trend in legal scholarship, as exemplified by the work of scholars such as Mahzarin Banaji, Gary Blasi, Martha Chamallas, Susan Fiske, Jerry Kang, Linda Hamilton Krieger, Lee Ross, David Yosifon and many others.

I’m a big fan of Hanson’s work on situationism, and look forward to hearing from him, Jerry Kang, Sung Hui Kim, and others. Welcome.

Posted by Paul Horwitz on January 29, 2007 at 10:48 AM

Drama in Michigan’s Highest Court

A feud has erupted on the Michigan Supreme Court among five of the GOP members of the seven-Justice court. Justice Elizabeth Weaver has called for the removal of four of her fellow Justices for their “inappropriate” behavior and attempts to silence her when she complained about it. Justice Weaver recently published a dissent protesting the election of her colleague Clifford Taylor to serve as the Court’s Chief Justice. Justice Weaver asserted that the Chief Justice, along with Justices Corrigan, Young, and Markman, misused and abused power and engaged in repeated disorderly, unprofessional and unfair conduct in the performance of the judicial business of the Court. According to Justice Weaver, the four Justices should have disqualified themselves from a particular case and then attempted to issue a “gag order” prohibiting her from publishing her dissent on the matter. Justice Weaver claims that Chief Justice Taylor, in an internal memo, called her a “petulant only child” who is “holding her breath until she gets her way.” The four Justices under attack argue that Justice Weaver has violated the confidentiality of judicial deliberations. They also contend that Justice Weaver bears a grudge against them for their decision in 2001 to oust her from the Chief Justice position.

Last week, Justice Weaver asked Michigan’s Governor Jennifer Granholm and members of the State Legislature to convene an independent commission to investigate the Supreme Court controversy in order to determine whether the removal of any Supreme Court Justice is warranted. Article 6, section 25 of the Michigan Constitution enables the Governor, supported by a two-thirds majority of each house of the legislature, to remove a member of the Supreme Court. Writing in the Detroit Free Press, Carter-appointee Judge Avern Cohn (E.D. Mich.) supported the establishment of an independent commission to determine whether a Justice’s dissent can be withheld from the public if such dissent would reveal judicial deliberations and to resolve the allegations of disqualification. Judge Cohn contended that such an action was critical as the “justices have shown they are incapable of doing it on their own.” Time will tell if the infighting reflects a personal rift and political grandstanding or a genuine disagreement about the scope of the cloak of secrecy over judicial deliberations.

Posted by Danielle Citron on January 28, 2007 at 10:23 PM

Comments

The underlying controversy, a legitimate point, is Justice Weaver’s belief that the MSC should change its method of resolving claims of bias when a party seeks recusal of a Justice in a case. The current rule is that, when a party seeks the recusal of Justice A, it is Justice A alone who decides the issue. She thinks that the rule should be changed. The court submitted three different proposals to do so earlier in 2006, then abruptly withdrew them.

Posted by: yclipse | Feb 6, 2007 8:10:32 AM

Much thanks, Jeff, for your comments. I integrated them into the posting. What is your take on what is going on?

Posted by: Danielle Citron | Jan 29, 2007 11:55:10 AM

I haven’t seen Judge Cohn’s op-ed, but you should note he was an active Democrat (a Carter appointee). I’m not impugning Judge Cohn’s integrity for a second (I know him), but no doubt there is political hay being made, and I’d be more impressed if a Republican appointee on senior status were suggesting that an outside agency investigate the internal workings of the court.

Posted by: Jeff Lipshaw | Jan 29, 2007 8:11:51 AM

Singer on human dignity

In a recent op-ed, “A Convenient Truth,” philosopher and ethicist Peter Singer discusses the much-remarked case of “Ashley,” a severely developmentally disabled 9-year-old whose parents want to secure treatment that will prevent her from maturing physically (so that they will be better able to care for her). Addressing the debate over whether or not such treatment would be consistent with Ashley’s human dignity, Singer writes:

As a parent and grandparent, I find 3-month-old babies adorable, but not dignified. Nor do I believe that getting bigger and older, while remaining at the same mental level, would do anything to change that.

Here’s where things get philosophically interesting. We are always ready to find dignity in human beings, including those whose mental age will never exceed that of an infant, but we don’t attribute dignity to dogs or cats, though they clearly operate at a more advanced mental level than human infants. Just making that comparison provokes outrage in some quarters. . . .

What matters in Ashley’s life is that she should not suffer, and that she should be able to enjoy whatever she is capable of enjoying. Beyond that, she is precious not so much for what she is, but because her parents and siblings love her and care about her. Lofty talk about human dignity should not stand in the way of children like her getting the treatment that is best both for them and their families.

Now, this seems wrong to me. A better moral anthropology, I think, is one that thinks “what we are” is, in fact, every bit as important to why we are “precious” as is the (we can hope) fact that others love us. But Singer is serious and prominent, and so I’m curious about others’ reactions to his claim. Any thoughts?

Posted by Rick Garnett on January 28, 2007 at 03:51 PM

Comments

Rick, I think part of the problem for some of us is that “dignity,” in any but the Kantian sense, is an enormously ambiguous and vague term. I don’t really know what it means. In what ways is a small child dignified in a way that is different from an elephant, a dolphin or a bear? I’m not trying to be flip about this; I’m honestly confused.

I guess I just can’t follow either (1) the decision procedure by which we assign dignity to various beings, or (2) the logic that makes ethical decisions turn on whether the object of those decisions has dignity (in other than the Kantian sense).

Note finally that it is emphatically not true that Singer is claiming that the only reason not to throw her in the garbage is her parents’ capacity to suffer. The whole quote runs as follows:

What matters in Ashley’s life is that she should not suffer, and that she should be able to enjoy whatever she is capable of enjoying. Beyond that, she is precious not so much for what she is, but because her parents and siblings love her and care about her.

Note that the part you are referring to is the “beyond that” part — the subsidiary issue for Singer. For Singer, the best reason not to throw Ashley in the garbage is that if you did so, she would suffer and fail to realize her very limited opportunities for happiness. For Singer, what is important is not her lack of sapience but rather her possession of sentience. However, as he persuasively argues elsewhere, a creature like Ashley is not different from many non-human animals in this important respect, and indeed, it is hard to construct a principled difference between her and many non-human mammals that would result in her maintaining a privileged position with regard to our ethical duties towards her. Singer derives from this, not that we should treat Ashley as badly as we treat many other animals, but rather that we should stop treating animals so very badly.

Posted by: marghlar | Jan 30, 2007 9:00:40 PM

I appreciate all these comments, from so many people who know many of the relevant materials so much better than I do. But, for what it’s worth, it is not clear to me why, from “she can’t act autonomously in any way,” it should follow either that “she lacks dignity” (conceding that maybe it does follow that she lacks it in the “Kantian sense”) or that the only reasons for not throwing her in the garbage can are those proposed by Singer, i.e., that third parties happen to care about her.

Posted by: Rick Garnett | Jan 30, 2007 8:07:07 PM

OK then. So we agree that Singer is basically right in this case; that dignity is not what matters in Ashley’s case; what matters is her capacity to suffer, and the degree to which her suffering is likely to cause others to suffer.

Posted by: marghlar | Jan 30, 2007 7:53:50 PM

We don’t disagree: she can’t act autonomously in any way, and she lacks dignity in the Kantian sense. That doesn’t mean there’s no reason not to, say, throw her in a garbage can; the reasons are just those that Singer mentions. Those are all reasons a Kantian can and ought to accept.

Posted by: Matt | Jan 30, 2007 6:05:25 PM

More importantly, perhaps, anything else that could act autonomous in the right ways would also have dignity.

OK Matt. Now explain to me the ways in which Ashley can “act autonomous[ly] in the right ways” in a way that is different from my cat, or just about any other mammal. I’d suggest that you’ll have trouble figuring out a way to categorize Ashley in a way that makes her the right sort of autonomous agent.

Posted by: marghlar | Jan 30, 2007 2:31:38 PM

Kantians aren’t committed to the claim that only “genotypic homo sapiens” have dignity in any way. First, trivially, since many, perhaps most cases of serious cognitive impairment that might keep one from being the sort of being that can be autonomous (the source of dignity, to put it too roughly) are not caused by genetic deviance. More importantly, perhaps, anything else that could act autonomous in the right ways would also have dignity. We just don’t know about those sorts of things. But, of course, a Kantian can very well think that lots of things are bad even though they don’t directly have anything to do with dignity. Just because the only thing that’s unconditionally good is the good will doesn’t mean that all sorts of badness have to be directly related to that. So, being cruel to animals is bad, and Kantians have no trouble saying that, despite what Singer might think. (I don’t hate him like lots of people do, but he is pretty bad on other people’s views.)

Posted by: Matt | Jan 30, 2007 12:41:53 PM

I’m with Singer on this one. “Dignity” is sense without reference. I have enormous trouble conceiving of what it is in the world that we should understand “Dignity” to be referring to.

Singer has spent a good amount of time demonstrating that the standard decision procedures for assigning dignity to feeling entities are enormously arbitrary. Those who wish to justify inflicting suffering on a feeling being on that basis should have the burden of proof to demonstrate why they distribute dignity to some entities and not others. Until they’ve done so, we should rightly regard dignitarian arguments with a great deal of distrust.

The only writer on this thread who seems to have really addressed this issue directly is Frank, but his argument that the sacredness of human embodiment gives dignity to human individuals, whatever their cognitive capacities, seems merely to be shifting the ball. Now we are forced to come up with a decision procedure for which sentient beings are sacredly embodied (if we can agree on what that means); and the methods by which someone would demonstrate that all genotypic humans are so embodied while no other sentient or sapient creatures are seem elusive in the extreme.

I’d like to see less question-begging on this thread, and more confrontation of Singer’s central puzzle: Why is it that only genotypic homo sapiens have Kantian “dignity” or “worth?”

Posted by: marghlar | Jan 30, 2007 11:10:47 AM

I suspect many of us inspired by Kant do not follow him to the letter with regard to what specific properties or attributes human beings must have to qualify for the possession of intrinsic moral worth or value (as with Regan above). For instance, see what Martha Nussbaum has done with her provocative albeit tentative adumbration of ‘ten capabilities as central requirements of a life with dignity’ in Frontiers of Justice: Disability, Nationality, Species Membership (2006 [see the references to dignity in the index]) (and, earlier, Rawls’s political principles were intended to fill out–‘give shape and content to’–the abstract idea of dignity). As earlier noted, the notion of dignity is historically and conceptually integral to international human rights norms (these have something to do with the prevention and alleviation of suffering, do they not?), and the latest human rights convention happens to be on the rights of the disabled (the most rapidly negotiated human rights treaty in history and the first of the 21st century; see http://news.bbc.co.uk/2/hi/in_depth/6173073.stm).

Posted by: Patrick S. O’Donnell | Jan 29, 2007 1:44:04 PM

I understand the Kantian argument (thanks for explaining it so well, Patrick), but am I the only one troubled by the extension of the argument, viz., that seriously disabled humans lack intrinsic moral worth? I find such a conclusion exceedingly uncomfortable, as do many disabled persons, which is partly why Singer is so reviled by many disability activists.

Posted by: Daniel Goldberg | Jan 29, 2007 11:56:46 AM

The heart of Singer’s comment is to deconstruct the loaded term “dignity.” In the sentencing arena, the term dignity is thrown around to attack or defend the death penalty, to attack or defend shaming punishments, to attack or defend imprisonment, to attack or defend physical punishments.

“Dignity” claims — like most deontological arguments — are easy to state and difficult to refute (and may often be cover for other conscious or unconscious beliefs and commitments). That’s why consequentialists like Singer rankle at all dignity talk, especially when they see such talk being used to justify avoidable suffering.

This is why Singer’s comparison to animals (and the reference above to Nazis) is provocative and telling: dominant groups often rely on dignity denial to justify ethically questionable behavior: slavery depended, in part, on a denial of dignity to blacks; the Nazis denied dignity to non-Aryans; anti-sodomy laws often were based on a denial of dignity to homosexuals; legalized abortion relies, in part, on a denial of dignity to embryos; widespread animal mistreatment is based, in part, on a denial of dignity to animals.

I think Singer is simply saying let’s stop talking about “dignity” and let’s instead talk about preventing avoidable suffering.

Posted by: Utilitarians unite | Jan 29, 2007 10:25:47 AM

I hate to be the Godwin’s law trigger, but this statement, standing alone, does seem to step (goosestep?) in the Nazi direction:

“Beyond that, she is precious not so much for what she is, but because her parents and siblings love her and care about her.”

So someone’s (or something’s) value derives from whether others love her? So if we “all” (however defined) decide together that we don’t love someone, then she has no value? Neat trick, and quite convenient to do the horrible.

Again, sorry for the extreme Nazi reference, but it does seem apt here.

Posted by: just me | Jan 29, 2007 9:58:50 AM

I read Singer as claiming that 1) humans do not have intrinsic “dignity” simply because they are human, 2) if any such dignity exists it comes from mental level, probably capacity to suffer if I had to guess based on what else I know about Singer, 3) that Ashley’s mental level is below that of common house pets, and that 4) because of this she has little in the way of intrinsic dignity, but rather has an interest in not suffering, or even being as happy as possible, and has worth because people care about her and her suffering would cause them to suffer as well.

Posted by: Patrick (not O’Donnell) | Jan 29, 2007 9:52:30 AM

Briefly: after Kant, beings possessed of dignity make objective claims on us; it is this dignity that permits our actions to have that ‘motive’ proper to morality and accounts for the rational recognition of the objective worth of others as ‘ends in themselves.’ Dignity is an intrinsic value that signifies absolute worth, ‘a value that cannot be compared to, traded off against, or compensated for or replaced by any other value’ (Allen Wood). Acting morally here means, in one sense, acting for the sake of humanity in someone’s person, thereby respecting the objective worth of humanity as an end in itself and calling upon us to treat everyone with equal dignity. Such a conception is metaphysical in nature. It assumes the fundamental importance of metaphysical freedom and moral autonomy. It means persons are to be construed as both infinitely valuable and irreplaceably valuable. Allen Wood’s work on Kant has helpful discussions of dignity, as does James Rachels’ The Elements of Moral Philosophy (2003, 4th ed.), pp. 130-140. On how dignity has been essential to fundamental formulations of human rights, see Jack Donnelly’s Universal Human Rights in Theory & Practice (2003, 2nd ed.). See also, Robert Kraynak and Glenn Tinder, eds., In Defense of Human Dignity: Essays for Our Times (2003). Understanding the difference between intrinsic and extrinsic value is of some help here as well.

Posted by: Patrick S. O’Donnell | Jan 29, 2007 12:38:44 AM

Perhaps someone can tell us what “dignity” means in this context? It would also help if someone told us what “sacred” means.

Whenever I hear these discussions I can’t help thinking these are just code words for the fact that we all possess an instinct for self-preservation, and championing another’s right to the same ensures our own. Dignity and sacredness come down to, “If I don’t let it happen to you, then it won’t happen to me.” Is the selfish gene at work here?

Posted by: Elliot | Jan 28, 2007 10:37:28 PM

Yea, I think the problem is partially a misreading of Singer here (whatever one thinks of him). He isn’t saying that all humans should be valued by whether and why they are precious to others; he’s saying that this human doesn’t have the kind of agency that gives rise to a special kind of (human) dignity. I agree with Joe: you and Singer are on different pages as to what Ashley is.

Posted by: Steve | Jan 28, 2007 10:21:56 PM

Now, this seems wrong to me. A better moral anthropology, I think, is one that thinks “what we are” is, in fact, every bit as important to why we are “precious” as is the (we can hope) fact that others love us.

But isn’t that exactly what Singer is saying? It doesn’t help that you traded an ellipsis for this sentence: “But why should dignity always go together with species membership, no matter what the characteristics of the individual may be?”.

It seems to me that where you and Singer disagree is not on where value comes from, but rather on what Ashley is.

Posted by: Joe | Jan 28, 2007 10:01:38 PM

Who says we don’t attribute dignity to dogs or cats?

Posted by: Bruce Boyden | Jan 28, 2007 9:43:52 PM

From the little I know of Regan’s position (I’ve only looked at it a bit) it’s not one I find appealing or convincing. I’m hesitant to say much about it since I’ve read very little of it, but in general the (more or less orthodox now) Kantian view of intrinsic worth is the only one I can make any sense of, and on that view, of course, animals don’t have intrinsic moral worth. There are many good reasons not to be cruel to animals or wantonly kill them, and good reasons to treat seriously disabled humans with kindness, compassion, and care, but these reasons don’t depend on the (to my mind highly dubious) claim that these entities have intrinsic moral worth.

Posted by: Matt | Jan 28, 2007 7:33:06 PM

Matt, While I think you may be right about a strict Kantian response to Singer’s remarks, work in the spirit of Kant (neo-Kantian or otherwise) might rely on the distinction Tom Regan made in The Case for Animal Rights (1983) between ‘moral agents’ and ‘moral patients,’ with ‘human infants, young children, and the mentally deranged or enfeebled of all ages’ being ‘paradigm cases’ of the latter. On this view, moral patients possess the (equal) inherent worth intrinsic to all members of this (now expanded) moral community (and thus moral agents have direct duties to moral patients [which of course for Regan includes some non-human animals as well]).

Hard and tragic cases are just that, so I suspect most ethical ‘theories,’ be they of classical Greek vintage or of modern provenance, have no obvious or easy answers (and Singer might have made that point without a polemic against the notion of dignity).

Posted by: Patrick S. O’Donnell | Jan 28, 2007 6:50:42 PM

I don’t agree with Singer’s general approach to philosophy nor with a significant number of his particular conclusions. I don’t share his skepticism about the notion of human dignity in general. But I think he’s exactly right about the specifics of this case, and that most of those (disability rights activists and the like) who have gone into hysterics about this particular case have not thought about it very well. Lindsay Beyerstein wrote a terrific post about that aspect here:

http://majikthise.typepad.com/majikthise_/2007/01/ashleys_treatme.html

It’s not clear that Singer says much that a Kantian would have to disagree with in the quoted passage, either, since on Kant’s account human dignity comes from the ability to legislate the moral law, something that Ashley obviously never will do. Exactly what a Kantian should think about such cases is a hard issue, but I don’t think that Singer is too far wrong to say that in tragic cases such as Ashley’s it’s her parents’ love and care that matter most.

Posted by: Matt | Jan 28, 2007 5:51:25 PM

Patrick is brilliant, as always. For a better approach toward getting people to recognize the “unnecessary and therefore unconscionable degree of suffering in the (non-human) animal kingdom,” I suggest Matt Scully’s Dominion: http://www.amazon.com/Dominion-Power-Suffering-Animals-Mercy/dp/0312319738

As for the core question, I’ll paste in something I wrote 5 years ago:

[Cases like these] force us to articulate our reasons for valuing any feature of the natural or built environment. Why, for example, do we now value humans more than computers? If someone (believes he has) preserved his consciousness on a machine, should the resulting (simulacrum of the) person have (some of) the rights of its creator? Taking up the challenge, Richard Posner [reviewing a book on animal rights] observes that, eventually, “there will be computers that have as many ‘neurons’ as [humans], and the ‘neurons’ will be ‘wired’ similarly.” In such a case, Posner asks, should we be distressed at the thought of destroying a “conscious” computer?

Posner wisely grounds his own response in a widely held intuition: “Most of us would think it downright offensive to give greater rights to . . . computers than to retarded people, upon a showing that . . . the computer has a greater cognitive capacity than a profoundly retarded human being.” But this is not simply a visceral response or brute affect; it is a signifying emotion, reflecting a deeper self-knowledge. No matter how highly we wish to value an artifact like a computer, our scale of values itself is parasitic on our embodied form.

We sense, however inarticulately, that our centers of value cannot hold once the human person, as embodied presently, ceases to be the valuer. As Clifford Geertz has observed, man is a being “suspended in webs of significance,” largely of his own making. Far from being one of many potential transducers, the body qua body is the only reliable vehicle for perceptions continuous with those we now experience.

And from that, I’d reason, perhaps following Michael Perry’s work and JPII’s Theology of the Body, that the sacredness of human embodiment gives dignity to human individuals, whatever their cognitive capacities. These cognitive capacities are not ghosts in a machine; rather, they are constituted by embodiment.

Posted by: Frank | Jan 28, 2007 4:57:34 PM

I wholeheartedly agree with you Rick. The history of moral anthropology from Plato, through the Stoics, and up to Kant would suggest our moral significance is, indeed, not solely or simply (and instrumentally) derived from the fact that others love and care for us, but from our status as human beings possessed of intrinsic value (and such a move in secular ethics was in no small measure beholden to Judaic and Christian traditions: e.g., that we are created in the image of God, that we are children of God, etc.). Singer has blurred or effaced various philosophical and theological boundaries or distinctions that once prevailed between human and non-human animals in order to arrive at an ethical position that prompts or provokes us to think deeply about the suffering of non-human animals. While I’m in sympathy if not agreement with his motivations and especially conclusions, I think we can get there with premises that are not utilitarian, however sophisticated, and thus hold fast to conceptions of human dignity and/or intrinsic worth, conceptions that are the philosophical bedrock (presupposition, necessary condition) of the notion of human rights. In the effort to compel us to recognize our contribution to the unnecessary and therefore unconscionable degree of suffering in the (non-human) animal kingdom, Singer has swung too far in the other direction: conflating or collapsing distinctions between human beings and other animals. Were it otherwise, we would accord little or no ethical significance to the fact that there are some among us ‘whose mental age will never exceed that of an infant.’

Posted by: Patrick S. O’Donnell | Jan 28, 2007 4:38:53 PM

Food Science and the Limits of Empiricism

The cover story in today’s N.Y. Times Magazine is on food science and the “age of nutritionism.” Michael Pollan, a professor of journalism at Berkeley, argues that the last thirty years of food science, ostensibly aimed at making people healthier, have actually made matters worse. He contends that the effort to reduce foods to their component parts — namely, nutrients — has left Americans focused on a never-ending rotation of different nutrients instead of on the importance of the foods themselves. And I think his argument has something to say, as well, about the general methodology of scientific empiricism as applied in any discipline, including the law.

Pollan argues that foods are impossibly complex. He argues that whole foods, such as whole grains, fruits, vegetables, and leafy greens, are the best foods for human consumption, based on a broad-based perspective of human history and environment. But according to Pollan, food science has spent the last thirty years trying to isolate the exact nutrients in these foods that make us healthier. This scientific effort has resulted in an ever-changing series of findings, as scientists proclaim the value of a particular nutrient only to find its effects much less dramatic than initial findings suggested. Pollan argues that this is a result, in part, of an effort to isolate nutrients without the actual ability to do so; foods are so complex that it is impossible to reduce them to simply a list of nutrients. But scientists continue to do so, with predictably incomplete and ultimately erroneous results.

Pollan’s critique of food science has lessons for other forms of empiricism. Here are a few:

Limits to Empirical Methods. Pollan spends the bulk of the article explaining the flaws in various nutritional studies over the last thirty years. He notes that each time the benefits of a new “nutrient” are discovered, it is later revealed that the nutrient itself does not really create those benefits. Instead, it is the nutrient working within its particular environment — namely, within a certain type of food — that may create the nutritional benefit. Pollan goes through a series of nutritional fads — low-fat foods, beta carotene, omega-3 oils, low-carb foods — to show that each has a kernel of truth but is woefully incomplete on its own. Pollan chalks these failures up to the effort to oversimplify something that cannot be simplified.

Quantitative empirical studies share these same limitations. They depend on the researcher’s ability to isolate a single variable and control all other “factors” that might influence the decision. Certainly, all empiricists would recognize the inherent difficulty in doing this. And better empiricism does a better job at actually controlling extraneous variables. But as Pollan suggests, there is some degree of hubris in even attempting this. From his perspective, we are nowhere near the day when scientists will actually be able to explain how foods actually work. While he pays some respect to the continuing scientific effort, he conveys a skepticism that it will ever actually be able to tell us what we need to know.

Overgeneralizing from the Results. Pollan would not have a problem with food science were its findings not so dramatically announced by the media and so extensively coopted by the food industry. For it is the conclusiveness of the studies and the real-world changes that such studies prompt that really cause the trouble. For example, Pollan describes how the low-fat trend in the 1980s actually prompted folks to eat more carbohydrates than they had been before. This made diets worse, not better. Similarly, the recent study finding that low-fat diets did not reduce health risks was weak science, according to Pollan. But the real problem is that the media’s trumpeting of the study encouraged the average person to pick up a quarter pounder with cheese, despite the study’s questionable and inconclusive results.

This, too, is a problem for all empiricists: how to acknowledge that their results are simply one small piece of data in an ongoing process of data collection and interpretation, while persuading their peers that their study represents a critical and important step forward for the discipline. And with law in particular, there is the temptation to argue that a particular empirical result inexorably leads to a particular policy prescription. After all, if law review articles with no empirical support can make such claims, why can’t demonstrable scientific facts?

Ultimately, I think Pollan swings too far the other way. Although food science has inherent limitations, that does not mean that its effort to isolate discrete nutrients is ultimately fruitless. The fact that Vitamin C prevents scurvy is an important and useful bit of information, and eating oranges is not the only way to get the benefits. Findings like this help prevent a wide array of diseases. Ultimately, food science may lead us to understand a lot more about food, and that understanding will help us in our everyday diets and in times of food crisis. But I agree with a more moderate version of Pollan’s thesis: empiricism is important, but we cannot focus on short-term findings as the new answer to all our problems. Putting food science in its context, and using a broader, more comprehensive vision in coming up with our actual diets, is a wiser course.

Posted by Matt Bodie on January 28, 2007 at 12:36 PM

Comments

Interesting article, and interesting take on it. I haven’t read it yet, but Pollan’s take on food science reminds me of Theodoric of York, Medieval Doctor:

“You know, medicine is not an exact science, but we are learning all the time. Why, just fifty years ago, they thought a disease like your daughter’s was caused by demonic possession or witchcraft. But nowadays we know that Isabelle is suffering from an imbalance of bodily humors, perhaps caused by a toad or a small dwarf living in her stomach.”

Posted by: Scott Moss | Jan 28, 2007 9:18:21 PM

Pollan’s article is useful as a case study about the limits of empiricism. But despite his rhetoric, Pollan does use the nutrient-based approach when it suits one of his arguments. In order to substantiate his theory that people should eat more leaves and fewer seeds, Pollan notes how one shouldn’t just increase intake of omega-3 fatty acids, but also should decrease intake of omega-6 fatty acids — he even states that he is borrowing “the nutritionist’s reductionist vocabulary” to make the point.

It seems that Pollan’s concerns about empiricism are mainly an instrument to present his beef (pun intended) with the way in which scientific findings are publicized. Specifically, the studies that are guided by “nutritionism” (and promulgated through media accounts as well as food manufacturers’ advertisements) lead people towards artificial supplements and/or processed food. People start looking to add beta carotene to their diets artificially, rather than start eating those naturally occurring foods rich in beta carotene. To use your scurvy example, Pollan would likely be displeased if people take a vitamin C pill instead of eating fruit (whether they be oranges, or limes, or grapefruit, or fill-in-the-blank).

I have a feeling that Pollan wouldn’t mind nutritionism so much if it led people to the produce aisle of the supermarket rather than to multivitamins and processed food advertised as “low-fat” or “now with omega-3 fatty acids.”

Posted by: J.R. | Jan 28, 2007 7:18:11 PM

On the first point, I’d suggest that some of the insights of complexity theory may provide some reasons why attempting to isolate individual variables doesn’t seem to get us very far in many cases. Namely, systems are not linear; the prevailing feedback loop models highlight the idea that system behavior in aggregate is the product of myriad interactions between all different components and attractors of the system. Attempting to isolate one attractor and understand its effect on aggregate behavior is in some sense limited in capacity by the notion that system behavior is really a product of the interactions between attractors.

This is obviously not to suggest there is nothing worthwhile in attempting to isolate single variables; only that in dynamic systems, doing so cannot capture some of the important aggregate dynamics of system behavior. JMO.

Posted by: Daniel Goldberg | Jan 28, 2007 2:17:03 PM

Contracting out of paternity

Courtesy of ContractsProf Blog:

A man whose wife has, with his consent, been artificially inseminated with an anonymous donor’s sperm cannot escape parental liability by contract, according to a new ruling from a New York state trial court.

In the case, the husband — who had previously undergone a vasectomy — reluctantly agreed to his wife’s desire to have another child by artificial insemination. Later, when the couple split before the child was born, they agreed that the husband would not be considered the father of the child. After the child was born, they again signed an agreement stating that the husband would not be liable.

But that agreement violates public policy, said Justice Eugene Peckham. New York law provides that a child conceived by artificial insemination with the husband’s consent “shall be deemed the legitimate, natural child of the husband.” The parties apparently cannot get around that obligation by contract. Justice Peckham also apparently ruled that the husband would be estopped from denying paternity in any case, since the child had relied on his prior consent by being conceived and born.

For our prior discussions on related matters, see here, here, here, here, here, and here.

Posted by Ethan Leib on January 28, 2007 at 12:35 PM

“It’s more like electing a pope.”

That’s how a former Harvard Law Review editor describes the process of electing the Review’s editor-in-chief. In an article in today’s N.Y. Times, Jodi Kantor looks at the law school career of Barack Obama, who was elected to be the Review’s EIC. The article uses Obama’s law school experience as a frame for looking at Obama’s current political persona, which is very consensus-driven but, according to some, lacking in potentially controversial specifics.

The article is also interesting as a behind-the-scenes look at the workings of HLR in the early 1990s. Obama’s efforts to allay tensions on campus are frequently invoked. One former editor is quoted: “I have worked in the Supreme Court and the White House and I never saw politics as bitter as at Harvard Law Review in the early ’90s.” Obama’s ability to listen and make everyone believe he agreed with them was critical to his successful navigation of this contentious terrain. The process of electing the EIC is discussed in detail, with its all-day session considering 19 candidates (even Pound Hall is mentioned). A parody of Obama from the now defunct Harvard Law Revue is excerpted. The parody describes Obama as “the son of a Volvo factory worker and part-time ice fisherman” and “a backup singer for Abba” — after going to Chicago, “[t]here I discovered I was black, and I have remained so ever since.” (Questions about Obama’s “blackness” were raised recently by Debra Dickerson, a member of HLS ’95, in an article in Salon.) The Revue’s willingness to flout norms of political correctness and civility would result in its demise a short time later.

The article highlights an interesting issue — the extent to which law school and law review activities are part of one’s “public” persona. In some senses, Obama first became a public figure when he was elected as the Review’s first African-American EIC. But the business of a law review is generally some mixture of academic publication and collegiate social club. I have a sense, at least, that some of the Review’s heady mix of politics should not be subject to national exposure and dissection. After all, it’s a group of 80 or so law students — folks who are still figuring out how to approach their professional lives. No doubt, it’s interesting stuff. But I fear that “Above the Law” profiles of law review banquets may not be far behind.

Posted by Matt Bodie on January 28, 2007 at 11:23 AM

Comments

I thought this article constituted an unfair criticism of Obama.

Posted by: nitin | Jan 28, 2007 4:32:02 PM

The story’s author, Kantor, briefly attended Harvard Law in the late ’90s (though she was not on the Law Review).

Posted by: AEDPA | Jan 28, 2007 3:19:39 PM

Remembering Judge Richard Arnold

I was blessed, after law school, with the opportunity to work for a brilliant and decent man, Judge Richard S. Arnold of the United States Court of Appeals. Judge Arnold died a little over two years ago, on September 23, 2004. (Here is a blog post I did, right after learning about his death.)

The University of Arkansas-Little Rock’s Bowen School of Law now hosts an annual Arnold Lecture, honoring Judge Arnold and his brother, Judge Morris S. Arnold. Last night, Justice Thomas — who came to know Judge Arnold well, in connection with his assignment to the United States Court of Appeals for the Eighth Circuit — gave that Arnold lecture. Here is a report.

Here’s a bit from a post I did, the day after Judge Arnold’s death:

The Judge was humane, wise, and devout. . . . There are few like him. In terms of the law, he was an old-school liberal who admired both Justice Black and Justice Brennan, and a textualist with originalist leanings who loved and respected Justice Scalia; he was a “strict separationist” who really did believe that such a legal regime was essential to preserving religious freedom; he was passionately committed to fairness and to the dignity and rights of litigants and defendants; he knew that the law should be just, yet knew also that judges cannot right every wrong. His writing was at the same time elegant and simple, clear and memorable. . . .

Judge Arnold was a great judge, and a deeply good man. Thanks to the Bowen School of Law, and to Justice Thomas, for honoring him.

Posted by Rick Garnett on January 27, 2007 at 10:53 AM

Comments

On the Harvard Law Review, Richard was generally acknowledged as the all-around perfect legal scholar. He was persuasive but not overbearing, logical but willing to listen to alternate logics, liberal but not too far from centrist. His enduring legacy as a judge, which should be mentioned in any obituary about him, was his decision that unreported judicial opinions violate the Constitution. Especially in these days of ultra-cheap internet archives, there is no warrant for sealing or otherwise hiding judicial opinions, even if the plaintiff, defendant, and judge all want to hide them. As Richard said, these judicial reports are public documents, not private memoranda. They contribute to the increasing refinement over time of the common law. The common law is a public resource, period.

Posted by: Anthony D’Amato | Jan 28, 2007 4:45:51 PM

Teaching the “Mormon” Cases

A slightly more academic post than usual: I’m teaching Law & Religion before a wonderful group of students this semester at Notre Dame, and have just gotten through teaching the so-called “Mormon” cases — Reynolds, Davis v. Beason, etc. Reynolds, especially, is useful as an introduction to the belief-conduct distinction, the question of constitutionally compelled exemptions under the Free Exercise Clause, and so on. But these cases also raise a host of fascinating bigger-picture questions about the relationship between religion and politics, or between the “religious” and “secular” realms, both from the secular perspective and from the religious perspective.

That is especially so if one takes the time to read the 1890 Revelation from Wilford Woodruff, then-President of the Church of Jesus Christ of Latter-Day Saints, which led to the end of polygamy as a distinctive practice of the Saints. (This is a decidedly generalized description, and none of the terms used therein are meant to be unduly conclusory or suggestive.) In a recent paper, I wrote that one ought not casually describe the abandonment of the practice as a simple secular response to state pressure, since the Church itself describes the doctrinal shift as a product of religious revelation. (To be clear, this was said by way of defense of the Church, not as an argument that since the change was religious, we shouldn’t care that the state helped bring the Church to this pass.) That is true, but also a little too simple. What is striking about the Revelation is the extent to which the Revelation is an effort to grapple religiously with a set of secular facts. Woodruff describes the Lord as having asked the Saints the following question:

Which is the wisest course for the Latter-day Saints to pursue—to continue to attempt to practice plural marriage, with the laws of the nation against it and the opposition of sixty millions of people, and at the cost of the confiscation and loss of all the Temples, and the stopping of all the ordinances therein, . . . and the imprisonment of the First Presidency and Twelve and the heads of families in the Church, and the confiscation of personal property of the people (all of which of themselves would stop the practice); or, after doing and suffering what we have through our adherence to this principle to cease the practice and submit to the law, and through doing so leave the Prophets, Apostles and fathers at home, so that they can instruct the people and attend to the duties of the Church, and also leave the Temples in the hands of the Saints . . . ?

It seems to me that professors who teach law and religion ought to include the Revelation in their reading materials.

Of course Reynolds is useful for doctrinal reasons, but adding this material opens up a far broader set of questions, many of which have broader resonance both for the question of Free Exercise accommodation and for the relationship between religion and the state in general. Surely our thoughts on those issues will be influenced by our sense of what it means for the state to wrong religion, whether we should speak in terms of a state altering religious practices, and so on; and from the other side, our sense of what religions should or should not seek from the state may depend on our sense of how doctrine is formed, what it means to have a “religious” as opposed to a “secular” response to events in the world, and more. It seems to me that many of the religiously oriented legal writings on the relationship between religion and the state depend on two assumptions about religion that may be widely shared among members of many “traditional” faiths, but not by others: 1) that most or all of God’s communications to man have already occurred, and therefore that religion’s response to the conditions of the secular world is in some sense fixed by what has gone before; and 2) that purity of faith depends on believers making a strong distinction between religious and temporal authority and following only or primarily the former, even to the point of martyrdom. One such set of arguments is put usefully in a paper by Mark Tushnet called In Praise of Martyrdom; but a different perspective is available in a short and remarkable paper by Frederick Mark Gedicks titled “The Integrity of Survival: A Mormon Response to Stanley Hauerwas,” 42 DePaul L. Rev. 167 (1992). I think Gedicks’s wonderful paper complicates the picture painted by Tushnet substantially.

Of course this post has only raised questions, and that but generally, and hasn’t even attempted to answer them. I’m not sure that any definitive answers are available, or if they are, that they wouldn’t have to proceed on a faith-by-faith basis. And to be sure, many more (and better informed) things could be, and have been, said about the LDS experience in American legal history (among them, this fine paper by blogger/lawprof Nathan Oman). My point is not to settle these questions, but to encourage law & religion profs (and students!) to include the 1890 Revelation in their must-read material, and to treat Reynolds and its sequelae as raising a host of productive broader questions for the relationship between law and religion, and not just as a minor and historically quaint signpost on the road to the dreaded Employment Division v. Smith.

Posted by Paul Horwitz on January 26, 2007 at 02:36 PM

» Law, Revelation, and the Power of Interpretation from Concurring Opinions I realize that this is antediluvian in blog time, but last Friday Paul Horwitz had a very interesting post at Prawfs about teaching the Mormon Cases in his Law & Religion class. The Mormon Cases, of course, are the series… [Read More]

Tracked on Jan 29, 2007 2:29:27 PM

» The Mormons and Constitutional Federalism: from The Volokh Conspiracy Over at Prawfsblawg, Paul Horwitz has an interesting post on the usefulness of the 19th century Mormon cases for teaching law and religion. … [Read More]

Tracked on Jan 31, 2007 12:03:51 AM

» Who Cares About Federalism? from The Debate Link A little while back, there was a bit of a multi-blog debate on the issue of federalism and rights. The starting point was that many liberals today are skittish of federalism, mainly associating it with Jim Crow resistence to Civil Rights reforms. How… [Read More]

Tracked on Jan 31, 2007 5:45:07 PM

Comments

After reading this post, I deeply regret not taking any of your classes at Southwestern.

Posted by: J. Freeman | Oct 15, 2007 11:57:03 PM

I wonder if Professor D’Amato could provide a legal definition of “absurd?”

Posted by: Seth R. | Feb 4, 2007 8:44:34 PM

I somehow mistyped the link code for my prior Co-Op post. Let me try again.

Co-Op Post

Posted by: Kaimi | Feb 1, 2007 1:43:54 PM

Paul,

Thanks for a very interesting post. I agree that added background is very helpful.

As a Mormon, I find the Mormon cases interesting in part because they’re the culmination of literally 40 years of increasingly intense federal prosecution. That history is set out in various sources including Gedicks, and in most detail in the book The Mormon Question, by legal historian Sarah Gordon (Penn Law).

A nutshell version is that traditional bigamy prosecutions just didn’t work in Utah. This was for a variety of reasons, among them jury nullification, lack of evidence, lack of cooperating witnesses, and lack of funding for prosecution.

This led to sporadic bursts of congressional outrage. As a result, the federal government eventually passed a series of increasingly draconian statutes. Evidentiary standards were relaxed. Mormons were banned by federal law from voting, serving on juries, or holding any public office, and these restrictions were applied ex post facto. The federal government, by statute, also dissolved the church as an entity and seized all church property.

(And, as an aside for Professor D’Amato — the same series of federal statutes also explicitly removed from Utah women the right to vote, which they had previously enjoyed. The federal attitude towards Mormon women was ambivalent at best. They are sometimes painted as helpless victims of Mormon patriarchy, but are also often described as co-conspirators in polygamy, or as sexually licentious Jezebels in need of government supervision.)

Under the same statutes, the federal government made unmarried cohabitation a felony crime in many circumstances. This was intended to allow prosecution of polygamous families where no official marriage could be proved. Indeed, it was the cohabitation prosecutions that ultimately ended the practice of plural marriage; after passage of the Edmunds Act, there were 31 bigamy convictions under the Act, and over a thousand cohabitation convictions. (These laws are still on the books, having been incorporated into the state criminal code. This means that, as noted in my earlier Co-Op post, a person can be jailed for bigamy under the Utah statute for having one spouse, or even no spouse at all.)

And during portions of this time, federal troops were sent to Utah to put down a perceived Mormon rebellion.

It’s really quite remarkable the lengths that the government ultimately went to in this process. And with the historical background, the cases seem much more interesting to me — not as stand-alone decisions, but rather as pieces in the larger discussions about state power over unpopular minority groups.

Posted by: Kaimi | Feb 1, 2007 1:41:56 PM

Very interesting post. I have written up something in response over at Co-Op, but for whatever reason I can’t get the TrackBack to work. Here is a manual link:

“Law, Revelation, and the Power of Interpretation”

Posted by: Nate Oman | Jan 29, 2007 2:39:05 PM

I tend to think that Smith’s at least a reasonable interpretation of the free exercise clause, and probably the best reading — but even if it were a pure matter of policy, I’d still suggest such an approach makes more sense. On a level of policy, what troubles me is what happens when you have someone making claims that are clearly unsustainable — the Florida driver’s license cases a couple of years ago; or for that matter, the mormon cases, which aren’t purely of historical interest, since there are other religions that want to flout bigamy laws, some of whom are assertive about their desires to live by their own rules. If you don’t accept the principle that “the [Free Exercise] Clause does not relieve an individual of the obligation to comply with a law that incidentally forbids (or requires) the performance of an act that his religious belief requires (or forbids) if the law is not specifically directed to religious practice and is otherwise constitutional as applied to those who engage in the specified act for nonreligious reasons,” what do you do if confronted with a religion that basically treats women like property, whose adherents demand free exercise? In short, what do you do when the conduct at issue is more invidious than smoking peyote?

Posted by: Simon | Jan 29, 2007 1:09:03 PM

Fair question. Both, I think. Certainly the first, in other ways certainly the second, and I might add that my sense of my own personal constitutional methodology is that I am not precluded from having (1) influence (2), although I do not believe (1) should necessarily be determinative of (2).

Posted by: Paul Horwitz | Jan 29, 2007 11:19:15 AM

“Although I share the view of many in the field that Smith was a bad decision”

In the sense that you think it’s bad (or at least, less than optimal) as a policy question that generally-applicable laws will override religious commitments to the contrary, or in the sense that you think it misconstrues the First Amendment?

Posted by: Simon | Jan 29, 2007 11:07:06 AM

To Prof. D’Amato: I think we might well disagree on the proposition in your second paragraph, but that’s fine; the purpose of the post was to suggest to Law & Religion teachers that the Revelation opens up a host of productive questions, not to supply my own answer. I think even if we agreed that the state ultimately must preserve core values, though, and even if we agreed on what they were, we might still disagree on how best to do so, and not least on what preconditions must exist for a liberal society to decide what those values are. For myself, one of the reasons (not the only one) I believe in religious freedom and, often, religious autonomy, even where it might lead to conduct I don’t share and might not like, is that I believe the liberal state ought to preserve spaces for illiberal groups because they bring, or may bring, other sources of knowledge and inspiration to the table as we deliberate together in forming the very “core values” of which you write. As for your first paragraph, it seems to me that even if Woodruff was “only” saying that giving up polygamy was the lesser of two evils, that still raises interesting questions for those who think about the relationship between religion and the state, or between the secular and religious realms. For the temptation, one I believe folks often fall prey to in thinking about these cases, is to conclude that the Church made a quite secular cost-benefit analysis; conversely, one might make the mistake of seeing the decision as purely a religious decision, but with the assumption that the decision was isolated from events in the world. What I find fascinating about the Revelation is that it partakes of both, in interesting ways. And that, I think, should be provocative for scholars in the field.

Simon: I was being a little cute about Smith. Although I share the view of many in the field that Smith was a bad decision, and tend to be a fairly strong accommodationist despite the “rationality” of Smith — and I do not concede that the legal system’s primary desideratum is “rationality,” in the sense of constructing a logically consistent but hermetic legal system, as much as lawyers seem to yearn for such a world — I was not really urging folks to draw the same conclusion. Just as con law classes are often structured around received narratives, right or wrong — the triumph of the New Deal Court, the radicalness of Lopez, the rise and fall of Lochnerism, and so on — so I think Smith often takes a similar position in the received narrative of the Religion Clauses. That doesn’t mean I share the received view entirely, and it doesn’t mean that those who do aren’t capable of giving a fair hearing to very contrary viewpoints, and discussing the many things that are logically compelling about Smith. But the call of narrative is strong nevertheless. My use of the word “dreaded” was really just a light-hearted tip of the hat to the possibility that law and religion profs might treat Reynolds as a brief stop along the way to constructing the historical narrative ending in Smith.

Rick and Edward: Thanks. I have already cited your UCLA piece to the class, Rick. It struck me, too, as relevant to this discussion.

Posted by: Paul Horwitz | Jan 29, 2007 10:21:32 AM

Why would Employment Division v. Smith – which strikes me as not only correct in terms of the First Amendment, but also eminently rational as a matter of policy – be “dreaded”?

Posted by: Simon | Jan 29, 2007 10:02:10 AM

Anthony D’Amato, in answer to your bizarre question in the last paragraph of your comment: No, that is not the real lesson.

Posted by: Chairm | Jan 29, 2007 12:10:22 AM

It seems to me that Woodruff is only saying that giving up polygamy is the lesser of two evils for the Mormons. What else do you think he’s saying? Why isn’t his additional language mere surplusage?

And isn’t the real lesson here not the effect of the state upon religious practices, but the need for the state to preserve core values (such as the equality of women) against the corrosive and primitively misogynistic memes of absurd belief-systems?

Posted by: Anthony D’Amato | Jan 28, 2007 4:59:40 PM

Great post, Paul. I think the Mormon cases complicate, in interesting ways, Justice Brennan’s remarks, in the Blue Hull case, about the state’s non-interest in the “development of doctrine.”

Posted by: Rick Garnett | Jan 27, 2007 10:25:22 AM

I was already planning on adding the Mormon cases to my First Amendment Religion readings later this semester. Thanks for the pointer to the Official Declaration and the other links. I will definitely consider these.

Edward Still, Birmingham AL

Posted by: Edward Still | Jan 26, 2007 10:05:25 PM

Out with the Old

Wired News reports that Western Union will discontinue its telegram service as of January 27, 2007. Over 160 years ago, Samuel Morse sent the first telegram from Washington D.C. to my beloved city, Baltimore. Although individuals had largely traded in the telegram for the telephone by the 1950s, Western Union stood by its messaging service. But the increasing use of VoIP telephony, cellular phones, and email no doubt finally convinced Western Union that it was time to lay the telegram to rest. Morse's first message, "What hath God wrought," might well be asked of the emerging information technologies that have replaced the telegram.

Posted by Danielle Citron on January 26, 2007 at 02:07 PM

Comments

Actually, that was January 27, 2006. So no last minute telegrams, unfortunately.

Posted by: Brian | Jan 26, 2007 6:39:00 PM