Is there any point to “textiness” in statutory interpretation? I use the term to refer to performative textualism — that is, judges’ emphasizing hair-splittingly technical points of English usage or grammar as a display of their self-restraint about judicial policy-making. Like Stephen Colbert’s “truthiness,” textiness conveys a mood rather than a method: The point of the exhibition is not to obtain some objectively ascertainable good (factual truth, semantic meaning) but rather to demonstrate judicial seriousness about semantics, even when the semantics do not actually do all that much to resolve a dispute.
Does textiness nevertheless serve any useful function? Professor Tara Leigh Grove has argued that, by “emphasi[zing] semantic context, rather than social or policy context,” such “formalistic textualism” could “help protect the legitimacy of the judiciary itself.” According to Professor Grove, formalistic textualism allows “even the most apparently ‘reliable’ progressive or conservative member of the Court” to vote like “a ‘swing Justice’” in statutory cases, because “the method” of textual formalism interacts with “the mix of federal statutes…and the mix of baseline requirements and exceptions within each statute” in ideologically unpredictable ways.
After the jump, I assess the promise and problems with Professor Grove’s suggestion that textiness could build cross-partisan judicial coalitions. One problem: Textiness does not constrain case outcomes very much and so gives little assurance that “liberal” and “conservative” outcomes will balance out over the run of cases. The promise: Assuming one or two justices who want to balance case outcomes for other reasons, textiness might provide political cover for swing votes that ultimately produce balanced case outcomes. A further problem, however, is that arbitrary swinging by one or two justices might not improve SCOTUS’s legitimacy very much.
1. Why doesn’t textiness yield plain meaning?
Before evaluating whether textiness has instrumental value, I should spend a few more words explaining more precisely what I mean by the term and why I believe that textiness yields no “plain meaning” capable of genuinely resolving disputes.
“Textiness” occurs whenever judges split ever-finer semantic hairs rather than consult statutory purpose as a way to demonstrate their commitment to judicial restraint, even when those semantics have low case-resolving power. The implicit assumption behind textiness is that judicial examination of a statute’s apparent practical purpose is like a gateway drug that will lead judges to impose their own judicially devised policies on statutes. Textiness calls attention to the judge’s abstinence from this dangerous drug with an ostentatious forswearing of what David Pozen and Adam Samaha call the “anti-modalities” of interpretation — that is, forbidden techniques of interpretation emphasizing policy, morality, or other judicial trespassing into the domain of the political branches.
The telltale sign of textiness is the judge’s obsessive focus on semantics even when such pondering of grammar or ordinary usage does little to resolve the dispute before the court. A little apophasis sanctimoniously disclaiming any consideration of statutory purpose while simultaneously hinting at the practical sense of the semantic distinctions that allegedly preclude such consideration reveals textiness in all of its glory. Recent examples of textiness include Niz-Chavez v. Garland, Van Buren v. United States, and Facebook v. Duguid, but the practice extends back at least a decade to include decisions like Lockhart v. United States, Ali v. Bureau of Prisons, and United States v. Hayes. In none of these cases could pure semantics, divorced from consideration of statutory purpose, satisfactorily resolve the dispute, but the majority opinions nevertheless ponder grammar and usage in an almost comically intense display of linguistic erudition as if to say, “No policy-making here! We’re just trying to divine the grammar of the Last Antecedent Rule.”
Take, for instance, Niz-Chavez, a case allegedly turning on the “plain” meaning of the phrase “a notice to appear.” Removable non-citizens who receive such “a notice” may not count any post-notice presence in the country towards the 10-year statutory period necessary for a suspension of removal. But what if the non-citizen receives two different notices, neither of which alone, but both of which in combination, contain all of the statutorily required information? Has the non-citizen received “a notice” in conformity with the statute? Or merely some notices in violation of the “plain” statutory command?
Justice Gorsuch, writing for six justices, expended five pages of a slip opinion exploring the finer points of how indefinite articles are used with “countable objects” having nothing to do with legal notification (“a car,” “a bank”) as opposed to “noncountable abstractions” (“cowardice or fun”), concluding that “notice” can function in either capacity and declaring without apparent irony that this fact plainly suggests that Congress used “a notice” to mean “a discrete, countable thing…a single statutorily compliant document” rather than multiple documents. Chastising the dissent for its focus on administrative convenience, Justice Gorsuch notes that the majority’s interpretation of “a notice” avoids the danger that multiple notices strung out over months or years might be confusing to a “person who is not from this country—someone who may be unfamiliar with English and the habits of American bureaucracies.” The opinion, however, then renounces the tempting blandishments of inquiry into practical statutory purpose, insisting instead that “no amount of policy-talk can overcome a plain statutory command.” “Our only job today is to give the law’s terms their ordinary meaning and, in that small way, ensure the federal government does not exceed its statutory license,” the majority opinion modestly concludes.
The problem with this display of virtuous textiness is that the context-independent meaning of indefinite articles is not very helpful for deciding what the phrase means when used to instruct immigration officials on the statutory notice required for non-citizen residents. As Ryan Doerfler has shown (to my satisfaction, at least), what we care about in statutory interpretation (as in ordinary linguistics) is the speaker’s broad pragmatic meaning. It is a pointless exercise, therefore, to plug some phrase or word pattern into pragmatic contexts utterly unrelated to the statutory context in order to prove that the statutory text is too “plain” to allow inquiry into statutory purpose. We necessarily assess “plainness” in light of likely statutory purpose. Displays of textiness, therefore, make a virtue out of a linguistic mistake. As explained (very convincingly IMHO) by Baude & Doerfler, it makes little sense to give lexical priority to semantic meaning over non-semantic statutory purpose if the latter is useful at all for determining statutory meaning. Judicial asides about the “incidental” policy wisdom of the allegedly “plain” semantics, therefore, are not just obiter dicta but rather the stuff that is actually doing the work in the opinion: As Richard Re has noted, courts do not really apply such a rule of lexical priority for semantics, whatever they might say to the contrary.
I emphasize that Niz-Chavez’s reading of the statute is perfectly plausible — but its plausibility is guaranteed not by its ruminations about usage of indefinite articles with words like “bank” or “fun” but rather by the “policy-talk” about fairness that the opinion sternly insists is legally irrelevant. That “policy-talk” supplies the statutory context that makes sense of the excruciating disquisition on noncountable nouns, a disquisition that otherwise makes the opinion ridiculous. (For a similar apophasis in which the policy sense of the opinion’s interpretation is discounted as merely “extra icing on the cake already frosted,” see Van Buren v. United States.)
2. Can textiness deliver cross-partisan coalitions through a cross-partisan method?
As annoying as textiness in judicial opinions is to read, it would nevertheless be worth the vexation if it could yield the cross-partisan coalitions promised by Professor Grove. Professor Grove is surely correct that divisions on collegial benches along partisan lines undermine judicial legitimacy. Reducing polarization with coalitions that cut across those divisions would be well worth putting up with judges’ learned disquisitions on linguistics that do little beyond signal judicial virtue.
But can textiness deliver what Professor Grove promises? Maybe, but not for the reasons suggested by Professor Grove.
Professor Grove treats “textual formalism,” defined as “emphasi[zing] semantic context, rather than social or policy context,” as a “method” that yields determinative yet non-ideological results. I think this view is mistaken. The mistake is both theoretical and deeply practical. As a matter of theory, Ryan Doerfler bluntly states what I take to be the correct position: “whether some instance of context-sensitivity falls on the semantic or the pragmatic side of the divide is, for legal purposes, basically irrelevant.” As a matter of practice, semantic context cannot really resolve many disputes without a big dollop of “social or policy context.” Professor Anita Krishnakumar has documented how canons with impeccable semantic pedigrees like ejusdem generis and noscitur a sociis depend covertly on assumptions about statutory purpose. It is simply impossible to know whether a series of nouns belong to the same kind or class without some sense of the purpose for which those nouns are being assembled. Likewise, textualists’ invocation of “whole act” context, ostensibly a purpose-neutral effort to examine the “structure” of a statute, tends in practice to depend on the judge’s speculation about the overall purpose that a statute is designed to serve. Textiness — e.g., avoidance of purpose-talk in favor of Latin-titled canons — merely conceals this dependence of the ostensibly semantic on policy context in what Professor Krishnakumar calls “backdoor purposivism.”
Stripping out policy context, therefore, simply makes semantic canons and “whole act” speculations more arbitrary, not more predictable. The “structural” argument deployed by the majority opinion in Lockhart v. United States provides my favorite example of such arbitrariness. Lockhart is chiefly famous for Justices Sotomayor’s and Kagan’s debate over whether the “Last Antecedent Rule” or “Series Modifier Rule” should be applied to the phrase “aggravated sexual abuse, sexual abuse, or abusive sexual conduct involving a minor or ward.” Sotomayor (for the majority) and Kagan (in dissent) offer lots of examples from ordinary usage purporting to demonstrate some general principle governing the extension of modifiers across series of “parallel” nouns. We learn that “an actor, director, or producer involved with the new Star Wars movie” probably all must be involved in the Star Wars franchise but that “a defensive catcher, a quick-footed shortstop, or a pitcher from last year’s World Champion Kansas City Royals” need not all come from that final baseball team.
None of this display of verbal dexterity, however, sheds much light on that phrase in the sex offense statute at issue, so Justice Sotomayor turns to what she calls a matter of statutory “structure”: The three nouns in the disputed clause defining predicate state-law offenses are similar to three nouns in three separate sections of the statute defining predicate federal offenses. Because “[t]his similarity appears to be more than a coincidence,” Sotomayor insists that “we cannot ignore the parallel.” This non-coincidence somehow entails that, because the federal-law offenses are not restricted to crimes against children, the state-law offenses should also not be so restricted.
Why, beyond cabalistic mysticism, should this symmetry indicate anything whatsoever about the coverage of a federal statute? “[T]he majority has no theory for why that should be so,” Justice Kagan complains. Indeed. But the majority certainly hints at such a theory in asking precisely the same question about Justice Kagan’s reliance on legislative history: “the terse descriptions of the provision in the Senate Report and DOJ letter do nothing to explain why Congress would have wanted to apply the mandatory minimum to individuals convicted in federal court of sexual abuse or aggravated sexual abuse involving an adult, but not to individuals convicted in state court of the same.” In this one line, Justice Sotomayor tips her hand that the statutory purpose underlying this otherwise mysterious textual symmetry is to ensure equal punishment for equal culpability. Without that purpose-based account of why symmetry matters, Justice Sotomayor’s discussion of textual symmetry is simply baffling — a sort of weird claim that Congress pointlessly buried secret messages in statutory text in the manner of an anagram.
Textual formalism, in short, often cannot deliver consistent results without some overt or covert reference to statutory purpose. But this means that formalism stripped of purpose cannot anchor a cross-partisan coalition, because conservatives and liberals can reach whatever result they please while deploying “texty” rhetoric. Bostock is a good example of this indeterminacy. As I have noted elsewhere, Justice Kavanaugh’s dissent is no less “textual” than Justice Gorsuch’s majority opinion. The difference between Gorsuch’s literalism and Kavanaugh’s ordinary language is not “textual formalism” but rather the former’s winning the votes of three liberals plus Roberts. One suspects that those four allies were seduced less by the persuasive power of Gorsuch’s texty exegesis than by the practical common sense of not trying to distinguish between discrimination based on sex stereotypes (forbidden by Title VII) and sexual orientation (somehow permitted as an acceptable stereotype by Kavanaugh’s wooden “ordinary language” reading). Likewise, again as I have elsewhere noted, textual formalism will not resolve Gun Owners of America v. Garland, the “bump stock” case pending before SCOTUS.
Cross-partisan coalitions can be anchored by an interpretive method that promises rough justice to each side of the partisan divide. But a “method” that is merely rhetoric anchors nothing. A “liberal” opinion today like Bostock that is laced with texty rhetoric offers no credible assurance to conservatives that textiness will later deliver “conservative” outcomes tomorrow, because any result can be reached while sprinkling an opinion with semantic seasoning.
3. Can textiness deliver cross-partisan coalitions by encouraging swing votes?
Nevertheless, there is an outside chance that textiness might deliver cross-partisan coalitions just by encouraging swing voting against ideological type-casting. Semantic context is not really a rigorously closed set of rules for interpreting text. The triggering conditions for so-called “intrinsic aids,” for instance, are rarely fully specified in textualist precedents, and the choice between literal, technical, or ordinary usage is likewise not specified by statutory text itself.
Justices of a textualist bent, therefore, will have some freedom either to admit that there is an ambiguity requiring consideration of extra-textual sources for its resolution or, alternatively, just pretend that there’s nothing but plain text foreclosing such consideration. Such justices may be tempted to establish the credibility of their “method” by opting for the latter — what Professor Grove calls “textual formalism” — whenever the “textualist” choice enables them to vote against the outcome most preferred by the political party of the President that appointed them. What better way, after all, to signal the credibility of textualism than to vote against one’s expected ideological preferences while claiming that the plain text ties one’s hands? If one is a Chief Justice anxious to vindicate the non-partisan legalistic professionalism of one’s courts, then one has an additional incentive to join such an opinion.
It might be that Bostock and Niz-Chavez are both best explained as such an effort to burnish the reputation of textualism and/or SCOTUS itself as a reliable, apolitical method. Textiness in the opinion might not dictate the result, because reasonable interpreters might (for instance) disagree about whether literal or ordinary usage should govern. But such results produce cross-partisan votes that nevertheless protect the method’s or Court’s legitimacy against claims of partisan bias.
Do such cross-partisan voting patterns reduce polarization by depicting the SCOTUS as less partisan? I do not know. A cynic might note that, given the inherent arbitrariness of either ignoring or recognizing textualism’s escape hatches, the self-consciously “textualist” judge could fashion themselves as the swing voter in statutory interpretation cases — a heady role that Justice Kennedy used to play in constitutional cases by emphasizing a kind of mushy balancing methodology. Such opinions might be expected to generate dissents from the Left and Right wings of the Court impatient with the pretense that texty rhetoric really resolves the dispute. For sophisticated observers, the choice to recognize or ignore semantic ambiguity tends to seem arbitrary or, at least, under-specified, so texty poses can seem, at best, naive and, at worst, sanctimonious.
Does a succession of such opinions with inevitable dissents really build up SCOTUS’s legitimacy? Search me. The answer probably varies with the sophistication of the relevant group making the judgment about legitimacy. I am more confident, however, that textiness in judicial opinions will have staying power. The temptation to strike that Olympian pose of pure legalism is just too great for some judges to resist.
Posted by Rick Hills on March 7, 2022 at 08:37 PM
Comments
Asher writes: “More generally, and more specifically, really all the linguistic canons that you, not without good reason, deride are really about Gricean pragmatics.” He concludes that, somehow, the taxonomy of canons (as Gricean, Neo-Gricean, Paleo-Gricean, whatever) poses “a bit of a problem for” my position.
Asher, I am really confused, and I suspect that we are just talking past each other. So let me start over.
I think that assigning lexical priority to any sort of evidence of what words mean is a bad idea. I do not care whether the class of evidence is classified as “Gricean,” “Pragmatic,” or whatever: The label is irrelevant to me. (For what it is worth, I think that legal language is much better explained as words addressed to specialized communities of speakers known as lawyers and that Donald Davidson is a better language philosopher for understanding this linguistic division of labor than Paul Grice — but I also think that such disputes within the philosophy of language are irrelevant to my point.)
I am interested in what statutory words mean, full stop. But dividing up evidence of their meaning into various classes and assigning some classes of evidence lexical priority over other classes strikes me as bonkers. Every sort of evidence, if it is useful at all, should come in immediately, free of any exclusionary rule. Textiness is the silly and obsessive focus on any subset of relevant sources as a way of making the interpreter look more apolitical or technically minded or some such.
One important source of information about what sentences mean is their statutory context, which includes the policy objectives that we infer Congress wanted to achieve with the law. For instance, the phrase “discrimination because of sex” connotes “sex-based classifications that are harmful insofar as they subordinate individuals based on sex-based stereotypes.” “Discrimination” practically has an implied modifier of “invidious” built into it: It does not connote any old classification. And “discrimination” in proximity to “sex” practically connotes to English speakers the struggle by women to achieve freedom from disparaging and belittling misconceptions of their ability based on their sex. As General Dynamics v. Cline, 540 U.S. 581 (2004), notes, that understanding is rooted in the “social history” of this phrase. For these reasons, most courts have upheld sex-based classifications on professional dress in the workplace (e.g., women must wear dresses, men must wear suits with pants), reasoning that, although these classifications are based on sex, they are not “discrimination because of sex.” See, e.g., Jespersen v. Harrah’s Operating Co. (Jespersen II), 444 F.3d 1104 (9th Cir. 2006).
If one reads the Title VII phrase in light of its well-known purposes, purposes that have indeed become embedded in the popular understanding of the phrase itself, then Bostock’s outcome makes a lot of sense, because distinctions based on sexual orientation are likely rooted in invidious stereotypes. None of that reasoning requires any judge to “go beyond what Congress apparently meant by what it said,” as you claim. Honestly, I do not know where you got the idea that I was suggesting that courts should ignore the statutory words and go on some frolic and detour after what the 88th Congress “might well have meant to say in light of its purposes.”
But I’ve discussed all of this in another earlier post: https://prawfsblawg.blogs.com/prawfsblawg/2020/09/bostock-cline-and-the-scotuss-repression-of-textualisms-unresolvable-contradictions.html
Posted by: Rick Hills | Mar 9, 2022 3:04:03 PM
More generally, and more specifically, really all the linguistic canons that you, not without good reason, deride are really about Gricean pragmatics. To elaborate on surplusage a bit, one of the government’s arguments in Lockhart, as I recall, was that if “involving a minor or ward” modified everything in “aggravated sexual abuse, sexual abuse, or abusive sexual conduct involving a minor or ward,” then “aggravated sexual abuse” and “sexual abuse” would do no extra work and be redundant. Now semantically, nothing in English prevents a writer from writing a list where A is a subset of B and B is a subset of C and modifying the whole list with one modifier. It’s bad writing, but people do things like that often (“Kamala Harris is the first female vice president, the first Black female vice president, the first Indian American female vice president,” etc.). Anti-surplusage, however, supposes that, at least more often than not, people don’t say things intended merely to communicate something they’ve already said, but rather intend every word they write to communicate some additional bit of meaning unexpressed by all the others. This is obviously a pretty dubious claim (a less dubious version of it might be that readers of English sentences make this dubious assumption about the intentions of writers of English sentences, and that interpreters will therefore best approximate readers’ expectations by making this dubious assumption), but the claim is entirely one about speakers’ meanings, and not in the least about sentence meaning.
Really by definition, just about every other linguistic canon that you might quarrel with is doing the same thing: pragmatically enriching the meaning of a semantically ambiguous bit of text with suppositions about how speakers and writers intend to communicate meaning. Noscitur a sociis, for example, supposes that people don’t usually talk about extremely dissimilar sorts of things at once, and that when someone says “A, B, C,” where “C” semantically may mean both something similar to “A” and “B” and something dissimilar from them, we should suppose the drafter intended the similar meaning. Justice Scalia used to give the example of “needles, pins, and nails.” Contrary to what he said at times about this example, there is nothing *semantically* wrong with reading nails to mean the nails on one’s fingers; that is even true, in fact, if you talk about hammering needles, pins and nails. Not only can nails on one’s hands be hammered, the unlikelihood or (contrary to fact) impossibility of doing so has nothing to do with what “nails” can semantically mean; semantics doesn’t preclude making fictional, impossible, or even nonsensical claims. One assumes, however, that more often than not people intend not to, at least when drafting legislation.
I think this labeling problem is a bit of a problem for you, because your real burden isn’t to explain why textualists ought to consider, or give more weight to, what Congress apparently meant by what it said — they’re already doing that — but why they ought to go beyond what Congress apparently meant by what it said and implement what it didn’t mean by what it said but what it might well have meant to say in light of its purposes had it considered some problem that didn’t exist yet, or been a half-century more enlightened, etc.
Posted by: Asher Steinberg | Mar 9, 2022 1:12:08 PM
So I agree that rigorously applying textualism often produces arbitrary results. I think I disagree with you on a matter of labeling; you are putting things in the semantic bucket that I think actual Griceans (maybe nowadays there are only neo-Griceans) would call pragmatic enrichment. For example, anti-surplusage is actually a textbook Gricean pragmatic move, by which one rules out a semantically permissible reading of some words (that some are synonyms of each other) by recourse to Grice’s maxim of quantity, which enjoins people to say no more than is necessary to be understood. On the other hand, I am not so sure that everything you call pragmatic would be recognized by a Gricean as pragmatic enrichment. For example, purposive arguments for the result in Bostock seem very far removed from circa-1965 “speaker meaning.” Perhaps less controversially, in Duguid, the fair complaint that advances in technology make a semantic parsing of the TCPA obsolete has nothing to do with speaker meaning before those unanticipated advances in technology. So I tend to see textualism as giving lexical priority to semantics *and* pragmatics, in the Gricean sense, over a pursuit of purpose over and above a plausible reconstruction of speaker meaning, though many textualists do claim, confusedly, that things like the last-antecedent nostrum are rules of English grammar.
Posted by: Asher Steinberg | Mar 8, 2022 3:44:30 PM
For a powerful recent comment supporting Prof. Hills’s post, see the SSRN paper Textual Gerrymandering by Profs. Victoria Nourse and Wm. Eskridge.
Posted by: Daniel Borinsky | Mar 8, 2022 3:35:22 PM
Asher writes that my post “more or less oddly assumes that textualists see the same degree of semantic indeterminacy in statutes that you do.” He notes that judges relying exclusively on semantic sources often converge on the same outcomes.
First, nice to hear from you again, Asher: It has been awhile.
Second, I think you misunderstand my point, a misunderstanding for which I blame my own poorly chosen words.
I do not doubt for a second that, relying exclusively on some set of sources deemed appropriately “semantic,” judges can reach consistent results. I also do not doubt that, relying only on one’s thumbs, ring fingers, and pinkies, one can play the piano. My point is that imposing such a limit on oneself serves no useful (interpretive or aesthetic) purpose: It is artificially truncating the resources at one’s disposal to honor a theory of language that makes zero linguistic or legal sense.
So-called semantic context can be rigged up to yield consistent results if one makes it normatively unattractive and arbitrary. One could (for instance) take the view that expressio unius is a rigorous rule requiring all lists to be read as exhaustive rather than illustrative. One could follow Ali v Bureau of Prisons in taking the arbitrary view of ejusdem generis that, whenever one must choose between making redundant the final sweeping clause or the more specific preceding terms, one should read the series in its broadest sense to render redundant the preceding terms. You could always opt for the “literal” rather than the “ordinary” meaning of terms. And so forth: Each such choice would “tighten up” the meta-canons governing the use of intrinsic aids, thereby eliminating any need to consult a statute’s pragmatic, policy context in much the same way that rolling dice to break semantic ties would yield definite results.
But why would one think that such “textiness” was a sensible, normatively attractive thing to do? I get why one might want to test a phrase’s or term’s or word pattern’s usage in contexts unrelated to the statutory context: Maybe that test will give one a hint about the proper reading that one can test against a reading more closely tied to the particular statutory context. Sometimes, after all, the semantics are clearer than the statute’s policy context!
What I cannot fathom is why one would think that giving lexical priority to contextual usage is a sensible thing to do. It certainly is not how any normal person reads English: They mix up the semantic and pragmatic contexts to figure out what Grice would call “speaker meaning” rather than puzzle over “sentence meaning” in isolation.
I can think of only two related answers to this question of why give semantic sources lexical priority. First, somehow such “textiness” seems more rigorously legal than just doing the common-sensical thing of asking what the statute seems designed to achieve. As my post suggests, I think that this is vain and silly.
Second, one might strain at exhaustive readings of petty semantic details to avoid looking at statutory purpose because one distrusts judges’ views of policy context so much that one would rather choose utterly arbitrary outcomes than risk politicized ones. I agree with Baude & Doerfler that, if one really believed that second point, then one should *never* look at the purpose of a statute even to break semantic ties: One should instead just flip a coin or invent arbitrary tie-breaking rules rather than trust judges to do their job in an apolitical way when the semantic sources result in a tie.
So far, neither the SCOTUS nor lower courts have been willing to take such an extreme position: They are, instead, mostly relying covertly on assessments of purpose to inform the application of incomplete semantic sources. And thank goodness for that.
Posted by: Rick Hills | Mar 8, 2022 7:43:52 AM
Assuming that there are multiple possible textual readings in Bostock or Niz-Chavez, which I think is true, textualism may not only result in swing voting for the cynical reasons you describe, but also because textualist judges sincerely but mistakenly believe there is ultimately only one correct textualist answer in those cases. Adherents to textualism often genuinely believe, e.g., that it is somehow possible to tell with some certainty whether the last antecedent rule is a better fit for a statute than the “series qualifier” rule. (In Duguid last year, the Court was actually unanimous on just this question, in a way that seriously frustrated the purpose of the statute they were interpreting, for what seem like semantic reasons.) Perhaps you think they always form these judgments by recourse to purpose, or some other non-semantic consideration, like wanting to make textualism look neutral, but this more or less oddly assumes that textualists see the same degree of semantic indeterminacy in statutes that you do; more plausibly, they (no doubt mistakenly in many cases) see much less, which is why the whole textualist enterprise makes sense to them and not so much to you.
Posted by: Asher Steinberg | Mar 8, 2022 1:47:53 AM
Interesting somehow, but, with all due respect, really baseless.

Not in vain, by the way, has the respectable author of the post concentrated upon Lockhart v. United States rather than Facebook v. Duguid. In the latter, there was no dissent. All agreed. All good. Why? Simply because there was no substantial problem beyond what it is.

The point is that there is a dispute. Many times, the dispute arises from the written text of the law. Judges don’t initiate disputes. Disputes occur and are brought to them for resolution. That means:

That those who should obey and follow the law don’t fully understand it. So the root of the problem lies beyond judges. It lies in life. Human beings. Crazy things. And above all:

The lawmaker.

The lawmaker can’t predict everything. It has a very narrow, very specific, and subjective intent while legislating. Later, things evolve into much more complicated outcomes, due to new technology, social shifting, international developments, etc. The lawmaker is always behind occurrences, and occurrences won’t wait for the lawmaker to fix or amend the law. So disputes are brought to the court, and the court needs to bridge those gaps: gaps between life and its crazy occurrences, and the law or texts that lag far behind.

How shall the court do it? The starting point must be the law. The text. The text has, legally and constitutionally, the ultimate validity. Not the intent. The text comes first. This is the starting point. It must be so. If the text has the ultimate legal validity or binding force, then judges must start there, with the text, and first exhaust every bit of meaning out of the text. Only then should they shift further to purpose and so on.

It has nothing to do with any cross-partisan coalition. For this is their job. This is their professional identity: dealing with the law. That means first dealing simply with texts. For the law is first of all the text, then the intent, and finally an integrated interpretation.

A layman, or indeed all of us, are bound first by the text per se, then by jurisprudence or whatever.

For the rest, we won’t stay young anymore….
Thanks
Posted by: El roam | Mar 7, 2022 10:29:06 PM
