Academic highlight: Measuring the circuits’ success in the Supreme Court

Anyone who follows the Supreme Court knows that its docket is driven by its interest in resolving circuit splits.  But harder questions lurk behind that observation.  Does the Court usually agree with the majority of circuits’ views on a contested question, does it side with the minority, or does it reach an entirely independent conclusion?  Do some circuits fare better than others and, if so, which ones?  Are courts that handle a disproportionate number of cases in a particular area (e.g., the Second Circuit and securities litigation) more likely to be affirmed?  A number of scholars have set out to address those questions, which are surprisingly hard to answer.

Tom Cummins and Adam Aft published an article in the Journal of Legal Metrics offering an “improved metric of appellate review.”  They analyzed the Supreme Court’s resolution of cases involving a circuit split to determine how frequently the Court agrees with the federal courts of appeals and how often it rejects their views.  As Cummins and Aft explain, circuit court success cannot be measured simply by counting how often the Court reverses the decision below, because that count would omit all the courts of appeals’ decisions addressing the same issue with which the Court agreed.  For example, when the Court reverses the Ninth Circuit, it may simultaneously affirm the result reached by four other circuit courts that previously decided that same issue, yielding a circuit court “success” rate of 80% on that question.  Using that improved metric, Cummins and Aft find that although the Supreme Court affirmed only 28% of the time in its direct review of cases involving a circuit split during the 2010 Term, it actually agreed with the legal reasoning of 64% of the courts of appeals’ decisions addressing those issues.
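
To make the arithmetic concrete, here is a minimal sketch of how such an aggregated success rate could be computed.  The code is purely illustrative: Cummins and Aft describe their metric in prose rather than code, and the function name and input format below are hypothetical choices, not anything drawn from their article.

```python
# Illustrative sketch of the Cummins-Aft "improved metric" (hypothetical
# implementation; the function name and data format are assumptions).
# Rather than asking only whether the decision under direct review was
# affirmed, count every court of appeals decision on the contested
# question and ask how many reached the result the Court adopted.

def circuit_success_rate(splits):
    """splits: list of (agreeing, total) pairs, one per resolved split.

    'agreeing' is the number of circuit decisions on the question that
    match the Supreme Court's eventual holding; 'total' is the number
    of circuit decisions that addressed the question at all.
    """
    agreeing = sum(a for a, _ in splits)
    total = sum(t for _, t in splits)
    return agreeing / total

# The example from the text: the Court reverses the Ninth Circuit but, in
# doing so, ratifies four other circuits that had decided the issue the
# way the Court ultimately did -- 4 of 5 decisions, an 80% success rate.
print(circuit_success_rate([(4, 5)]))  # 0.8
```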

A student note by Eric Hansford in the Stanford Law Review applied the same technique in an effort to measure the effect of judicial specialization on reversal rates.  Hansford compared the Supreme Court’s affirmance rate for “specialist” courts — that is, courts that handle a disproportionate share of certain legal issues, such as the Second Circuit in securities litigation and the D.C. Circuit in administrative law — with the rate for their generalist counterparts.  Hansford’s data provided “preliminary but inconclusive support that increased specialization by generalist courts improves performance.”

Stephen Wasby’s 2005 article focused on the success rate of the Ninth Circuit.  Particularly interesting was his finding that the Ninth Circuit fared considerably better when the Court reviewed intercircuit conflicts in cases taken from other circuits than when it reviewed the Ninth Circuit’s decision directly.  Between 1990 and 1999, the Court sustained the Ninth Circuit’s position in 49% of cases in which it directly reviewed another circuit’s decision on the issue, as compared to only 20% of cases taken directly from the Ninth Circuit.  (Though it should also be noted that all courts do better, on average, when their own decisions are not being directly reviewed because the Court is more likely to review cases in which it thinks the lower court erred.)

These articles are all good sources of information on the success rate of the courts of appeals in the Supreme Court in cases involving inter-circuit conflict.  But a word of caution comes from Professor Aaron-Andrew Bruhl.  In a forthcoming article, Bruhl observes that it is surprisingly difficult to obtain a full account of the circuits’ performance in the Supreme Court, in part because the Court does not make the background data public.  For example, it is not always obvious when the Court has granted review to resolve a circuit split, because the Court rarely explains its reasons for granting a petition for certiorari.  Nor is it easy to determine how many lower court decisions addressed the same issue, and thus how many were effectively ratified or repudiated by the Court’s final decision.  Furthermore, Bruhl notes that many of the recent articles analyzing the Court’s review of circuit splits rely on the Supreme Court Database maintained by Harold Spaeth and his collaborators, but that database undercounts the number of splits because it codes a case as involving a circuit split only when the Supreme Court clearly states that resolving a split is what it is doing — and, as Court watchers know, the Court is not always so forthcoming.  (As Bruhl makes clear, there is nothing inherently wrong with the Supreme Court Database’s coding methods; his point is simply that the database alone cannot be relied upon by researchers who want to identify all the cases in which the Court granted cert. to resolve a split.)

 

