“We’re not there to provide entertainment. We’re there to decide cases,” Roberts sternly declared. Or did he? — ChatGPT and the Supreme Court, two years later
Just over two years ago, following the launch of ChatGPT, SCOTUSblog decided to test how accurate the much-hyped AI really was — at least when it came to Supreme Court-related questions. The conclusion? Its performance was “uninspiring”: precise, accurate, and at times surprisingly human-like text appeared alongside errors and outright fabricated facts. Of the 50 questions posed, the AI answered only 21 correctly.
Now, as ever more advanced models continue to emerge, I’ve revisited the experiment to see whether anything has changed.
Successes secured, lessons learned
ChatGPT has not lost its knowledge. It still correctly answered that the Supreme Court originally had only six seats (Question #32) and accurately explained what a “relisted” petition is (Question #43). Many of its responses have become more nuanced, incorporating crucial details that were missing in 2023. For example, when asked about the counter-majoritarian difficulty, the AI readily identified Professor Alexander Bickel as the scholar who coined the term (Question #33). Similarly, when explaining non-justiciability (Question #31), the doctrine that some cases are beyond the courts’ power to decide, it now includes mootness and the prohibition on advisory opinions among its examples.