From SCOTUSBlog, I see a link to a new article:
In our article, “No Hints, No Forecasts, No Previews”: An Empirical Analysis of Supreme Court Nominee Candor from Harlan to Kagan, we seek to overcome this gap in our understanding of the Supreme Court confirmation process. To that end, we present the results of a content analysis of every Supreme Court confirmation hearing transcript since 1955, the year that the proceedings became a regular part of the confirmation process. For each hearing, we coded all of the exchanges between a senator and the nominee, recording things such as the type of question asked, the degree to which the answer was forthcoming, and the reasons nominees gave for not answering more fully. Using this original dataset – nearly 11,000 exchanges in total – we then tested a series of hypotheses about nominee responsiveness in the face of Senate questioning.
Our results show that the conventional wisdom about Supreme Court confirmation hearings needs to be rethought. First, we discovered that there has not been a dramatic decline in nominee responsiveness since the 1980s. Recent nominees, such as Samuel Alito and Elena Kagan, were just as forthcoming as many earlier nominees, and even more forthcoming than others. Second, the overall rate of responsiveness for all nominees, including those who came after Bork, is much better than generally assumed. Nominees generally answer between sixty and seventy percent of their questions in a fully forthcoming manner. By contrast, only about twenty percent of the questions get a qualified response, and outright refusal to answer rarely tops ten percent. Therefore, whether we are talking about hearings from the 1960s or the 1990s, the notion that nominees evade more questions than they answer is unfounded. Lastly, we find that there have been subtle but important changes in the types of questions that are being asked, the topics of those questions, and in the ways in which nominees answer them, and that these shifts have helped to fuel the perception that responsiveness has declined when in fact it has not.
I’m curious: how do they characterize an answer as “fully forthcoming” or “qualified”?
And how does this coding work?
As noted earlier, the total number of exchanges included in our analysis (n = 10,883) was much larger than the datasets used in previous published reports.
(a) Question of Fact or Question of View. We divided all questions into one of two main groups. Questions of Fact (QOF) are questions that seek basic factual information about a topic or issue. Questions of View (QOV) seek nominees’ opinions, thoughts, assessments, interpretations, or predictions.11 For example, the question, “Where did you go to law school?” would be a Question of Fact, whereas “Do you think the Constitution protects the right to privacy?” would be a Question of View. While we recognized that this distinction would be obvious in most instances, we also anticipated that it would help us differentiate in some cases between questions that were more likely to generate forthcoming responses and those that were not. For example, a question about when a case was decided (QOF) is more likely to be answered without reservation than a question about how a nominee would have voted in that case (QOV). Coding both of these simply as a question about a past case, rather than distinguishing between them, would constrain our ability to track an important potential difference in the way that nominees respond.
(b) Question of Fact topics. Questions of Fact fall into one of the following four main categories: (1) factual questions about a nominee’s legal education; (2) factual questions about a nominee’s personal biography or family; (3) factual questions about a nominee’s nonlegal employment history; and (4) factual questions about past cases, as well as factual questions about the nominee’s writings, speeches, previous testimony, and other issues that did not fit into the first three main categories.12
(c) Question of View topics. Questions of View seek a nominee’s views on one of the following topics: (1) past Supreme Court rulings or a lower court ruling; (2) hypothetical cases; (3) approach to judging and constitutional interpretation; (4) powers of Congress and the president; (5) federalism and states’ rights; (6) judicial power and administration; (7) peace, security, law and order; (8) individual rights and liberties; (9) other topics not identified above. Questions that cover more than one issue were coded with the main topic first, followed by secondary topics, if any.
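The two-level taxonomy above (question type, then topic) can be sketched as a small data model. This is my own illustration of the structure, not the authors’ actual codebook; the names and function are hypothetical.

```python
from enum import Enum

class QuestionType(Enum):
    QOF = "Question of Fact"
    QOV = "Question of View"

# Topic codes paraphrased from the categories listed above.
QOF_TOPICS = {
    1: "legal education",
    2: "personal biography or family",
    3: "nonlegal employment history",
    4: "past cases, writings, speeches, previous testimony, other",
}

QOV_TOPICS = {
    1: "past Supreme Court or lower court rulings",
    2: "hypothetical cases",
    3: "approach to judging and constitutional interpretation",
    4: "powers of Congress and the president",
    5: "federalism and states' rights",
    6: "judicial power and administration",
    7: "peace, security, law and order",
    8: "individual rights and liberties",
    9: "other",
}

def describe(qtype: QuestionType, topic: int) -> str:
    """Render a coded question as a readable label."""
    topics = QOF_TOPICS if qtype is QuestionType.QOF else QOV_TOPICS
    return f"{qtype.value}: {topics[topic]}"

print(describe(QuestionType.QOV, 8))
# -> Question of View: individual rights and liberties
```

A question touching several issues would, per the authors, carry a primary topic code plus optional secondary codes; the sketch above shows only the primary code.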
(d) Candor of nominee response. To assess the level of candor/evasiveness, coders assigned each nominee response to one of five categories:
- Fully/Very Forthcoming: Nominee answered the question that was asked without any qualification or evasion.13
- Qualified: Nominee indicated some reason for not answering the question fully, but then gave at least a partial response to the question.
- Not Forthcoming: Nominee chose not to answer the question at all.
- Interruption: Nominee was interrupted by a senator before s/he even had a chance to give a partial response.
- Non-Answer: Nominee gave a nonsubstantive response (e.g., “Senator, you ask difficult questions”) to a substantive question. Or nominee gave a factual answer to a Question of View.14 Or, the nominee answered the question with a question (e.g., “Is that what you’re asking me?”). This should not be confused with the Not Forthcoming option (number 3, above).
(e) Reason for Qualified or Not Forthcoming response. If a nominee response was coded as Qualified or Not Forthcoming, coders then identified one of six reasons or explanations:
- Nominee expressed concerns about answering a question about a case or issue that was before the Court or could be before the Court.
- Nominee said the issue should be handled by another branch of government.
- Nominee expressed general concerns about conflict of interest and maintaining judicial independence.
- Nominee claimed s/he did not have enough information, or could not remember enough about the issue, to give more than a partial response.
- Nominee claimed s/he did not have enough information, or could not remember enough about the issue, to give any response.15
- Other, reason unclear, or unspecified.
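Putting the candor categories and reason codes together, the coding-and-tallying step can be sketched as below. The exchanges here are invented toy data, not the article’s dataset, and the field names are my own.

```python
from collections import Counter
from enum import Enum

# The five candor categories described above.
class Candor(Enum):
    FULLY_FORTHCOMING = "fully/very forthcoming"
    QUALIFIED = "qualified"
    NOT_FORTHCOMING = "not forthcoming"
    INTERRUPTION = "interruption"
    NON_ANSWER = "non-answer"

# Each coded exchange carries a candor code; Qualified and Not
# Forthcoming responses also carry one of the six reason codes.
# These five exchanges are hypothetical, for illustration only.
exchanges = [
    {"candor": Candor.FULLY_FORTHCOMING, "reason": None},
    {"candor": Candor.FULLY_FORTHCOMING, "reason": None},
    {"candor": Candor.QUALIFIED, "reason": "issue could come before the Court"},
    {"candor": Candor.NOT_FORTHCOMING, "reason": "judicial independence"},
    {"candor": Candor.FULLY_FORTHCOMING, "reason": None},
]

def responsiveness_rates(coded):
    """Share of exchanges falling into each candor category."""
    counts = Counter(e["candor"] for e in coded)
    total = len(coded)
    return {c.value: counts.get(c, 0) / total for c in Candor}

rates = responsiveness_rates(exchanges)
print(rates["fully/very forthcoming"])  # 0.6 for this toy sample
```

Aggregating such rates per nominee and per hearing is, presumably, how the authors arrive at figures like the sixty-to-seventy percent fully forthcoming rate reported above.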
Wow. It seems that so many empirical studies take a humongous dataset, code things, and then draw conclusions.