
Thursday, October 20, 2011

Week 10: Science in a (Messy) Social Context


We've been discussing some of the more overtly "social" aspects of scientific investigation of late via Feyerabend. And while I take it that he offered us many interesting and important insights, there is a lingering sense that his medicine is rather strong. He is, however, working in an important tradition within the philosophy of science: the attempt to understand the sociology of science. Science is, after all, a particular human activity — something that we often do in groups — and thus potentially subject to social and psychological forces of which we may be only dimly aware.

This week, we'll examine different facets of these social factors. Prior to Feyerabend, our focus on confirmation theory was primarily individualistic. An individual scientist (or research group — a functional individual, in a sense) working on a particular problem proposes a hypothesis, makes relevant observations, does experiments, &c., that either raise or lower their confidence that the hypothesis is true. Suppose that our individual scientist’s confidence in the hypothesis is raised quite a bit: they come to (tentatively) accept the hypothesis as true. What then? Does the result become “scientific knowledge”? 

That depends, at minimum, on its being accepted by a large portion of the wider scientific community. But in order for the result even to get heard by that community, it must cross an important gateway: peer review. (Recall that this gateway has already made an appearance in this course: I insisted that your essays draw only from peer-reviewed sources.) In order for a result to be published, it must pass the scrutiny of other experts in the field, who ask questions like "Was the methodology appropriate?", "Were the assumptions reasonable?", "Were the relevant calculations performed correctly?", and so on. Inasmuch as these checks rule out obvious sources of error, it seems that passing this scrutiny ought to increase our confidence that a given paper's conclusions are correct.

On Tuesday, we'll talk about some recent reflections on bias in peer review which suggest that we shouldn't be nearly as confident in our research results as we tend to be. On Thursday, we will look at an interesting case study in scientific norms, revolving around peer review, bias, and propaganda: the debate about SDI and Nuclear Winter.

Tuesday (10/25): Collective Research Effort and its Foibles
• Ioannidis, “Why Most Published Research Findings Are False” [Journal Link]*
• Lehrer, “The Truth Wears Off” [PDF]
• Schooler, “Unpublished results hide the decline effect” [Journal Link]

Questions: (respond to two)
  1. On its face, Ioannidis's claim is quite bold. Do you think he succeeds in making his case?*
  2. There are at least two different interpretations of statements of the “Decline Effect” (e.g., “our facts were losing their truth”, “the effects appeared to be wearing off”, and so on); carefully distinguish between them.
  3. Why does regression to the mean provide a more satisfying explanation for the decline effect than the hypothesis that certain effects are simply declining? (See the simulation sketch after this list.) Do you suppose that the subject matter of Schooler’s investigation (precognition) has anything to do with the plausibility of this suggestion, or can it be made independently of the particulars of that experiment?
  4. Does the decline effect offer us a skeptical argument about science comparable to Hume’s argument about induction?
  5. Reflect on the relevance of Publication Bias for the competing theories of Popper and Feyerabend. 
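
For those curious about the mechanics behind questions 3 and 5, here is a minimal simulation sketch (mine, not drawn from the readings) of how publication bias combined with regression to the mean can manufacture an apparent decline effect. All of the numbers (a true effect of 0.2, the noise level, the publication cutoff) are assumptions chosen purely for illustration.

```python
# A minimal sketch of how publication bias plus regression to the mean
# can produce an apparent "decline effect". The parameters below are
# illustrative assumptions, not values from Ioannidis, Lehrer, or Schooler.

import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # the fixed underlying effect size
NOISE_SD = 0.5      # sampling noise in each study's estimate
THRESHOLD = 0.98    # rough stand-in for a "striking, publishable" cutoff

def run_study():
    """Return one study's estimated effect: truth plus sampling noise."""
    return TRUE_EFFECT + random.gauss(0, NOISE_SD)

published_initial = []
replications = []

for _ in range(100_000):
    first = run_study()
    if first > THRESHOLD:                 # only striking initial results get published
        published_initial.append(first)
        replications.append(run_study())  # the replication is unfiltered

print("true effect:            ", TRUE_EFFECT)
print("mean published initial: ", round(statistics.mean(published_initial), 3))
print("mean replication:       ", round(statistics.mean(replications), 3))
```

The published initial studies overstate the effect because only unusually large estimates clear the cutoff; the unfiltered replications drift back toward the true value, so the effect appears to "wear off" even though nothing about the world has changed.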

Thursday (10/27): Case Study: The “Star Wars” Defense Project & Nuclear Winter
• Oreskes & Conway, “Strategic Defense, Phony Facts, and the Creation of the George C. Marshall Institute” [PDF]

Questions: (respond to one)
  1. In what ways does it seem appropriate to think of strategic investigations as (analogous to) “scientific” investigations? Does the obvious phenomenon of bias in the former suggest anything about the potential for bias in the latter?
  2. What can we say about the testability of SDI? Was it straightforwardly “untestable” or is there a way of nuancing our understanding of testability? 
  3. How do you suppose Feyerabend might react to the whole SDI-Nuclear Winter affair?
  4. What do you make of the controversy over Sagan’s publications in Parade and Foreign Affairs prior to the peer-reviewed publication of the TTAPS paper? Did Sagan have a duty to publish or a duty not to publish?
  5. What is the Fairness Doctrine? Comment on its relevance to scientific and policy research.
  6. What do you think of Oreskes and Conway’s analysis of Seitz’s critique of the Nuclear Winter hypothesis?
