All theories or models are wrong until proven otherwise.  We all need
to stand up to support the ideas we try to put into print or admit that
we cannot satisfy their critics.  (Not "our critics"; this process should
not be personal.)  I agree with Nature's editors that questions of
"fabrication" are not matters to be decided within its pages.  All that
is necessary to cast doubt, and to suggest a retraction, is for critics
to show that the conclusions of a paper are inconsistent with previous
results and/or the authors' own data.  The mechanism by which the
paper was constructed is irrelevant to that question.

   There are procedures in place for handling charges of misconduct in
most places in the world.  I know that, here in the US, the NIH has
a process for investigating such charges where it funded the research,
and I expect that every university and other host organization has a
local procedure.  These procedures were constructed with the advice of
expensive lawyers, and the bodies that run them are the only ones that
can compel an investigator to allow access to the raw data, notebooks,
and material required to answer the question we all have: "How could
this have occurred?"

   I hope that the people who believe that misconduct has occurred in
this case have already sent their letters to those with the power to
investigate the matter.

   Outright fraud is very rare in our field because it is very difficult
to do well enough for the criminal to expect to get away with it.  I
agree with Ron and others that establishing elaborate layers of security
around the publication process to guard against such a low-probability
occurrence just makes life more annoying without really gaining much.

   Our field does have a much bigger problem with the misuse of
techniques and a "will to believe".  With its explosive growth, training
has fallen behind, and many people who have never had formal, or even
informal, instruction are solving structures.

   Much of the discussion in this thread has been about ideas to harden
the review process, but I think that attention is misplaced.  I can't
imagine that we can convince reviewers to download and reintegrate raw
images when judging the merits of a paper.  We can't even get reviewers
to look at "Table 1"!

   Ideally, problems with a model should be identified before the paper
is submitted, and indeed before the paper is written.  I have pulled a
number of structures from the PDB and often find small, but obvious,
problems.  It is fairly clear to me that no one other than the builder
has ever looked at these models in a serious fashion.  If the P.I. were
familiar with crystallography and had looked at these models, the
student would have been sent back to do some cleanup.
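
   This kind of spot check is easy to script.  As a minimal sketch (in
Python; the entry ID below is a hypothetical placeholder, and the RCSB
file-download service is just the obvious public route), one can pull a
deposited entry and print the refinement statistics recorded in its
REMARK 3 records for comparison against the paper:

   # Quick look at the refinement statistics of a deposited entry.
   # The entry ID is a hypothetical placeholder.
   import urllib.request

   entry = "1abc"
   url = "https://files.rcsb.org/download/%s.pdb" % entry.upper()

   with urllib.request.urlopen(url) as response:
       text = response.read().decode("utf-8", errors="replace")

   # REMARK 3 records carry the refinement statistics; print the
   # reported R and R-free so they can be checked against the paper.
   for line in text.splitlines():
       if line.startswith("REMARK   3") and "R VALUE" in line:
           print(line.rstrip())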

   How the community could bring this about, I don't know, but I think
we should encourage an "internal review" of each model as standing
policy in every crystallography lab.  No paper should be submitted
unless someone other than the model builder has looked over the model,
including the Fo-Fc map, and has reviewed the process of structure
solution with the model builder.  (A small scripted sanity check,
sketched below, can start that conversation.)
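
   As a minimal sketch of such a once-over, assuming the gemmi Python
library and a placeholder file name, one might tabulate a few gross
statistics for the second reader to question; none of this replaces
looking at the maps:

   # A quick numeric once-over of a model before submission.
   # "model.pdb" is a placeholder file name.
   import gemmi

   st = gemmi.read_structure("model.pdb")

   b_values = []
   water_count = 0
   for model in st:
       for chain in model:
           for residue in chain:
               if residue.is_water():
                   water_count += 1
               for atom in residue:
                   b_values.append(atom.b_iso)

   print("atoms:  %d" % len(b_values))
   print("waters: %d" % water_count)
   print("mean B: %.1f" % (sum(b_values) / len(b_values)))
   # Extreme B values often mark atoms built into noise.
   print("max B:  %.1f" % max(b_values))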

   A new lab may not have the expertise in-house for such a review,
so they would have to make arrangements with some nearby, friendly
crystallographer for assistance.  This would also transfer knowledge
into the new lab so that, in time, they could stand on their own.

   I don't know what mechanism could be used to encourage this
practice.  At the very least, a committee could write up a "best
practices" document that would emphasize the management needs that
arise in a technique where a person is staring at a graphics screen,
hoping against hope to "see" the feature that will make the cover
of Nature.  The "will to believe" is so strong that we really need
a second pair of eyes (at least) early in the game.

Dale Tronrud

Ronald E Stenkamp wrote:
While all of the comments on this situation have been entertaining, I've been 
most impressed by comments from Bill Scott, Gerard Bricogne and Kim Hendricks.

I think due process is called for in considering problem structures that may or
may not be fabricated.  Public discussion of technical or craftsmanship issues
is fine, but questions of intent, etc. are best discussed in private or in more
formal settings.  We owe that to all involved.

Gerard's comments concerning publishing in journals/magazines like Nature and
Science are correct.  The pressure to publish there is not consistent with
careful, well-documented science.  For many years, we've been teaching
our graduate students about some of the problems with short papers in
those types of journals.  The space limitations and the need for
"relevance" force omission of important details, so it's very hard to
judge the merit of those papers.  But don't assume that other "real"
journals do much better with this.  There's a lot of non-reproducible
science in the journals.  Much of it comes from not recognizing or
reporting important experimental or computational details, but some of
it is probably simply false.

Kim's comments about the technical aspects of archiving data make a lot of
sense to me.  The costs of making safe and secure archives are not
insignificant.  And we need to ask if the added value of such archives is worth
the added costs.  I'm not yet convinced of this.

The comments about Richard Reid, shoes, and air travel are absolutely
true.  We should be very careful about requiring yet more information
for submitted manuscripts.  Publishing a paper is becoming more and more
like trying to get through a crowded air terminal.  Every time you turn
around, there's another requirement for some additional detail about
your work.  In the vast majority of cases, those details won't matter at
all.  In a few cases, a very careful and conscientious referee might
figure out something significant based on that little detail.  But is
the inconvenience for most of us worth that little benefit?

Clearly, enough information was available to Read et al. for making the case
that the original structure has problems.  What evidence is there that 
additional data, like raw data images, would have made any difference to the 
original referees and reviewers?  Refereeing is a human endeavor of great 
importance, but it is not going to be error-free.  And nothing can make it 
error-free.  You simply need to trust that people will be honest and do the 
best job possible in reviewing things.  And that errors that make it through 
the process and are deemed important enough will be corrected by the next layer 
of reviewers.

I believe this current episode, just like those in the past, is a
terrific indicator that our science is strong and functioning well.  If
other fields aren't reporting and correcting problems like these, maybe
it's because they simply haven't found them yet.  That statement might
be a sign of my crystallographic arrogance, but it might also be true.

Ron Stenkamp
