While all of the comments on this situation have been entertaining, I've been 
most impressed by comments from Bill Scott, Gerard Bricogne and Kim Hendricks.

I think due process is called for in considering problem structures that may or
may not be fabricated.  Public discussion of technical or craftsmanship issues
is fine, but questions of intent and the like are best discussed in private or in more
formal settings.  We owe that to all involved.

Gerard's comments concerning publishing in journals/magazines like Nature and
Science are correct.  The pressure to publish there is not consistent with
careful, well-documented science.  For many years, we've been teaching our 
graduate students about some of the problems with short papers in those types 
of journals.  The space limitations and the need for "relevance" force omission 
of important details, so it's very hard to judge the merit of those papers. 
But don't assume that other "real" journals do much better with this.  There's
a lot of non-reproducible science in the journals.  Much of it comes from not 
recognizing or reporting important experimental or computational details, but 
some of it is probably simply false.

Kim's comments about the technical aspects of archiving data make a lot of
sense to me.  The costs of making safe and secure archives are not
insignificant.  And we need to ask whether the added value of such archives is
worth the added cost.  I'm not yet convinced it is.

The comments about Richard Reid, shoes, and air-travel are absolutely true.  We
should be very careful about requiring yet more information for submitted
manuscripts.  Publishing a paper is becoming more and more like trying to get
through a crowded air-terminal.  Every time you turn around, there's another
requirement for some additional detail about your work.  In the vast majority
of cases, those details won't matter at all.  In a few cases, a very careful
and conscientious referee might figure out something significant based on that
little detail.  But is the inconvenience for most of us worth that little benefit?

Clearly, enough information was available to Read et al. to make the case
that the original structure has problems.  What evidence is there that 
additional data, like raw data images, would have made any difference to the 
original referees and reviewers?  Refereeing is a human endeavor of great 
importance, but it is not going to be error-free.  And nothing can make it 
error-free.  You simply need to trust that people will be honest and do the 
best job possible in reviewing things, and that errors that make it through
the process and are deemed important enough will be corrected by the next layer
of reviewers.

I believe this current episode, just like those in the past, is a terrific
indicator that our science is strong and functioning well.  If other fields
aren't reporting and correcting problems like these, maybe it's because they 
simply haven't found them yet.  That statement might be a sign of my 
crystallographic arrogance, but it might also be true.

Ron Stenkamp
