Hi Paul,

I'll probably open myself up to criticism (welcomed), but I think I'd disagree with this somewhat. While crystallography from the Bragg reflections provides a nice static picture of the structure, looking at the diffuse scatter in more detail may tell us more about mechanism - e.g. whether there are any characteristic modes associated with significant motion. Higher resolution is not always better; one of my most enlightening experiences came from paying attention to collecting very complete, very low-resolution data. Similarly, after collecting 0.8 Å data from a large protein I learned a lot about data processing, but even more about how to not tell anyone, move the detector back, and then attenuate the beam :) The high-resolution data meant a lot more work and didn't provide any more useful structural knowledge than a 1.2 Å data set collected in a fraction of the time. It did, however, provide a window into how X-rays can perturb the structure - being greedy is not always good.
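As a concrete illustration of looking at the diffuse scatter directly, a minimal first-pass sketch might separate the smooth diffuse component from the sharp Bragg peaks with a median filter. This assumes a dark/flat-corrected image already held in a 2D NumPy array; the 15-pixel window and the synthetic example are arbitrary choices made purely for illustration, and a real analysis would of course handle detector geometry, masks, and calibration properly.

import numpy as np
from scipy.ndimage import median_filter

def split_bragg_diffuse(image, kernel=15):
    # The median window must be larger than a Bragg spot but smaller than
    # the length scale of the diffuse features; 15 pixels is only a guess.
    diffuse = median_filter(image, size=kernel)   # smooth background + diffuse
    bragg_residual = image - diffuse              # sharp peaks stand out here
    return diffuse, bragg_residual

# Synthetic example only - no real detector-format handling here.
rng = np.random.default_rng(0)
img = rng.poisson(50, size=(512, 512)).astype(float)  # flat "diffuse" level
img[100, 200] += 5000.0                               # one fake Bragg spot
diffuse, peaks = split_bragg_diffuse(img)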
Diffuse scattering has been neglected in the field (for good reason), but I think we now have the processing power to take advantage of it. To misquote Richard Feynman, "there is plenty of room at the bottom" - make sure you get the low-resolution information as well as the high. I do agree that we may have to rethink image storage somewhat. Looking over a paper not long ago whose analysis involved over 30,000 images reminded me of the days when the tape drives wrote data more slowly than the detectors produced it - that mad scramble to start the backup before starting collection ;) Real-time readout, continuous rotation, etc. may force us to redefine what we think of as an image.

Cheers,

Eddie

Edward Snell Ph.D.
Assistant Prof. Department of Structural Biology, SUNY Buffalo,
Hauptman-Woodward Medical Research Institute
700 Ellicott Street, Buffalo, NY 14203-1102
Phone: (716) 898 8631   Fax: (716) 898 8660   Skype: eddie.snell
Email: esn...@hwi.buffalo.edu   Telepathy: 42.2 GHz
Heisenberg was probably here!

-----Original Message-----
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf Of Paul Smith
Sent: Wednesday, January 20, 2010 3:00 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Refining against images instead of only reflections

Hi Jacob,

I see you're still in the crystallography business. While you have an interesting idea, I doubt refining structures against entire images would be of any use in obtaining higher-quality macromolecular structures. Much of what you see on the screen is a function of parameters completely unrelated or irrelevant to the structure being studied. Diffuse scattering can come from the cryo liquid surrounding the crystal as well as from the fibers of the mounting loop itself. Background scattering is related to beam collimation. Spot size and shape are a function of crystal morphology, among other things. In addition, every detector has its own peculiarities that make the intensities observed away from the diffraction spots particular to that detector. You would also have to take into account other physical factors such as ambient temperature, detector dark-current fluctuations, variations in air absorption, etc. So you could conceivably fit all of these parameters to the images on hand, but none of them gives you any actual information about your structure. As always, if you want more information about your structure, get higher-resolution data.

Nonetheless, I do think some thought could be put into exactly how data are reduced. Perhaps the impending era of real-time detector readout will help us rethink spot profiles and intensity integration in a more sophisticated way. We may see a return to treating CCD readouts like an area detector, which would make the process of analyzing images moot.

--Paul

--- On Wed, 1/20/10, Jacob Keller <j-kell...@md.northwestern.edu> wrote:

> From: Jacob Keller <j-kell...@md.northwestern.edu>
> Subject: [ccp4bb] Refining against images instead of only reflections
> To: CCP4BB@JISCMAIL.AC.UK
> Date: Wednesday, January 20, 2010, 12:47 PM
> Dear Crystallographers,
>
> One can see from many posts on this listserve that in any
> given x-ray diffraction experiment, there are more data than
> merely the diffraction spots. Given that we now have vastly
> increased computational power and data storage capability,
> does it make sense to think about changing the paradigm for
> model refinements? Do we need to "reduce" data anymore?
> One could imagine applying various functions to model the
> intensity observed at every single pixel on the detector.
> This might be unnecessary in many cases, but in some cases,
> in which there is a lot of diffuse scattering or other such
> features, perhaps modelling all of the pixels would really
> be more true to the underlying phenomena? Further, it might
> be that the gap in R values between high- and low-resolution
> structures would be narrowed significantly, because we would
> be able to model the data, i.e., reproduce the images from
> the models, equally well for all cases. More information
> about the nature of the underlying macromolecules might
> really be gleaned this way. Has this been discussed yet?
>
> Regards,
>
> Jacob Keller
>
> *******************************************
> Jacob Pearson Keller
> Northwestern University
> Medical Scientist Training Program
> Dallos Laboratory
> F. Searle 1-240
> 2240 Campus Drive
> Evanston IL 60208
> lab: 847.491.2438
> cel: 773.608.9185
> email: j-kell...@northwestern.edu
> *******************************************
>
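To make Paul's and Jacob's points concrete, here is a toy sketch of what "refining against pixels" could look like, with the nuisance contributions Paul lists (cryo/loop scatter, air absorption, dark current) lumped into one smooth background term. It is Python/SciPy; the Gaussian spot shape, the polynomial background, and every parameter here are assumptions made purely for illustration, not any existing program's scheme.

import numpy as np
from scipy.optimize import least_squares

ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
spot_xy = np.array([[16.0, 20.0], [40.0, 44.0], [50.0, 12.0]])  # fake spot centres
true_I = [800.0, 300.0, 550.0]                                  # "structural" intensities

def forward(params):
    # Per-pixel model: smooth nuisance background + Gaussian "Bragg" spots.
    b0, b1, b2, sigma, *I = params
    r2 = (xx - nx / 2) ** 2 + (yy - ny / 2) ** 2
    model = b0 + b1 * r2 + b2 * r2 ** 2       # stands in for diffuse/air/dark level
    for (cy, cx), inten in zip(spot_xy, I):
        model = model + inten * np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return model

# Synthesize a noisy "observed" image from known ground truth.
rng = np.random.default_rng(1)
truth = forward([30.0, -2e-3, 1e-7, 1.5] + true_I)
observed = rng.poisson(np.clip(truth, 0, None)).astype(float)

# Refine structural and nuisance parameters jointly against every pixel.
x0 = [20.0, 0.0, 0.0, 2.0, 100.0, 100.0, 100.0]
fit = least_squares(lambda p: (forward(p) - observed).ravel(), x0)
print("refined spot intensities:", fit.x[4:])

The only point of the toy is that the structural parameters (spot intensities) and the nuisance parameters (background shape, spot width) end up refined together against every pixel - which is essentially the trade-off both posts are describing.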