It is interesting and relevant here, I think, that if you measure
background-subtracted spot intensities you are actually measuring the
AVERAGE electron density. Yes, the arithmetic average of all the unit
cells in the crystal. It does not matter how any of the vibrations are
"correlated"; it is still just the average (as long as you subtract the
background). The diffuse scatter does NOT tell you about the deviations
from this average; it tells you how the deviations are correlated from
unit cell to unit cell.
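In symbols (this is just the standard textbook decomposition, nothing
new to this thread): writing <.> for the average over unit cells,

    I_Bragg(h) ∝ |<F(h)>|^2        I_diffuse(q) ∝ <|F(q)|^2> - |<F(q)>|^2

The Bragg term sees only the average; how the diffuse term is
distributed over q is what encodes the cell-to-cell correlations.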
So, in order to use diffuse scattering information you will have to
refine a model in not just one unit cell, but many, and the only
information you have is correlations of motions, not the motion itself.
It would still be nice to have some software (the kind you can download
and install) to do diffuse scattering stuff, but I fear the drop in
Rfree you would get by incorporating diffuse scatter into model
refinement would be slim.
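For anyone who wants to convince themselves of the correlation point,
here is a toy 1-d simulation (mine, purely illustrative): two crystals
whose per-cell displacements have identical single-cell statistics, one
with independent cells and one with correlated cells, give statistically
the same Bragg peaks but differently distributed diffuse scatter.

    import numpy as np

    rng = np.random.default_rng(0)
    n_cells, a = 256, 1.0                    # 1-d crystal of 256 unit cells
    q = np.linspace(0.1, 20 * np.pi, 4096)   # reciprocal-space samples

    def pattern(shifts):
        # Kinematic diffraction from point scatterers at n*a + shift_n.
        x = np.arange(n_cells) * a + shifts
        F = np.exp(1j * np.outer(q, x)).sum(axis=1)
        return np.abs(F) ** 2

    sigma = 0.05
    uncorr = rng.normal(0, sigma, n_cells)           # independent cells
    corr = np.convolve(rng.normal(0, sigma, n_cells + 8),
                       np.ones(9) / 3.0, "valid")    # smoothed => correlated,
                                                     # same per-cell variance
    I_uncorr, I_corr = pattern(uncorr), pattern(corr)
    # The Bragg peaks (q = 2*pi*m/a) agree statistically; only the diffuse
    # intensity between them is redistributed by the correlations.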
There may be some advantages to measuring weak spots if we had better
models to fit to them than the "learned profiles" Rossmann suggested in
the '70s (the EVAL programs seem to be trying to do this; a rough sketch
of the idea follows this paragraph). But I think what WOULD be a major
step forward is handling overlaps more intelligently: bridging the gap
between powder diffraction (Rietveld stuff) and single-crystal
diffraction processing. I think the world is full of single-crystal
crystallographers who would love to have been able to process that one
really good crystal that was ruined by split or otherwise messy or
overlapping spots. What if you had a big unit cell and a mosaic spread
of 10 degrees? I've seen a lot of crystals like this! It would be great
if we didn't have to throw them away. Believe it or not, there are also
a lot of powder people who screen through hundreds of preps cursing
those "spotty rings", because Rietveld only works for nice, even powder
rings (mosaic spread of 180 degrees).
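The sketch I mean (my own, not anything out of EVAL or any real
program): if each spot is a scaled copy of a learned, normalized
profile sitting on a known background, the least-squares intensity has
a closed form.

    import numpy as np

    def profile_fit_intensity(pixels, profile, background):
        # Least-squares spot intensity, assuming
        #     pixels ~ I * profile + background
        # Minimizing sum((pixels - background - I*profile)**2) over I
        # gives the closed form below.  Unweighted for brevity; real
        # integration programs weight each pixel by its variance.
        resid = pixels - background
        return (profile * resid).sum() / (profile ** 2).sum()

Overlapping spots could in principle be handled the same way, by
fitting several profiles simultaneously (np.linalg.lstsq on a matrix
with one column per spot) instead of one intensity at a time.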
Refining against pixels or not, a program that could be used equally
well for powder, single-crystal, or cluster-of-crystals data would
definitely be nice! But I think all we need coming out the other end is
hkl and F. The "R factor gap" between Rmerge and Rcryst/Rfree does not
come from measurement errors; it comes from our inability to accurately
model the average electron density in a unit cell.
-James Holton
MAD Scientist
Harry wrote:
Hi
I, too, was struck by the potential similarity with Rietveld
refinement, then started to think about the differences. The biggest
difference, of course, is refining against "many" two-dimensional
images compared to refining against a linear plot - so I'd guess in
principle you'd get a big benefit in increasing the data:parameter
ratio (say, hundreds of multi-megapixel images amount to roughly 10^9
pixel observations, against the ~10^4 points of a typical 1-d powder
profile).
Of course, there's a hint above in the "refinement" part of the
process; you need the structure and a model for the causes of the
diffuse scatter etc. to have something to refine against, so it may
still be worthwhile extracting the Bragg intensities first to solve
your structure.
(I've never done Rietveld refinement, but read a good book on the
topic a few years ago - "The Rietveld Method", ed. R. A. Young. It
might be a little out of date now...)
On 20 Jan 2010, at 21:14, Klaus Fütterer wrote:
A long time ago I did a bit of Rietveld refinement and I see some
similarities between this approach and what people have been
proposing in this thread. Refining against the profile of the 1-d
powder diffraction pattern rather than extracting integrated
intensities helped to improve the quality of the refined structures
significantly. Finding the correct (or best) profile function,
however, took a while, at least for the X-ray case.
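To make the comparison concrete, a toy version of the forward model
Klaus describes might look like this (my sketch; real Rietveld profile
functions are pseudo-Voigt with angle-dependent widths, and the peak
intensities come from the structural model rather than being free
parameters):

    import numpy as np

    def powder_profile(two_theta, peak_pos, peak_I, fwhm, bg_coeffs):
        # Toy Rietveld-style pattern: Gaussian peaks at the Bragg
        # angles plus a polynomial background, evaluated over the
        # whole 1-d profile.
        sigma = fwhm / 2.3548          # convert FWHM to Gaussian sigma
        y = np.polyval(bg_coeffs, two_theta)
        for pos, I in zip(peak_pos, peak_I):
            y += I * np.exp(-0.5 * ((two_theta - pos) / sigma) ** 2)
        return y

Refinement then means adjusting the structural and profile parameters
until this matches the whole observed pattern (e.g. with
scipy.optimize.least_squares), rather than extracting integrated
intensities first.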
Klaus
=======================================================================
Klaus Fütterer, Ph.D.
Reader in Structural Biology
School of Biosciences P: +44-(0)-121-414 5895
University of Birmingham F: +44-(0)-121-414 5925
Edgbaston E: k.futte...@bham.ac.uk
Birmingham, B15 2TT, UK W: www.biochemistry.bham.ac.uk/klaus/
=======================================================================
On 20 Jan 2010, at 20:29, Edward Snell wrote:
Hi Paul,
I'll probably open myself up to criticism (welcomed) but I think I'd
disagree with this somewhat. While crystallography from the Bragg
reflections provides a nice static picture of the structure, looking
at the diffuse scatter in more detail may give more knowledge about
mechanism - i.e., whether there are any characteristic modes associated
with significant motion, etc. Higher resolution is not always good:
one of my enlightening experiences came from paying attention to
collecting very complete, very low resolution data. Similarly, after
collecting 0.8A data from a large protein I learnt a lot about data
processing, but even more about how to not tell anyone, move the
detector back, and then attenuate the beam :) The high-res data
provided a lot more work and didn't provide any more useful structural
knowledge than a 1.2A data set collected in a fraction of the time.
However, it did provide a window into how X-rays can perturb the
structure - being greedy is not always good.
Diffuse scattering has been neglected in the field (for good reason)
but I think we have the processing power to take advantage of it
now. To misquote Richard Feynman, "there is plenty of room at the
bottom" - make sure you get the low-resolution information as well as
the high.
I do agree that we may have to rethink image storage somewhat.
Looking over a paper not long ago whose analysis involved over 30,000
images reminded me of the days when tape drives were slower at writing
data than the detectors were at producing it. That mad scramble to
start the backup before starting collection ;) Real-time readout,
continuous rotation, etc., may require us to rethink what an "image"
even is.
Cheers,
Eddie
Edward Snell Ph.D.
Assistant Prof. Department of Structural Biology, SUNY Buffalo,
Hauptman-Woodward Medical Research Institute
700 Ellicott Street, Buffalo, NY 14203-1102
Phone: (716) 898 8631 Fax: (716) 898 8660
Skype: eddie.snell Email: esn...@hwi.buffalo.edu
Telepathy: 42.2 GHz
Heisenberg was probably here!
-----Original Message-----
From: CCP4 bulletin board [mailto:ccp...@jiscmail.ac.uk] On Behalf
Of Paul Smith
Sent: Wednesday, January 20, 2010 3:00 PM
To: CCP4BB@JISCMAIL.AC.UK
Subject: Re: [ccp4bb] Refining against images instead of only
reflections
Hi Jacob,
I see you're still in the crystallography business.
While you have an interesting idea, I doubt refining structures
against entire images would be of any use in obtaining higher
quality macromolecular structures. Much of what you see on the
screen is a function of parameters completely unrelated or
irrelevant to the structure being studied. Diffuse scattering can
come from the cryo liquid surrounding the crystal as well as the
fibers of the mounting loop itself. Background scattering is
related to beam collimation. Spot size/shape is a function of
crystal morphology, among other things. In addition, every detector
has its own peculiarities that make the intensities observed outside
the diffraction spots particular to that detector. Also, you would
have to take into account other physical properties such as ambient
temperature, detector dark current fluctuations, variations in air
absorption, etc.
So, you could conceivably fit all of these various parameters to the
images on hand, but none of them give you any actual information
about your structure. As always, if you want more information about
your structure, get higher resolution data.
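To see how quickly the nuisance parameters pile up, consider a toy
per-pixel forward model (all names here are hypothetical, not from any
real program): only one term in it carries structural information.

    def pixel_forward_model(bragg_I, spot_profile, smooth_bg, p):
        # Toy model of one detector pixel.  Only bragg_I depends on
        # the structure; every other term describes the experiment or
        # the detector.
        signal = (p["scale"] * p["air_transmission"]
                  * bragg_I * spot_profile)
        return p["gain"] * (signal + smooth_bg) + p["dark_current"]

You could fit the gain, dark current, and background beautifully and
still learn nothing about the molecule - which is the point above.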
Nonetheless, I do think some thought could be put into exactly how
data are reduced. Perhaps the impending era of real-time detector
readout will help us rethink spot profiles and intensity integration
in a more sophisticated way. We may see a return to treating CCD
readouts like an area detector, which would make the process of
analyzing discrete images moot.
--Paul
--- On Wed, 1/20/10, Jacob Keller <j-kell...@md.northwestern.edu>
wrote:
From: Jacob Keller <j-kell...@md.northwestern.edu>
Subject: [ccp4bb] Refining against images instead of only reflections
To: CCP4BB@JISCMAIL.AC.UK
Date: Wednesday, January 20, 2010, 12:47 PM
Dear Crystallographers,
One can see from many posts on this listserv that in any
given x-ray diffraction experiment, there are more data than
merely the diffraction spots. Given that we now have vastly
increased computational power and data storage capability,
does it make sense to think about changing the paradigm for
model refinements? Do we need to "reduce" data anymore? One
could imagine applying various functions to model the
intensity observed at every single pixel on the detector.
This might be unnecessary in many cases, but in some cases,
in which there is a lot of diffuse scattering or other
phenomena, perhaps modelling all of the pixels would really
be more true to the underlying phenomena? Further, it might
be that the gap in R values between high- and low-resolution
structures would be narrowed significantly, because we would
be able to model the data, i.e., reproduce the images from
the models, equally well for all cases. More information
about the nature of the underlying macromolecules might
really be gleaned this way. Has this been discussed yet?
Regards,
Jacob Keller
*******************************************
Jacob Pearson Keller
Northwestern University
Medical Scientist Training Program
Dallos Laboratory
F. Searle 1-240
2240 Campus Drive
Evanston IL 60208
lab: 847.491.2438
cel: 773.608.9185
email: j-kell...@northwestern.edu
*******************************************
Harry
--
Dr Harry Powell, MRC Laboratory of Molecular Biology, MRC Centre,
Hills Road, Cambridge, CB2 0QH