This is true, but if we really took the air-scatter argument seriously we
would go back to the days of huge helium-filled enclosures to get rid of
the air scatter. Some beamlines currently direct a He outflow from the
collimator toward the crystal, which reduces air scatter from the
incident beam, but I have not seen many beamline "helium box" setups
that also reduce the air scatter from the diffracted beams.
Reducing air scatter between the collimator and the beamstop gives the
largest reduction in X-ray-induced scattering background. As a rough
estimate, compare the incident intensity times the air path length before
the beamstop with the diffracted intensity times the air path length from
crystal to detector. That comparison suggests that air scatter from the
diffracted beams (even summed over all reflections) is small.
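As a crude illustration of the comparison I mean (every number below is a
made-up placeholder, not a measurement):

incident_flux = 1.0            # incident beam intensity (arbitrary units)
incident_air_path_cm = 30.0    # air path from collimator to beamstop (assumed)
diffracted_fraction = 1e-3     # total fraction of the beam ending up in Bragg spots (assumed)
diffracted_air_path_cm = 20.0  # air path from crystal to detector (assumed)

# air scatter scales roughly as (intensity traversing the air) x (path length)
incident_scatter = incident_flux * incident_air_path_cm
diffracted_scatter = incident_flux * diffracted_fraction * diffracted_air_path_cm

print("incident-beam air scatter   ~ %.3g" % incident_scatter)
print("diffracted-beam air scatter ~ %.3g" % diffracted_scatter)
print("ratio (diffracted/incident) ~ %.1e" % (diffracted_scatter / incident_scatter))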
Absorption/attenuation of the diffracted beams is an issue that a He box
can address, and this (along with the air-scatter reduction) has to be
balanced against the absorption by the window(s) of the He box itself. My
impression is that there are a few extreme cases where a He box improves
data quality, but that these are rarer than the cases where one is
actually used. It would be nice to see a systematic study of this across
a range of conditions.
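A rough Beer-Lambert sketch of that trade-off; the attenuation lengths and
window loss below are placeholders, and real values should come from a
tabulation such as NIST XCOM for the wavelength in use:

import math

path_cm = 25.0            # crystal-to-detector distance (assumed)
atten_len_air_cm = 300.0  # rough 1/e attenuation length of air near 12 keV (placeholder)
atten_len_he_cm = 5.0e4   # helium is nearly transparent at these energies (placeholder)
window_loss = 0.005       # fractional loss per He-box window (placeholder)
n_windows = 2             # entrance and exit windows (assumed)

# transmission of the diffracted beam through plain air
t_air = math.exp(-path_cm / atten_len_air_cm)
# transmission through the He box, including losses at its windows
t_he = math.exp(-path_cm / atten_len_he_cm) * (1.0 - window_loss) ** n_windows

print("transmission through air    ~ %.3f" % t_air)
print("transmission through He box ~ %.3f" % t_he)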
IMHO mini-beams (at least those which retain a small divergence) are
critical for small crystals and rods where multiple crystals would
otherwise be needed, and they seem to make a significant difference in a
number of cases, including the mosaicity scanning mentioned earlier.
- Thomas
Thomas Earnest, Ph.D.
Senior Scientist and Group Leader
Structural Proteomics Development Group
Physical Biosciences Division
MS64R0121
Lawrence Berkeley National Laboratory
Berkeley CA 94720
[EMAIL PROTECTED]
510 486 4603
Ethan A Merritt wrote:
On Sunday 25 November 2007 14:43, Ronald E Stenkamp wrote:
Just a few comments on "consider a crystal bathed in a uniform beam".
Anyway, I thought the reason people went to smaller beams was that
it made it possible to resolve the spots on the film or detector.
Isn't that the main reason for using small beams?
If you mean that the projected image of the crystal onto the detector
is smaller because of a smaller beam, I think that could only be relevant in
the case of truly huge crystals. On the other hand, as mentioned earlier
in this thread, there is a possibility that a small beam will illuminate
a sweet spot on the crystal with lower mosaicity. In that case yes,
the smaller beam may make it possible to resolve spots that would
otherwise overlap due to high mosaicity.
I think that is the strongest argument being advanced recently for
the use of micro-beam apparatus.
The other argument is that a smaller beam will generate lower background
due to air-scatter. So for weakly diffracting crystals you want a beam
that is no bigger than the crystal, as any part of the beam that doesn't
hit the crystal contributes to the background but not to the signal.
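A toy version of that scaling argument, with purely schematic numbers (no
real fluxes or scattering cross sections):

crystal_width_um = 20.0   # assumed crystal size

for beam_width_um in (10.0, 20.0, 50.0, 100.0):
    # Bragg signal comes only from the illuminated part of the crystal,
    # while air-scatter background grows with the full beam cross-section.
    signal = min(beam_width_um, crystal_width_um) ** 2
    background = beam_width_um ** 2
    print("beam %5.0f um   signal/background ~ %.2f"
          % (beam_width_um, signal / background))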
I'm less convinced that frame-to-frame scaling can correct for
absorption very well. For our irregularly shaped protein crystals,
before area detectors came along, we'd use an empirical correction
(the one due to North comes to mind) based on rotation about the phi axis
of a four-circle goniostat.
The current scaling algorithms for area detectors do more than generate
a frame-to-frame scale. Separate correction factors are routinely
calculated for different regions of the diffraction image.
These map back onto a set of approximately equal X-ray paths through
the crystal. Furthermore, the 3D profile fitting done by some processing
programs is a logical extension of those same empirical corrections that
we did back in the 70s.
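A minimal sketch of the per-region idea (purely illustrative; this is not
how any actual program such as SCALA or XDS parameterises its corrections):

def region_index(x_mm, y_mm, nx=3, ny=3, det_mm=300.0):
    # Map detector coordinates onto a coarse nx-by-ny grid of regions.
    ix = min(int(x_mm / det_mm * nx), nx - 1)
    iy = min(int(y_mm / det_mm * ny), ny - 1)
    return iy * nx + ix

def corrected_intensity(i_obs, frame, x_mm, y_mm, frame_scales, region_scales):
    # One scale per frame plus one per (frame, detector region), so that
    # reflections with similar beam paths through the crystal share a factor.
    k = frame_scales[frame] * region_scales[(frame, region_index(x_mm, y_mm))]
    return i_obs / k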
It'd be interesting to determine the validity of the assumption that
absorption is simply a function of frame number.
I don't think any of the current generation of programs make that
assumption. But maybe I'm giving them too much credit?