After conversing with Bernhard a bit offline I think the relevant
question is:
How far apart can two electrons in the crystal be before their
scattering becomes "incoherent" ... as in no longer interfering with
each other in the way Bragg described.
The answer to this is about 10 um if the source is 1 m away (or the
detector, whichever is closer), and about 3 um if the detector/source is
100 mm away.
I derive this answer from the breakdown of Bragg's assumption that the
incoming and outgoing waves are perfect plane waves. Since the
distance to the source is not infinite, there must be some curvature to
the wavefront. This has nothing to do with beam divergence or spectral
dispersion. We are talking about one photon (emitted by the
acceleration of some electron in the storage ring or in the copper
anode) propagating through space in the quantum-mechanical, wave-like
way, and then being detected with some probability distribution
that is the result of all these interfering scattered waves. The Braggs
simplified all this to a single geometric diagram, and that is why we
all like them so much.
Here is my version of the Bragg diagram:
http://bl831.als.lbl.gov/~jamesh/pickup/Bragg.png
The fundamental principle behind Bragg's Law is that the length of the
path taken by the wavefront from the source to an atom in the crystal
and then on to a point on the detector must be the same (plus or minus
an integral number of wavelengths) as some other path from the exact
same point in the source to the exact same point on the detector but
through a different atom (maybe in the same unit cell, maybe not).
Bragg assumed that the source and the detector were REALLY far away,
which simplifies the math, but that assumption must break down eventually
as the atoms in the crystal get further and further apart. For
example, if the atoms are on opposite sides of the room (d-spacing large
compared with the distance to the source), then Bragg's Law will break down.
I have drawn a generalized Bragg construction (not necessarily to scale)
here:
http://bl831.als.lbl.gov/~jamesh/pickup/Bragg-ish.png
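The same construction is easy to check numerically. Here is a small
sketch (mine, in Python rather than gnuplot; R, L and theta are made-up
illustration values) that compares the exact two-atom path difference,
with the source and detector at finite distances, against the
parallel-beam value 2*d*sin(theta):

    # A quick numerical version of the generalized construction (my own
    # sketch, not part of the original diagrams).  All distances in Angstrom.
    import math

    def path_difference(R, L, d, theta):
        """(path via atom #1) - (path via atom #2), with the source and
        detector at finite distances R and L from atom #1."""
        two_theta = 2 * theta
        S  = (-R, 0.0)                                            # source
        D  = (L * math.cos(two_theta), L * math.sin(two_theta))   # detector
        A1 = (0.0, 0.0)                                           # atom #1
        A2 = (-d * math.sin(theta), d * math.cos(theta))          # atom #2, one d-spacing away
        dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
        return (dist(S, A1) + dist(A1, D)) - (dist(S, A2) + dist(A2, D))

    R = L = 1e10                   # source and detector both 1 m away
    theta = math.radians(15.0)     # an arbitrary Bragg angle
    for d in (3.0, 1e5, 1e8):      # atom separation: 3 A, 10 um, 1 cm
        exact  = path_difference(R, L, d, theta)
        planar = 2 * d * math.sin(theta)        # Bragg's parallel-line answer
        print("d = %-8g  exact = %.3f  2*d*sin(theta) = %.3f" % (d, exact, planar))

With both distances at 1 m, the difference between the exact answer and
2*d*sin(theta) is utterly negligible for atomic-scale separations,
reaches roughly a wavelength once the two atoms are ~10 um apart, and is
enormous by the time they are a centimeter apart.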
So, all we have to do is calculate the distance traveled by "the photon"
taking a path through atom #1 vs atom #2 and then require that the
difference between these two distances be an integral multiple of the
wavelength. This is exactly
what Bragg did, except he assumed d was very much smaller than the
source-to-xtal distance. You need some very high-precision
floating-point arithmetic to do this "right", since the distance between
the source and the detector is typically ten orders of magnitude larger
than the distance between the atoms. It is a rare thing in science when
a technique spans such a large range in length scale. I imagine Bragg
didn't feel like working with 10-digit numbers, so he just made the
lines parallel.
Since I, unlike Bragg, have access to a very high-precision
calculating machine (gnuplot), I can calculate Bernhard's "coherence
length" using the Pythagorean theorem:
The simplest thing is to assume forward scattering and consider two
electrons that are always exactly the same distance from the source
(stx). Let's assume this is one meter, or 10^10 Angstrom. For
simplicity, let's also assume that the detector is one meter from the
sample (xtf), and that the path from the source to the detector via
atom #1 is straight (as in my second diagram). Now, in
Bragg's diagram, it doesn't matter how big "d" is: the lines are
parallel and the same length (outside the diagram) so the x-rays always
constructively interfere at the detector as long as 2*d*sin(theta) =
n*lambda. So, given that the lines must actually intersect at the
source and the detector, how large can "d" be before the path from
source to detector via atom #2 is 0.5 Angstrom (half of a 1 Angstrom
wavelength, i.e. fully destructive) longer than the path through atom #1?
sqrt( (1e10)**2 + (1e5)**2 ) ~= 1e10 + 0.5
So, 100,000 Angstroms (10 um).
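The same arithmetic is easy to reproduce outside of gnuplot; here is a
minimal sketch using Python's standard decimal module (my re-derivation
of the numbers above, including the 100 mm case):

    # Re-checking the arithmetic above with Python's decimal module instead
    # of gnuplot.  Both electrons are the same distance from the source, so
    # only the detector-side leg differs.  All distances in Angstrom.
    from decimal import Decimal, getcontext

    getcontext().prec = 40               # plenty of digits for a 1e10 : 1 ratio

    xtf = Decimal("1e10")                # crystal-to-detector distance: 1 m
    d   = Decimal("1e5")                 # separation of the two electrons: 10 um
    print((xtf**2 + d**2).sqrt() - xtf)  # ~0.5 Angstrom extra path, i.e. lambda/2

    # and for a 100 mm distance: solve sqrt(xtf^2 + d^2) = xtf + 0.5 for d
    xtf = Decimal("1e9")
    print(((xtf + Decimal("0.5"))**2 - xtf**2).sqrt())  # ~3.2e4 Angstrom, about 3 um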
-James Holton
MAD Scientist
James Holton wrote:
Ethan Merritt wrote:
My impression is that the coherence length from synchrotron sources
is generally larger than the x-ray path through a protein crystal.
But I have not gone through the exercise of plugging in specific
storage ring energies and undulator parameters to confirm this
impression. Perhaps James Holton will chime in again?
Hmm. I think I should point out that (contrary to popular belief) I
am not a physicist. I am a biologist. Yup. BS and PhD both in
biology. However, since I work at a synchrotron I do have a lot of
physicists and engineers around to talk to. Guess some of it has
rubbed off.
I passed Bernhard's question along to Howard Padmore (who is
definitely a physicist) here at ALS and he gave me a very good
description of the longitudinal coherence length, similar to that
provided by Colin's posted reference:
coherence_length = lambda^2 / delta_lambda
This made a lot of sense to me until I started to consider what
happens if lambda ranges from 1 to 3 A, like it does in Laue
diffraction. One might expect from this formula that the coherence
length would be very small, smaller than a typical protein unit cell,
in which case the scattering from different unit cells would not
interfere with each other at all and you should see the
molecular transform in the diffraction pattern. But the oldest
observation in crystallography is that Laue patterns have sharp
spots. You don't see the molecular transform, despite how nice that
would be (no more phase problem!).
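Just to put rough numbers on that expectation (my own illustrative
plug-in, not anyone's measurement):

    # Rough plug-in of the formula above for the Laue case (illustrative
    # numbers only).  Units: Angstrom.
    lam       = 2.0              # a wavelength in the middle of the 1-3 A band
    delta_lam = 2.0              # bandpass: 3 A - 1 A
    print(lam**2 / delta_lam)    # ~2 A -- much smaller than any protein unit cell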
I think the coherence length is related to how TWO different photons
can interfere with each other, and this is a rare event indeed. It
has nothing to do with x-ray diffraction as we know it. No matter how
low your flux is, even one photon per second, you will eventually
build up the same diffraction pattern you get at 10^13 photons/s.
Colin is right that photons should be considered as waves, and on the
length scale of unit cells it is a very good approximation to
consider the electromagnetic wavefront coming from the x-ray source
to be a flat plane, as Bragg did in his famous construction.
So, I think perhaps Bernhard asked the wrong question? I think the
question should have been "how far apart can two unit cells be before
they stop interfering with each other?" The answer to this one is:
quite a bit.
Consider a silicon crystal (like the ones in my monochromator). These
things are about 10 cm across, but every atom is in perfect alignment
with every other. It is one single mosaic domain that you can hold in
your hand. And as soon as you shine an x-ray beam on a large perfect
crystal, lots of "weird" stuff happens. Unlike in protein crystals, the
scattering of the x-rays is so strong that the scattered wave not only
depletes the incoming beam (it penetrates less than 1 mm and is nearly
100% reflected), but this now very strong diffracted ray can reflect
again on its way out of the crystal (off of the same HKL index, but
different unit cells). Then some of that secondarily-diffracted ray
will be in the same direction as the main beam, and interfere with it
(extinction). Accounting for all of this is what Ewald did in his
so-called "dynamical theory" of diffraction. The important thing to
remember about perfect crystals is that a SINGLE PHOTON interacting
with my 10 cm wide silicon crystal will experience all these dynamical
effects. It doesn't matter what the "coherence length" is.
Now, if a perfect crystal is really really small (much smaller than
the interaction length of scattering), then there is no opportunity
for the re-scattering and extinction and all that "weird stuff" to
happen. In this limiting case, the scattered intensity is simply
proportional to the number of unit cells in the beam and also to
|F|^2. This is the basic intensity formula that Ewald showed how to
integrate over all the depleting beams and re-scattering stuff to
explain a large perfect crystal. As I understand it, the fact that
large, macroscopic "single" crystals were found to still obey the
formula for a microscopic crystal came as something of a shock in the
time of Darwin and Ewald. They explained this
observation by supposing that these crystals were "ideally imperfect"
and actually made up of lots of little perfect crystals that were
mis-oriented with respect to one another enough so that the diffracted
ray from one would be very unlikely to re-reflect off of another
"mosaic domain" before it left the crystal. Protein crystals are a
very good example of ideally imperfect crystals.
I'm not sure where this rumor got started that the intensity reflected
from a mosaic block or otherwise perfect lattice is proportional to
the square of the number of unit cells. This is never the case. The
reason is explained in Chapter 6 of M. M. Woolfson's excellent
textbook, but the long and short of it is: yes, the instantaneous
intensity (photons/steradian/s) at the near-infinitesimal moment when
a mosaic domain diffracts is proportional to the number of unit cells
squared, but this is not useful because x-ray beams are never
perfectly monochromatic nor perfectly parallel. This means that for
all practical purposes the spot must always be "integrated" over some
angular width (such as beam divergence). That is, you have to get rid
of the "steradians" in the units of intensity before you can get
simply photons/s. The width of the intrinsic "rocking curve" of this
near-infinitely-sharp peak from a single mosaic block is inversely
proportional to the number of unit cells in the mosaic block. So, the
integrated intensity (photons/s * exposure time) is proportional to
the number of unit cells in the beam. It doesn't matter how perfect
the crystal is.
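A toy version of that bookkeeping (just the scaling argument, nothing
like a real diffraction calculation) looks like this:

    # Toy scaling sketch: peak height goes as N^2, the intrinsic rocking-curve
    # width goes as 1/N, so the angle-integrated intensity grows only as N.
    # Constants of proportionality are ignored.
    for N in (1e3, 1e6, 1e9):        # unit cells in the mosaic block
        peak  = N**2                 # instantaneous peak (photons/steradian/s)
        width = 1.0 / N              # intrinsic rocking-curve width (radians)
        print(N, peak * width)       # integrated intensity scales as N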
Okay, so I know there are a lot of people out there who don't agree
with me on this, but please have a look at Woolfson's Ch 6 before
flaming me. I may be nothing more than a biologist, but I did take a
few math classes in college and think I do understand the math in that
book.
So, I think the answer Bernhard was looking for is: the size of a
mosaic domain, which can be as much as 10 cm, or as little as a few
dozen unit cells.
-James Holton
MAD Scientist