Just a few things to add:

If you have a _current_ version of ADXV, then that "disappearing sharp spots" problem has been fixed. Try downloading a new copy. It's free.

To answer the OP's question: there was a paper written about practical Pilatus data collection recently:
http://dx.doi.org/10.1107/S0907444911007608

But I think it worth pointing out that, theoretically, the best "strategy" for a detector with no read-out noise is no strategy at all. This is because the whole point of a "strategy" is to get a complete dataset on as few images as possible because each image carries a certain amount of noise with it. Thus minimizing the number of images minimizes this source of noise. However, if there is no read-out noise then it doesn't matter how many images you have, and the next thing to worry about is radiation damage. The best way to deal with radiation damage is to divide your data over as many images as possible. You then move the problem of "strategy" to after you go home, where you can figure out where to "cut" the data or perhaps even do some zero-dose extrapolation.

An instructive way to think about this is to consider the most extreme case of "high multiplicity" where you record only one photon per image. For a 100um round crystal, a 30 MGy dataset will involve only a trillion or so scattered photons, and that number is pretty much fixed by the radiation damage physics (Holton & Frankel 2010). So when it comes to "strategy" the only question is how to divide these photons up. Images with only one photon hit and every other pixel "zero" will compress very well, so the storage needed to do a "single photon image" dataset doesn't take up nearly as much space as you might initially think. If you have such a "single-photon image dataset" you can then sum all the images with "phi" values that fall between 0 and 1 as one new image, then sum phi= 1 to 2 as a second image, etc. and process with your favorite software. Or, you can change your mind and sum images for 0 to 0.1, 0.1 to 0.2, etc. Essentially, single-photon image data collection would allow you to devise any conceivable "strategy" AFTER you have collected the data!
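To make the "strategy after the fact" idea concrete, here is a toy numpy sketch. The 100x100 "detector" and the simulated event list are made up purely for illustration (real geometry would be a 6M-pixel panel and a real event stream); the point is only that an event list of (x, y, phi) can be summed into conventional images with ANY delta-phi you like, after the beamtime is over:

```python
import numpy as np

# Toy event list: each scattered photon recorded as (x, y, phi).
# These are simulated hits on a made-up 100x100 "detector".
rng = np.random.default_rng(0)
n_events = 1_000_000
x = rng.integers(0, 100, n_events)       # pixel fast coordinate
y = rng.integers(0, 100, n_events)       # pixel slow coordinate
phi = rng.uniform(0.0, 180.0, n_events)  # spindle angle of each hit

def rebin(x, y, phi, delta_phi, shape=(100, 100)):
    """Sum single-photon events into conventional images of width delta_phi."""
    n_images = int(np.ceil(phi.max() / delta_phi))
    images = np.zeros((n_images,) + shape, dtype=np.uint32)
    frame = np.minimum((phi / delta_phi).astype(int), n_images - 1)
    np.add.at(images, (frame, y, x), 1)  # one count per photon hit
    return images

# Same events, two different "strategies", chosen after data collection:
deg1 = rebin(x, y, phi, 1.0)    # one-degree images
deg01 = rebin(x, y, phi, 0.1)   # fine-sliced images
assert deg1.sum() == deg01.sum() == n_events  # no photons lost either way
```

Change your mind about the slicing and you just re-run `rebin`; the photons themselves never change.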

Why doesn't everybody do this? Mostly because they are impatient. In fact, it boggles my mind sometimes how someone who has slaved away on a project for years, even decades, will balk at doing a high-multiplicity data collection because it will "take too long". Admittedly, a trillion-image dataset, collected at 200 Hz would take 158 years to collect, but what about a 2-photon-per-image dataset? 10? 100,000? What about one photon per pixel (on average)? That would only take 14 minutes (1e12 photons / 6e6 pixels / 200 Hz / 60 s ). Yes, you'd have almost 200,000 images to deal with, but if getting the strategy "right" is going to make-or-break solving your structure, do you really care?
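The arithmetic above in a few lines, so you can plug in your own frame rate and pixel count (these are the numbers from the text: 1e12 photons, 6e6 pixels, 200 Hz):

```python
photons = 1e12       # total scattered photons in a ~30 MGy dataset
pixels = 6e6         # Pilatus 6M pixel count
frame_rate = 200.0   # frames per second

# One photon per image: one image per photon.
one_per_image_years = photons / frame_rate / (3600 * 24 * 365.25)

# One photon per pixel per image (on average):
images = photons / pixels
minutes = images / frame_rate / 60

print(f"1 photon/image: {one_per_image_years:.0f} years")
print(f"1 photon/pixel: {images:.0f} images, {minutes:.0f} minutes")
```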

Currently, the major barrier to single-photon image or single-photon-pixel datasets is the processing software. Things like estimating the variance of a pixel field with no photons in it are, well, problematic. But in the future I imagine a more "holistic" approach can be made, where not only the position of the photon hit is considered (x,y and phi), but the time as well. Sort of an "intrinsic" zero-dose extrapolation.

At this point, one might wonder why we care so much about "dynamic range" when all you need is one or two bits per pixel (0-3 photons), and that actually IS a very good question. A question that carries us into other sources of error that we don't see in the detector specs, such as the accuracy of the pixel calibration. You might think that if you are "counting photons" then the calibration would be perfect, but this is only true if you can be sure you counted ALL of the photons, and there are no detectors that do that. In most cases (Pilatus and phosphor-coupled CCD alike) about 20% of the photons pass right through the x-ray sensitive layer. If this "capture fraction" varies from pixel to pixel (and it always does), then that is a source of systematic error. Same thing for photons that get absorbed in the front window, or flecks of dust on the front window.

Yes, you can "correct" for all these things, and that is what "calibration" is all about, but it is important to remember that any "calibration" is the result of some sort of experimental measurement, and all experimental measurements have an error bar. Calibrating something to 5% accuracy is pretty easy. 1% is difficult and 0.1% is very, very hard. The result of all these "calibration" errors ends up in your low-resolution Rmerge (at high multiplicity). Remember, if your brightest spots have an average of 1 million photons in them, then Rmerge should be 0.1% (1e6 vs sqrt(1e6)). The fact that it is bigger means that something other than photon-counting error is playing a role.
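That photon-counting "floor" on Rmerge is just 1/sqrt(N), which is easy to tabulate:

```python
import math

def counting_rmerge_floor(n_photons):
    """Best achievable fractional error from counting statistics alone:
    sqrt(N)/N = 1/sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

for n in (1e4, 1e6):
    print(f"{n:.0e} photons -> {100 * counting_rmerge_floor(n):.2f}% floor")
```

If your high-multiplicity, low-resolution Rmerge sits well above this floor, the excess is coming from calibration (or other systematic) error, not photon counting.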

The error due to pile-up is something that is probably news to our current generation of CCD-trained crystallographers who are too young to remember the "multiwire era", so I think it important to describe it here. Mind you, Dectris has done a very good job of minimizing the influence of pile-up on your data, and on Pilatus3 they are taking even further steps to deal with it, but Pilatus is still a counting device, and there are certain things the user of any counting device should bear in mind:

1) they are not linear
2) they are sensitive to photons/s, not just photons
3) they can "roll over" at high intensity
4) it can be hard to know if any of the above is happening

On any counting device (such as Pilatus) some absorbed photons go "missing" because they hit a pixel while that pixel was still "recovering" from the last photon hit. This is called "pile-up" and it is the main reason why I don't like counting devices. Perhaps I am emotionally scarred from the early days of commissioning my beamline when I was trying to figure out why I was getting upside-down absorption spectra for Se scans. A rookie mistake (in retrospect), but I wasted a lot of time on that one. Turns out my fluorescence detector (a counting device) was not only having pile-up issues, but had rolled over into a regime where the detector spends more time "recovering" than it does counting, and increasing the true intensity actually gives you less observed "counts". Sometimes a great deal less than the "maximum count rate".

What haunts me to this day about "pile up" is that the detector does not tell you when it happens! That is, Pilatus introduces a new kind of 'silent overload'. The first kind of overload (more than 20 bits of photons) does turn your pixels yellow to tell you that something is "wrong". But the new kind of overload (too many photons/s at some point during the exposure) does not give you a yellow pixel. Sometimes you will see a spot on a Pilatus that has a "hole" in the middle, and that is an excellent indicator that the central pixel "rolled over" due to pile-up, but with single-pixel spots, this trick doesn't work. Yes, there are ways around this problem, one of which is Sol Gruner's new "mmPAD", which can integrate (no pile up) as well as count photons. This has the advantage of being a potentially self-calibrating detector, and it is now being marketed by ADSC. Dectris, however, seems to be specifically avoiding anything that isn't a counter.

Of course, roll-over is the most extreme kind of pile-up; at lower intensities what you get is non-linearity. This is probably the most valuable lesson I learned from my fluorescence detector: counting devices are fundamentally non-linear. That is, the graph of "true photons" vs "observed counts" is always curved. And not only that, it has two "true photons" solutions for every "observed counts" value. Which one is right? There is actually no way to tell, not without changing the incident intensity and seeing if "observed counts" goes up or down. The "pile up correction" algorithm always picks the lower of the two possible "true photons".
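For the curious, here is a sketch of the two-solution ambiguity using one common dead-time model, the "paralyzable" counter, where observed = true * exp(-true * tau). Whether this is exactly the right model for any particular detector is an assumption; the 100 ns dead time is just the round number used later in this message:

```python
import math

TAU = 1e-7  # assumed 100 ns counting dead time

def observed(n, tau=TAU):
    """Paralyzable-counter model: observed counts/s for true rate n photons/s.
    The curve rises, peaks at n = 1/tau, then rolls over."""
    return n * math.exp(-n * tau)

def invert(m, branch, tau=TAU, iters=200):
    """Recover a true rate n with observed(n) == m by bisection.
    branch='low' searches below the roll-over point 1/tau, 'high' above it."""
    lo, hi = (0.0, 1 / tau) if branch == "low" else (1 / tau, 1e3 / tau)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # On the low branch observed() is increasing, on the high branch
        # decreasing, so the bracketing update flips between branches.
        if (observed(mid) < m) == (branch == "low"):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m = 1.5e6                   # some observed count rate (counts/s)
n_low = invert(m, "low")    # the solution the pile-up correction picks
n_high = invert(m, "high")  # the other, equally consistent, solution
print(f"observed {m:.3g} counts/s -> true {n_low:.3g} or {n_high:.3g} photons/s")
```

Both roots reproduce the same observed count rate; only an attenuation series can tell you which branch you are on.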

Some beamline scientists turn the pile-up correction off. Why? Because sometimes it makes things worse. This is because the equation Poisson derived for "correcting" the count rate assumes on a very fundamental level that the "true intensity" is constant over the counting period, but if you have spots rotating through the Ewald sphere or bunches of electrons flashing by in the storage ring, then this is not exactly the case. This has even been studied recently:
http://dx.doi.org/10.1107/S0909049513000411

The pile-up problems arising from sharp spots can be mitigated with "fine slicing". If you slice fine enough to "outrun" the variations in instantaneous intensity as the relp moves in and out of the Ewald sphere, then the intensity actually is "constant" over any given "exposure time", and the pile-up correction will work properly. This is why you MUST fine-slice when using a Pilatus detector so that the delta-phi is smaller than the rocking width of your crystal. With a CCD (which has no pile-up) the optimum delta-phi is usually larger.

How can you tell if you are having pile-up problems? The most appropriate test is to repeat the dataset with an attenuated beam and longer exposure time so that you get the same photons/pixel, but different photons/s. If the second dataset is better than the first, then you had pile-up problems. Or, perhaps you had beam-flicker or shutter-jitter problems, but do you care? The second dataset was better! The true test for pile-up issues is to merge the two datasets and look at "outliers". If most of the outliers show bright spots getting brighter in the slow dataset (relative to weak spots), then you had pile-up problems in the fast dataset.

How can you be sure you don't have pile-up problems? Slow down. As long as none of your pixels experience photon arrival rates approaching the maximum count rate (~1 million photons/s for Pilatus) at ANY TIME during the exposure, then you're good. This is, of course, easier said than done because you never know what your brightest spot will be. However, once you have a first-pass dataset, you can look at your processing output to find the brightest spot and see what "phi" value that was. If you rotate to that "phi" and take a series of stills (no rotation during exposure) with different attenuation settings, you should be able to verify whether or not the intensity of that brightest spot is linear with incident flux.
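Here is what such an attenuation series would look like on paper if the detector followed the simple paralyzable dead-time model; the dead time and the "true" peak rate are made-up numbers for illustration, not measurements:

```python
import math

TAU = 1e-7       # assumed 100 ns counting dead time
TRUE_PEAK = 2e6  # hypothetical true rate of the brightest pixel, photons/s

def observed(rate, tau=TAU):
    """Paralyzable dead-time model for the registered count rate."""
    return rate * math.exp(-rate * tau)

transmissions = [0.125, 0.25, 0.5, 1.0]  # attenuator settings in the series
counts = [observed(TRUE_PEAK * t) for t in transmissions]

# A linear detector would give a constant counts/transmission ratio;
# pile-up makes the ratio sag as the transmission goes up.
for t, c in zip(transmissions, counts):
    print(f"T={t:5.3f}: {c / t / TRUE_PEAK:.3f} of the linear expectation")
```

If the ratio is flat across the series, your brightest spot is safely below pile-up territory; if it sags at full beam, attenuate.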

How big is the error due to pile-up? Well, it does depend on a number of things, but as long as you are below the "maximum count rate", the maximum possible error introduced by the "correction" is always smaller than the error due to applying no correction at all. For a 100 ns dead-time (note that this is the "dead time" of the counting circuit, not the "dead time" between image read-outs) and a phi slice short enough for the spot intensity to be "constant", the error of not doing the pile-up correction is 1% for 100,000 photons/s, but only 0.1% for 10,000 photons/s. Note that this is not the average count rate across the whole detector, it is for a single pixel. That is, if you have a pixel with 100,000 counts in a 1s exposure, then pile-up could be introducing up to a 1% error, even though the counting error (sqrt(N)) is only 0.3%. But if you attenuate 10-fold and expose for 10s (100,000 counts at 10,000 counts/pixel/s), the error due to pile-up will be only 0.1%, and then the dominant source of error actually is the photon-counting limit.
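Those error levels in a few lines of Python. The 100 ns dead time is the value assumed above, and rate*tau is only the first-order estimate of the uncorrected pile-up loss, valid when rate*tau << 1:

```python
import math

TAU = 100e-9  # counting-circuit dead time assumed in the text (100 ns)

def pileup_error(rate, tau=TAU):
    """First-order fraction of counts lost to pile-up (rate*tau << 1)."""
    return rate * tau

def counting_error(n_photons):
    """Fractional photon-counting (shot-noise) error: sqrt(N)/N."""
    return 1.0 / math.sqrt(n_photons)

# 100,000 counts in 1 s (fast) vs the same counts at 10,000/s over 10 s (slow):
for rate, seconds in ((1e5, 1), (1e4, 10)):
    n = rate * seconds
    print(f"{rate:7.0f}/s for {seconds:2d} s: "
          f"pile-up ~{100 * pileup_error(rate):.1f}%, "
          f"counting {100 * counting_error(n):.1f}%")
```

Same photons either way; only the slow version gets you down to the photon-counting limit.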

Of course, if those 100,000 photons were all bunched up into a very short period of time within the exposure, you might see far fewer "counts". In the extreme case of a free electron laser, where the x-ray pulse is only 10 fs long, all of the photons arrive at the detector at the "same time", and Pilatus would give you a "1", even if 100,000 photons hit. This is why the XFEL detectors are integrators.

After I learned all this I started to wonder why we like "single photon counting" so much. What was wrong with integrating detectors again? It is often overlooked that fine slicing and "short exposure with high multiplicity" are ALSO a good idea on modern CCDs. Yes, they have read-out noise, but as long as the total read-out noise (summed for all the pixels for a given hkl over all the images in the dataset) is ~3x less than the photon-counting noise (summed for those same pixels) then the read-out noise has basically no influence on the total noise (3^2 + 1^2 ~= 3^2). For weak spots, the Bragg-scattered photons approach zero, so the total noise is dominated by the background counts. For an ADSC Q315r detector with typical settings, the read-out noise is equivalent to having 2 extra photons/pixel of background on each image, so for 5-fold multiplicity you will have 10 "extra photons" worth of noise per pixel, and that means you want at least 30 photons/pixel of background to bury it. This is equivalent to "I: 100" in the upper left corner of the ADXV display (the digital baseline is 40 and the "gain" is 1.8 pixel levels per photon). On a Rayonix "HE" series in "slow" mode, you can actually get less than 1 photon of noise per read-out, and that means you can theoretically do a single-photon-per-image dataset with this detector.
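The "does read-out noise actually matter?" arithmetic above, sketched out with the Q315r numbers quoted in the text. The 2-photon-equivalent read noise per pixel per image is the figure from the text, treated here as a variance-equivalent; this is a back-of-envelope sketch, not a detector spec:

```python
import math

read_noise_photon_equiv = 2.0  # read noise per pixel per image, in
                               # variance-equivalent background photons
multiplicity = 5               # images contributing to each hkl
background_per_image = 30.0    # photons/pixel/image of background

# Variances add over the contributing images:
read_var = read_noise_photon_equiv * multiplicity
photon_var = background_per_image * multiplicity  # Poisson: var = mean

total = math.sqrt(read_var + photon_var)
photon_only = math.sqrt(photon_var)
print(f"noise with read-out: {total:.2f} photon-equivalents, "
      f"without: {photon_only:.2f}")
print(f"inflation from read-out noise: {100 * (total / photon_only - 1):.1f}%")
```

With ~30 photons/pixel of background per image, the read-out noise inflates the total noise by only a few percent, i.e. it is effectively "buried".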

But, as always, don't take my word for it. I'm probably prejudiced against photon-counters. But I do recommend that you take a bit of your synchrotron trip to try out new data collection strategies on YOUR crystals. Particularly the "attenuate and expose longer" (attenu-wait) strategy. If you get the same or better data quality going fast as you do going slow, then great! Keep doing it. At least, until the next "upgrade".

-James Holton
MAD Scientist

On 5/9/2013 12:08 AM, Jose Brandao-Neto wrote:
Hi all,

  Graeme's message covers the core of the detector usage notes here at Diamond.
He might have unearthed a can of worms or two, though ;)

  CCDs still are great detectors! I am not sure about his noise comment as the 
more recent CCDs (post 2000-ish) are designed to operate with anode sources 
(and definitely image plates have a very good signal-to-noise ratio in long 
exposures). The mitigation of the readout deadtime is arguably the main 
experimental design driver when using CCDs.

  One other tiny little thing to bring to the surface is that the CCD images 
presented to the user are not the raw images and my personal opinion is that 
the end users should be at least aware of what a real image looks like 
(zingers, taper features, tiling) and their implications in the noise per 
reflection. I agree that filtering, stitching and smoothing help in judging the 
quality of the diffraction pattern (and crystal) itself.

Regards,
Jose'
-> try other color scales! Black and white is good, but a rainbow palette helps 
identify weak and strong spots because of the discrete jumps in color at 
different intensities.

===
Jose Brandao-Neto MPhil CPhys
Senior Beamline Scientist - I04-1
Diamond Light Source

jose.brandao-n...@diamond.ac.uk
+44 (0)1235 778506
www.diamond.ac.uk
===
