Thanks, Clemens, yet more software/literature to study (and more to come)!

Jorge


On 10/31/23 12:41, Clemens Vonrhein wrote:
Dear Jorge,

as Harry mentioned, autoPROC [1] will automatically exclude image
ranges that are significantly worse than the rest (i.e. add mostly
noise). This could be due to radiation damage, badly centred crystals
or anything else. The so-called "fitness parameter" it uses takes
multiplicity and completeness into account when judging if a set of
images should be excluded [2].
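Just to give a flavour of the kind of weighing involved (this is NOT the
actual autoPROC criterion, whose definition is in the paper under review [2],
and a real criterion also has to fold in how noisy the candidate image range
actually is), a toy score along these lines could reward completeness and,
with diminishing returns, multiplicity, and keep a range only if including it
improves the score. A minimal Python sketch, with entirely made-up weighting:

    # Toy sketch only: a made-up fitness-style score rewarding completeness
    # and (with diminishing returns) multiplicity. NOT autoPROC's parameter.
    import math

    def toy_fitness(completeness, multiplicity):
        # completeness in [0, 1], multiplicity >= 0
        return completeness * (1.0 - math.exp(-multiplicity / 3.0))

    def keep_image_range(stats_with_range, stats_without_range, margin=0.0):
        # keep the candidate image range only if including it improves the score
        return toy_fitness(*stats_with_range) > toy_fitness(*stats_without_range) + margin

    # e.g. dropping a damaged tail halves the multiplicity but barely changes
    # completeness; the score then decides which option wins on balance
    print(keep_image_range((0.99, 12.0), (0.98, 6.0)))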

Those automatic decisions seem to work very well in our and our users'
hands ... as far as we can tell [3].

Cheers

Clemens

[1] https://www.globalphasing.com/autoproc/
[2] a paper describing this in detail is under review atm
[3] like most software developers we tend to get feedback if
     something doesn't work ;-)

On Tue, Oct 31, 2023 at 10:48:54AM -0400, Jorge Iulek wrote:
Hi,

        Well, it seems there are already many good suggestions to study and to work on.
        Thanks, Oliver, Kay, Harry and Graeme!
        Now, brain and hands on!

Jorge

-------- Forwarded Message --------
...

Dear all,

        I have found many fundamental studies on image processing and on refinement indicators concerning the decision of where to cut the resolution of a dataset, always with the aim of obtaining better models, the final objective. Paired refinement is the procedure most often indicated; a sketch of its decision rule follows below.
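        As I understand it, the paired-refinement decision rule can be
sketched as follows (the function and the R-free numbers are invented
placeholders, not output from any real program; in practice each pair of
refinements is run in one's refinement program of choice and R-free is
compared over the common lower-resolution reflections):

    # Sketch of the paired-refinement idea (Karplus & Diederichs): extend the
    # resolution shell by shell and keep a shell only if R-free, evaluated on
    # the common lower-resolution reflections, does not get worse.
    def choose_cutoff(baseline_dmin, steps, tolerance=0.0):
        # steps: list of (d_min_of_added_shell, delta_rfree) from low to high
        # resolution, where delta_rfree = Rfree(with shell) - Rfree(without)
        cutoff = baseline_dmin
        for d_min, delta_rfree in steps:
            if delta_rfree <= tolerance:
                cutoff = d_min          # the extra shell helps (or is neutral)
            else:
                break                   # the extra shell hurts: stop extending
        return cutoff

    # invented example: candidate shells at 2.2, 2.0 and 1.9 A
    print(choose_cutoff(2.4, [(2.2, -0.004), (2.0, -0.001), (1.9, 0.006)]))  # -> 2.0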
        I have been searching for similar studies concerning, in these days of thousands of collected images and strong X-ray beams, the cutting (or truncation) of the sequentially recorded images (rotation method) of a dataset because of radiation damage. Once again, I understand the idea is always to produce better models.
        On the one hand, the more images one uses, the higher the multiplicity, and higher multiplicity leads to better-averaged intensities (provided scaling does a good job); on the other hand, the more images one uses, the more equivalent reflections of lower intensity (due to radiation damage) come into play for scaling, etc. How to balance this? I have seen a case in which truncating images with some radiation damage led to worse CC(1/2) and <I/sigI> (in the same high-resolution shell, with multiplicity around 12.3 before truncation and 5.7 after), but this might not be the general finding. In a word, are there indicators of the point at which to truncate the images such that the dataset will lead to a better model? I understand that drawing a sharp borderline might not be trivial, but even a blurred one would help, especially at the image-processing stage.
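        Just to make the balance I mean concrete, here is a small toy model
(entirely invented numbers, a simple exponential decay, a constant measurement
error per observation, and plain inverse-variance merging in place of real
scaling): one unique reflection is observed once per image, so merging more
images raises the multiplicity but drags in weaker, damaged observations, and
the merged I/sigma(I) goes through a maximum somewhere before the last image.

    # Toy model of the multiplicity-vs-damage balance: one unique reflection,
    # observed once per image, true intensity decaying with dose, constant
    # measurement error per observation. All numbers are invented.
    import math

    I_TRUE, SIGMA_OBS, DECAY, N_IMAGES = 100.0, 20.0, 0.02, 100

    # one (intensity, sigma) observation per image
    obs = [(I_TRUE * math.exp(-DECAY * i), SIGMA_OBS) for i in range(N_IMAGES)]

    def merged_i_over_sigma(last_image):
        # inverse-variance merge of all observations up to 'last_image'
        included = obs[:last_image]
        weights = [1.0 / s ** 2 for _, s in included]
        i_merged = sum(w * i for (i, _), w in zip(included, weights)) / sum(weights)
        sigma_merged = 1.0 / math.sqrt(sum(weights))
        return i_merged / sigma_merged

    for last in (25, 50, 63, 75, 100):
        print(last, round(merged_i_over_sigma(last), 1))
    # more images -> smaller merged sigma, but the damaged observations drag
    # the merged intensity down, so I/sigma rises, peaks (near image ~63 in
    # this toy) and then falls again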
        I find that at https://ccp4i2.gitlab.io/rstdocs/tasks/aimless_pipe/scaling_and_merging.html#estimation-of-resolution there is a suggestion to try refinement both with and without truncation.
        Surely other factors come into play here, like diffraction anisotropy, internal crystal symmetry, etc., but to start one might consider just the radiation damage due to X-ray exposure. Further on, it would be nice if the discussion evolved to those cases in which we see peaks and valleys along the rotation due to crystal anisotropy, with their average height steadily diminishing.
        Comments and pointers to papers and other material to study are welcome. Thanks.
        Yours,

Jorge
