You can do this with Scala or Aimless. Scale everything first, write out the
scaled unmerged file, then read it back in with "onlymerge" and a batch
selection (Aimless also gives a cumulative completeness).
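For instance, something along these lines should work (an untested sketch;
the batch range and file names are just placeholders):

  aimless HKLIN scaled_unmerged.mtz HKLOUT batches_1_to_100.mtz <<eof
  onlymerge
  run 1 batch 1 to 100
  eof

The completeness and multiplicity for the selected batches should then show
up in the usual summary table in the log.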
Phil

On 12 Jan 2012, at 14:00, Ingo P. Korndoerfer wrote:

> hello,
> 
> my poor dementia-ridden brain has gone on screensaver ...
> 
> i need to calculate the completeness and redundancy of reflections in
> batches or ranges of batches in a multi-record .mtz file.
> 
> sftools can do this, but the numbers are pretty much meaningless, i.e. my
> feeling is that if i measure 10% of the reflections 10 times, it will give
> me 100% completeness.
> 
> the second option is to select the batches i want, purge the rest, force
> sftools to "merge average" and then ask for the completeness. this works,
> but it requires re-reading the complete dataset for every batch or segment
> of batches i am interested in, which is too slow.
> 
> any ideas greatly appreciated :-)
> 
> 1000 thanks already
> 
> ingo
