Hi Eleanor

I reran my simulation including your method.  Let's just look at the two
extreme cases in my original table: sd(F+) = 10, sd(F-) = 1 (5th row) and
sd(F+) = 1000, sd(F-) = 100 (last row: the case where my method performs
worst, though still better than the others!).  I used the same 'true' values as
before: F+ = 578, F- = 1369, Fmean = 973.

For the first case, the values of su(Fmean) for the methods 'unweighted',
'variance-weighted', 'RMS-error-weighted' (my method) and 'sd-weighted'
(your method) are 5, 1, 5 and 1 respectively, so as expected the two
weighted methods give the lowest variances.

Similarly the mean biases are 0, 388, 0 and 323, so again as expected the
weighted methods are the most biased.
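
(To see where those biases come from: if the weights are taken as fixed,
the expectation of the weighted mean is

   E[Fmean] = (w+ * F+ + w- * F-) / (w+ + w-)

so with weights computed from the true sds, variance weighting (w = 1/sd^2)
gives (5.78 + 1369)/1.01 = 1361, a bias of about 388, and sd-weighting
(w = 1/sd) gives (57.8 + 1369)/1.1 = 1297, a bias of about 324, in good
agreement with the simulated values.)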

The RMS errors are 5, 388, 5 and 323, so in this case the
'variance-weighted' and 'sd-weighted' methods both perform poorly, with the
highest overall errors, though your method is somewhat better than the
'variance-weighted' one.

For each statistic I'm including the first three values for comparison,
but they are of course exactly the same as before: only the last value for
the 'sd-weighted' method is new.

For the second case with sd(F+) = 1000, sd(F-) = 100 and the same values
for the true Fs, the results from the 4 methods in the same order as before
are:

su(Fmean):     386    99   171   115
bias(Fmean):    87   389   274   339
RMSE(Fmean):   396   402   323   358
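
(As a consistency check, these fit the usual decomposition
RMSE^2 = su^2 + bias^2: e.g. sqrt(386^2 + 87^2) = 396 and
sqrt(171^2 + 274^2) = 323.)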

So again the results for su(Fmean) and bias(Fmean) are as expected (i.e.
the 'sd-weighted' method falls between the 'unweighted' and
'variance-weighted' ones).  However, on the RMSE measure the
'RMS-error-weighted' method performs best.  Taking the RMSE as the
definitive metric (which IMO is the only sensible procedure), in no case
does the 'RMS-error-weighted' method perform worse than any of the others.
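
In case anyone wants to experiment, here's a minimal Python sketch of this
kind of comparison.  It's only a sketch: it assumes normally distributed
errors and computes the weights from the true sds rather than from
per-observation estimates, and it leaves out the 'RMS-error-weighted'
scheme since its weights aren't spelled out in this thread.  The
sd-weighted estimator is your formula quoted below, which is algebraically
just a 1/sd-weighted mean.

import numpy as np

rng = np.random.default_rng(0)

# 'True' values as above.
F_PLUS, F_MINUS = 578.0, 1369.0
F_MEAN = 0.5 * (F_PLUS + F_MINUS)   # ~973

def unweighted(ip, im, sp, sm):
    return 0.5 * (ip + im)

def variance_weighted(ip, im, sp, sm):
    wp, wm = 1.0 / sp**2, 1.0 / sm**2
    return (wp * ip + wm * im) / (wp + wm)

def sd_weighted(ip, im, sp, sm):
    # Eleanor's formula: (I+*s- + I-*s+)/(s+ + s-), i.e. weights 1/sd.
    return (ip * sm + im * sp) / (sp + sm)

def compare(sp, sm, n=100000):
    ip = rng.normal(F_PLUS, sp, n)
    im = rng.normal(F_MINUS, sm, n)
    for name, est in [('unweighted', unweighted),
                      ('variance-weighted', variance_weighted),
                      ('sd-weighted', sd_weighted)]:
        x = est(ip, im, sp, sm)
        su, bias = x.std(), x.mean() - F_MEAN
        rmse = np.sqrt(su**2 + bias**2)
        print('%-18s su=%7.1f bias=%7.1f rmse=%7.1f' % (name, su, bias, rmse))

compare(10.0, 1.0)       # first case:  sd(F+) = 10,   sd(F-) = 1
compare(1000.0, 100.0)   # second case: sd(F+) = 1000, sd(F-) = 100

With fixed true-sd weights the biases come out at about 388 and 324 for
the first case, as above; with per-observation sd estimates the numbers
will wander a bit, which may account for the small differences in the
second case.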

Cheers

-- Ian


On 13 March 2017 at 08:22, Ian Tickle <ianj...@gmail.com> wrote:

>
> Eleanor
>
> I notice that you are calculating a weighted mean using 1/sd(I) as the
> weight whereas in least squares one would of course normally use
> variance-weighting, i.e. 1/sd(I)^2.  I assume this is intentional and is a
> way to reduce the bias effect of weighting, so that the resulting bias of
> the mean would be intermediate between that of an unweighted and a
> variance-weighted mean, though the variance of the mean would no longer be
> a minimum.
>
> Assuming that it was indeed intentional, I will see if I can find the
> program I used to simulate the various ways of calculating the mean using
> the MSE as the measure of accuracy and see how this method compares with
> the others.
>
> Cheers
>
> -- Ian
>
>
>
> On 12 March 2017 at 17:58, Eleanor Dodson <eleanor.dod...@york.ac.uk>
> wrote:
>
>>      You read:
>> h k l  IPLUS   SIGIPLUS     INEG   SIGINEG
>> Then program calculates this:
>>
>>        SIGIMEAN = SIGIPLUS*SIGINEG/(SIGIPLUS+SIGINEG)
>>
>>          IMEAN = (IPLUS/SIGIPLUS + INEG/SIGINEG)*SIGIMEAN
>>
>> ie: IMEAN = (IPLUS*SIGINEG + INEG*SIGIPLUS) / (SIGIPLUS+SIGINEG)
>>
>> Is that the right thing to do? Not sure!
>>
>> Eleanor Dodson
>>
>>
>>
>> On 11 March 2017 at 15:18, Karthikeyan Subramanian <skarthi...@gmail.com>
>> wrote:
>>
>>> Dear CCP4bb,
>>>
>>> How are IMEAN and SIGIMEAN calculated in scalepack2mtz if the input
>>> contains anomalous intensities (obtained from HKL2000)?  Any
>>> guide/reference is highly appreciated.
>>>
>>> Thanks in advance,
>>>
>>> With regards
>>>
>>> Karthikeyan S.
>>>
>>> Principal Scientist
>>>
>>> IMTECH, Chandigarh
>>>
>>
>>
>
