Kay,

> The Wikipedia's "error propagation" article in its "Caveats
> and Warnings" paragraph calls this a Cauchy distribution.
And that sounds strange to me. The p.d.f. of the Cauchy distribution is
non-zero for any finite argument, whereas the p.d.f. for the "inverse
Gaussian" must tend to zero as t->0, and is in fact

  p(t) = exp(-(1 - x0*t)**2 / (2*s**2*t**2)) / (t**2 * s * sqrt(2*pi)),

where t = 1/x, and x0 and s are from the original N(x0, s). This does
NOT look like a Cauchy distribution, and I don't even see a limiting
case in which the two would match.

What they have in common is that in both cases the variance is
undefined. This is because as t->inf the exponential term stays
~constant and the p.d.f. ~ 1/t**2 (same as the Cauchy). The mean can
still be defined as finite by combining the negative and positive
domains in a careful manner, but the truth is that both the left- and
right-side integrals are infinite. As for the variance,
p.d.f.*t**2 -> const as t->+-inf, which leads to essentially infinite
variance, in accord with your numerical observations.

Perhaps the lesson is to be aware that Wikipedia may contain errors.
After all, I did win the Tour de France and the Stanley Cup in 1997 -
just give me 10 minutes and then check Wikipedia :)

> This is clearly an example where the first-order approximation breaks
> down, and common sense tells me that this happens because we may
> divide by numbers close to zero.

But it still works with real experimental data (given that the errors
are small). If the inverse value makes physical sense, then the direct
value cannot strictly obey a normal distribution, since x = 0 is
prohibited. The approximation still works, though, if the relative
error is significantly less than 100%. That would break down for weak
reflections, but then the inverse intensity is not really a meaningful
quantity anyway.

> And it shows that it might be useful to think about error propagation,
> and not blindly apply the formulas.

Exactly right.

Cheers,
Ed.

--
"Hurry up before we all come back to our senses!"
                                    Julian, King of Lemurs
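
P.S. In case it is useful, here is a minimal Monte Carlo sketch of the
two points above (my own illustration in Python/NumPy, not from the
original discussion; the values of x0 and s and the function name are
made up for the example): with a small relative error the first-order
prediction s/x0**2 for the standard deviation of 1/x works fine, while
with a large relative error the sample standard deviation of 1/x never
settles down, because the true variance is infinite.

import numpy as np

rng = np.random.default_rng(0)

def inverse_sample_stats(x0, s, n):
    """Draw x ~ N(x0, s), form t = 1/x, return sample mean and std of t."""
    x = rng.normal(x0, s, size=n)
    t = 1.0 / x
    return t.mean(), t.std()

# Small relative error (s/x0 = 1%): the first-order prediction s/x0**2
# for the std of 1/x agrees well with the Monte Carlo result.
x0, s = 10.0, 0.1
for n in (10_000, 1_000_000):
    mean_t, std_t = inverse_sample_stats(x0, s, n)
    print(f"n={n:>9}: mean={mean_t:.6f}  std={std_t:.6f}  "
          f"first-order s/x0**2 = {s / x0**2:.6f}")

# Large relative error (s/x0 = 50%): x occasionally lands near zero, the
# 1/t**2 tails dominate, and the sample std grows erratically with n
# instead of converging - the variance of 1/x is infinite.
x0, s = 1.0, 0.5
for n in (10_000, 1_000_000, 10_000_000):
    mean_t, std_t = inverse_sample_stats(x0, s, n)
    print(f"n={n:>9}: mean={mean_t:.3f}  std={std_t:.3f}")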