Monday, March 6, 2006, 1:04:33 AM, Wirt Atmar wrote:
WA> ...Gareth's comments do allow me however an opportunity to expand
WA> a little bit on my previous posting. I personally hold David
WA> Anderson and Ken Burnham in very high regard, but I worry that the
WA> AIC is being oversold to the ecological community -- for two
WA> different reasons. ...

I really enjoyed reading Wirt Atmar's insightful opinion about the
role AIC is supposed to play in (ecological) modeling, and I entirely
agree with him about the arbitrariness of its formulation. Of course,
this doesn't mean that AIC is useless. In fact, it is exactly as good
as Akaike's personal judgment, and it can therefore serve as one of
the (many) possible criteria for selecting models.
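For concreteness, AIC trades goodness of fit against the number of
parameters as AIC = 2k - 2 ln(L). A minimal sketch of how it is used
to rank candidate models follows; the two models and their
log-likelihoods are purely hypothetical numbers, chosen only to show
the arithmetic:

```python
def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical comparison of two models fit to the same data set.
# Model A: 3 parameters, log-likelihood -120.5
# Model B: 5 parameters, log-likelihood -118.0
aic_a = aic(-120.5, 3)   # -> 247.0
aic_b = aic(-118.0, 5)   # -> 246.0

# The lower AIC (model B here) is "preferred" -- but only according
# to this particular penalty for extra parameters, which is where
# Akaike's own judgment is built into the formula.
```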

However, Wirt Atmar's post leads to a more general consideration about
indices and the way they are used. For instance, let's think about
biotic indices and the practical consequences of their uncritical
application in terms of environmental management. As ecologists, we
should be used to dealing with complexity, but many of us just can't
refrain from turning that complexity into a single value, especially
if a pre-compiled scale is available for interpreting that value as
excellent, good, average, etc.

As in the AIC case, of course, some biotic indices are probably very
cleverly designed, but they are still inherently subjective. So, are we doing good
science when we base our conclusions on them? I don't think so, but
I'm afraid that some ecologists don't even ask themselves this
question.

WA> ... Nevertheless, let me also say at this point that this
WA> scattershot method has also received a measure of high acceptance
WA> in the scientific community of late. The most exquisite example of
WA> the simultaneous engineering utility and scientific meaningless of
WA> the procedure exists in the training of neural networks. ...

As for the "simultaneous engineering utility and scientific
meaningless" Wirt Atmar mentioned, I agree with him: neural networks
can be regarded as a dumb (although practically useful) tool. However,
if properly trained, they're able to capture relevant relationships in
very complex, non-linear systems. Of course, this is possible because
some "knowledge" gets implicitly embedded into a neural network during
its training. So, our problem is to extract (i.e. to understand) at
least some of that knowledge.

Basically, a properly trained neural network can be regarded as a
simplified (but still very complex) model of a real system. However,
we can "play" with it more easily than with the real thing. For
instance, we can do sensitivity analyses and try to figure out which
stimuli (i.e. independent variables, using a regression-based analogy)
are relevant with respect to each response (i.e. dependent variable).
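Such a perturbation-based sensitivity analysis can be sketched in a
few lines of numpy. Everything below is a toy illustration: the data,
the one-hidden-layer network, and the training setup are all invented
for the example. The response depends only on the first stimulus, so a
trained network should react far more to perturbations of that input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the response depends on x1 only; x2 is irrelevant by construction.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(2 * X[:, 0:1])

# Minimal one-hidden-layer tanh network, trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.1
for _ in range(5000):
    pred, h = forward(X)
    err = pred - y                                  # gradient of MSE/2
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)                  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Sensitivity analysis: nudge each stimulus in turn and measure the
# mean absolute change in the network's response.
base, _ = forward(X)
sensitivity = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] += 0.1
    pert, _ = forward(Xp)
    sensitivity.append(np.mean(np.abs(pert - base)))

# sensitivity[0] (the relevant stimulus) should dwarf sensitivity[1].
print(sensitivity)
```

The same idea scales to real trained networks: the relative magnitudes
of the responses to small input perturbations suggest which variables
deserve further investigation in the real system.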

In other words, we can do experiments with the neural network model
and make inferences about the properties of the real system, and then
plan further research on the real system on the basis of those
inferences. And this can definitely be meaningful from a scientific
point of view.

Cheers,

Michele

--------------------------------
Michele Scardi
Associate Professor of Ecology

Department of Biology
University of Rome "Tor Vergata"
Via della Ricerca Scientifica
00133 Roma
Italy

http://www.mare-net.com/mscardi
--------------------------------
