Joe, one of the key points in my paper with Judea is that causality is
very much model-dependent.  If you construct a "bad" structural model,
you get "bad" answers about causality.  You may be right that, for some
applications, constructing an appropriate "good" causal model may turn
out to be hard.  A particularly important issue is the choice of random
variables.  If you leave out the random variable for kicking the board,
you may have a bad causal model.  Unfortunately, we have rather little
to say about how you validate a model (in particular, how you can
validate that you've chosen the "right" variables).  
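
To make the point concrete, here is a minimal sketch (Python, with
invented structural equations and variable names, not a model from the
paper) of how omitting the "kick the board" variable changes the causal
answer a structural model gives:

    def outcome(move_quality, board_kicked):
        # Illustrative structural equation: the game is won iff the move
        # is good and the opponent does not kick the board over.
        return move_quality and not board_kicked

    def model_without_kick(move_quality):
        # Misspecified model: board_kicked was never chosen as a variable,
        # so it is implicitly fixed at False in every context.
        return outcome(move_quality, board_kicked=False)

    def model_with_kick(move_quality, losses):
        # Richer model: board_kicked is an explicit variable whose value
        # depends on how badly the opponent is losing.
        return outcome(move_quality, board_kicked=(losses > 10))

    # Under the impoverished model, a good move always "causes" a win;
    # under the richer model, the same intervention can fail.
    print(model_without_kick(move_quality=True))           # True
    print(model_with_kick(move_quality=True, losses=20))   # False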

Interdisciplinary work may well help us here.  There is work going on in
psychology (by people like Steve Sloman and Dave Lagnado) testing
the extent to which our models predict the answers given by people.  To
be honest, the results are mixed.  It seems very hard to describe what
people do just by our models (although they do seem to give some
insight).  I'm sure much more work could be done along these lines.

-- Joe

From: "Mitola III, Joseph" <[EMAIL PROTECTED]>
To: "Joseph Halpern" <[EMAIL PROTECTED]>, <uai@engr.orst.edu>
Cc: <[EMAIL PROTECTED]>
Subject: Structural models are necessary but may be impossible to validate
Date: Mon, 17 Jul 2006 09:04:50 -0400

Professor Halpern,
 I'm concerned that the structural model building you appropriately
suggest is something like weather forecasting: it is easy to get the
first- and second-order equations right, but the results in any given
case depend strongly on initial conditions, which in turn have
"critical" features on such a fine scale that we had to wait for
computing to improve a thousandfold before we could predict the weather
as well as we do today, e.g., hurricane tracks, good but still not
perfect.  This note suggests interdisciplinary research to address
Zadeh's examples with your methods in a way that can be validated.
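
As an illustration of that sensitivity to initial conditions (a toy
Python sketch using the logistic map, not any real weather model), two
starting points that agree to six decimal places diverge completely
after a few dozen iterations:

    def logistic(x, r=3.9):
        # One step of the logistic map, a standard toy example of
        # sensitive dependence on initial conditions.
        return r * x * (1.0 - x)

    a, b = 0.500000, 0.500001   # differ only in the sixth decimal place
    for _ in range(50):
        a, b = logistic(a), logistic(b)
    print(a, b)   # after 50 steps the trajectories bear no resemblance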
  Social systems (and Professor Zadeh's examples all entail social
systems), unlike weather, have many parameters that seem to be
unmeasurable no matter how fine the grid, depending in inscrutable ways
on internal psychological processes of the decision makers (e.g. the
buying public, the stock market, and a person under extreme stress in
Zadeh's clever progression).  Although one can retrospectively model
such phenomena from the perspective in Pearl's superb mathematical
treatment, the questions of observability of the underlying phenomena
seem to limit one's ability to validate models so that they could be
used not just to describe but also to predict (engineering predictions,
at least about stability versus likely instability, sufficiently for
decision-making insights, not fortune-telling of social trajectories,
Asimov's Foundation notwithstanding).
  The first thing I learned about random processes at Johns Hopkins was
that a probability space must be built on a measurable space with at
least a sigma-algebra defined on it.  The human psyche may be described
as in some ways a random space, but so far, I haven't seen a problem
of measure, nor an aggregation of measures (the sigma-algebra), defined
for
the psyche. In decision support, one attributes value to game positions
but doesn't include the probability that the other player will kick the
board over if the losses are too big.  This isn't funny or irrelevant
but really may be a missing aspect of mathematical causality: the
regular lack of measurability in Professor Zadeh's examples, thus the
lack of a validated sigma-algebra (arithmetic on 1/k is easy;
validating the model against real-world human aspects such as, e.g.,
voter fraud is harder, so what is the sociological or psychological
basis for "critical" and for 1/k?).
  So meaningful mathematical statistics and random-process models work
well in automatic control, where we can isolate the aircraft from the
rest of the world except for the air flowing over it, define a
measurable space of air pressures, pistons, valves, and electronics,
and thus define a fixed point corresponding to level flight and write
the algorithms that sustain that fixed point in a validated way.
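
For contrast, a minimal sketch of what sustaining such a fixed point
looks like when the system is measurable (a toy proportional controller
in Python with made-up constants, not a real flight-control law):

    def step(deviation, gain=0.5, disturbance=0.0):
        # Toy linear plant: pitch deviation from level flight, driven back
        # toward the fixed point at 0 by a proportional controller.
        control = -gain * deviation       # measured state -> control input
        return deviation + control + disturbance

    x = 5.0                               # initial deviation (degrees)
    for _ in range(20):
        x = step(x)
    print(x)   # converges toward 0 because every quantity here is measurable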
 Not so in the social sciences, like business and law: we don't seem
to have the basic measurability, the basic algebra of measures, that we
can validate and agree on, and thus the practical use of the insights
in Pearl's book seems hard to come by, maybe impossible in the near and
mid term.
  But I wonder if this topic might not be a good realm for
interdisciplinary research that tries to quantify the significant
aspects of at least the sociology (not the psychology) of causality in
business and law so that we can be on more solid ground than anecdotal
evidence when trying to build models for decision support, maybe
drawing sociology and computer science together more (again: this
seems to be needed in waves on twenty-year centers; I met people who
were there in the '50s and '60s, then lived through the AI hype of the
'80s, and now it seems necessary again somehow).
  joe

Dr. Joseph Mitola III
Consulting Scientist
The MITRE Corporation
Tampa, FL  703-314-5709
(My views are my own and not necessarily those of The MITRE Corporation
or any of its sponsors).
