Thanks for the responses!  Can someone look at the code I posted
http://www.math.northwestern.edu/~richter/RichterTarskiMizar.tar 
and tell me how to turn it into nice HOL code?  Or post a different
axiom-proof example?  There's no relevant example in the HOL docs.

Lorenzo, I'd be delighted to see your Tarski geometry code, but if you
could explain how to do it on the group, that might be even better.

Roger, if there's ``some cultural aversion to the use of axioms in the
HOL community,'' then maybe HOL isn't the right proof-checker to use
in high school geometry classes.  Does anyone think there is a better
proof-checker?  My guess is that this is just an interface problem
that HOL could easily solve, and I'm willing to work on it myself.

Most of what I know about HOL comes from Tom Hales's Notices article,
which really fired me up.  Have you read it?
www.math.pitt.edu/~thales/papers/turing.pdf
His focus seems quite different from mine.  I'd like to use
proof-checkers to rigorize human proofs, starting with high school
geometry. I've been appalled for some time at the lack of rigor in my
subject, homotopy theory, and even got a paper published 
http://www.ams.org/journals/proc/2009-137-07/S0002-9939-09-09763-9/S0002-9939-09-09763-9.pdf
fixing a longstanding error, and my first paragraph reads

   We find this to be a serious pedagogical gap and give combinatorial
   proofs here.

I'd like proof-checkers like HOL to improve the rigor level of
theorems that are much simpler than Tom's Kepler conjecture proof.
Tom's focus instead seems to be realizing Turing's AI dreams, and he
looks forward to the days of million page proofs churned out by the
computer, about which we can only understand `fables' which maybe are
churned out by the computer as well.  Then he laments that cosmic rays
may doom the enterprise.  It's an awe-inspiring vision, and he has
lots of interesting examples.  Perhaps it doesn't jibe with my focus,
although using proof-checkers to debug software is a great idea.

John Harrison says that one route for debugging software is through
Denotational Semantics, which I know something about.  I failed on
comp.lang.scheme to explain that there's a compositional semantic
function for Scheme that uses only induction, and not Scott
models of the lambda calculus (basically non-Hausdorff Cantor sets).
Scott models are really cool, and give a new version of Church's
thesis: computable = continuous!  But you don't need Scott models for
a DS for Scheme.  It's almost obvious.  The standard reduction
function for the lambda calculus is perfectly well-defined (reduce the
leftmost lambda), and it only requires induction.  Keep performing
reduction steps until you're done, and if you're never done, send it
to bottom.  You can't write a program that will do that (by Turing's
solution to the Halting problem), but you can define this function
mathematically using only induction.  I even wrote a paper giving a
shorter proof of Barendregt's Normalization Theorem.
http://www.math.northwestern.edu/~richter/lambda-calc.pdf 
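The standard-reduction function above can be sketched in code.  This is a
minimal illustration in Python (not HOL, and not my paper's formalism),
with a naive term representation I made up for the example; the `fuel`
bound stands in for "send it to bottom", since a program can only
approximate the mathematically total function, and `subst` here skips
capture-avoidance for brevity:

```python
from dataclasses import dataclass

# Naive lambda-calculus terms: variables, abstractions, applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def subst(t, x, s):
    # Substitute s for x in t.  NOTE: not capture-avoiding; fine only
    # when s is closed or bound names are distinct (true in the demos).
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(t.body, x, s))
    return App(subst(t.fun, x, s), subst(t.arg, x, s))

def step(t):
    # One leftmost-outermost ("standard") reduction step, or None if
    # t is already in normal form.
    if isinstance(t, App):
        if isinstance(t.fun, Lam):          # the leftmost redex
            return subst(t.fun.body, t.fun.param, t.arg)
        f = step(t.fun)
        if f is not None:
            return App(f, t.arg)
        a = step(t.arg)
        if a is not None:
            return App(t.fun, a)
    if isinstance(t, Lam):
        b = step(t.body)
        if b is not None:
            return Lam(t.param, b)
    return None

def normalize(t, fuel=1000):
    # Keep performing standard steps until done; if the fuel runs out,
    # return None, our stand-in for bottom.
    for _ in range(fuel):
        n = step(t)
        if n is None:
            return t
        t = n
    return None
```

For instance, `(λx.x)(λy.y)` normalizes to `λy.y` in one step, while
`Ω = (λx.xx)(λx.xx)` loops forever and gets sent to bottom, which no
program can decide in general but which induction defines just fine.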
For Scheme we should use instead Felleisen's lambda value calculus,
where similar remarks apply.  We don't get a compositional semantic
function this way, of course, but that's easy to fix, using the usual
environment (& store if there's mutation), because Scheme is
compositional: you evaluate (x y) by first evaluating x to a function,
then evaluating y, and sending this value to the function.
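The compositional clause for application can be made concrete.  Here is
a toy sketch in Python (a hypothetical mini-evaluator, not real Scheme
and without mutation, so no store is needed): the meaning of `(x y)` is
built from the meanings of `x` and `y` via the environment, exactly as
described:

```python
def evaluate(expr, env):
    # Variables look up their value in the environment.
    if isinstance(expr, str):
        return env[expr]
    # Constants denote themselves.
    if isinstance(expr, int):
        return expr
    # ("lambda", param, body) denotes a Python function closing over env.
    if expr[0] == "lambda":
        _, param, body = expr
        return lambda v: evaluate(body, {**env, param: v})
    # Application (x y): evaluate x to a function, then evaluate y,
    # then send that value to the function -- compositionally.
    op, arg = expr
    f = evaluate(op, env)
    v = evaluate(arg, env)
    return f(v)
```

So `((lambda (x) x) 42)`, written here as
`(("lambda", "x", "x"), 42)`, evaluates to 42.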

-- 
Best,
Bill 

_______________________________________________
hol-info mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/hol-info
