> 2. I've tried this on demo0.mm and it turned out the proof in there
> remained valid under transformation
>
> Could you please show an example with a small proof, so that I can
> understand what you mean by under transformation? (do you mean that the
> original proof is given back if you apply your transformation twice?)

Well, let's look at the content of demo0.mm. It is essentially this:

$( demo0.mm $)
  $c 0 + = -> ( ) term wff |- $.
  $v t r s P Q $.    $( metavariables and constant symbols $)

  tt $f term t $.
  tr $f term r $.
  ts $f term s $.
  wp $f wff P $.
  wq $f wff Q $.
  tze $a term 0 $.
  tpl $a term ( t + r ) $.
  weq $a wff t = r $.
  wim $a wff ( P -> Q ) $.

  a1 $a |- ( t = r -> ( t = s -> r = s ) ) $.
  a2 $a |- ( t + 0 ) = t $.
  ${
    min $e |- P $.
    maj $e |- ( P -> Q ) $.
    mp $a |- Q $.
  $}

  th1 $p |- t = t $=
    tt tze tpl tt weq tt tt weq tt a2 tt tze tpl tt weq tt tze tpl
    tt weq tt tt weq wim tt a2 tt tze tpl tt tt a1 mp mp $.

Let's rewrite every statement into prefix (Polish) notation. The proof of
th1 remains exactly the same.

$( demo1.mm; the unchanged $c, $v and $f declarations from demo0.mm are omitted $)
  tpl $a term + t r $.
  weq $a wff = t r $.
  wim $a wff -> P Q $.

  a1 $a |- -> = t r -> = t s = r s $.
  a2 $a |- = + t 0 t $.

  ${
    min $e |- P $.
    maj $e |- -> P Q $.
    mp $a |- Q $.
  $}

  th1 $p |- = t t $=
    tt tze tpl tt weq tt tt weq tt a2 tt tze tpl tt weq tt tze tpl
    tt weq tt tt weq wim tt a2 tt tze tpl tt tt a1 mp mp $.

This demonstrates that there would be no need to update, or even decompress,
existing proofs unless we are tuning a model on them.
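To see concretely why the proof survives the rewrite, here is a small stack-machine sketch (hypothetical Python, not any existing Metamath tool) that replays th1's label sequence against both the infix templates of demo0.mm and the prefix templates of demo1.mm. Only the statement templates differ; the proof token string is shared:

```python
# Hypothetical sketch: replay th1's proof as the RPN stack machine a
# Metamath verifier uses.  Each label pops its mandatory hypotheses and
# pushes its substituted conclusion.  The label sequence (the proof) is
# identical for both databases; only the statement templates differ.

PROOF = ("tt tze tpl tt weq tt tt weq tt a2 tt tze tpl tt weq tt tze tpl "
         "tt weq tt tt weq wim tt a2 tt tze tpl tt tt a1 mp mp").split()

# label -> (number of stack entries popped, conclusion template);
# {0}, {1}, ... are the popped expressions in push order, typecodes stripped.
INFIX = {
    "tt":  (0, "term t"),
    "tze": (0, "term 0"),
    "tpl": (2, "term ( {0} + {1} )"),
    "weq": (2, "wff {0} = {1}"),
    "wim": (2, "wff ( {0} -> {1} )"),
    "a2":  (1, "|- ( {0} + 0 ) = {0}"),
    "a1":  (3, "|- ( {0} = {1} -> ( {0} = {2} -> {1} = {2} ) )"),
    "mp":  (4, "|- {1}"),  # pops wff P, wff Q, |- P, |- ( P -> Q )
}
PREFIX = {
    "tt":  (0, "term t"),
    "tze": (0, "term 0"),
    "tpl": (2, "term + {0} {1}"),
    "weq": (2, "wff = {0} {1}"),
    "wim": (2, "wff -> {0} {1}"),
    "a2":  (1, "|- = + {0} 0 {0}"),
    "a1":  (3, "|- -> = {0} {1} -> = {0} {2} = {1} {2}"),
    "mp":  (4, "|- {1}"),
}

def replay(rules):
    stack = []
    for label in PROOF:
        arity, template = rules[label]
        popped = [stack.pop() for _ in range(arity)]
        # restore push order and drop the typecode ("term", "wff", "|-")
        args = [p.split(" ", 1)[1] for p in reversed(popped)]
        stack.append(template.format(*args))
    assert len(stack) == 1, "a valid proof leaves exactly one statement"
    return stack[0]

print(replay(INFIX))   # |- t = t
print(replay(PREFIX))  # |- = t t
```

Note that this sketch hard-codes substitution via string templates and does not check mp's essential hypotheses against the stack; a real verifier does full unification. But it shows the point: the same label string proves the theorem in both notations.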

---
Ender
Friday, March 28, 2025 at 13:29:01 UTC+3, Glauco: 

> Hi Ender,
>
> 1.  what script could expand a proof fast on-demand to its steps with full 
> statements written down
>
> as an example, Yamma does it with this method: 
> MmToMmpConverter.buildProof() 
> <https://github.com/glacode/yamma/blob/036935231f4863c39efe427bae4049bfa6fb56a6/server/src/mmp/MmToMmpConverter.ts#L267>
>
> 2. I've tried this on demo0.mm and it turned out the proof in there 
> remained valid under transformation
>
> Could you please show an example with a small proof, so that I can 
> understand what you mean by under transformation? (do you mean that the 
> original proof is given back if you apply your transformation twice?)
>
> 3. about the other questions, if you could publish some code on GitHub 
> (with stub calls for the parts you've not figured out, yet), we could look 
> at it and come up with possible solutions
>
> 4. here (OpenAI 
> <https://groups.google.com/g/metamath/c/D09W2QVR-_I/m/g_rsqGj0AAAJ>), you 
> can find an old thread where Stanislas Polu, from OpenAI, built a model 
> with inference capable of proving some small theorem (there's also an 
> OpenAI mathbox). Given that their online proof assistant was a "label 
> suggester" I'm inclined to believe it worked backward (see point 5. below). 
> Unfortunately, I've never found the source code for Stanislas' models.
>
> 5. you probably already know, but a non-negligible number of metamath 
> users tend to write proofs backwards. If I had to use an LM for proof 
> automation, I would probably try the backward approach first (I don't know 
> if this is relevant with respect to your questions about RPN vs FPN).
> Of course, it would be fascinating to see an LM that can complete the wff 
> for the current statement (say, copilot style) - probably a simpler goal, 
> but pretty cool anyway.
>
> Looking forward to your thoughts and examples.
>
> Best regards,
> Glauco
>
>

