Re: [bitcoin-dev] Scaling Lightning With Simple Covenants

2023-10-06 Thread jlspc via bitcoin-dev
Hi Antoine,

>> "I also think resizing channels can be done fairly effectively
>> off-chain with hierarchical channels [1] (and even better with
>> hierarchical channels within timeout-trees)".

>Yes, transactional scaling of Lightning (i.e., how many transfers can be
>performed off-chain per on-chain transaction) sounds good at first sight,
>though in practice liquidity imbalance due to asymmetries in liquidity
>flows among counterparties is a bottleneck. Note how the on-chain
>splicing upgrade to the LSP spec improves on this dimension, and how
>"resizing" or "pool rebalancing" aims to keep this off-chain.

Yes, and note that with hierarchical channels you can use HTLCs to send 
Lightning channel capacity over the Lightning network [1], thus performing 
channel resizing off-chain between channels that aren't in the same pool.

>> "With these proposals, it's possible to dramatically limit the
>> interactivity".

>Yes, from my rough understanding of timeout-trees and channel resizing, it
>seems to suffer from the same issue as Jeremy's radix-tree proposal or
>Christian's original channel factory, namely the lack of fault tolerance
>when one of the casual users or end-of-tree balance owners aims to go
>on-chain. The fragmentation cost seems to be borne by all the users located
>in the tree branch. Note that fault tolerance is one of the key payment
>pool design goals to advance over factories.

Actually, in the case of a timeout-tree, the fragmentation costs imposed by a 
casual user going on-chain are borne exclusively by the dedicated user who 
funded the timeout-tree.
This makes it easier to address the problem by making the casual user pay the 
funder for the fragmentation costs.

I think this is an important issue, so I created a new version of the paper 
that includes a description of how this can be done [2].
The idea is to require casual users to reveal secrets (hash preimages) that 
only they know in order to put timeout-tree transactions on-chain.
Then, a fee-penalty output is added to each leaf transaction that pays from the 
casual user to the funding user an amount that depends on which timeout-tree 
transactions the casual user put on-chain.
The details are given in the new version of the paper ([2], Section 4.10, pp. 
25-28).
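As a rough illustration of the idea (the names, penalty amounts, and helper functions below are hypothetical, not taken from the paper): each leaf transaction's script can only be satisfied if the casual user reveals a hash preimage that only they know, and the fee-penalty output pays the funder an amount determined by which transactions were forced on-chain:

```python
import hashlib

# Hypothetical per-transaction penalty schedule in sats (not from the paper).
PENALTY = {"leaf": 2000, "internal": 5000, "root": 10000}

def reveal_required(preimage: bytes, committed_hash: bytes) -> bool:
    """A leaf script is only satisfiable if the casual user reveals a
    preimage matching the hash committed in the timeout-tree."""
    return hashlib.sha256(preimage).digest() == committed_hash

def fee_penalty(onchain_txs: list[str]) -> int:
    """Amount the fee-penalty output pays from the casual user to the
    funding user, depending on which transactions went on-chain."""
    return sum(PENALTY[tx] for tx in onchain_txs)

secret = b"casual-user-secret"
commit = hashlib.sha256(secret).digest()
assert reveal_required(secret, commit)
assert fee_penalty(["leaf", "internal"]) == 7000
```

Because the preimages are revealed on-chain, the funder can prove which transactions the casual user broadcast and collect the corresponding penalty; the actual construction is in Section 4.10 of the paper.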

>> "I propose that if the active drain fails, the casual user should put
>> their channel in the old timeout-tree on-chain (so that it won't timeout
>> on them)."

>I think there is still some issue there, where you need to handle the
>malicious HTLC-withholding case along your multi-hop payment paths and wait
>for the expiration, then go on-chain to expire the old timeout-tree, which
>might come with a high timevalue cost by default. I'm not saying keeping
>timevalue costs low is solved for today's Lightning.

This is an excellent point that I hadn't considered.
I think the solution is to perform passive, rather than active, rollovers.
Passive rollovers don't require use of the Lightning network, so they 
completely eliminate the risk of HTLC-withholding attacks.
I've added this advantage of passive rollovers in the latest version of the 
paper ([2], Section 4.4, p. 19).

>> "These costs could be large, but hopefully they're rare as they are
>> failures by dedicated users that can afford to have highly-available
>> hardware and who want to maintain a good reputation".

>Yes, though note that as soon as a dedicated user holds a lot of off-chain
>trees, and this is observable by adversaries, the dedicated user becomes an
>attack target (e.g., for channel jamming or time-dilation), which
>substantially alters the trade-offs.

I believe channel jamming and HTLC-withholding attacks can be eliminated by 
using passive rollovers, as mentioned above.

>> "However, the paper has a proposal for the use of "short-cut"
>> transactions that may be able to eliminate this logarithmic blow-up".

>Yes, "cut-through" to reduce on-chain footprint in mass exit cases has been
>discussed since the early days of off-chain constructions and the Taproot /
>Graftroot introduction, to the best of my knowledge; see:
>https://tokyo2018.scalingbitcoin.org/transcript/tokyo2018/multi-party-channels-in-the-utxo-model-challenges-and-opportunities

While I see "how do we cut-through to reduce the on-chain footprint in mass 
exit cases?" listed as an open problem in the above reference, I don't see any 
specific solutions to that problem in that reference.

The "short-cut" transactions I was referring to are defined in Section 5.4 and 
pictured in Figure 14 on p. 32 of the revised version of the paper [2].
They are a specific proposal for addressing the logarithmic blow-up of putting 
a control transaction defined by a covenant tree on-chain.
I agree that this has some similarities to the Graftroot proposal, but I 
believe it is distinct from proposals for addressing mass exit cases (and in 
fact it woul

Re: [bitcoin-dev] Draft BIP: OP_TXHASH and OP_CHECKTXHASHVERIFY

2023-10-06 Thread Steven Roose via bitcoin-dev
I updated the draft BIP with a proposed reference implementation and a 
link to an implementation of a caching strategy.


It shows that it's possible to achieve TXHASH in a way that, after each 
large tx element (scripts, annexes) has been hashed exactly once, 
invocations of TXHASH have clear constant upper limits on the number of 
bytes hashed.


The link to the draft BIP is in the e-mail above; the link to the caching 
implementation is here: 
https://github.com/stevenroose/rust-bitcoin/blob/txhash/bitcoin/src/blockdata/script/txhash.rs



On 9/30/23 12:44, Steven Roose via bitcoin-dev wrote:


Hey all


The idea of TXHASH has been around for a while, but AFAIK it was never 
formalized. After conversations with Russell, I worked on a 
specification and have been gathering some feedback in the last 
several weeks.


I think the draft is in a state where it's ready for wider feedback 
and at the same time I'm curious about the sentiment in the community 
about this idea.


The full BIP text can be found in the attachment as well as at the 
following link:

https://github.com/bitcoin/bips/pull/1500

I will summarize it here.

*What does the BIP specify?*

  * The concept of a TxFieldSelector, a serialized data structure for
selecting data inside a transaction.
  o The following global fields are available:
  + version
  + locktime
  + number of inputs
  + number of outputs
  + current input index
  + current input control block
  o For each input, the following fields are available:
  + previous outpoint
  + sequence
  + scriptSig
  + scriptPubkey of spending UTXO
  + value of spending UTXO
  + taproot annex
  o For each output, the following fields are available:
  + scriptPubkey
  + value
  o There is support for selecting inputs and outputs as follows:
  + all in/outputs
  + a single in/output at the same index as the input being
executed
  + any number of leading in/outputs up to 2^14 - 1 (16,383)
  + up to 64 individually selected in/outputs (up to 2^16 or
65,536)
  o The empty byte string is supported and functions as a default
value which commits to everything except the previous
outpoints and the scriptPubkeys of the spending UTXOs.

  * An opcode OP_TXHASH, enabled only in tapscript, that takes a
serialized TxFieldSelector from the stack and pushes on the stack
a hash committing to all the data selected.

  * An opcode OP_CHECKTXHASHVERIFY, enabled in all script contexts,
that expects a single item on the stack, interpreted as a 32-byte
hash value concatenated with (at the end) a serialized
TxFieldSelector. Execution fails if the hash value in the data
push doesn't equal the calculated hash value based on the
TxFieldSelector.

  * A consideration for resource usage trying to address concerns
around quadratic hashing. A potential caching strategy is outlined
so that users can't trigger excessive hashing.
  o Individual selection is limited to 64 items.
  o Selecting "all" in/outputs can mostly use the same caches as
sighash calculations.
  o For prefix hashing, intermediate SHA256 contexts can be stored
every N items so that at most N-1 items have to be hashed when
called repeatedly.
  o In non-tapscript contexts, at least 32 witness bytes are
required and because (given the lack of OP_CAT) subsequent
calls can only re-enforce the same TxFieldSelector, no
additional limitations are put in place.
  o In tapscript, because OP_TXHASH doesn't require 32 witness
bytes and because of a potential addition of operations like
OP_CAT, the validation budget is decreased by 10 for every
OP_TXHASH or OP_CHECKTXHASHVERIFY operation.
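As a toy model of the stack semantics described above (the field names and the comma-separated stand-in for the real compact TxFieldSelector encoding are illustrative only; see the draft BIP for the actual serialization):

```python
import hashlib

def txhash(tx_fields: dict, selector: bytes) -> bytes:
    """Toy TXHASH: hash only the transaction fields named by the selector.
    An empty selector acts as the default, committing to every field here
    (the real default excludes prevouts and spent scriptPubkeys)."""
    names = selector.decode().split(",") if selector else sorted(tx_fields)
    data = b"".join(tx_fields[n] for n in names)
    return hashlib.sha256(data).digest()

def checktxhashverify(stack: list, tx_fields: dict) -> bool:
    """OP_CHECKTXHASHVERIFY expects one stack item: a 32-byte hash with
    the serialized TxFieldSelector concatenated after it. It fails if the
    recomputed hash doesn't match."""
    item = stack[-1]
    if len(item) < 32:
        return False
    expected, selector = item[:32], item[32:]
    return txhash(tx_fields, selector) == expected

tx = {"version": b"\x02", "locktime": b"\x00\x00\x00\x00"}
sel = b"version,locktime"
stack = [txhash(tx, sel) + sel]
assert checktxhashverify(stack, tx)
```

OP_TXHASH would instead pop the selector and push `txhash(tx_fields, selector)` for later comparison or signing.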

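The prefix-hashing cache mentioned above can be sketched by storing a copy of the running SHA256 context every N items, so a later prefix query re-hashes at most N-1 items (N and the helper names are illustrative, not from the implementation):

```python
import hashlib

N = 4  # store an intermediate context every N items (illustrative choice)

def build_prefix_cache(items: list[bytes]) -> list:
    """Keep a copy of the running SHA256 context after every N items."""
    ctx = hashlib.sha256()
    cache = [ctx.copy()]  # context after 0 items
    for i, item in enumerate(items, start=1):
        ctx.update(item)
        if i % N == 0:
            cache.append(ctx.copy())
    return cache

def prefix_hash(items: list[bytes], cache: list, k: int) -> bytes:
    """Hash of the first k items, re-hashing at most N-1 of them."""
    ctx = cache[k // N].copy()      # resume from the nearest stored context
    for item in items[(k // N) * N : k]:
        ctx.update(item)
    return ctx.digest()

outputs = [bytes([i]) * 8 for i in range(10)]
cache = build_prefix_cache(outputs)
assert prefix_hash(outputs, cache, 7) == hashlib.sha256(b"".join(outputs[:7])).digest()
```

This is the trade-off the BIP text describes: O(len/N) stored contexts in exchange for a constant bound on re-hashing per invocation.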

*What does this achieve?*

  * Since the default TxFieldSelector is functionally equivalent to
OP_CHECKTEMPLATEVERIFY, with no extra bytes required, this
proposal is a strict upgrade of BIP-119.

  * The flexibility of selecting transaction fields and in/output
(ranges), makes this construction way more useful
  o when designing protocols where users want to be able to add
fees to their transactions without breaking a transaction chain;
  o when designing protocols where users construct transactions
together, each providing some of their own in- and outputs and
wanting to enforce conditions only on these in/outputs.

  * OP_TXHASH, together with OP_CHECKSIGFROMSTACK (and maybe OP_CAT*)
could be used as a replacement for almost arbitrarily complex
sighash constructions, like SIGHASH_ANYPREVOUT.

  * Apart from being able to enforce specific fields in the
transaction to have a pre-specified value, equality can also be
enforced, which can e.g. replace the desire for opcodes like
OP_IN_OUT_VALUE.*

  * The s