------- Original Message -------
On Monday, May 23rd, 2022 at 17:09, AdamISZ via bitcoin-dev 
<bitcoin-dev@lists.linuxfoundation.org> wrote:


> Jonas, all:
>
> So I do want to ask a couple further clarifying questions on this point, but 
> I got rather majorly sidetracked :)
> I wonder can you (and other list readers!) take a look at my attempt here to 
> summarize what is described in Footnote 2 of the draft BIP (as it's related 
> to this discussion and also .. it's pretty interesting generally!):
>
> https://gist.github.com/AdamISZ/ca974ed67889cedc738c4a1f65ff620b
>
> (btw github gists have equation rendering now which is nice!)
>
> Thanks,
> waxwing/AdamISZ
>
Jonas, list,

So, given that that summary is basically correct (see the comments), continuing
on this point of how to handle duplicate keys:

In 
https://github.com/jonasnick/bips/blob/musig2/bip-musig2.mediawiki#identifiying-disruptive-signers
 we have:

"If partial signatures are received over authenticated channels, this method 
can be used to identify disruptive signers and hold them accountable. Note that 
partial signatures are not signatures. An adversary can forge a partial 
signature, i.e., create a partial signature without knowing the secret key for 
the claimed public key."

(the gist in the previous message was just fleshing out what's stated there and 
in Footnote 2: if you get a "valid" partial sig at index i, it doesn't mean 
that the signer at index i knows the key for index i, *if* they also control 
index j; it just means they won't be able to produce "valid" partial sigs for 
both indices i and j).

(scare quotes around "valid": there is no notion in MuSig2 of a partial
signature being a signature; only the aggregate signature, in toto, is valid,
invalid or forged).
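
To make the scare quotes concrete, here's a toy sketch of what a partial sig
check even is. To be clear, this is *not* the BIP's algorithms and not
secp256k1: it's a single-nonce, MuSig-ish construction over a tiny Schnorr
group, with the two-nonce coefficient and the real key-aggregation hash
stripped out, and all the names are just mine for illustration. It only shows
honest signers passing the check (it does not implement the Footnote 2 forging
trick); the point is that "valid" at index i means an algebraic relation
between the response, the nonce committed at index i, the session challenge
and the key *claimed* at index i - nothing more:

# Toy sketch only: NOT secp256k1, NOT the BIP algorithms.
import hashlib, random

p, q, g = 467, 233, 4          # tiny Schnorr group: g has prime order q mod p

def H(*vals):                  # toy hash-to-scalar
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

n = 3
keys   = [random.randrange(1, q) for _ in range(n)]      # secret keys x_i
pubs   = [pow(g, x, p) for x in keys]                    # P_i = g^x_i
coeffs = [H("keyagg", pubs, P) for P in pubs]            # a_i (simplified)
Q = 1
for P, a in zip(pubs, coeffs):
    Q = Q * pow(P, a, p) % p                             # aggregate key

msg    = "hello"
nonces = [random.randrange(1, q) for _ in range(n)]      # r_i (single nonce!)
Rs     = [pow(g, r, p) for r in nonces]                  # committed R_i
R = 1
for Ri in Rs:
    R = R * Ri % p                                       # aggregate nonce
c = H("chal", R, Q, msg)                                 # session challenge

psigs = [(r + c * a * x) % q for r, a, x in zip(nonces, coeffs, keys)]

def partial_verify(i, s_i):
    # "valid" means exactly this equation and nothing else:
    #   g^{s_i} == R_i * P_i^{c*a_i}  (mod p)
    return pow(g, s_i, p) == Rs[i] * pow(pubs[i], c * coeffs[i], p) % p

assert all(partial_verify(i, s) for i, s in enumerate(psigs))

# Only the aggregate has a real validity notion (ordinary Schnorr check):
s = sum(psigs) % q
assert pow(g, s, p) == R * pow(Q, c, p) % p
print("all partial checks and the aggregate check pass")

(The real PartialSigVerify has the same shape, with the two-nonce coefficient
and the various negation/tweak adjustments layered on top.)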

So we see in the above quote that the concept of 'authenticated channels' is 
rather important. Consider 2 scenarios:
1. "Persistent": Every signer has a persistent identity across many signing 
sessions, and their communications are authenticated against that identity.
2. "Spontaneous": Signers join the protocol in some ad hoc way, but 
authenticate specifically inasmuch as they set up temporary nyms and use e.g. 
diffie hellman to establish a confidential and authenticated channel for the 
period of this signing session.

An example of "Spontaneous" might be a variant of a multiparty channel 
construction with anonymous participants on LN or LN*, in which participants 
set up such constructions ad hoc, e.g. via liquidity markets. In contrast, a 
hardware wallet multisig setup with a known provider might be a "Persistent" 
case.
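
(For concreteness, here's a minimal sketch of what I have in mind by the
"Spontaneous" channel setup. This is just my illustration using the Python
'cryptography' package, not anything specified in the BIP: each party's fresh
X25519 key acts as its nym for the session, ECDH plus HKDF gives a session
key, and an AEAD carries the protocol messages. Note that the
"authentication" you get is only to the nym itself, which is exactly the
limitation discussed below.)

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# each signer generates a fresh keypair per session; the pubkey *is* the nym
a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
a_nym = a_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
b_nym = b_priv.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

def session_key(my_priv, their_nym, transcript):
    # both sides derive the same 32-byte key from the DH secret and the nyms
    shared = my_priv.exchange(X25519PublicKey.from_public_bytes(their_nym))
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"signing-session|" + transcript).derive(shared)

transcript = min(a_nym, b_nym) + max(a_nym, b_nym)  # order-independent binding
k_a = session_key(a_priv, b_nym, transcript)
k_b = session_key(b_priv, a_nym, transcript)
assert k_a == k_b

# protocol messages (nonces, partial sigs) then ride over an AEAD keyed by the
# session key: confidential, and "authenticated" only against the sender's nym
nonce = os.urandom(12)
ct = ChaCha20Poly1305(k_a).encrypt(nonce, b"<partial sig bytes>", a_nym)
assert ChaCha20Poly1305(k_b).decrypt(nonce, ct, a_nym) == b"<partial sig bytes>"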

Not sure, but ... are we mainly talking about the "Spontaneous" case?

Because the "Persistent" case doesn't seem interesting: if I "know" the 
counterparty I'm engaging in this protocol with, then first, a Sybil at two 
indices is kinda weird, so the occurrence of a duplicated key from them tells 
me something is wrong and manual intervention is needed (or, equivalently, 
some sanity check in the meta-protocol). Often (e.g. cold storage, devices) 
there'd be a way to know in advance what the keys *should* be. It's very 
likely a bug. (I suppose you could argue that waiting till the second signing 
round helps, because it helps us isolate the bug (except it might not, if in 
certain protocols both signers have access to some shared keys, but, meh) ... 
but that doesn't seem convincing: executing more of a protocol when you 
already know the implementation is broken seems unwise.)

So, to the "Spontaneous" case: if we see two identical pubkeys from two 
pseud/anonymous counterparties, I can see the argument for waiting until 
partial sig sending occurs before establishing misbehaviour. The main 
substance of the argument seems to be something like: we can't actually deduce 
adversarial behaviour at key exchange time, so we *have* to wait for the 
partial signature step. I'm objecting to this on two fronts:

* A general principle of security should be 'abort early'. To me it's just 
sensibly conservative not to continue, given the substantial risk of bugs 
(esp. in systems exposed to nonce-fragility!).
* The claim that the protocol laid out in the BIP identifies misbehaviour 
seems to be at best partially correct; it cannot be true in the general case.

Jonas has already countered my first bullet point by stating that this 
abort-early (at key exchange) strategy opens up an unlimited DOS vector. My 
counter is that, because of the second bullet point, the DOS vector remains in 
the "Spontaneous" case anyway; and that the only way to close it is to use 
either identities (switch to "Persistent": see e.g. CoinShuffle, which 
registers identities via inputs) or cost.

(Why does the DOS vector remain? Because of the partial sig "validation" issue 
as per my gist and Footnote 2: suppose keys 3 and 4 are identical in a set of 
5, i.e. the adversary at index 4 has duplicated the key at index 3 and also 
controls index 5 under a different nym. We can wait, and then find that 
partial sig 3 verifies, partial sig 4 *also* verifies, and only at index 5 do 
we see an 'invalid' partial sig. Since the adversary has (as seems extremely 
likely; I can't imagine it being otherwise) used two *different* nyms for his 
two adversarial indices 4 and 5, ejecting 5 doesn't really seem to close the 
DOS potential: if we then restart and 'grab another anonymous nym' for the 5th 
slot, can't it be the adversary again? And haven't we let the adversary stay 
at index 4? (though I'm not sure of the implications).)
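
To make that retry argument concrete, here's a purely illustrative toy loop
(no cryptography at all; every name in it is mine, nothing here is from the
BIP): an adversary seated at two of five slots under different nyms, where
each failed session ejects only the nym whose partial sig was invalid, and
the freed slot is refilled from the anonymous pool:

import itertools, random

class Adversary:
    """Holds many cheap nyms; in each session exactly one of its seated nyms
    sends an invalid partial sig (any others pass partial verification)."""
    def __init__(self):
        self._counter = itertools.count()
        self.nyms = set()

    def new_nym(self):
        nym = f"adv-nym-{next(self._counter)}"
        self.nyms.add(nym)
        return nym

adv = Adversary()
session = ["honest-1", "honest-2", "honest-3", adv.new_nym(), adv.new_nym()]

for attempt in range(1, 6):
    # one adversarial nym is caught sending an invalid partial sig; the
    # session fails and that nym (only) is ejected
    caught = random.choice([nym for nym in session if nym in adv.nyms])
    print(f"attempt {attempt}: session fails, ejecting {caught}")
    session.remove(caught)
    # the freed slot is refilled from the anonymous pool -- nothing stops the
    # same adversary taking it again under a fresh nym
    session.append(adv.new_nym())

print("adversarial seats remaining:", sorted(set(session) & adv.nyms))

The adversary never runs out of nyms and always keeps a seat; that's all I
mean by "the DOS vector remains".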

Another way to look at it: I'm saying that this claim:

"In contrast, MuSig2 is designed to identify disruptive signers at signing 
time: any signer who prevents a signing session from completing successfully by 
sending incorrect contributions in the session can be identified and held 
accountable (see below)."

isn't *fully* correct. That is, the algorithm will certainly identify a 
disruptive signer who operates only one key, but it doesn't (as currently 
specified) always identify every key owned by a disruptive signer. So it 
doesn't close the DOS vector.

(To be clear, the whole 'fake partial sig' adversarial behaviour is *not* 
specific to having duplicate public keys; I'm just discussing whether the 
protocol should continue if duplicates are seen.)

So overall I have the feeling that allowing duplicate keys at setup makes the 
implementation messier (and this protocol is complex, so that matters a bit 
more), and it strikes me as risky in the inevitable presence of implementation 
errors.

Cheers,
waxwing/AdamISZ
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
