I recently found an interesting and simple HD wallet design here:
https://bitcointalk.org/index.php?topic=5321992.0
Could anyone see any flaws in such a design, or is it safe enough to implement
and use in practice?
If I understand it correctly, it is just pure ECDSA and SHA-256, nothing else:
"privKey" (parent), "(privKey+firstOffset) mod n" (first child),
"(privKey+secondOffset) mod n" (second child) and so on. And as long as this
offset is not guessed by the attacker, it is impossible to link all of those
keys together, right?
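If it helps, here is a minimal sketch of the derivation as I read it (the key
and offset values below are made-up placeholders, not taken from the linked
design):

# Sketch of the offset-based derivation described above (illustrative only).
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 order

def child_privkey(parent_priv: int, offset: int) -> int:
    """Child private key: (privKey + offset) mod n."""
    return (parent_priv + offset) % N

parent = 0x1111111111111111111111111111111111111111111111111111111111111111  # placeholder
first_child = child_privkey(parent, offset=12345)
second_child = child_privkey(parent, offset=67890)

# The matching public keys are related by the same offsets:
# childPub = parentPub + offset*G, so anyone who knows an offset can link the
# keys, and (as stated above) anyone who does not know it cannot.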
> On 2021-03-20 11:08:30 user Tim Ruffing wrote:
> >
> > > use sha3-256. sha256 suffers from certain attacks (length extension,
> > > for example) that could make your scheme vulnerable to leaking info,
> > > depending on how you concatenate things, etc. better to choose
> > > something where padding doe
at finding attackerPublicKey here is impossible. Of course
the absence of a solution does not mean that it is secure, but I think it is a
good example to show how strong this scheme is.
On 2021-03-21 22:45:19 user Tim Ruffing wrote:
> On Sat, 2021-03-20 at 21:25 +0100, vjudeu via bitcoin-dev wrote:
Yes, a transition from Proof of Work to Proof of Something Else is possible in a
soft-fork way. All that is needed is miners' and users' support. Then, the Proof
of Work difficulty should drop to one, and the rest would be handled by Proof of
Something Else. Old miners could still use ASIC miners
We have some taproot address with private key "a" and public key "a*G", owned
by Alice. Bob wants to take Alice's coins without her permission. He owns a
taproot address with private key "b" and public key "b*G". He knows "a*G" by
exploring the chain and looking for P2TR outputs. To grab Alice's f
As far as I know, a P2TR address contains a 32-byte public key. It can be used
directly by creating a Schnorr signature or indirectly, by revealing a tapscript.
Does it mean that any taproot output could be modified on-the-fly after being
confirmed without changing an address? I mean, if we have base po
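For reference, under BIP341 the 32-byte output key commits to both the internal
key and the Merkle root of the script tree, so the committed scripts cannot be
swapped after confirmation without changing the address. A rough sketch of the
tweak computation (helper names are mine; the final point addition is omitted):

import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    # BIP340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def taptweak(internal_key_x: bytes, merkle_root: bytes = b"") -> bytes:
    # Tweak scalar t = H_TapTweak(P || merkle_root); the output key is Q = P + t*G.
    return tagged_hash("TapTweak", internal_key_x + merkle_root)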
> Perhaps the only things that cannot be usefully changed in a softfork is the
> block header format and how proof-of-work is computed from the block header.
Why not? I can imagine a soft fork where the block header would contain SHA-256
and SHA-3 hashes in the same place. The SHA-256 would be c
> Because the above block header format is hashed to generate the
> `prevBlockHash` for the *next* block, it is almost impossible to change the
> format without a hardfork.
First, assume that everything should be validated as today to be
backward-compatible. For that reason, removing SHA-256 is
> - 1 block reorgs: these are a regular feature on mainnet, everyone
should cope with them; having them happen multiple times a day to
make testing easier should be great
Anyone can do a 1-block reorg, because the nonce is not signed, so anyone can
replace it with a better value. For example, if
You can do that kind of change in your own Bitcoin-compatible client, but you
cannot be sure that other people will run that version and that it will shut
down when you want. Many miners use their own custom software for mining
blocks, the same for mining pools. There are many clients that are c
It seems that Bitcoin Core will stop working in 2038 because of an assertion
checking that the current time is non-negative. Also, the whole chain will halt
after reaching median time 0xFFFFFFFF in 2106. More information:
https://bitcointalk.org/index.php?topic=5365359.0
I wonder if that kind of issu
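For context, the two boundaries involved are the signed and unsigned 32-bit
time limits; a quick check in Python:

from datetime import datetime, timezone

# Signed 32-bit time_t overflows here (the "year 2038" problem):
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
# The unsigned 32-bit timestamp field wraps here:
print(datetime.fromtimestamp(2**32 - 1, tz=timezone.utc))  # 2106-02-07 06:28:15+00:00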
On 2021-10-15 03:05:36 user Anthony Towns via bitcoin-dev
wrote:
> Same stuff works with testnet, though I'm not sure if any testnet faucets
will accept bech32m addresses directly.
There are faucets that accept such addresses, for example
https://bitcoinfaucet.uo1.net/, but you have to use bec
Another proposed solution is 64-bit timestamps. They would break
compatibility with other software that has specific expectations of
header fields, like ASICs' firmware. They would also cause a hardfork
before the date of timestamp overflow. I thus believe them to be a less
appropriate solu
> What happens if a series of blocks has a timestamp of 0xFFFFFFFF at the
> appropriate time?
The chain will halt for all old clients, because there is no 32-bit value
greater than 0xFFFFFFFF.
> 1. Is not violated, since "not lower than" means "greater than or equal to"
No, because it has to b
> Then starting at Unix Epoch 0x8000, post-softfork nodes just increment
> the timestamp by 1 on each new block.
It is possible to go even faster. The fastest rate is something like this, if
you assume the time of the Genesis Block is zero:
0 1 2 2 3 3 3 3 4 4 4 4 4 4 5 5 5 5 5 5 6 ...
The
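That sequence appears to follow from the median-time-past rule (each new
timestamp must be strictly greater than the median of the previous 11); a small
simulation, assuming the genesis timestamp is zero, reproduces it:

def fastest_timestamps(count: int) -> list:
    # Minimal timestamps allowed when every block uses median-time-past + 1.
    times = [0]  # genesis at time zero
    for _ in range(count - 1):
        window = sorted(times[-11:])      # up to the last 11 blocks
        mtp = window[len(window) // 2]    # Bitcoin Core's median definition
        times.append(mtp + 1)             # strictly greater than MTP
    return times

print(fastest_timestamps(21))
# [0, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 6]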
> how to select and analyze the choice of window
Currently, we need 100 blocks to spend the coinbase transaction and I think
that should be our "window".
> and payout functions
Something like "miner-based difficulty" should do the trick. So, each miner is
trying to produce its own block, with its
> Can you maybe try stating the goals of your payout function, and then
> demonstrate how what you're proposing meets that?
The goals are quite simple: if you are a solo miner, you are trying to mine a
block that meets the network difficulty. If you are using some kind of pool,
then you are tr
> The missing piece here would be an ordering of weak blocks to make the window
> possible. Or at least a way to determine what blocks should definitely be
> part of a particular block's pay out. I could see this being done by a
> separate ephemeral blockchain (which starts fresh after each Bitc
> I was thinking that this would be a separate blockchain with separate headers
> that progress linearly like a normal blockchain.
Exactly, that's what I called "superblocks", where you have a separate chain,
just to keep block headers instead of transactions.
> A block creator would collect toge
> Associating signing with proof of stake and thereby concluding that signing
> is something to avoid sounds like superstitious thinking.
If you introduce signing into mining, then you will have cases where someone is
powerful enough to produce blocks but cannot, because signing is needed.
The
> If you don't like the reduction of the block subsidy, well that's a much
> bigger problem.
It is reversible, because you can also increase the block subsidy by using
another kind of soft-fork. For example, you can create spendable outputs with
zero satoshis. In this way, old nodes will accept
> The system sounds expensive eventually to cope with approximately
> 2,100,000,000,000,000 ordinals.
What about zero satoshis? There are transactions where zero satoshis are
created or moved. Typical users cannot do that, but miners can; we currently
have such transactions in the blockchain, f
Since Taproot was activated, we no longer need separate OP_RETURN outputs to be
pushed on-chain. If we want to attach any data to a transaction, we can create
"OP_RETURN <data>" as a branch in the TapScript. In this way, we can store that
data off-chain and we can always prove that it is connected
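A rough sketch of how such a commitment could be computed (the helper names are
mine and the push encoding is simplified to single-byte pushes):

import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + data).digest()

def op_return_leaf(data: bytes) -> bytes:
    # TapLeaf hash of an unspendable "OP_RETURN <data>" script branch.
    assert 0 < len(data) < 76                  # keep the push encoding trivial
    script = bytes([0x6a, len(data)]) + data   # OP_RETURN <push of data>
    leaf_version = bytes([0xc0])
    # BIP341 leaf hash; the Taproot output key commits to this leaf (or to a
    # Merkle branch containing it), so the data itself never has to go on-chain.
    return tagged_hash("TapLeaf", leaf_version + bytes([len(script)]) + script)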
The scheme is not actually equivalent to op_return, because it
requires the user to communicate out-of-band to reveal the commitment, whereas
with op_return the data is immediately visible (while not popular, BIP47 and
various colored coin protocols rely on this).
Cheers,
Ruben
On Thu, Feb 24, 2022 at
> what happens if/when it comes to pass that we increase payment precision by
> going sub-satoshi on chain?
When we talk about future improvements, there could be an even bigger problem
with ordinal numbers: what if/when we introduce some Monero-like system and hide
coin amounts? (for example by us
> Continuous operation of the sidechain then implies a constant stream of
> 32-byte commitments, whereas continuous operation of a channel factory, in
> the absence of membership set changes, has 0 bytes per block being published.
The sidechain can push zero bytes on-chain, just by placing a sid
> The Taproot address itself has to take up 32 bytes onchain, so this saves
> nothing.
There is always at least one address, because you have a coinbase transaction
and a solo miner or mining pool that is getting the whole reward. So, instead
of using separate OP_RETURNs for each sidechain, fo
In testnet3, anyone can become a miner; it is even possible to mine a block on
a CPU, because the difficulty can drop to one. In signet, we create some
challenge, for example 1-of-2 multisig, that can restrict who can mine, so that
the chain can be "unreliably reliable". Then, my question is: why
coins and then peg them were just
raising the barriers to entry for starting to use a signet for testing.
If anything I think we should permanently shutter testnet now that signet is
available.
On Sat, Mar 5, 2022, 3:53 PM vjudeu via bitcoin-dev
wrote:
In testnet3, anyone can become a miner, i
> We should remove the dust limit from Bitcoin.
Any node operator can do that. Just put "dustrelayfee=0." in your
bitcoin.conf.
And there is more: you can also conditionally allow free transactions:
mintxfee=0.0001
minrelaytxfee=0.
blockmintxfee=0.
Then, when using
> A. Every pollee signs messages like support:10%}> for each UTXO they want to respond to the poll with.
It should not be expressed in percentages, but in amounts. It would be easier
and more compatible with votes where there is 100% oppose or 100% support (and
also easier to handle if some LN use
ld be used to pay for domains!
On 2022-03-16 19:21:37 user Peter Todd wrote:
> On Thu, Feb 24, 2022 at 10:02:08AM +0100, vjudeu via bitcoin-dev wrote:
> Since Taproot was activated, we no longer need separate OP_RETURN outputs to
> be pushed on-chain. If we want to attach any data to
> I don't quite understand this part. I don't understand how this would make
> your signature useless in a different context. Could you elaborate?
It is simple. If you vote by making transactions, then someone could capture
them and broadcast them to nodes. If your signature is "useless in a differen
> What do you mean "capture that" and "your network"? I was imagining a
> scenario where these poll messages are always broadcast globally. Are you
> implying more of a private poll?
If you vote by making a Bitcoin transaction, then someone could move real
bitcoins, just by including your trans
When I see more and more proposals like this, where things are committed to
Taproot outputs, I think we should start designing "miner-based commitments".
If someone is going to make a Bitcoin transaction and add a commitment for zero
cost, just by tweaking some Taproot public key, then it is
It seems that Taproot allows us to protect each individual public key with a
password. It could work in this way: we have some normal, Taproot-based public
key, that is generated in a secure and random way, as it is today in Bitcoin
Core wallet. Then, we can create another public key, just by ta
Typical P2PK looks like this: "<pubkey> OP_CHECKSIG". In a typical
scenario, we have "<signature>" in our input and "<pubkey> OP_CHECKSIG" in our
output. I wonder if it is possible to use covenants right here and right now,
with no consensus changes, just by requiring a specific signature. To start
with, I am trying to
It seems that the current consensus with Taproot is enough to implement
CoinPool. There are no needed changes if we want to form a basic version of
that protocol, so it probably should be done now (or at least started, even in
some signet or testnet3). Later, if some features like SIGHASH_ANYPRE
> Re-enabling OP_CAT with the exact same OP would be a hardfork, but creating a
> new OP_CAT2 that does the same would be a softfork.
We have TapScript for that. OP_CAT is defined as OP_SUCCESS, so it can be
re-enabled in a soft-fork way. For now, OP_CAT in TapScript simply means
"anyone can move
For now, we have txid:vout as a previous transaction output. This means that to
have a stable TXID, we are forced to use SIGHASH_ALL somewhere, just to prevent
any transaction modifications that can happen when adding some inputs and
outputs. But it seems that new sighashes could be far more p
> Looks like `OP_CAT` is not getting enabled until after we are reasonably sure
> that recursive covenants are not really unsafe.
Maybe we should use OP_SUBSTR instead of OP_CAT. Or even better: OP_SPLIT.
Then, we could have OP_SPLIT... that would split a
string N times (so there will be N
The transaction has 2 inputs: 0.00074 tBTC and 0.00073 tBTC (0.00074 + 0.00073
= 0.00147), which is more than the output amount of 0.001 tBTC
/dev/fd0
--- Original Message ---
On Saturday, May 7th, 2022 at 9:22 AM, vjudeu via bitcoin-dev
bitcoin-dev@lists.linu
Some people think that sidechains are good. But to put them into some working
solution, people think that some kind of soft-fork is needed. However, it seems
that it can be done in a no-fork way; here is how to make it permissionless and
introduce them without any forks.
First, we should make
aps what you had in mind?
Honestly, I've yet to fully load in exactly how the applications of it work,
but I'd be interested to hear your thoughts.
On Sat, May 7, 2022, 4:55 AM vjudeu via bitcoin-dev
wrote:
For now, we have txid:vout as a previous transaction output. This means th
> If the only realistic (fair, efficient & proportionate) way to pay for
> Bitcoin's security was by having some inflation scheme that violated the 21
> million cap, then agreeing to break the limit would probably be what makes
> sense, and in the economic interest of its users and holders.
So,
Similarly to how they commit to a timestamp (which is also only verifiable to
an approximation and can only be verified close to when it was mined but not eg
years later).
On Wed, Jul 6, 2022 at 4:13 AM vjudeu via bitcoin-dev
wrote:
> If the only realistic (fair, efficient & pr
Isn't it enough to just generate a seed in the same way as today, then sort the
words alphabetically, and then use that as a seed? I know, the last word is a
checksum, but there are only 2048 words, so it is not a big deal to get any
checksum we want. If that is insecure, because of lower possib
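A rough estimate of what the sorting costs, assuming a 12-word seed with no
repeated words: the ordering carries log2(12!) bits, which sorting throws away.

import math

# Entropy lost by discarding word order in a 12-word seed (no repeats assumed):
print(math.log2(math.factorial(12)))  # ~28.8 bits out of the usual 128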
> Simply fork off an inflation coin and test your theory. I mean, that’s the
> only way it can happen anyway.
That would be an altcoin. But it can be done in a simpler way: we have 21
million coins. It doesn't matter if it is 21 million, if it is 100 million, or
if it is in some normalized rang
> Adding tail emission to Bitcoin would be a hard fork: an incompatible rule
> change that existing Bitcoin nodes would reject as invalid.
It won't, because we have zero satoshis. That means it is possible to create
any backward-compatible way of storing amounts. And if we ever implement
t
> We want mining to be a boring, predictable business that anyone can do,
> with as little reward as possible to larger scale miners.
To reach that, miners should earn their block rewards inside the Lightning
Network. Then, if you want to send some transaction, and you have one satoshi
fee, you
This problem can be solved by mining decentralization.
> What's likely to happen is that at first there will simply be no or very few
> blocks mined overnight.
Why? When it comes to energy usage, there are also cycles, because energy usage
during the day is definitely higher than at night. You
> This specific approach would obviously not work as most of those outputs
> would be dust and the miner would need to waste an absurd amount of block
> space just to grab them, but maybe there's a smarter way to do it.
There is a smarter way. Just send 0.01 BTC per block to the timelocked outpu
to make sure
the hashrate stays high.
I know, I know, it's a tax on the rich and it's not fair because smaller holders
are less likely to do it, but it's a minuscule tax even in the worst case
On Thu, Jul 14, 2022, 5:35 AM vjudeu via bitcoin-dev
wrote:
> This sp
> So I'd suggest removing the fixed dust limit entirely and relying purely on
> the mempool size limit to determine what is or is not dust.
Just use those settings in your node:
minrelaytxfee=0.
blockmintxfee=0.
dustrelayfee=0.
No changes in source code are needed, nodes
, then they will have an
incentive to build on top of that.
On 2022-07-27 14:18:21 user Peter Todd wrote:
>
On July 27, 2022 6:10:00 AM GMT+02:00, vjudeu via bitcoin-dev
wrote:
>> So I'd suggest removing the fixed dust limit entirely and relying purely on
>> the mempool si
It is possible, because you can find nodes that accept low-fee transactions.
And in some statistics, for example
https://jochen-hoenicke.de/queue/#BTC,24h,weight,0, you can see that
transactions paying zero to one satoshi per virtual byte could take more space
than other transactions. You can be convin
> I'm not sure what is to be gained from adding an opcode
Backward compatibility. If we don't have OP_CHECKDATASIG, then it has to be
somehow introduced to make it compatible with "Bitcoin Message". And we have
opcodes like OP_RESERVED that can be wrapped in OP_IF; then it is
"conditionally va
> I suppose in the case of legacy P2PKH signing, a hypothetical OP_CHECKDATASIG
> can take off the stack and perform an ECDSA public
> key recovery
You can always perform key recovery for legacy ECDSA: "<signature> OP_SWAP
OP_CHECKSIG" is always spendable, for any valid DER-encoded pair. Here,
if "
> If we actually wanted to solve the potential problem of not-enough-fees to
> upkeep mining security, there are less temporary ways to solve that. For
> example, if fees end up not being able to support sufficient mining, we could
> add emission based on a constant fraction of fees in the block
> [0]: https://gist.github.com/luke-jr/4c022839584020444915c84bdd825831
I wonder how far that rule should go: SCRIPT_ERR_DISCOURAGE_UPGRADABLE_NOPS.
Because "OP_FALSE OP_IF OP_ENDIF" is effectively the same as "OP_NOP", and
putting NOPs in many places is considered non-standard. The same is tr
> By standardness rules (where you can have up to 80-byte pushes), a little
> over 1%. By consensus (520-byte pushes) less than 0.2%.
Note that instead of "OP_DROP OP_DROP", people can use "OP_2DROP", so the
number of dropping opcodes could be halved.
> I mean, they'd provide the `FALSE` as a s
> I don't know of any data of what happens at the point where the coinbase
> drops to below the fees on any chain. I don't think there has been one where
> that has happened. Perhaps there is a chain out there where it is fees only?
> Perhaps that can provide insight.
I think federations like
> possible to change tx "max fee" to output amounts?
Is it possible? Yes. Should we do that? My first thought was "maybe", but after
thinking more about it, I would say "no"; here is why:
Starting point: 1 BTC on some output.
Current situation: A single transaction moving 0.9000 BTC as fees
> confused. the rule was "cannot pay a fee > sum of outputs with
> consideration of cpfp in the mempool"
> your example is of someone paying a fee "< sum" which wouldn't be blocked
Every transaction paying "fee > sum" can be replaced by N transactions paying
"fee <= sum", where the sum of all
> Nevertheless, the next day I see other e-mails getting released to bitcoin-dev,
> while mine was not.
If you created a new topic, then that is the reason. I noticed an interesting
thing: if the title of your post is just a reply to some existing topic, then
there are less strict rules than i
> and I would like to understand why this problem has not been addressed more
> seriously
Because if nobody has any good solution, then the status quo is preserved. If
ECDSA were broken tomorrow, the default state of the network would be "just do
nothing", and every solution would be backward-c
> not taking action against these inscriptions could be interpreted by spammers
> as tacit acceptance of their practice.
Note that some people, even on this mailing list, do not consider Ordinals as
spam:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2023-February/021464.html
See? It
> Given the current concerns with blockchain size increases due to
> inscriptions, and now that the lightning network is starting to gain more
> traction, perhaps people are now more willing to consider a smaller blocksize
> in favor of pushing more activity to lightning?
People will not agree
-interactive, done automatically by nodes) will be next, sooner or later.
On 2023-09-05 19:49:51 user Peter Todd wrote:
On Sun, Sep 03, 2023 at 06:01:02PM +0200, vjudeu via bitcoin-dev wrote:
> Given the current concerns with blockchain size increases due to inscriptions,
> and now that the l
> By redefining a bit of the nVersion field, eg the most significant bit, we
> can apply coinbase-like txout handling to arbitrary transactions.
We already have that in OP_CHECKSEQUENCEVERIFY. You can have a system with no
coinbase transactions at all, and use only OP_CHECKSEQUENCEVERIFY on the
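As an illustration (a sketch, not a tested script), a coinbase-like 100-block
maturity can be approximated with a relative timelock output such as
"100 OP_CHECKSEQUENCEVERIFY OP_DROP <pubkey> OP_CHECKSIG", using BIP68/BIP112
semantics: the output only becomes spendable 100 blocks after it is confirmed.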
> This opcode would be activated via a soft fork by redefining the opcode
> OP_SUCCESS80.
Why OP_SUCCESS80, and not OP_SUCCESS126? When there is some existing opcode, it
should be reused. And if OP_RESERVED is ever re-enabled, I think it should
behave in the same way as in pre-Taproot, s
> I think if A is top of stack, we get BA, not AB?
Good question. I always thought "0x01234567 0x89abcdef OP_CAT
0x0123456789abcdef OP_EQUAL" is correct, but it could be reversed as well. If
we want to stay backward-compatible, we can dig into the past, and test the old
implementation of OP_CA
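If memory serves (worth re-checking against the pre-0.3.10 source), the
original OP_CAT concatenated the second-from-top item followed by the top item,
which matches that intuition; a tiny stack sketch:

def op_cat(stack: list) -> None:
    # OP_CAT as in the original client: pop the top two items, push (second || top).
    top = stack.pop()
    second = stack.pop()
    stack.append(second + top)

stack = [bytes.fromhex("01234567"), bytes.fromhex("89abcdef")]  # 0x89abcdef on top
op_cat(stack)
assert stack == [bytes.fromhex("0123456789abcdef")]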
> Imagine a system that tries to maintain a constant level of difficulty and
> reacts flexibly to changes in difficulty, by modulating the block reward
> level accordingly (using negative feedback).
This is exactly what I did when experimenting with LN-based mining. CPU power
was too low to g
What about using Signet, or some separate P2P network, to handle all of that?
1. All e-mails could be sent in a pure P2P way; each "mailing list node"
would receive them and include them in its mempool.
2. The inclusion of some message would be decided by signing a block.
Moderators would pick t
> Sign-to-contract looks like:
Nice! I think it should be standardized as some informational BIP. This is a
similar case to Silent Payments: it is possible to let users make their own
commitments as they please, but if it is officially standardized, then it will
be possible to build
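For readers not familiar with it: sign-to-contract tweaks the signature nonce
rather than the key, roughly R = R0 + hash(R0 || data)*G, so the finished
signature itself doubles as a commitment to the data, at no extra on-chain
cost; revealing R0 and the data later proves the commitment.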
> I've commented a few times asking the BIP editors to let me know what is
> needed for the BIP to either be merged or rejected.
I would accept it if each Ordinal required including OP_RETURN at the beginning
of the TapScript, to prevent it from being pushed on-chain. In that case, they
> Since it is spent it does not bloat the mempool.
This is not the case. If you post some 100 kB TapScript with some Ordinal, then
it of course bloats mempools, because other users can then post 100 kB less
when it comes to regular payments. If you have Ordinals in the current form,
then t
I think it should be fixed, because now sending coins to P2WPKH is cheaper than
sending them to P2TR, even though when those coins are finally spent, the
blockspace usage is cheaper for Taproot (when you spend by key) than for SegWit,
because the public key hash is not stored anywhere. But of co
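Rough numbers behind that comparison: a P2WPKH output script is 22 bytes (OP_0
plus a 20-byte hash) versus 34 bytes for P2TR (OP_1 plus a 32-byte x-only key),
so creating the Taproot output costs about 12 bytes more; spending it by key,
however, needs only a single ~64-byte Schnorr signature in the witness, versus
roughly 105 witness bytes (a DER signature plus a 33-byte public key) for
P2WPKH.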