Ethically, this situation has some similarities to the DAO fork. We have an
entity who closely examined the code, found an unintended characteristic of
that code, and made use of that characteristic in order to gain tens of
millions of dollars. Now that developers are aware of it, they want to m
Just checking to see if I understand this optimization correctly. In order to
find merkle roots in which the rightmost 32 bits are identical (i.e. partial
hash collisions), we want to compute as many merkle root hashes as quickly as
possible. The fastest way to do this is to take the top level o
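For concreteness, here is a rough sketch of the collision search being described (my own illustration, not code from the original thread): hold one subtree hash fixed, vary the other, and bucket candidate merkle roots by their rightmost 32 bits until two match. By the birthday bound that takes roughly 2^16 candidates. It assumes OpenSSL's SHA256; the function names are hypothetical.

    #include <openssl/sha.h>
    #include <array>
    #include <cstdint>
    #include <cstring>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    // Double-SHA256 of the 64-byte concatenation of two subtree hashes,
    // i.e. one candidate merkle root.
    static void sha256d_pair(const uint8_t l[32], const uint8_t r[32],
                             uint8_t out[32]) {
        uint8_t buf[64], tmp[32];
        std::memcpy(buf, l, 32);
        std::memcpy(buf + 32, r, 32);
        SHA256(buf, 64, tmp);
        SHA256(tmp, 32, out);
    }

    // Scan candidate right-subtree hashes until two of the resulting roots
    // agree in their rightmost 32 bits. Returns (-1, -1) if none found.
    std::pair<int, int> find_partial_collision(
        const uint8_t left[32],
        const std::vector<std::array<uint8_t, 32>>& rights) {
        std::unordered_map<uint32_t, int> seen;
        for (int i = 0; i < (int)rights.size(); ++i) {
            uint8_t root[32];
            sha256d_pair(left, rights[i].data(), root);
            uint32_t tail;
            std::memcpy(&tail, root + 28, 4);  // rightmost 4 bytes of the root
            auto it = seen.find(tail);
            if (it != seen.end()) return {it->second, i};
            seen.emplace(tail, i);
        }
        return {-1, -1};
    }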
Well, here's another idea: we could shorten the tx hashes to about 4 to 6 bytes
instead of 32.
Let's say we have a 1 GB mempool with 2M transactions in it. A 4 byte shorthash
would have a 0.046% chance of resulting in a collision with another transaction
in our mempool, assuming a random distri
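A quick sanity check of that 0.046% figure (my arithmetic, assuming uniformly distributed hashes as the post does):

    #include <cmath>
    #include <cstdio>

    int main() {
        const double n = 2e6;               // transactions in the mempool
        const double space = 4294967296.0;  // 2^32 possible 4-byte shorthashes
        // Chance that one tx's shorthash matches any of the other n-1.
        double p = 1.0 - std::pow(1.0 - 1.0 / space, n - 1.0);
        std::printf("P(collision) = %.4f%%\n", p * 100.0);  // prints ~0.0466%
        return 0;
    }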
> On Feb 25, 2016, at 9:56 PM, Gregory Maxwell wrote:
> The batching was
> temporarily somewhat hobbled between 0.10 and 0.12 (especially when
> you had any abusive frequently pinging peers attached), but is now
> fully functional again and it now manages to batch many transactions
> per INV pret
The INV scheme used by Bitcoin is not very efficient at all. Once you take into
account Bitcoin, TCP (including ACKs), IP, and ethernet overheads, each INV
takes 193 bytes, according to Wireshark. That's 127 bytes for the INV message
and 66 bytes for the ACK. All of this is for 32 bytes of paylo
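Restating that accounting in code (the 127/66 byte split is from the Wireshark capture above; the efficiency figure is my own arithmetic):

    #include <cstdio>

    int main() {
        const int inv_packet = 127;  // INV message incl. TCP/IP/ethernet framing
        const int ack_packet = 66;   // the TCP ACK coming back
        const int payload = 32;      // the tx hash actually announced
        const int total = inv_packet + ack_packet;  // 193 bytes on the wire
        std::printf("%d bytes per hash, %.1f%% efficient\n",
                    total, 100.0 * payload / total);  // ~16.6% efficient
        return 0;
    }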
On Feb 6, 2016, at 9:21 PM, Jannes Faber via bitcoin-dev
wrote:
> They *must* be able to send their customers both coins as separate
> withdrawals.
>
Supporting the obsolete chain is unnecessary. Such support has not been offered
in any cryptocurrency hard fork before, as far as I know. I do
On Feb 7, 2016, at 9:24 AM, jl2...@xbt.hk wrote:
> You are making a very naïve assumption that miners are just looking for
> profit for the next second. Instead, they would try to optimize their short
> term and long term ROI. It is also well known that some miners would mine at
> a loss, even
On Feb 7, 2016, at 7:19 AM, Anthony Towns via bitcoin-dev
wrote:
> The stated reasoning for 75% versus 95% is "because it gives "veto power"
> to a single big solo miner or mining pool". But if a 20% miner wants to
> "veto" the upgrade, with a 75% threshold, they could instead simply use
> thei
On Dec 30, 2015, at 3:49 PM, Jonathan Toomim wrote:
> Since we've been relying on the trustworthiness of miners during soft forks
> in the past (and it only failed us once!), why not make it explicit?
(Sorry for the premature send.)
On Dec 30, 2015, at 11:00 AM, Bob McElrath via bitcoin-dev
wrote:
> joe2015--- via bitcoin-dev [bitcoin-dev@lists.linuxfoundation.org] wrote:
>> That's the whole point. After a conventional hardfork everyone
>> needs to upgrade, but there is no way to force users to upgrade. A
>> user who is s
On Dec 30, 2015, at 6:19 AM, Peter Todd wrote:
> Your fear is misplaced: it's trivial to avoid recursion with a bit of
> planning...
That makes some sense. I downgrade my emotions from "a future in which we have
deployed a few generalized softforks this way sounds terrifying" to "the idea
of
As a first impression, I think this proposal is intellectually interesting, but
crufty and hackish and should never actually be deployed. Writing code for
Bitcoin in a future in which we have deployed a few generalized softforks this
way sounds terrifying.
Instead of this:
CTransaction Get
Ultimately, a self-interested miner will choose to build on the block that
leaves the most transaction fees up for grabs. (This usually means the smallest
block.) It's an interesting question whether the default behavior for Core
should be the rational behavior (build on the "smallest" block in t
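A minimal sketch of that tip-selection policy (hypothetical types and names, not Core's actual interface):

    #include <cstdint>
    #include <vector>

    struct Tip {
        uint64_t nHeight;       // all candidates assumed at equal height
        uint64_t nFeesClaimed;  // fees collected by this tip's block
    };

    // Among competing tips, prefer the one whose block claimed the least of
    // the fees we saw available in our mempool, i.e. the one leaving the
    // most fees up for grabs for the next block.
    const Tip* SelectTip(const std::vector<Tip>& tips, uint64_t nFeesAvailable) {
        const Tip* best = nullptr;
        uint64_t bestLeft = 0;
        for (const Tip& tip : tips) {
            uint64_t left = tip.nFeesClaimed > nFeesAvailable
                                ? 0
                                : nFeesAvailable - tip.nFeesClaimed;
            if (best == nullptr || left > bestLeft) {
                best = &tip;
                bestLeft = left;
            }
        }
        return best;
    }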
in code, but as a social contract. Breaking those rules would
> be considered as a hardfork and is allowed only in exceptional situation.
>
> Jonathan Toomim via bitcoin-dev wrote on 2015-12-29 07:42:
>> That sounds like a rather unlikely scenario. Unless you have a
>> specifi
That sounds like a rather unlikely scenario. Unless you have a specific reason
to suspect that might be the case, I think we don't need to worry about it too
much. If we announce the intention to perform such a soft fork a couple of
months before the soft fork becomes active, and if nobody compl
I traveled around in China for a couple of weeks after Hong Kong to visit with
miners and confer on the blocksize increase and block propagation issues. I
performed an informal survey of a few of the blocksize increase proposals that
I thought would be likely to have widespread support. The results
On Dec 26, 2015, at 3:16 PM, Pieter Wuille wrote:
> I am generally not interested in a system where we rely on miners to make
> that judgement call to fork off nodes that don't pay attention and/or
> disagree with the change. This is not because I don't trust them, but because
> I believe one
On Dec 26, 2015, at 3:01 PM, Pieter Wuille wrote:
> I think that's extremely short, even assuming there is no controversy about
> changing the rules at all. Things like BIP65 and BIP66 already took
> significantly longer than that, were uncontroversial, and only need miner
> adoption. Full nod
On Dec 26, 2015, at 8:44 AM, Pieter Wuille via bitcoin-dev
wrote:
> Furthermore, 75% is pretty terrible as a switchover point, as it guarantees
> that old nodes will still see a 25% forked off chain temporarily.
>
Yes, 75% plus a grace period is better. I prefer a grace period of about 4000
to
Another option for how to deal with block withholding attacks: Give the miner
who finds the block a bonus. This could even be part of the coinbase
transaction.
Block withholding is effective because it costs the attacker 0% and costs the
pool 100%. If the pool's coinbase tx was 95% to the pool,
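Illustrating the incentive change (the 95%/5% split is the one suggested above; the accounting is my own reading of it, with an illustrative reward value):

    #include <cstdio>

    int main() {
        const double reward = 25.0;  // coinbase value in BTC (illustrative)
        const double bonus = 0.05;   // finder's share paid in the coinbase
        // Withholding the block now forfeits the finder's bonus, so the
        // attack is no longer free for the attacker.
        std::printf("attacker forfeits %.2f BTC, pool still loses %.2f BTC\n",
                    reward * bonus, reward * (1.0 - bonus));
        return 0;
    }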
On Dec 25, 2015, at 3:15 AM, Ittay via bitcoin-dev
wrote:
> Treating the pool block withholding attack as a weapon has bad connotations,
> and I don't think anyone directly condones such an attack.
I directly condone the use of block withholding attacks whenever pools get
large enough to perf
On Dec 18, 2015, at 10:30 AM, Pieter Wuille via bitcoin-dev
wrote:
> 1) The risk of an old full node wallet accepting a transaction that is
> invalid to the new rules.
>
> The receiver wallet chooses what address/script to accept coins on.
> They'll upgrade to the new softfork rules before cre
Off-topic: If you want to decentralize hashing, the best solution is probably
to redesign p2pool to use DAGs. p2pool would be great except for the fact that
the 30 sec share times are (a) long enough to cause significant reward variance
for miners, but (b) short enough to cause hashrate loss fro
1. I think we should limit the sum of the block and witness data to
nBlockMaxSize*7/4 per block, for a maximum of 1.75 MB total. I don't like the
idea that SegWit would give us 1.75 MB of capacity in the typical case, but we
have to have hardware capable of 4 MB in adversarial conditions (i.e.
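A minimal sketch of that limit as a single check (my own illustration, not an actual Core patch; nBlockMaxSize is the base limit named above):

    #include <cstdint>

    static const uint64_t nBlockMaxSize = 1000000;  // existing 1 MB base limit

    // Reject any block whose combined base-plus-witness serialization
    // exceeds nBlockMaxSize * 7/4 (1.75 MB), however the total splits
    // between the two.
    bool CheckCombinedSize(uint64_t nBaseSize, uint64_t nWitnessSize) {
        return nBaseSize + nWitnessSize <= nBlockMaxSize * 7 / 4;
    }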
This means that a server supporting SW might only hear of the tx data and not
get the signature data for some transactions, depending on how the relay rules
worked (e.g. if the SW peers had higher minrelaytxfee settings than the legacy
peers). This would complicate fast block relay code like IBL
On Dec 9, 2015, at 7:50 AM, Jorge Timón wrote:
> I don't understand. SPV nodes won't think they are validating transactions
> with the new version unless they adapt to the new format. They will be simply
> unable to receive payments using the new format if it is a softfork (although
> as said
On Dec 9, 2015, at 7:48 AM, Luke Dashjr wrote:
> How about we pursue the SegWit softfork, and at the same time* work on a
> hardfork which will simplify the proofs and reduce the kludgeyness of merge-
> mining in general? Then, if the hardfork is ready before the softfork, they
> can both go tog
On Dec 9, 2015, at 8:09 AM, Gregory Maxwell wrote:
> On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim wrote:
>
> By contrast it does not reduce the safety factor for the UTXO set at
> all; which most hold as a much greater concern in general;
I don't agree that "most" hold UTXO as a much grea
On Dec 8, 2015, at 6:02 AM, Gregory Maxwell via bitcoin-dev
wrote:
> The particular proposal amounts to a 4MB blocksize increase at worst.
I understood that SegWit would allow about 1.75 MB of data in the average case
while also allowing up to 4 MB of data in the worst case. This means that th
Agree. This data does not belong in the coinbase. That space is for miners to
use, not devs.
I also think that a hard fork is better for SegWit, as it reduces the size of
fraud proofs considerably, makes the whole design more elegant and less
kludgey, and is safer for clients who do not upgrade
I am leaning towards supporting a can kick proposal. Features I think are
desirable for this can kick:
0. Block size limit around 2 to 4 MB. Maybe 3 MB? Based on my testnet data, I
think 3 MB should be pretty safe.
1. Hard fork with a consensus mechanism similar to BIP101
2. Approximately 1 or
It appears you're using the term "compression ratio" to mean "size reduction".
A compression ratio is the ratio (compressed / uncompressed). A 1 kB file
compressed with a 10% compression ratio would be 0.1 kB. It seems you're using
(1 - compressed/uncompressed), meaning that the compressed file
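The two definitions side by side (my own example, reusing the 1 kB figure from above):

    #include <cstdio>

    int main() {
        const double uncompressed = 1000.0;  // 1 kB file
        const double compressed = 100.0;     // output of "10% ratio" compression
        const double ratio = compressed / uncompressed;  // 0.10
        const double reduction = 1.0 - ratio;            // 0.90
        std::printf("compression ratio = %.0f%%, size reduction = %.0f%%\n",
                    ratio * 100.0, reduction * 100.0);
        return 0;
    }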
Data compression adds latency and reduces predictability, so engineers have
decided to leave compression to application layers instead of transport layer
or lower in order to let the application designer decide what tradeoffs to make.
On Nov 11, 2015, at 10:49 AM, Marco Pontello via bitcoin-dev
Quick observation: block transmission would be compress-once,
send-multiple-times, which makes the tradeoff a little better.
On Oct 28, 2015, at 12:13 AM, Luke Dashjr wrote:
> On Wednesday, October 28, 2015 4:26:52 AM Jonathan Toomim via bitcoin-dev
> wrote:
>
> This is all in the realm of node policy, which must be easy to
> modify/customise in a flexible manner. So simplifying other code in a way t
Assigning 5% of block space based on bitcoin-days destroyed (BDD) and the other
95% based on fees seems like a rather awkward approach to me. For one thing, it
means two code paths in pretty much every procedure dealing with a constrained
resource (e.g. mempool, CNB). This makes code harder to
You may want to add a cron job to restart bitcoind every day or two as a damage
control mechanism until we figure this out.
On Oct 22, 2015, at 9:06 AM, Multipool Admin wrote:
> This is a real issue. The bitcoind process is getting killed every few days
> when it reaches around 55gb of usage
The method I was using was essentially
grep VmRSS /proc/$pid/status
Comparing these two methods, I get
Your method (PSS):
2408313
My method (RSS):
VmRSS: 2410396 kB
On Oct 21, 2015, at 12:29 AM, Tom Zander wrote:
> On Tuesday 20 Oct 2015 20:01:16 Jonathan Toomim wrote:
Multipool Admin wrote:
>> My nodes are continuously running getblocktemplate and getinfo, and I also
>> suspected the issue is in either gbt or the rpc server.
>>
>> The instance only takes a few hours to get up to that memory usage.
>>
>> On Oct 18, 2015 8:59 AM
> My nodes are continuously running getblocktemplate and getinfo, and I also
> suspected the issue is in either gbt or the rpc server.
>
> The instance only takes a few hours to get up to that memory usage.
>
> On Oct 18, 2015 8:59 AM, "Jonathan Toomim via bitcoin-dev"
> wrote:
> On
On Oct 14, 2015, at 2:39 AM, Wladimir J. van der Laan wrote:
> This is *most likely* the mempool, but is just not reported correctly.
I did some testing with PR #6410's better mempool reporting. The improved
reporting suggests that actual in-memory usage ("usage":) by CTxMemPool is
about 2.5x t