> He also said that the equation for miners has many variables, as it
> should.
He only said that AFTER I called him on his bullshit.
Before that he wrote it as if there were 100% certainty that only the party
producing big blocks is punished:
"That orphan rate increase will go to whoever is produc
>
> Yes, if you are on a slow network then you are at a (slight) disadvantage.
> So?
>
Chun mentioned that his pool is on a slow network, and thus bigger blocks
put it at a disadvantage. (Orphan rate is proportional to block size.)
You said that no, on the contrary, those who make big blocks have a
disadvantage.
> That orphan rate increase will go to whoever is producing the 20MB blocks,
> NOT you.
>
This depends on how miners are connected.
E.g. suppose there are three miners: A and B have fast connectivity between
them, and C has a slow network.
Suppose that A mines a block and B receives it in 1 second
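To make the orphan-rate intuition concrete, here is a minimal sketch (the
Poisson arrival model, the 600-second average interval, and the delay
numbers are my assumptions, not something from the thread): a miner whose
view of the latest block is delayed risks building on a stale parent, and
that risk grows with the delay.

    import math

    BLOCK_INTERVAL = 600.0  # average seconds between blocks (assumption)

    def orphan_probability(propagation_delay):
        # Chance that someone else finds a block before the latest block
        # even reaches this miner, assuming Poisson block arrivals.
        return 1.0 - math.exp(-propagation_delay / BLOCK_INTERVAL)

    print(orphan_probability(1.0))   # ~0.0017 (A and B, 1-second link)
    print(orphan_probability(30.0))  # ~0.049  (C, assumed 30-second delay)

With delays like these, C's orphan risk is roughly 30x that of A and B,
regardless of who produced the big block.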
>
> Stop trying to dictate block growth limits. Block size will be determined
> by competition between miners and availability of transactions, not through
> hard-coded limits.
>
Do you even game theory, bro? It doesn't work that way.
Mike Hearn described the problem in this article:
https://medi
> Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
>
> Do you anticipate linear growth?
>
It's safe to say that absolutely nobody can predict the actual growth with
any degree of accuracy.
I believe that linear growth compares very favorably to other alternatives
(a rough comparison is sketched after this list):
1. Exponent
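For illustration only (the starting size, the doubling period, and the time
frame are my assumptions, not part of the post), a rough comparison of a
linear schedule with an exponential one:

    # Linear: +1 MB per year. Exponential: doubling every two years.
    # Both start at 1 MB; all parameters are assumptions for illustration.
    start_mb = 1.0
    for year in range(0, 21, 2):
        linear = start_mb + year
        exponential = start_mb * 2 ** (year / 2.0)
        print("year %2d: linear %5.1f MB, exponential %7.1f MB"
              % (year, linear, exponential))

The exponential curve stays modest for the first few years and then dwarfs
the linear one, which is the contrast being drawn here.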
> Why 2 MB ?
>
Why 20 MB? Do you anticipate 20x transaction count growth in 2016?
Why not grow it by 1 MB per year?
This is a safer option; I don't think that anybody claims that 2 MB blocks
will be a problem.
And in 10 years, when we get to 10 MB, we'll have more evidence as to
whether the network can
> I don't really see how you can protect against total isolation of a node
> (POS or POW). You would need to find an alternative route for the
> information.
>
"Alternative route for the information" is the whole point of weak
subjectivity, no?
PoS depends on weak subjectivity to prevent "long te
Let's consider a concrete example:
1. User wants to accept Bitcoin payments, as his customers want this.
2. He downloads a recent version of Bitcoin Core, checks hashes and so on.
(Maybe even builds from source.)
3. Lets it sync for several hours or days.
4. After the wallet is synced, he gives hi
> With POW, a new node only needs to know the genesis block (and network
> rules) to fully determine which of two chains is the strongest.
>
But this only matters if a new node has access to the globally strongest
chain.
If an attacker is able to block connections to legitimate nodes, a new node
will happ
Adaptive schedules, i.e. those where the block size limit depends not only
on block height but on other parameters as well, are surely attractive in
the sense that the system can adapt to actual use, but they also open up
the possibility of manipulation.
E.g. one of the mining companies might try to ban
Just to add to the noise, did you consider linear growth?
Unlike exponential growth, it approximates diminishing returns (i.e. tech
advances become slower with time). And unlike a single step, it will give
people time to adapt to new realities.
E.g. 2 MB in 2016, 3 MB in 2017 and so on.
So in 20 ye
> Your "scorched earth" plan is aptly named, as it's guaranteed to make
> unconfirmed payments useless.
>
"Scorched earth" makes no sense by itself. However, it can be a part of a
bigger picture. Imagine an insurance service which will make sure that
merchants are compensated for every scorched-ea
> "The approach" is how Bitcoin has always worked.
>
Mike, you're making an "it worked before, and thus it will work in the
future" kind of argument.
It is an extremely shitty kind of argument, and it can be used to justify
any kind of bullshit.
E.g. any scamcoin which hasn't yet collapsed will wo
> Yes, like any P2P network Bitcoin cannot work if a sufficiently large
> number of miners decide to attack it.
>
1. They won't be attacking Bitcoin; they will be attacking merchants who
accept payments with 0 confirmations. This attack has nothing to do with
the Bitcoin consensus mechanism (as the Bitcoin prot
> Miners are *not* incentivised to earn the most money in the next block
> possible. They are incentivised to maximise their return on investment.
>
This would be right if you assume that all Bitcoin miners act as a single
entity. In that case it is true that that entity's goal is to maximize
over
> To remain useful as border router, the replace-by-fee patched core should
> only relay double spend if it actually replaces an earlier transaction, as
> otherwise the replace logic that is according to your commit more than just
> fee comparison, would have to be replicated in the proprietary sta
> The goal is to have an opportunity cost to breaking the rules.
>
You might as well have said "The goal is to implement it in the specific
way I want it to be implemented."
This makes zero sense.
You aren't even trying to compare the properties of different possible
implementations, you just outright
> I think what Gareth was getting at was that with client-side validation
> there can be no concept of a soft-fork. And how certain are you that the
> consensus rules will never change?
>
Yes, it is true that you can't do a soft-fork, but you can do a hard-fork.
Using scheduled updates: client sim
>
> "Secure" and "client side validation" don't really belong in the same
> sentence, do they?
>
Well, client-side validation is mathematically secure, while SPV is
economically secure.
I.e. it is secure only if you make several assumptions about the economics
of the whole thing.
In my opinion the former
> From the introduction "[...]Because signers prove computational work,
> rather than proving secret knowledge as is typical for digital signatures,
> we refer to them as miners. To achieve stable consensus on the blockchain
> history, economic incentives are provided where miners are rewarded
> For those following this thread, we have now written a paper
> describing the side-chains, 2-way pegs and compact SPV proofs.
> (With additional authors Andrew Poelstra & Andrew Miller).
>
> http://blockstream.com/sidechains.pdf
Haven't seen any material discussion of this paper in this mailing list
> This thread is, in my opinion, a waste of time. It's yet again
> another perennial bikeshedding proposal brought up many times since at
> least 2011, suggesting random changes for
> non-existing(/not-yet-existing) issues.
>
> There is a lot more complexity to the system than the subsidy schedule
> For the sake of argument, lets assume that somehow (quite unlikely)
Why is it unlikely? Do you believe that the cost of electricity cannot be
higher than expected mining revenue?
Or do you expect miners to keep mining when it costs them money?
> half the mining equipment gets shut off.
> The
> We had a halving, and it was a non-event.
> Is there some reason to believe next time will be different?
>
Yes.
When the market is rapidly growing, margins can be relatively high because
of the limited amount of capital being invested, or the introduction of
more efficient technologies.
However, we s
> The hashpower market is maturing in the direction of
> financial instruments, where the owner of the hashpower is not
> necessarily the one receiving income. These are becoming tradeable
instruments,
Meni Rosenfeld issued tradeable mining bonds back in 2012:
https://bitcointalk.org/index.php
> "Flag day" herd behavior like this is unlikely for well informed and
> well prepared market participants.
>
It is simply rational to turn your mining device off until the difficulty
adjusts.
Continuing to mine for 2+ weeks when it costs you money is altruistic
behavior; we shouldn't rely on that.
---
# Death by halving
## Summary
If miners' income margins are less than 50% (which is a healthy situation
when mining hardware is readily available), we might experience a
catastrophic loss of hashpower (and, more importantly, a catastrophic loss
of security) after a reward halving.
## A simple model
Le
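The excerpt is cut off here, but a minimal sketch of the arithmetic the
summary implies (the concrete numbers are assumptions for illustration):
revenue halves overnight while operating costs do not, so mining stays
profitable only if the pre-halving margin was above 50%.

    def profitable_after_halving(revenue, operating_cost):
        # Margin = (revenue - cost) / revenue. After a halving, revenue
        # drops to revenue / 2 while costs stay the same, so mining remains
        # profitable only if the pre-halving margin exceeded 50%.
        return revenue / 2 > operating_cost

    # Assumed numbers: margin 40% -> mining at a loss after the halving.
    print(profitable_after_halving(100.0, 60.0))  # False
    # Assumed numbers: margin 60% -> still profitable after the halving.
    print(profitable_after_halving(100.0, 40.0))  # True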
I've heard about this idea from TierNolan. Here's some quick and dirty
analysis:
Suppose the last known block claimed a large tx fee of L. A miner who owns
1/N of the total hashrate needs to choose between two strategies:
1. Mine on top of that block and win usual reward R with probability 1/N.
2.
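The second strategy is cut off above; assuming (my reading of the idea, not
a quote) that it is to try to mine a competing block at the same height and
claim L yourself, a rough expected-value comparison might look like the
sketch below, with the chance of winning the resulting chain race left as a
free parameter.

    def ev_extend(R, N):
        # Strategy 1: mine on top of the last block; win the usual reward R
        # with probability 1/N.
        return R / float(N)

    def ev_replace(R, L, N, p_win_race):
        # Strategy 2 (assumed): mine a competing block at the same height to
        # claim L as well; it only pays off if your fork then wins the race.
        # p_win_race is a free parameter, not derived here.
        return (R + L) / float(N) * p_win_race

    # Assumed numbers: 25 BTC subsidy, 200 BTC of fees in the last block,
    # 10% of the hashrate, 40% chance of winning the race.
    print(ev_extend(25, 10))             # 2.5
    print(ev_replace(25, 200, 10, 0.4))  # 9.0

Under these assumed numbers the replacement strategy has the higher
expected value.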
>
> A distinction there is that they can only become invalid via a
> conflict— replaced by another transaction authored by the prior
> signers. If no other transaction could be created (e.g. you're a
> multisigner and won't sign it again) then there is no such risk.
You need to check transaction'
>
> I can't remember who I saw discussing this idea. Might have been Vitalik
> Buterin?
>
Yes, he described it in an article a couple of months ago:
http://blog.ethereum.org/2014/01/15/slasher-a-punitive-proof-of-stake-algorithm/
but it is an old idea.
For example, I've mentioned punishment of t
It is also useful for betting: an oracle will associate a hash with each
possible outcome, and when the outcome is known, it will reveal the
corresponding preimage, which will unlock the transaction (sketched below).
This approach has several advantages over the approach with a multi-sig
script:
1. oracle doesn't need to be inv
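A bare-bones sketch of the commit/reveal mechanics in plain Python (hashlib
only; the function names are mine, and the Bitcoin script plumbing that
would actually enforce the hash lock is omitted): the oracle publishes one
hash per outcome in advance and later reveals only the preimage for the
outcome that occurred.

    import hashlib
    import os

    def commit(preimage):
        # Hash the oracle publishes for an outcome before the event.
        return hashlib.sha256(preimage).digest()

    def preimage_matches(candidate, published_hash):
        # What a hash-locked script effectively checks when the bet settles.
        return hashlib.sha256(candidate).digest() == published_hash

    # The oracle picks a secret preimage per outcome and publishes the hashes.
    preimages = {"outcome_a": os.urandom(32), "outcome_b": os.urandom(32)}
    published = dict((name, commit(p)) for name, p in preimages.items())

    # After the event the oracle reveals exactly one preimage; only the
    # transaction locked to that outcome's hash becomes spendable.
    revealed = preimages["outcome_a"]
    print(preimage_matches(revealed, published["outcome_a"]))  # True
    print(preimage_matches(revealed, published["outcome_b"]))  # False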
>
> These sorts of proposals are all just ways of saying block chains kind of
> suck and we should go back to using trusted third parties.
>
No.
Different approaches have different trade-offs, and thus different areas of
applicability.
Proof-of-work's inherent disadvantage is that it takes some t
> And it still would. Non-collusive miners cast votes based on the outcome
> of their own attempts to double spend.
>
The individually rational strategy is to vote for coinbase reallocation on
every block.
Yes, in that case nobody will get a reward. It is similar to the prisoner's
dilemma: the equilibrium has
This is outright ridiculous.
Zero-confirmation double-spending is a small problem, and possible
solutions are known. (E.g. trusted third party + multi-sig addresses for
small-value transactions.)
On the other hand, protocol changes like the ones described above might
have game-theoretical implications whi
> At this point, I don't think what you are doing is even colored coins
> anymore. You might want to look into Counterparty or Mastercoin.
>
Nope, it's still colored coins. The difference between the colored coin
model and the Mastercoin model is that colored coins are linked to
transaction outputs, while
>
> 1) It's more private. Bloom filters gives away quite accurate statistical
> information about what coins you own to whom ever you happen to be
> connected too. An attacker can easily use this to deanonymize you even if
> you don't reuse addresses; Tor does not help much against this attack.
>
This is beyond ridiculous...
A color kernel which works with padding is still quite simple. I think we
have an extra 10-50 lines of code to handle padding in coloredcoinlib.
Essentially we have a couple of lines like this:
value_wop = tx.outputs[oi].value - padding
(value_wop means "value without padding")
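To make the padding arithmetic concrete, a toy sketch (not coloredcoinlib's
actual code; the padding constant and names are assumptions): the satoshi
value of an output is the colored value plus a fixed padding, so the kernel
subtracts the padding when reading a colored value and adds it back when
constructing outputs.

    # Toy illustration of padded coloring; not actual coloredcoinlib code.
    PADDING = 10000  # assumed padding, in satoshis

    def value_without_padding(output_satoshis):
        # Colored value carried by an output (the value_wop above).
        return output_satoshis - PADDING

    def satoshis_for(colored_value):
        # Satoshi amount to place on an output for a given colored value.
        return colored_value + PADDING

    print(value_without_padding(10007))  # 7 colored units
    print(satoshis_for(7))               # 10007 satoshis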
>
> Have you seen the padded order-based coloring scheme worked out here?
>
> https://github.com/bitcoinx/colored-coin-tools/wiki/colored_coins_intro
Just to clarify, a variant of padded order-based coloring called epobc is
already implemented in coloredcoinlib (which is used by
ngcccbase/ChromaW