On Fri, May 8, 2015 at 8:33 PM, Mark Friedenbach wrote:
> These rules create an incentive environment where raising the block size has
> a real cost associated with it: a more difficult hashcash target for the
> same subsidy reward. For rational miners that cost must be counter-balanced
> by addit
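A minimal sketch of the incentive Mark describes, under an assumption of my own that the penalty is inverse-proportional to block size (his actual curve is not quoted here); effective_target and BASELINE_SIZE are illustrative names only:

BASELINE_SIZE = 1_000_000  # bytes; assumed baseline size with no penalty

def effective_target(base_target: int, block_size: int) -> int:
    """Harder (smaller) hashcash target for blocks above the baseline size."""
    if block_size <= BASELINE_SIZE:
        return base_target
    # A block twice the baseline must meet a target half as large, so the
    # extra space carries a real cost in expected hashing work.
    return base_target * BASELINE_SIZE // block_size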
On Fri, May 08, 2015 at 08:47:52PM +0100, Tier Nolan wrote:
> On Fri, May 8, 2015 at 5:37 PM, Peter Todd wrote:
>
> > The soft-limit is there so that miners themselves produce smaller blocks; the
> > soft-limit does not prevent other miners from producing larger blocks.
> >
>
> I wonder if having a "miner" flag would be good for the network.
On Sat, May 9, 2015 at 12:00 AM, Damian Gomez wrote:
>
> ...of the following:
>
> the DH_GENERATION would in effect calculate the responses for a total
> overage of the public component, by adding a ternary option in the actual
> DH key (which I have attached to see if you can understand my logic)
It seems to me all this would do is encourage 0-transaction blocks, crippling
the network. Individual blocks don't have a "maximum" block size, they have an
actual block size. Rational miners would pick blocks to minimize difficulty,
lowering the "effective" maximum block size as defined by th
...of the following:
the DH_GENERATION would in effect calculate the responses for a total
overage of the public component, by adding a ternary option in the actual
DH key (which I have attached to see if you can understand my logic)
For Java Practice this will be translated:
public static
In a fee-dominated future, replace-by-fee is not an opt-in feature. When
you create a transaction, the wallet presents a range of fees that it
expects you might pay. It then signs copies of the transaction with spaced
fees from this interval and broadcasts the lowest fee first. In the user
interfac
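A rough sketch of that wallet behaviour, assuming a simple linear fee spacing; SignedTx and sign_tx below are placeholders for the wallet's real signing layer, not an existing API:

from typing import List, NamedTuple

class SignedTx(NamedTuple):
    inputs: tuple
    outputs: tuple
    fee: int  # satoshis

def sign_tx(inputs, outputs, fee) -> SignedTx:
    # Stand-in for the wallet's actual signing code.
    return SignedTx(tuple(inputs), tuple(outputs), int(fee))

def make_fee_ladder(inputs, outputs, fee_min, fee_max, steps=5) -> List[SignedTx]:
    """Pre-sign copies of one payment with evenly spaced fees, lowest first."""
    step = (fee_max - fee_min) / (steps - 1)
    return [sign_tx(inputs, outputs, fee_min + i * step) for i in range(steps)]

The lowest-fee copy is broadcast first; the higher-fee replacements are held back and released only if it fails to confirm.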
That's fair, and we've implemented child-pays-for-parent for spending
unconfirmed inputs in breadwallet. But what should the behavior be when
those options aren't understood/implemented/used?
My argument is that the less risky, more conservative default fallback
behavior should be either non-propa
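For reference, the child-pays-for-parent arithmetic referred to above looks roughly like this (sizes in bytes, fees in satoshis; an illustrative sketch, not breadwallet's actual code):

def cpfp_child_fee(parent_size, parent_fee, child_size, target_rate):
    """Fee the child must carry so the parent+child package reaches target_rate (sat/byte)."""
    package_size = parent_size + child_size
    return max(target_rate * package_size - parent_fee, 0)

# Example: 300-byte parent paying 1,000 sat, 200-byte child, 20 sat/byte target:
# cpfp_child_fee(300, 1000, 200, 20) == 9000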
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine wrote:
> This is a clever way to tie block size to fees.
>
> I would just like to point out though that it still fundamentally is using
> hard block size limits to enforce scarcity. Transactions with below market
> fees will hang in limbo for days and
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo for days and fail, instead of failing immediately
by not propagating, or see
such a contract is a possibility, but why would big owners give an
exclusive right to such pools? It seems to me it'd make sense to offer
those to any miner as long as they get paid a little for it. Especially
when it's as simple as offering an incomplete transaction with the
appropriate SIGHASH fl
Fail, Damian. Not even a half-good attempt.
-Raystonn
On 8 May 2015 3:15 pm, Damian Gomez wrote:
On Fri, May 8, 2015 at 3:12 PM, Damian Gomez wrote:
let me continue my conversation: as the development of these transactions
would be indicated as a ByteArray of
On Fri, May 8,
let me continue my conversation:
as the development of these transactions would be indicated
as a ByteArray of
On Fri, May 8, 2015 at 3:11 PM, Damian Gomez wrote:
>
> Well zombie txns aside, I expect this to be resolved w/ a client side
> implementation using a Merkle-Winternitz OTS in order
On Fri, May 8, 2015 at 3:12 PM, Damian Gomez wrote:
> let me continue my conversation:
>
> as the development of these transactions would be indicated
>
> as a ByteArray of
>
>
> On Fri, May 8, 2015 at 3:11 PM, Damian Gomez wrote:
>
>>
>> Well zombie txns aside, I expect this to be resolved w/
Well zombie txns aside, I expect this to be resolved w/ a client side
implementation using a Merkle-Winternitz OTS in order to prevent the loss
of fee structure through the implementation of this security hash that
will allow for a one-way transaction to continue, according to the TESLA
protocol
Replace by fee is the better approach. It will ultimately replace zombie transactions (due to insufficient fee) with potentially much higher fees as the feature takes hold in wallets throughout the network, and fee competition increases. However, this does not fix the problem of low tps. In fact
Hello,
I was reading some of the thread but can't say I read the entire thing.
I think that it is realistic to consider a block size of 20MB for any block
txn to occur. This is an enormous amount of data (relatively, for a network)
in which the average rate of 10 tps over 10 minutes would allow f
Transactions don't expire. But if the wallet is online, it can periodically
choose to release an already created transaction with a higher fee. This
requires replace-by-fee to be sufficiently deployed, however.
On Fri, May 8, 2015 at 1:38 PM, Raystonn . wrote:
> I have a proposal for wallets suc
Hello. I've seen Greg make a couple of posts online
(https://bitcointalk.org/index.php?topic=1033396.msg11155302#msg11155302
is one such example) where he has mentioned that Pieter has a new
proposal for allowing multiple softforks to be deployed at
The problems with that are larger than time being unreliable. It is no
longer reorg-safe as transactions can expire in the course of a reorg and
any transaction built on the now expired transaction is invalidated.
On Fri, May 8, 2015 at 1:51 PM, Raystonn wrote:
> Replace by fee is what I was ref
Replace by fee is what I was referencing. End-users interpret the old
transaction as expired. Hence the nomenclature. An alternative is a new
feature that operates in the reverse of time lock, expiring a transaction after
a specific time. But time is a bit unreliable in the blockchain
-Raystonn
I have a proposal for wallets such as yours. How about creating all transactions with an expiration time starting with a low fee, then replacing with new transactions that have a higher fee as time passes. Users can pick the fee curve they desire based on the transaction priority they want to adv
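A sketch of what "picking a fee curve" could look like; the priority names and fee-rate numbers below are made-up illustrations, not a recommendation:

FEE_CURVES = {
    # priority: (start sat/byte, ceiling sat/byte, minutes to reach ceiling)
    "low":    (5, 20, 360),
    "normal": (10, 50, 120),
    "urgent": (30, 150, 30),
}

def fee_rate_at(priority: str, elapsed_minutes: float) -> float:
    """Fee rate the wallet should offer after elapsed_minutes, rising linearly."""
    start, ceiling, ramp = FEE_CURVES[priority]
    frac = min(elapsed_minutes / ramp, 1.0)
    return start + frac * (ceiling - start)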
It is my professional opinion that raising the block size by merely
adjusting a constant without any sort of feedback mechanism would be a
dangerous and foolhardy thing to do. We are custodians of a multi-billion
dollar asset, and it falls upon us to weigh the consequences of our own
actions agains
As the author of a popular SPV wallet, I wanted to weigh in, in support of
Gavin's 20MB block proposal.
The best argument I've heard against raising the limit is that we need fee
pressure. I agree that fee pressure is the right way to economize on
scarce resources. Placing hard limits on bloc
On Fri, May 8, 2015 at 5:37 PM, Peter Todd wrote:
> The soft-limit is there so that miners themselves produce smaller blocks; the
> soft-limit does not prevent other miners from producing larger blocks.
>
I wonder if having a "miner" flag would be good for the network.
Clients for general users and mer
Actually I believe that side chains and off-main-chain transactions will be
a critical part for the overall scalability of the network. I was actually
trying to make the point that (insert some huge block size here) will be
needed to even accommodate the reduced traffic.
I believe that it is defi
On Fri, May 8, 2015 at 2:59 PM, Alan Reiner wrote:
>
> This isn't about "everyone's coffee". This is about an absolute minimum
> amount of participation by people who wish to use the network. If our
> goal is really for bitcoin to really be a global, open transaction network
> that makes money
On Fri, May 8, 2015 at 2:20 AM, Matt Whitlock wrote:
> - Perhaps the hard block size limit should be a function of the actual block
> sizes over some
> trailing sampling period. For example, take the median block size among the
> most recent
> 2016 blocks and multiply it by 1.5. This allows Bitc
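That rule reads directly as code; this sketch simply takes the quoted numbers (2016-block window, 1.5x the median) at face value:

import statistics

def max_block_size(recent_sizes):
    """recent_sizes: sizes in bytes of the most recent 2016 blocks."""
    assert len(recent_sizes) == 2016
    return int(statistics.median(recent_sizes) * 1.5)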
On Fri, May 08, 2015 at 03:32:00PM +0300, Joel Joonatan Kaartinen wrote:
> Matt,
>
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed in transactions that are included in the block.
> I think it has potential for both scaling as well as keeping
On Fri, May 08, 2015 at 06:00:37AM -0400, Jeff Garzik wrote:
> That reminds me - I need to integrate the patch that automatically sweeps
> anyone-can-pay transactions for a miner.
You mean anyone-can-spend?
I've got code that does this actually:
https://github.com/petertodd/replace-by-fee-tools/
On Fri, May 08, 2015 at 12:03:04PM +0200, Mike Hearn wrote:
> >
> > * Though there are many proposals floating around which could
> > significantly decrease block propagation latency, none of them are
> > implemented today.
>
>
> With a 20mb cap, miners still have the option of the soft limit.
Adaptive schedules, i.e. those where block size limit depends not only on
block height, but on other parameters as well, are surely attractive in the
sense that the system can adapt to the actual use, but they also open a
possibility of a manipulation.
E.g. one of mining companies might try to ban
On Fri, May 8, 2015 at 10:59 AM, Alan Reiner wrote:
>
> This isn't about "everyone's coffee". This is about an absolute minimum
> amount of participation by people who wish to use the network. If our
> goal is really for bitcoin to really be a global, open transaction network
> that makes mone
On 05/08/2015 01:13 AM, Tom Harding wrote:
> On 5/7/2015 7:09 PM, Jeff Garzik wrote:
>> G proposed 20MB blocks, AFAIK - 140 tps
>> A proposed 100MB blocks - 700 tps
>> For ref,
>> Paypal is around 115 tps
>> VISA is around 2000 tps (perhaps 4000 tps peak)
>>
For reference, I'm not "proposing" 100
Sorry for the spam of the last mail. I hit send by accident.
Assurance contracts are better than simple donations.
Donating to a project means that you always end up losing the money but the
project might still not get funded.
An assurance contract is like Kickstarter, you only get your CC char
This isn't about "everyone's coffee". This is about an absolute minimum
amount of participation by people who wish to use the network. If our
goal is really for bitcoin to really be a global, open transaction
network that makes money fluid, then 7tps is already a failure. If even
5% of the wor
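A back-of-the-envelope reading of that point, under assumptions of my own (roughly 7 billion people, 5% of them using the network, one transaction per person per day):

users = 7_000_000_000 * 0.05          # 350 million users
tx_per_day = users * 1                # one payment each per day
tps_needed = tx_per_day / 86_400      # seconds per day
print(round(tps_needed))              # ~4,050 tps, versus ~3-7 tps today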
Block size scaling should be as transparent and simple as possible, like
pegging it to total transactions per difficulty change.
On Fri, May 8, 2015 at 3:54 PM, Benjamin
wrote:
> AC does not solve the problem. AC works if people gain directly from
> the payment.
Not necessarily.
> Imagine a group of people paying tax - nobody gains from
> paying it. You have to actually need to enforce negative outcomes to
> ena
>> Imagine a group of 1000 people who want to make a donation of 50BTC to
>> something. They all say that they will donate 0.05BTC, but only if everyone
>> else donates.
It still isn't perfect. Everyone has an incentive to wait until the
last minute to pledge. <<
AC does not solve the problem
So, there are several ideas about how to reduce the size of blocks being
sent on the network:
* Matt Corallo's relay network, which internally works by remembering the
last 5000 (I believe?) transactions sent by the peer, and allowing the peer
to backreference those rather than retransmit them insi
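A toy illustration of the backreference idea only; the real relay network protocol differs, and the 5000-entry capacity is just the figure quoted above:

from collections import OrderedDict

class RelayCache:
    def __init__(self, capacity=5000):
        self.capacity = capacity
        self.slots = OrderedDict()        # txid -> slot number
        self.next_slot = 0

    def encode(self, txid, raw_tx):
        """Return ('ref', slot) if the peer already saw the tx, else ('tx', raw_tx)."""
        if txid in self.slots:
            return ("ref", self.slots[txid])
        if len(self.slots) >= self.capacity:
            self.slots.popitem(last=False)  # evict the oldest remembered tx
        self.slots[txid] = self.next_slot
        self.next_slot += 1
        return ("tx", raw_tx)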
Hello,
At DevCore London, Gavin mentioned the idea that we could get rid of
sending full blocks. Instead, newly minted blocks would only be
distributed as block headers plus all hashes of the transactions
included in the block. The assumption would be that nodes already have
the majority of th
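A minimal sketch of the receiving side under that assumption; reconstruct_block and fetch_tx are hypothetical names, not an existing interface:

def reconstruct_block(header, txids, mempool, fetch_tx):
    """mempool: dict of txid -> tx; fetch_tx: callback used only for unknown txids."""
    txs, missing = [], []
    for txid in txids:
        tx = mempool.get(txid)
        if tx is None:
            missing.append(txid)
            tx = fetch_tx(txid)           # extra round trip only for missing txs
        txs.append(tx)
    return {"header": header, "txs": txs, "missing": missing}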
Just to clarify the process.
Pledgers create transactions using the following template and broadcast
them. The p2p protocol could be modified to allow this, or it could be a
separate system.
*Input: 0.01 BTC*
*Signed with SIGHASH_ANYONE_CAN_PAY*
*Output 50BTC*
*Paid to: <1 million> OP_CHECK
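To illustrate how those pledges combine (simplified stand-in data structures, not real transaction objects): because each pledger signs only their own input with the ANYONECANPAY flag against the fixed 50 BTC output, anyone can merge pledges into one valid transaction once enough value has been gathered.

def assemble_assurance_contract(pledges, goal_btc=50.0):
    """pledges: list of {'input': ..., 'value': btc}; returns a merged tx or None."""
    total, chosen = 0.0, []
    for p in pledges:
        chosen.append(p)
        total += p["value"]
        if total >= goal_btc:
            return {"inputs": chosen, "output_btc": goal_btc, "excess": total - goal_btc}
    return None   # goal not reached; the contract cannot be broadcast yet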
On Friday, 8 May 2015, at 8:48 am, Matt Whitlock wrote:
> On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
> > It seems you missed my suggestion about basing the maximum block size on
> > the bitcoin days destroyed in transactions that are included in the block.
> > I think it has
On Friday, 8 May 2015, at 3:32 pm, Joel Joonatan Kaartinen wrote:
> It seems you missed my suggestion about basing the maximum block size on
> the bitcoin days destroyed in transactions that are included in the block.
> I think it has potential for both scaling as well as keeping up a constant
> fe
I like the bitcoin days destroyed idea.
I like lots of the ideas that have been presented here, on the bitcointalk
forums, etc etc etc.
It is easy to make a proposal, it is hard to wade through all of the
proposals. I'm going to balance that equation by completely ignoring any
proposal that isn't
Matt,
It seems you missed my suggestion about basing the maximum block size on
the bitcoin days destroyed in transactions that are included in the block.
I think it has potential for both scaling as well as keeping up a constant
fee pressure. If tuned properly, it should both stop spamming and inc
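The post doesn't give a formula, so the shape and constant below are purely illustrative assumptions about what "basing the limit on bitcoin days destroyed" could mean:

def block_size_limit(transactions, base_limit=1_000_000, bytes_per_bdd=100):
    """Extra block space proportional to bitcoin days destroyed (value x coin age)."""
    bdd = sum(inp["value_btc"] * inp["age_days"]
              for tx in transactions for inp in tx["inputs"])
    return base_limit + int(bdd * bytes_per_bdd)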
On Wednesday 6. May 2015 21.49.52 Peter Todd wrote:
> I'm not sure if you've seen this, but a good paper on this topic was
> published recently: "The Economics of Bitcoin Transaction Fees"
The obvious flaw in this paper is that it talks about a block size in today's
(trivial) data-flow economy an
Matt : I think proposal #1 and #3 are a lot better than #2, and #1 is my
favorite.
I see two problems with proposal #2.
The first problem with proposal #2 is that, as we see in democracies,
there is often a mismatch between people's conscious votes and those same
people's behavior.
Relying on an
There are certainly arguments to be made for and against all of these
proposals.
The fixed 20mb cap isn't actually my proposal at all, it is from Gavin. I
am supporting it because anything is better than nothing. Gavin originally
proposed the block size be a function of time. That got dropped, I s
>
> * Though there are many proposals floating around which could
> significantly decrease block propagation latency, none of them are
> implemented today.
With a 20mb cap, miners still have the option of the soft limit.
I would actually be quite surprised if there were no point along the road
Interesting.
1. How do you know who was first? If one node can figure out where
more transactions happen, it can gain an advantage by being closer to
the source. Mining would not be fair.
2. "A merchant wants to cause block number 1 million to effectively
have a minting fee of 50BTC." - why should he do
That reminds me - I need to integrate the patch that automatically sweeps
anyone-can-pay transactions for a miner.
On Thu, May 7, 2015 at 7:32 PM, Tier Nolan wrote:
> One of the suggestions to avoid the problem of fees going to zero is
> assurance contracts. This lets users (perhaps large merc
Looks like a neat solution, Tier.
>
> Alan argues that 7 tps is a couple orders of magnitude too low
By the way, just to clear this up - the real limit at the moment is more
like 3 tps, not 7.
The 7 transactions/second figure comes from calculations I did years ago,
in 2011. I did them a few months before the "sendmany" command
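A rough reconstruction of where figures in that range come from (my arithmetic and assumed transaction sizes, not Mike's original calculation):

block_bytes = 1_000_000                     # 1 MB block size limit
interval = 600                              # seconds per block, on average
print(block_bytes / 250 / interval)         # ~6.7 tps with small ~250-byte txs
print(block_bytes / 500 / interval)         # ~3.3 tps with larger ~500-byte txs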
Between all the flames on this list, several ideas were raised that did not get
much attention. I hereby resubmit these ideas for consideration and discussion.
- Perhaps the hard block size limit should be a function of the actual block
sizes over some trailing sampling period. For example, take
Why can't we have dynamic block size limit that changes with difficulty, such
as the block size cannot exceed 2x the mean size of the prior difficulty
period?
I recently subscribed to this list so my apologies if this has been addressed
already.
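Read literally, the proposed rule is one line; this sketch assumes "prior difficulty period" means the last 2016 blocks:

def next_size_limit(prior_period_sizes):
    """prior_period_sizes: block sizes in bytes from the previous 2016-block period."""
    return int(2 * sum(prior_period_sizes) / len(prior_period_sizes))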