Hi all!
On 2015-05-12 21:03, Gregory Maxwell wrote:
> Summarizing from memory:
In the context of this discussion, let me also restate an idea I've
proposed in Bitcointalk for this. It is probably not perfect and could
surely be adapted (I'm interested in that), but I think it meets
most/all of t
I like the reuse with negative numbers more than the current proposal
because it doesn't imply bigger scripts. If all problems that may
arise can be solved, that is.
If we went that route, we would start with the initial CLTV too.
But I don't see many strong arguments in favor of using the current
I think it's fair to say no one knows how to make a consensus that
works in a decentralised fashion that doesn't weaken the bitcoin
security model without proof-of-work, for now.
I am presuming Gavin is just saying, in the context of not pre-judging
the future, that maybe in the far future another inno
This is exactly the sort of solution I was hoping for. It seems this is the
minimal modification to make it work, and, if someone was willing to work
with me, I would love to help implement this.
My only concern would be that if the --max-size flag is not included, then this
delivers significantly les
On Tue, May 12, 2015 at 8:03 PM, Gregory Maxwell wrote:
>
> (0) Block coverage should have locality; historical blocks are
> (almost) always needed in contiguous ranges. Having random peers
> with totally random blocks would be horrific for performance; as you'd
> have to hunt down a working pe
Disclaimer: I don't know anything about Bitcoin.
> 2) Proof-of-idle supported (I wish Tadge Dryja would publish his
proof-of-idle idea)
> 3) Fees purely as transaction-spam-prevention measure, chain security via
alternative consensus algorithm (in this scenario there is very little
mining).
FYI on behalf of jgarzik...
-- Forwarded message --
From: Jeff Garzik
Date: Tue, May 12, 2015 at 4:48 PM
Subject: Re: [Bitcoin-development] Proposed additional options for pruned
nodes
To: Adam Weiss
Maybe you could forward my response to the list as an FYI?
On Tue, May 12, 2
This seems like a good place to add in an idea I had about
partially-connected nodes that are able to throttle bandwidth demands.
While we will be having partial-blockchain nodes with a spectrum of
storage options, the requirement to be connected is somewhat binary. I
think many users manually throt
On Tue, May 12, 2015 at 08:38:27PM +, Luke Dashjr wrote:
> It should actually be straightforward to softfork RCLTV in as a negative CLTV.
> All nLockTime are >= any negative number, so a negative number makes CLTV a
> no-op always. Therefore, it is clean to define negative numbers as relative
On Tue, May 12, 2015 at 8:10 PM, Jeff Garzik wrote:
> True. Part of the issue rests on the block sync horizon/cliff. There is a
> value X which is the average number of blocks the 90th percentile of nodes
> need in order to sync. It is sufficient for the [semi-]pruned nodes to keep
> X blocks,
I suppose this raises two questions:
1) why not have a partial archive store the most recent X% of the
blockchain by default?
2) why not include some sort of torrent in QT, to mitigate this risk? I
don't think this is necessarily a good idea, but I'd like to hear the
reasoning.
On May 12, 2015 4:11
It should actually be straightforward to softfork RCLTV in as a negative CLTV.
All nLockTime are >= any negative number, so a negative number makes CLTV a
no-op always. Therefore, it is clean to define negative numbers as relative
later. It's also somewhat obvious to developers, since negative nu
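The soft-fork argument above can be illustrated with a small sketch. This is not Bitcoin Core code; it is a simplified model of the CLTV comparison, and `input_age` is a hypothetical name for the spent output's confirmation count:

```python
# Hedged sketch: why a negative CLTV argument is a no-op under the
# current rule, making it safe to repurpose for relative locktimes.

def cltv_passes(stack_top: int, tx_nlocktime: int) -> bool:
    """Simplified absolute-CLTV rule: the script argument must not
    exceed the transaction's nLockTime (type checks omitted)."""
    return stack_top <= tx_nlocktime

def rcltv_passes(stack_top: int, tx_nlocktime: int,
                 input_age: int) -> bool:
    """Proposed soft-fork rule: non-negative arguments keep the old
    absolute semantics; negative arguments are reinterpreted as a
    relative locktime in confirmations of the spent output."""
    if stack_top >= 0:
        return cltv_passes(stack_top, tx_nlocktime)
    # Old nodes always accept this case (any nLockTime >= a negative
    # number), so tightening it only ever rejects, i.e. a soft fork.
    return input_age >= -stack_top

# A negative argument never fails under the old rule:
assert all(cltv_passes(-5, n) for n in (0, 1, 500_000_000))
```

Because old nodes treat every negative argument as an automatic pass, new nodes enforcing the stricter relative rule can only reject transactions old nodes accept, never the reverse.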
True. Part of the issue rests on the block sync horizon/cliff. There is a
value X which is the average number of blocks the 90th percentile of nodes
need in order to sync. It is sufficient for the [semi-]pruned nodes to
keep X blocks, after which nodes must fall back to archive nodes for older
d
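The sync horizon can be modelled in a few lines. This is a toy illustration of the idea above, with an assumed retention window `X` (the thread leaves the actual value open):

```python
# Toy model of the block sync horizon: pruned nodes keep only the most
# recent X blocks; anything older must come from an archive node.

X = 2016  # assumed retention window in blocks, for illustration only

def can_serve(tip_height: int, requested_height: int,
              is_archive: bool) -> bool:
    """Whether a node can serve a block at `requested_height`."""
    if is_archive:
        return requested_height <= tip_height
    # Pruned nodes serve only the tail of the chain.
    return tip_height - X < requested_height <= tip_height
```

If the 90th percentile of syncing nodes needs at most X blocks, most sync traffic never touches an archive node at all.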
On Tue, May 12, 2015 at 7:38 PM, Jeff Garzik wrote:
> One general problem is that security is weakened when an attacker can DoS a
> small part of the chain by DoS'ing a small number of nodes - yet the impact
> is a network-wide DoS because nobody can complete a sync.
It might be more interesting
Having thousands of utxos floating around for a single address is clearly a
bad thing - it creates a lot of memory load on bitcoin nodes.
However, having only one utxo for an address is also a bad thing, for
concurrent operations.
Having "several" utxos available to spend is good for parallelism,
Yet this holds true in our current assumptions of the network as well: that
it will become a collection of pruned nodes with a few storage nodes.
A hybrid option makes this better, because it spreads the risk, rather than
concentrating it in full nodes.
On May 12, 2015 3:38 PM, "Jeff Garzik" wrot
One general problem is that security is weakened when an attacker can DoS a
small part of the chain by DoS'ing a small number of nodes - yet the impact
is a network-wide DoS because nobody can complete a sync.
On Tue, May 12, 2015 at 12:24 PM, gabe appleton
wrote:
> 0, 1, 3, 4, 5, 6 can be solv
Gavin and @NicolasDorier have a point: If there isn't actually scarcity of
NOPs because OP_NOP10 could become OP_EX (if we run out), it makes
sense to choose the original unparameterised CLTV version #6124 which also
has been better tested. It's cleaner, more readable and results in a
slightly smal
0, 1, 3, 4, 5, 6 can be solved by looking at chunks chronologically, i.e.
give the signed (by sender) hash of the first and last block in your range.
This is less data dense than the idea above, but it might work better.
That said, this is likely a less secure way to do it. To improve upon that,
a
I have no strong opinion, but a slight preference for separate opcodes.
Reason: given the current progress, they'll likely be deployed
independently, and maybe the end result is not something that cleanly fits
the current CLTV argument structure.
---
This saves us opcodes for later, but it's uglier and produces slightly
bigger scripts.
If we're convinced it's worth it, seems like the right way to do it,
and certainly cltv and rcltv/op_maturity are related.
But let's not forget that we can always use this same trick with the
last opcode to get 2^64
It's a little frustrating to see this just repeated without even
paying attention to the desirable characteristics from the prior
discussions.
Summarizing from memory:
(0) Block coverage should have locality; historical blocks are
(almost) always needed in contiguous ranges. Having random peers
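The locality requirement can be sketched as a coverage problem. In this hypothetical scheme each pruned peer advertises one contiguous (start, end) block range, so a syncing node covers any historical span with a handful of requests instead of hunting a random peer per block:

```python
# Hypothetical sketch of locality-aware block coverage.

def plan_download(span: tuple[int, int],
                  peers: dict[str, tuple[int, int]]):
    """Greedily cover `span` using peers' contiguous ranges. Returns
    (peer, from_height, to_height) assignments, or raises if some
    height is not served by anyone."""
    need, end = span
    plan = []
    while need <= end:
        # Pick the peer covering `need` whose range reaches furthest.
        best = max(
            ((name, hi) for name, (lo, hi) in peers.items()
             if lo <= need <= hi),
            key=lambda t: t[1],
            default=None,
        )
        if best is None:
            raise RuntimeError(f"no peer serves height {need}")
        name, hi = best
        upto = min(hi, end)
        plan.append((name, need, upto))
        need = upto + 1
    return plan

peers = {"a": (0, 99_999), "b": (100_000, 250_000), "c": (200_000, 300_000)}
# Covering 50k..260k needs only three contiguous requests.
print(plan_download((50_000, 260_000), peers))
```

With random per-block assignment, the same span would require contacting a different peer for nearly every block.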
On 11/05/2015 04:25 p.m., Leo Wandersleb wrote:
> I assume that 1 minute block target will not get any substantial support but
> just in case only few people speaking up might be taken as careful support of
> the idea, here's my two cents:
>
> In mining, stale shares depend on delay between pool/
See the Open Assets protocol specification for technical details on how a
colored coin (of the Open Asset flavor) is represented in a bitcoin
transaction.
https://github.com/OpenAssets/open-assets-protocol
http://www.CoinPrism.com also has a discussion forum where some colored
coin devs hang out.
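For a rough idea of what distinguishes a colored-coin transaction, here is a sketch of the Open Assets marker output payload following the spec linked above: the OAP marker, a version, LEB128-encoded asset quantities, and optional metadata. This is an illustration, not a validated implementation.

```python
# Sketch of the Open Assets marker payload, which is embedded in a
# zero-value OP_RETURN output; everything else in the transaction is
# ordinary Bitcoin.

def leb128(n: int) -> bytes:
    """Unsigned LEB128, as used for asset quantities."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # set continuation bit
        else:
            out.append(byte)
            return bytes(out)

def varint(n: int) -> bytes:
    """Bitcoin-style compact size (small values only, for brevity)."""
    assert n < 0xFD
    return bytes([n])

def marker_payload(quantities: list[int], metadata: bytes = b"") -> bytes:
    return (b"\x4f\x41"            # "OA" marker
            + b"\x01\x00"          # protocol version 1.0
            + varint(len(quantities))
            + b"".join(leb128(q) for q in quantities)
            + varint(len(metadata)) + metadata)

# An issuance of 300 units of an asset:
print(marker_payload([300]).hex())
```

To ordinary nodes this output is just an OP_RETURN; only Open Assets-aware software interprets the payload, which is why colored-coin transactions are valid Bitcoin transactions.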
On Tue, May 12, 2015 at 6:16 PM, Peter Todd wrote:
>
> Lots of people are tossing around ideas for partial archival nodes that
> would store a subset of blocks, such that collectively the whole
> blockchain would be available even if no one node had the entire chain.
>
A compact way to describe
On Tue, May 12, 2015 at 09:05:44AM -0700, Jeff Garzik wrote:
> A general assumption is that you will have a few archive nodes with the
> full blockchain, and a majority of nodes are pruned, able to serve only the
> tail of the chains.
Hmm?
Lots of people are tossing around ideas for partial archi
Yes, but that just increases the incentive for partially-full nodes. It
would add to the assumed-small number of full nodes.
Or am I misunderstanding?
On Tue, May 12, 2015 at 12:05 PM, Jeff Garzik wrote:
> A general assumption is that you will have a few archive nodes with the
> full blockchain
I think proof-of-idle had a potentially serious problem when I last looked at
it. The risk is that a largish miner can use everyone else's idle time to
construct a very long chain; it's also easy enough for them to make it appear
to be the work of a large number of distinct miners. Given that th
Added back the list, I didn't mean to reply privately:
Fair enough, I'll try to find time in the next month or three to write up
four plausible future scenarios for how mining incentives might work:
1) Fee-supported with very large blocks containing lots of tiny-fee
transactions
2) Proof-of-idle
A general assumption is that you will have a few archive nodes with the
full blockchain, and a majority of nodes are pruned, able to serve only the
tail of the chains.
On Tue, May 12, 2015 at 8:26 AM, gabe appleton wrote:
> Hi,
>
> There's been a lot of talk in the rest of the community about h
Hi,
There's been a lot of talk in the rest of the community about how the 20MB
step would increase storage needs, and that switching to pruned nodes
(partially) would reduce network security. I think I may have a solution.
There could be a hybrid option in nodes. Selecting this would do the
follo
Thank you for your answer.
I agree that a lot of things will change, and I am not asking for a
prediction of technological developments; prediction is certainly
impossible. What I would like to have is some sort of reference scenario
for the future of Bitcoin. Something a bit like the Standard Mod
Thank You,
I know this, but I want more details about the inputs/outputs, or the
scripts of the input/output, and how I will proceed in the code.
Thanks to all for replying
2015-05-12 11:47 GMT+02:00 Patrick Mccorry (PGR) <
patrick.mcco...@newcastle.ac.uk>:
> There is no difference to the trans
Hello everybody,
I want to know what the difference is, technically, between a bitcoin
transaction and a colored coins transaction.
Thanks