I actually ask for headers from each peer I’m connected to and then dump them 
into the backend to be sorted out. Is this abusive to the network? I’ve been 
concerned about that as I work on this; it only dawned on me the other night 
that I really shouldn’t use the seed peers for downloading…


I figured that with headers being so tiny, it wouldn’t be a burden to ask for 
them from each peer. I won’t actually end up downloading the full blockchain’s 
worth of headers from every peer; I’m continually getting an updated view of 
the current winning chain before I send out additional header requests to peers.
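As a back-of-envelope check on how cheap per-peer header sync is (the figures below are assumptions: the fixed 80-byte header size, and a chain height of roughly 295,000 blocks as of April 2014):

```python
# Rough cost of fetching the full header chain from a single peer.
HEADER_SIZE = 80          # bytes: fixed size of a Bitcoin block header
CHAIN_HEIGHT = 295_000    # approximate chain height, April 2014

total_bytes = HEADER_SIZE * CHAIN_HEIGHT
print(total_bytes / 1_000_000)  # about 23.6 MB for a full header sync
```

So even asking several peers for full header chains costs a few tens of megabytes each, versus many gigabytes for full blocks.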






From: Tier Nolan
Sent: Monday, April 07, 2014 6:48 PM
To: bitcoin-development@lists.sourceforge.net








On Mon, Apr 7, 2014 at 8:50 PM, Tamas Blummer <ta...@bitsofproof.com> wrote:


You have to load headers sequentially to be able to connect them and determine 
the longest chain.




That isn't strictly true.  If you are connected to some honest nodes, then you 
could download portions of the chain and then connect the various sub-chains 
together.

The protocol doesn't support it, though.  There is no way to ask for the block 
headers of the main-chain block at a given height.
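The sub-chain-connecting idea can be sketched in a few lines: headers arriving out of order (possibly from several peers) are linked by prev-hash, and the tip with the greatest height wins. The data shapes and names here are illustrative, not the wire format:

```python
# Sketch: connect out-of-order headers by prev-hash and pick the best tip.
def best_chain(headers, genesis):
    """headers: dict mapping block hash -> prev hash.
    Returns (tip, height) of the longest chain connecting back to genesis."""
    height = {genesis: 0}

    def resolve(h):
        # Walk back until a block of known height, then fill heights forward.
        path = []
        while h not in height:
            if h not in headers:
                return False         # orphan: ancestry not yet downloaded
            path.append(h)
            h = headers[h]
        base = height[h]
        for hh in reversed(path):
            base += 1
            height[hh] = base
        return True

    best = (genesis, 0)
    for h in headers:
        if resolve(h) and height[h] > best[1]:
            best = (h, height[h])
    return best

# Toy example: genesis <- a <- b, plus one disconnected orphan.
chain = {"a": "genesis", "b": "a", "orphan": "missing"}
print(best_chain(chain, "genesis"))  # ('b', 2)
```

A real node would compare cumulative proof-of-work rather than raw height, but the connecting step is the same.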


Finding one high-bandwidth peer to download the entire header chain 
sequentially is pretty much forced.  The client can switch if there is a 
timeout.


Other peers could be used to download blocks in parallel while the main header 
chain is downloading.  Even if the header download stalled, it wouldn't be that 
big a deal.
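One simple way to parallelize the block download, sketched below with an illustrative chunking scheme (the names and chunk size are assumptions, not anything the protocol specifies):

```python
# Sketch: deal out contiguous chunks of block heights to peers round-robin.
# A real downloader would also re-queue a chunk when its peer times out.
def assign_chunks(start, end, peers, chunk=16):
    """Distribute heights [start, end) across peers in contiguous chunks."""
    out = {p: [] for p in peers}
    heights = list(range(start, end))
    for i in range(0, len(heights), chunk):
        peer = peers[(i // chunk) % len(peers)]
        out[peer].append(heights[i:i + chunk])
    return out

work = assign_chunks(0, 64, ["peerA", "peerB"], chunk=16)
# peerA gets heights 0-15 and 32-47; peerB gets 16-31 and 48-63.
```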






> Blocks can be loaded in random order once you have their order given by the 
> headers.
> Computing the UTXO however will force you to at least temporarily store the 
> blocks unless you have plenty of RAM. 


You only need to store the UTXO set, rather than the entire block chain.



It is possible to generate the UTXO set without doing any signature 
verification.
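The reason this works: building the UTXO set only requires tracking which outpoints are created and spent, which never touches a signature. A minimal sketch with toy transaction tuples (not the wire format):

```python
# Sketch: update a UTXO set from a transaction without any script checks.
def apply_tx(utxo, txid, inputs, outputs):
    """inputs: list of spent outpoints (txid, vout); outputs: list of values."""
    for outpoint in inputs:
        del utxo[outpoint]            # the outpoint must exist, or the tx is invalid
    for vout, value in enumerate(outputs):
        utxo[(txid, vout)] = value    # no signature verification needed here
    return utxo

utxo = {}
apply_tx(utxo, "coinbase1", [], [50])
apply_tx(utxo, "tx1", [("coinbase1", 0)], [20, 30])
# utxo now holds ("tx1", 0) -> 20 and ("tx1", 1) -> 30
```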



A lightweight node could just verify the UTXO set and then do random signature 
verifications.



This keeps disk space and CPU usage reasonably low.  If an invalid transaction 
is added to a block, then a proof could be provided for the bad transaction.



The only slightly difficult thing is confirming inflation.  That can be checked 
on a block-by-block basis when downloading the entire block chain.
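The per-block inflation check amounts to verifying that the coinbase output does not exceed the fees plus the height-dependent subsidy, sketched here in satoshis:

```python
# Sketch of the per-block inflation check (values in satoshis).
HALVING_INTERVAL = 210_000  # blocks between subsidy halvings

def subsidy(height):
    """Block subsidy: 50 BTC, halving every 210,000 blocks."""
    return (50 * 100_000_000) >> (height // HALVING_INTERVAL)

def check_inflation(height, coinbase_value, fees):
    return coinbase_value <= fees + subsidy(height)

print(subsidy(0))        # 5000000000 (50 BTC)
print(subsidy(210_000))  # 2500000000 (25 BTC)
```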




> Regards,
> Tamas Blummer
> http://bitsofproof.com 




On 07.04.2014, at 21:30, Paul Lyon <pml...@hotmail.ca> wrote:





I hope I'm not thread-jacking here, apologies if so, but that's the approach 
I've taken with the node I'm working on.



Headers can be downloaded and stored in any order; the node will make sense of 
what the winning chain is. Blocks don't need to be downloaded in any particular 
order and they don't need to be saved to disk, since the UTXO set is fully 
self-contained. That way the concern of storing blocks for seeding (or not) is 
wholly separated from syncing the UTXO set. This allows me to do the initial 
blockchain sync in ~6 hours when I use my SSD. I only need enough disk space to 
store the UTXO set, plus whatever amount of block data the user wants to store 
for the health of the network.





This project is a bitcoin learning exercise for me, so I can only hope I don't 
have any critical design flaws in there. :)






From: ta...@bitsofproof.com
Date: Mon, 7 Apr 2014 21:20:31 +0200
To: gmaxw...@gmail.com
CC: bitcoin-development@lists.sourceforge.net
Subject: Re: [Bitcoin-development] Why are we bleeding nodes?





Once the headers are loaded first, there is no reason for sequential block loading. 




Validation has to be sequential, but that step can be deferred until the blocks 
before a point are loaded and contiguous.
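That deferred step can be pictured as a validation frontier that only advances while the next height in sequence has been downloaded (names here are illustrative):

```python
# Sketch: blocks arrive out of order; sequential validation advances
# only up to the first gap in the downloaded heights.
def advance(validated_to, downloaded):
    """Return the new highest contiguously-validated height."""
    while validated_to + 1 in downloaded:
        validated_to += 1
        # full sequential validation of block `validated_to` would run here
    return validated_to

have = {1, 2, 3, 5, 6}       # block 4 still missing
print(advance(0, have))      # 3: validation stalls at the gap
```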


Tamas Blummer
http://bitsofproof.com



On 07.04.2014, at 21:03, Gregory Maxwell <gmaxw...@gmail.com> wrote:


On Mon, Apr 7, 2014 at 12:00 PM, Tamas Blummer <ta...@bitsofproof.com> wrote:

therefore I guess it is more handy to return some bitmap of pruned/full
blocks than ranges.


A bitmap also means high overhead and— if it's used to advertise
non-contiguous blocks— poor locality, since blocks are fetched
sequentially.



------------------------------------------------------------------------------
Put Bad Developers to Shame
Dominate Development with Jenkins Continuous Integration
Continuously Automate Build, Test & Deployment
Start a new project now. Try Jenkins in the cloud.
http://p.sf.net/sfu/13600_Cloudbees
_______________________________________________
Bitcoin-development mailing list
Bitcoin-development@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bitcoin-development



