> 
> Existing implementations out there seem to use whatever nodes are on the 
> network to replicate the information.
> 
> Yes, because that makes them decentralized :)

Yes :)

But couldn't we limit the decentralized network to a set of nodes (my node + nodes 
of persons/organizations I trust) + possibly the nodes they trust?

I have the feeling that limiting the network would reduce the implementation 
cost as well as the complexity of the underlying cryptographic system (I have a 
very naive understanding of such systems, but I know I need them to sign/encrypt 
information).

I have no idea what the minimal number of nodes acceptable for content 
addressing would be. 

>  
> 
> I’d like to control the nodes where it is replicated: my nodes (all my app 
> instances) + nodes of persons I’m exchanging information with + possibly 
> friends of friends.
> 
> We've done recent work on capability-based content addressing, but it's very 
> slow, so you use a two-level cryptosystem: the encrypted header contains the 
> key to decrypt the content, and the header can only be decrypted if your 
> private key has the right level of capabilities on the data item.

Interesting! I need to get more knowledge on that. BTW, I found these 
introductory courses on cryptography and information theory well done 
(https://khanacademy.org/computing/computer-science).
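
If I understand correctly, that two-level idea is basically key wrapping / 
hybrid encryption. Here is the kind of minimal sketch I have in mind, in Python 
with PyNaCl purely for illustration (I don't know the API you actually used, 
and the capability check itself is left out):

# Two-level ("hybrid") scheme: the content is encrypted once with a random
# symmetric key; that key is wrapped in a small header that only the holder
# of the right private key can open.
import nacl.utils
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox

def encrypt_item(content, recipient_public_key):
    content_key = nacl.utils.random(SecretBox.KEY_SIZE)             # one-time symmetric key
    body = SecretBox(content_key).encrypt(content)                   # level 1: the content
    header = SealedBox(recipient_public_key).encrypt(content_key)    # level 2: the key
    return header, body

def decrypt_item(header, body, recipient_private_key):
    content_key = SealedBox(recipient_private_key).decrypt(header)   # open the header...
    return SecretBox(content_key).decrypt(body)                      # ...then the content

recipient = PrivateKey.generate()
header, body = encrypt_item(b"+33 6 12 34 56 78", recipient.public_key)
assert decrypt_item(header, body, recipient) == b"+33 6 12 34 56 78"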

Just thinking out loud:

I don’t actually need a fast implementation, as the default is to have unsynced 
versions that get synchronized on opportunistic connections.

What if each node has, by default, its own private keys plus the public keys of 
all the external nodes I have a relation with?
Wouldn’t that be simpler?

To prototype stuff, I think I can consider naive crypto and even do symmetric 
cryptography (and use the Cryptography package or an NaCl binding). 
I think I’ll take a very naive approach to signing and encrypting. If you know of 
existing applications that could help, please let me know.
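
For the record, the naive prototype I picture is roughly this (again PyNaCl in 
Python as a stand-in for whatever binding I end up with; both nodes already hold 
each other's public keys, no key distribution or capabilities):

# Naive sign-then-encrypt between two nodes that already know each other's
# public keys (illustrative only).
from nacl.public import PrivateKey, Box
from nacl.signing import SigningKey

# Each node holds a signing key pair and an encryption key pair.
my_sign, my_enc = SigningKey.generate(), PrivateKey.generate()
peer_sign, peer_enc = SigningKey.generate(), PrivateKey.generate()

def send(message):
    signed = my_sign.sign(message)                        # signature + message
    return Box(my_enc, peer_enc.public_key).encrypt(signed)

def receive(wire):
    signed = Box(peer_enc, my_enc.public_key).decrypt(wire)
    return my_sign.verify_key.verify(signed)              # the peer would hold my verify key

assert receive(send(b"hello from my node")) == b"hello from my node"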


>  
> 
> What hash function would you use ?
> 
> Anything that is fast, bonus points if it uses special CPU instructions (which 
> you can't use from Pharo, of course ;)), and has the right cryptographic 
> properties. Unless you go into a specific cryptosystem, I'd say it is not 
> important.

:)   ok
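
For the prototype I'll probably just take whatever the standard library gives 
me, e.g. something along these lines (Python's hashlib here, purely to fix ideas):

# Content address = digest of the serialized content; BLAKE2b is fast and has
# the usual cryptographic properties, but any such hash would do.
import hashlib

def content_address(data):
    return hashlib.blake2b(data, digest_size=32).hexdigest()

print(content_address(b"my phone number record, v3"))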

>  
> 
> To get something compatible with IPFS, I’d need something like 
> https://github.com/multiformats/multihash. 
> It looks quite universal to me, as it is self-describing. But any (existing) 
> hash method compatible with content hashing would do the job.
> 
> Interesting, but it looks like a minor issue in the overall scheme. It makes 
> part of your system robust to evolution in the hash function used, but, since 
> the first bytes are not well distributed, can you use it safely to build a 
> multi-hash-function system? Probably not.

Yes, I agree. 

One idea was to use IPFS only for public information shared with everybody 
(even outside the set of nodes I have a relation with). Clearly, not a central point.
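
To make my reading of multihash concrete: the digest is simply prefixed with a 
function code and a length, which is indeed why the first bytes are constant 
rather than well distributed. A sketch, assuming the sha2-256 code 0x12 from the 
spec:

# Minimal multihash encoding for sha2-256: <fn code 0x12><length 0x20><digest>.
import hashlib

def multihash_sha256(data):
    digest = hashlib.sha256(data).digest()
    return bytes([0x12, len(digest)]) + digest

print(multihash_sha256(b"hello").hex())   # always starts with "1220"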

>  
> 
> 
>> I wouldn't be against a slightly higher granularity when dealing with object 
>> transfers, however.
> 
> You mean at the Pharo level? Higher granularity means having more control over 
> the exchange/merge?
> 
> No, just that the content-based address scheme is costly... and that a 
> practical implementation would probably look to provide addresses only to 
> large enough entities (a page containing objects, for example, or a large 
> object containing smaller ones), so that you don't create an address for each 
> character of the string object describing the name of a person, say.


Oh, ok. I think this is how I was seeing it… say I exchange my phone number 
with somebody: the content hashed would be the full information (full content + 
version number + (maybe) its model hash + its geolocalized timestamp).
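
Something like this, where what gets hashed is a deterministic serialization of 
the whole record (field names and format are only illustrative):

# The address of an exchanged item is the hash of the full record
# (content + version number + model hash + timestamp).
import hashlib, json

def item_address(content, version, model_hash, timestamp):
    record = {"content": content, "version": version,
              "model": model_hash, "timestamp": timestamp}
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.blake2b(canonical, digest_size=32).hexdigest()

print(item_address("+33 6 12 34 56 78", 3,
                   "model-hash-placeholder", "2017-09-14T10:22:31+02:00 @ Paris"))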

Thanks a lot for the exchange,

Cédrick


> 
> Regards,
> 
> Thierry
>  
> 
> Cheers,
> 
> Cédrick
> 
