Hello GNUnet Developers,

First of all, I apologize if this is not the correct place to discuss a possible new feature for GNUnet; since I am not from the IT field, I cannot even attempt to implement it myself. Still, if you find this feature valuable, perhaps you would consider implementing it, so I wanted to share it. Please bear in mind that I am no expert, and this may be infeasible for technical reasons not obvious to me. In that case, please say so and I will not take up more of your time.
Some time ago I had the idea that GNUnet (as well as other projects) could benefit from increased disk space for storage, and that using the free space on a disk should be technically possible, if difficult. On many filesystems, a deleted file is not truly erased. In the FAT filesystem, for example, the list of disk clusters occupied by the file is simply removed from the file allocation table, marking those clusters as available. I do not know how other filesystems handle this, but for the sake of argument let us say that a header is applied to the file indicating that that portion of the disk is available to be overwritten:

/header/ data block Nº1; /header/ data block Nº2; /header/ data block Nº3; ...

If GNUnet were able to split file data into data blocks (encrypted, of course) and subsequently delete the data, while keeping both a checksum for each data block and a record of its disk location, the free disk space of computers running GNUnet could be used for storage without compromising the normal functioning of those computers.

This program, perhaps to be named gnunet-str (storage), would, at the moment the data is stored, create a checksum for every encrypted data block and for every "contiguous" group of blocks, as follows:

/block1/block2/block3/block4/block5/block6/block7/block8/... => checksum1/checksum2/checksum3/checksum4/...
but also
/block1/block2/block3/block4/block5/block6/block7/block8/... => checksum1+2/checksum3+4/checksum5+6/checksum7+8/...
and also
/block1/block2/block3/block4/block5/block6/block7/block8/... => checksum1+2+3+4/checksum5+6+7+8/checksum9+10+11+12/...
and so on.

In this way it would be possible to quickly ascertain which data had become corrupted (through normal use of the host OS, or a disk defragmentation) and had to be replaced, by descending from the checksums of the larger block groups to the individual blocks. The node would then signal to other GNUnet nodes: "Of the data stored, only 70% (for example) is still uncorrupted.
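The layered checksum scheme described above is essentially a hash tree: one checksum per block, plus checksums over ever-larger contiguous groups, so an intact group can be verified with a single comparison and only a mismatching group needs to be split and re-examined. A minimal sketch in Python of that idea (all names are mine, gnunet-str does not exist, and SHA-256 and power-of-two group sizes are illustrative assumptions, not anything GNUnet actually does):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def store_checksums(blocks):
    """Checksum every block and every contiguous group of 2, 4, 8, ...
    blocks, as sketched above. A group checksum covers the group's
    concatenated data, keyed by (start index, group size)."""
    sums = {}
    size = 1
    while size <= len(blocks):
        for start in range(0, len(blocks), size):
            sums[(start, size)] = h(b"".join(blocks[start:start + size]))
        size *= 2
    return sums

def find_corrupt(blocks, sums, start=0, size=None):
    """Descend from the largest stored group checksum toward single
    blocks, skipping any group whose checksum still matches."""
    if size is None:
        size = max(s for _, s in sums)
    if h(b"".join(blocks[start:start + size])) == sums[(start, size)]:
        return []                     # whole group intact: nothing to rescan
    if size == 1:
        return [start]                # located a corrupted block
    half = size // 2
    return (find_corrupt(blocks, sums, start, half) +
            find_corrupt(blocks, sums, start + half, half))

blocks = [f"block{i}".encode() for i in range(8)]
sums = store_checksums(blocks)
blocks[5] = b"OVERWRITTEN"            # simulate the host OS reclaiming "free" space
print(find_corrupt(blocks, sums))     # -> [5]
```

With 8 blocks and one corrupted, the descent checks the whole-file checksum, then two halves, and only re-reads the half that mismatches, which is where the "quickly" in the paragraph above would come from.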
I can share this 70%, but give me the 30% back, or new files to store in this space."

Such a solution would allow large amounts of storage: in theory, all the free space on the host computer's hard drive. By its nature, it would not be possible to rely on the data remaining uncompromised without implementing redundancy. If gnunet-str made x copies of file y, for example, the probability of data corruption and loss could be greatly diminished. Tahoe-LAFS and GNUnet are based on this principle (although I could be wrong, as I am no expert): redundancy of storage between multiple peers on the network. If this redundancy could also be implemented locally, the total storage available to GNUnet would increase.

Alternatively, instead of providing a greater amount of data storage, perhaps such a feature could be used to boost GNUnet's efficiency: parts of a file held on a distant node could also be made available on more nodes, diminishing the distance between the "asking node" and the node that actually has the file.

Do you think such a feature could be useful for GNUnet? Once again, do not hesitate to say this idea is unfeasible for some reason; I just shared it in the hope of it being useful to an improved GNUnet.

--
hypothesys

--
View this message in context: http://old.nabble.com/Idea-for-file-storage-in-GNUnet-tp34768221p34768221.html
Sent from the GnuNet - Dev mailing list archive at Nabble.com.
_______________________________________________
GNUnet-developers mailing list
GNUnet-developers@gnu.org
https://lists.gnu.org/mailman/listinfo/gnunet-developers
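[Editor's note: the redundancy claim in the message above can be made concrete with a one-line probability model. If each copy of a block is corrupted independently with probability p, a block is lost only when all x copies are corrupted, so the loss probability is p^x. The independence assumption and the 30% figure (borrowed from the message's own example) are illustrative only:]

```python
def loss_probability(per_copy_corruption: float, copies: int) -> float:
    """Probability that every copy of a block is corrupted, assuming each
    copy is overwritten independently with the given probability."""
    return per_copy_corruption ** copies

# With 30% of any one copy's space reclaimed by the host OS, extra
# local copies shrink the loss probability geometrically:
for x in range(1, 5):
    print(x, loss_probability(0.3, x))
```

Even three local copies bring the loss probability from 30% down to under 3%, which is the sense in which local redundancy "greatly diminishes" corruption and loss.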