On 06/06/2017 20:13, Konstantin Gribov wrote:
> I can think of more simpler approach:
> - generate secure random for symmetrical data encryption key (DEK);
> - encrypt that key for authorized users on their public keys;
> - encrypt data itself with something like ChaCha20 or AES in appropriate
> mode.

Problem: the symmetric key (DEK) must remain in plaintext on the server.
It's a relatively secure setup, but I prefer *not* to take that risk,
even if it means a slightly more involved process. If the server gets
compromised, the attacker should at most be able to access new datasets,
not the historic archive.

Moreover, with your proposal, once I give a user access to one file,
he'll be able to decrypt *any* other file too. If I keep track of who
can access each dataset and some day I find that some datasets are
being used "illegally", I can narrow down the suspects.
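For reference, the scheme you describe boils down to something like this
with plain gpg (2.1 or later); the file names and recipient address are
made up for illustration, and gpg derives the actual cipher key from the
DEK file via S2K rather than using it as a raw key:

    # random DEK, used as a passphrase for symmetric encryption
    head -c 48 /dev/urandom | base64 > dek.txt
    gpg --batch --pinentry-mode loopback --passphrase-file dek.txt \
        --symmetric --cipher-algo AES256 -o dataset.tar.gpg dataset.tar
    # wrap the DEK for one authorized user on his public key
    gpg --encrypt --recipient alice@example.org -o dek.alice.gpg dek.txt

The catch is the first step: dek.txt has to stay on the server in
plaintext if I want to wrap it for new users later, which is exactly the
part I'm not comfortable with.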
> Of course, such way doesn't allow you to revoke access to DEK since each
> user could just decrypt his own copy.

Since encrypting to a public key generates a random session key, the
session key gives access only to that single file. Obviously that access
cannot be easily revoked, but the user could have saved a plaintext copy
anyway, so that's not a big issue.

> A bit more complicated approach is to use two level system:
> - generate data encryption key (DEK);
> - generate key encryption key (KEK) for each authorized user;
> - encrypt each user's KEK on each user's public key;
> - create a table (tsv/csv or any other format) with some user id and DEK
> encrypted with corresponding KEK and store it with data;
> - encrypt data with DEK.

That's the same as encrypting the DEK to multiple public keys. The
problem is that I don't know in advance which users will need access.
IIRC there was some method to retrieve the session key and replace the
public-key part with another recipient (a rough sketch is in the P.S.
below)...

> Both methods are naive and gives end user DEK, so it's better to
> reencrypt archive after that to rotate DEK.

That would be a big problem: archives must remain static (to avoid
trouble with offsite replication).

> Also, a lot depends on your threat model. Since I don't know what risks
> are you planning to avoid with original scheme I just assumed that
> primary risk is 3rdparty archive storage compromise.

Well, I handle the storage myself (currently 100 TB, soon growing to
150 TB). I want to prevent an attacker from gaining access to the whole
archive if he succeeds in compromising the server. Clients are outside
my perimeter (= not my problem).

BYtE,
 Diego
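P.S. The session-key trick I had in mind works roughly like this, as far
as I recall (gpg 2.x; file names and the hex string are only
illustrative). I don't remember the exact recipe for rewriting the
recipient packets themselves, but extracting and reusing the session key
goes like:

    # prints the session key while decrypting; needs the private key of
    # one of the existing recipients
    gpg --decrypt --show-session-key -o /dev/null dataset.tar.gpg
    #   gpg: session key: '9:5A3C...'

    # hand that string to the extra user over a secure channel; he can
    # then decrypt this one file without being listed as a recipient
    gpg --decrypt --override-session-key 9:5A3C... \
        -o dataset.tar dataset.tar.gpg

Since every file gets its own session key, this gives access to that
single file only, which is the property I'm after.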