> Fortunately, we're not locked into any particular implementation
> strategy, so if we're barking up the wrong tree, there's opportunity to
> change.  For example, we've also considered:
>
>       - Having the client do the encryption, which makes resuming
> uploads trivial, but complicates the client implementation (and requires
> that we pass the key around).
>
>       - Doing block-by-block encryption, as you suggest, but this adds
> management overhead for dealing with the blocks after the upload is
> complete.
>
>       - When an upload is resumed, reading back, decrypting, and
> re-encrypting the data already stored to bring the context back to its
> previous state.
>
>       - etc...
>
> I'd love to hear what you think.  Are there philosophical issues with
> saving the context across failed uploads?  Looking at the low-level AES
> routines, it actually looks like it would be pretty straightforward from
> an implementation standpoint, but if it's the wrong thing to do...
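
For what it's worth, saving the context does look cheap with the
low-level AES routines: in CBC mode the only mutable state beyond the
key schedule is the rolling IV. A minimal sketch of my own (assuming
AES-128-CBC and whole-block chunks; the struct and function names are
illustrative, not from your code):

#include <string.h>
#include <openssl/aes.h>

struct upload_ctx {
    AES_KEY key;                      /* key schedule, rebuilt from key */
    unsigned char iv[AES_BLOCK_SIZE]; /* rolling IV: this IS the context */
};

/* Encrypt one chunk (len must be a multiple of AES_BLOCK_SIZE).
 * AES_cbc_encrypt advances ctx->iv to the last ciphertext block, so
 * persisting ctx->iv after each chunk checkpoints the context. */
static void encrypt_chunk(struct upload_ctx *ctx, const unsigned char *in,
                          unsigned char *out, size_t len)
{
    AES_cbc_encrypt(in, out, len, &ctx->key, ctx->iv, AES_ENCRYPT);
}

/* Resume a failed upload: rebuild the key schedule, restore the IV. */
static void resume_ctx(struct upload_ctx *ctx, const unsigned char key[16],
                       const unsigned char saved_iv[AES_BLOCK_SIZE])
{
    AES_set_encrypt_key(key, 128, &ctx->key);
    memcpy(ctx->iv, saved_iv, AES_BLOCK_SIZE);
}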

One huge problem with many encryption modes is that if the plaintext
changes at all, the ciphertext from the resume point onward becomes
completely incomprehensible, because the cipher context will no longer
match its previous state. So you will probably need some
resynchronization mechanism.
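
If the mode is yours to choose, counter (CTR) mode sidesteps this
entirely: the keystream depends only on the byte offset, so a resumed
upload can resynchronize anywhere by recomputing the counter. A sketch
with the low-level AES routines (the 8-byte-nonce/8-byte-counter layout
here is my assumption, not anything from your design):

#include <stdint.h>
#include <string.h>
#include <openssl/aes.h>

/* Encrypt or decrypt len bytes starting at an arbitrary byte offset.
 * Each keystream block derives only from (nonce, block number), so no
 * cipher context needs to survive a failed upload at all. */
static void ctr_crypt_at(const AES_KEY *key, const unsigned char nonce[8],
                         uint64_t offset, const unsigned char *in,
                         unsigned char *out, size_t len)
{
    unsigned char block[16], stream[16];
    size_t i;

    for (i = 0; i < len; i++) {
        uint64_t ctr = (offset + i) / 16;            /* block number */
        unsigned int pos = (unsigned int)((offset + i) % 16);

        if (i == 0 || pos == 0) {
            int b;
            /* counter block = 8-byte nonce || big-endian block number */
            memcpy(block, nonce, 8);
            for (b = 0; b < 8; b++)
                block[15 - b] = (unsigned char)(ctr >> (8 * b));
            AES_encrypt(block, stream, key); /* one keystream block */
        }
        out[i] = in[i] ^ stream[pos];
    }
}

The usual caveats apply: the nonce must be unique per file and key, and
CTR provides no integrity protection on its own.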

You actually have an interesting problem. If the client does the
encryption, it's not too complicated. But if you want the added
flexibility of letting the server totally control the encryption
algorithm, it gets trickier. You can have the server request SHA-1
checksums over arbitrary byte ranges, so that resynchronization is
controlled entirely by the server. But there's still the issue of how
the server should do it.
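
The checksum piece, at least, is straightforward; roughly this, with
OpenSSL's SHA-1 (a sketch of mine, names illustrative):

#include <stdio.h>
#include <openssl/sha.h>

/* SHA-1 over bytes [start, start + len) of a file, so the server can
 * verify what was actually stored before choosing a resume point.
 * Returns 0 on success, -1 on seek error or short read. */
static int sha1_range(FILE *f, long start, size_t len,
                      unsigned char digest[SHA_DIGEST_LENGTH])
{
    unsigned char buf[8192];
    SHA_CTX ctx;

    if (fseek(f, start, SEEK_SET) != 0)
        return -1;
    SHA1_Init(&ctx);
    while (len > 0) {
        size_t want = len < sizeof(buf) ? len : sizeof(buf);
        size_t got = fread(buf, 1, want, f);
        if (got == 0)
            return -1;
        SHA1_Update(&ctx, buf, got);
        len -= got;
    }
    SHA1_Final(digest, &ctx);
    return 0;
}

The server compares the client's digest for a range against its own;
on a match it can safely resume after that range.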

DS

