Thanks for the response.  Sorry for the ambiguity in my original
message: I thought keeping things abstract would make what we're doing
easier to understand at a high level, but I probably left things too
vague.

The short version is that we're implementing a kind of remote file
service that clients can use to store and retrieve files.  One of the
features we're trying to provide is the ability to resume uploads: for
example, if a client is storing a large file over a spotty network
connection, we don't want to have to restart the process every time the
connection drops.

For the actual file transfer, we're thinking of using vanilla SSL, for
several reasons: a) we don't want to reinvent the wheel; b) we'd like to
avoid having to deploy our own encryption code on all our client
platforms; and c) clients also transmit metadata that isn't part of the
file itself (and is stored separately from the file) that we don't want
to send over the wire in the clear.

Thus, the encryption I'm talking about is purely for the purpose of
storing the file on the server.  The idea is that even if someone were
able to gain unauthorized remote access to the storage server, he
shouldn't be able to read the contents of any of the uploaded files.  I
think this is somewhat closer to the very unusual case than the other
scenarios you described, but I could be mistaken.

Fortunately, we're not locked into any particular implementation
strategy, so if we're barking up the wrong tree, there's opportunity to
change.  For example, we've also considered:

        - Having the client do the encryption, which makes resuming
uploads trivial, but complicates the client implementation (and requires
that we pass the key around).

        - Doing block-by-block encryption, as you suggest, but this adds
management overhead associated with dealing with the blocks after the
upload is complete.

        - Reading back, decrypting and re-encrypting the data already
stored when an upload is resumed, to get the context back to its
previous state (though for CBC there may be a shortcut; see the sketch
after this list).

        - etc...
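
Regarding that third option: for CBC at least, it looks like no
decrypt/re-encrypt pass is actually needed, because the only chaining
state CBC carries forward is the previous ciphertext block, which is
already sitting at the end of the stored file.  A rough, untested
sketch (AES-128-CBC assumed; resume_cbc_encrypt is a made-up name, and
it only works if the server stores a whole number of 16-byte blocks):

    #include <stdio.h>
    #include <openssl/evp.h>

    /* Resume encrypting with a fresh context whose IV is the last
     * ciphertext block already on disk.  Assumes the stored file is a
     * whole number of 16-byte blocks and padding is deferred to the
     * final chunk. */
    EVP_CIPHER_CTX *resume_cbc_encrypt(const unsigned char key[16], FILE *stored)
    {
        unsigned char last_block[16];
        EVP_CIPHER_CTX *ctx;

        /* read the final ciphertext block already written */
        if (fseek(stored, -16L, SEEK_END) != 0 ||
            fread(last_block, 1, 16, stored) != 16)
            return NULL;

        if ((ctx = EVP_CIPHER_CTX_new()) == NULL)
            return NULL;

        /* same key, IV = last ciphertext block: CBC picks up exactly
         * where it left off */
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, last_block) != 1 ||
            EVP_CIPHER_CTX_set_padding(ctx, 0) != 1) {
            EVP_CIPHER_CTX_free(ctx);
            return NULL;
        }
        return ctx;  /* feed remaining plaintext through EVP_EncryptUpdate() */
    }

The client would of course need to know the plaintext offset that
corresponds to the stored ciphertext, but we have to track that anyway
to resume the transfer at all.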

I'd love to hear what you think.  Are there philosophical issues with
saving the context across failed uploads?  Looking at the low-level AES
routines, it actually looks like it would be pretty straightforward from
an implementation standpoint, but if it's the wrong thing to do...
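
By "straightforward" I mean something like the following (untested;
CBC again).  With the low-level API, the entire mutable context is the
16-byte chaining vector that AES_cbc_encrypt() updates in place, so
stashing the context is really just persisting those 16 bytes -- the
key itself we can reload from wherever we already keep it:

    #include <stddef.h>
    #include <openssl/aes.h>

    /* Encrypt one chunk of an upload.  len must be a multiple of 16.
     * ivec is the persisted chaining state: load it before the call,
     * write it back out (e.g. next to the file) afterward. */
    void encrypt_chunk(const unsigned char *in, unsigned char *out, size_t len,
                       const AES_KEY *key, unsigned char ivec[16])
    {
        AES_cbc_encrypt(in, out, len, key, ivec, AES_ENCRYPT);
        /* ivec now holds everything needed to resume after a failure */
    }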

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of David Schwartz
Sent: Wednesday, May 30, 2007 1:42 PM
To: openssl-users@openssl.org
Subject: RE: Saving (and restoring) cipher context


> I'm developing an application in which we're using AES to encrypt 
> files as they're transferred from another system and saved to disk.  
> We'd like to provide the ability for the application to resume a 
> transfer that was interrupted mid-stream, but the encryption throws a 
> bit of a wrench into things because of the state associated with the 
> encryption context.

> Is there a safe, supported way to stash the context somewhere on disk 
> so that encryption can be resumed where it left off when the file 
> transfer starts up again?  We're currently looking at the EVP 
> functions; would we have to drop down to the lower-level, 
> algorithm-specific routines to do this right?

Maybe you're locked into an implementation that isn't logical for your
problem set, but if not, change to a rational implementation.

If you are encrypting/decrypting during the transfer, that means the
encryption is to protect the transport. Since the resumption is over a
new transport, there is no rational reason to resume the previous
encryption.
Use a new encryption context for the new transport. If the encryption is
for storage, then resume sending the already-encrypted data.

The only case where you even have the problem you are describing is
where the data has to be encrypted differently from its normal state for
storage on the receiver. (Perhaps to be decrypted later.) Unless you're
in this very unusual situation, this shouldn't be an issue.

If you really are in this very unusual situation, I recommend one of the
following solutions:

1) Have a resync protocol that runs on both ends, re-encrypting the data
and sending block checksums. Let the other side verify the checksum
until you have a mismatch, then continue. You don't need to store the
context because this process recovers it.
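
As an illustration, the verification side of (1) might look something
like this (hypothetical sketch; the block size and the toy checksum
are placeholders for whatever the protocol actually uses):

    #include <stddef.h>
    #include <stdint.h>

    #define BLOCK_SIZE 4096

    /* Toy checksum, purely for illustration; a real protocol would
     * use CRC-32 or a MAC. */
    static uint32_t block_checksum(const unsigned char *b, size_t n)
    {
        uint32_t sum = 0;
        while (n--)
            sum = sum * 31u + *b++;
        return sum;
    }

    /* Compare the sender's per-block checksums against checksums of
     * the ciphertext already stored; return the index of the first
     * block that must be resent. */
    size_t find_resume_block(const unsigned char *stored, size_t stored_len,
                             const uint32_t *peer_sums, size_t peer_count)
    {
        size_t i, nblocks = stored_len / BLOCK_SIZE;
        if (nblocks > peer_count)
            nblocks = peer_count;
        for (i = 0; i < nblocks; i++)
            if (block_checksum(stored + i * BLOCK_SIZE, BLOCK_SIZE) != peer_sums[i])
                break;
        return i;  /* blocks [0, i) verified; resume sending from block i */
    }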

2) Encrypt the file as a group of separate blocks. Simply discard any
incomplete blocks. Each block can have its own encryption context, so
there is no need to resume.
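
A sketch of (2), assuming AES-128-CBC with a fresh random IV stored in
front of each block's ciphertext (the function name and record framing
are invented for illustration):

    #include <openssl/evp.h>
    #include <openssl/rand.h>

    /* Encrypt one self-contained record: [16-byte IV][ciphertext].
     * chunk_len must be a multiple of 16 (padding omitted for
     * brevity).  Returns 1 on success, 0 on failure. */
    int encrypt_chunk_standalone(const unsigned char key[16],
                                 const unsigned char *pt, int chunk_len,
                                 unsigned char *iv_out,   /* 16 bytes */
                                 unsigned char *ct_out)   /* chunk_len bytes */
    {
        int n, ok = 0;
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

        if (ctx == NULL || RAND_bytes(iv_out, 16) != 1)
            goto done;
        if (EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv_out) == 1 &&
            EVP_CIPHER_CTX_set_padding(ctx, 0) == 1 &&
            EVP_EncryptUpdate(ctx, ct_out, &n, pt, chunk_len) == 1 &&
            EVP_EncryptFinal_ex(ctx, ct_out + n, &n) == 1)
            ok = 1;
    done:
        EVP_CIPHER_CTX_free(ctx);
        return ok;
    }

On resume, the receiver truncates to the last complete record and the
sender restarts from the matching plaintext offset; no context
survives between records.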

3) Use an encryption scheme that has negligible context. For example, if
you XOR each block of data with its block number and then encrypt it,
all you need to pick back up where you left off is the block number,
which you should already know.
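
For instance, (3) could be reduced to a single raw AES call per block
(sketch only; here the 64-bit block number is folded into the low
bytes of the plaintext before encrypting):

    #include <string.h>
    #include <stdint.h>
    #include <openssl/aes.h>

    void encrypt_block_n(const AES_KEY *key, uint64_t block_no,
                         const unsigned char in[16], unsigned char out[16])
    {
        unsigned char buf[16];
        int i;

        memcpy(buf, in, 16);
        /* XOR the block number into the last 8 bytes, big-endian */
        for (i = 0; i < 8; i++)
            buf[15 - i] ^= (unsigned char)(block_no >> (8 * i));
        AES_encrypt(buf, out, key);  /* one 16-byte ECB operation */
    }

Since the block number falls straight out of the file offset, nothing
has to be saved at all to resume.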

Give us more details and tell us what you can and can't change. Your
proposed solution may not be the best one for your actual outer problem.

DS


______________________________________________________________________
OpenSSL Project                                 http://www.openssl.org
User Support Mailing List                    openssl-users@openssl.org
Automated List Manager                           [EMAIL PROTECTED]