Hi Jens,
I take your point, but some of our use cases cannot rely on TTL alone. We want 
a long expiry for messages and would rather compact them (dedup) so we can 
replay messages as a system of record. When a key is lost, we will invalidate 
the old key, so messages encrypted with the old key can no longer be decrypted 
unless we also re-encrypt them.
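To make that concrete, here is a toy sketch of the invalidation flow we have in mind (Python; the XOR keystream is only a stand-in for a real cipher, and every name here — KEYS, REVOKED, decrypt, re_encrypt — is invented for illustration, not a real Kafka API):

```python
import hashlib

def _xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream derived from SHA-256 -- illustration only, NOT real crypto."""
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Hypothetical key registry: each message carries a 1-byte key-version header.
KEYS = {1: b"compromised-key", 2: b"replacement-key"}
REVOKED = set()

def decrypt(envelope: bytes) -> bytes:
    version = envelope[0]
    if version in REVOKED:
        raise PermissionError("key version %d was revoked" % version)
    return _xor(KEYS[version], envelope[1:])

def re_encrypt(envelope: bytes, new_version: int) -> bytes:
    """Rewrite a record under the new key -- must run BEFORE the old key is revoked."""
    plaintext = decrypt(envelope)
    return bytes([new_version]) + _xor(KEYS[new_version], plaintext)

record = bytes([1]) + _xor(KEYS[1], b"trade record")
record = re_encrypt(record, 2)   # rewrite first...
REVOKED.add(1)                   # ...then revoke; old ciphertext is now unreadable
```

The point is the ordering: any record not rewritten before revocation is permanently lost, which is why we want compaction (or something like it) to drive the rewrite.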

Josh
________________________________________
From: Jens Rantil <jens.ran...@tink.se>
Sent: Tuesday, January 19, 2016 11:48 PM
To: users@kafka.apache.org
Cc: users@kafka.apache.org
Subject: Re: security: encryption at rest and key rotation idea

Hi Josh,


Kafka can expire message logs after a certain TTL. Couldn't you simply rely on 
expiration for key rotation? That is, you start producing messages with a new 
key while your consumer temporarily handles the overlap of keys for the 
duration of the TTL.
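To illustrate the overlap, here is a toy sketch (Python; the XOR keystream is just a stand-in for a real cipher, and KEYS, seal, and unseal are invented names, not anything Kafka provides):

```python
import hashlib

def _xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream derived from SHA-256 -- illustration only, NOT real crypto."""
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# During the overlap window both key versions live in the registry.
KEYS = {1: b"old-key", 2: b"new-key"}
CURRENT = 2  # the producer always seals with the newest version

def seal(plaintext: bytes) -> bytes:
    """Producer side: prefix the ciphertext with a 1-byte key-version header."""
    return bytes([CURRENT]) + _xor(KEYS[CURRENT], plaintext)

def unseal(envelope: bytes) -> bytes:
    """Consumer side: pick whichever key the header names, old or new."""
    version, ciphertext = envelope[0], envelope[1:]
    return _xor(KEYS[version], ciphertext)

# A message sealed under the old key still decrypts until version 1
# is dropped from KEYS once the TTL has elapsed.
old_msg = bytes([1]) + _xor(KEYS[1], b"ledger entry")
```

Once every version-1 message has expired from the log, you delete version 1 from the registry and the rotation is complete, with no rewrite needed.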




Just an idea,

Jens





–
Sent from Mailbox

On Wed, Jan 20, 2016 at 12:34 AM, Josh Wo <z...@lendingclub.com> wrote:

> We are trying to deploy Kafka into EC2, and one of the requirements from 
> infosec is to have Kafka encryption at rest (values stored encrypted). We 
> also need to be able to rotate encryption keys and re-encrypt all the 
> messages on a regular basis, since we are a financial company. The 
> re-encryption feels challenging, since Kafka messages are immutable from the 
> client side (producer and consumer). Some ideas are floating around to have 
> a replicated cluster, but that would mess up all the consumer offsets, and 
> switching over is complicated from an operational perspective.
> One idea we have to achieve this is to plug in our own "compression" codec, 
> which handles both compression and encryption, and to leverage the 
> compaction cycle to rewrite all the messages by decompressing and 
> recompressing them into a new file. It feels like this approach could also 
> have zero impact on the consumer/producer if they use the same "codec" for 
> compression, since the offsets would stay intact.
> My current understanding is that the codecs are hardcoded right now (we are 
> using 0.9), so it would require us to customize Kafka. Also, compaction 
> cannot be triggered on demand, which is needed in case of key loss. So 
> before we take on customizing Kafka, I am just wondering whether our 
> thinking is on the right track.
> I hope some of the committers from Confluent/Hortonworks/Cloudera can 
> comment on that and on the roadmap for supporting encryption at rest and 
> key rotation, or otherwise suggest alternatives to what is proposed. Also, 
> please let me know if my question/problem is not clear.
> Thanks,
> Josh
> ________________________________
> DISCLAIMER: The information transmitted is intended only for the person or 
> entity to which it is addressed and may contain confidential and/or 
> privileged material. Any review, re-transmission, dissemination or other use 
> of, or taking of any action in reliance upon this information by persons or 
> entities other than the intended recipient is prohibited. If you received 
> this in error, please contact the sender and destroy any copies of this 
> document and any attachments.
