Maybe worth taking a look at TDE in HDFS:
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html
A complete solution requires several Hadoop services. I suspect that would
scare the Kafka community a bit, but maybe it's unreasonable to expect
Kafka broke
Hi Sönke
I've been giving it more thought over the last few days, and looking into
other systems as well, and I think that I've derailed your proposal a bit
with suggesting that at-rest encryption may be sufficient. I believe that
many of us are lacking the context of the sorts of discussions you
@Ryanne
> Seems that could still get us per-topic keys (vs encrypting the entire
> volume), which would be my main requirement.
Agreed, I think that per-topic separation of keys would be very valuable
for multi-tenancy.
My 2 cents is that if encryption at rest is sufficient to satisfy GDPR +
oth
Hi Sönke
Thanks for bringing this up for discussion. There are a lot of considerations
even if we assume we have end-to-end encryption done. For example, depending
on a company's setup there could be restrictions on how/which encryption
keys are shared. The environment could have multiple security and network
b
Adam, I agree, seems reasonable to limit the broker's responsibility to
encrypting only data at rest. I guess whole segment files could be
encrypted with the same key, and rotating keys would just involve
re-encrypting entire segments. Maybe a key rotation would involve closing
all affected segment
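The per-segment scheme sketched above could be implemented as envelope encryption: each segment gets its own data key, encrypted segment-by-segment, with the data key wrapped by a master key (as HDFS TDE does). Everything below is an illustrative assumption, not anything specified in the KIP; class and method names are hypothetical.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

// Hypothetical sketch: one data key per segment, wrapped by a master key.
// Rotating the master key then only means re-wrapping stored data keys,
// while rotating a data key means re-encrypting that one segment.
class SegmentEnvelope {
    private static final SecureRandom RANDOM = new SecureRandom();

    static SecretKey newDataKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // Encrypt segment bytes with AES-GCM; the 12-byte IV is prepended.
    static byte[] encryptSegment(SecretKey dataKey, byte[] segment) throws Exception {
        byte[] iv = new byte[12];
        RANDOM.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(segment);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    static byte[] decryptSegment(SecretKey dataKey, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, dataKey, new GCMParameterSpec(128, blob, 0, 12));
        return c.doFinal(blob, 12, blob.length - 12);
    }

    // Wrap the data key under a master key (e.g. held in a KMS) so it can
    // be stored next to the segment; re-wrap on master-key rotation.
    static byte[] wrapDataKey(SecretKey masterKey, SecretKey dataKey) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.WRAP_MODE, masterKey);
        return c.wrap(dataKey);
    }

    static SecretKey unwrapDataKey(SecretKey masterKey, byte[] wrapped) throws Exception {
        Cipher c = Cipher.getInstance("AESWrap");
        c.init(Cipher.UNWRAP_MODE, masterKey);
        return (SecretKey) c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
    }
}
```

With this split, "closing all affected segments" on rotation would only be needed for data-key rotation; master-key rotation touches just the small wrapped-key blobs.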
Hi All
I typed up a number of replies which I have below, but I have one major
overriding question: Is there a reason we aren't implementing
encryption-at-rest almost exactly the same way that most relational
databases do? ie:
https://wiki.postgresql.org/wiki/Transparent_Data_Encryption
I ask thi
From: Sönke Liebau
Date: 08/05/2020 10:05 (GMT+00:00)
To: dev
Subject: Re: [DISCUSS] KIP-317 - Add end-to-end data encryption functionality to Apache Kafka
Hey everybody,
thanks a lot for reading and giving feedback!! I'll try and answer all points that I found going through the thread in
Hey everybody,
thanks a lot for reading and giving feedback!! I'll try and answer all
points that I found going through the thread in this mail, but if I miss
something please feel free to let me know! I've added a running number to
the discussed topics for ease of reference down the road.
I'll g
Tom, good point, I've done exactly that -- hashing record keys -- but it's
unclear to me what should happen when the hash key must be rotated. In my
case the (external) solution involved rainbow tables, versioned keys, and
custom materializers that were aware of older keys for each record.
In part
Hi again,
Of course I was overlooking at least one thing. Anyone who could guess the
record keys could hash them and compare. To make it work the producer and
consumer would need a shared secret to include in the hash computation. But
the key management service could furnish them with this in addi
Hi Ryanne,
You raise some good points there.
Similarly, if the whole record is encrypted, it becomes impossible to do
> joins, group bys etc, which just need the record key and maybe don't have
> access to the encryption key. Maybe only record _values_ should be
> encrypted, and maybe Kafka Stre
Thanks Sönke, this is an area in which Kafka is really, really far behind.
I've built secure systems around Kafka as laid out in the KIP. One issue
that is not addressed in the KIP is re-encryption of records after a key
rotation. When a key is compromised, it's important that any data encrypted
u
Hi
I have just spotted this.
I would be a little -1 on encrypting headers; these are NOT safe to encrypt. The
whole original reason for headers was for non-sensitive transport or other
meta-information details, very akin to TCP headers, e.g. those also are not
encrypted. These should remai
Small typo correction: I meant headers at the end of this paragraph, not keys
(sorry, long week already)
corrected:
"
Second, I would suggest we do not add an additional section (again, I would be a
little -1 here) into the record specifically for this; the whole point of
headers being added is add
Hi Sönke,
Replies inline
1. The functionality in this first phase could indeed be achieved with
> custom serializers, that would then need to wrap the actual serializer that
> is to be used. However, looking forward I intend to add functionality that
> allows configuration to be configured broker
Hi Tom,
thanks for taking a look!
Regarding your questions, I've answered below, but will also add more
detail to the KIP around these questions.
1. The functionality in this first phase could indeed be achieved with
custom serializers, that would then need to wrap the actual serializer that
is
Hi Sönke,
I never looked at the original version, but what you describe in the new
version makes sense to me.
Here are a few things which sprang to mind while I was reading:
1. It wasn't immediately obvious why this can't be achieved using custom
serializers and deserializers.
2. It would be use
All,
I've asked for comments on this KIP in the past, but since I didn't really
get any feedback I've decided to reduce the initial scope of the KIP a bit
and try again.
I have reworked the KIP to provide a limited, but useful set of features for
this initial KIP and laid out a very rough roadmap