MG>curious if Jim tested his encryption/decryption scenario on Kafka's
stateless broker?
MG>Jim's idea could work if you want to implement a new
serializer/deserializer for every new supported cipher

Not sure if I understand.  We didn't modify Kafka at all.

I definitely recommend batching events together before encrypting to
reduce time and space overhead, to whatever extent you can tolerate a
delay.  If you can't tolerate much delay and the overhead is causing your
system to be too slow, I suggest adding more hardware resources -- the
system should scale out, right?

As mentioned at http://symc.ly/1pC2CEG, we batched together several events
and encrypted them together into an "envelope", which we passed to the
standard Kafka producer of that time as a single message.
Batching kept the overhead down.  It would have been nice to have the
encryption be part of the producer itself.  There was also a small amount
of space overhead due to us adding a layer on top of Kafka, which in
theory could be optimized away.
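
To make the envelope idea concrete, here is a rough sketch (not our actual
code -- the class name, wire layout, and broker address are illustrative
only): batch up the events, encrypt the whole batch once with a fresh AES
key, wrap that key with the consumer's RSA public key, and hand the result
to the standard KafkaProducer as one message.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.util.List;
import java.util.Properties;

public class EnvelopeProducer {
    private final KafkaProducer<String, byte[]> producer;
    private final PublicKey consumerPublicKey;  // RSA public key; the private key stays with the consumer
    private final SecureRandom random = new SecureRandom();

    public EnvelopeProducer(PublicKey consumerPublicKey) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // illustrative address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        this.producer = new KafkaProducer<>(props);
        this.consumerPublicKey = consumerPublicKey;
    }

    // Encrypts a batch of events into one envelope and sends it as a single Kafka message.
    public void sendBatch(String topic, List<String> events) throws Exception {
        // 1. Concatenate the batch, length-prefixing each event so the consumer can split it again.
        ByteArrayOutputStream batch = new ByteArrayOutputStream();
        for (String event : events) {
            byte[] bytes = event.getBytes(StandardCharsets.UTF_8);
            batch.write(ByteBuffer.allocate(4).putInt(bytes.length).array());
            batch.write(bytes);
        }

        // 2. Encrypt the whole batch once with a fresh AES-256 key (AES-GCM).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey dataKey = keyGen.generateKey();
        byte[] iv = new byte[12];
        random.nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, dataKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = aes.doFinal(batch.toByteArray());

        // 3. Wrap the one-time AES key with the consumer's RSA public key.
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.WRAP_MODE, consumerPublicKey);
        byte[] wrappedKey = rsa.wrap(dataKey);

        // 4. Envelope layout: [wrapped-key length][wrapped key][12-byte IV][ciphertext].
        ByteBuffer envelope = ByteBuffer.allocate(4 + wrappedKey.length + iv.length + ciphertext.length);
        envelope.putInt(wrappedKey.length).put(wrappedKey).put(iv).put(ciphertext);
        producer.send(new ProducerRecord<>(topic, envelope.array()));
    }
}

The consumer reverses the steps: unwrap the AES key with its private key,
decrypt the envelope, and split it back into individual events.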

The blog describes how we make use of encryption.  The security is
based on public/private key pairs and on keeping the private key secure.
Multiple public/private key pairs can be in use at a time, and you can
change keys at any time, for any reason.
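
One way to support multiple live key pairs and rotation (again just a
sketch -- the key-id field and class name are made up for illustration) is
to tag each envelope with an identifier of the public key that wrapped its
data key, so the consumer can look up the matching private key and retire
old pairs once their messages have aged out:

import javax.crypto.Cipher;
import java.security.Key;
import java.security.PrivateKey;
import java.util.Map;

// Consumer-side helper: pick the private key matching the key id carried in the envelope.
public class KeyRing {
    // Several key pairs can be live at once; retire an id once its messages have aged out.
    private final Map<String, PrivateKey> privateKeysById;

    public KeyRing(Map<String, PrivateKey> privateKeysById) {
        this.privateKeysById = privateKeysById;
    }

    // Unwraps the per-envelope AES key using whichever key pair the envelope names.
    public Key unwrapDataKey(String keyId, byte[] wrappedKey) throws Exception {
        PrivateKey privateKey = privateKeysById.get(keyId);
        if (privateKey == null) {
            throw new IllegalStateException("No private key for key id " + keyId);
        }
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.UNWRAP_MODE, privateKey);
        return rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
    }
}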

-- Jim




On 5/3/16, 5:39 AM, "Martin Gainty" <mgai...@hotmail.com> wrote:

>MG>hopefully quick comment
>
>> Subject: Re: Encryption at Rest
>> From: bruno.rassae...@novazone.be
>> Date: Tue, 3 May 2016 08:55:52 +0200
>> To: users@kafka.apache.org
>> 
>> From what I understand, when using batch compression in Kafka, the
>>files are stored compressed.
>> Don’t really see the difference between compression and encryption in
>>that aspect.
>> If Kafka supported pluggable algorithms for compression (it already
>> supports two), it would be rather straightforward, I guess.
>> 
>> 
>> > On 03 May 2016, at 07:02, Christian Csar <christ...@csar.us> wrote:
>> > 
>> > "We need to be capable of changing encryption keys on regular
>> > intervals and in case of expected key compromise." is achievable with
>> > full disk encryption particularly if you are willing to add and remove
>> > Kafka servers so that you replicate the data to new machines/disks
>> > with new keys and take the machines with old keys out of use and wipe
>> > them.
>> > 
>> > For the second part of it I would suggest reevaluating your threat
>> > model since you are looking at a machine that is compromised but not
>> > compromised enough to be able to read the key from Kafka or to use
>> > Kafka to read the data.
>> > 
>> > While you could add support to encrypt data on the way in and out of
>> > compression I believe you would need either substantial work in Kafka
>> > to support rewriting/reencrypting the logfiles (with performance
>> > penalties) or rotate machines in and out as with full disk encryption.
>> > Though I'll let someone with more knowledge of the implementation
>> > comment further on what would be required.
>> > 
>> > Christian
>> > 
>> > On Mon, May 2, 2016 at 9:41 PM, Bruno Rassaerts
>> > <bruno.rassae...@novazone.be> wrote:
>> >> We did indeed try the last scenario you describe, as encrypted disks
>> >> do not fulfil our requirements.
>> >> We need to be capable of changing encryption keys on regular
>>intervals and in case of expected key compromise.
>> >> Also, when a running machine is hacked, disk based or file system
>>based encryption doesn’t offer any protection.
>> >> 
>> >> Our goal is indeed to have the content in the broker files
>> >> encrypted. The problem is that the only way to achieve this is through
>> >> custom serialisers.
>> >> This works, but the overhead is quite dramatic as the messages are
>>no longer efficiently compressed (in batch).
>> >> Compression in the serialiser, before the encryption, doesn’t really
>>solve the performance problem.
>> >> 
>> >> The best thing for us would be able to encrypt after the batch
>>compression offered by kafka.
>> >> The hook to do this is missing in the current implementation.
>> >> 
>> >> Bruno
>> >> 
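
(For illustration, a minimal sketch of the per-message serialiser approach
described above: compress each message, then encrypt it, inside a custom
Kafka Serializer.  The class name and key handling are assumptions, not
anyone's production code.  Because every message is compressed and
encrypted on its own, Kafka's batch compression no longer helps, which is
exactly the overhead mentioned above.)

import org.apache.kafka.common.serialization.Serializer;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.io.ByteArrayOutputStream;
import java.security.SecureRandom;
import java.util.Map;
import java.util.zip.GZIPOutputStream;

// Per-message compress-then-encrypt serializer: works, but forfeits Kafka's batch compression.
public class EncryptingSerializer implements Serializer<byte[]> {
    private final SecretKey key;  // AES key; distribution and rotation are handled elsewhere
    private final SecureRandom random = new SecureRandom();

    public EncryptingSerializer(SecretKey key) {
        this.key = key;
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no configuration needed in this sketch
    }

    @Override
    public byte[] serialize(String topic, byte[] data) {
        try {
            // Compress the single message -- far less effective than compressing a whole batch.
            ByteArrayOutputStream compressed = new ByteArrayOutputStream();
            try (GZIPOutputStream gzip = new GZIPOutputStream(compressed)) {
                gzip.write(data);
            }
            // Then encrypt it individually with AES-GCM; output is [12-byte IV][ciphertext].
            byte[] iv = new byte[12];
            random.nextBytes(iv);
            Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
            aes.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = aes.doFinal(compressed.toByteArray());

            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return out;
        } catch (Exception e) {
            throw new RuntimeException("Encrypting serializer failed", e);
        }
    }

    @Override
    public void close() {
    }
}

(An instance can be passed directly to the producer constructor, e.g.
new KafkaProducer<>(props, new StringSerializer(), new EncryptingSerializer(key)),
so no no-argument constructor is required.)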
>> >>> On 02 May 2016, at 22:46, Tom Brown <tombrow...@gmail.com> wrote:
>> >>> 
>> >>> I'm trying to understand your use-case for encrypted data.
>> >>> 
>> >>> Does it need to be encrypted only over the wire? This can be
>>accomplished
>> >>> using TLS encryption (v0.9.0.0+). See
>> >>> https://issues.apache.org/jira/browse/KAFKA-1690
>> >>> 
>> >>> Does it need to be encrypted only when at rest? This can be
>>accomplished
>> >>> using full disk encryption as others have mentioned.
>MG>the REST spec doesn't support encryption ... a SaaS secure impl such as
>Axis with Rampart encryption/decryption: WS-Security/WS-Policy works:
>https://axis.apache.org/axis2/java/rampart/articles.html
>MG>or Apache CXF with WS-Security:
>http://cxf.apache.org/docs/ws-security.html
>MG>granted you *can* encrypt the whole disk but do you want that kind of
>performance degradation to all your running processes?
>> >>> 
>> >>> Does it need to be encrypted during both? Use both TLS and full disk
>> >>> encryption.
>> >>> 
>> >>> Does it need to be encrypted fully from end-to-end so even Kafka
>>can't read
>> >>> it? Since Kafka shouldn't be able to know the contents, the key
>>should not
>> >>> be known to Kafka. What remains is manually encrypting each message
>>before
>> >>> giving it to the producer (or by implementing an encrypting
>>serializer).
>> >>> Either way, each message is still encrypted individually.
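
(For the over-the-wire scenario above, a minimal sketch of the client-side
TLS settings available since 0.9.0.0 -- the store paths and passwords are
placeholders.)

import java.util.Properties;

public class TlsClientConfig {
    // Client-side TLS settings for Kafka 0.9.0.0+; paths and passwords are placeholders.
    public static Properties tlsProperties() {
        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        // Trust store holding the certificate authority that signed the broker certificates.
        props.put("ssl.truststore.location", "/var/private/ssl/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Key store entries are only needed if the broker requires client authentication.
        props.put("ssl.keystore.location", "/var/private/ssl/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}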
>MG>curious if Jim tested his encryption/decryption scenario on Kafka's
>stateless broker?
>MG>Jim's idea could work if you want to implement a new
>serializer/deserializer for every new supported cipher
>
>MG>and yes you *should* change ciphers on a random basis, on an interval
>known only to consumer/producer/broker
>MG>since the broker is stateless, the only way for a new cipher to be
>introduced is to read it from a property or from a DB
>MG>anyone having access to that info would know how to update their
>attack vectors
>MG>has anyone worked out a changeable cipher for Kafka Broker?
>> >>> 
>> >>> Have I left out a scenario?
>MG>majority of financial institutions implement cipher-aware proxy servers
>like Cisco:
>http://www.cisco.com/c/en/us/td/docs/interfaces_modules/services_modules/ace/vA5_1_0/configuration/ssl/guide/sslgd/terminat.html
>MG>once inside the firewall you can send clear text to anyone
>> >>> 
>> >>> --Tom
>> >>> 
>> >>> 
>> >>> On Mon, May 2, 2016 at 2:01 PM, Bruno Rassaerts
>><bruno.rassae...@novazone.be
>> >>>> wrote:
>> >>> 
>> >>>> Hello,
>> >>>> 
>> >>>> We tried encrypting the data before sending it to Kafka; however,
>> >>>> this makes the compression done by Kafka almost impossible.
>> >>>> Also the performance overhead of encrypting the individual
>>messages was
>> >>>> quite significant.
>> >>>> 
>> >>>> Ideally, a pluggable “compression” algorithm would be best, where
>> >>>> messages can first be compressed, then encrypted, in batch.
>> >>>> However, the current Kafka implementation does not allow this.
>> >>>> 
>> >>>> Bruno
>> >>>> 
>> >>>>> On 26 Apr 2016, at 19:02, Jim Hoagland <jim_hoagl...@symantec.com>
>> >>>> wrote:
>> >>>>> 
>> >>>>> Another option is to encrypt the data before you hand it to Kafka
>>and
>> >>>> have
>> >>>>> the downstream decrypt it.  This takes care of on-disk and on-wire
>> >>>>> encryption.  We did a proof of concept of this:
>> >>>>> 
>> >>>>> 
>> >>>> 
>> >>>>> http://www.symantec.com/connect/blogs/end-end-encryption-though-kafka-our-proof-concept
>> >>>>> 
>> >>>>> ( http://symc.ly/1pC2CEG )
>> >>>>> 
>> >>>>> -- Jim
>> >>>>> 
>> >>>>> On 4/25/16, 11:39 AM, "David Buschman" <david.busch...@timeli.io>
>>wrote:
>> >>>>> 
>> >>>>>> Kafka handles messages which are composed of an array of bytes.
>>Kafka
>> >>>> does
>> >>>>>> not care what is in those byte arrays.
>> >>>>>> 
>> >>>>>> You could use a custom Serializer and Deserializer to encrypt and
>> >>>> decrypt
>> >>>>>> the data from within your application(s) easily enough.
>> >>>>>> 
>> >>>>>> This gives the benefit of having encryption at rest and over the
>>wire.
>> >>>> Two
>> >>>>>> birds, one stone.
>> >>>>>> 
>> >>>>>> DaVe.
>> >>>>>> 
>> >>>>>> 
>> >>>>>>> On Apr 25, 2016, at 2:14 AM, Jens Rantil <jens.ran...@tink.se>
>>wrote:
>> >>>>>>> 
>> >>>>>>> IMHO, I think that responsibility should lie on the file
>>system, not
>> >>>>>>> Kafka.
>> >>>>>>> Feels like a waste of time and double work to implement that
>>unless
>> >>>>>>> there's
>> >>>>>>> a really good reason for it. Let's try to keep Kafka a focused
>>product
>> >>>>>>> that
>> >>>>>>> does one thing well.
>> >>>>>>> 
>> >>>>>>> Cheers,
>> >>>>>>> Jens
>> >>>>>>> 
>> >>>>>>> On Fri, Apr 22, 2016 at 3:31 AM Tauzell, Dave
>> >>>>>>> <dave.tauz...@surescripts.com>
>> >>>>>>> wrote:
>> >>>>>>> 
>> >>>>>>>> I meant encryption of the data at rest.  We utilize filesystem
>> >>>>>>>> encryption
>> >>>>>>>> for other products; just wondering if anything was on the Kafka
>> >>>>>>>> roadmap.
>> >>>>>>>> 
>> >>>>>>>> Dave
>> >>>>>>>> 
>> >>>>>>>>> On Apr 21, 2016, at 18:12, Martin Gainty <mgai...@hotmail.com>
>> >>>> wrote:
>> >>>>>>>>> 
>> >>>>>>>>> Dave-
>> >>>>>>>>> so you want username/password credentials to be sent in
>>response to
>> >>>> an
>> >>>>>>>> HTTP Get as clear text?
>> >>>>>>>>> if not, this has been asked and answered with Axis:
>> >>>>>>>>> https://axis.apache.org/axis2/java/rampart/
>> >>>>>>>>> 
>> >>>>>>>>> Martin
>> >>>>>>>>> ______________________________________________
>> >>>>>>>>> 
>> >>>>>>>>> 
>> >>>>>>>>> 
>> >>>>>>>>>> From: dave.tauz...@surescripts.com
>> >>>>>>>>>> To: users@kafka.apache.org
>> >>>>>>>>>> Subject: Encryption at Rest
>> >>>>>>>>>> Date: Thu, 21 Apr 2016 21:31:56 +0000
>> >>>>>>>>>> 
>> >>>>>>>>>> Has there been any discussion or work on at rest encryption
>>for
>> >>>>>>>>>> Kafka?
>> >>>>>>>>>> 
>> >>>>>>>>>> Thanks,
>> >>>>>>>>>> Dave
>> >>>>>>>>>> 
>> >>>>>>>>> 
>> >>>>>>>> 
>> >>>>>>> --
>> >>>>>>> 
>> >>>>>>> Jens Rantil
>> >>>>>>> Backend Developer @ Tink
>> >>>>>>> 
>> >>>>>>> Tink AB, Wallingatan 5, 111 60 Stockholm, Sweden
>> >>>>>>> For urgent matters you can reach me at +46-708-84 18 32.
>> >>>>>> 
>> >>>>> 
>> >>>> 
>> >>>> 
>> >> 
>> 
