I believe these are defaults you can set at the broker level, so that if the
topic doesn't have that setting set, it will inherit them. But you can
definitely override the configuration at the topic level.
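For example, a minimal sketch of what the broker-level defaults look like in
server.properties (the values here are just placeholders); any topic without
its own override inherits them:

  log.retention.hours=168
  log.cleanup.policy=delete
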
On 9 March 2017 at 7:42:14 am, Nicolas Motte (lingusi...@gmail.com) wrote:
Hi everyone, I have another question. Is there any reason why retention and
cleanup policy are defined at cluster level and not topic level?
They are defined at the broker level as a default for all topics that do
not have an override for those configs. Both (and many other configs) can
be overridden for individual topics using the command line tools.
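For example, a minimal sketch using kafka-configs.sh (the topic name and
values are placeholders):

  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name my-topic \
    --add-config retention.ms=86400000,cleanup.policy=compact

  bin/kafka-configs.sh --zookeeper localhost:2181 --describe \
    --entity-type topics --entity-name my-topic

The --describe call just shows which overrides are currently set on the topic.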
-Todd
On Wed, Mar 8, 2017 at 12:36 PM, Nicolas Motte wrote:
> Hi everyone, I have another question.
I think because the product batches messages which could be for different
topics.
-Dave
-Original Message-
From: Nicolas MOTTE [mailto:nicolas.mo...@amadeus.com]
Sent: Wednesday, March 8, 2017 2:41 PM
To: users@kafka.apache.org
Subject: Performance and Encryption
Hi everyone,
I understand one of the reasons why Kafka is performant is by using zero-copy.
Hi everyone, I have another question.
Is there any reason why retention and cleanup policy are defined at cluster
level and not topic level?
I can't see why it would not be possible from a technical point of view...
2017-03-06 14:38 GMT+01:00 Nicolas Motte:
> Hi everyone,
>
> I understand one of the reasons why Kafka is performant is by using
> zero-copy.
Hi everyone,
I understand one of the reasons why Kafka is performant is by using zero-copy.
I often hear that when encryption is enabled, then Kafka has to copy the data
in user space to decode the message, so it has a big impact on performance.
If it is true, I don't get why the message has to be decoded by Kafka at all.
Hi Todd,
I agree that KAFKA-2561 would be good to have for the reasons you state.
Ismael
On Mon, Mar 6, 2017 at 5:17 PM, Todd Palino wrote:
> Thanks for the link, Ismael. I had thought that the most recent kernels
> already implemented this, but I was probably confusing it with BSD. Most of
> my systems are stuck in the stone age right now anyway.
Hi Todd,
Can you please help me with notes or a document on how you achieved
encryption? I have followed the material available on the official sites but
failed, as I'm no good with TLS.
On Mar 6, 2017 19:55, "Todd Palino" wrote:
> It’s not that Kafka has to decode it, it’s that it has to send it across
> the network.
Thanks for the link, Ismael. I had thought that the most recent kernels
already implemented this, but I was probably confusing it with BSD. Most of
my systems are stuck in the stone age right now anyway.
It would be nice to get KAFKA-2561 in, either way. First off, if you can
take advantage of it
Even though OpenSSL is much faster than the Java 8 TLS implementation (I
haven't tested against Java 9, which is much faster than Java 8, but
probably still slower than OpenSSL), all the tests were without zero copy
in the sense that is being discussed here (i.e. sendfile). To benefit from
sendfile, the encryption would have to be done in the kernel.
So that’s not quite true, Hans. First, the performance hit is not a small
impact (25% is huge), nor is it something to simply be expected. Part of the
problem is that the Java TLS implementation does not support zero copy.
OpenSSL does, and in fact there’s been a ticket open to allow Kafka to
support it (KAFKA-2561).
It’s not a single message at a time that is encrypted with TLS, it’s the
entire network byte stream, so a Kafka broker can’t even see the Kafka
Protocol tunneled inside TLS unless the TLS is terminated at the broker.
It is true that losing the zero copy optimization impacts performance
somewhat, but it is not a big impact and it is to be expected.
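As a rough sketch of what terminating TLS at the broker looks like (paths,
passwords and ports are placeholders), the broker gets an SSL listener plus
keystore/truststore settings, and clients point at it with
security.protocol=SSL:

  # broker server.properties
  listeners=PLAINTEXT://:9092,SSL://:9093
  ssl.keystore.location=/var/private/ssl/broker.keystore.jks
  ssl.keystore.password=changeit
  ssl.key.password=changeit
  ssl.truststore.location=/var/private/ssl/broker.truststore.jks
  ssl.truststore.password=changeit

  # producer/consumer properties
  security.protocol=SSL
  ssl.truststore.location=/var/private/ssl/client.truststore.jks
  ssl.truststore.password=changeit

This only covers the transport; anything the clients encrypt themselves at
the application level passes through the broker untouched.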
It’s not that Kafka has to decode it, it’s that it has to send it across
the network. This is specific to enabling TLS support (transport
encryption), and won’t affect any end-to-end encryption you do at the
client level.
The operation in question is called “zero copy”. In order to send a message
out to a consumer, the broker normally hands the data straight from the page
cache to the network socket (sendfile), without ever copying it into user
space. With TLS enabled, the data has to be pulled into user space and
encrypted before it is written to the socket, so that optimization is lost.
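A minimal sketch of that plain (non-TLS) path, using Java NIO's
FileChannel.transferTo, which is what maps to sendfile on Linux (the file
name, host and port are placeholders):

  import java.io.FileInputStream;
  import java.net.InetSocketAddress;
  import java.nio.channels.FileChannel;
  import java.nio.channels.SocketChannel;

  public class ZeroCopySend {
      public static void main(String[] args) throws Exception {
          try (FileChannel log = new FileInputStream("segment.log").getChannel();
               SocketChannel socket =
                   SocketChannel.open(new InetSocketAddress("localhost", 9092))) {
              long position = 0;
              long remaining = log.size();
              while (remaining > 0) {
                  // transferTo() lets the kernel move bytes from the page cache
                  // straight to the socket (sendfile); they are never copied
                  // into a user-space buffer.
                  long sent = log.transferTo(position, remaining, socket);
                  position += sent;
                  remaining -= sent;
              }
          }
      }
  }

With TLS in the picture the broker instead has to read those bytes into a
buffer, run them through SSLEngine, and write the encrypted result to the
socket, which is the extra copying being discussed in this thread.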