Indeed, to get proper performance, messages need to be batched before encryption.
However, this is not that straightforward to implement, and Kafka already has a
very good batching algorithm.
For example, when do you decide to no longer wait for additional messages and
send a non-full batch? Not t
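A minimal sketch of that size-or-time flush tradeoff, using a hypothetical EncryptedBatcher class that is not part of Kafka; it simply mirrors the idea behind the producer's batch.size / linger.ms settings:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the "when do I stop waiting?" question:
// flush when the batch is full, or when the oldest buffered record has
// waited longer than a linger timeout.
public class EncryptedBatcher {
    private final int maxBatchSize;
    private final long lingerMs;
    private final List<byte[]> buffer = new ArrayList<>();
    private long firstAppendTime = -1L;

    public EncryptedBatcher(int maxBatchSize, long lingerMs) {
        this.maxBatchSize = maxBatchSize;
        this.lingerMs = lingerMs;
    }

    public synchronized void append(byte[] message) {
        if (buffer.isEmpty()) {
            firstAppendTime = System.currentTimeMillis();
        }
        buffer.add(message);
        if (buffer.size() >= maxBatchSize) {
            flush();
        }
    }

    // Called periodically by a timer thread: send a non-full batch
    // once it has waited long enough.
    public synchronized void maybeFlushOnLinger() {
        if (!buffer.isEmpty()
                && System.currentTimeMillis() - firstAppendTime >= lingerMs) {
            flush();
        }
    }

    private void flush() {
        List<byte[]> batch = new ArrayList<>(buffer);
        buffer.clear();
        firstAppendTime = -1L;
        // Here the whole batch would be encrypted and handed to the producer.
    }
}
```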
Hi all,
I want to update my Kafka cluster from 0.8.2.2 to 0.10.0. I followed the
rules on kafka.apache.org and some errors happened.
I don't want to stop my cluster, so I made these changes in
server.properties:
change `port=9092` to `listener=PLAINTTEXT://:9092`
add`inner.broker.protocol.ve
Hi Fredo,
A comment below:
On Mon, Jun 6, 2016 at 11:19 AM, Fredo Lee wrote:
> add`inner.broker.protocol.version=0.8.2`
>
There's a typo here, it should be:
inter.broker.protocol.version=0.8.2
Was the typo only in the email or also in your server properties file?
Ismael
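For reference, a sketch of the broker settings the 0.10.0 rolling-upgrade notes describe when coming from 0.8.2.x; the property names are as documented, the values shown are illustrative:

```
# server.properties during the rolling upgrade (sketch)
listeners=PLAINTEXT://:9092

# Keep speaking the old inter-broker protocol until every broker runs the new code
inter.broker.protocol.version=0.8.2

# Recommended for 0.10.0: keep the old message format until clients are upgraded
log.message.format.version=0.8.2
```

Once all brokers are on the new code, the two version properties are bumped and the brokers are restarted one by one again.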
Hello Apache Kafka community,
On a 0.9.0.1 cluster, with all brokers up, I have a topic with a single
partition and a replication factor of 3; min ISR is 2. When the topic was
created, all 3 assigned replicas were in the ISR. Now:
1. All brokers report UnderReplicatedPartitions of 0 while describe topic
reports
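To compare the two views, the describe output can be checked directly; a sketch using the 0.9-era tooling, with the topic name and ZooKeeper address replaced by your own:

```
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-topic
# The output lists Leader, Replicas and Isr per partition; an ISR smaller
# than the replica list is what UnderReplicatedPartitions should reflect.
```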
MG> Quick questions for Bruno and Jim
> Subject: Re: Kafka encryption
> From: bruno.rassae...@novazone.be
> Date: Mon, 6 Jun 2016 10:51:13 +0200
> CC: tcrayf...@heroku.com
> To: users@kafka.apache.org
>
> Indeed to get proper performance, messages need to be batched before
> encryption.
> However
MG> Jim, can we assume you only implement asymmetric cryptography?
As described and depicted in the blog post, we used asymmetric
cryptography as the basis for trust, with symmetric crypto doing the heavy
lifting. Specifically, for each "envelope", we include a randomly
generated AES key encrypted
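A minimal sketch of that envelope pattern, using standard JCE primitives rather than the blog post's actual code: a random per-envelope AES key encrypts the payload, and the recipient's RSA public key encrypts the AES key.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.PublicKey;
import java.security.SecureRandom;

public class EnvelopeSketch {
    // Returns {encryptedAesKey, iv, ciphertext}; a real envelope would also carry metadata.
    public static byte[][] seal(byte[] plaintext, PublicKey recipientRsaKey) throws Exception {
        // 1. A random per-envelope AES key does the heavy lifting (symmetric crypto).
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey aesKey = keyGen.generateKey();

        // 2. Encrypt the payload with AES-GCM.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = aes.doFinal(plaintext);

        // 3. Encrypt the AES key with the recipient's RSA key (asymmetric crypto as the basis for trust).
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, recipientRsaKey);
        byte[] encryptedAesKey = rsa.doFinal(aesKey.getEncoded());

        return new byte[][] { encryptedAesKey, iv, ciphertext };
    }
}
```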
How would it be possible to encrypt an entire batch? My understanding
is that the Kafka server needs to know the boundaries of each message.
(E.g. The server decompresses compressed message sets and re-compresses
individual messages).
Given that precedent, how could the server properly separate th
> Is this a case where multiple logical messages (when combined together)
>are
> treated by Kafka as a single message, and it's up to the consumer to
> separate them?
Yes.
-- Jim
On 6/6/16, 7:12 AM, "Tom Brown" wrote:
>How would it be possible to encrypt an entire batch? My understanding
>is
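A sketch of that "multiple logical messages in one Kafka message" approach, assuming a simple length-prefixed framing (not anything Kafka itself defines): the producer packs several logical messages into one record value and the consumer splits them back out.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class MessagePacking {
    // Producer side: pack multiple logical messages into a single record value.
    public static byte[] pack(List<byte[]> messages) {
        int total = 0;
        for (byte[] m : messages) total += 4 + m.length;
        ByteBuffer buf = ByteBuffer.allocate(total);
        for (byte[] m : messages) {
            buf.putInt(m.length);   // 4-byte length prefix
            buf.put(m);
        }
        return buf.array();
    }

    // Consumer side: split the record value back into logical messages.
    public static List<byte[]> unpack(byte[] value) {
        List<byte[]> messages = new ArrayList<>();
        ByteBuffer buf = ByteBuffer.wrap(value);
        while (buf.remaining() > 0) {
            byte[] m = new byte[buf.getInt()];
            buf.get(m);
            messages.add(m);
        }
        return messages;
    }
}
```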
On Mon, Jun 6, 2016 at 3:12 PM, Tom Brown wrote:
> How would it be possible to encrypt an entire batch? My understanding
> is that the Kafka server needs to know the boundaries of each message.
> (E.g. The server decompresses compressed message sets and re-compresses
> individual messages).
>
Qu
I'd stay away from Camel. Its performance is quite low: up to 5-10 MB/sec
it's OK, but above that it will be your bottleneck.
The problem with Camel is that sometimes its endpoints have special
behavior which is hard to understand, and debugging it is a mess. We are now
migrating away from it.
On Fri
KAFKA-3716 should not be related, as it actually points to a different
issue.
I re-ran the example demo but could not reproduce your issue. Is it
possible that you have multiple Kafka jars in your repo and the older
versions were used for the console producer?
Guozhang
On Sun, Jun 5, 20
Thanks all for the feedback. It sounds like the RollingFileAppender is the
preferred way to go anyway, so the default change could be documented in
release notes unless there's an objection.
On Thu, Jun 2, 2016 at 12:02 PM, Tauzell, Dave wrote:
> The RollingFileAppender is required to use in pr
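For anyone making that switch by hand, a sketch of the corresponding log4j.properties entries; the appender name and size limits here are illustrative, not Kafka's shipped defaults:

```
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```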
For those that have seen this issue on 0.9, can you provide some more
insight into your environments? What OS and filesystem are you running?
Do you find that you can reproduce the behavior with a simple java program
that creates a file, writes to it, waits for a few minutes, then closes the
file?
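A sketch of the kind of standalone reproduction program being asked for; the file name and delay are arbitrary:

```java
import java.io.FileWriter;

public class FileHoldTest {
    public static void main(String[] args) throws Exception {
        // Create a file, write to it, keep the handle open for a few minutes, then close it.
        FileWriter writer = new FileWriter("/tmp/filehold-test.log");
        writer.write("hello\n");
        writer.flush();
        Thread.sleep(5 * 60 * 1000L);  // hold the open handle for 5 minutes
        writer.close();
    }
}
```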
Here are the jars I find in the Kafka libs directory; nothing seems wrong.
Can you post how you ran the demo? Maybe there is some problem with how I ran it.
------------------ Original message ------------------
From: "Guozhang Wang";
Date: Tue, Jun 7, 2016, 2:52
To: "users@kafka.apache.org";
Subject: Re: [Kaf
------------------ Original message ------------------
From: <1429327...@qq.com>;
Date: Tue, Jun 7, 2016, 11:39
To: "users";
Subject: Re: [Kafka Streams] java.lang.IllegalArgumentException: Invalid timestamp -1
Here are the jars I find in the Kafka libs directory; nothing seems wrong.
Can
Yes, and if, on top of that, it were possible to define one's own “compression”
algorithm, one which actually does compress + encrypt, then this would be a
non-issue.
> On 06 Jun 2016, at 17:11, Ismael Juma wrote:
>
> On Mon, Jun 6, 2016 at 3:12 PM, Tom Brown wrote:
>
>> How would it be possible
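A sketch of what such a combined “compression” step could look like on the producer side, using plain JDK streams rather than a real pluggable Kafka codec (which the protocol does not offer): the batch is compressed first, then the compressed bytes are encrypted.

```java
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class CompressThenEncrypt {
    // Compress the batch, then encrypt the compressed bytes.
    // iv must be 16 bytes for AES-CBC.
    public static byte[] wrap(byte[] batch, SecretKey aesKey, byte[] iv) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new IvParameterSpec(iv));

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Data flows: plaintext -> GZIP (compress) -> Cipher (encrypt) -> out
        try (GZIPOutputStream gzip =
                 new GZIPOutputStream(new CipherOutputStream(out, cipher))) {
            gzip.write(batch);
        }
        return out.toByteArray();
    }
}
```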