Hi,
I understand that it's currently not possible to access message headers from
the filter() DSL operation. Is there a semantically equivalent (stateless)
operation that can achieve the same result? I understand that transform(),
transformValues() and process() could achieve the same result
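For concreteness, the Predicate passed to filter() only receives the record key and value, which is why headers cannot be inspected there. A tiny sketch of that limitation, with hypothetical topic names:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class FilterWithoutHeaders {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> stream = builder.stream("input-topic"); // hypothetical topic

        // Predicate#test(key, value) only exposes the key and the value;
        // there is no Headers argument at this point in the DSL.
        stream.filter((key, value) -> value != null && !value.isEmpty())
              .to("output-topic"); // hypothetical topic
    }
}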
https://github.com/apache/kafka/pull/6771
Could this be reviewed, please?
On Wed, 3 Jul 2019 at 11:35, M. Manna wrote:
> https://github.com/apache/kafka/pull/6771
>
> Bouncing both users and dev to get some activity going. We have been waiting
> for a while to get this KIP PR merged.
>
> Could someon
Hi Jorg,
transform(), transformValues(), and process() are not stateful if you do
not add any state store to them. You only need to leave the
variable-length state store argument empty.
Within those methods you can implement your desired filter operation.
Best,
Bruno
On Thu, Jul 4, 2019 at 11:51 AM Jorg Heym
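A minimal sketch of the workaround Bruno describes, assuming a hypothetical "keep" header and hypothetical topic names. transformValues() with no state store names in the varargs stays stateless; since returning null from it only nulls the value rather than dropping the record, a trailing filter() removes those records:

import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier;
import org.apache.kafka.streams.processor.ProcessorContext;

public class HeaderBasedFilter {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KStream<String, String> stream = builder.stream("input-topic"); // hypothetical topic

        final ValueTransformerWithKeySupplier<String, String, String> headerFilter =
            () -> new ValueTransformerWithKey<String, String, String>() {
                private ProcessorContext context;

                @Override
                public void init(final ProcessorContext context) {
                    this.context = context; // gives access to headers()
                }

                @Override
                public String transform(final String key, final String value) {
                    // Keep the record only if a (hypothetical) "keep" header is present.
                    for (final Header header : context.headers()) {
                        if ("keep".equals(header.key())) {
                            return value;
                        }
                    }
                    return null; // transformValues() forwards a null value, it does not drop the record
                }

                @Override
                public void close() { }
            };

        stream
            .transformValues(headerFilter)          // no state store names -> stateless
            .filter((key, value) -> value != null)  // drop the null-valued records marked above
            .to("output-topic");                    // hypothetical topic
    }
}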
Hey all,
I have already seen an email in the archive concerning this, but the suggested
solution was to upgrade the Kafka version to 2.1. In my case, Kafka is
already up to date.
NOTE: The issue has been occurring since this morning.
To be specific about the problem: I'm running two stateful Kafka Streams
applications. From th
On 2019/07/04 12:41:58, Bruno Cadonna wrote:
> Hi Jorg,
>
> transform(), transformValues(), and process() are not stateful if you do
> not add any state store to them. You only need to leave the
> variable-length state store argument empty.
Doh, I did not realize you could leave the state store argument empty
Hi Pawel,
It seems the exception comes from a producer. When a stream task tries
to resume after rebalancing, the task's producer tries to
initialize its transactions and runs into the timeout. This could
happen if the broker is not reachable before the timeout elapses.
Could the big lag th
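If the timeout in question is the producer blocking in initTransactions(), one knob that is sometimes adjusted is the producer's max.block.ms, which Kafka Streams lets you pass through with the producer prefix. A sketch with purely illustrative values and hypothetical names:

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProducerTimeoutConfig {

    public static Properties build() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // hypothetical broker
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        // Give the embedded producer more time to block in initTransactions()
        // while brokers recover; 120000 ms is purely illustrative.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), 120000);

        return props;
    }
}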
Hi Bruno,
Thanks for your reply!
The idea about the broker being unreachable sounds fair to me.
What I just did was restart the brokers, and that seems to diminish the
exception, but I cannot verify whether the lag is decreasing because of
Error listing groups
This error always appears after a broker restart.
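To narrow down the "Error listing groups" symptom, the same listing the tooling performs can be reproduced directly with the AdminClient. A small sketch, assuming a hypothetical broker address:

import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ListGroupsCheck {

    public static void main(final String[] args) throws Exception {
        final Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // hypothetical broker

        try (AdminClient admin = AdminClient.create(props)) {
            // Same metadata path the tooling uses when it reports "Error listing groups".
            for (final ConsumerGroupListing listing : admin.listConsumerGroups().all().get()) {
                System.out.println(listing.groupId());
            }
        }
    }
}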
I had a similar situation. For us one of our firewall appliances was
blocking traffic to the brokers.
On Thu, Jul 4, 2019 at 7:43 AM Paweł Gontarz
wrote:
> Hey all,
>
> I have seen already in archive an email concerning this, but as a solution
> it has been said to upgrade kafka version to 2.1.
Thanks Chad,
Unfortunately that's not our case
On Thu, Jul 4, 2019 at 4:19 PM Chad Preisler
wrote:
> I had a similar situation. For us one of our firewall appliances was
> blocking traffic to the brokers.
>
> On Thu, Jul 4, 2019 at 7:43 AM Paweł Gontarz
> wrote:
>
> > Hey all,
> >
> > I have s
I assume Kafka brokers are on a separate server from the stream apps.
Are you using ACLs? Did they change recently? Maybe an internal topic can’t
be written to.
Is one of the brokers out of disk space?
Any local state on the stream side? Maybe clean that up?
Is the replication factor on the consum
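On the local-state point in the checklist above, Kafka Streams exposes KafkaStreams#cleanUp(), which wipes the application's local state directory while the instance is not running. A minimal sketch, with a hypothetical pass-through topology and hypothetical config values:

import java.util.Properties;

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class CleanLocalStateExample {

    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // hypothetical pass-through topology
        final Topology topology = builder.build();

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // hypothetical broker

        final KafkaStreams streams = new KafkaStreams(topology, props);

        // Deletes this application id's local state directory; only valid
        // before start() or after close(). State is rebuilt from the
        // changelog topics on the next start.
        streams.cleanUp();
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}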
I have a few more details to share from today’s testing.
I attached strace to the process and noticed that thousands of short-lived
child processes have been created by the stream application. Not sure
whether RocksDB is playing any role here. I noticed more than 73000 child
processes were
Hi,
I'm trying to understand how Kafka broker memory is impacted, and why it leads
to more JVM GC, when large messages are sent to Kafka.
*Large messages can cause longer garbage collection (GC) pauses as brokers
allocate large chunks.*
Kafka is zero-copy, so messages do not *pass through* the JVM heap; imp
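As a reference point for the sizes under discussion, here is a sketch of the client-side settings usually kept in line when large messages are sent; the values and broker address are purely illustrative, and the broker-side counterparts (message.max.bytes, replica.fetch.max.bytes) live in server.properties rather than in the Java clients:

import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;

public class LargeMessageSizing {

    // Illustrative 10 MB cap applied consistently across the pipeline.
    private static final int MAX_MESSAGE_BYTES = 10 * 1024 * 1024;

    public static Properties producerProps() {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // hypothetical broker
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, MAX_MESSAGE_BYTES); // per-request cap on the producer
        return props;
    }

    public static Properties consumerProps() {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");              // hypothetical broker
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, MAX_MESSAGE_BYTES);   // must cover the largest message
        return props;
    }
}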