Not sure which client you are using.
In kafka-python, consumer.config returns a dictionary with all consumer
properties.
Thanks,
Faraz
On Mon, Nov 20, 2017 at 5:34 PM, simarpreet kaur
wrote:
> Hello team,
>
> I wanted to know if there is some way I can retrieve consumer properties
> from the KafkaConsumer?
Thanks Ismael.
Just need a clarification on something, because I observed the v0.9 and
v0.10 consumers getting errors for an invalid message format.
Is it true that bumping the consumer version after the rolling upgrade
will not cause message-format mismatch errors in the consumer?
On Tue, Nov 21
Hi,
I am joining 4 different topics with 4 partitions each, using version
0.10.0.0 of Kafka Streams. The joins are KTable to KTable. Is there
anything I should be aware of considering partitions or the version of
Kafka Streams? In other words, should I be expecting consistent results,
or do I need to for
Hi Artur,
Kafka Streams 0.10.0.0 is quite old and a lot has changed and been fixed
since then. If possible I'd recommend upgrading to at least 0.11.0.2 or 1.0.
For joins you need to ensure that the topics have the same number of
partitions (which they do) and that they are keyed the same.
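For reference, a minimal sketch of a co-partitioned KTable-KTable join
against the 1.0 API (topic names, value types, and serdes are placeholders):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KTable;

    public class TableJoinExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "table-join-example");  // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            // Both topics must have the same partition count and be written
            // with the same key, otherwise the join is not co-partitioned.
            StreamsBuilder builder = new StreamsBuilder();
            KTable<String, String> left = builder.table("topic-a");    // placeholder
            KTable<String, String> right = builder.table("topic-b");   // placeholder
            left.join(right, (l, r) -> l + "," + r)
                .toStream()
                .to("joined-output");                                  // placeholder

            new KafkaStreams(builder.build(), props).start();
        }
    }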
Thanks,
Hi All,
I wanted to know if there is any way to get the current offsets of a
consumer group through a Java API.
-Sameer.
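For what it's worth, newer kafka-clients versions (2.0+) expose this
directly through AdminClient.listConsumerGroupOffsets -- a minimal sketch,
with the bootstrap servers and group id as placeholders:

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class GroupOffsets {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                // Last committed offset for every partition of the group.
                Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("my-group")   // placeholder group id
                         .partitionsToOffsetAndMetadata()
                         .get();
                offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
            }
        }
    }

On the 0.11/1.0 clients, KafkaConsumer#committed(TopicPartition), called
from a consumer configured with the group's group.id, returns the same
per-partition information.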
Hi,
Is it possible to switch from gzip to lz4 at runtime on Kafka brokers? My
servers are currently running on gzip, and I want to switch them to lz4.
-Sameer.
Anish,
That's correct, the broker will down convert messages for older consumers
after log.message.format.version is increased. As I said, however, this has
an impact on efficiency so you should only do it if the traffic from old
consumers is low (as mentioned in the release notes). Also note that
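For reference, a sketch of the broker settings involved in the usual
rolling-upgrade sequence (version numbers are placeholders for the release
being upgraded to):

    # server.properties during the rolling upgrade:
    inter.broker.protocol.version=1.0
    log.message.format.version=0.10.2   # keep the old on-disk format for now

    # later, once clients have been upgraded:
    log.message.format.version=1.0

Before the second step, messages stay in the old format and old consumers
read them as-is; after it, the broker down-converts on the fly for any old
consumers that remain, which is the efficiency cost mentioned above.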
If I want to reprocess only a section (i.e., from-to offsets) of the topic
through Kafka Streams, what do you think could be a way to achieve it?
I want the data to be stored in the same state stores; this, I think, would
be a common scenario in a typical production environment.
-Sameer.
You can set a topic-specific compression type by setting the topic-level
config "compression.type".
Another option is to change the compression type config on the producer side.
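For example, something like the following should work on a running cluster
(the ZooKeeper address and topic name are placeholders):

    bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-topic \
      --add-config compression.type=lz4

New batches are then recompressed to lz4 on the broker as they are
appended; data already on disk stays gzip until it ages out.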
On Wed, Nov 22, 2017 at 4:56 PM, Sameer Kumar
wrote:
> Hi,
>
> Is it possible to switch from gzip to lz4 at runtime on Kafka brokers? My
> servers are currently running on gzip, and I want to switch them to lz4.
Hello all,
I am new to Apache Kafka, but I used Apache Qpid before.
I have a question regarding the broker setup:
Is it possible to run several brokers in a Kafka cluster while they are not
all connected to the same ZooKeeper ensemble?
I want to reach the following target:
Several organizations are running
Hi
I posted the same query on the Samza mailing list, but I did not get any
reply. Does anyone have any thoughts?
Sent from GMail on Android
-- Forwarded message --
From: "Debraj Manna"
Date: Nov 21, 2017 5:34 PM
Subject: Running apache samza with Kafka Client 1.0 - JIRA - SAMZA - 1418
To
Hi all,
I'm testing a setup where I have 3 zookeeper hosts and 3 kafka brokers
(version 1.0.0), using the kafka-producer-perf-test.sh script.
It seems that in certain circumstances, sending records is not retried
after a timeout. I'm not sure what is wrong...
From the documentation of the reque
Hi Andreas,
What you are describing is basically one central cluster (managed by the
central org.) and a set of "satellite" clusters (each managed by a
different org.). These will never form a single cluster and you cannot
use features such as replication to mirror the messages.
But you might be able to use a tool like MirrorMaker to copy messages
between the clusters.
Hi Debraj,
It looks like Samza is relying on an internal class. My understanding is
that kafka.javaapi.TopicMetadata is the public version. Either way, all
these classes are used by the deprecated Scala consumers and will be
removed in a future version. It would be great if Samza migrated to the
Java clients.
If you haven't please check KIP-91:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer
It explains how things currently work as well as an improvement (which
should land in 1.1.0).
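In the meantime, the producer settings that govern this behavior -- a
sketch assuming the 0.11/1.0 defaults, with placeholder values:

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder
    // retries defaults to 0 in these versions, so a failed send is not retried
    props.put("retries", "5");
    props.put("retry.backoff.ms", "100");
    // batches still sitting in the accumulator can expire after this and fail
    // without any retry -- the gap KIP-91 addresses
    props.put("request.timeout.ms", "30000");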
Ismael
On Wed, Nov 22, 2017 at 3:03 PM, frederic arno
wrote
OK, so until the issue is resolved by Samza, I have to stick with Kafka
client 0.11?
On Wed, Nov 22, 2017 at 9:11 PM, Ismael Juma wrote:
> Hi Debraj,
>
> It looks like Samza is relying on an internal class. My understanding is
> that kafka.javaapi.TopicMetadata is the public version. Either way,
Yes. Since the Scala clients have not changed much, there isn't much
benefit in upgrading anyway.
Ismael
On Wed, Nov 22, 2017 at 3:59 PM, Debraj Manna
wrote:
> Ok so until the issue is resolved by samza I have to stick with kafka
> client 0.11 ?
>
> On Wed, Nov 22, 2017 at 9:11 PM, Ismael Juma
Hi:
The Kafka broker's node in ZooKeeper is lost after a period of time. What
causes the node to be lost in ZooKeeper?
environment:
zookeeper-3.4.10
kafka_2.11-0.10.2.1
Can you provide more information (such as a pastebin of relevant logs)?
Cheers
On Wed, Nov 22, 2017 at 1:55 AM, Linux实训项目 wrote:
> Hi:
> The Kafka broker's node in ZooKeeper is lost after a period of time. What
> causes the node to be lost in ZooKeeper?
>
>
> environment:
> zookeeper-3.4.10
Hi all,
I would like to double-check with you how to apply some GDPR requirements
to my Kafka topics. Concretely, the "right to be forgotten", which forces
us to delete some data contained in the messages. So, not deleting the
message, but editing it.
To do that, my intention is to replicate the topic
We are using Kafka Connect consumers that consume from the raw unredacted
topic and apply transformations and produce to a redacted topic. Using
Kafka Connect allows us to set it all up with an HTTP request and doesn't
require additional infrastructure.
Then we wrote a KafkaPrincipal builder to au
I am testing a KafkaConsumer. How can I modify it to process records in
parallel?
The KafkaConsumer itself should be used single-threaded. If you want to
parallelize processing, each thread should have its own KafkaConsumer
instance and all consumers should use the same `group.id` in their
configuration. Load will be shared over all running consumers
automatically in this case.
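A minimal sketch of that pattern against the 0.11/1.0 consumer API
(bootstrap servers, group id, and topic name are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ParallelConsumption {
        public static void main(String[] args) {
            // One thread per partition is the useful maximum.
            for (int i = 0; i < 4; i++) {
                new Thread(ParallelConsumption::runConsumer).start();
            }
        }

        static void runConsumer() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder
            props.put("group.id", "my-group");                 // same group for all threads
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            // Each thread creates its OWN instance: KafkaConsumer is not thread-safe.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(100);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("thread=%s partition=%d offset=%d%n",
                            Thread.currentThread().getName(),
                            record.partition(), record.offset());
                    }
                }
            }
        }
    }

The group then assigns each consumer a share of the partitions, so running
more threads than partitions leaves the extras idle.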
I don't think that a "from-to" pattern would be a common scenario -- Kafka
is about stream processing, not batch processing.
I guess you could do a hand-crafted solution though:
1) use bin/kafka-consumer-groups.sh to seek to the corresponding start
offset for the group.id/application.id of your Streams application
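For step 1), the tool has had a --reset-offsets mode since 0.11 -- a
sketch, with group, topic, and offset as placeholders (the application must
be stopped while resetting):

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group my-app-id --topic my-topic \
      --reset-offsets --to-offset 1000 --execute

On restart the application re-reads from that offset; stopping at the "to"
offset is up to your own logic, e.g. checking record offsets during
processing.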
Hi
If I give several locations with smaller capacity for log.dirs vs. one
large drive for log.dirs, are there any pros or cons between the two
(assuming total storage is the same in both cases)?
I don't have access to one large drive for log.dirs, only several smaller
directories. I just want to ensure that
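For reference, the multiple-directory form is just a comma-separated list
(paths are placeholders):

    log.dirs=/data1/kafka-logs,/data2/kafka-logs,/data3/kafka-logs

One thing to note: each partition lives entirely in one of the directories,
so a single partition is still bounded by one disk, and in these versions
Kafka places new partitions in the directory with the fewest partitions
rather than the one with the most free space.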
I understand it now. I must've done something wrong last time.
Thank you.
On Wed 22 Nov, 2017, 5:21 PM Ismael Juma, wrote:
> Anish,
>
> That's correct, the broker will down convert messages for older consumers
> after log.message.format.version is increased. As I said, however, this has
> an imp
KIP-91 is very interesting and answers my questions perfectly, I'm
eagerly waiting for 1.1.0!
thank you, Fred
On Wed, Nov 22, 2017 at 11:42 PM, Ismael Juma wrote:
> If you haven't please check KIP-91:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in
Guys, any thoughts on the below request (getting the current offsets of a
consumer group through a Java API)?
On Wed, Nov 22, 2017 at 4:49 PM, Sameer Kumar
wrote:
> Hi All,
>
> I wanted to know if there is any way to get the current offsets of a
> consumer group through a java api.
>
> -Sameer.
>
I am not too sure if I can have different compression types for the
producer and the broker; it has to be the same.
This is possible by stopping all brokers and producers and changing the
values, but for that the broker cluster has to be taken down. I was looking
for a way to do this in a running cluster.
You can dynamically change topic level configs on brokers.
http://kafka.apache.org/documentation.html#topicconfigs
On Thu, Nov 23, 2017 at 12:38 PM, Sameer Kumar
wrote:
> I am not too sure if I can have different compression types for both
> producer and broker. It has to be same.
>
> This is po