Kamal,
Thanks very much for your testing. I also tried the script
(kafka-console-producer.sh) provided by Kafka and found that it does work in
this situation.
The original testing I did was with a test program we wrote ourselves. I'll
try to find the difference.
Thanks for your help!
Regards,
Aggie
See "offsets.retention.minutes" and "offsets.retention.check.interval.ms".
These settings are global (i.e., they are not per consumer group).
See also
https://stackoverflow.com/questions/38402364/delete-unused-kafka-consumer-group
https://stackoverflow.com/questions/37741936/how-to-delete-kafka-consumer
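As a reference, a minimal broker-side sketch of the two settings mentioned above; the values shown are illustrative defaults, not recommendations:

```properties
# broker server.properties (illustrative values)
# how long offsets for a group with no active consumers are retained
offsets.retention.minutes=1440
# how often the broker checks for and expires stale offsets
offsets.retention.check.interval.ms=600000
```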
Hi,
I just realized that this thread got somehow dropped... Sorry for that.
If you use a KTable, each update to RocksDB is also written into a
changelog topic (for fault tolerance and rebalancing). The changelog
topic is a *compacted topic*; thus, it is guaranteed that the latest
value for each key
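For illustration, compaction is a per-topic setting. A changelog topic created by Streams gets it automatically, but a hedged sketch of the relevant topic-level configs (values illustrative) would look like:

```properties
# topic-level configs (illustrative values)
cleanup.policy=compact
# fraction of the log that may be "dirty" before the cleaner compacts it
min.cleanable.dirty.ratio=0.5
```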
As mentioned in our docs
(http://kafka.apache.org/082/documentation.html#monitoring), the 0.8.2
high-level consumer has an MBean called
kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=([-.\w]+)
You should be able to use that.
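As a sketch, that MBean can be located from Java via standard JMX. The ObjectName below follows the pattern from the docs link above; the surrounding harness (class name, wildcard clientId) is illustrative:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MaxLagCheck {
    // Builds the MaxLag MBean name; "*" matches any clientId.
    static ObjectName maxLagPattern() throws Exception {
        return new ObjectName(
            "kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=*");
    }

    public static void main(String[] args) throws Exception {
        ObjectName pattern = maxLagPattern();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Inside a consumer JVM this would match the registered MaxLag bean;
        // in a plain JVM the query simply returns an empty set.
        server.queryNames(pattern, null)
              .forEach(name -> System.out.println("found: " + name));
        System.out.println("domain: " + pattern.getDomain());
    }
}
```

In a real deployment you would typically connect to the consumer JVM over a remote JMX connector rather than the platform MBean server.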
On Wed, Sep 28, 2016 at 7:10 AM, Vikas Bhatia -X (vikbhati
Hi Kafka Fans,
We just pushed an update to the website. We changed the byline from
"Kafka is a pub-sub system rethought as distributed log" to "Kafka is
a stream platform", because this reflects the modern use of Kafka a
lot better, and stream processing systems are the use cases we optimize
for.
Hi,
I am running Kafka version 0.8.2.
I have a scenario with a single producer client and multiple consumer
clients. All consumers consume from the same topic.
We differentiate consumers using the "consumerGroupId".
Now I have a requirement where I need the offset value for each consumer.
I
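For a 0.8.2 high-level consumer, one way to inspect per-group offsets is the ConsumerOffsetChecker tool that ships with that release. The ZooKeeper address, group, and topic below are placeholders for your own values:

```
# placeholders: localhost:2181, my-group, my-topic
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zookeeper localhost:2181 --group my-group --topic my-topic
```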
Hello,
I'm having trouble getting `vagrant/vagrant-up.sh --aws` to work properly.
The issue I'm having is as follows:
1. The Vagrant install and provisioning complete successfully.
2. SSHing into the cluster locally works, and running from there works fine.
3. Connecting to the cluster can be made to
Awesome! Thanks.
Ara.
On Sep 28, 2016, at 3:20 PM, Guozhang Wang
<wangg...@gmail.com> wrote:
Ara,
I'd recommend using the interactive queries feature, available in the
upcoming 0.10.1 release in a couple of weeks, to query the current snapshot of the
state store.
We are going to write a blog post about step-by-step instructions to
leverage this feature for use cases just like yours soon.
G
Thanks for reporting this, Hamid. The ProcessorTopologyTestDriver is used
only in ProcessorTopologyTest and is currently not expected to be used
otherwise; that is why we override the MockProducer's partitionsFor
function to only return the empty list. Is there any particular reason
that you
I need this ReadOnlyKeyValueStore.
In my use case, I do an aggregateByKey(), so a KTable is formed, backed by a
state store. This is then used by the next steps of the pipeline. Now using the
word count sample, I try to read the state store. Hence I end up sharing it
with the actual pipeline. A
Ara,
Are you using the interactive queries feature but encountered issue due to
locking file conflicts?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-67%3A+Queryable+state+for+Kafka+Streams
This is not expected to happen. If you are indeed using this feature, I'd
like to learn more of you
Hello Mathieu,
Are the web service module and the background component sharing the same
producer client to send messages, and are they sending messages to the same
topic?
If the answer is yes to both, then ordering should be preserved, since the
producer.send() call will append the messages to the par
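Per-partition ordering also depends on producer settings; a hedged sketch of the configs that commonly matter here (values illustrative):

```properties
# keep at most one in-flight request so retries cannot reorder messages
max.in.flight.requests.per.connection=1
retries=3
acks=all
```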
Hamza,
Which "function call" are you referring to that caused the context to be
initialized? Could you share your code snippet for a better understanding
of your problem?
Guozhang
On Tue, Sep 27, 2016 at 2:02 AM, Hamza HACHANI
wrote:
> Hi,
>
>
> I would like to know how, in Kafka Streams, the conte
Thanks Vincent and Kamal. That solves my problem. :)
Yifan
On Tue, Sep 27, 2016 at 11:36 PM, Kamal C wrote:
> You can refer this example[1]
>
> [1]:
> https://github.com/omkreddy/kafka-examples/blob/master/consumer/src/main/java/kafka/examples/consumer/advanced/AdvancedConsumer.java
>
> -
@Kant,
Did you measure the latency while doing the test? I would expect there is
some trade-off between latency and throughput. Using only the default
configuration makes it difficult to compare. It would also be interesting
to see the relative changes when the number of brokers is changed.
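For example, two producer settings that trade latency for throughput, shown with illustrative values:

```properties
# wait up to 10 ms to fill bigger batches (raises latency and throughput)
linger.ms=10
# maximum bytes per batch per partition
batch.size=32768
```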
We are running Kafka 0.9.0.1 in AWS and are seeing quite frequent ISR
changes on many nodes.
None of the nodes seems to be really struggling for server capacity.
Is there any reason the ISR changes could cause metadata and produce
response times on the brokers to spike?
Thanks,
Joseph
Hi,
We are using the latest Kafka 0.10.1 branch. The combination of
ProcessorTopologyTestDriver and WindowedStreamPartitioner results in a
division-by-zero exception because of the empty list of partitions:
https://github.com/apache/kafka/blob/0.10.1/streams/src/test/java/org/apache/kafka/tes
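To make the failure mode concrete, here is a minimal sketch (not the actual WindowedStreamPartitioner code) of the hash-modulo arithmetic a partitioner typically uses; with an empty partition list the partition count is 0 and the modulo throws:

```java
public class PartitionByHash {
    // Typical default-partitioner arithmetic: mask the sign bit, then take
    // the hash modulo the number of partitions. With zero partitions the
    // modulo throws ArithmeticException ("/ by zero").
    static int partition(int hash, int numPartitions) {
        return (hash & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println("partition: " + partition("key".hashCode(), 4));
        try {
            partition("key".hashCode(), 0); // empty partition list
        } catch (ArithmeticException e) {
            System.out.println("empty partition list -> " + e.getMessage());
        }
    }
}
```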