Hello, hope you are doing well,
We have noticed that the last offset for some of the partitions of the
topic *__consumer_offsets* is zero. For instance, the following is a testing
cluster running version 2.2; this cluster has been running for about 15
months already:
*./bin/kafka-run-class.sh kafka.
Issue created https://issues.apache.org/jira/browse/KAFKA-8203
On 2019/04/04 18:03:33, Harsha wrote:
> Hi,
> Yes, this needs to be handled more elegantly. Can you please file a
> JIRA here
> https://issues.apache.org/jira/projects/KAFKA/issues
>
> Thanks,
> Harsha
>
Hi Users,
Let me know if anyone has faced this issue.
I have gone through multiple articles, but they give different answers, so I
just want to check with Kafka users.
Below are the settings I have on the Kafka cluster. What are the tuning
parameters to overcome this large-message-size issue?
Kafka version: 0.11
That was a blooper. But even after correcting it, it still isn't working.
I'm still getting the same error.
Here are the configs again:
*Kafka config:*
KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
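For completeness, a hedged Java sketch of the matching client-side settings
(the inline sasl.jaas.config property has existed since 0.10.2; the bootstrap
address is a placeholder, and the credentials simply mirror the broker JAAS
entry above):

import java.util.Properties;

// Minimal client-side SASL/PLAIN configuration mirroring the broker JAAS file.
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder address
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"admin\" password=\"admin-secret\";");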
Bill,
Could you please double check? It seems I cannot assign the issue to me yet.
Regards,
Jose
On Mon, 8 Apr 2019 at 23:24, Bill Bejeck wrote:
> Jose,
>
> You're all set. I look forward to your PR.
>
> Thanks,
> Bill
>
> On Mon, Apr 8, 2019 at 4:59 PM Jose Lopez
> wrote:
>
> > Bill,
> >
> >
Well,
from your own synopsis it is clear that the message you want to send is much
larger than the max.message.bytes setting on the broker. You can modify it.
However, do keep in mind that if you find yourself constantly increasing this
limit, then you have to look at the message itself. Does it really need to
be
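For later readers, a rough AdminClient sketch of raising that limit for a
single topic; the topic name and size are placeholders, and the producer's
max.request.size (and the consumer fetch sizes) must be raised to match.
kafka-configs.sh --alter does the same from the command line.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RaiseTopicLimit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "my-large-topic"); // placeholder
            // Raise the broker-side per-topic message limit to ~5 MB.
            Config config = new Config(Collections.singletonList(
                new ConfigEntry("max.message.bytes", "5242880")));
            admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
        }
    }
}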
Hello Jonathan,
In the __consumer_offsets topic, the messages are keyed by (group, topic,
partition) and carry the last committed offset as the value.
If you have fewer than 50 group/topic/partition combinations in your cluster,
it could make sense to have __consumer_offsets partitions without any
messages.
Best regards,
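To make this concrete: the broker picks a group's __consumer_offsets
partition by hashing the group id, so a cluster with only a handful of groups
leaves most partitions empty. A minimal Java sketch mirroring the broker's
GroupMetadataManager#partitionFor, assuming the default
offsets.topic.num.partitions of 50:

public class OffsetsPartition {
    // The bitmask keeps the hash non-negative, as Kafka's Utils.toPositive does.
    static int partitionFor(String groupId, int numOffsetsPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % numOffsetsPartitions;
    }

    public static void main(String[] args) {
        System.out.println(partitionFor("my-consumer-group", 50)); // placeholder group id
    }
}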
Thanks a lot Vincent for the clarification.
Cheers!
--
Jonathan
On Tue, Apr 9, 2019 at 1:28 PM Vincent Maurin
wrote:
> Hello Jonathan,
>
> In the __consumer_offsets, the messages have a key with (group, topic,
> partition) and the last consumed offset as value.
> If you have less than 50 grou
Jose,
I can see you listed as a contributor, can you try logging out and logging
back in?
Thanks
Bill
On Tue, Apr 9, 2019 at 7:27 AM Jose Lopez wrote:
> Bill,
>
> Could you please double check? It seems I cannot assign the issue to me
> yet.
>
> Regards,
> Jose
>
> On Mon, 8 Apr 2019 at 23:24,
Hi
Is there a property I can use to set the min insync replicas for an
individual topic?
I only know min.insync.replicas, but that's at the broker level.
Thanks
Hi Pierre,
If you're using a Processor (or Transformer), you might be able to use the
`close` method for this purpose. Streams invokes `close` on the Processor
when it suspends the task at the start of the rebalance, when the
partitions are revoked. (It invokes `init` again once the rebalance is
complete.)
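A minimal sketch of that pattern against the 2.x Processor API; the
file-backed resource is just an illustration of something that needs
releasing and re-acquiring around rebalances:

import java.io.FileNotFoundException;
import java.io.PrintWriter;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class AuditProcessor extends AbstractProcessor<String, String> {
    private PrintWriter audit;

    @Override
    public void init(ProcessorContext context) {
        super.init(context);
        try {
            audit = new PrintWriter("audit.log"); // re-opened after each rebalance
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void process(String key, String value) {
        audit.println(key + "=" + value);
    }

    @Override
    public void close() {
        audit.close(); // Streams calls this when the partitions are revoked
    }
}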
> On Apr 9, 2019, at 8:55 AM, Han Zhang wrote:
>
> Hi
>
> Is there a property that I can set the min insync replicas to an individual
> topic?
>
> I only know min.insync.replicas but it's for the broker level.
>
> Thanks
Hi Han,
Yes, the topic-level config is also min.insync.replicas. You
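For example, a hedged AdminClient sketch that sets it at topic creation; the
topic name, partition count, and replication factor are placeholders, and
kafka-configs.sh --alter can add the same config to an existing topic:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateStrictTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            // Topic-level min.insync.replicas overrides the broker default.
            NewTopic topic = new NewTopic("critical-topic", 3, (short) 3) // placeholders
                .configs(Collections.singletonMap("min.insync.replicas", "2"));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}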
Hi experts,
I believe I need to set the serde for the Double type after/in the map
function for a repartition task. I can't figure out where to specify it.
I've already tried to find the answer in the documentation and articles,
but I failed.
The following code
KStream
Hi Gioacchino,
If I'm understanding your topology correctly, it looks like you are doing a
reduce operation where the result is a double.
For stateful operations, Kafka Streams uses persistent state stores for
keeping track of the update stream. When using the
KGroupedStream#reduce method,
if you
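A hedged sketch of where the Double serde usually goes: into the grouping
step, so the repartition topic and the reduce state store both know how to
handle the values. The topic name and mapping are made up, and the fragment
would sit inside topology-building code:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

StreamsBuilder builder = new StreamsBuilder();
KTable<String, Double> totals = builder
    .<String, String>stream("input-topic")                      // placeholder topic
    .map((k, v) -> KeyValue.pair(k, Double.parseDouble(v)))     // the map forces a repartition
    .groupByKey(Grouped.with(Serdes.String(), Serdes.Double())) // serdes declared here
    .reduce(Double::sum,
        Materialized.with(Serdes.String(), Serdes.Double()));   // and for the state store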
Bill,
Fixed, thank you!
Regards,
Jose
On Tue, 9 Apr 2019 at 15:55, Bill Bejeck wrote:
> Jose,
>
> I can see you listed as a contributor, can you try logging out and logging
> back in?
>
> Thanks
> Bill
>
> On Tue, Apr 9, 2019 at 7:27 AM Jose Lopez
> wrote:
>
> > Bill,
> >
> > Could you please
Hi all,
Just looking for some general guidance.
We have a Kafka -> Druid pipeline we intend to use in an industrial setting
to monitor process data.
Our Kafka system receives messages on a single topic.
The messages are {"timestamp": yy:mm:ddThh:mm:ss.mmm, "plant_equipment_id":
"id_string", "se
Hi Nick,
Have you looked into KSQL?
Kind regards,
Liam Clarke
On Wed, 10 Apr. 2019, 8:26 am Nick Torenvliet,
wrote:
> Hi all,
>
> Just looking for some general guidance.
>
> We have a kafka -> druid pipeline we intend to use in an industrial setting
> to monitor process data.
>
> Our kafka sy
👍
On Tue, Apr 9, 2019 at 4:32 PM Jose Lopez wrote:
> Bill,
>
> Fixed, thank you!
>
> Regards,
> Jose
>
> On Tue, 9 Apr 2019 at 15:55, Bill Bejeck wrote:
>
> > Jose,
> >
> > I can see you listed as a contributor, can you try logging out and
> logging
> > back in?
> >
> > Thanks
> > Bill
> >
> >
I would stream to InfluxDB and visualize with Grafana. Works great for
dashboards. But I would rethink your line format. It's very convenient to
have tags (or labels) that are key/value pairs on each metric if you ever
want to aggregate over a group of similar metrics.
Svante
Use Spark Streaming to receive topics from Kafka and process them through
some kind of rule engine based on incoming tickers.
Sounds like a variation of the Lambda architecture. You have the batch layer
sorted out and are now looking for a speed layer with some form of
dashboard. Something similar to below but m
Nick,
Have you looked at Apache Flink? It's got very powerful APIs; you can stream
aggregations, filters, etc. right into Druid, and it also has very robust
state management that might be a good fit for your use case.
https://flink.apache.org/
https://github.com/druid-io/tranquility
Thanks,
Ke
For more flexibility without the need for extensive coding I suggest Esper
complex event processing (http://www.espertech.com/esper)
-R
Sent: Tuesday, April 09, 2019 at 4:26 PM
From: "Nick Torenvliet"
To: users@kafka.apache.org
Subject: Streaming Data
Hi all,
Just looking for some general guida
Hello, Zhang
Please refer to https://kafka.apache.org/documentation/#topicconfigs.
You can use the kafka-configs.sh to add a config on the topic level.
Best regards
-----Original Message-----
From: Han Zhang [mailto:walkerl...@hotmail.com]
Sent: Tuesday, April 9, 2019 23:55
To: users@kafka.apache.org
Subject:
On 2019/04/09 11:21:10, Shantanu Deshmukh wrote:
> That was a blooper. But even after correcting, it still isn't working.
> Still getting the same error.
> Here are the configs again:
>
> *Kafka config: *
>
> KafkaServer {
>org.apache.kafka.common.security.plain.PlainLoginModule required
Hi Nick,
You could give Kafka Streams a try for your use case. It is included in
Kafka, so you probably already have it!
For details, see https://kafka.apache.org/documentation/streams/ .
KSQL, which has already been mentioned by another member of the mailing
list, uses Kafka Streams under the hood.
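To give a flavour of what that looks like, a rough windowed-count sketch; the
topic names are assumptions, and it assumes the record key is the
plant_equipment_id, which may not match your actual layout:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

StreamsBuilder builder = new StreamsBuilder();
builder.<String, String>stream("process-data")                  // assumed input topic
    .groupByKey(Grouped.with(Serdes.String(), Serdes.String())) // assumes key = plant_equipment_id
    .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))          // 1-minute tumbling windows
    .count()                                                    // records per equipment per window
    .toStream((win, cnt) -> win.key() + "@" + win.window().start())
    .to("process-data-1m-counts", Produced.with(Serdes.String(), Serdes.Long()));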