I am seeing this error in the producer after upgrading the broker to 0.10.0:
2016-08-19 22:18:30.598 c.y.a.t.s.o.KafkaMessageSender pool-10-thread-5
[WARN] caught exception when sending msg to kafka
kafka.common.QueueFullException: Event queue is full of unsent messages,
could not send event:
KeyedMessa
Thx!
On Fri, Aug 19, 2016 at 12:48 AM Manikumar Reddy
wrote:
> This doc link may help:
>
> http://kafka.apache.org/documentation.html#new_producer_monitoring
>
> On Fri, Aug 19, 2016 at 2:36 AM, David Yu wrote:
>
> > Kafka users,
> >
> > I want to resurface this post since it becomes crucial fo
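For context, kafka.common.QueueFullException is thrown by the old Scala producer's async path when its internal send queue fills up. A minimal sketch of the queue-related knobs, assuming the old kafka.javaapi.producer API (broker, topic, and the values below are placeholders, not taken from this thread):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class OldAsyncProducerQueueSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092"); // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "async");
        // QueueFullException is thrown when this queue is full and
        // queue.enqueue.timeout.ms is 0; -1 blocks instead of throwing.
        props.put("queue.buffering.max.messages", "20000");
        props.put("queue.enqueue.timeout.ms", "-1");

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        producer.send(new KeyedMessage<>("my-topic", "key", "value")); // placeholder topic
        producer.close();
    }
}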
Hi,
We have set up a 3-broker Kafka 0.9.0.0 cluster in Cloudera. The brokers are
EC2 d2.xlarge instances. What we are trying to do is run some producer
perf tests on this cluster.
From the client host, we did a bandwidth test to the brokers using iperf3
and we are getting 900 Mbps connections. F
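A minimal hand-rolled throughput sketch with the new Java producer, in case it is useful for comparing against the iperf3 numbers (topic name, record count, and sizes below are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

public class ProducerPerfSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());
        props.put("acks", "1");
        props.put("batch.size", "16384");
        props.put("linger.ms", "5"); // a small batching delay often helps throughput

        int numRecords = 1_000_000;
        byte[] payload = new byte[1024]; // 1 KB records

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            long start = System.currentTimeMillis();
            for (int i = 0; i < numRecords; i++) {
                producer.send(new ProducerRecord<>("perf-test", payload)); // placeholder topic
            }
            producer.flush();
            long elapsedMs = System.currentTimeMillis() - start;
            double mbPerSec = (numRecords * (long) payload.length / 1024.0 / 1024.0)
                    / (elapsedMs / 1000.0);
            System.out.printf("%d records in %d ms (%.2f MB/s)%n",
                    numRecords, elapsedMs, mbPerSec);
        }
    }
}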
My team is considering using either Kafka Connect JDBC or Bottled Water to
stream DB changes from several production Postgres DBs. WRT Bottled Water,
this is a little scary:
https://github.com/confluentinc/bottledwater-pg/issues/96
But the Kafka Connect option also seems like it could affect
Hi Mathieu,
If you are only interested in the aggregate result "snapshot" but not its
change stream (note that KTable itself is not actually a "table" as in
RDBMS, but still a stream), you can try to use the queryable state feature
that is available in trunk and will be included in the 0.10.1.0 re
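A minimal sketch of that queryable state API as it currently looks on trunk; the store name "agg-store" and the key/value types are placeholders for whatever the aggregation was materialized under:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class QueryableStateSketch {
    // Reads the latest aggregate value for a key from a running topology.
    // "agg-store" is a placeholder for the store name the aggregation used.
    static Long latestAggregate(KafkaStreams streams, String key) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("agg-store", QueryableStoreTypes.<String, Long>keyValueStore());
        return store.get(key); // point lookup on the latest "snapshot"
    }
}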
Hello Srikanth,
These two JIRAs are targeted for 0.10.1.0, as indicated in their fix
versions, but with the time-based release plan now in effect, a JIRA is
not guaranteed to land in its marked fix version if it missed the
release's code freeze deadline, and in this case we will upda
Hello Daniel,
I am not sure why you need step 3) before step 4). Could you directly call
"seek" after calling "subscribe" and then get the assigned partitions via
"assignment()"?
Guozhang
On Tue, Aug 16, 2016 at 10:14 AM, Daniel Lyons
wrote:
> Hi,
>
> I’ve read and become somewhat indoctrina
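A sketch of that flow; one wrinkle is that the group assignment only materializes after the first poll(), so the poll(0) below is needed before assignment() returns anything (topic and group names are placeholders):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SeekAfterSubscribe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "seek-demo");             // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic")); // placeholder

        // Trigger the rebalance so assignment() is populated before seeking.
        consumer.poll(0);
        for (TopicPartition tp : consumer.assignment()) {
            consumer.seek(tp, 0L); // e.g. rewind every assigned partition
        }
        consumer.close();
    }
}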
Hi,
A basic question: does zookeeper.connect have a role in the producer? Is it
necessary to include this parameter in the producer properties?
Thanks.
Sent from Samsung Mobile.
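For reference, the Java producer is configured via bootstrap.servers and talks to the brokers directly; zookeeper.connect is a broker and old-consumer setting. A minimal producer properties sketch (hosts and topic are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerPropsSketch {
    public static void main(String[] args) {
        // The producer needs broker addresses only; no ZooKeeper connection.
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder
        }
    }
}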
Hi,
I'm upgrading brokers from 0.8.2.1 to 0.9.0.1 and I'm having trouble with
replicas from old brokers getting out of sync with replicas on new ones
(ISR shrinks).
Since replica.lag.time.max.ms on 0.9 includes the time for the follower to
catch up, I've increased it but that does not remove all s
Thanks Damian. How about Kafka Connectors?
- Drew
> On Aug 19, 2016, at 12:46 AM, Damian Guy wrote:
>
> Hi,
>
> On trunk you can use a regex when creating a stream, i.e.:
>
> builder.stream(Pattern.compile("topic-\\d"));
>
> HTH,
> Damian
>
> On Fri, 19 Aug 2016 at 06:50 Kessiler Rodrigues
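For completeness, a self-contained sketch of that regex subscription on trunk (the application id, bootstrap servers, and sink topic are placeholders):

import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class RegexStreamSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "regex-demo");      // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        // Subscribes to every topic matching the pattern, e.g. topic-0, topic-1, ...
        KStream<String, String> stream = builder.stream(Pattern.compile("topic-\\d"));
        stream.to("output-topic"); // placeholder sink topic

        new KafkaStreams(builder, props).start();
    }
}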
The Kafka consumer/producer currently require a path to a keystore/truststore.
My client runs in the cloud and won't have access to an actual filesystem path for my JKS.
Any ideas on the best way to handle this?
Thanks.
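One common workaround is to materialize the keystore at startup, e.g. from a base64-encoded environment variable, and point the SSL configs at the resulting temp file. A sketch under that assumption (the variable names are hypothetical, not from this thread):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import java.util.Properties;

public class KeystoreFromEnv {
    // Writes a JKS delivered as a base64 env var (KAFKA_TRUSTSTORE_B64 is a
    // hypothetical name) to a temp file, then returns client SSL properties
    // pointing at it.
    static Properties sslProps() throws IOException {
        byte[] jks = Base64.getDecoder().decode(System.getenv("KAFKA_TRUSTSTORE_B64"));
        Path file = Files.createTempFile("truststore", ".jks");
        file.toFile().deleteOnExit();
        Files.write(file, jks);

        Properties props = new Properties();
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", file.toString());
        props.put("ssl.truststore.password", System.getenv("KAFKA_TRUSTSTORE_PASSWORD"));
        return props;
    }
}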
Mazhar,
Let's first confirm if this is indeed a bug. As I mentioned earlier, it's
possible to have message loss with ack=1 when there are (leader) broker
failures. If this is not the case, please file a jira and describe how to
reproduce the problem. Also, it would be useful to know if the message
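For reference, a sketch of the stronger-durability producer settings relevant to that discussion (the broker address is a placeholder; min.insync.replicas is a topic/broker-side setting, shown only as a comment):

import java.util.Properties;

public class DurableProducerProps {
    static Properties props() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // acks=1 only waits for the leader, so a leader failure can drop
        // acknowledged messages; acks=all waits for the full ISR.
        props.put("acks", "all");
        props.put("retries", "3");
        // On the topic/broker side, pair this with min.insync.replicas >= 2.
        return props;
    }
}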
Hi Jun,
In my earlier runs, I had enabled the delivery report facility (with and
without offset reports) provided by librdkafka.
The producer received successful delivery reports for all the messages sent,
even though the messages were lost.
As you mentioned, the producer has nothing to do with this loss of
Hi,
On trunk you can use a regex when creating a stream, i.e.:
builder.stream(Pattern.compile("topic-\\d"));
HTH,
Damian
On Fri, 19 Aug 2016 at 06:50 Kessiler Rodrigues
wrote:
> Hey Drew,
>
> You can easily use a WhiteList, passing your regex pattern as a parameter.
>
> E.g:
>
> Whitelist filter