So you're still having a problem getting partitions or offsets from kafka
when creating the stream. You can try each of those kafka operations
individually (getPartitions / getLatestLeaderOffsets)
checkErrors should be dealing with an ArrayBuffer of Throwables, not just a
single one. Is that the
Dear all,
I am trying to use Kafka for job load balancing and am not sure whether Kafka
supports this feature:
Suppose there is more than one consumer; will one message be consumed by only
one consumer?
I've looked through the documentation and did not find any reference.
Thanks!
刘振(oswaldl)
13430863373
Dear all,
I am trying to send messages to Kafka using kafka-net (C#).
I want to send messages synchronously, and I am using the code below to
achieve this:
var x = client.SendMessageAsync("dynamictopic1", new[] { message });
var response = await x;
Now the response contains the offset details.
If I am
I think this is what you are looking for:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
On Thu, Sep 24, 2015 at 11:59 PM, 刘振 wrote:
> Dear all,
>
>
> I am trying to use kafka to do some job load balance and not sure if kafka
> support this feature:
>
> Suppose there's
Hi,
If you put n different consumers in n different consumer groups, each
consumer will get the same message; each consumer gets the full data.
But if you put n consumers in one consumer group, it will act as a traditional
distributed queue: amortized, each consumer will get 1/n of the overall data.
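The 1/n behaviour comes from each partition being assigned to exactly one member of the group. A minimal local sketch of that idea, using a round-robin-style assignment (the class and method names are illustrative, not the Kafka API):

```java
import java.util.ArrayList;
import java.util.List;

public class GroupAssignmentSketch {
    // Assign `partitions` partitions round-robin across `consumers`
    // members of one group: each partition goes to exactly one member.
    static List<List<Integer>> assign(int partitions, int consumers) {
        List<List<Integer>> result = new ArrayList<>();
        for (int c = 0; c < consumers; c++) result.add(new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            result.get(p % consumers).add(p);
        }
        return result;
    }

    public static void main(String[] args) {
        // 6 partitions over 3 consumers in ONE group: each gets 2 of 6.
        System.out.println(assign(6, 3)); // prints [[0, 3], [1, 4], [2, 5]]
        // Consumers in DIFFERENT groups would each be assigned all 6.
    }
}
```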
Regards
Thanks Cody I was able to find out the issue yesterday after sending the
last email.
On Friday, September 25, 2015, Cody Koeninger wrote:
> So you're still having a problem getting partitions or offsets from kafka
> when creating the stream. You can try each of those kafka operations
> individu
If you are using the new producer, the send API returns a future on which you
can do a .get() to be sure that the message has made it to Kafka, and then do
another send. I am not sure if the .NET producer that you are referring to
exposes this functionality.
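In the Java new producer the blocking pattern is `producer.send(record).get()`. A self-contained sketch of the same shape, with a plain CompletableFuture standing in for the producer's returned future (the offset value is made up):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class SyncSendSketch {
    // Stand-in for producer.send(record): completes with the record's offset.
    static CompletableFuture<Long> sendAsync(String message) {
        return CompletableFuture.supplyAsync(() -> 42L); // pretend broker ack
    }

    public static void main(String[] args)
            throws ExecutionException, InterruptedException {
        // .get() blocks until the "broker" acks, making the send synchronous.
        long offset = sendAsync("hello").get();
        System.out.println("acked at offset " + offset);
        // The real new producer has the same shape:
        //   RecordMetadata md =
        //       producer.send(new ProducerRecord<>("topic", "hello")).get();
    }
}
```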
Thanks,
Mayuresh
Is there a way to delete the Kafka server, controller, and state-change logs?
They just keep growing over time and are not purged.
-Hema
Absolutely.
You can go into config/log4j.properties and configure the appenders to roll
the logs.
For example:
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${ka
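For reference, the state-change appender section in a stock `config/log4j.properties` looks roughly like this (the `${kafka.logs.dir}` path and layout are the shipped defaults; verify against your own copy):

```properties
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```

Note that DailyRollingFileAppender only rolls the files on the DatePattern boundary; it does not delete old rolled files, so you still need something external (e.g. a cron job) to purge them.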
How busy are the clients?
The brokers occasionally close idle connections, this is normal and
typically not something to worry about.
However, this shouldn't happen to consumers that are actively reading data.
I'm wondering if the "consumers not making any progress" could be due to a
different is
I don't see the logs attached, but what does the GC look like in your
applications? A lot of times this is caused (at least on the consumer side)
by the Zookeeper session expiring due to excessive GC activity, which
causes the consumers to go into a rebalance and change up their connections.
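If GC pauses turn out to be the cause, one mitigation on the 0.8 high-level consumer is raising the ZooKeeper session timeout so short pauses don't expire the session. A sketch, assuming the old consumer's config keys (`zookeeper.session.timeout.ms` defaults to 6000 ms; the values here are illustrative):

```java
import java.util.Properties;

public class ConsumerTimeoutSketch {
    // 0.8 high-level consumer settings; the property names are the real
    // consumer config keys, the values are illustrative.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("group.id", "my-group");
        props.put("zookeeper.connect", "localhost:2181");
        // Default is 6000 ms; a longer timeout tolerates longer GC pauses
        // before ZooKeeper expires the session and triggers a rebalance.
        props.put("zookeeper.session.timeout.ms", "30000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            consumerProps().getProperty("zookeeper.session.timeout.ms"));
        // These props would then be passed to the consumer connector, e.g.
        //   Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    }
}
```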
-Todd
Hello,
I tried using the sources for unit tests, but have been unsuccessful in
getting it to work with the low level API (kafka version 0.8.2.2). Our
code is modelled on the Simple Consumer API code, and the error always
occurs in the findLeader code.
SEVERE: Error communicating with broker:loca
Thanks Gwen!
I made the changes and restarted the Kafka nodes. Looks like all log files are
still present. Does it take some time for the changes to kick in?
Here is the sample of changes:
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.Da
That rebalance cycle doesn't look endless. I see that you started 23
consumers, and I see 23 rebalances finishing successfully, which is
correct. You will see rebalance messages from all of the consumers you
started. It all happens within about 2 seconds, which is fine. I agree that
there is a lot
I have Kafka 2.10-0.8.2.1 and zookeeper installed on the AWS EC2. The instance
is working fine.
[kafka@ip-xx-xx-xx-xx bin]$ ./kafka-topics.sh --topic topic1 --zookeeper
localhost:2181 --describe
Topic:topic1  PartitionCount:1  ReplicationFactor:1  Configs:
Topic: topic1 Part