I tried as you suggested, but still no output of any group info.
On Fri, Feb 19, 2016 at 2:45 PM, tao xiao wrote:
> That is what I mean by alive. If you use the new consumer to connect to the
> broker, you should use the --new-consumer option to list all consumer groups:
>
> kafka-run-class.sh kafka.admin.ConsumerGroupCommand --list --new-consumer
> --bootstrap-server localhost:9092
Did you try what Adam suggested in the earlier email? Also, as a quick
check, you can try removing the keystore and key.password configs from the
client side.
-Harsha
On Thu, Feb 18, 2016, at 02:49 PM, Srikrishna Alla wrote:
> Hi,
>
> We are getting the below error when trying to use a Java new producer
That is what I mean by alive. If you use the new consumer to connect to the
broker, you should use the --new-consumer option to list all consumer groups:
kafka-run-class.sh kafka.admin.ConsumerGroupCommand --list --new-consumer
--bootstrap-server localhost:9092
On Fri, 19 Feb 2016 at 14:18 Amoxicillin wrote:
>
How do I confirm the consumer groups are alive? I have one consumer in the
group running at the same time, and it receives messages correctly.
On Fri, Feb 19, 2016 at 1:40 PM, tao xiao wrote:
> When using ConsumerGroupCommand you need to make sure your consumer groups
> are alive. It only queries offsets for consumer groups that are currently
> connected to brokers.
When using ConsumerGroupCommand you need to make sure your consumer groups
are alive. It only queries offsets for consumer groups that are currently
connected to brokers.
On Fri, 19 Feb 2016 at 13:35 Amoxicillin wrote:
> Hi,
>
> I use kafka.tools.ConsumerOffsetChecker to view the consumer offset
Hi,
I use kafka.tools.ConsumerOffsetChecker to view the consumer offset
status, and can get the correct output, along with a
warning: ConsumerOffsetChecker is deprecated and will be dropped in
releases following 0.9.0. Use ConsumerGroupCommand instead.
But when I switched to bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand
Yeah I didn't mean to imply that we committed after each poll, but rather
that when it was time to commit, this would happen on the next poll call
and hence only commit processed messages.
-Jay
On Thu, Feb 18, 2016 at 2:21 PM, Avi Flax wrote:
> On Thu, Feb 18, 2016 at 4:26 PM, Jay Kreps wrote:
The consumer is single-threaded, so we only trigger commits in the call to
poll(). As long as you consume all the records returned from each poll
call, the committed offset will never get ahead of the consumed offset, and
you'll have at-least-once delivery. Note that the implication is that "
auto.c
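To make that concrete, here is a minimal sketch of the loop this implies,
assuming the 0.9 new consumer with auto commit enabled; the topic, group id
and the process() helper below are made up for illustration.

  import java.util.Arrays;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.*;

  Properties props = new Properties();
  props.put("bootstrap.servers", "localhost:9092");
  props.put("group.id", "my-group");             // hypothetical group id
  props.put("enable.auto.commit", "true");       // commits are triggered from inside poll()
  props.put("key.deserializer",
      "org.apache.kafka.common.serialization.StringDeserializer");
  props.put("value.deserializer",
      "org.apache.kafka.common.serialization.StringDeserializer");

  KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
  consumer.subscribe(Arrays.asList("my-topic")); // hypothetical topic
  while (true) {
      // poll() may commit the offsets of the records returned by the *previous*
      // poll, so as long as the processing below finishes before the next poll,
      // delivery stays at-least-once.
      ConsumerRecords<String, String> records = consumer.poll(1000);
      for (ConsumerRecord<String, String> record : records) {
          process(record);                       // your own processing logic
      }
  }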
Hi,
We are getting the below error when trying to use a Java new producer client.
Please let us know the reason for this error -
Error message:
[2016-02-18 15:41:06,182] DEBUG Accepted connection from /10.**.***.** on
/10.**.***.**:9093. sendBufferSize [actual|requested]: [102400|102400]
recvB
That was a typo. I did remove that and still get the same error.
Thanks,
Sri
> On Feb 18, 2016, at 4:21 PM, Adam Kunicki wrote:
>
> Ha! nice catch Gwen!
>
>> On Thu, Feb 18, 2016 at 3:20 PM, Gwen Shapira wrote:
>>
>> props.put("ssl.protocal", "SSL"); <- looks like a typo.
>>
>> On Thu, Feb 18,
Hi,
I tried using the HDFS connector sink with Kafka Connect and it works as
described here:
http://docs.confluent.io/2.0.0/connect/connect-hdfs/docs/index.html
My scenario:
I have plain JSON data in a Kafka topic. Can I still use the HDFS connector
sink to read data from the Kafka topic and write it to HDFS in
Ha! nice catch Gwen!
On Thu, Feb 18, 2016 at 3:20 PM, Gwen Shapira wrote:
> props.put("ssl.protocal", "SSL"); <- looks like a typo.
>
> On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla <
> srikrishna.a...@aexp.com.invalid> wrote:
>
> > Hi,
> >
> > We are getting the below error when trying
Just to be thorough, it seems you have client authentication enabled as
well.
This means that each broker must have your client's public certificate in
its truststore.
I felt like it might be easier to draw a diagram than write it out, but
this is what your setup should look like:
[inline image: diagram of the keystore/truststore setup]
props.put("ssl.protocal", "SSL"); <- looks like a typo.
On Thu, Feb 18, 2016 at 2:49 PM, Srikrishna Alla <
srikrishna.a...@aexp.com.invalid> wrote:
> Hi,
>
> We are getting the below error when trying to use a Java new producer
> client. Please let us know the reason for this error -
>
> Err
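For reference, a hedged sketch of what the SSL-related settings usually look
like on the new Java producer side, where props is the java.util.Properties
passed to the KafkaProducer constructor; paths and passwords are placeholders,
and the property Gwen flagged is normally spelled security.protocol.

  props.put("bootstrap.servers", "broker1:9093");
  props.put("security.protocol", "SSL");         // rather than "ssl.protocal"
  props.put("ssl.truststore.location", "/path/to/client.truststore.jks");
  props.put("ssl.truststore.password", "changeit");
  // Only needed when brokers require client authentication
  // (ssl.client.auth=required on the broker side):
  props.put("ssl.keystore.location", "/path/to/client.keystore.jks");
  props.put("ssl.keystore.password", "changeit");
  props.put("ssl.key.password", "changeit");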
On Thu, Feb 18, 2016 at 4:26 PM, Jay Kreps wrote:
> The default semantics of the new consumer with auto commit are
> at-least-once-delivery. Basically during the poll() call the commit will be
> triggered and will commit the offset for the messages consumed during the
> previous poll call. This is
Hi Andrew,
That is really awesome. I would be interested in taking a look!
Best,
Liquan
On Thu, Feb 18, 2016 at 10:56 AM, Andrew Stevenson <
and...@datamountaineer.com> wrote:
> Hi Guys,
>
> I posted on the Confluent mailing list about my Cassandra Connect sink.
>
> Comments please. Be gentle!
You should have access now.
On Thu, Feb 18, 2016 at 12:09 PM, Christian Posta wrote:
> Can someone add Karma to my user id for contributing to the wiki/docs?
> userid is 'ceposta'
>
> thanks!
>
> --
> *Christian Posta*
> twitter: @christianposta
> http://www.christianposta.com/blog
> http://fabr
The default semantics of the new consumer with auto commit are
at-least-once-delivery. Basically during the poll() call the commit will be
triggered and will commit the offset for the messages consumed during the
previous poll call. This is an advantage over the older scala consumer
where the consu
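If you prefer the commit to be explicit rather than piggybacked on poll(), a
minimal variation on the sketch earlier in this digest is to disable auto
commit and commit after processing (process() is again a made-up placeholder):

  props.put("enable.auto.commit", "false");
  // consumer construction and subscribe() as in the earlier sketch
  while (true) {
      ConsumerRecords<String, String> records = consumer.poll(1000);
      for (ConsumerRecord<String, String> record : records) {
          process(record);
      }
      consumer.commitSync(); // commit only what this poll returned, after processing
  }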
Hello all, I have a question about Kafka Streams, which I’m evaluating
for a new project. (I know it’s still a work in progress but it might
work anyway for this project.)
I'm new to Kafka in particular and distributed systems in general, so
please forgive me if I'm confused about any of these concepts.
Hi John,
I should preface this by saying I've never used Storm and KafkaBolt and am
not a streaming expert.
However, if you're running out of buffer in the producer (which is what's
happening in the other thread you referenced), you can possibly alleviate
this by adding more producers, or by tuning
Can someone add Karma to my user id for contributing to the wiki/docs?
userid is 'ceposta'
thanks!
--
*Christian Posta*
twitter: @christianposta
http://www.christianposta.com/blog
http://fabric8.io
Thanks Joel, it is clear now.
/Ivan
- Original message -
From: Joel Koshy
To: "users@kafka.apache.org"
Subject: Re: Kafka response ordering guarantees
Date: Thu, 18 Feb 2016 11:24:48 -0800
>
> Does this mean that when a client is sending more than one in-flight
>> request per connectio
>
> Does this mean that when a client is sending more than one in-flight
>> request per connection, the server does not guarantee that responses will
>> be sent in the same order as requests?
>
>
> No - the server does provide this guarantee - i.e., responses will always
> be sent in the same order
> Does this mean that when a client is sending more than one in-flight
> request per connection, the server does not guarantee that responses will
> be sent in the same order as requests?
No - the server does provide this guarantee - i.e., responses will always
be sent in the same order as requests.
Hi John,
Kafka offsets are sequential id numbers that identify messages in each
partition. They might not be sequential within a topic (which can have
multiple partitions).
Offsets don't necessarily start at 0, since old messages are deleted.
bin/kafka-run-class.sh kafka.tools.GetOffsetShell is pretty
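For reference, a typical invocation looks like this (topic name made up;
--time -1 asks for the latest offsets and -2 for the earliest):
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic my-topic --time -1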
Hi Guys,
I posted on the Confluent mailing list about my Cassandra Connect sink.
Comments please. Be gentle!
https://github.com/andrewstevenson/stream-reactor/tree/master/kafka-connect
Regards
Andrew
A Connection and a ZK session are two different things.
The ZK master keeps track of a client session's validity. When a client
connection gets interrupted, its associated session goes into Disconnected
state, after a while it will be Expired, but if a new connection is
established before the time
Hi Alex,
Great info, thanks! I asked a related question this AM--is a full queue
possibly a symptom of back pressure within Kafka?
--John
On Thu, Feb 18, 2016 at 12:38 PM, Alex Loddengaard
wrote:
> Hi Saurabh,
>
> This is occurring because the produce message queue is full when a produce
> req
This may not be helpful, but the first thing I've learned to check in
similar situations is whether there is significant time-drift between VMs
and actual hardware. Some combination of time-drift and a time-sensitive
security check could be causing this. IIRC, CentOS has a funky issue with
gettin
Hi Gary,
The coordinator is a special broker which is chosen for each consumer group
to manage its state. It facilitates group membership, partition assignment
and offset commits. If the coordinator is shutdown, then Kafka will choose
another broker to assume the role. The log message might be a l
Woops. Looks like Alex got there first. Glad you were able to figure it out.
-Jason
On Thu, Feb 18, 2016 at 9:55 AM, Jason Gustafson wrote:
> Hi Robin,
>
> It would be helpful if you posted the full code you were trying to use.
> How to seek largely depends on whether you are using new consumer
Hi Robin,
It would be helpful if you posted the full code you were trying to use. How
to seek largely depends on whether you are using the new consumer in "simple"
or "group" mode. In simple mode, when you know the partitions you want to
consume, you should just be able to do something like the following
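Not the code Jason actually posted (the archive preview cuts it off), but a
minimal sketch of the simple-mode pattern he is describing, with a made-up
topic, partition and offset:

  import java.util.Collections;
  import org.apache.kafka.common.TopicPartition;

  TopicPartition tp = new TopicPartition("my-topic", 0);
  consumer.assign(Collections.singletonList(tp)); // no group coordination involved
  consumer.seek(tp, 42L);                         // jump to an arbitrary offset
  ConsumerRecords<String, String> records = consumer.poll(1000);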
Hi Saurabh,
This is occurring because the produce message queue is full when a produce
request is made. The size of the queue is configured
via queue.buffering.max.messages. You can experiment with increasing this
(which will require more JVM heap space), or fiddling with
queue.enqueue.timeout.ms
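For reference, a hedged sketch of the old (Scala) async producer configuration
these settings belong to; the values are illustrative only:

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.ProducerConfig;

  Properties props = new Properties();
  props.put("metadata.broker.list", "localhost:9092");
  props.put("producer.type", "async");
  props.put("serializer.class", "kafka.serializer.StringEncoder");
  props.put("queue.buffering.max.messages", "50000"); // bigger queue, more heap
  props.put("queue.enqueue.timeout.ms", "-1");        // -1 = block instead of failing when full
  Producer<String, String> producer = new Producer<>(new ProducerConfig(props));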
Hi Robin,
Glad it's working. I'll explain:
When a consumer subscribes to one or many topics using subscribe(), the
consumer group coordinator is responsible for assigning partitions to each
consumer in the consumer group, to ensure all messages in the topic are
being consumed. The coordinator han
My Kafka version: kafka_2.9.1-0.8.2.2
I want to add partitions and change the replication factor for a topic, but I
don't have kafka-add-partitions.sh.
———
TD Technology Department
Liu Hao (Eric Liu), Senior Technical Development Manager
TEL: 010-51292727
MP:
Thanks Ben.
As I mentioned, I'm developing a Kafka library and not using the standard
Java producer.
My question is really about protocol guarantees.
/Ivan
- Original message -
From: Ben Stopford
To: users@kafka.apache.org
Subject: Re: Kafka response ordering guarantees
Date: Wed, 17 Feb
Running on CentOS with Kafka 0.8.2.1.
I'm experiencing periodic disconnects. I've enabled keepalives, though I don't
think that's related to the issue.
The fact that this happens at a granularity of 5 minutes, and always 45 seconds
after, is interesting. This may be some internal network issue,
Hi Everyone,
I am encountering an exception similar to Saurabh's report earlier today
as I try to scale up a Storm -> Kafka output via the KafkaBolt (i.e., add
more KafkaBolt executors).
Question: does this necessarily indicate back pressure from Kafka, where
the Kafka writes cannot keep up wit
Hi,
We have a Kafka server deployment shared between multiple teams, and I have
created a topic with multiple partitions on it for pushing some JSON data.
We have multiple such Kafka producers running on different machines which
produce/push data to the Kafka topic. A lot of times I see the follow
Hi,
OK, I did a poll() before my seek() and then poll() again, and now my consumer
starts at the right offset.
Thank you a lot! But I don't really understand why I have to do that; can
anyone explain?
Regards,
Robin
2016-02-17 20:39 GMT+01:00 Alex Loddengaard :
> Hi Robin,
>
> I believe seek() needs t
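A hedged explanation, since Alex's reply is cut off above: with subscribe(),
partitions are only assigned to the consumer inside poll(), so until the first
poll() has completed the group join there is nothing for seek() to act on. The
pattern Robin describes looks roughly like this (names made up):

  consumer.subscribe(Arrays.asList("my-topic"));
  consumer.poll(0);                      // joins the group and gets a partition assignment
  for (TopicPartition tp : consumer.assignment()) {
      consumer.seek(tp, 0L);             // now there are assigned partitions to seek on
  }
  ConsumerRecords<String, String> records = consumer.poll(1000);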