Hi,
I am reading data from Kafka into Spark. It runs fine for some time but
then hangs forever with the following output. I don't see any errors in the logs.
How do I debug this?
2015-12-01 06:04:30,697 [dag-scheduler-event-loop] INFO (Logging.scala:59)
- Adding task set 19.0 with 4 tasks
2015-12-01 06:0
Kris,
It just points to the mirror site. If you click on one of the links, you
will see the release notes.
Thanks,
Jun
On Mon, Nov 30, 2015 at 1:37 PM, Kris K wrote:
> Hi,
>
> Just noticed that the Release notes link of 0.9.0.0 is pointing to the
> download mirrors page.
>
>
> https://www.apa
Siyuan,
In general 0.9 new consumer API relies on the group coordinator on the
broker side to manage consumer groups, so you would need to upgrade the
brokers first.
However, if you are only using the assign() function to assign partitions
(i.e. no subscribe()), you will not need the group coordinator
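The assign()-only path described above might look like the following sketch. The broker address and topic name are placeholders, and this assumes the 0.9.0.0 kafka-clients jar is on the classpath:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignOnlyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() fixes the partition set on the client side, so the
            // broker-side group coordinator is never involved (unlike subscribe()).
            consumer.assign(Collections.singletonList(new TopicPartition("my-topic", 0)));
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```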
Thanks.
I just found that the new KafkaConsumer does have two API functions,
assignment() and committed(TopicPartition partition). With these two
functions, we'll be able to retrieve the timestamp of the last offset
regardless of whether offset storage uses ZK or the offsets topic.
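For example, reading back the committed offset and its metadata string for every assigned partition could look like this fragment (assuming `consumer` is an already-configured 0.9 KafkaConsumer):

```java
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// For each currently assigned partition, look up the last committed offset.
for (TopicPartition tp : consumer.assignment()) {
    OffsetAndMetadata committed = consumer.committed(tp);
    if (committed != null) { // null if nothing has been committed yet
        System.out.printf("%s -> offset %d, metadata '%s'%n",
                tp, committed.offset(), committed.metadata());
    }
}
```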
Howard
--
Howard Wang
Eng
Is the 0.9 new consumer API compatible with a 0.8.x.x broker?
Hi,
I use the new KafkaConsumer from the just-released Kafka (kafka-clients) 0.9.0.0 to
manually commit offsets (consumer.commitAsync()). I have a use case where I
want to know the metadata related to the offset, such as the timestamp of the
last offset.
In Kafka 0.8.* java API, there is an offse
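One way to carry your own timestamp through a commit in 0.9 is the free-form metadata string of OffsetAndMetadata. A hedged sketch, in which the topic, partition, and offset are placeholders and `consumer` is assumed to be configured already:

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Store the commit wall-clock time in the metadata string so it can be
// read back later via consumer.committed(partition).metadata().
long nextOffset = 42L; // placeholder: one past the last processed offset
Map<TopicPartition, OffsetAndMetadata> offsets = Collections.singletonMap(
        new TopicPartition("my-topic", 0),
        new OffsetAndMetadata(nextOffset, Long.toString(System.currentTimeMillis())));
consumer.commitAsync(offsets, (committedOffsets, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // commit failed; offsets were not stored
    }
});
```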
Hi,
Just noticed that the Release notes link of 0.9.0.0 is pointing to the
download mirrors page.
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.9.0.0/RELEASE_NOTES.html
Thanks,
Kris K
Hi guys,
I want to use the partitionsFor method of the new consumer API periodically to
monitor partition metadata changes, but it seems it only issues a remote call
to the server the first time. If I add partitions after that,
partitionsFor will return stale values. Is there a way to reuse the consumer
object
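One thing worth checking: the new consumer caches cluster metadata and only refreshes it periodically, governed by metadata.max.age.ms (default 300000 ms). Lowering it should make partitionsFor pick up newly added partitions sooner. A config sketch, with a placeholder broker address:

```java
import java.util.Properties;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092"); // placeholder
// Refresh cached cluster metadata every 30s instead of the 5-minute
// default, so partitionsFor() sees newly added partitions sooner.
props.put("metadata.max.age.ms", "30000");
```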
Hey Martin,
At a glance, it looks like your consumer's session timeout is expiring.
This shouldn't happen unless there is a delay between successive calls to
poll which is longer than the session timeout. It might help if you include
a snippet of your poll loop and your configuration (i.e. any overrides
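For reference, the shape of poll loop this advice assumes, where the per-batch processing must stay well under session.timeout.ms (`props`, the topic name, and process() are placeholders):

```java
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("my-topic"));
    while (true) {
        // In 0.9 heartbeats are sent from within poll(), so any gap between
        // calls longer than session.timeout.ms gets the consumer evicted
        // from the group.
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records) {
            process(record); // placeholder: must be fast relative to the timeout
        }
    }
}
```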
ok, you should be good to go
~ Joe Stein
- - - - - - - - - - - - - - - - - - -
http://www.elodina.net
http://www.stealth.ly
- - - - - - - - - - - - - - - - - - -
On Mon, Nov 30, 2015 at 1:05 PM, Andrew Schofield <
andrew_schofi...@uk.ibm.com> wrote:
> Hi,
> My Conflu
Hi,
My Confluence name is "andrew_schofield" and being able to edit would be
great.
Thanks,
Andrew
Andrew Schofield
Chief Architect, Hybrid Cloud Messaging
Senior Technical Staff Member
IBM Systems, Middleware
IBM United Kingdom Limited
Mail Point 211
Hursley Park
Winchester
Hampshire
SO21 2JN
Hey Andrew, cool, yeah!
What is your confluence name you can edit the page once you get permission
to edit just need to ask on list.
Has anyone thought about working more on that page, putting it together more
for folks? I think once I put the page on 7 slides in 9-point font it wasn't
categorized or anything
Hi,
Please could we be added to the "Powered by Kafka" list.
Company: IBM
Description: The Message Hub service in our Bluemix PaaS offers
Kafka-based messaging in a multi-tenant, pay-as-you-go public cloud. It's
intended to provide messaging services for microservices, event-driven
processing a
Well, I made the problem go away, but I'm not sure why it works :-/
Previously I used a timeout value of 100 for Consumer.poll(). Increasing
it to 10,000 makes the problem go away completely?! I tried other values as
well:
- 0: problem remained
- 3000, same as heartbeat.interval: problem remained
Hi Guozhang,
I have done some testing with various values of heartbeat.interval.ms and
they don't seem to have any influence on the error messages. Running
kafka-consumer-groups also continues to report that the consumer group
does not exist or is rebalancing. Do you have any suggestions as to how
Hi Debraj,
A couple things you could try.
Given your design
https://chart.googleapis.com/chart?chl=digraph+G+%7B%0D%0A+++rankdir%3DLR%3B%0D%0A+++service1LSFWD+-%3E+LS+-%3E+Kafka+-%3E+LSELK+-%3E+ES+-%3E+Kibana%0D%0A+++service2LSFWD+-%3E+LS%0D%0A+++service3LSFWD+-%3E+LS%0D%0A+++service4LSFWD+-
I just ran into almost the same problem. In my case it was solved by setting
'advertised.host.name' to the correct value in the server properties.
The hostname you enter here should be resolvable from the cluster you're
running the test from.
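For reference, the relevant broker setting in server.properties (the hostname below is a placeholder):

```properties
# server.properties
# The hostname the broker advertises to clients; it must be resolvable
# from wherever producers and consumers run, otherwise clients can reach
# the bootstrap broker but then fail or hang on the advertised address.
advertised.host.name=broker1.example.com
```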
On Mon, Nov 30, 2015 at 3:40 AM Yuheng Du wrote:
> Als