Daniel,
Awesome, Ruby folks could use more Kafka love! I added the library to the
clients list here:
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Ruby I'm
also cc'ing this to the clients list since I think they'd be interested as
well.
Lots of folks are using the Java clients
Gary,
Here are a few concrete examples from Kafka and Confluent Platform:
JSON (baked into Kafka Connect; not specifically designed for standalone
serialization, but it should work for that):
https://github.com/apache/kafka/blob/trunk/connect/json/src/main/java/org/apache/kafka/connect/json/Json
On Wed, Feb 3, 2016 at 3:57 PM, Shane MacPhillamy
wrote:
> Hi
>
> I’m just coming up to speed with Kafka. Some beginner questions, may be
> point me to where I can find the answers please:
>
> 1. In a Kafka cluster, what determines the maximum number of concurrent
> consumers that may be connected?
The default max message size is 1MB. You'll probably need to increase a few
settings: the max message size on a per-topic basis on the broker (or
broker-wide with message.max.bytes), max.partition.fetch.bytes on the new
consumer, etc. You need to make sure the producer, broker, and consumer
settings are all consistent.
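The settings involved can be sketched as a config fragment (the values below
are illustrative, not recommendations; the topic-level override is shown as a
CLI comment):

```properties
# Broker (server.properties), cluster-wide:
message.max.bytes=2097152
# Replication fetches must also be able to carry the larger messages:
replica.fetch.max.bytes=2097152
# Per-topic alternative:
#   kafka-topics.sh --alter --topic mytopic --config max.message.bytes=2097152

# New consumer:
max.partition.fetch.bytes=2097152

# Producer:
max.request.size=2097152
```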
Please try and test this pure JavaScript Kafka 0.9 client for Node.js, with
full support for the new group membership API and some built-in assignment
strategies:
https://github.com/oleksiyk/kafka
Hi,
We've just updated to the 0.9 client & broker, and we're suddenly seeing a lot
of log spam in the consumers:
2016-02-05T03:31:34,182 | INFO | o.a.k.c.c.i.AbstractCoordinator
[kafkaspout-thread-0] | Response-events:7 | Marking the coordinator 2147483646
dead.
2016-02-05T03:31:34,182 | INFO
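As an aside on the odd id in that log line: the 0.9 consumer represents the
group coordinator as a synthetic node whose connection id is Integer.MAX_VALUE
minus the broker id, so its socket doesn't collide with the regular fetch
connection to the same broker. A sketch of that arithmetic (an implementation
detail of the client; the class name below is made up):

```java
public class CoordinatorIdDemo {
    public static void main(String[] args) {
        // The 0.9 consumer logs the coordinator under a synthetic id:
        // id = Integer.MAX_VALUE - brokerId.
        int loggedId = 2147483646;                 // from the log line above
        int brokerId = Integer.MAX_VALUE - loggedId;
        System.out.println("coordinator is broker " + brokerId); // broker 1
    }
}
```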
Great, thanks!
Is the clients list specific to the Java client or is it also for non-JVM
clients?
On Fri, Feb 5, 2016 at 9:44 AM Ewen Cheslack-Postava
wrote:
> Daniel,
>
> Awesome, Ruby folks could use more Kafka love! I added the library to the
> clients list here:
> https://cwiki.apache.org/c
Actually, this is incorrect - it looks like the consumer does not receive any
messages until the *first* 'coordinator dead' message is logged!
Is anyone able to offer insight into what's going on here?
Simon
-----Original Message-----
From: Simon Cooper [mailto:simon.coo...@featurespace.co.uk]
Hi Simon,
It may be worth trying the 0.9.0 branch as it includes a number of
important fixes to the new consumer.
Ismael
On Fri, Feb 5, 2016 at 12:33 PM, Simon Cooper <
simon.coo...@featurespace.co.uk> wrote:
> Actually, this is incorrect - it looks like the consumer does not receive
> any mess
Thanks, I'll have a look - is there a 0.9.0.1 release planned soon?
Simon
-----Original Message-----
From: isma...@gmail.com [mailto:isma...@gmail.com] On Behalf Of Ismael Juma
Sent: 05 February 2016 13:15
To: users@kafka.apache.org
Subject: Re: FW: 0.9 consumer log spam - Marking the coordinator
Hi all!
Right now I am working on a reactive streams connector for Kafka. I am using
the new client and found strange behavior in the commitAsync method, which
does not call its callbacks at all in some cases.
I found that callback invocation is part of handling incoming messages.
These messages are not fetched
This is how I set up my JUnit test to get kafka and zookeeper running
during the duration of the test:
static {
    embeddedZKServer = new TestingServer();
    embeddedKafkaServerPort = TestUtils.RandomPort();
    Properties brokerProperties = TestUtils.createBrokerConfig(1,
embed
Hello all!
When replicas are out of sync, is there a means to find out how fast
synchronization is happening? It would be really nice to know how many
messages (or bytes) have to be transferred from partition X on host Y to
partition X’ on host Y’ to get them in sync.
Is this possible?
Hello,
I am having trouble getting KafkaConsumer to read from the beginning, or
from any other explicit offset.
Running the command-line consumer for the same topic, I do see messages with
the `--from-beginning` option, and it hangs otherwise:
$ ./kafka-console-consumer.sh --zoo
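One thing worth checking (an assumption about the usual cause, not a
diagnosis): auto.offset.reset only applies when the group has no committed
offset, so a consumer reusing an existing group.id resumes from its last
commit instead of the beginning. A minimal new-consumer config sketch (the
group name is hypothetical):

```properties
bootstrap.servers=localhost:9092
# Only takes effect when no committed offset exists for the group:
auto.offset.reset=earliest
# A fresh group id ensures no prior commits are found:
group.id=debug-from-beginning
```

Alternatively, seekToBeginning() on the assigned partitions forces the
position regardless of committed offsets.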
Hello,
After almost a year of running Kafka on a single node, we are in the process
of migrating to a 3 node cluster. To test the migration, we followed this
process:
* Stop our current kafka instance, copy the entire data directory and
zookeeper data directories to one of the new
Hi karthik - I usually address kafka-python specific questions via github.
Can you file an issue at github.com/dpkp/kafka-python and I will follow up
there?
My initial reaction is you should leave group_id=None if you want to
duplicate behavior of the console consumer.
-Dana
Hello,
I am having t
Found the issue. Our cluster is on AWS, and the third node had not set the
advertised.host.name property, which is required on AWS. We set that and
replication completed successfully.
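For anyone hitting the same thing, the relevant server.properties entries
look roughly like this (the hostname below is a placeholder, not from the
original report):

```properties
# Public DNS name that clients and other brokers should use to reach
# this broker (EC2 internal hostnames are not reachable from outside):
advertised.host.name=ec2-203-0-113-10.compute-1.amazonaws.com
advertised.port=9092
```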
On 05/02/2016 10:17, "Rakesh Vidyadharan" wrote:
>Hello,
>
>After almost a year of running kafka on a single
Hi Ismael,
Is there a maven release planned soon? We've seen this problem too and it
is rather disconcerting.
Thanks,
Rajiv
On Fri, Feb 5, 2016 at 5:15 AM, Ismael Juma wrote:
> Hi Simon,
>
> It may be worth trying the 0.9.0 branch as it includes a number of
> important fixes to the new consume
I've updated Kafka-3159 with my findings.
Thanks,
Rajiv
On Thu, Feb 4, 2016 at 10:25 PM, Rajiv Kurian wrote:
> I think I found out when the problem happens. When a broker that is sent a
> fetch request has no messages for any of the partitions it is being asked
> messages for, it returns immedi
Thanks for getting to the bottom of this Rajiv.
Ismael
On Fri, Feb 5, 2016 at 5:50 PM, Rajiv Kurian wrote:
> I've updated Kafka-3159 with my findings.
>
> Thanks,
> Rajiv
>
> On Thu, Feb 4, 2016 at 10:25 PM, Rajiv Kurian wrote:
>
> > I think I found out when the problem happens. When a broker
Hey Rajiv,
Thanks for all the updates. I think I've been able to reproduce this. The
key seems to be waiting for the old log segment to be deleted. I'll
investigate a bit more and report what I find on the JIRA.
-Jason
On Fri, Feb 5, 2016 at 9:50 AM, Rajiv Kurian wrote:
> I've updated Kafka-31
Good Afternoon Gentlemen
I am new to Kafka and am attempting to build it, so I downloaded the latest
kafka-0.9.0.0-src.tgz distro from
http://kafka.apache.org/downloads.html
and then built it with:
Kafka>gradlew build
org.apache.kafka.common.network.SslTransportLayerTest >
testEndpointId
Hi, Everyone,
We have fixed a few critical bugs since 0.9.0.0 was released and are still
investigating a few more issues. The current list of issues tracked for
0.9.0.1 can be found below. Among them, only KAFKA-3159 seems to be
critical.
https://issues.apache.org/jira/issues/?jql=project%20%3D%2
Hi Rajiv,
Jun just sent a message about 0.9.0.1. It should be out soon if everything
goes well.
Ismael
On Fri, Feb 5, 2016 at 5:48 PM, Rajiv Kurian wrote:
> Hi Ismael,
>
> Is there a maven release planned soon? We've seen this problem too and it
> is rather disconcerting.
>
> Thanks,
> Rajiv
>
Thanks for the update Ismael.
On Fri, Feb 5, 2016 at 10:31 AM, Ismael Juma wrote:
> Hi Rajiv,
>
> Jun just sent a message about 0.9.0.1. It should be out soon if everything
> goes well.
>
> Ismael
>
> On Fri, Feb 5, 2016 at 5:48 PM, Rajiv Kurian wrote:
>
> > Hi Ismael,
> >
> > Is there a maven
Thanks Jason.
On Fri, Feb 5, 2016 at 10:13 AM, Jason Gustafson wrote:
> Hey Rajiv,
>
> Thanks for all the updates. I think I've been able to reproduce this. The
> key seems to be waiting for the old log segment to be deleted. I'll
> investigate a bit more and report what I find on the JIRA.
>
>
Hi Jun,
I am taking KAFKA-3177 off the list because the correct fix might involve
some refactoring of exception hierarchy in new consumer. That may take some
time and 0.9.0.1 probably does not need to block on it.
Please let me know if you think we should have it fixed in 0.9.0.1.
Thanks,
Jiang
Hi Becket,
On Fri, Feb 5, 2016 at 9:15 PM, Becket Qin wrote:
> I am taking KAFKA-3177 off the list because the correct fix might involve
> some refactoring of exception hierarchy in new consumer. That may take some
> time and 0.9.0.1 probably does not need to block on it.
>
Sounds good to me.
Hi Jun,
What about https://issues.apache.org/jira/browse/KAFKA-3100?
Thanks,
Allen
On Fri, Feb 5, 2016 at 1:19 PM, Ismael Juma wrote:
> Hi Becket,
>
> On Fri, Feb 5, 2016 at 9:15 PM, Becket Qin wrote:
>
> > I am taking KAFKA-3177 off the list because the correct fix might involve
> > some re
Hi Allen,
As the JIRA says, KAFKA-3100 has already been integrated into the 0.9.0
branch and will be part of 0.9.0.1.
Ismael
On Fri, Feb 5, 2016 at 10:49 PM, Allen Wang wrote:
> Hi Jun,
>
> What about https://issues.apache.org/jira/browse/KAFKA-3100?
>
> Thanks,
> Allen
>
>
> On Fri, Feb 5, 20
Hi,
Since it's still early in 0.9.0.0's life, if KAFKA-3006 has a chance of
making the cut (provided a resolution is reached on KIP-45), it would be
great to avoid leaving too much time for code relying on Arrays to become
commonplace.
On Sat, Feb 6, 2016 at 12:05 AM, Ismael Juma wrote:
>
Hey Alexey,
The API of the new consumer is designed around an event loop in which all
IO is driven by the poll() API. To make this work, you need to call poll()
in a loop (see the javadocs for examples). So in this example, when you
call commitAsync(), the request is basically just queued up to be sent the
next time poll() is called.
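To illustrate the point, here is a toy model in plain Java (not the actual
client code; the class and method names are made up): commitAsync() only
enqueues work, and the callback runs when poll() next drives IO:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of the new consumer's single-threaded IO: commitAsync()
// enqueues the request; its callback runs only when poll() processes
// responses.
class ModelConsumer {
    private final Queue<Runnable> pending = new ArrayDeque<>();

    void commitAsync(Runnable callback) {
        // No IO happens here; the request and its callback are queued.
        pending.add(callback);
    }

    void poll() {
        // poll() drives all IO, so queued commit callbacks fire here.
        while (!pending.isEmpty()) {
            pending.remove().run();
        }
    }
}

public class CommitCallbackDemo {
    public static void main(String[] args) {
        List<String> events = new ArrayList<>();
        ModelConsumer consumer = new ModelConsumer();
        consumer.commitAsync(() -> events.add("commit-1 acked"));
        // Nothing has run yet: callbacks are deferred until poll().
        System.out.println("before poll: " + events);
        consumer.poll();
        System.out.println("after poll: " + events);
    }
}
```

If the application stops calling poll() (for example, while backpressured),
queued callbacks simply never fire, which matches the behavior described
above.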
Looking for a replication scheme whereby a copy of my stream is replicated
into another DC such that the same events appear in the same order, with the
same offsets, in each DC.
This makes it easier for me to build replicated state machines, as I get
exactly the same data in each DC.
Is there any way to do this?
Hi Raja,
We seem to be encountering the same problem you were seeing, where our
producer thread becomes blocked for some reason.
We also see our producer queue is full, and for some reason the producer
isn't pulling from the queue and sending to our brokers.
We were wondering if you might be able to share how you fixed your problem.
Hi Raja,
We seem to be encountering the same problem you were seeing, where our
producer thread becomes blocked for some reason.
We were wondering if you might be able to share how you fixed your problem,
if you fixed it.
Thanks,
Debbie
Hi all,
I was hoping to receive some guidance around rebalancing expectations and ZK
timeout settings. We noticed that one of our consumer groups was rebalancing
constantly (roughly every minute). After doing some research, we increased
our ZooKeeper timeout settings from 10 seconds to 30 seconds.
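For reference, a sketch of the old (ZooKeeper-based) consumer settings
involved (values illustrative; tune to your environment):

```properties
# Session timeout raised from the default so transient pauses (GC,
# network blips) don't expire the session and trigger a rebalance:
zookeeper.session.timeout.ms=30000
zookeeper.connection.timeout.ms=30000
# Back off a little longer between rebalance attempts:
rebalance.backoff.ms=10000
```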
Hi, I've seen some discussion on this but nothing definitive.
If I have a 0.8.1.1 back end, can I safely use a 0.8.2 client?
I can't upgrade the back end yet, but I want to start using Scala 2.11 in my
client app; the lack of a 2.12 Kafka client dependency is holding me at
2.10. The earliest Kafka