Hi Jay,
Thanks for the proof-read. Will update.
- pyr
On 03/11/2015 04:08 AM, Jay Kreps wrote:
> This is really cool. One minor thing is that there is a typo in the title.
> I also think it would be good to give people a two sentence motivation of
> the problem you want to solve up front so th
Thank you guys for answering. I think it would be good if we could pass a
customised topicCount (I think this is the interface that whitelist and
blacklist implement, if I am not mistaken) to MM to achieve a similar thing
On Wednesday, March 11, 2015, Guozhang Wang wrote:
> Hi Tao,
>
> Unfortunately M
It looks like in your case broker 1 somehow missed a controller
LeaderAndIsrRequest for [ad_click_sts,4]. So its zkVersion would differ
from the value stored in ZooKeeper from that point on, and therefore
broker 1 failed to update the ISR. In this case you have to bounce the
broker to fix it.
Fro
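For reference, a minimal sketch of how one might inspect the zkVersion of that partition-state znode with the plain ZooKeeper Java client (the connection string and session timeout here are assumptions, not values from the thread):

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class PartitionStateCheck {
    public static void main(String[] args) throws Exception {
        // Assumed ZooKeeper connection string; replace with the cluster's quorum.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        Stat stat = new Stat();
        byte[] data = zk.getData("/brokers/topics/ad_click_sts/partitions/4/state", false, stat);
        // The znode version reported here is the zkVersion that a broker's
        // conditional update has to match for the ISR change to succeed.
        System.out.println("state: " + new String(data, "UTF-8"));
        System.out.println("zkVersion: " + stat.getVersion());
        zk.close();
    }
}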
@tao xiao and Jiangjie Qin, Thank you very much
I tried to run kafka-reassign-partitions.sh, but the issue still exists…
this is the log info:
[2015-03-11 11:00:40,086] ERROR Conditional update of path
/brokers/topics/ad_click_sts/partitions/4/state with data
{"controller_epoch":23,"leader":1,"ver
This is really cool. One minor thing is that there is a typo in the title.
I also think it would be good to give people a two sentence motivation of
the problem you want to solve up front so they can think about that as they
read through the article.
-Jay
On Tue, Mar 10, 2015 at 6:22 PM, Pierre-Y
Hey everyone,
As a follow-up to my previous questions, I would like to share this
article, which uses an overly simplified approach to showcase how to
build materialized views sourced from Kafka.
http://spootnik.org/entries/2015/03/10_simple-materialized-views-in-kakfa-and-clojure.html
I woul
Hi,
Sorry to bring up this old thread, but my question is about this exact thing:
Guozhang, you said:
> A more concrete example: say you have topic AC: 3 partitions, topic BC: 6
> partitions.
>
> With createMessageStreams("AC" => 3, "BC" => 2) a total of 5 threads will
> be created, and consumin
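To make the quoted example concrete, here is a minimal sketch using the old high-level consumer (the ZooKeeper address and group id are placeholders):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class TopicCountExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("AC", 3); // 3 streams for topic AC (3 partitions)
        topicCountMap.put("BC", 2); // 2 streams for topic BC (6 partitions spread over 2 streams)

        // 3 + 2 = 5 streams in total; each stream is typically drained by its own thread.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            connector.createMessageStreams(topicCountMap);
    }
}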
Source code:
/** metadata.fetch.timeout.ms */
public static final String METADATA_FETCH_TIMEOUT_CONFIG = "metadata.fetch.timeout.ms";
private static final String METADATA_FETCH_TIMEOUT_DOC = "The first time data is sent to a topic we must fetch metadata about that topic to know which s
Hi,
I am intermittently getting the following exception when producing
messages using the 0.8.2 new producer:
java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
after 6 ms.
Network connectivity is fine and the brokers are all u
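If the first metadata fetch is genuinely timing out, one option is to raise metadata.fetch.timeout.ms on the producer. A minimal sketch (the broker list and topic name are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerTimeoutExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Allow more time for the first metadata fetch for a topic (the default is 60000 ms).
        props.put("metadata.fetch.timeout.ms", "120000");

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("my-topic", "key", "value"));
        producer.close();
    }
}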
Hi Xiao,
For z/OS, do you mean z/VM or native z/OS? For z/VM, it probably will work
fine, but for z/OS, I would be surprised if Kafka can run directly on it.
I think Guozhang's approach for cross-colo replication is worth trying.
One thing to be aware of when deploying a Kafka cluster cro
Hello,
We are using Kafka 0.8.1.1 on the broker and 0.8.2 producer on the client.
After running for a few days, we have found that there are way too many
open file descriptors on the broker side. When we compared the connections
on the client side, we found that some connections were already gone on the
I would suggest using the new Java producer if possible. It is more
efficient and does round robin by default.
Jiangjie (Becket) Qin
On 3/10/15, 3:28 AM, "tao xiao" wrote:
>The default partitioner of the old producer API is a sticky partitioner that
>keeps sending messages to the same partitio
This looks like a leader broker somehow did not respond to a fetch request
from the follower. It may be because the broker was too busy. If that is
the case, Xiao's approach could help - reassign partitions or reelect
leaders to balance the traffic among brokers.
Jiangjie (Becket) Qin
On 3/9/15,
On 03/09/15 22:56, Xiao wrote:
> Hi, Jay,
>
> Thank you!
>
> The Kafka documentation says "Kafka should run well on any unix system". I assume
> that includes the two major commercial Unix variants, IBM AIX and HP-UX. Right?
FWIW I have had good success running Kafka under load on FreeBSD. I was
using OpenJD
Congratulations everyone!
From: Joe Stein
Sent: Tuesday, March 10, 2015 11:12 AM
To: Jun Rao
Cc: d...@kafka.apache.org; users@kafka.apache.org;
kafka-clie...@googlegroups.com
Subject: Re: [kafka-clients] Re: [VOTE] 0.8.2.1 Candidate 2
Thanks Jun for gett
Hi Tao,
Unfortunately MM does not support whitelist / blacklist at the same time,
and you have to choose either one upon initialization. As for your case, I
think it can be captured by a regex that excludes nothing but "10",
but I do not know the exact expression.
Guozhang
On Tue, Mar 10,
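One expression that seems to do this is a negative lookahead; a quick sketch to verify it (the topic names are just examples):

import java.util.regex.Pattern;

public class WhitelistRegexCheck {
    public static void main(String[] args) {
        // Matches topic.<anything> except exactly topic.10.
        Pattern whitelist = Pattern.compile("topic\\.(?!10$).*");
        System.out.println(whitelist.matcher("topic.1").matches());   // true
        System.out.println(whitelist.matcher("topic.10").matches());  // false
        System.out.println(whitelist.matcher("topic.100").matches()); // true
    }
}

Since the mirror maker whitelist is, as far as I know, interpreted as a Java regex, passing something like --whitelist 'topic\.(?!10$).*' should achieve this, though it is worth verifying against the actual topic list first.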
On 03/10/2015 05:48 PM, Mayuresh Gharat wrote:
> How do you typically handle workers starting: do you always start at offset 0
> to make sure the view is correctly recreated?
> ---> You will have to reset the offsets to 0 and set the offset reset policy
> to earliest in the consumer.
Yup, as expected.
>
> Ho
You will have to use the wildcard iterator, I suppose.
Thanks,
Mayuresh
On Tue, Mar 10, 2015 at 7:58 AM, tao xiao wrote:
> I actually meant whether we can achieve this in mirror maker.
>
> On Tue, Mar 10, 2015 at 10:52 PM, tao xiao wrote:
>
> > Hi,
> >
> > I have a use case where I need to consume a l
How do you typically handle workers starting: do you always start at offset 0
to make sure the view is correctly recreated?
---> You will have to reset the offsets to 0 and set the offset reset policy
to earliest in the consumer.
How do you handle topology changes in consumers, which lead to a
redistribution of
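For the old high-level consumer, the relevant setting is auto.offset.reset, which is spelled "smallest" there rather than "earliest". A minimal sketch of such a config, with placeholder connection details:

import java.util.Properties;
import kafka.consumer.ConsumerConfig;

public class ViewConsumerConfig {
    public static ConsumerConfig build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "view-builder");            // placeholder group id
        // With no committed offset (or after wiping offsets), start from the beginning of the log.
        props.put("auto.offset.reset", "smallest");
        // Commit manually once the view has been rebuilt, rather than on a timer.
        props.put("auto.commit.enable", "false");
        return new ConsumerConfig(props);
    }
}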
Hello Xiao,
The proposed transactional messaging in Kafka is aimed at improving the
delivery semantics from at-least-once to exactly-once, i.e. at avoiding
duplicates. It is not aimed at grouping the fsync of multiple messages into
one; for avoiding data loss it still depends on data replication.
Thanks Jiangjie! So what version is considered the "new api"? Is that the
javaapi in version 0.8.2?
On Mon, Mar 9, 2015 at 2:29 PM, Jiangjie Qin
wrote:
> The stickiness of partitions only applies to the old producer. In the new
> producer we have round robin for each message. The batching in the new prod
org.apache.kafka.clients.producer.Producer is the new API producer.
On Tue, Mar 10, 2015 at 11:22 PM, Corey Nolet wrote:
> Thanks Jiangjie! So what version is considered the "new api"? Is that the
> javaapi in version 0.8.2?
>
> On Mon, Mar 9, 2015 at 2:29 PM, Jiangjie Qin
> wrote:
>
> > The sti
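A short sketch of the behaviour being described for the 0.8.2 new producer (the broker address and topic name are placeholders): records sent without a key are spread across partitions, while keyed records always go to the partition derived from the key.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RoundRobinExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);

        // No key: the default partitioner distributes these over all partitions.
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<String, String>("test-topic", "value-" + i));
        }
        // With a key: every record with the same key lands on the same partition.
        producer.send(new ProducerRecord<String, String>("test-topic", "user-42", "value"));
        producer.close();
    }
}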
Thanks Jun for getting this release out the door and everyone that
contributed to the work in 0.8.2.1, awesome!
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.ly
- - - - - - - - - - - - - - - - -
On Mon, Mar 9, 2015 at 2:12 PM, Jun Rao wrote:
> The following are the results
I actually meant whether we can achieve this in mirror maker.
On Tue, Mar 10, 2015 at 10:52 PM, tao xiao wrote:
> Hi,
>
> > I have a use case where I need to consume a list of topics whose names
> > match the pattern topic.*, except for one, topic.10. Is there a way
> > that I can combine the use of w
Hi,
I have a use case where I need to consume a list of topics whose names
match the pattern topic.*, except for one, topic.10. Is there a way
that I can combine the use of whitelist and blacklist so that I can
accept all topics matching the regex topic.* but exclude topic.10?
Hi kafka,
I've started implementing simple materialized views with the log
compaction feature to test it out, and it works great. I'll share the
code and an accompanying article shortly but first wanted to discuss
some of the production implications my sandbox has.
I've separated the project in t
The default partitioner of the old producer API is a sticky partitioner that
keeps sending messages to the same partition for 10 secs (I don't remember
the exact duration) before switching to another partition if no key is
specified in the message. You can easily override this by setting
partitioner.cl
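For keyed messages, a custom partitioner can be plugged into the old producer via partitioner.class. A minimal sketch, where the partitioner class, broker list and topic are hypothetical placeholders:

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.Partitioner;
import kafka.producer.ProducerConfig;
import kafka.utils.VerifiableProperties;

public class PartitionerExample {
    // Hypothetical partitioner that spreads keyed messages by key hash.
    public static class SimpleHashPartitioner implements Partitioner {
        public SimpleHashPartitioner(VerifiableProperties props) { } // constructor required by the old producer
        public int partition(Object key, int numPartitions) {
            return Math.abs(key.hashCode() % numPartitions);
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // placeholder
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("partitioner.class", PartitionerExample.SimpleHashPartitioner.class.getName());

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("test-topic", "some-key", "some-value"));
        producer.close();
    }
}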
I ended up running kafka-reassign-partitions.sh to reassign partitions to
different nodes
On Tue, Mar 10, 2015 at 11:31 AM, sy.pan wrote:
> Hi, tao xiao and Jiangjie Qin
>
> I encountered the same issue; my node had recovered from a high-load
> problem (caused by another application)
>
> this is
Hi,
" Unless the producer is always putting the messages into one partition, I
would expect both consumer groups to read from the topic."
This was the issue. I thought the producer would round robin messages to
different partitions but it must have just been writing to a single partition.
I r
I suggest you use Apache Flume; we have a custom JDBC sink in our Flume
distribution:
https://github.com/Stratio/flume-ingestion/tree/develop/stratio-sinks/stratio-jdbc-sink
Regards.
2015-03-10 8:23 GMT+01:00 Joe Stein :
> If you wanted to use Go you could code a simple worker strategy
>
Hi,
Code snippet below. This creates two consumers with the same group id
"consumer-group"; they consume from "common-topic", which has 6 partitions. Each
group has 3 consumers.
However, only one of the groups will ever run. Unless the producer is always
putting the messages into one partition, I wo
If you wanted to use Go you could code a simple worker strategy
https://github.com/stealthly/go_kafka_client/blob/master/consumers/consumers.go#L162-L170
with https://godoc.org/golang.org/x/tools/oracle to do it and have an
at-least-once insert/update guarantee.
Not sure if you have language require
Good evening. Can someone suggest an existing framework that allows one to
reliably load data from Kafka into a relational database like Oracle in real
time?
Thanks so much in advance,
Vadim