Thank you for your response. I have gone through the protocol wiki and now
have some understanding of it.
Sorry for asking the question again.
I want to know whether the following is possible:
Let's say I have producers PA, PB, and PC. They send request messages A, B, and C
respectively. Now these messages go to a topic. There a
Wrong use case. Kafka is a queue (in the normal case with a TTL (time to
live) on messages). There is no correlation between producers and consumers,
there is no concept of a consumed message, and there is no "request" and no
"response".
You can produce messages (in another topic) as a result of your processing
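To make the "reply in another topic" idea concrete, here is a minimal sketch of the client-side convention it implies. Kafka itself knows nothing about requests or responses; the correlation id, the pending-request map, and the topic names are all illustrative assumptions, not Kafka APIs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch only: simulates the correlation bookkeeping a client would do
// around two Kafka topics (a request topic and a reply topic).
public class RequestReplySketch {
    public static void main(String[] args) {
        // Producer PA: tag each request with a correlation id before
        // producing it to the request topic.
        String correlationId = UUID.randomUUID().toString();
        Map<String, String> pendingRequests = new HashMap<>();
        pendingRequests.put(correlationId, "request-A");

        // Processor: consumes the request, produces a reply to a reply
        // topic, echoing the correlation id it received.
        String replyCorrelationId = correlationId;
        String replyPayload = "result-of-A";

        // Producer PA, consuming the reply topic: match reply to request.
        String original = pendingRequests.remove(replyCorrelationId);
        System.out.println(original + " -> " + replyPayload);
    }
}
```

The broker never enforces any of this; if the processor dies, the entry simply stays in the pending map, which is why the earlier reply calls this the wrong use case for Kafka.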
Thank you so much Svante for your response.
The application we are designing depends heavily on a request/response
mechanism. In that case, should we move to some other message broker? If
so, can you please tell me which one is best for this use case and can
handle a large number of requests?
Why do you want a message broker for RPC?
What is "large amounts of requests"?
/svante
2014-09-22 12:38 GMT+02:00 lavish goel :
> Thank you so much Svante for your response.
>
> The application we are designing depends heavily on a request/response
> mechanism. In that case should we move
Thank you Svante for your response.
*Below are some requirements that make us consider a message broker:*
1. We have to handle 30,000 TPS.
2. We need to prioritize requests.
3. Request data should not be lost.
Thanks
Regards
Lavish Goel
On Mon, Sep 22, 2014 at 4:20 PM, svante ka
1) HA proxy -> node.js (REST API). Use Aerospike as a session store if you
need one.
2) Difficult, since the prioritization must be passed to lower levels and
that's usually "hard". (Get rid of this constraint and go for an SLA, like
99.9% within 50 ms or something like that.)
2b) Group your customer
Hello,
I am currently working on a Kafka implementation and have a couple of
questions concerning the road map for the future.
As I am unsure where to put such questions, I decided to try my luck on
this mailing list. If this is the wrong place for such inquiries, I
apologize. In this case it wou
My current interpretation is that if I start a partition reassignment, for
the sake of simplicity let's assume it's just for a single partition, the
new leader will first become a follower of the current leader, and when it
has caught up it'll transfer leadership over to itself?
Partition reassign
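For reference, a reassignment like the one being asked about is submitted to the kafka-reassign-partitions.sh tool as a JSON file. A minimal single-partition sketch (the topic name and broker ids here are made up):

```json
{
  "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [1, 2] }
  ]
}
```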
Folks,
Recently in one of our SimpleConsumer based client applications (0.8.1.1),
we spotted a very busy CPU with almost no traffic in/out from the client and
the Kafka broker (1 broker + 1 zookeeper) (the stack trace is attached at the end).
The busy thread was invoked in a while loop anchored at the read
Agree with Svante: for your use case Kafka can be used to retain your
request messages, but it is not well suited for RPC.
On Mon, Sep 22, 2014 at 6:06 AM, svante karlsson wrote:
> 1 ) HA proxy -> node.js (rest api). Use aerospike as session store if you
> need one.
> 2) Difficult since the
Hello,
1) The new consumer clients will be developed under a new directory. The
old consumer, including the SimpleConsumer will not be changed, though it
will be retired in the 0.9 release.
2) I am not very familiar with HTTP wrappers on the clients; could someone
who has done so comment here?
3
Hello,
I am trying to use KafkaLog4jAppender to write to Kafka and also a file
appender to write to a file. The conversion pattern works for the file, but
for the Kafka appender it is not working.
The output of the file appender is *2014-09-19T22:30:14.781Z INFO
com.test.poc.StartProgram Message1
I don't think the log4j appender supports patterns :(
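For anyone following along, the dual-appender setup under discussion would look roughly like this in log4j.properties (the broker list, topic, and pattern values are illustrative; the ConversionPattern on the Kafka appender is the part that reportedly has no effect):

```properties
log4j.rootLogger=INFO, file, kafka

# File appender: the ConversionPattern works here.
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.File=/tmp/app.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c %m%n

# Kafka appender: the same layout is configured, but per the reply
# above the pattern is apparently ignored by this appender.
log4j.appender.kafka=kafka.producer.KafkaLog4jAppender
log4j.appender.kafka.BrokerList=localhost:9092
log4j.appender.kafka.Topic=app-logs
log4j.appender.kafka.layout=org.apache.log4j.PatternLayout
log4j.appender.kafka.layout.ConversionPattern=%d{ISO8601} %p %c %m%n
```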
On Sep 22, 2014 1:14 PM, Lakshmanan Muthuraman wrote:
Hello ,
I am trying to use KafkaLog4jAppender to write to kafka and also a file
appender to write to a file. The conversion pattern works for the file, but
for the kafka appender it is not
I'm sorry about the formatting issues below :-( I need to stop using hotmail,
as hotmail is mangling the message formatting :-( I'll try re-posting from my
gmail address.
Jagbir
> From: jsho...@hotmail.com
> To: users@kafka.apache.org
> Subject: Busy CPU while negotiating contentBuffer size at
>
This should be fixed in 0.8.2
(https://issues.apache.org/jira/browse/KAFKA-847). You can apply the v2 patch
https://issues.apache.org/jira/secure/attachment/12664646/KAFKA-847-v2.patch
backwards if you want to.
/***
Joe Stein
Founder, Principal Consultant
B
Note: Re-posting the older message from another account due to
formatting issues.
Folks,
Recently in one of our SimpleConsumer based client applications (0.8.1.1),
we spotted a very busy CPU with almost no traffic in/out from the client
and the Kafka broker (1 broker + 1 zookeeper) (the stack trace is
Excellent news!
From: Joe Stein
Sent: Monday, September 22, 2014 1:58 PM
To: users@kafka.apache.org
Subject: Re: Log 4j Conversion Pattern not working
kafka.producer.KafkaLog4jAppender in Kafka.0.8.1.1
This should be fixed in 0.8.2
https://issues.apache.o
> Partition reassignment will not move the leader unless the old leader is
> not part of the new set of replicas.
> Even when it does move the leader, it waits until the new replicas enter
> the ISR.
Okay, so just to clarify: if I have a partition where the leader is broker
0, the ISR is [0, 1]
I just wanted to send this out as an FYI, but it does not affect any
released versions.
This only affects those who release off trunk and use Kafka-based
consumer offset management. KAFKA-1469 fixes an issue in our
Utils.abs code. Since we use this method in determining the offset
manager for a co
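For context, the pitfall KAFKA-1469 addresses is the classic one with Math.abs: Math.abs(Integer.MIN_VALUE) is still negative, so abs(hash) % numPartitions can yield a negative partition index. A self-contained sketch (the method names are mine, not Kafka's):

```java
public class AbsPitfall {
    // Buggy variant: Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE,
    // so the modulo result can be negative.
    static int naivePartition(int hash, int numPartitions) {
        return Math.abs(hash) % numPartitions;
    }

    // Safe variant: clear the sign bit before taking the modulo
    // (the general shape of the fix, as I understand KAFKA-1469).
    static int safePartition(int hash, int numPartitions) {
        return (hash & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(naivePartition(Integer.MIN_VALUE, 50)); // -48: invalid partition
        System.out.println(safePartition(Integer.MIN_VALUE, 50));  // 0: always in range
    }
}
```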
I have a test data set of 1500 messages (~2.5 MB each) that I'm using to
test Kafka throughput. I'm pushing this data using 40 Kafka producers, and
I'm losing about 10% of the messages on each trial.
I'm seeing errors of the following form:
Failed to send producer request with correlation id 80 to
Hey folks, we are going to be kicking off our first ever Apache Kafka NYC
meetup during Hadoop World (10/15/2014):
http://www.meetup.com/Apache-Kafka-NYC/events/206001372/ starting @ 6pm.
We already have 3 (maybe 4) speakers ready to go (I will be updating the
site over the next day or so). I was
The new consumer api will also allow you to do what you want in a
SimpleConsumer (e.g., subscribe to a static set of partitions, control
initial offsets, etc), only more conveniently.
Thanks,
Jun
On Mon, Sep 22, 2014 at 8:10 AM, Valentin wrote:
>
> Hello,
>
> I am currently working on a Kafka
We allocate a new BoundedByteBufferReceive for every fetch request. Are you
using SimpleConsumer directly? It seems it's started by the high level
consumer through the FetchFetcher thread.
Thanks,
Jun
On Mon, Sep 22, 2014 at 11:41 AM, Jagbir Hooda wrote:
> Note: Re-posting the older message f
What version of Kafka are you using? Have you increased the max message
size on the broker (default to 1MB)?
Thanks,
Jun
On Mon, Sep 22, 2014 at 3:41 PM, Kyle Banker wrote:
> I have a test data set of 1500 messages (~2.5 MB each) that I'm using to
> test Kafka throughput. I'm pushing this data
Also, don't forget to increase replica.fetch.max.bytes to be larger than
the max message size.
Thanks,
Jun
On Mon, Sep 22, 2014 at 9:35 PM, Jun Rao wrote:
> What version of Kafka are you using? Have you increased the max message
> size on the broker (default to 1MB)?
>
> Thanks,
>
> Jun
>
> On
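Putting the last two replies together, the size-related settings involved look something like this for ~2.5 MB messages (0.8.x property names; the 5 MB value is an illustrative headroom choice, not a recommendation):

```properties
# Broker (server.properties): largest message the broker will accept.
message.max.bytes=5242880
# Must be at least message.max.bytes, or replication of large messages stalls.
replica.fetch.max.bytes=5242880
# Consumer: the fetch size must also cover the largest message.
fetch.message.max.bytes=5242880
```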