If you can't see the image, I uploaded it to Dropbox:
https://www.dropbox.com/s/gckn4gt7gv26l9w/graph.png
From: Guy Doulberg [mailto:guy.doulb...@perion.com]
Sent: Monday, August 11, 2014 4:58 PM
To: users@kafka.apache.org
Subject: RE: Consume more than produce
Hey
I had an issue. If that is so, why did the consumer continue consuming? Might that
help me understand the problem I have in general: even when times are regular,
the consumer consumes more than the producer produces?
Thanks.
-Original Message-
From: Guy Doulberg [mailto:guy.doulb...@perion.com]
Sent: Monday, A
> On 4/08/2014, at 5:35 pm, Guy Doulberg wrote:
>
> Hi
>
> What do you mean by producer ACK value?
>
> In my code I don't have a retry mechanism; does the Kafka producer API have a
> retry mechanism?
>
>
> -Original Message-
> From: Guozhang Wang [
Subject: Re: Consume more than produce
What is the ack value used in the producer?
On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg wrote:
> Hey,
>
>
> After a year or so of running Kafka as the streaming layer in my
> production, I decided it is time to audit it and to test how many events
> I lose, if I lose events at all.
Hey,
After a year or so of running Kafka as the streaming layer in my production, I
decided it is time to audit it and to test how many events I lose, if I lose
events at all.
I discovered something interesting which I can't explain:
the producer produces fewer events than the consumer group consumes.
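One common cause of counting more events on the consumer side than the producer
reports is producer retries: if a send request times out after the broker has
already written the message, the retry writes it again and the consumer sees a
duplicate. A rough sketch of the 0.8 producer settings involved (broker list and
topic are placeholders, not taken from this thread):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092");        // placeholder broker list
props.put("serializer.class", "kafka.serializer.StringEncoder");
props.put("request.required.acks", "1");                  // the "ack value" asked about above
props.put("message.send.max.retries", "3");               // a retry can re-send an already-written message
Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
producer.send(new KeyedMessage<String, String>("audited-topic", "event payload"));  // placeholder topic
producer.close();

With acks enabled and retries greater than zero, a timed-out but actually
successful send is retried and delivered twice, which shows up exactly as a
consumer count that is higher than the producer count.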
Hi,
Is there a way to commit consumed events only if my code ran
successfully?
In other words,
In this code
while (it.hasNext()) { /** Iterator on the Kafka high-level consumer */
    try {
        MessageAndMetadata<byte[], byte[]> current = it.next();
        dealWithEvent(current);       // application-specific processing
        consumer.commitOffsets();     // 'consumer' is the ConsumerConnector, assumed in scope
    } catch (Exception e) { /* offset not committed, so the event can be re-consumed */ }
}
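For that pattern to work, auto-commit has to be disabled so the iterator does not
commit offsets in the background; commitOffsets() on the connector then commits
everything consumed so far. A minimal sketch of the wiring, with the ZooKeeper
address and group id as placeholders:

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

Properties props = new Properties();
props.put("zookeeper.connect", "zk1:2181");        // placeholder ZooKeeper address
props.put("group.id", "audit-consumer-group");     // placeholder group id
props.put("auto.commit.enable", "false");          // commit manually after processing
ConsumerConnector consumer = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

Note that commitOffsets() commits the current position of every partition the
connector owns, so with several consumer threads a failure in one thread does not
stop a commit triggered from another.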
We want to have active-active Hadoop clusters
(on two data centers), and we might use Kafka as the sync stream from
which both centers will consume events.
Has anyone done something like that?
Thanks,
Guy Doulberg
Data Infra team leader @ Conduit
Hi Jun
Great presentation, great feature
On 04/09/2013 07:48 AM, Jun Rao wrote:
Piotr,
Thanks for sharing this. Very interesting and useful study. A few comments:
1. For existing 0.7 users, we have a migration tool that mirrors data from
an 0.7 cluster to an 0.8 cluster. Applications can up
Hi guys,
Do you know, by any chance, the status of the IronCount project
(git://github.com/edwardcapriolo/IronCount.git)?
Will it be updated to work with Kafka 0.8?
Should I start using it, or should I look for another project? I see it has
not been updated for 10 months.
Thanks
Guy Doulberg
Hi Ahmed,
I can share my experience with you; I have built a system similar to yours.
1. If all your messages are the same, I think you should use the default
partitioner, so the messages will spread evenly across all the
broker/partition combinations, unless you have a better function to
s
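For reference, this is roughly what "use the default partitioner" looks like with
the 0.8 producer: send the messages without a key and the producer picks the
partition itself (broker list and topic below are placeholders):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("metadata.broker.list", "broker1:9092,broker2:9092");  // placeholder brokers
props.put("serializer.class", "kafka.serializer.StringEncoder");
Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
// No key given, so the producer's default partitioning is used
producer.send(new KeyedMessage<String, String>("events", "message payload"));   // placeholder topic

One caveat: the 0.8 producer sticks keyless messages to one partition per
topic.metadata.refresh.interval.ms window, so the spread is even over time rather
than per message.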
Could you describe your use case a bit more?
On Sunday, January 6, 2013, Guy Doulberg wrote:
Hi
I am looking for an ETL tool that can connect to kafka, as a consumer and
as a producer,
Have you heard of such a tool?
Thanks
Guy
Hi
I am looking for an ETL tool that can connect to kafka, as a consumer
and as a producer,
Have you heard of such a tool?
Thanks
Guy
Hi Tobias
I am not using nginx but a Tomcat server, and I solved the same
problem on that Tomcat server.
I created a Logback appender that writes to Kafka.
My Tomcat server uses Logback to log each of the requests;
I configured Logback to write to a file and to write to a Kafka
cluster.
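A minimal sketch of the shape such an appender can take (class name, topic, and
broker list are placeholders, not the exact code from this thread): a Logback
AppenderBase that forwards each formatted log line to Kafka with the 0.8 producer.

import java.util.Properties;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaAppender extends AppenderBase<ILoggingEvent> {   // hypothetical class name
    private Producer<String, String> producer;
    private String topic = "tomcat-access-log";                    // placeholder topic

    @Override
    public void start() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092");         // placeholder broker list
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        producer = new Producer<String, String>(new ProducerConfig(props));
        super.start();
    }

    @Override
    protected void append(ILoggingEvent event) {
        // One Kafka message per log line; no key, so the default partitioning is used
        producer.send(new KeyedMessage<String, String>(topic, event.getFormattedMessage()));
    }

    @Override
    public void stop() {
        if (producer != null) producer.close();
        super.stop();
    }
}

The appender is then referenced from logback.xml alongside the existing file
appender, so every request line goes both to the local file and to Kafka.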
Hi,
I am running kafka service using runit,
in the start script I have this code:
#!/bin/sh
exec 2>&1
exec /opt/kafka/bin/kafka-server-start.sh /etc/kafka/server.properties
===
/opt/kafka/bin/kafka-server-start.sh is the script distributed with the
Kafka release.
Hop