Hi Team,
I am trying the Kafka client on a Windows 7 64-bit corporate PC that sits
behind a proxy; Kafka is hosted on Ubuntu 12.04. This is my code:
Properties props = new Properties();
props.put("metadata.broker.list", "10.10.10.10:9092"); // example IP
props.put("serializer.class", "kafka.serializer.StringEncoder");
Hi Guozhang,
If I use the high-level consumer, how do I ensure all data goes to the master
even if the slave is up and running? Is it just by forcing the master to have
enough consumer threads to cover the maximum number of partitions of a topic,
since the high-level consumer doesn't have an assumption of consumers who are
Hello Weide,
That should be doable via the high-level consumer; you can take a look at
this page (a minimal sketch also follows below):
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
Guozhang
On Fri, Aug 1, 2014 at 3:20 PM, Weide Zhang wrote:
> Hi,
>
> I have a use case for a master-slave cluster where the logic inside the
> master needs to consume data from Kafka and publish some aggregated data
> back to Kafka. When the master dies, the slave needs to take the latest
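A minimal sketch in the spirit of that wiki page (the ZK address, group id,
and topic name are placeholders):

import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class GroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "my-group");

        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // Ask for one stream; use more streams/threads to cover more partitions.
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            consumer.createMessageStreams(Collections.singletonMap("my-topic", 1));

        for (MessageAndMetadata<byte[], byte[]> mm : streams.get("my-topic").get(0)) {
            System.out.println(new String(mm.message()));
        }
    }
}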
Under normal operation (i.e., without broker failures) leadership
should not move. Leader changes occur when brokers fail - due to GC
pauses, controlled shutdown, etc.
Hi,
I have a use case for a master-slave cluster where the logic inside the
master needs to consume data from Kafka and publish some aggregated data
back to Kafka. When the master dies, the slave needs to take the latest
committed offset from the master and continue consuming the data from
Kafka and doing the publishing.
Hi,
We have a Kafka 0.8 cluster in a test environment (in this case, on AWS EC2
nodes). Even though we've tried to run very little load on this test
cluster, it seems like the instances can't even keep up with that.
Leadership moves automatically for at least a few of the topics, which
never happens when we run them on our prod, non-AWS hardware.
I too could benefit from an updated roadmap.
We're in a similar situation where some components in our stream processing
stack could use an overhaul, but I'm waiting for the offset API to be fully
realized before doing any meaningful planning.
On Fri, Aug 1, 2014 at 11:52 AM, Jonathan Weeks wrote:
Howdy,
I was wondering if it would be possible to update the release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
so that it aligns with the feature roadmap:
https://cwiki.apache.org/confluence/display/KAFKA/Index
We have several active projects and are planning to u
Sure, can you give me the blurb you want?
-Jay
On Fri, Aug 1, 2014 at 6:58 AM, Vitaliy Verbenko
wrote:
> Dear Kafka team,
>
> Would you mind adding us @
> https://cwiki.apache.org/confluence/display/KAFKA/Powered+By ?
> We're using it as part of our ticket sequencing system for our helpdesk
> software.
Thanks guys! I found my problem :)
I'm using the DefaultPartitioner and I was using the topic name as key...
I thought it was doing a round-robin like in Kafka 0.7 and not a hash of the
key...
Thank you for your help, it's really appreciated! :)
François Langelier
Software Engineering Student - Éc
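For illustration, the fix is to vary the key per message (or use a key other
than the topic name) so that DefaultPartitioner's hash spreads data across
partitions; the producer setup is assumed to match the sketch earlier in this
digest:

// Hypothetical: per-message keys hash to different partitions, whereas a
// constant key (like the topic name) always hashes to the same partition.
for (int i = 0; i < 100; i++) {
    String key = Integer.toString(i); // e.g., a message id
    producer.send(new KeyedMessage<String, String>("my-topic", key, "payload-" + i));
}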
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyisdatanotevenlydistributedamongpartitionswhenapartitioningkeyisnotspecified
?
Thanks,
Jun
On Fri, Aug 1, 2014 at 7:33 AM, François Langelier
wrote:
> Hi all!
>
> I think I already saw this question on the mailing list, but I'm not able
> to find it again...
Do you have producer retries (due to broker failure) in those minutes when
you see a diff?
Thanks,
Jun
On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg
wrote:
> Hey,
>
>
> After a year or so of having Kafka as my streaming layer in production, I
> decided it is time to audit it, and to test how many events I lose, if I
One seed broker should be enough, and the number of partitionMetadata
entries should be the same as the number of partitions. One note here is
that the metadata is propagated asynchronously to the brokers, and hence
the metadata returned by any broker may occasionally be stale, so you need
to periodically refresh it.
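A sketch of that lookup with the 0.8 SimpleConsumer (the seed broker address
and topic name are placeholders):

import java.util.Collections;

import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class PartitionCountSketch {
    public static void main(String[] args) {
        // One seed broker; any live broker can answer a metadata request.
        SimpleConsumer consumer =
            new SimpleConsumer("10.10.10.10", 9092, 100000, 64 * 1024, "partition-count");
        try {
            TopicMetadataResponse resp = consumer.send(
                new TopicMetadataRequest(Collections.singletonList("my-topic")));
            for (TopicMetadata tm : resp.topicsMetadata()) {
                // partitionsMetadata().size() is the topic's partition count
                System.out.println(tm.topic() + ": " + tm.partitionsMetadata().size());
            }
        } finally {
            consumer.close();
        }
    }
}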
Hi,
What's the way to find a topic's partition count dynamically using the
SimpleConsumer API?
If I use one seed broker within a cluster of 10 brokers, and add a list of
topic names to the simple consumer request to find the topics' metadata,
when it returns, is the size of partitionsMetadata per topicMetadata the
same as the number of partitions?
Hi Anand,
You can use the high-level consumer and turn off auto.commit.enable, and do
something like:
ConsumerIterator<byte[], byte[]> it = stream.iterator();
MessageAndMetadata<byte[], byte[]> message = it.next();
boolean acked = false;
while (!acked) {
    process(message);    // placeholder for the application logic
    acked = writeToDB(); // retry until the DB write succeeds
}
consumer.commitOffsets(); // commit only after the write is durable
Guozhang
On Fri, Aug 1, 2014 at 3:30 AM, anand jain wrote:
> I am very much new to Kafka and we are using Kafka 0.8.1.
Are you using the random partitioner or a custom partitioner in your
producer?
Is your producer picking up all the available partitions?
What producer client are you using?
On 8/1/14, 7:33 AM, "François Langelier" wrote:
>Hi all!
>
>I think I already saw this question on the mailing list, but I'm not able
>to find it again...
What is the ack value used in the producer?
On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg
wrote:
> Hey,
>
>
> After a year or so of having Kafka as my streaming layer in production, I
> decided it is time to audit it, and to test how many events I lose, if I
> lose events at all.
>
>
> I discovered something interesting which I can't explain.
Thanks Guozhang,
I was looking for actual real-world workflows. I realize you can commit
after each message, but if you're using ZK for offsets, for instance, you'll
put too much write load on the nodes and crush your throughput. So I was
interested in batching strategies people have used that balance the two.
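One common balance point, sketched under stated assumptions (BATCH_SIZE,
stream, consumer, and process are placeholders; consumer is a high-level
ConsumerConnector with auto-commit disabled): commit once every N messages,
trading up to N-1 reprocessed messages after a restart for 1/N of the offset
write load.

// Hypothetical batched commit: one offset write per BATCH_SIZE messages.
final int BATCH_SIZE = 1000;
int sinceCommit = 0;
ConsumerIterator<byte[], byte[]> it = stream.iterator();
while (it.hasNext()) {
    process(it.next()); // placeholder for the application logic
    if (++sinceCommit >= BATCH_SIZE) {
        consumer.commitOffsets(); // commits all owned partitions at once
        sinceCommit = 0;
    }
}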
You have to remember statsd uses UDP, which is possibly lossy; that might
account for the errors.
-Steve
On Fri, Aug 1, 2014 at 1:28 AM, Guy Doulberg
wrote:
> Hey,
>
>
> After a year or so of having Kafka as my streaming layer in production, I
> decided it is time to audit it, and to test how many events I lose
Dear Kafka team,
Would you mind adding us @
https://cwiki.apache.org/confluence/display/KAFKA/Powered+By ?
We're using it as part of our ticket sequencing system for our helpdesk
software.
--
Vitaliy Verbenko - Business Development at Helprace
vitaliy.verbe...@helprace.com
Customer Service Software
Hi all!
I think I already saw this question on the mailing list, but I'm not able
to find it again...
I'm using Kafka 0.8.1.1; I have 3 brokers, a default replication factor of
2, and a default partition count of 2.
My partitions are distributed evenly across the brokers.
My problem is
Kafka is a log, not a queue. The client remembers a position in the log
rather than working with individual messages.
On Fri, Aug 1, 2014 at 4:02 PM, anand jain wrote:
> I want to delete the message from a Kafka broker after consuming it (Java
> consumer). How can I do that?
>
Hi,
Kafka supports two types of log/message retention policies.
Log retention (size/time): messages are discarded after
log.retention.minutes or when the log size reaches log.retention.bytes.
Log compaction: ensures that Kafka will always retain at least the
last known value for each message key.
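For example, the broker-side defaults might look like this in
server.properties (values are illustrative, assuming a 0.8.1 broker):

# time-based retention: discard log segments older than 7 days
log.retention.hours=168
# size-based retention: cap each partition's log at about 1 GiB
log.retention.bytes=1073741824
# "delete" is the default policy; "compact" enables log compaction
log.cleanup.policy=delete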
I want to delete the message from a Kafka broker after consuming it (Java
consumer). How can I do that?
I am very much new to Kafka and we are using Kafka 0.8.1.
What I need to do is consume a message from a topic. For that, I will have
to write a consumer in Java which will consume a message from the topic and
then save that message to a database. After a message is saved, some
acknowledgement will be
Hey,
After a year or so of having Kafka as my streaming layer in production, I
decided it is time to audit it, and to test how many events I lose, if I
lose events at all.
I discovered something interesting which I can't explain.
The producer produces fewer events than the consumer group consumes.