If you are using the old producer for mirror maker, you can specify a custom
partitioner for the mirror maker producer that has exactly the same
message-partitioning logic as your custom producer. If you are using the new
Java producer, there is currently no way to do it. We are working on adding a
message
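On the old-producer path, the shared rule can live in one helper used by both the custom producer and the mirror maker partitioner. A minimal sketch, assuming a plain non-negative key-hash scheme; KeyPartitioner and partitionForKey are hypothetical names, not Kafka APIs:

```java
import java.util.Arrays;

// Hypothetical helper holding the one partitioning rule shared by the
// custom producer and the mirror maker's custom partitioner.
public class KeyPartitioner {
    // Deterministic key -> partition mapping; assumes a simple hash scheme.
    public static int partitionForKey(byte[] key, int numPartitions) {
        int hash = Arrays.hashCode(key);
        return (hash & Integer.MAX_VALUE) % numPartitions; // always in [0, numPartitions)
    }

    public static void main(String[] args) {
        byte[] key = "order-42".getBytes();
        // The same key always maps to the same partition.
        System.out.println(partitionForKey(key, 8) == partitionForKey(key, 8));
    }
}
```

Because both sides call the same function, a message keyed the same way lands on the same partition number in the source and target clusters (given equal partition counts).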
Source verified, tests pass, quick start OK.
Binaries verified; tests on the Scala client
https://github.com/stealthly/scala-kafka/pull/27 and the Go client
https://github.com/stealthly/go_kafka_client/pull/55 are passing.
If the release passes, we should update the release notes to include the
change from KAFKA-17
We pushed first bits for the kafka framework we are working on to
https://github.com/mesos/kafka.
We have been using Aurora and Marathon to run Kafka on Mesos, but are
cutting over to a framework approach, as described in the ticket, over the
next 5 weeks.
~ Joe Stein
- - - - - - - - - - - - - - - -
Howdy Kafka Team,
We are trying to aggregate every topic from several geo-separated clusters
into one central Kafka cluster. We are guaranteed that the number of
partitions for a given topic will be the same on the source and target
clusters. Due to our particular use case, we need to make
Hey Daniel,
partitionsFor() will block the very first time it sees a new topic that it
doesn't have metadata for yet. If you want to ensure you don't block even
that one time, call it prior to your regular usage so it initializes then.
The rationale for adding a partition in ProducerRecord was th
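One way to picture the warm-up: a tiny stand-in for the producer's internal metadata cache (TopicMetadata below is hypothetical, not the Kafka API). Only the first lookup per topic pays the blocking fetch, which is why one partitionsFor() call at startup keeps the regular send path from blocking later:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical model of the producer's metadata cache: the first lookup
// for a topic runs the blocking fetch; later lookups are cache hits.
public class TopicMetadata {
    private final Map<String, List<Integer>> cache = new ConcurrentHashMap<>();
    private final Function<String, List<Integer>> blockingFetch;

    public TopicMetadata(Function<String, List<Integer>> blockingFetch) {
        this.blockingFetch = blockingFetch;
    }

    // Analogous to partitionsFor(topic): blocks only on first use per topic.
    public List<Integer> partitionsFor(String topic) {
        return cache.computeIfAbsent(topic, blockingFetch);
    }

    public static void main(String[] args) {
        TopicMetadata md = new TopicMetadata(t -> List.of(0, 1, 2));
        md.partitionsFor("my-topic"); // warm-up at startup: pays the fetch once
        md.partitionsFor("my-topic"); // hot path: cache hit, no blocking
    }
}
```

Note the cache lives inside the producer-side object, consistent with the advice elsewhere in this thread that callers should not cache partitionsFor results themselves.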
Cdhar B writes:
>
> Hi,
>
> Could you please provide some info on the Kafka 0.8.2 new producer
> example. I tried but was not able to find a complete example. Does anybody
> have a sample?
>
> Appreciate your quick help.
>
> Thanks,
> Cdhar
>
Hello Cdhar,
if you are looking for a Java example, you
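For anyone searching the archives later, a minimal sketch of the new (0.8.2) producer's configuration. The broker address and topic are placeholders, and the commented send/close lines assume the kafka-clients jar on the classpath:

```java
import java.util.Properties;

// Sketch of new-producer (org.apache.kafka.clients.producer) configuration.
public class NewProducerExample {
    public static Properties baseConfig(String brokers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers); // placeholder broker list
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = baseConfig("localhost:9092");
        System.out.println(props.getProperty("bootstrap.servers"));
        // With kafka-clients on the classpath, usage is roughly:
        // KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        // producer.close();
    }
}
```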
Gwen Shapira writes:
>
> Hi Daniel,
>
> I think you can still use the same logic you had in the custom
> partitioner in the old producer. You just move it to the client that
> creates the records.
> The reason you don't cache the result of partitionsFor is that the
> producer should handle th
Hi All,
Why does the Kafka codebase contain both Scala and Java code? There are
even cases where the same class exists in both (e.g. javaapi.SimpleConsumer
and kafka.consumer.SimpleConsumer). Is it just to allow a Scala developer
to write Scala and a Java developer to use Java? We are trying to use the
Sim
Hi Jeff,
I see a patchless JIRA issue via
http://search-hadoop.com/?q=%2Bmesos+%2Bkafka
And if Kafka on YARN is of interest, there is KOYA:
http://search-hadoop.com/?q=koya
Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://semate
I've got several use cases where being able to spin up multiple *discrete*
kafka clusters would be extremely advantageous. External cloud services
aren't an option and we already have a decent Mesos infrastructure running
on bare metal for these types of things.
I was curious if anyone else has do
In Scala:
https://github.com/stealthly/scala-kafka/blob/master/src/test/scala/KafkaSpec.scala#L146-L168
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.ly
- - - - - - - - - - - - - - - - -
On Sat, Feb 21, 2015 at 8:10 AM, Cdhar B wrote:
> Hi,
>
> Could you please provide some
Hi,
Could you please provide some info on the Kafka 0.8.2 new producer example.
I tried but was not able to find a complete example. Does anybody have a
sample?
Appreciate your quick help.
Thanks,
Cdhar