When 0.8.2 arrives in the near future, consumer offsets will be stored by the
brokers, and that workload will no longer impact ZK.
Best Regards,
-Jonathan
On Sep 10, 2014, at 8:20 AM, Mike Marzo wrote:
> Is it possible for the high level consumer to use a different zk cluster
> than the
I would look at writing a service that reads from your existing topic and
writes to a new topic with (e.g. four) partitions.
You will also need to pay attention to the partitioning policy (or implement
your own), as the default hashing in the current Kafka version can lead to poor
distribution.
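As a minimal sketch of the key-hashing idea (this is an illustration, not Kafka's actual default partitioner; the class and method names are hypothetical):

```java
// Sketch of a custom key-based partitioner: map a key deterministically
// into [0, numPartitions). Class/method names are hypothetical.
public class KeyHashPartitioner {
    public static int partition(String key, int numPartitions) {
        if (key == null) {
            // Fallback for keyless messages; a real implementation might
            // pick a random partition instead to spread load.
            return 0;
        }
        // Mask the sign bit so the modulo result is non-negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println(KeyHashPartitioner.partition("order-42", 4));
    }
}
```

Because both producer and (if needed) consumer can re-apply the same function, the key fully determines placement.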
I was one of those asking for 0.8.1.2 a few weeks back, when 0.8.2 was at least
6-8 weeks out.
If we truly believe that 0.8.2 will go “golden” and stable in 2-3 weeks, I, for
one, don’t need a 0.8.1.2, but it depends on the confidence in shipping 0.8.2
soonish.
YMMV,
-Jonathan
On Sep 30, 2014, at 12
Sure — take a look at the kafka unit tests as well as admin.AdminUtils, e.g.:
import kafka.admin.AdminUtils
AdminUtils.createTopic(zkClient, topicNameString, 10, 1)
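For a rough idea of what createTopic does with those partition/replication arguments, here is a simplified sketch of round-robin replica assignment (the real AdminUtils also randomizes the start index and staggers follower placement; this is an illustration only, not the actual Kafka code):

```java
import java.util.*;

// Simplified sketch of round-robin replica assignment: partition p gets
// replicationFactor consecutive brokers starting at index p.
public class ReplicaAssignmentSketch {
    public static Map<Integer, List<Integer>> assign(List<Integer> brokers,
                                                     int numPartitions,
                                                     int replicationFactor) {
        Map<Integer, List<Integer>> assignment = new LinkedHashMap<>();
        for (int p = 0; p < numPartitions; p++) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                // Wrap around the broker list so load spreads evenly.
                replicas.add(brokers.get((p + r) % brokers.size()));
            }
            assignment.put(p, replicas);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // e.g. 3 brokers, 3 partitions, replication factor 2
        System.out.println(assign(Arrays.asList(0, 1, 2), 3, 2));
    }
}
```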
Best Regards,
-Jonathan
On Oct 13, 2014, at 9:58 AM, hsy...@gmail.com wrote:
> Hi guys,
>
> Besides TopicCommand, which
There are various costs when a broker fails, including leader election for each
partition, possible issues for in-flight messages, client rebalancing, etc.
So even though replication provides partition redundancy, RAID 10 on each
broker is usually a good trade-off.
Oct 22, 2014 at 11:20 AM, Gwen Shapira
> wrote:
>
>> Makes sense. Thanks :)
>>
>> On Wed, Oct 22, 2014 at 11:10 AM, Jonathan Weeks
>> wrote:
>>> There are various costs when a broker fails, including broker leader
>> election for each partition, etc
inherited a
> lot of our architecture, and many things have changed in that time. We're
> probably going to test out RAID 5 and 6 to start with and see how much we
> lose from the parity calculations.
>
> -Todd
>
>
> On Wed, Oct 22, 2014 at 3:59 PM, Jonathan Weeks
+1 on this change — APIs are forever. As much as we’d love to see 0.8.2 release
ASAP, it is important to get this right.
-JW
> On Nov 24, 2014, at 5:58 PM, Jun Rao wrote:
>
> Hi, Everyone,
>
> I'd like to start a discussion on whether it makes sense to add the
> serializer api back to the new
Howdy,
I was wondering if it would be possible to update the release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
aligned with the feature roadmap:
https://cwiki.apache.org/confluence/display/KAFKA/Index
We have several active projects planning to u
You can look at something like:
https://github.com/harelba/tail2kafka
(although I don’t know what the effort would be to update it, as it doesn’t
look like it has been updated in a couple of years)
We are using Flume to gather logs and then sending them to a Kafka cluster via
a Flume Kafka sink.
The approach may well depend on your deployment horizon. Currently, offset
tracking for each partition is done in ZooKeeper, which places an upper limit
on the number of topics/partitions you can have and operate with any kind of
efficiency.
In 0.8.2, hopefully coming in the next month or two,
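To illustrate why ZooKeeper-based offset tracking bounds the practical partition count, here is a back-of-the-envelope sketch (the znode path follows the old high-level consumer's layout; the group/topic names and traffic numbers are hypothetical):

```java
// Sketch of the ZooKeeper write load imposed by the 0.8.x high-level
// consumer: one offset znode per (group, topic, partition), each updated
// on every commit. Names and figures below are illustrative only.
public class ZkOffsetLoad {
    // Path layout used by the old ZK-based consumer.
    public static String offsetPath(String group, String topic, int partition) {
        return "/consumers/" + group + "/offsets/" + topic + "/" + partition;
    }

    // Total offset znodes written per commit pass across the cluster.
    public static long offsetZnodes(int groups, int topicsPerGroup, int partitionsPerTopic) {
        return (long) groups * topicsPerGroup * partitionsPerTopic;
    }

    public static void main(String[] args) {
        // 50 groups x 20 topics x 100 partitions -> ZK writes per commit pass
        System.out.println(offsetZnodes(50, 20, 100));
    }
}
```

The multiplication makes the scaling problem concrete: ZK write throughput, not broker capacity, becomes the ceiling as partitions grow.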
One tactic that might be worth exploring is to rely on the message key to
facilitate this.
It would require carefully engineering key functions that hash to the
partitions for your topic(s). It would also mean that your consumers for the
topic would be evaluating the key and discarding
s). I can then do the same function on the consumer
> when it reads the key. I'm essentially implementing consumer sliding
> window. Any suggestions or tips on where I would implement reading the
> message key?
>
> Thanks,
> Josh
>
>
> On Mon, Aug 18, 2014 at 6:43 PM,
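The key-based tactic described above can be sketched as follows (all names are hypothetical; this assumes a simple hash partitioner applied identically on the producer and consumer sides):

```java
// Sketch of key-based placement plus consumer-side filtering for a
// sliding window. Class/method names are hypothetical illustrations.
public class KeyWindowFilter {
    // Same deterministic hash on producer and consumer, so both agree
    // on which partition a key lands in.
    public static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    // Consumer-side check: keep only messages whose key maps into the
    // partition range this consumer's window currently covers; discard
    // the rest after reading the key.
    public static boolean accept(String key, int numPartitions,
                                 int windowStart, int windowEnd) {
        int p = partitionFor(key, numPartitions);
        return p >= windowStart && p < windowEnd;
    }

    public static void main(String[] args) {
        System.out.println(accept("user-17", 8, 0, 4));
    }
}
```

The trade-off, as noted above, is wasted consumer work: every consumer still reads messages it then discards.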
I hand-applied this patch https://reviews.apache.org/r/23895/diff/ to the kafka
0.8.1.1 branch and was able to build successfully:
gradlew -PscalaVersion=2.11.2 -PscalaCompileOptions.useAnt=false releaseTarGz -x signArchives
I am testing the jar now, and will let you know.
+1 on a 0.8.1.2 release with support for Scala 2.11.x.
-Jonathan
On Aug 22, 2014, at 11:19 AM, Joe Stein wrote:
> The changes are committed to trunk. We didn't create the patch for 0.8.1.1
> since there were code changes required and we dropped support for Scala 2.8
> ( so we could just uploa
I am interested in this very topic as well. Also, can the trunk version of the
producer be used with an existing 0.8.1.1 broker installation, or does one need
to wait for 0.8.2 (at least)?
Thanks,
-Jonathan
On Aug 26, 2014, at 12:35 PM, Ryan Persaud wrote:
> Hello,
>
> I'm looking to insert
Tue, Aug 26, 2014 at 12:38 PM, Jonathan Weeks
> wrote:
>> I am interested in this very topic as well. Also, can the trunk version of
>> the producer be used with an existing 0.8.1.1 broker installation, or does
>> one need to wait for 0.8.2 (at least)?
>>
>> Tha
>
> On Fri, Aug 29, 2014 at 10:09 AM, Jonathan Weeks
> wrote:
>
>> Thanks, Jay. Follow-up questions:
>>
>> Some of our services will produce and consume. Is there consumer code on
>> trunk that is backwards compatible with an existing 0.8.1.1 broker cluster?
> Updated the wiki with some rough timelines.
>
> Thanks,
>
> Jun
>
>
> On Fri, Aug 1, 2014 at 11:52 AM, Jonathan Weeks
> wrote:
>
>> Howdy,
>>
>> I was wondering if it would be possible to update the release plan:
>>
>> https://c
+1
Topic deletion with 0.8.1.1 is extremely problematic; coupled with the fact
that rebalance/broker membership changes pay a cost per partition today,
excessive partition counts extend downtime in the case of a failure. This
means fewer topics (e.g. hundreds or thousands) is a best practice.