I think you guys have created a system that demonstrates a new field of
mathematics, unless I have missed something in my research on the topic. If we
define the system as being an N-dimensional space of R-dimensional subspaces,
does this mean we need to use a real 3D matrix to model the algebra
Hi Guozhang,
Currently, the brokers do not know which high-level consumers are reading which
partitions; it is the rebalance between the consumers and the coordinator
that authorizes a consumer to fetch a particular partition, I think.
Does this mean that when a rebalance occurs, a
it’s happening). And as someone else mentioned, there are solutions for
encrypting data for multiple consumers. You can encrypt the data with an OTP,
then encrypt the OTP once for each consumer and store those encrypted strings
in the envelope.
-Todd
On 6/7/14, 12:25 PM, "R
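For anyone wanting to experiment with the envelope idea Todd describes, here
is a minimal sketch in Java using only javax.crypto; the class name is made
up, and a random per-message AES key stands in for the OTP:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Hypothetical sketch: encrypt the payload once with a random AES data key,
// then wrap that key separately for each consumer's RSA public key and ship
// the wrapped copies in the envelope alongside the ciphertext.
public class EnvelopeSketch {
    public static void main(String[] args) throws Exception {
        byte[] payload = "a kafka message".getBytes("UTF-8");

        // one-time data key for this message
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey dataKey = kg.generateKey();

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, dataKey);
        byte[] ciphertext = aes.doFinal(payload);

        // stand-ins for two consumer groups' key pairs
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair consumerA = rsaGen.generateKeyPair();
        KeyPair consumerB = rsaGen.generateKeyPair();

        // wrap the data key once per consumer; these go in the envelope
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, consumerA.getPublic());
        byte[] wrappedForA = rsa.wrap(dataKey);
        rsa.init(Cipher.WRAP_MODE, consumerB.getPublic());
        byte[] wrappedForB = rsa.wrap(dataKey);

        // consumer A unwraps its copy and decrypts the payload
        rsa.init(Cipher.UNWRAP_MODE, consumerA.getPrivate());
        SecretKey recovered =
            (SecretKey) rsa.unwrap(wrappedForA, "AES", Cipher.SECRET_KEY);
        aes.init(Cipher.DECRYPT_MODE, recovered);
        System.out.println(new String(aes.doFinal(ciphertext), "UTF-8"));
    }
}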
r better performance.
On Fri, Jun 6, 2014 at 10:48 AM, Rob Withers wrote:
On consideration, if we have 3 different access groups (1 for production
WRITE and 2 consumers) they all need to decode the same encryption and so
all need the same public/private keycerts, which won't work, unle
etter seems to not use certs and wrap the encryption
specification with ACL capabilities for each group of access.
On Jun 6, 2014, at 11:43 AM, Rob Withers wrote:
This is quite interesting to me and it is an excellent opportunity to
promote a slightly different security scheme. Object-capabilities are
perfect for online security and would use ACL style authentication to
gain capabilities filtered to those allowed resources for allowed
actions (READ/WRI
Check, check. 5 by 5!
> -----Original Message-----
> From: Jun Rao [mailto:jun...@gmail.com]
> Sent: Tuesday, April 15, 2014 10:06 PM
> To: users@kafka.apache.org
> Subject: Re: Kafka upgrade 0.8.0 to 0.8.1 - kafka-preferred-replica-election
> failure
>
> Ok, we will take a look and patch it in 0.8.
We need to try the one with filters, thanks. We found that if you
commitOffsets after each msg, it works. The IO is not bad.
- charlie
> On Jan 16, 2014, at 8:14 AM, Hussain Pirosha
> wrote:
>
> Hello,
>
> While running the high level consumer mentioned on
> https://cwiki.apache.org/conf
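For reference, the commit-after-each-message pattern charlie mentions would
look roughly like this against the 0.8 high-level consumer; the property
values and topic name are placeholders:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class CommitPerMessage {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "per-message-group");       // placeholder
        props.put("auto.commit.enable", "false");         // we commit by hand
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCount = new HashMap<String, Integer>();
        topicCount.put("my-topic", 1); // one stream for the topic
        List<KafkaStream<byte[], byte[]>> streams =
            connector.createMessageStreams(topicCount).get("my-topic");

        ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
        while (it.hasNext()) {
            process(it.next().message());
            // commits current offsets for ALL partitions this connector
            // owns, which is why it slows down as partitions grow
            connector.commitOffsets();
        }
    }

    private static void process(byte[] payload) { /* per-message work */ }
}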
It seems the answer may lie on the broker and zookeeper sides. Can these guys
listen on 2 ports, simultaneously? I believe you can listen on a filtered
ip:port, so they could go to 2 NICs on those boxes and then have different
props for the producer and consumers to use the different local NIC
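For what it's worth, the 0.8 broker config only exposes a single
host.name/port pair, so I don't believe one broker can listen on two
interfaces at once; the knobs involved look like this (address made up):

import java.util.Properties;

public class BrokerBindSketch {
    public static void main(String[] args) {
        Properties broker = new Properties();
        broker.setProperty("broker.id", "0");
        // bind this broker to one specific NIC; a second listener would
        // need a second broker process bound to the other interface
        broker.setProperty("host.name", "10.0.1.5"); // made up
        broker.setProperty("port", "9092");
    }
}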
Nice idea, but different sort of animal. Going to HDFS is different. It
requires aggregation of traffic, so there is the whole offset commit strategy
concern. When pulling traffic for per message work, we commit after every
pull, so exactly once. The tradeoff with aggregation is whether to a
That was an interesting section too. Which GC settings would you suggest?
Thank you,
- charlie
> On Jan 10, 2014, at 10:11 PM, Jun Rao wrote:
>
> Have you looked at our FAQ, especially
> https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
> ?
>
>
quot;1.9.1". I change it and it works. This was changed
today, so perhaps they lost 1.8?
thanks,
rob
On Aug 5, 2013, at 8:19 PM, Rob Withers wrote:
> Well, I changed something, as it was working yesterday. Here's my attempt
> at updating…
>
> Robs-MacBook-Pro:kafka ree
Well, I changed something, as it was working yesterday. Here's my attempt at
updating…
Robs-MacBook-Pro:kafka reefedjib$ sbt "++2.10.2 update"
[info] Loading global plugins from /Users/reefedjib/.sbt/plugins
[info] Loading project definition from
/Users/reefedjib/Desktop/rob/comp/workspace-fra
0.0 publish-local"
>
> Assuming 2.10.0 is the Scala version you want.
>
> /***
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop
> **
good morning,
I have built a scala 2.10 version off of 0.8 and applied a patch of my own.
When I go to sbt package, it specifies a version. How can I control these
values (kafka_2.10-0.8.0-beta1)?
[info] Packaging
/Users/reefedjib/Desktop/rob/comp/newwork/kafka/core/target/scala-2.10/kafka_2
I built 0.8, fresh, with the scala 2.10 patches from KAFKA-717's tgz, on my
Macbook. I ran sbt tests (after running sbt eclipse) and got the following
error:
[error] Failed: : Total 180, Failed 1, Errors 0, Passed 179, Skipped 0
[error] Failed tests:
[error] kafka.log.LogTest
[error] (c
e => {
>   println("unexpected exception: ")
>   e.printStackTrace()
> }
> }
> } // End while loop
>
> HTH,
>
> Florin
>
>
>
> On Jul 25, 2013, at 10:56 AM, Rob Withers wrote:
>
>> Oh boy, is my mind slow to
/partition, cleanup, and return the response to
the REST call?
thanks,
rob
On Jul 25, 2013, at 11:49 AM, Rob Withers wrote:
> Thanks, Joe, I also see the answer to my other question, that the KafkaStream
> is not on a different thread, but I automatically expect it to be since all
> othe
Thanks, Joe, I also see the answer to my other question, that the KafkaStream
is not on a different thread, but I automatically expect it to be since all
other uses we have had of the KafkaStream are stuffed in a Runnable. duh.
thanks,
rob
On Jul 25, 2013, at 11:41 AM, Joe Stein wrote:
> in
Thanks, Jim. I saw that in the 0.8 config as well.
I am trying to write a REST service that dumps all traffic in a given
topic/partition. The issue I seem to be facing now is the blocking API of the
consumerIterator. Is there any way we can ask whether the traffic is drained?
Perhaps a way
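One possible answer to the blocking iterator, if I read the 0.8 config
right: set consumer.timeout.ms, and hasNext() throws
ConsumerTimeoutException instead of blocking forever, which the REST dump
could treat as "drained". A sketch:

import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;

public class DrainingReader {
    // assumes the connector was built with consumer.timeout.ms set,
    // e.g. props.put("consumer.timeout.ms", "1000")
    public static void drain(KafkaStream<byte[], byte[]> stream) {
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        try {
            while (it.hasNext()) {
                byte[] payload = it.next().message();
                // append payload to the REST response here
            }
        } catch (ConsumerTimeoutException e) {
            // no message for consumer.timeout.ms: treat the topic/partition
            // as drained, clean up, and return the response
        }
    }
}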
11:57 AM, "Seshadri, Balaji"
wrote:
> Try ./sbt "++2.8.0 package".
>
> -----Original Message-----
> From: Rob Withers [mailto:reefed...@gmail.com]
> Sent: Thursday, June 06, 2013 11:54 AM
> To: kafka list
> Subject: package error
>
> I am
I am quite unfamiliar with compiling with sbt package. What could be my issue
here? It seems like the scala library is wrong, but where do I look for the
scala jar?
thanks,
rob
"$ ./sbt package
[info] Loading project definition from
/Users/reefedjib/Desktop/rob/comp/workspace/kafka/project
In thinking about the design of consumption, we have in mind a generic
consumer server which would consume from more than one message type. The
handling of each type of message would be different. I suppose we could
have upwards of say 50 different message types, eventually, maybe 100+
different
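A sketch of one shape such a generic consumer server could take, so that 50
or 100 message types stay a registration problem rather than a code problem;
all the names here are hypothetical:

import java.util.HashMap;
import java.util.Map;

public class HandlerRegistry {
    public interface MessageHandler {
        void handle(byte[] payload);
    }

    private final Map<String, MessageHandler> handlers =
        new HashMap<String, MessageHandler>();

    public void register(String messageType, MessageHandler handler) {
        handlers.put(messageType, handler);
    }

    // the type might come from the topic name or a header in the payload
    public void dispatch(String messageType, byte[] payload) {
        MessageHandler h = handlers.get(messageType);
        if (h == null) {
            throw new IllegalArgumentException("no handler for " + messageType);
        }
        h.handle(payload);
    }
}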
, 2013, at 11:13 AM, Neha Narkhede wrote:
> You can find all unit tests under core/test. Just look for *Test.scala.
>
> Thanks,
> Neha
> On May 23, 2013 9:59 AM, "Rob Withers" wrote:
>
>> I am using 0.8 in eclipse. I used the approach suggested to use the
I am using 0.8 in eclipse. I used the approach suggested to use the sbt plugin
and that works great (thank you to whomever recommended that). It pulled Scala
2.9.3. However, I am only able to run 2 common scala tests: TopicTest and
ConfigTest. How can I find and run the other scala tests?
Sorry for being unclear, Neha. I meant that I had forgotten that the
introduction of replicas is happening in 0.8 and I was confusing the two.
Thanks,
rob
> -----Original Message-----
> From: Rob Withers [mailto:reefed...@gmail.com]
> Sent: Monday, May 20, 2013 8:21 PM
>
mer APIs. It will be great if you can read the wiki and provide
> > > feedback -
> > > https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
> > >
> > > Thanks,
> > > Neha
> > >
ou can read the wiki and provide feedback -
> https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
>
> Thanks,
> Neha
> On May 16, 2013 10:29 PM, "Rob Withers" wrote:
>
> > We want to ensure only-once message processing, but we also wa
tOffsets botched to zookeeper?
> > > > > > >
> > > > > > > Upgrading to a new zookeeper version is not an easy change. Also
> > > > > > > zookeeper 3.3.4 is much more stable compared to 3.4.x. We think it is b
- Queue length, by group offset vs lastOffset
> - latency between produce and consume, again by group
>
> Thanks,
>
>
> Rob Withers
> Staff Analyst/Developer
> o: (720) 514-8963
> c: (571) 262-1873
>
>
>
> -----Original Message-----
> From: Jun Rao [
I've gotten to know y'all a bit, so I would like to ask my question here. :)
I am fairly unfamiliar with Scala, having worked a chapter or 2 out of a Scala
book I bought. My understanding is that it is both an object-oriented and a
functional language. The only language I am extremely familia
> Currently Kafka depends on zookeeper 3.3.4, which doesn't have a batch write
> api. So if you commit after every message at a high rate, it will be slow and
> inefficient. Besides, it will cause zookeeper performance to degrade.
>
> Thanks,
> Neha
> On May 16, 2013 6:54 PM,
We want to ensure only-once message processing, but we also want the benefit
of rebalancing. commitOffsets updates all partitions owned by a
connector instance. We want to commit the offset for just the partition
that delivered a message to the iterator, even if several fetchers are
feeding a
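As far as I can tell the 0.8 high-level consumer only exposes the
all-partitions commitOffsets, so committing a single partition would mean
writing its zookeeper offset node yourself. A sketch using the ZkClient
library Kafka depends on; the path layout is the standard
/consumers/<group>/offsets/<topic>/<partition>, and it assumes a ZkClient
constructed with a string serializer, as Kafka's own tools do:

import org.I0Itec.zkclient.ZkClient;

public class SinglePartitionCommit {
    private final ZkClient zk;

    public SinglePartitionCommit(ZkClient zk) {
        this.zk = zk;
    }

    // writes the offset node the high-level consumer reads on startup,
    // instead of commitOffsets() which writes every owned partition
    public void commit(String group, String topic, int partition, long offset) {
        String path = "/consumers/" + group + "/offsets/" + topic + "/" + partition;
        if (!zk.exists(path)) {
            zk.createPersistent(path, true); // create parent nodes as needed
        }
        zk.writeData(path, String.valueOf(offset));
    }
}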
> > Sent: Wednesday, May 15, 2013 11:02 PM
> > To: users@kafka.apache.org
> > Subject: Re: could an Encoder/Decoder be stateful?
> >
> > Each producer/consumer uses a single instance of the encoder/decoder.
> >
> > Thanks,
> >
> > Jun
> >
>
Immediately, monitoring. Later, possibly thresholding to evoke a
reconfiguration of the number of partitions into a new topic, migrating
message index/logs and redirecting pubs/subs to the new topic during a
traffic spike.
> -----Original Message-----
> From: Jun Rao [mailto:jun...@gmail.com]
> Sen
We are calling commitOffsets after every message consumption. It looks to be
~60% slower, with 29 partitions. If a single KafkaStream thread is from a
connector, and there are 29 partitions, then commitOffsets sends 29 offset
updates, correct? Are these offset updates batched in one send to z
We are curious. It would be excellent if around 8/17 could be targeted,
perhaps go for 7/17 as an RC and let 8/17 be an RC2 date, a month before we
would like to see it go to production. An RC with thorough testing with our
application may be workable.
Thanks,
rob
Or is the same instance used for each (un)marshaling? It would be nice to
have a cache and a duplicateMsgChecker function from the app above, to
ensure transactional guarantees, and object-ref substitutions during
(de)serialization, to enable durable distributed objects and promises.
Thanks,
+ 1!
A call through the Consumer Group, queuedMessageCount(topic,
partitionNumber), could query zookeeper (and add the local fetched queue) to
return this interesting piece of information. That is, if we cannot already
do this :)
Thanks,
rob
> -----Original Message-----
> From: Sining Ma [mailto
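Until a queuedMessageCount call exists, the number can be approximated from
outside: read the group's committed offset from zookeeper and ask the
partition's leader for its latest offset; the difference is the queue length
(ignoring messages already fetched into the local queue). A sketch against
the 0.8 SimpleConsumer offset API; the wiring and the string-serializer
assumption are mine:

import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import org.I0Itec.zkclient.ZkClient;

public class QueuedMessageCount {
    public static long count(ZkClient zk, SimpleConsumer leader,
                             String group, String topic, int partition) {
        // committed offset, assuming a string-serializing ZkClient
        String path = "/consumers/" + group + "/offsets/" + topic + "/" + partition;
        String data = zk.readData(path, true); // null if the node is missing
        long committed = (data == null) ? 0L : Long.parseLong(data);

        // latest offset on the broker; must be asked of the partition leader
        TopicAndPartition tap = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> info =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        info.put(tap, new PartitionOffsetRequestInfo(
            kafka.api.OffsetRequest.LatestTime(), 1));
        OffsetResponse resp = leader.getOffsetsBefore(
            new kafka.javaapi.OffsetRequest(
                info, kafka.api.OffsetRequest.CurrentVersion(), "lag-checker"));
        long latest = resp.offsets(topic, partition)[0];

        return latest - committed;
    }
}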
,
Andrea
On 05/11/2013 06:17 PM, Rob Withers wrote:
Could anyone throw me a nice shiny knuckle bone, please? :)
thanks,
rob
Could anyone throw me a nice shiny knuckle bone, please?
thanks,
rob
nd to an uncreated topic
It seems this is because we hardcoded "/" in HighwaterMarkCheckpoint. Could
you file a jira?
val name = path + "/" + HighwaterMarkCheckpoint.highWatermarkFileName
Thanks,
Jun
On Fri, May 10, 2013 at 5:38 PM, Rob Withers wrote:
Jun, thanks! I c
ps. Is there a way to determine the build of a jar?
From: Rob Withers
Sent: Friday, May 10, 2013 6:53 PM
To: users@kafka.apache.org
Subject: a build jar request
Is there any way someone could generate a daily 0.8 jar, and post it somewhere,
so those of us unable to build at home could have
Is there any way someone could generate a daily 0.8 jar, and post it somewhere,
so those of us unable to build at home could have the latest? I got my last
build, from work, dated 4/30. This would be a huge help.
thanks,
rob
Jun, thanks! I changed the properties for the broker to the following, which
worked, and I was able to produce data, for a while. More on the next issue
below.
thanks,
rob
...I changed the broker properties to the following:
for broker 0:
props.setProperty("log.dir", "\\tmp\\kafka0-logs");
for br
I am running on Windows. I am programmatically (no scripts) starting a zk,
2 brokers, 2 consumers and a producer, in this order but the first 3 at
once, then the other 3 at once, all with a nonexistent topic.
Here's the pertinent log for the producer (with other stuff mixed in, no
doubt):
> -----Original Message-----
> From: Chris Curtin [mailto:curtin.ch...@gmail.com]
> > 1 When you say the iterator may block, do you mean hasNext() may block?
> >
>
> Yes.
Is this due to a potential non-blocking fetch (broker/zookeeper returns an
empty block if offset is current)? Yet this blo
ould this be read?
Keeping in mind that these were auto-created, unsure of the implications, I
will be carefully reading the following, next:
https://cwiki.apache.org/KAFKA/kafka-replication.html.
thanks,
rob
> -Original Message-----
> From: Rob Withers [mailto:reefed...@gmail.com]
>
uced?
>
> You may want to take a look at the consumer example in
> http://kafka.apache.org/08/api.html
>
> Thanks,
>
> Jun
>
>
> On Wed, May 1, 2013 at 7:14 PM, Rob Withers wrote:
>
> > Running a consumer group (createStreams()), pointing to
Running a consumer group (createStreams()), pointing to the zookeeper and
with the topic and 1 consumer thread, results in only half the messages
being consumed. The topic was auto-created, with a replication factor of 2,
but the producer was configured to produce to 2 brokers and so 4 partitions
with topic auto-creation (and no previous topic) and a replication factor of
2 results in 4 partitions. They are numbered uniquely and sequentially, but
there are two leaders? Is it so? Should we only write to one broker? Does
it have to be the leader or will the producer get flipped by the zk?
Yes, like in KAFKA-876's producer log, I only get one
LeaderNotAvailableException. Thanks for fixing this issue, though was it
only fixed in the 0.8 branch? A colleague is having the issue tonight
and I think he may be on the trunk.
Like KAFKA-876, I am now getting the
WARN [Kaf