This may be due to https://issues.apache.org/jira/browse/KAFKA-1367
On Thu, Oct 03, 2013 at 04:17:45PM -0400, Florian Weingarten wrote:
> Hi list,
>
> I am trying to debug a strange issue we are seeing. We are using
> "Sarama" [1], our own Go implementation of the Kafka API.
>
> Somehow, we eith
I just wanted to send this out as an FYI but it does not affect any
released versions.
This only affects those who release off trunk and use Kafka-based
consumer offset management. KAFKA-1469 fixes an issue in our
Utils.abs code. Since we use this method in determining the offset
manager for a co
The consumer iterator returns MessageAndMetadata which includes the
offset. You would need logic in your application to check the offset
if it needs to stop processing after a certain offset.
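For example, with the 0.8 high-level consumer the check could look roughly like
this (topic, group and the stop offset are hypothetical, and a single partition
is assumed so a single bound makes sense):

    import java.util.Collections;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class BoundedConsumer {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "bounded-group");           // placeholder
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        KafkaStream<byte[], byte[]> stream = connector
            .createMessageStreams(Collections.singletonMap("my-topic", 1))
            .get("my-topic").get(0);

        long stopOffset = 1000L; // hypothetical upper bound (per partition)
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
          MessageAndMetadata<byte[], byte[]> mm = it.next();
          if (mm.offset() > stopOffset) {
            break; // application-level check on the returned offset
          }
          // process mm.message() ...
        }
        connector.shutdown();
      }
    }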
On Wed, Sep 24, 2014 at 12:09:40PM -0600, Kyle Banker wrote:
> What are the best ways to replay a portion
You can use any offset you like to fetch (provided it is present on
the broker). Since the client-side FetchRequest and the server-side
Log API don't support reading a specific _number_ of messages you will
need to specify the bytes (size) to read. You can then extract as many
messages as you like
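A sketch with the SimpleConsumer javaapi: the fetch is sized in bytes, and the
client just stops iterating once it has the number of messages it wants (host,
topic, partition and sizes below are placeholders; leader lookup and error
handling are omitted):

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.message.MessageAndOffset;

    public class FetchSome {
      public static void main(String[] args) {
        // assumes the broker on localhost:9092 is the leader for my-topic/0
        SimpleConsumer consumer =
            new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "fetch-some");

        long startOffset = 500L; // any offset still present on the broker
        int maxMessages = 100;   // desired count - enforced client-side only

        FetchRequest request = new FetchRequestBuilder()
            .clientId("fetch-some")
            .addFetch("my-topic", 0, startOffset, 1024 * 1024) // size in bytes
            .build();
        FetchResponse response = consumer.fetch(request);
        // response.hasError() should be checked in real code

        int count = 0;
        for (MessageAndOffset mo : response.messageSet("my-topic", 0)) {
          if (count++ >= maxMessages) {
            break; // ignore the rest of the fetched bytes
          }
          System.out.println("offset " + mo.offset());
        }
        consumer.close();
      }
    }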
Can you confirm that you are using a replication factor of two? As
Steven said, the replicas also consume from the leader. So it's your
consumer, plus the replica.
On Thu, Sep 25, 2014 at 10:04:29PM -0700, ravi singh wrote:
> Thanks Steven. That answers the difference in Bytes in and bytes Out pe
Unfortunately, I think the only option (going forward) until you pick
up KAFKA-1414 is to use a smaller segment size.
On Fri, Sep 26, 2014 at 02:04:33AM -0400, Otis Gospodnetic wrote:
> Hi,
>
> Just got a lovely email a bunch of our EC2 instances will be rebooted in a
> few days. Some of them ru
I think there are some issues still open with it that are currently
slated for 0.9 - e.g., KAFKA-1305. Would be great if someone can pick
that up as it is a useful feature to include in 0.8.2
On Fri, Sep 26, 2014 at 06:06:14PM +0400, Yury Ruchin wrote:
> Hello,
>
> A while back, I saw mentioning
> > kafka2_1 | [2014-09-26 12:35:07,289] INFO [Kafka Server 2],
> > started (kafka.server.KafkaServer)
> > kafka2_1 | [2014-09-26 12:35:07,394] INFO New leader is 2
> > (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
The above logs are for controller election. Not le
It looks like you set up three separate ZK clusters, not an ensemble.
You can take a look at
http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkMulitServerSetup
for how to set up an ensemble, and then register all three Kafka
brokers with that single ZK ensemble.
Joel
On Thu, Oct 09, 2
This is another potential use-case for message metadata. i.e., if we
had a DC/environment field in the header you could easily set up a
two-way mirroring pipeline. The mirror-maker can just filter out
messages that originated in the source cluster. For this to be
efficient the mirror maker should r
As Neha mentioned, with rep factor 2x, this shouldn't normally cause
an issue.
Taking the broker down will cause the leader to move to another
replica; consumers and producers will rediscover the new leader; no
rebalances should be triggered.
When you bring the broker back up, unless you run a pr
It has been replaced with:
log.retention.check.interval.ms
We will update the docs. Thanks for reporting this.
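For example, in server.properties (the value below is just an illustration):

    # how often (in ms) the broker checks whether any log is eligible for deletion
    log.retention.check.interval.ms=300000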
Joel
On Mon, Oct 20, 2014 at 11:08:38AM -0400, Libo Yu wrote:
> http://kafka.apache.org/documentation.html#brokerconfigs
>
> In section 6.3, there is an example your production server
Shlomi,
If you are on trunk, and your consumer subscriptions are identical
then you can try a slightly different partition assignment strategy.
Try setting partition.assignment.strategy="roundrobin" in your
consumer config.
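In consumer-properties form that would be:

    # only valid when all consumers in the group have identical subscriptions
    partition.assignment.strategy=roundrobin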
Thanks,
Joel
On Wed, Oct 29, 2014 at 06:29:30PM -0700, Jun Rao wrote:
>
Do you see any errors on the broker logs? Can you check the broker's
public access logs and see if there are topic metadata requests coming
in from the producer?
On Wed, Oct 29, 2014 at 07:15:15PM -0700, Rajiv Kurian wrote:
> I don't see anything else that is relevant. I traced the first of these
threads request to join a consumer group they will be
> elected so that they are balanced across the machine denoted by the
> consumer set/ensemble identifier.
>
> will partition.assignment.strategy="roundrobin" help with that?
> 10x,
> Shlomi
>
> On Thu, Oct 30,
andler-6]
> > [state.change.logger ]: Broker 0 received invalid
> > LeaderAndIsr request with correlation id 158 from controller 0 epoch 29083
> > with an older leader epoch 5 for partition [myTopic,1002], current leader
> > epoch is
> >
> >
That is something that you will need to manage manually with the
SimpleConsumer. You can store it in ZK (manually) or a
file/database/other. You can also store it in Kafka if you send an
OffsetCommitRequest
(https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToThe
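If it helps, a rough sketch of a commit through the 0.8.2 javaapi is below
(group, topic and offset are placeholders; the constructor arguments are from
memory, so please double-check them against the protocol guide above):

    import java.util.HashMap;
    import java.util.Map;

    import kafka.common.OffsetAndMetadata;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetCommitRequest;
    import kafka.javaapi.OffsetCommitResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class ManualOffsetCommit {
      public static void main(String[] args) {
        SimpleConsumer consumer =
            new SimpleConsumer("localhost", 9092, 100000, 64 * 1024, "offset-committer");

        Map<TopicAndPartition, OffsetAndMetadata> offsets = new HashMap<>();
        offsets.put(new TopicAndPartition("my-topic", 0),
            new OffsetAndMetadata(42L, "no metadata", System.currentTimeMillis()));

        // versionId 0 stores the offset in ZooKeeper; versionId 1 (0.8.2+) stores
        // it in Kafka, in which case the request must go to the offset coordinator
        OffsetCommitRequest request = new OffsetCommitRequest(
            "my-group", offsets, 0 /* correlationId */, "offset-committer", (short) 0);
        OffsetCommitResponse response = consumer.commitOffsets(request);
        System.out.println("commit had errors: " + response.hasError());
        consumer.close();
      }
    }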
> int partition(MyStuff stuff, int numPartitions) {
>
> return thingy.decide(stuff);
>
> }
>
> }
>
> The problem is I don't know how to pass a MyDeciderThingy to my
> Partitioner object given Kafka instantiates it.
>
> Thanks!
>
> On Thu, Oct 30,
f threads across 4 JVMs (the 4th
> > JVM does not get any active consumption threads). What is best way to
> > evenly (or close to even) distribute the consumption threads across JVMs.
> >
> >
> > Thanks,
> >
> > Bhavesh
> >
> > On Thu, Oct 30, 20
In your case, with four JVMs (i.e., consumer processes), six threads
per consumer process, and 12 partitions, each thread gets at most one
partition, but the first two processes get all the partitions and the
last two processes are left idle. We could tweak
the assignment strate
That sounds good - although is that the only change? (Sorry, I have
not done a careful review of that patch and would like to before it
gets checked in.)
On Fri, Oct 31, 2014 at 10:42:13AM -0700, Jun Rao wrote:
> To circle back on this thread. The patch in kafka-1482 is almost ready. To
> make the mb
KAFKA-1070 will help with this and is pending a review.
On Mon, Nov 03, 2014 at 05:03:20PM -0500, Otis Gospodnetic wrote:
> Hi,
>
> How do people handle situations, and specifically the broker.id property,
> where the Kafka (broker) cluster is not fully defined right away?
>
> Here's the use cas
Yes, there are some changes, but they will be checked in prior to the
full release:
https://issues.apache.org/jira/browse/KAFKA-1728
Joel
On Mon, Nov 03, 2014 at 04:46:12PM -0500, Jason Rosenberg wrote:
> Are there any config parameter updates/changes? I see the doc here:
> http://kafka.apache.org/docu
We used to default to 10, but two should be sufficient. There is
little reason to buffer more than that. If you increase it to 2000 you
will most likely run into memory issues. E.g., if your fetch size is
1MB you would enqueue 1MB*2000 chunks in each queue.
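In consumer-config terms (the arithmetic in the comment is just the example
above):

    # fetched chunks buffered per queue; each chunk can be up to the fetch size
    queued.max.message.chunks=2
    # rough memory bound per queue ~= chunks x fetch size,
    # e.g. 2000 chunks x 1 MB fetches would try to buffer ~2 GB per queue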
On Tue, Nov 04, 2014 at 09:05:44AM -0800
Ops-experts can share more details but here are some comments:
>
> * Does Kafka 'like' lots of small partitions for replication, or larger
> ones? ie: if I'm passing 1Gbps into a topic, will replication be happier
> if that's one partition, or many partitions?
Since you also have to account for
nd drain as
> quickly as possible with auto commit on ?
>
> Thanks,
>
> Bhavesh
>
> On Tue, Nov 4, 2014 at 9:59 AM, Joel Koshy wrote:
>
> > We used to default to 10, but two should be sufficient. There is
> > little reason to buffer more than that. If you increa
+1 on the JMX + gradle properties. Is there any (seamless) way of
including the exact git hash? That would be extremely useful if users
need help debugging and happen to be on an unreleased build (say, off
trunk)
On Wed, Nov 12, 2014 at 09:34:35AM -0800, Gwen Shapira wrote:
> Actually, Jun suggest
ed up KAFKA-1580 so the above worked for us.
Thanks,
Joel
On Mon, Sep 22, 2014 at 03:36:46PM -0700, Joel Koshy wrote:
> I just wanted to send this out as an FYI but it does not affect any
> released versions.
>
> This only affects those who release off trunk and use Kafka-based
The imbalance is measured wrt preferred leaders. i.e., for every
partition, the first replica in the assigned replica list (as shown in
the output of kafka-topics.sh) is called the preferred replica. On
each broker, the auto-balancer counts the number of partitions led by
that broker for which the
Inline..
On Tue, Nov 18, 2014 at 04:26:04AM -0500, Marius Bogoevici wrote:
> Hello everyone,
>
> I have a few questions about the current status and future of the Kafka
> consumers.
>
> We have been working to adding Kafka support in Spring XD [1], currently
> using the high level consumer via S
> On Tue, Nov 18, 2014 at 2:55 PM, Joel Koshy wrote:
>
> > Inline..
> >
> > On Tue, Nov 18, 2014 at 04:26:04AM -0500, Marius Bogoevici wrote:
> > > Hello everyone,
> > >
> > > I have a few questions about the current status and future of the Kafka
>
> makes it hard to reason about what type of data is being sent to Kafka and
> also makes it hard to share an implementation of the serializer. For
> example, to support Avro, the serialization logic could be quite involved
> since it might need to register the Avro schema in some remote registry a
oint and it's not clear which layer a
> user should be using.
>
> Jun
>
>
> On Tue, Dec 2, 2014 at 12:34 AM, Joel Koshy wrote:
>
> > > makes it hard to reason about what type of data is being sent to Kafka
> > and
> > > also makes it hard to sha
the
> "contract" your application programs to is just the normal producer api.
>
> -Jay
>
> On Tue, Dec 2, 2014 at 10:06 AM, Joel Koshy wrote:
>
> > Re: pushing complexity of dealing with objects: we're talking about
> > just a call to a serialize
me, I should be clear that I don't think the proposal is in
any way unreasonable which is why I'm definitely not opposed to it,
but I'm also not convinced that it is necessary.
Thanks,
Joel
>
> On Tue, Dec 2, 2014 at 10:06 AM, Joel Koshy wrote:
>
> > Re: pushing co
lization after
> getting the raw bytes, perhaps it would be better to have these two steps
> integrated.
True, but it is just a marginal and very obvious step that shouldn't
surprise any user.
Thanks,
Joel
>
> Thanks,
>
> Jun
>
> On Tue, Dec 2, 2014 at 2:05 PM, Jo
(sorry about the late follow-up - I'm traveling most of this
month)
I'm likely missing something obvious, but I find the following to be a
somewhat vague point that has been mentioned more than once in this
thread without a clear explanation. i.e., why is it hard to share a
serializer/deseria
f it's not directly used in the API.
>
> Thanks,
>
> Jun
>
> On Mon, Dec 15, 2014 at 2:11 AM, Joel Koshy wrote:
>
> > (sorry about the late follow-up - I'm traveling most of this
> > month)
> >
> > I'm likely missing something obvio
(inline)
On Mon, Dec 15, 2014 at 11:45:07AM -0800, Rajiv Kurian wrote:
> I currently have a topic with 1024 partitions. I know it's kind of going
> past the recommended limits, but I kept it like that because I am moving a
> legacy system to kafka and it has a 1024 parallel partitions. I wanted to
>
> Also, my __consumer_offsets topic shows up with a replication factor of 1.
> Is that changeable?
Yes - see the offsets.topic.num.partitions and
offsets.topic.replication.factor broker configs.
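Both are broker-side settings and only take effect when the offsets topic is
first created - e.g., in server.properties (the values are just an
illustration):

    offsets.topic.num.partitions=50
    offsets.topic.replication.factor=3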
--
Joel
> However it turns out to be difficult to know the existing consumer group
> strings. Is the message format in __consumer_offsets "public"/stable in any
> way or is there a better way of listing the existing group names?
Yes it is "stable" but not described in the user documentation since
it is an
e to the value serializer to get the value bytes, and finally send the
> bytes to the producer. The former will be simpler and likely makes the
> adoption easier.
>
> Thanks,
>
> Jun
>
> On Mon, Dec 15, 2014 at 7:20 PM, Joel Koshy wrote:
> >
> > Documentat
rted using it. However, that number is
> > > > likely
> > > > > > small. Also, the functionality of OffsetCommitRequest has changed
> > > since
> > > > > > it's writing the offset to a Kafka log, instead of ZK (for good
> > > > reas
> Is leadership rebalance a safe operation?
Yes - we use it routinely. For any partition, there should only be a
brief (order of seconds) period of rejected messages as leaders move.
When that happens the client should refresh metadata and discover the
new leader. Are you using the Java producer?
seems that after the rebalance, some
> resources in the brokers was tied up and it was only released after restart
> of consumers.
>
>
> On Thu, Jan 15, 2015 at 8:15 AM, Joel Koshy wrote:
>
> > > Is leadership rebalance a safe operation?
> >
> > Yes - we use
mpression error is likely to be caused by incompatible snappy
> version used by the producer.
>
> Also, it appears that when the metric FailedProduceRequestsPerSec (which
> captures the above error) raises to a certain level (around 20/s), it will
> start to have impact on the stability of
Hey Jason,
Is it an option for you to do the following:
- Bounce in a config change to the brokers to turn off auto-create
- (Batch)-delete the topic(s)
- Wait long enough for consumers to rebalance (after which they will
no longer consume the topic(s))
- Bounce in a config change to the broker
on, so a consumer rebalance will trigger a fresh
> > topic pull from the consumers? How long is 'long enough' to ensure a
> > rebalance has occurred everywhere?
> >
> > Jason
> >
> > On Mon, Jan 26, 2015 at 3:07 PM, Joel Koshy wrote:
> >
Hmm, that's right - I completely forgot about that.
On Mon, Jan 26, 2015 at 01:49:33PM -0800, Jun Rao wrote:
> Joel,
>
> That's probably because console consumer always uses wildcard for
> consumption.
>
> Thanks,
>
> Jun
>
> On Mon, Jan 26, 2015 at 1:
Hi Jason - can you describe how you verify that the metrics are not
coming through to the metrics registry? Looking at the metrics code,
it seems that the mbeans are registered by the yammer JMX reporter
only after being added to the metrics registry.
Thanks,
Joel
On Tue, Jan 27, 2015 at 02
Which version of the broker are you using?
On Mon, Jan 26, 2015 at 10:27:14PM -0800, Sumit Rangwala wrote:
> While running kafka in production I found an issue where a topic wasn't
> getting created even with auto topic enabled. I then went ahead and created
> the topic manually (from the command
Is it actually getting double-counted? I tried reproducing this
locally but the BrokerTopicMetrics.Count lines up with the sum of the
PerTopic.Counts for various metrics.
On Tue, Jan 27, 2015 at 03:29:37AM -0500, Jason Rosenberg wrote:
> Ok,
>
> It looks like the yammer MetricName is not being cr
This can happen as a result of unclean leader elections - there are
mbeans on the controller that give the unclean leader election rate -
or you can check the controller logs to determine if this happened.
On Tue, Jan 27, 2015 at 09:54:38PM -0800, Liju John wrote:
> Hi ,
>
> I have query regardin
is greater than number of available brokers. Check the
> > default.replication.factor parameter.
> >
> > Gwen
> >
> > On Tue, Jan 27, 2015 at 12:29 AM, Joel Koshy wrote:
> > > Which version of the broker are you using?
> > >
> > > On Mon, Jan 26, 2015 at 10:27:1
> If you can tell me where to find the logs I can check. I haven't restarted
> my brokers since the issue.
This will be specified in the log4j properties that you are using.
On Wed, Jan 28, 2015 at 12:01:01PM -0800, Sumit Rangwala wrote:
> On Tue, Jan 27, 2015 at 10:54 PM, Joe
There was some confusion here - turns out that they do turn it on. I added Tu
to this thread and his response:
We have speculative set to true by default. With these settings, we are
seeing about 5-7% of the tasks have speculative tasks launched, other 90%
finished within the standard deviations
creation. Since the exact setup is
> different I will start another thread with the information.
>
> Sumit
>
> On Thu, Jan 29, 2015 at 1:29 AM, Joel Koshy wrote:
>
> > > If you can tell me where to find the logs I can check. I haven't
> > restarted
Can you contact the maintainer directly?
http://github.com/claudemamo/kafka-web-console/issues
On Tue, Feb 03, 2015 at 12:09:46PM -0800, Sa Li wrote:
> Hi, All
>
> I am currently using kafka-web-console to monitor the kafka system; it goes
> down regularly, so I have to restart it every few hours
- Can you check the log cleaner logs?
- Do you have any compressed messages in your log? Or messages without
a key?
- By default it is in a log-cleaner.log file unless you modified that.
- Can you take a thread-dump to see if the log cleaner is still alive?
- Also, there is an mbean that you can
Hey Sumit,
I thought you would be providing the actual steps to reproduce :)
Nevertheless, can you get all the relevant logs: state change logs and
controller logs at the very least and if possible server logs and send
those over?
Joel
On Tue, Feb 03, 2015 at 03:27:43PM -0800, Sumit Rangwala wro
stacktrace in the broker log if this is the case.
>
>
> -Original Message-
> From: Joel Koshy [mailto:jjkosh...@gmail.com]
> Sent: Tuesday, February 03, 2015 3:07 PM
> To: users@kafka.apache.org
> Subject: Re: Turning on cleanup.policy=compact for a topic => not starting
re exactly how to do that or what thread
> I'd be looking for specifically... Found a suggestion to run
>
> Jstack -l > jstack.out
>
> So I did that, and looked for anything containing "Clean" or "clean" and no
> matches.
>
> I wi
Thanks for the logs - will take a look tomorrow unless someone else
gets a chance to get to it today.
Joel
On Tue, Feb 03, 2015 at 04:11:57PM -0800, Sumit Rangwala wrote:
> On Tue, Feb 3, 2015 at 3:37 PM, Joel Koshy wrote:
>
> > Hey Sumit,
> >
> > I thought you would
y for the topic "test1" while
> >> > > >deletion of topic going on.
> >> > > >
> >> > >
> >> > > Yes it is the case. However, after a small period of time (say few
> >> > > minutes)
> >> > &
This is documented in the official docs:
http://kafka.apache.org/documentation.html#distributionimpl
On Thu, Feb 05, 2015 at 01:23:01PM -0500, Jason Rosenberg wrote:
> What are the defaults for those settings (I assume it will be to continue
> using only zookeeper by default)?
>
> Also, if I hav
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowdoIaccuratelygetoffsetsofmessagesforacertaintimestampusingOffsetRequest?
However, you will need to issue a TopicMetadataRequest first to
discover the leaders for all the partitions and then issue the offset
request.
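A sketch of the per-partition lookup once you know that partition's leader
(host, topic and timestamp below are placeholders):

    import java.util.Collections;
    import java.util.Map;

    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class OffsetForTime {
      public static void main(String[] args) {
        // connect to the partition's leader (found via a TopicMetadataRequest)
        SimpleConsumer consumer =
            new SimpleConsumer("leader-host", 9092, 100000, 64 * 1024, "offset-lookup");

        TopicAndPartition tp = new TopicAndPartition("my-topic", 0);
        long timestamp = System.currentTimeMillis() - 3600 * 1000L; // ~1 hour ago
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            Collections.singletonMap(tp, new PartitionOffsetRequestInfo(timestamp, 1));

        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "offset-lookup");
        OffsetResponse response = consumer.getOffsetsBefore(request);

        // note: the returned offset is only as granular as the segment boundaries
        long[] offsets = response.offsets("my-topic", 0);
        System.out.println(offsets.length > 0 ? "offset: " + offsets[0] : "none");
        consumer.close();
      }
    }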
On Thu, Feb 05, 2015
There has to be an implicit contract between the producer and
consumer. The K, V pairs don't _need_ to match but generally _should_.
If the producer sends <PK, PV> the consumer may receive <CK, CV> as
long as it knows how to convert those raw bytes to <CK, CV>. In the
example, if CK == byte[] and CV == byte[] it is effect
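As a toy illustration of that contract (not tied to any particular client
version): the producer side serializes strings to UTF-8 bytes, and a
byte[]-typed consumer can recover them only because both sides agree on that
encoding.

    import java.nio.charset.StandardCharsets;

    public class ContractExample {
      public static void main(String[] args) {
        // producer side: PV = String, serialized to raw bytes before sending
        String producedValue = "hello";
        byte[] wireBytes = producedValue.getBytes(StandardCharsets.UTF_8);

        // consumer side: CV = byte[]; the implicit contract ("UTF-8 strings")
        // is what lets the consumer turn the raw bytes back into the value
        String consumedValue = new String(wireBytes, StandardCharsets.UTF_8);
        System.out.println(consumedValue.equals(producedValue)); // true
      }
    }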
dirtiness threshold has been met). The
compaction policy is also documented on the site.
Thanks,
Joel
> On Thu, Feb 5, 2015 at 2:21 PM, Joel Koshy wrote:
>
> > This is documented in the official docs:
> > http://kafka.apache.org/documentation.html#distributionimpl
> >
with the default
> >> num.partitions and
> >> > >> > replication.factor. Did you try stopping the consumer first and
> >> issue
> >> > >> > the topic delete.
> >> > >> > -Harsha
> >> > >> >
> >> > >> > On Tue, Feb 3, 2015, at 08:37 P
There are mbeans
(http://kafka.apache.org/documentation.html#monitoring) that you can
poke for incoming message rate - if you look at those over a period of
time you can figure out which of those are likely to be defunct and
then delete those topics.
On Thu, Feb 05, 2015 at 02:38:27PM -0800, Jagbi
> We can reset the offset and get the first 10 messages, but we need to go
> back in reverse sequence. Suppose a user has consumed messages up to offset
> 100 and currently only the last 10 messages are visible, from 100 to 90. Now
> I want to retrieve messages from 80 to 90 - how can we do that?
> Finally, why is section 5.6 titled "Distribution"? Seems to be a grab-bag
> of mostly consumer related topics?
Yes - this is pre-existing structure that can be improved.
> >
> > > On Thu, Feb 5, 2015 at 2:21 PM, Joel Koshy wrote:
> > >
> > > > This is do
On Thu, Feb 05, 2015 at 11:57:15PM -0800, Joel Koshy wrote:
> On Fri, Feb 06, 2015 at 12:43:37AM -0500, Jason Rosenberg wrote:
> > I'm not sure what you mean by 'default' behavior 'only if' offset.storage
> > is kafka. Does that mean the 'default&
+1
On Tue, Feb 10, 2015 at 01:32:13PM -0800, Jay Kreps wrote:
> I agree that would be a better name. We could rename it if everyone likes
> Compactor better.
>
> -Jay
>
> On Tue, Feb 10, 2015 at 9:33 AM, Gwen Shapira wrote:
>
> > btw. the name LogCleaner is seriously misleading. Its more of a
You should use the producer under o.a.k.c. The new consumer
implementation is not available in 0.8.2 (although the APIs are there)
so you would need to use the kafka.javaapi classes for the consumer.
We plan to deprecate kafka.javaapi eventually.
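A minimal sketch of the 0.8.2 producer under o.a.k.c (broker address and topic
are placeholders):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class NewProducerExample {
      public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        producer.close();
        // the consumer side would still use the kafka.javaapi / kafka.consumer
        // classes until the new consumer is released
      }
    }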
Thanks,
Joel
On Wed, Feb 11, 2015 at 11:42:04AM +
There are mbeans named KafkaCommitsPerSec and ZooKeeperCommitsPerSec -
can you look those up and see what they report?
On Thu, Feb 12, 2015 at 07:32:39PM +0800, tao xiao wrote:
> Hi team,
>
> I was trying to migrate my consumer offset from kafka to zookeeper.
>
> Here is the original settings of
r == ErrorMapping.NoError)
>
> offsetMap.put(topicAndPartition, offsetAndMetadata.offset)
>
> else {
>
> println("Could not fetch offset for %s due to %s.".format(
> topicAndPartition, ErrorMapping.exceptionFor(offsetAndMetadata
Actually, I meant to say: check that it is not increasing.
On Thu, Feb 12, 2015 at 08:15:01AM -0800, Joel Koshy wrote:
> Possibly a bug - can you also look at the MaxLag mbean in the consumer
> to verify that the maxlag is zero?
>
> On Thu, Feb 12, 2015 at 11:24:42PM +0800, tao xiao w
12:18 AM, Joel Koshy wrote:
>
> > Actually, I meant to say: check that it is not increasing.
> >
> > On Thu, Feb 12, 2015 at 08:15:01AM -0800, Joel Koshy wrote:
> > > Possibly a bug - can you also look at the MaxLag mbean in the consumer
> > > to verify that the
led=false.
> >2. After consuming messages for a while shutdown the consumer and change
> >setting dual.commit.enabled=true
> >3. bounce the consumer and run for while. The lag looks good
> >4. change setting offsets.storage=zookeeper and bounce the consumer.
> >Starting from
There are FetcherLagMetrics that you can take a look at. However, it
is probably easiest to just monitor MaxLag as that reports the maximum
of all the lag metrics.
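If you want to read it programmatically from inside the consumer's JVM,
something along these lines should work, assuming the 0.8-era mbean naming
(kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=...):

    import java.lang.management.ManagementFactory;
    import java.util.Set;

    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class MaxLagProbe {
      public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> names = server.queryNames(new ObjectName(
            "kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=*"), null);
        for (ObjectName name : names) {
          // the yammer JMX reporter exposes gauges via a "Value" attribute
          Object value = server.getAttribute(name, "Value");
          System.out.println(name + " -> " + value);
        }
      }
    }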
On Fri, Feb 13, 2015 at 05:03:28PM +0800, tao xiao wrote:
> Hi team,
>
> Is there a metric that shows the consumer lag of a particula
Hi Chris,
In 0.8.2, the simple consumer Java API supports committing/fetching
offsets that are stored in ZooKeeper. You don't need to issue any
ConsumerMetadataRequest for this. Unfortunately, the API currently
does not support fetching offsets that are stored in Kafka.
Thanks,
Joel
On Mon, Feb
> You are also correct and perceptive to notice that if you check the end of
> the log then begin consuming and read up to that point compaction may have
> already kicked in (if the reading takes a while) and hence you might have
> an incomplete snapshot.
Isn't it sufficient to just repeat the che
consumer timeout set to -1, it takes some time
> to query the max offset values, which is still long enough for more
> messages to arrive.
Got it - thanks for clarifying.
>
>
>
> On 18 February 2015 at 23:16, Joel Koshy wrote:
>
> > > You are also correct and pe
On Tuesday, February 17, 2015 12:22 PM, Joel Koshy
> wrote:
>
>
> Hi Chris,
>
> In 0.8.2, the simple consumer Java API supports committing/fetching
> offsets that are stored in ZooKeeper. You don't need to issue any
> ConsumerMetadataRequest for this. Unfortu
to use the
> SimpleConsumer on failure recovery to set the offsets.
> Is that the recommended approach for this use case?
> Thanks.
> -Suren
>
>
> On Thursday, February 19, 2015 9:40 AM, Joel Koshy
> wrote:
>
>
> Are you using it from Java or Scala? i.e.
> On Thursday, February 19, 2015 10:25 AM, Joel Koshy
> wrote:
>
>
> Not sure what you mean by using the SimpleConsumer on failure
> recovery. Can you elaborate on this?
>
> On Thu, Feb 19, 2015 at 03:04:47PM +, Suren wrote:
> > Haven't used eithe
> To confirm then, the log-end-offset is the same as the cleaner point?
> > >
> > >
> > >
> > > On 19 February 2015 at 03:10, Jay Kreps wrote:
> > >
> > > > Yeah I was thinking either along the lines Joel was suggesting or else
> >
Yes it is supported in 0.8.2-beta. It is documented on the site - you
will need to set offsets.storage to kafka.
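i.e., in the consumer config something like the following (dual.commit.enabled
is only relevant while migrating consumers off ZooKeeper-based offsets):

    offsets.storage=kafka
    # keep committing to ZooKeeper as well until all consumers in the group
    # have been migrated
    dual.commit.enabled=true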
On Thu, Feb 19, 2015 at 03:57:31PM -0500, Matthew Butt wrote:
> I'm having a hard time figuring out if the new Kafka-based offset
> management in the high-level Scala Consumer is implem
February 2015 at 18:47, Joel Koshy wrote:
>
> > > If I consumed up to the log end offset and log compaction happens in
> > > between, I would have missed some messages.
> >
> > Compaction actually only runs on the rolled over segments (not the
> > active - i.e
> Joel/All,
> The SimpleConsumer constructor requires a specific host and port.
>
> Can this be any broker?
> If it needs to be a specific broker, for 0.8.2, should this be the offset
> coordinator? For 0.8.1, does it matter?
> -Suren
>
>
> On Thursday, Febru
Yeah that is a good point - will do the update as part of the doc
changes in KAFKA-1729
On Thu, Feb 19, 2015 at 09:26:30PM -0500, Evan Huus wrote:
> On Thu, Feb 19, 2015 at 8:43 PM, Joel Koshy wrote:
>
> > If you are using v0 of OffsetCommit/FetchRequest then you can issue
>
We do support delete topic. However, this is a client (admin) operation
that is done via zookeeper. It would be useful to do this
automatically on the broker-side. Can you file a jira for this? It is
not very straightforward to implement this since you would want to
check across all partitions whic
Can you add yourself as a watcher on KAFKA-1729? I will update that
when I fix the example on the wiki.
On Sun, Feb 22, 2015 at 10:16:44PM +0100, Jochen Mader wrote:
> I have a hard time figuring out how to do a commit using API 0.8.2 on JDK 8.
>
> I tried using the examples from 0.8.1.1.
>
> Fi
on restart
> > was a viable option, while continuing to use the High Level Consumer for
> > our normal operations. Not sure if there is a better way that is compatible
> > across 0.8.1 and 0.8.2.
> > -Suren
> >
> >
> > On Thursday, February 19, 2015
We will eventually only have Java clients.
For your specific question: javaapi.SimpleConsumer and
consumer.SimpleConsumer - some of the arguments involve Scala-specific
constructs (e.g., Scala maps) which cannot easily be created from
Java. This is why we expose a javaapi variant which takes Java
co
I think the camus mailing list would be more suitable for this
question.
Thanks,
Joel
On Wed, Mar 04, 2015 at 11:00:51AM -0500, max square wrote:
> Hi all,
>
> I have browsed through different conversations around Camus, and bring this
> as a kinda Kafka question. I know is not the most orthodo
This is not possible with the current high-level consumer without a
restart, but the new consumer (under development) does have support
for this.
On Wed, Mar 04, 2015 at 03:04:57PM -0500, Luiz Geovani Vier wrote:
> Hello,
>
> I'm using the high level consumer with auto-commit disabled and a
> sin
I think what you may be looking for is being discussed here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-6+-+New+reassignment+partition+logic+for+rebalancing
On Wed, Mar 04, 2015 at 12:34:30PM +0530, sunil kalva wrote:
> Is there any way to automate
> On Mar 3, 2015 11:57 AM, "sunil kalv
+1 - if you have a way to reproduce that would be ideal. We don't know
the root cause of this yet. Our guess is a corner case around
shutdowns, but not sure.
On Fri, Mar 13, 2015 at 03:13:45PM -0700, Jun Rao wrote:
> Is there a way that you can reproduce this easily?
>
> Thanks,
>
> Jun
>
> On