Re: Perf producer/consumers for compacted topics

2016-05-18 Thread Manikumar Reddy
Hi,

There is a kafka.tools.TestLogCleaning tool, which is used to stress-test
the compaction feature and validate the correctness of the compaction
process. This tool could be extended for perf testing.

I think you want to benchmark the server-side compaction process. Currently
we have a few compaction-related metrics. We may need to add a few more
topic-specific metrics for better analysis.

Log-compaction-related JMX metrics:
kafka.log:type=LogCleaner,name=cleaner-recopy-percent
kafka.log:type=LogCleaner,name=max-buffer-utilization-percent
kafka.log:type=LogCleaner,name=max-clean-time-secs
kafka.log:type=LogCleanerManager,name=max-dirty-percent
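
For illustration, a minimal sketch of reading one of these gauges over JMX.
It assumes the broker was started with remote JMX enabled (e.g. JMX_PORT=9999);
Kafka's Yammer-based gauges expose their current value via a "Value" attribute:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class LogCleanerMetrics {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Poll one of the cleaner gauges listed above
                ObjectName recopy = new ObjectName(
                    "kafka.log:type=LogCleaner,name=cleaner-recopy-percent");
                System.out.println("cleaner-recopy-percent: "
                    + mbsc.getAttribute(recopy, "Value"));
            }
        }
    }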

Manikumar

On Tue, May 17, 2016 at 8:45 PM, Tom Crayford  wrote:

> Hi there,
>
> As noted in the 0.10.0.0-RC4 release thread, we (Heroku Kafka) have been
> doing extensive benchmarking of Kafka. In our case this is to help give
> customers a good idea of the performance of our various configurations. For
> this we orchestrate the Kafka `producer-perf.sh` and `consumer-perf.sh`
> across multiple machines, which was relatively easy to do and very
> successful (recently leading to a doc change and a good lesson about 0.10).
>
> However, we're finding one thing missing from the current producer/consumer
> perf tests, which is that there's no good perf testing on compacted topics.
> Some folk will undoubtedly use compacted topics, so it would be extremely
> helpful (I think) for the community to have benchmarks that test
> performance on compacted topics. We're interested in working on this and
> contributing it upstream, but are pretty unsure what such a test should
> look like. One straw proposal is to adapt the existing producer/consumer
> perf tests to work on a compacted topic, likely with an additional flag on
> the producer that lets you choose how wide a key range to emit, if it
> should emit deletes (and how often to do so) and so on. Is there anything
> more we could or should do there?
>
> We're happy writing the code here, and want to continue contributing back,
> I'd just love a hand thinking about what perf tests for compacted topics
> should look like.
>
> Thanks
>
> Tom Crayford
> Heroku Kafka
>
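
For illustration, a minimal sketch of the kind of keyed produce loop such a
test could wrap, with the key-range width and delete (tombstone) ratio as the
hypothetical tunables Tom mentions -- not an actual Kafka tool:

    import java.util.Properties;
    import java.util.Random;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class CompactedTopicPerfSketch {
        public static void main(String[] args) {
            int keyRange = 10000;       // hypothetical --key-range flag
            double deleteRatio = 0.05;  // hypothetical --delete-ratio flag
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            Random random = new Random();
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                for (long i = 0; i < 1000000; i++) {
                    // A bounded key range forces key reuse, which is what
                    // gives the cleaner something to compact
                    String key = Long.toString(random.nextInt(keyRange));
                    // A null value is a tombstone; compaction eventually drops the key
                    String value = random.nextDouble() < deleteRatio ? null : "payload-" + i;
                    producer.send(new ProducerRecord<>("compacted-perf-topic", key, value));
                }
            }
        }
    }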


Apache Kafka JIRA Workflow: Add Closed -> Reopen transition

2016-05-20 Thread Manikumar Reddy
Jun/Ismael,

I requested Apache Infra to change the JIRA workflow to add a Closed ->
Reopen transition.
https://issues.apache.org/jira/browse/INFRA-11857

Let me know if there are any concerns.

Manikumar


Re: Apache Kafka JIRA Workflow: Add Closed -> Reopen transition

2016-05-20 Thread Manikumar Reddy
Hi,

There were some JIRAs which were closed but not resolved.
I just wanted to close those JIRAs properly, so that they won't
appear in JIRA searches. Without this new transition I was not able to close
them properly.

Manikumar
On May 21, 2016 11:23 AM, "Harsha"  wrote:

> Manikumar,
> Any reason for this? Previously the workflow was to open
> a new JIRA if a JIRA was closed.
> -Harsha
>
> On Fri, May 20, 2016, at 08:54 PM, Manikumar Reddy wrote:
> > Jun/Ismael,
> >
> > I requested Apache Infra to change the JIRA workflow to add a Closed ->
> > Reopen transition.
> > https://issues.apache.org/jira/browse/INFRA-11857
> >
> > Let me know if there are any concerns.
> >
> > Manikumar
>


Re: [VOTE] KIP-58 - Make Log Compaction Point Configurable

2016-05-25 Thread Manikumar Reddy
+1 (non binding)

On Wed, May 25, 2016 at 4:03 PM, Tom Crayford  wrote:

> +1 (non binding)
>
> Agree on log.cleaner.compaction.delay.ms being the better name.
>
> I think this setting is going to be extremely hard for users to tune, and I
> worry about adding yet more configuration - Kafka already has a huge number
> of tunables, though, so we're in well-trodden ground with "just add more
> tuning". I can't, however, come up with any better mechanism (that doesn't
> require tuning at all) without gross interactions with consumer offset
> storage, so I remain a +1 here.
>
> Thanks
>
> Tom Crayford
> Heroku Kafka
>
> On Wednesday, 25 May 2016, Ewen Cheslack-Postava  > wrote:
>
> > +1 (binding)
> >
> > Agreed that the log.cleaner.compaction.delay.ms is probably a better
> name,
> > and consistent with log.segment.delete.delay.ms. Checked configs for
> other
> > suffixes that seemed reasonable and despite only appearing in that one
> > broker config, it seems the best match.
> >
> > -Ewen
> >
> > On Tue, May 24, 2016 at 8:16 PM, Jay Kreps  wrote:
> >
> > > I'm +1 on the concept.
> > >
> > > As with others I think the core challenge is to express this in an
> > > intuitive way, and carry the same terminology across the docs, the
> > configs,
> > > and docstrings for the configs. Pictures would help.
> > >
> > > -Jay
> > >
> > > On Tue, May 24, 2016 at 6:54 PM, James Cheng 
> > wrote:
> > >
> > > > I'm not sure what the rules are for who is allowed to vote, but I'm:
> > > >
> > > > +1 (non-binding) on the proposal
> > > >
> > > > I agree that the "log.cleaner.min.compaction.lag.ms" name is a
> little
> > > > confusing.
> > > >
> > > > I like Becket's "log.cleaner.compaction.delay.ms", or something
> > similar.
> > > >
> > > > The KIP describes it as the portion of the topic "that will remain
> > > > uncompacted", so if you're open to alternate names:
> > > >
> > > > "log.cleaner.uncompacted.range.ms"
> > > > "log.cleaner.uncompacted.head.ms" (Except that I always get "log
> tail"
> > > > and "log head" mixed up...)
> > > > "log.cleaner.uncompacted.retention.ms" (Will it be confusing to have
> > the
> > > > word "retention" in non-time-based topics?)
> > > >
> > > > I just thought of something: what happens to the value of "
> > > > log.cleaner.delete.retention.ms"? Does it still have the same
> meaning
> > as
> > > > before? Does the timer start when log compaction happens (as it
> > currently
> > > > does), so in reality, tombstones will only be removed from the log
> some
> > > > time after (log.cleaner.min.compaction.lag.ms +
> > > > log.cleaner.delete.retention.ms)?
> > > >
> > > > -James
> > > >
> > > > > On May 24, 2016, at 5:46 PM, Becket Qin 
> > wrote:
> > > > >
> > > > > +1 (non-binding) on the proposal. Just a minor suggestion.
> > > > >
> > > > > I am wondering should we change the config name to "
> > > > > log.cleaner.compaction.delay.ms"? The first glance at the
> > > configuration
> > > > > name is a little confusing. I was thinking do we have a "max" lag?
> > And
> > > is
> > > > > this "lag" a bad thing?
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > > >
> > > > > On Tue, May 24, 2016 at 4:21 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > >> +1 (binding)
> > > > >>
> > > > >> Thanks for responding to all my original concerns in the
> discussion
> > > > thread.
> > > > >>
> > > > >> On Tue, May 24, 2016 at 1:37 PM, Eric Wasserman <
> > > > eric.wasser...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >>> Hi,
> > > > >>>
> > > > >>> I would like to begin voting on KIP-58 - Make Log Compaction
> Point
> > > > >>> Configurable
> > > > >>>
> > > > >>> KIP-58 is here:
> > > > >>> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-58+-+Make+Log+Compaction+Point+Configurable>
> > > > >>>
> > > > >>> The Jira ticket KAFKA-1981 Make log compaction point configurable
> > > > >>> is here: 
> > > > >>>
> > > > >>> The original pull request is here:
> > > > >>> <https://github.com/apache/kafka/pull/1168>
> > > > >>> (this includes configurations for size and message count lags that
> > > > >>> will be removed per discussion of KIP-58).
> > > > >>>
> > > > >>> The vote will run for 72 hours.
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>
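
To make James's timing question concrete -- assuming the delete-retention
clock does still start when the cleaner first compacts the tombstone -- the
two settings add up. A tiny worked example (the 1-hour lag is an assumed
value; 24 hours is the broker default for log.cleaner.delete.retention.ms):

    public class TombstoneTiming {
        public static void main(String[] args) {
            long minCompactionLagMs = 60 * 60 * 1000L;      // assumed: 1 hour
            long deleteRetentionMs = 24 * 60 * 60 * 1000L;  // default: 24 hours
            // A tombstone written at t=0 cannot be compacted before the lag
            // elapses; if delete.retention.ms is then measured from that first
            // compaction, the earliest it can vanish is the sum of the two.
            long earliestRemovalMs = minCompactionLagMs + deleteRetentionMs;
            System.out.printf("Tombstone visible for at least %.1f hours%n",
                earliestRemovalMs / (60.0 * 60 * 1000));
        }
    }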


Re: [VOTE] KIP-62: Allow consumer to send heartbeats from a background thread

2016-06-17 Thread Manikumar Reddy
+1 (non-binding)

On Fri, Jun 17, 2016 at 3:37 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> +1 (non-binding)
>
> On Fri, Jun 17, 2016 at 4:45 AM, Grant Henke  wrote:
>
> > +1
> >
> > On Thu, Jun 16, 2016 at 8:50 PM, tao xiao  wrote:
> >
> > > +1
> > >
> > > On Fri, 17 Jun 2016 at 09:03 Harsha  wrote:
> > >
> > > > +1 (binding)
> > > > Thanks,
> > > > Harsha
> > > >
> > > > On Thu, Jun 16, 2016, at 05:46 PM, Henry Cai wrote:
> > > > > +1
> > > > >
> > > > > On Thu, Jun 16, 2016 at 3:46 PM, Ismael Juma 
> > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > On Fri, Jun 17, 2016 at 12:44 AM, Guozhang Wang <
> > wangg...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > +1.
> > > > > > >
> > > > > > > On Thu, Jun 16, 2016 at 11:44 AM, Jason Gustafson <
> > > > ja...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi All,
> > > > > > > >
> > > > > > > > I'd like to open the vote for KIP-62. This proposal attempts to
> > > > > > > > address one of the recurring usability problems that users of
> > > > > > > > the new consumer have faced, with as little impact as possible.
> > > > > > > > You can read the full details here:
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
> > > > > > > > .
> > > > > > > >
> > > > > > > > After some discussion on this list, I think we were in
> > > > > > > > agreement that this change addresses a major part of the problem
> > > > > > > > and we've left the door open for further improvements, such as
> > > > > > > > adding a heartbeat() API or a separately configured rebalance
> > > > > > > > timeout. Thanks in advance to everyone who helped review the
> > > > > > > > proposal.
> > > > > > > >
> > > > > > > > -Jason
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > -- Guozhang
> > > > > > >
> > > > > >
> > > >
> > >
> >
> >
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>
>
>
> --
> Regards,
>
> Rajini
>


Re: [VOTE] KIP-4 Create Topics Schema

2016-06-20 Thread Manikumar Reddy
+1 (non-binding)

On Tue, Jun 21, 2016 at 9:16 AM, Ewen Cheslack-Postava 
wrote:

> +1 (binding) and thanks for the work on this Grant!
>
> -Ewen
>
> On Mon, Jun 20, 2016 at 12:18 PM, Gwen Shapira  wrote:
>
> > +1 (binding)
> >
> > On Mon, Jun 20, 2016 at 12:13 PM, Tom Crayford 
> > wrote:
> > > +1 (non-binding)
> > >
> > > On Mon, Jun 20, 2016 at 8:07 PM, Harsha  wrote:
> > >
> > >> +1 (binding)
> > >> -Harsha
> > >>
> > >> On Mon, Jun 20, 2016, at 11:33 AM, Ismael Juma wrote:
> > >> > +1 (binding)
> > >> >
> > >> > On Mon, Jun 20, 2016 at 8:27 PM, Dana Powers  >
> > >> > wrote:
> > >> >
> > >> > > +1 -- thanks for the update
> > >> > >
> > >> > > On Mon, Jun 20, 2016 at 10:49 AM, Grant Henke <
> ghe...@cloudera.com>
> > >> wrote:
> > >> > > > I have updated the patch and wiki based on the feedback in the
> > >> > > > discussion thread. The only change is that instead of logging and
> > >> > > > disconnecting in the case of invalid messages (duplicate topics or
> > >> > > > both arguments) we now return an InvalidRequest error back to the
> > >> > > > client for that topic.
> > >> > > >
> > >> > > > I would like to restart the vote now including that change. If
> > >> > > > you have already voted, please revote in this thread.
> > >> > > >
> > >> > > > Thank you,
> > >> > > > Grant
> > >> > > >
> > >> > > > On Sun, Jun 19, 2016 at 8:57 PM, Ewen Cheslack-Postava <
> > >> > > e...@confluent.io>
> > >> > > > wrote:
> > >> > > >
> > >> > > >> Don't necessarily want to add noise here, but I'm -1 based on
> > >> > > >> the disconnect part. See discussion in the other thread. (I'm +1
> > >> > > >> otherwise, and happy to have my vote applied assuming we clean up
> > >> > > >> that one issue.)
> > >> > > >>
> > >> > > >> -Ewen
> > >> > > >>
> > >> > > >> On Thu, Jun 16, 2016 at 6:05 PM, Harsha 
> wrote:
> > >> > > >>
> > >> > > >> > +1 (binding)
> > >> > > >> > Thanks,
> > >> > > >> > Harsha
> > >> > > >> >
> > >> > > >> > On Thu, Jun 16, 2016, at 04:15 PM, Guozhang Wang wrote:
> > >> > > >> > > +1.
> > >> > > >> > >
> > >> > > >> > > On Thu, Jun 16, 2016 at 3:47 PM, Ismael Juma <
> > ism...@juma.me.uk
> > >> >
> > >> > > >> wrote:
> > >> > > >> > >
> > >> > > >> > > > +1 (binding)
> > >> > > >> > > >
> > >> > > >> > > > On Thu, Jun 16, 2016 at 11:50 PM, Grant Henke <
> > >> > > ghe...@cloudera.com>
> > >> > > >> > wrote:
> > >> > > >> > > >
> > >> > > >> > > > > I would like to initiate the voting process for the
> > >> > > >> > > > > "KIP-4 Create Topics Schema changes". This is not a vote
> > >> > > >> > > > > for all of KIP-4, but specifically for the create topics
> > >> > > >> > > > > changes. I have included the exact changes below for
> > >> > > >> > > > > clarity:
> > >> > > >> > > > > >
> > >> > > >> > > > > > Create Topics Request (KAFKA-2945)
> > >> > > >> > > > > >
> > >> > > >> > > > > > CreateTopics Request (Version: 0) => [create_topic_requests] timeout
> > >> > > >> > > > > >   create_topic_requests => topic num_partitions replication_factor [replica_assignment] [configs]
> > >> > > >> > > > > >     topic => STRING
> > >> > > >> > > > > >     num_partitions => INT32
> > >> > > >> > > > > >     replication_factor => INT16
> > >> > > >> > > > > >     replica_assignment => partition_id [replicas]
> > >> > > >> > > > > >       partition_id => INT32
> > >> > > >> > > > > >       replicas => INT32
> > >> > > >> > > > > >     configs => config_key config_value
> > >> > > >> > > > > >       config_key => STRING
> > >> > > >> > > > > >       config_value => STRING
> > >> > > >> > > > > >   timeout => INT32
> > >> > > >> > > > > >
> > >> > > >> > > > > > CreateTopicsRequest is a batch request to initiate topic
> > >> > > >> > > > > > creation with either predefined or automatic replica
> > >> > > >> > > > > > assignment and optionally topic configuration.
> > >> > > >> > > > > >
> > >> > > >> > > > > > Request semantics:
> > >> > > >> > > > > >
> > >> > > >> > > > > >    1. Must be sent to the controller broker
> > >> > > >> > > > > >    2. If there are multiple instructions for the same
> > >> > > >> > > > > >    topic in one request an InvalidRequestException will
> > >> > > >> > > > > >    be logged on the broker and the client will be
> > >> > > >> > > > > >    disconnected.
> > >> > > >> > > > > >       - This is because the list of topics is modeled
> > >> > > >> > > > > >       server side as a map with TopicName as the key
> > >> > > >> > > > > >    3. The principal must be authorized to the "Create"
> > >> > > >> > > > > >    Operation on the "Cluster" resource to create topics.
consumer.subscribe(Pattern p , ..) method fails with Authorizer

2016-07-07 Thread Manikumar Reddy
Hi,

The consumer.subscribe(Pattern p, ..) method implementation tries to get
metadata for all topics.
This will throw a TopicAuthorizationException on internal topics and other
unauthorized topics.
We may need to move the pattern matching to the server side.
Is this a known issue? If not, I will raise a JIRA.

logs:
[2016-07-07 22:48:06,317] WARN Error while fetching metadata with
correlation id 1 : {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED}
(org.apache.kafka.clients.NetworkClient)
[2016-07-07 22:48:06,318] ERROR Unknown error when running consumer:
 (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized
to access topics: [__consumer_offsets]
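
A minimal sketch that reproduces this against a secured cluster (the topic
pattern and group id are arbitrary; the point is that the pattern matching
happens client side over the full topic metadata):

    import java.util.Collection;
    import java.util.Properties;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PatternSubscribeRepro {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "pattern-test");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Even though the pattern only matches authorized topics, the
                // subscription fetches metadata for *all* topics, so any
                // unauthorized topic (e.g. __consumer_offsets) fails the poll.
                consumer.subscribe(Pattern.compile("test-.*"),
                    new ConsumerRebalanceListener() {
                        @Override
                        public void onPartitionsRevoked(Collection<TopicPartition> p) {}
                        @Override
                        public void onPartitionsAssigned(Collection<TopicPartition> p) {}
                    });
                consumer.poll(1000); // throws TopicAuthorizationException
            }
        }
    }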


Thanks,
Manikumar


Re: [DISCUSS] KIP-4 ACL Admin Schema

2016-07-14 Thread Manikumar Reddy
Hi,

Can we allow ListAcls to take a list of resources? This may help when we
have many associated resources under the same principal.


Thanks
Manikumar

On Fri, Jul 15, 2016 at 11:15 AM, Gwen Shapira  wrote:

> Thank you, Grant. This is lovely :)
>
> Few comments / requests for clarifications below:
>
>
> >> ListAcls Request (Version: 0) => principal resource
> >>   principal => NULLABLE_STRING
> >>   resource => resource_type resource_name
> >> resource_type => INT8
> >> resource_name => STRING
>
> I am a bit confused about specifying resources.
> resource_type is something like "TOPIC" and resource_name is a name of
> a specific topic?
> Can you clarify a bit more about the use here? Can I have regexp? Can
> I leave resource_name empty and have the ACLs for everything in a
> resource type?
> Also, can you describe the interaction between principal and resource?
> I assume that if I specify both, I get all ACLs for a principal for
> the resources specified, but just making sure :)
>
>
> >> Alter ACLs Request
> >>
> >>    3. ACLs with a delete action will be processed first and the add
> >>    action second.
> >>       1. This is to prevent confusion about sort order and final state
> >>       when a batch message is sent.
> >>       2. If an add request was processed first, it could be deleted
> >>       right after.
> >>       3. Grouping ACLs by their action allows batching requests to the
> >>       authorizer via the Authorizer.addAcls and Authorizer.removeAcls
> >>       calls.
>
> I like this decision
>
> >>  - I suggest this be addressed in KIP-50 as well, though it has
> >>  some compatibility concerns.
>
> Isn't KIP-50 itself one gigantic compatibility concern? I don't see
> how your suggestions make it any worse...
>


Re: [kafka-clients] [VOTE] 0.10.0.1 RC1

2016-08-03 Thread Manikumar Reddy
Hi,

There are two versions of the slf4j-log4j jar in the build (1.6.1 and
1.7.21). slf4j-log4j12-1.6.1.jar is coming from the streams:examples module.

Thanks,
Manikumar

On Tue, Aug 2, 2016 at 8:31 PM, Ismael Juma  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for the release of Apache Kafka 0.10.0.1.
> This is a bug fix release and it includes fixes and improvements from 52
> JIRAs (including a few critical bugs). See the release notes for more
> details:
>
> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc1/RELEASE_NOTES.html
>
> When compared to RC0, RC1 contains fixes for two bugs (KAFKA-4008
> and KAFKA-3950) and a couple of test stabilisation fixes.
>
> *** Please download, test and vote by Friday, 5 August, 8am PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging
>
> * Javadoc:
> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc1/javadoc/
>
> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc1 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=108580e4594d694827c953264969fe1ce2a7
>
> * Documentation:
> http://kafka.apache.org/0100/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> * Successful Jenkins builds for the 0.10.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-0.10.0-jdk7/179/
> System tests: https://jenkins.confluent.io/job/system-test-kafka-0.10.0/136/
>
> Thanks,
> Ismael
>
>


Re: [kafka-clients] [VOTE] 0.10.0.1 RC2

2016-08-05 Thread Manikumar Reddy
+1 (non-binding).
Verified quick start and artifacts.

On Sat, Aug 6, 2016 at 5:45 AM, Joel Koshy  wrote:

> +1 (binding)
>
> Thanks Ismael!
>
> On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma  wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>> This is the third candidate for the release of Apache Kafka 0.10.0.1.
>> This is a bug fix release and it includes fixes and improvements from 53
>> JIRAs (including a few critical bugs). See the release notes for more
>> details:
>>
>> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/RELEASE_NOTES.html
>>
>> When compared to RC1, RC2 contains a fix for a regression where an older
>> version of slf4j-log4j12 was also being included in the libs folder of the
>> binary tarball (KAFKA-4008). Thanks to Manikumar Reddy for reporting the
>> issue.
>>
>> *** Please download, test and vote by Monday, 8 August, 8am PT ***
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS
>>
>> * Release artifacts to be voted upon (source and binary):
>> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/
>>
>> * Maven artifacts to be voted upon:
>> https://repository.apache.org/content/groups/staging
>>
>> * Javadoc:
>> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/javadoc/
>>
>> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc2 tag:
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> f8f56751744ba8e55f90f5c4f3aed8c3459447b2
>>
>> * Documentation:
>> http://kafka.apache.org/0100/documentation.html
>>
>> * Protocol:
>> http://kafka.apache.org/0100/protocol.html
>>
>> * Successful Jenkins builds for the 0.10.0 branch:
>> Unit/integration tests: https://builds.apache.org/job/kafka-0.10.0-jdk7/182/
>> System tests: https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/
>>
>> Thanks,
>> Ismael
>>
>>
>
>


Re: [VOTE] KIP-70: Revise Partition Assignment Semantics on New Consumer's Subscription Change

2016-08-10 Thread Manikumar Reddy
+1 (non-binding)

On Wed, Aug 10, 2016 at 8:30 AM, Ewen Cheslack-Postava 
wrote:

> +1 (binding), thanks for working on this Vahid.
>
> @Dana - See https://cwiki.apache.org/confluence/display/KAFKA/Bylaws re:
> binding/non-binding, although I now notice that we specify criteria (lazy
> majority) on the KIP overview
> https://cwiki.apache.org/confluence/display/KAFKA/
> Kafka+Improvement+Proposals#KafkaImprovementProposals-Process
> but don't seem to specify whose votes are binding -- we've used active
> committers as binding votes for KIPs.
>
> -Ewen
>
> On Tue, Aug 9, 2016 at 11:25 AM, Guozhang Wang  wrote:
>
> > +1.
> >
> > On Tue, Aug 9, 2016 at 10:06 AM, Jun Rao  wrote:
> >
> > > Vahid,
> > >
> > > Thanks for the clear explanation in the KIP. +1
> > >
> > > Jun
> > >
> > > On Mon, Aug 8, 2016 at 11:53 AM, Vahid S Hashemian <
> > > vahidhashem...@us.ibm.com> wrote:
> > >
> > > > I would like to initiate the voting process for KIP-70 (
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > 70%3A+Revise+Partition+Assignment+Semantics+on+New+
> > > > Consumer%27s+Subscription+Change
> > > > ).
> > > >
> > > > The only issue that was discussed in the discussion thread is
> > > > compatibility, but because it applies to an edge case, it is not
> > expected
> > > > to impact existing users.
> > > > The proposal was shared with Spark and Storm users and no issue was
> > > raised
> > > > by those communities.
> > > >
> > > > Thanks.
> > > >
> > > > Regards,
> > > > --Vahid
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
>
> --
> Thanks,
> Ewen
>


[Test Mail] Kafka JIRAs Waiting for Review

2015-05-25 Thread manikumar . reddy


Re: [VOTE] 0.8.2.0 Candidate 1

2015-01-15 Thread Manikumar Reddy
Yes, we can add a check. This option works only with a 64-bit JVM.
On Jan 15, 2015 6:53 PM, "Jaikiran Pai"  wrote:

> I just downloaded the Kafka binary and am trying this on my 32 bit JVM
> (Java 7)? Trying to start Zookeeper or Kafka server keeps failing with
> "Unrecognized VM option 'UseCompressedOops'":
>
> ./zookeeper-server-start.sh ../config/zookeeper.properties
> Unrecognized VM option 'UseCompressedOops'
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.
>
> Same with the Kafka server startup scripts. My Java version is:
>
> java version "1.7.0_71"
> Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
> Java HotSpot(TM) Server VM (build 24.71-b01, mixed mode)
>
> Should there be a check in the script, before adding this option?
>
> -Jaikiran
>
> On Wednesday 14 January 2015 10:08 PM, Jun Rao wrote:
>
>> + users mailing list. It would be great if people can test this out and
>> report any blocker issues.
>>
>> Thanks,
>>
>> Jun
>>
>> On Tue, Jan 13, 2015 at 7:16 PM, Jun Rao  wrote:
>>
>>> This is the first candidate for release of Apache Kafka 0.8.2.0. There
>>> have been some changes since the 0.8.2 beta release, especially in the new
>>> java producer api and jmx mbean names. It would be great if people can
>>> test this out thoroughly. We are giving people 10 days for testing and
>>> voting.
>>>
>>> Release Notes for the 0.8.2.0 release
>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html
>>>
>>> *** Please download, test and vote by Friday, Jan 23rd, 7pm PT
>>>
>>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/KEYS in
>>> addition to the md5, sha1 and sha2 (SHA256) checksum.
>>>
>>> * Release artifacts to be voted upon (source and binary):
>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/
>>>
>>> * Maven artifacts to be voted upon prior to release:
>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/maven_staging/
>>>
>>> * scala-doc
>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package
>>>
>>> * java-doc
>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/
>>>
>>> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
>>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=b0c7d579f8aeb5750573008040a42b7377a651d5
>>>
>>> /***
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>>
>


ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing

2015-01-17 Thread Manikumar Reddy
ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing on
both 0.8.2 and trunk.

Error on 0.8.2:
kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic FAILED
java.util.concurrent.ExecutionException:
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
after 3000 ms.
at
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:437)
at
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)
at
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)
at
kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:309)

Caused by:
org.apache.kafka.common.errors.TimeoutException: Failed to update
metadata after 3000 ms.


Error on Trunk:
kafka.api.test.ProducerFailureHandlingTest > testCannotSendToInternalTopic
FAILED
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:69)
at org.junit.Assert.assertTrue(Assert.java:32)
at org.junit.Assert.assertTrue(Assert.java:41)
at
kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:312)


Re: ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing

2015-01-17 Thread Manikumar Reddy
I am consistently getting these errors. Maybe they are transient errors.

On Sun, Jan 18, 2015 at 12:05 AM, Harsha  wrote:

> I don't see any failures in tests with the latest trunk or 0.8.2. I ran
> it few times in a loop.
> -Harsha
>
> On Sat, Jan 17, 2015, at 08:38 AM, Manikumar Reddy wrote:
> > ProducerFailureHandlingTest.testCannotSendToInternalTopic is failing on
> > both 0.8.2 and trunk.
> >
> > Error on 0.8.2:
> > kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic
> > FAILED
> > java.util.concurrent.ExecutionException:
> > org.apache.kafka.common.errors.TimeoutException: Failed to update
> > metadata
> > after 3000 ms.
> > at
> >
> org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.(KafkaProducer.java:437)
> > at
> >
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:352)
> > at
> >
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:248)
> > at
> >
> kafka.api.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:309)
> >
> > Caused by:
> > org.apache.kafka.common.errors.TimeoutException: Failed to update
> > metadata after 3000 ms.
> >
> >
> > Error on Trunk:
> > kafka.api.test.ProducerFailureHandlingTest >
> > testCannotSendToInternalTopic
> > FAILED
> > java.lang.AssertionError: null
> > at org.junit.Assert.fail(Assert.java:69)
> > at org.junit.Assert.assertTrue(Assert.java:32)
> > at org.junit.Assert.assertTrue(Assert.java:41)
> > at
> >
> kafka.api.test.ProducerFailureHandlingTest.testCannotSendToInternalTopic(ProducerFailureHandlingTest.scala:312)
>


Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
All links are pointing to
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
They should be https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
right?


On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao  wrote:

> This is the second candidate for release of Apache Kafka 0.8.2.0. There
> have been some changes since the 0.8.2 beta release, especially in the new
> java producer api and jmx mbean names. It would be great if people can test
> this out thoroughly.
>
> Release Notes for the 0.8.2.0 release
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, Jan 26th, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition
> to the md5, sha1 and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/
>
> * scala-doc
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
>
> * java-doc
> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
>
> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
>  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)
>
> /***
>
> Thanks,
>
> Jun
>
>


Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
Also, the Maven artifacts link is not correct.

On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao  wrote:

> Yes, will send out a new email with the correct links.
>
> Thanks,
>
> Jun
>
> On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy 
> wrote:
>
>> All links are pointing to
>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
>> They should be
>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right?
>>
>>
>> On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao  wrote:
>>
>>> This is the second candidate for release of Apache Kafka 0.8.2.0. There
>>> has been some changes since the 0.8.2 beta release, especially in the
>>> new java producer api and jmx mbean names. It would be great if people can
>>> test this out thoroughly.
>>>
>>> Release Notes for the 0.8.2.0 release
>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html>*
>>>
>>> *** Please download, test and vote by Monday, Jan 26h, 7pm PT
>>>
>>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>> *http://kafka.apache.org/KEYS <http://kafka.apache.org/KEYS>* in
>>> addition to the md5, sha1 and sha2 (SHA256) checksum.
>>>
>>> * Release artifacts to be voted upon (source and binary):
>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/>*
>>>
>>> * Maven artifacts to be voted upon prior to release:
>>> *https://repository.apache.org/content/groups/staging/
>>> <https://repository.apache.org/content/groups/staging/>*
>>>
>>> * scala-doc
>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package>*
>>>
>>> * java-doc
>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/>*
>>>
>>> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
>>> *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
>>> <https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c>*
>>>  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)
>>>
>>> /***
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>>
>>
>>
>


Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
OK, got it. The link is different from Release Candidate 1.

On Wed, Jan 21, 2015 at 10:01 PM, Jun Rao  wrote:

> Is it? You just need to navigate into org, then apache, then kafka, etc.
>
> Thanks,
>
> Jun
>
> On Wed, Jan 21, 2015 at 8:28 AM, Manikumar Reddy 
> wrote:
>
>> Also Maven artifacts link is not correct
>>
>> On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao  wrote:
>>
>>> Yes, will send out a new email with the correct links.
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>> On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy 
>>> wrote:
>>>
>>>> All links are pointing to
>>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
>>>> They should be
>>>> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right?
>>>>
>>>>
>>>> On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao  wrote:
>>>>
>>>>> This is the second candidate for release of Apache Kafka 0.8.2.0.
>>>>> There has been some changes since the 0.8.2 beta release, especially
>>>>> in the new java producer api and jmx mbean names. It would be great if
>>>>> people can test this out thoroughly.
>>>>>
>>>>> Release Notes for the 0.8.2.0 release
>>>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
>>>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/RELEASE_NOTES.html>*
>>>>>
>>>>> *** Please download, test and vote by Monday, Jan 26h, 7pm PT
>>>>>
>>>>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>>>> *http://kafka.apache.org/KEYS <http://kafka.apache.org/KEYS>* in
>>>>> addition to the md5, sha1 and sha2 (SHA256) checksum.
>>>>>
>>>>> * Release artifacts to be voted upon (source and binary):
>>>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
>>>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/>*
>>>>>
>>>>> * Maven artifacts to be voted upon prior to release:
>>>>> *https://repository.apache.org/content/groups/staging/
>>>>> <https://repository.apache.org/content/groups/staging/>*
>>>>>
>>>>> * scala-doc
>>>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/scaladoc/
>>>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/scaladoc/#package>*
>>>>>
>>>>> * java-doc
>>>>> *https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/javadoc/
>>>>> <https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/javadoc/>*
>>>>>
>>>>> * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.0 tag
>>>>> *https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c
>>>>> <https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=058d58adef2ab2787e49d8efeefd61bb3d32f99c>*
>>>>>  (commit 0b312a6b9f0833d38eec434bfff4c647c1814564)
>>>>>
>>>>> /***
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Jun
>>>>>
>>>>>
>>>>
>>>>

Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-26 Thread Manikumar Reddy
+1 (Non-binding)
Verified source package, unit tests, release build, topic deletion,
compaction and random testing

On Mon, Jan 26, 2015 at 6:14 AM, Neha Narkhede  wrote:

> +1 (binding)
> Verified keys, quick start, unit tests.
>
> On Sat, Jan 24, 2015 at 4:26 PM, Joe Stein  wrote:
>
> > That makes sense, thanks!
> >
> > On Sat, Jan 24, 2015 at 7:00 PM, Jay Kreps  wrote:
> >
> > > But I think the flaw in trying to guess what kind of serializer they
> > > will use is when we get it wrong. Basically let's say we guess "String".
> > > Say 30% of the time we will be right and we will save the two
> > > configuration lines. 70% of the time we will be wrong and the user gets
> > > a super cryptic ClassCastException: "xyz cannot be cast to [B" (because
> > > [B is how Java chooses to display the byte array class, just to up the
> > > pain), then they figure out how to subscribe to our mailing list and
> > > email us the cryptic exception, then we explain how we helpfully set
> > > these properties for them to save them time. :-)
> > >
> > >
> > > https://www.google.com/?gws_rd=ssl#q=kafka+classcastexception+%22%5BB%22
> > >
> > > I think basically we did this experiment with the old clients and the
> > > conclusion is that serialization is something you basically have to
> > > think about to use Kafka, and trying to guess just makes things worse.
> > >
> > > -Jay
> > >
> > > On Sat, Jan 24, 2015 at 2:51 PM, Joe Stein 
> wrote:
> > >
> > >> Maybe. I think the StringSerializer could look more like a typical
> > >> type of message. Instead of encoding being a property it would more
> > >> typically just be written in the bytes.
> > >>
> > >> On Sat, Jan 24, 2015 at 12:12 AM, Jay Kreps 
> > wrote:
> > >>
> > >> > I don't think so--see if you buy my explanation. We previously
> > >> > defaulted to the byte array serializer and it was a source of
> > >> > unending frustration and confusion. Since it wasn't a required config
> > >> > people just went along plugging in whatever objects they had,
> > >> > thinking that changing the parametric types would somehow help. Then
> > >> > they would get a class cast exception and assume our stuff was
> > >> > somehow busted, not realizing we had helpfully configured a type
> > >> > different from what they were passing in under the covers. So I think
> > >> > it is actually good for people to think: how am I serializing my
> > >> > data? Getting that exception will make them ask that question, right?
> > >> >
> > >> > -Jay
> > >> >
> > >> > On Fri, Jan 23, 2015 at 9:06 PM, Joe Stein 
> > >> wrote:
> > >> >
> > >> >> Should value.serializer in the new java producer be defaulted to
> > >> >> Array[Byte] ?
> > >> >>
> > >> >> I was working on testing some upgrade paths and got this
> > >> >>
> > >> >> ! return exception in callback when buffer cannot accept message
> > >> >>
> > >> >>   ConfigException: Missing required configuration "value.serializer"
> > >> >>   which has no default value. (ConfigDef.java:124)
> > >> >>   org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
> > >> >>   org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:48)
> > >> >>   org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:235)
> > >> >>   org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:129)
> > >> >>   ly.stealth.testing.BaseSpec$class.createNewKafkaProducer(BaseSpec.scala:42)
> > >> >>   ly.stealth.testing.KafkaSpec.createNewKafkaProducer(KafkaSpec.scala:36)
> > >> >>   ly.stealth.testing.KafkaSpec$$anonfun$3$$anonfun$apply$37.apply(KafkaSpec.scala:175)
> > >> >>   ly.stealth.testing.KafkaSpec$$anonfun$3$$anonfun$apply$37.apply(KafkaSpec.scala:170)
> > >> >>
> > >> >>
> > >> >>
> > >> >> On Fri, Jan 23, 2015 at 5:55 PM, Jun Rao  wrote:
> > >> >>
> > >> >> > This is a reminder that the deadline for the vote is this Monday,
> > Jan
> > >> >> 26,
> > >> >> > 7pm PT.
> > >> >> >
> > >> >> > Thanks,
> > >> >> >
> > >> >> > Jun
> > >> >> >
> > >> >> > On Wed, Jan 21, 2015 at 8:28 AM, Jun Rao 
> wrote:
> > >> >> >
> > >> >> >> This is the second candidate for release of Apache Kafka 0.8.2.0.
> > >> >> >> There have been some changes since the 0.8.2 beta release,
> > >> >> >> especially in the new java producer api and jmx mbean names. It
> > >> >> >> would be great if people can test this out thoroughly.
> > >> >> >>
> > >> >> >> Release Notes for the 0.8.2.0 release
> > >> >> >> https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/RELEASE_NOTES.html
> > >> >> >>
> > >> >> >> *** Please download, test
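
On the serializer point above: the fix on the user side is simply to set
both serializers explicitly. A minimal sketch against the new Java producer
(broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ExplicitSerializers {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            // Both serializers are required configs with no default, so a
            // missing one fails fast at construction time rather than with a
            // ClassCastException deep inside send().
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("test-topic", "key", "value"));
            }
        }
    }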

Re: Cannot stop Kafka server if zookeeper is shutdown first

2015-02-03 Thread Manikumar Reddy
I think we should consider moving to Apache Curator (KAFKA-873).
Curator is now more mature and an Apache top-level project.


On Wed, Feb 4, 2015 at 11:29 AM, Harsha  wrote:

> Any reason not to go with apache curator http://curator.apache.org/ .
> -Harsha
> On Tue, Feb 3, 2015, at 09:55 PM, Guozhang Wang wrote:
> > I am also +1 on Neha's suggestion that "At some point, if we find
> > ourselves
> > fiddling too much with ZkClient, it wouldn't hurt to write our own little
> > zookeeper client wrapper." since we have accumulated a bunch of issues
> > with
> > zkClient which takes long time be resolved if ever, so we ended up have
> > some hacky way handling zkClient errors.
> >
> > Guozhang
> >
> > On Tue, Feb 3, 2015 at 7:47 PM, Jaikiran Pai 
> > wrote:
> >
> > > Yes, that's the plan :)
> > >
> > > -Jaikiran
> > >
> > > On Wednesday 04 February 2015 12:33 AM, Gwen Shapira wrote:
> > >
> > >> So I think the current plan is:
> > >> 1. Add timeout in zkclient
> > >> 2. Ask zkclient to release new version (we need it for few other
> things
> > >> too)
> > >> 3. Rebase on new zkclient
> > >> 4. Fix this jira and the few others than were waiting for the new
> zkclient
> > >>
> > >> Does that make sense?
> > >>
> > >> Gwen
> > >>
> > >> On Mon, Feb 2, 2015 at 8:33 PM, Jaikiran Pai <
> jai.forums2...@gmail.com>
> > >> wrote:
> > >>
> > >>> I just heard back from Stefan, who manages the ZkClient repo, and he
> > >>> seems to be open to having these changes be part of the ZkClient
> > >>> project. I'll be creating a pull request for that project to have it
> > >>> reviewed and merged. Although I haven't heard of exact release plans,
> > >>> Stefan's reply did indicate that the project could be released after
> > >>> this change is merged.
> > >>>
> > >>> -Jaikiran
> > >>>
> > >>> On Tuesday 03 February 2015 09:03 AM, Jaikiran Pai wrote:
> > >>>
> >  Thanks for pointing to that repo!
> > 
> >  I just had a look at it and it appears that the project isn't very
> >  active (going by the lack of recent activity). The latest contribution
> >  is from Gwen and that was around 3 months back. I haven't found release
> >  plans for that project or a place to ask about it (filing an issue
> >  doesn't seem the right way to ask this question). So I'll get in touch
> >  with the repo owner and see what his plans for the project are.
> > 
> >  -Jaikiran
> > 
> >  On Monday 02 February 2015 11:33 PM, Gwen Shapira wrote:
> > 
> > > I did!
> > >
> > > Thanks for clarifying :)
> > >
> > > The client that is part of Zookeeper itself actually does support
> > > timeouts.
> > >
> > > On Mon, Feb 2, 2015 at 9:54 AM, Guozhang Wang 
> > > wrote:
> > >
> > >> Hi Jaikiran,
> > >>
> > >> I think Gwen was talking about contributing to ZkClient project:
> > >>
> > >> https://github.com/sgroschupf/zkclient
> > >>
> > >> Guozhang
> > >>
> > >>
> > >> On Sun, Feb 1, 2015 at 5:30 AM, Jaikiran Pai <
> > >> jai.forums2...@gmail.com>
> > >> wrote:
> > >>
> > >>  Hi Gwen,
> > >>>
> > >>> Yes, the KafkaZkClient is a wrapper around ZkClient and not a
> > >>> complete
> > >>> replacement.
> > >>>
> > >>> As for contributing to Zookeeper, yes that indeed in on my mind,
> but
> > >>> I
> > >>> haven't yet had a chance to really look deeper into Zookeeper or
> get
> > >>> in
> > >>> touch with their dev team to try and explain this potential
> > >>> improvement
> > >>> to
> > >>> them. I have no objection to contributing this or something
> similar
> > >>> to
> > >>> Zookeeper directly. I think I should be able to bring this up in
> the
> > >>> Zookeeper dev forum, sometime soon in the next few weekends.
> > >>>
> > >>> -Jaikiran
> > >>>
> > >>>
> > >>> On Sunday 01 February 2015 11:40 AM, Gwen Shapira wrote:
> > >>>
> > >>>  It looks like the new KafkaZkClient is a wrapper around
> ZkClient,
> >  but
> >  not a replacement. Did I get it right?
> > 
> >  I think a wrapper for ZkClient can be useful - for example
> >  KAFKA-1664
> >  can also use one.
> > 
> >  However, I'm wondering why not contribute the fix directly to
> >  ZKClient
> >  project and ask for a release that contains the fix?
> >  This will benefit other users of the project who may also need a
> >  timeout (thats pretty basic...)
> > 
> >  As an alternative, if we don't want to collaborate with
> ZKClient for
> >  some reason, forking the project into Kafka will probably give
> us
> >  more
> >  control than wrappers and without much downside.
> > 
> >  Just a thought.
> > 
> >  Gwen
> > 
> > 
> > 
> > 
> > >

Re: gradle testAll stuck

2015-02-12 Thread Manikumar Reddy
This may be due to the recent issue reported by Gwen.

https://issues.apache.org/jira/browse/KAFKA-1948

On 2/12/15, Joel Koshy  wrote:
> - Can you enable test logging (see the README) and see if you can
>   figure out which test is getting stuck or taking forever?
> - A thread-dump may help.
>
> On Thu, Feb 12, 2015 at 08:57:11AM -0500, Tong Li wrote:
>>
>>
>> Hi, folks,
>> How are you all doing?
>> Newbie here. Running gradle --daemon testAll on 0.8.2 worked and finished
>> in about 30 minutes. I pulled down trunk and ran the same thing, and it
>> always gets stuck; I left it running overnight and it was still stuck in
>> the morning. Even on 0.8.2, it still takes over 30 minutes on my modest
>> dev system. I wonder how you all run gradle and whether there are any
>> specific settings needed? I am running it on Ubuntu 14.04. Any help or
>> pointer is really appreciated.
>>
>> Thanks
>>
>> Tong Li
>> OpenStack & Kafka Community Development
>> Building 501/B205
>> liton...@us.ibm.com
>
>


Re: Scala IDE debugging Unit Test Issues

2015-02-19 Thread Manikumar Reddy
Wiki link on Eclipse setup:

https://cwiki.apache.org/confluence/display/KAFKA/Eclipse-Scala-Gradle-Git+Developement+Environment+Setup

On Thu, Feb 19, 2015 at 10:03 PM, Jonathan Rafalski <
jonathan.rafal...@gmail.com> wrote:

> Hello again,
>
>   Sorry again to send you guys such a generic error. It seems Eclipse
> did not want to give me any good error messages. I switched over to
> IntelliJ and was able to get everything up and running after resolving
> two blockers:
>
> 1) under Settings>Build, Execution, Deployment>Scala Compiler under
> the core project the "additional compiler options:" had the -target
> set to "jvm-1.8" which seems is not supported by scala 2.11.  removing
> that option and running under JDK 1.7 got me past there.
>
> 2)  under Project Structure>Project Settings>Modules the "Kafka"
> module's compile output had the same path for both output and test
> output which was preventing the compiler.
>
>   I will go back to the Eclipse/Scala IDE setup and see if these two
> errors were also the problem there, and in the end will create a write-up
> on my adventures for review.
>
>   Sorry again about the total newbieness of my prior email.  I will
> work harder on digging deeper before my next query to the list.
>
> Jonathan.
>
> On Tue, Feb 17, 2015 at 10:09 PM, Jonathan Rafalski
>  wrote:
> > Hello all,
> >
> >   Completely new to kafka and scala but thought I would get my feet
> > wet with a few of the newbie tasks.
> >
> >   I was able to get the source up and running in the Scala IDE and I
> > am able to debug the examples, however when I try to debug any of the
> > unit tests in core (for example the
> > unit.kafka.consumer.zookeeperconsumerconnectortest class) I get the
> > java.lang.ClassNotFoundException:
> >
> > Class not found unit.kafka.consumer.ZookeeperConsumerConnectorTest
> >
> >   I have searched the normal sites (SE and Mail archives) and
> > attempted a few solutions (adding physical directories of the .class
> > and .scala files to the build path adding junit libraries) but to no
> > avail.  My thoughts are this is due to the fact that the package
> > declarations in the unit tests point to the main packages, not the unit
> > test package, which is causing Eclipse to freak out (though I might be
> > way off base).
> >
> >  Also, since I am just starting and have no allegiances yet: is Eclipse
> > the preferred IDE here, or should I be going with IntelliJ?
> >
> > I apologize for the complete newb question here but any help on setup
> > to get these unit tests up and running so I can start contributing I
> > would be grateful.
> >
> > Thank you again.
> >
> > Jonathan.
>


Re: problems submitting patch set for review and a bit of solution

2015-03-08 Thread Manikumar Reddy
Hi,
  The jira.ini file should be in your home directory. Otherwise, the script
prompts you for your JIRA username and password.


https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review
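
For reference, the file is just a small credentials file along these lines
(illustrative; check the wiki page above for the exact expected keys):

    # ~/jira.ini
    user=your-apache-jira-username
    password=your-apache-jira-password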


Manikumar

On Mon, Mar 9, 2015 at 9:15 AM, Tong Li  wrote:

>
>
> While trying to submit a reviewboard request by running kafka-patch-review,
> I was getting some errors which do not make any sense. The error claims
> that a file did not exist in the repository but it did. Here is the error:
>
>
> 1. How the problems look like:
> tong@u1404:/opt/stack/kafka$ python kafka-patch-review.py -b
> origin/trunk -j KAFKA-1926
> Configuring reviewboard url to https://reviews.apache.org
> Updating your remote branches to pull the latest changes
> Verifying JIRA connection configurations
> ERROR: Error validating diff
>
> core/src/main/scala/kafka/utils/Utils.scala: The file was not found
> in the repository. (HTTP 400, API Error 207)
> ERROR: reviewboard update failed. Exiting.
>
>
> 2. Struggled a lot trying to figure out what the problem was. After tons of
> googling, I decided to upgrade python-rbtools:
>
> tong@u1404:/opt/stack/kafka$ sudo apt-get install python-rbtools
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> The following NEW packages will be installed:
>   python-rbtools
> 0 upgraded, 1 newly installed, 0 to remove and 163 not upgraded.
> Need to get 48.6 kB of archives.
> After this operation, 269 kB of additional disk space will be used.
> Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty/universe
> python-rbtools all 0.3.4-1 [48.6 kB]
> Fetched 48.6 kB in 3s (13.3 kB/s)
> Selecting previously unselected package python-rbtools.
> (Reading database ... 57284 files and directories currently
> installed.)
> Preparing to unpack .../python-rbtools_0.3.4-1_all.deb ...
> Unpacking python-rbtools (0.3.4-1) ...
> Processing triggers for man-db (2.6.7.1-1) ...
> Setting up python-rbtools (0.3.4-1) ...
>
> 3. Tried to run patch-review again. But the command stuck and seemed to me
> it waits for some credentials. and it completely ignored the jira.ini file
> 4. Tried the command again, and it successfully posted patchset and updated
> the jira issue after I blindly type in my user name, then provided
> password:
>
> tong@u1404:/opt/stack/kafka$ python kafka-patch-review.py -b
> origin/trunk -j KAFKA-1926
> tongli
> Password:
> Configuring reviewboard url to https://reviews.apache.org
> Updating your remote branches to pull the latest changes
> Verifying JIRA connection configurations
> Review request #31844 posted.
>
> https://reviews.apache.org/r/31844/
>
> Creating diff against origin/trunk and uploading patch to JIRA
> KAFKA-1926
> Created a new reviewboard https://reviews.apache.org/r/31844/
> 5. it may be good to revisit the kafka-patch-review.py and see why it
> ignores the jira.ini file.
>
> When I posted the problem few days ago, I got no response from anyone, I
> thought this maybe a problem that other may not have encountered. sending
> to the mailing list in case anyone else stumbles on the same issue.
>
> cheers.
>
> Tong Li
> OpenStack & Kafka Community Development
> Building 501/B205
> liton...@us.ibm.com


Re: [ANNOUNCE] New committer: Ismael Juma

2016-04-25 Thread Manikumar Reddy
Congrats Ismael!!

On Tue, Apr 26, 2016 at 12:03 PM, Ashish Singh  wrote:

> Congrats Ismael, well deserved!
>
> On Monday, April 25, 2016, Guozhang Wang  wrote:
>
> > Congrats Ismael!
> >
> > On Mon, Apr 25, 2016 at 10:52 PM, Neha Narkhede  > > wrote:
> >
> > > The PMC for Apache Kafka has invited Ismael Juma to join as a committer
> > and
> > > we are pleased to announce that he has accepted!
> > >
> > > Ismael has contributed 121 commits
> > >  to a wide range
> of
> > > areas, notably within the security and the network layer. His
> involvement
> > > has been phenomenal across the board from mailing lists, JIRA, code
> > reviews
> > > and helping us move to GitHub pull requests to contributing features,
> bug
> > > fixes and code and documentation improvements.
> > >
> > > Thank you for your contribution and welcome to Apache Kafka, Ismael!
> > >
> > > --
> > > Thanks,
> > > Neha
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
>
> --
> Ashish 🎤h
>


Re: [DISCUSS] Deprecating the old Scala producers for the next release

2016-02-22 Thread Manikumar Reddy
+1

It will be great If we can completely close below issue.

https://issues.apache.org/jira/browse/KAFKA-1843


On Tue, Feb 23, 2016 at 3:25 AM, Joel Koshy  wrote:

> +1
>
> Thanks for bringing it up
>
> On Mon, Feb 22, 2016 at 9:36 AM, Ismael Juma  wrote:
>
> > Hi all,
> >
> > The new Java producer was introduced in 0.8.2.0 (released in February
> > 2015). It has become the default implementation for various tools since
> > 0.9.0.0 (released in October 2015) and it is the only implementation with
> > support for the security features introduced in 0.9.0.0.
> >
> > Given this, I think there's a good argument for deprecating the old Scala
> > producers for the next release (which is likely to be 0.10.0.0). This
> would
> > give our users a stronger signal regarding our plans to focus on the new
> > Java producer going forward.
> >
> > Note that this proposal is only about deprecating the old Scala producers
> > as, in my opinion, it is too early to do the same for the old Scala
> > consumers.
> >
> > Thoughts?
> >
> > Ismael
> >
>


Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-14 Thread Manikumar Reddy
+1  for 0.8.2.2 release

On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma  wrote:

> I think this is a good idea as the change is minimal on our side and it has
> been tested in production for some time by the reporter.
>
> Best,
> Ismael
>
> On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > Since the release of Kafka 0.8.2.1, a number of people have reported an
> > issue with snappy compression (
> > https://issues.apache.org/jira/browse/KAFKA-2189). Basically, if they
> use
> > snappy in 0.8.2.1, they will experience a 2-3X space increase. The issue
> > has since been fixed in trunk (just a snappy jar upgrade). Since 0.8.3 is
> > still a few months away, it may make sense to do an 0.8.2.2 release just
> to
> > fix this issue. Any objections?
> >
> > Thanks,
> >
> > Jun
> >
>


Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Manikumar Reddy
Hi,

  We have raised a Apache Infra ticket for migrating site docs from svn  ->
git .
  Currently, the gitwcsub client only supports using the "asf-site" branch
for site docs.
  Infra team is suggesting to create  new git repo for site docs.

   Infra ticket here:
   https://issues.apache.org/jira/browse/INFRA-10143

   Possible Options:
   1. Maintain code and docs in same repo, but on different branches (trunk
and asf-site)
   2. Create a new git repo for docs and integrate with gitwcsub.

   I vote for second option.


Kumar

On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro 
wrote:

> FYI, I created a tiny trivial patch to address a typo in the web site
> (KAFKA-2418), so maybe you can review it and eventually commit before
> moving to github. ;)
>
> Cheers,
> Eddie
> On 12/08/2015 06:01, "Ismael Juma"  wrote:
>
> > Hi Gwen,
> >
> > I filed KAFKA-2425 as KAFKA-2364 is about improving the website
> > documentation. Aseem Bansal seemed interested in helping us with the move
> > so I pinged him in the issue.
> >
> > Best,
> > Ismael
> >
> > On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira  wrote:
> >
> > > Ah, there is already a JIRA in the title. Never mind :)
> > >
> > > On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira 
> wrote:
> > >
> > > > The vote opened 5 days ago. I believe we can conclude with 3 binding
> > +1,
> > > 3
> > > > non-binding +1 and no -1.
> > > >
> > > > Ismael, are you opening and JIRA and migrating? Or are we looking
> for a
> > > > volunteer?
> > > >
> > > > On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh 
> > > wrote:
> > > >
> > > >> +1 on same repo.
> > > >>
> > > >> On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro <
> > > >> edward.ribe...@gmail.com>
> > > >> wrote:
> > > >>
> > > >> > +1. As soon as possible, please. :)
> > > >> >
> > > >> > On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede 
> > > >> wrote:
> > > >> >
> > > >> > > +1 on the same repo for code and website. It helps to keep both
> in
> > > >> sync.
> > > >> > >
> > > >> > > On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke <
> ghe...@cloudera.com>
> > > >> wrote:
> > > >> > >
> > > >> > > > +1 for the same repo. The closer docs can be to code the more
> > > >> accurate
> > > >> > > they
> > > >> > > > are likely to be. The same way we encourage unit tests for a
> new
> > > >> > > > feature/patch. Updating the docs can be the same.
> > > >> > > >
> > > >> > > > If we follow Sqoop's process for example, how would small
> > > >> > > > fixes/adjustments/additions to the live documentation occur
> > > without
> > > >> a
> > > >> > new
> > > >> > > > release?
> > > >> > > >
> > > >> > > > On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang <
> > wangg...@gmail.com
> > > >
> > > >> > > wrote:
> > > >> > > >
> > > >> > > > > I am +1 on same repo too. I think keeping one git history of
> > > code
> > > >> /
> > > >> > doc
> > > >> > > > > change may actually be beneficial for this approach as well.
> > > >> > > > >
> > > >> > > > > Guozhang
> > > >> > > > >
> > > >> > > > > On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira <
> > g...@confluent.io
> > > >
> > > >> > > wrote:
> > > >> > > > >
> > > >> > > > > > I prefer same repo for one-commit / lower-barrier
> benefits.
> > > >> > > > > >
> > > >> > > > > > Sqoop has the following process, which decouples
> > documentation
> > > >> > > changes
> > > >> > > > > from
> > > >> > > > > > website changes:
> > > >> > > > > >
> > > >> > > > > > 1. Code github repo contains a doc directory, with the
> > > >> > documentation
> > > >> > > > > > written and maintained in AsciiDoc. Only one version of
> the
> > > >> > > > > documentation,
> > > >> > > > > > since it is source controlled with the code. (unlike
> current
> > > SVN
> > > >> > > where
> > > >> > > > we
> > > >> > > > > > have directories per version)
> > > >> > > > > >
> > > >> > > > > > 2. Build process compiles the AsciiDoc to HTML and PDF
> > > >> > > > > >
> > > >> > > > > > 3. When releasing, we post the documentation of the new
> > > release
> > > >> to
> > > >> > > the
> > > >> > > > > > website
> > > >> > > > > >
> > > >> > > > > > Gwen
> > > >> > > > > >
> > > >> > > > > > On Thu, Aug 6, 2015 at 12:20 AM, Ismael Juma <
> > > ism...@juma.me.uk
> > > >> >
> > > >> > > > wrote:
> > > >> > > > > >
> > > >> > > > > > > Hi,
> > > >> > > > > > >
> > > >> > > > > > > For reference, here is the previous discussion on moving
> > the
> > > >> > > website
> > > >> > > > to
> > > >> > > > > > > Git:
> > > >> > > > > > >
> > > >> > > > > > > http://search-hadoop.com/m/uyzND11JliU1E8QU92
> > > >> > > > > > >
> > > >> > > > > > > People were positive to the idea as Jay said. I would
> like
> > > to
> > > >> > see a
> > > >> > > > bit
> > > >> > > > > > of
> > > >> > > > > > > a discussion around whether the website should be part
> of
> > > the
> > > >> > same
> > > >> > > > repo
> > > >> > > > > > as
> > > >> > > > > > > the code or not. I'll get the ball rolling.
> > > >> > > > > > >
> > > >> > > > > > > Pros for same repo:
> > > >> > > > > > > * On

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Manikumar Reddy
Oops, I did not check Ismael's mail.

On Wed, Aug 19, 2015 at 9:25 PM, Manikumar Reddy 
wrote:

> Hi,
>
>   We have raised a Apache Infra ticket for migrating site docs from svn
>  -> git .
>   Currently, the gitwcsub client only supports using the "asf-site"
> branch for site docs.
>   Infra team is suggesting to create  new git repo for site docs.
>
>Infra ticket here:
>https://issues.apache.org/jira/browse/INFRA-10143
>
>Possible Options:
>1. Maintain code and docs in same repo, but on different branches
> (trunk and asf-site)
>2. Create a new git repo for docs and integrate with gitwcsub.
>
>I vote for second option.
>
>
> Kumar
>
> On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro 
> wrote:
>
>> FYI, I created a tiny trivial patch to address a typo in the web site
>> (KAFKA-2418), so maybe you can review it and eventually commit before
>> moving to github. ;)
>>
>> Cheers,
>> Eddie
>> On 12/08/2015 06:01, "Ismael Juma"  wrote:
>>
>> > Hi Gwen,
>> >
>> > I filed KAFKA-2425 as KAFKA-2364 is about improving the website
>> > documentation. Aseem Bansal seemed interested in helping us with the
>> move
>> > so I pinged him in the issue.
>> >
>> > Best,
>> > Ismael
>> >
>> > On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira 
>> wrote:
>> >
>> > > Ah, there is already a JIRA in the title. Never mind :)
>> > >
>> > > On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira 
>> wrote:
>> > >
>> > > > The vote opened 5 days ago. I believe we can conclude with 3 binding
>> > +1,
>> > > 3
>> > > > non-binding +1 and no -1.
>> > > >
>> > > > Ismael, are you opening and JIRA and migrating? Or are we looking
>> for a
>> > > > volunteer?
>> > > >
>> > > > On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh 
>> > > wrote:
>> > > >
>> > > >> +1 on same repo.
>> > > >>
>> > > >> On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro <
>> > > >> edward.ribe...@gmail.com>
>> > > >> wrote:
>> > > >>
>> > > >> > +1. As soon as possible, please. :)
>> > > >> >
>> > > >> > On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede > >
>> > > >> wrote:
>> > > >> >
>> > > >> > > +1 on the same repo for code and website. It helps to keep
>> both in
>> > > >> sync.
>> > > >> > >
>> > > >> > > On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke <
>> ghe...@cloudera.com>
>> > > >> wrote:
>> > > >> > >
>> > > >> > > > +1 for the same repo. The closer docs can be to code the more
>> > > >> accurate
>> > > >> > > they
>> > > >> > > > are likely to be. The same way we encourage unit tests for a
>> new
>> > > >> > > > feature/patch. Updating the docs can be the same.
>> > > >> > > >
>> > > >> > > > If we follow Sqoop's process for example, how would small
>> > > >> > > > fixes/adjustments/additions to the live documentation occur
>> > > without
>> > > >> a
>> > > >> > new
>> > > >> > > > release?
>> > > >> > > >
>> > > >> > > > On Thu, Aug 6, 2015 at 3:33 PM, Guozhang Wang <
>> > wangg...@gmail.com
>> > > >
>> > > >> > > wrote:
>> > > >> > > >
>> > > >> > > > > I am +1 on same repo too. I think keeping one git history
>> of
>> > > code
>> > > >> /
>> > > >> > doc
>> > > >> > > > > change may actually be beneficial for this approach as
>> well.
>> > > >> > > > >
>> > > >> > > > > Guozhang
>> > > >> > > > >
>> > > >> > > > > On Thu, Aug 6, 2015 at 9:16 AM, Gwen Shapira <
>> > g...@confluent.io
>> > > >
>> > > >> > > wrote:
>> > > >> > > > >
>> > > >> > > > > >

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-19 Thread Manikumar Reddy
Yes, we cannot. We need two separate GitHub PRs for code and doc changes.

On Wed, Aug 19, 2015 at 9:35 PM, Guozhang Wang  wrote:

> Even under the second option, it sounds like we still cannot include the
> code and doc changes in one commit?
>
> Guozhang
>
> On Wed, Aug 19, 2015 at 8:56 AM, Manikumar Reddy 
> wrote:
>
> > Oops, I did not check Ismael's mail.
> >
> > On Wed, Aug 19, 2015 at 9:25 PM, Manikumar Reddy 
> > wrote:
> >
> > > Hi,
> > >
> > >   We have raised a Apache Infra ticket for migrating site docs from svn
> > >  -> git .
> > >   Currently, the gitwcsub client only supports using the "asf-site"
> > > branch for site docs.
> > >   Infra team is suggesting to create  new git repo for site docs.
> > >
> > >Infra ticket here:
> > >https://issues.apache.org/jira/browse/INFRA-10143
> > >
> > >Possible Options:
> > >1. Maintain code and docs in same repo, but on different branches
> > > (trunk and asf-site)
> > >2. Create a new git repo for docs and integrate with gitwcsub.
> > >
> > >I vote for second option.
> > >
> > >
> > > Kumar
> > >
> > > On Wed, Aug 12, 2015 at 3:51 PM, Edward Ribeiro <
> > edward.ribe...@gmail.com>
> > > wrote:
> > >
> > >> FYI, I created a tiny trivial patch to address a typo in the web site
> > >> (KAFKA-2418), so maybe you can review it and eventually commit before
> > >> moving to github. ;)
> > >>
> > >> Cheers,
> > >> Eddie
> > >> On 12/08/2015 06:01, "Ismael Juma"  wrote:
> > >>
> > >> > Hi Gwen,
> > >> >
> > >> > I filed KAFKA-2425 as KAFKA-2364 is about improving the website
> > >> > documentation. Aseem Bansal seemed interested in helping us with the
> > >> move
> > >> > so I pinged him in the issue.
> > >> >
> > >> > Best,
> > >> > Ismael
> > >> >
> > >> > On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira 
> > >> wrote:
> > >> >
> > >> > > Ah, there is already a JIRA in the title. Never mind :)
> > >> > >
> > >> > > On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira 
> > >> wrote:
> > >> > >
> > >> > > > The vote opened 5 days ago. I believe we can conclude with 3
> > binding
> > >> > +1,
> > >> > > 3
> > >> > > > non-binding +1 and no -1.
> > >> > > >
> > >> > > > Ismael, are you opening and JIRA and migrating? Or are we
> looking
> > >> for a
> > >> > > > volunteer?
> > >> > > >
> > >> > > > On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh <
> > asi...@cloudera.com>
> > >> > > wrote:
> > >> > > >
> > >> > > >> +1 on same repo.
> > >> > > >>
> > >> > > >> On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro <
> > >> > > >> edward.ribe...@gmail.com>
> > >> > > >> wrote:
> > >> > > >>
> > >> > > >> > +1. As soon as possible, please. :)
> > >> > > >> >
> > >> > > >> > On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede <
> > n...@confluent.io
> > >> >
> > >> > > >> wrote:
> > >> > > >> >
> > >> > > >> > > +1 on the same repo for code and website. It helps to keep
> > >> both in
> > >> > > >> sync.
> > >> > > >> > >
> > >> > > >> > > On Thu, Aug 6, 2015 at 1:52 PM, Grant Henke <
> > >> ghe...@cloudera.com>
> > >> > > >> wrote:
> > >> > > >> > >
> > >> > > >> > > > +1 for the same repo. The closer docs can be to code the
> > more
> > >> > > >> accurate
> > >> > > >> > > they
> > >> > > >> > > > are likely to be. The same way we encourage unit tests
> for
> > a
> > >> new
> > >> > > >> > > > feature/patch. Updating the docs can be the same.
> > &

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-20 Thread Manikumar Reddy
  Also, can we migrate the svn repo to a git repo? This will help us make
occasional doc changes/bug fixes through GitHub PRs.

On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang  wrote:

> Gwen: I remembered it wrong. We would not need another round of voting.
>
> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira  wrote:
>
> > Looking back at this thread, the +1 mention "same repo", so I'm not sure
> a
> > new vote is required.
> >
> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang 
> wrote:
> >
> > > So I think we have two different approaches here. The original proposal
> > > from Aseem is to move website from SVN to a separate Git repo, and
> hence
> > > have separate commits on code / doc changes. For that we have
> accumulated
> > > enough binding +1s to move on; Gwen's proposal is to move website into
> > the
> > > same repo under a different folder. If people feel they prefer this
> over
> > > the previous approach I would like to call for another round of voting.
> > >
> > > Guozhang
> > >
> > > On Wed, Aug 19, 2015 at 10:24 AM, Ashish 
> > wrote:
> > >
> > > > +1 to what Gwen has suggested. This is what we follow in Flume.
> > > >
> > > > All the latest doc changes are in git, once ready you move changes to
> > > > svn to update website.
> > > > The only catch is, when you need to update specific changes to
> website
> > > > outside release cycle, need to be a bit careful :)
> > > >
> > > > On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira 
> > wrote:
> > > > > Yeah, so the way this works in few other projects I worked on is:
> > > > >
> > > > > * The code repo has a /docs directory with the latest revision of
> the
> > > > docs
> > > > > (not multiple versions, just one that matches the latest state of
> > code)
> > > > > * When you submit a patch that requires doc modification, you
> modify
> > > all
> > > > > relevant files in same patch and they get reviewed and committed
> > > together
> > > > > (ideally)
> > > > > * When we release, we copy the docs matching the release and commit
> > to
> > > > SVN
> > > > > website. We also do this occasionally to fix bugs in earlier docs.
> > > > > * Release artifacts include a copy of the docs
> > > > >
> > > > > Nice to have:
> > > > > * Docs are in Asciidoc and build generates the HTML. Asciidoc is
> > easier
> > > > to
> > > > > edit and review.
> > > > >
> > > > > I suggest a similar process for Kafka.
> > > > >
> > > > > On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma 
> > > wrote:
> > > > >
> > > > >> I should clarify: it's not possible unless we add an additional
> step
> > > > that
> > > > >> moves the docs from the code repo to the website repo.
> > > > >>
> > > > >> Ismael
> > > > >>
> > > > >> On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma 
> > > wrote:
> > > > >>
> > > > >> > Hi all,
> > > > >> >
> > > > >> > It looks like it's not feasible to update the code and website
> in
> > > the
> > > > >> same
> > > > >> > commit given existing limitations of the Apache infra:
> > > > >> >
> > > > >> >
> > > > >> >
> > > > >>
> > > >
> > >
> >
> https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
> > > > >> >
> > > > >> > Best,
> > > > >> > Ismael
> > > > >> >
> > > > >> > On Wed, Aug 12, 2015 at 10:00 AM, Ismael Juma <
> ism...@juma.me.uk>
> > > > wrote:
> > > > >> >
> > > > >> >> Hi Gwen,
> > > > >> >>
> > > > >> >> I filed KAFKA-2425 as KAFKA-2364 is about improving the website
> > > > >> >> documentation. Aseem Bansal seemed interested in helping us
> with
> > > the
> > > > >> move
> > > > >> >> so I pinged him in the issue.
> > > > >> >>
> > > > >> >> Best,
> > > > >> >> Ismael
> > > > >> >>
> > > > >> >> On Wed, Aug 12, 2015 at 1:51 AM, Gwen Shapira <
> g...@confluent.io
> > >
> > > > >> wrote:
> > > > >> >>
> > > > >> >>> Ah, there is already a JIRA in the title. Never mind :)
> > > > >> >>>
> > > > >> >>> On Tue, Aug 11, 2015 at 5:51 PM, Gwen Shapira <
> > g...@confluent.io>
> > > > >> wrote:
> > > > >> >>>
> > > > >> >>> > The vote opened 5 days ago. I believe we can conclude with 3
> > > > binding
> > > > >> >>> +1, 3
> > > > >> >>> > non-binding +1 and no -1.
> > > > >> >>> >
> > > > >> >>> > Ismael, are you opening and JIRA and migrating? Or are we
> > > looking
> > > > >> for a
> > > > >> >>> > volunteer?
> > > > >> >>> >
> > > > >> >>> > On Tue, Aug 11, 2015 at 5:46 PM, Ashish Singh <
> > > > asi...@cloudera.com>
> > > > >> >>> wrote:
> > > > >> >>> >
> > > > >> >>> >> +1 on same repo.
> > > > >> >>> >>
> > > > >> >>> >> On Tue, Aug 11, 2015 at 12:21 PM, Edward Ribeiro <
> > > > >> >>> >> edward.ribe...@gmail.com>
> > > > >> >>> >> wrote:
> > > > >> >>> >>
> > > > >> >>> >> > +1. As soon as possible, please. :)
> > > > >> >>> >> >
> > > > >> >>> >> > On Sat, Aug 8, 2015 at 4:05 PM, Neha Narkhede <
> > > > n...@confluent.io>
> > > > >> >>> >> wrote:
> > > > >> >>> >> >
> > > > >> >>> >> > > +1 on the same repo for code and website. It helps to
> > ke

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-21 Thread Manikumar Reddy
Hi All,

Can we finalize the approach so that we can proceed further?

1. Gwen's suggestion + existing svn repo
2. Gwen's suggestion + new git repo for docs

kumar

On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy 
wrote:

>   Also can we migrate svn repo to git repo? This will help us to fix
> occasional  doc changes/bug fixes through github PR.
>
> On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang  wrote:
>
>> Gwen: I remembered it wrong. We would not need another round of voting.
>>
>> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira  wrote:
>>
>> > Looking back at this thread, the +1 mention "same repo", so I'm not
>> sure a
>> > new vote is required.
>> >
>> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang 
>> wrote:
>> >
>> > > So I think we have two different approaches here. The original
>> proposal
>> > > from Aseem is to move website from SVN to a separate Git repo, and
>> hence
>> > > have separate commits on code / doc changes. For that we have
>> accumulated
>> > > enough binding +1s to move on; Gwen's proposal is to move website into
>> > the
>> > > same repo under a different folder. If people feel they prefer this
>> over
>> > > the previous approach I would like to call for another round of
>> voting.
>> > >
>> > > Guozhang
>> > >
>> > > On Wed, Aug 19, 2015 at 10:24 AM, Ashish 
>> > wrote:
>> > >
>> > > > +1 to what Gwen has suggested. This is what we follow in Flume.
>> > > >
>> > > > All the latest doc changes are in git, once ready you move changes
>> to
>> > > > svn to update website.
>> > > > The only catch is, when you need to update specific changes to
>> website
>> > > > outside release cycle, need to be a bit careful :)
>> > > >
>> > > > On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira 
>> > wrote:
>> > > > > Yeah, so the way this works in few other projects I worked on is:
>> > > > >
>> > > > > * The code repo has a /docs directory with the latest revision of
>> the
>> > > > docs
>> > > > > (not multiple versions, just one that matches the latest state of
>> > code)
>> > > > > * When you submit a patch that requires doc modification, you
>> modify
>> > > all
>> > > > > relevant files in same patch and they get reviewed and committed
>> > > together
>> > > > > (ideally)
>> > > > > * When we release, we copy the docs matching the release and
>> commit
>> > to
>> > > > SVN
>> > > > > website. We also do this occasionally to fix bugs in earlier docs.
>> > > > > * Release artifacts include a copy of the docs
>> > > > >
>> > > > > Nice to have:
>> > > > > * Docs are in Asciidoc and build generates the HTML. Asciidoc is
>> > easier
>> > > > to
>> > > > > edit and review.
>> > > > >
>> > > > > I suggest a similar process for Kafka.
>> > > > >
>> > > > > On Wed, Aug 19, 2015 at 8:53 AM, Ismael Juma 
>> > > wrote:
>> > > > >
>> > > > >> I should clarify: it's not possible unless we add an additional
>> step
>> > > > that
>> > > > >> moves the docs from the code repo to the website repo.
>> > > > >>
>> > > > >> Ismael
>> > > > >>
>> > > > >> On Wed, Aug 19, 2015 at 4:42 PM, Ismael Juma 
>> > > wrote:
>> > > > >>
>> > > > >> > Hi all,
>> > > > >> >
>> > > > >> > It looks like it's not feasible to update the code and website
>> in
>> > > the
>> > > > >> same
>> > > > >> > commit given existing limitations of the Apache infra:
>> > > > >> >
>> > > > >> >
>> > > > >> >
>> > > > >>
>> > > >
>> > >
>> >
>> https://issues.apache.org/jira/browse/INFRA-10143?focusedCommentId=14703175&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14703175
>> > > > >> >
>> > > > >> > Best

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-24 Thread Manikumar Reddy
Hi,

   The Infra team created a git repo for the Kafka site docs.

   Gwen/Guozhang,
   Need your help to create a branch "asf-site" and copy the existing
svn contents to that branch.

git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git

https://issues.apache.org/jira/browse/INFRA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709630#comment-14709630

Kumar

On Fri, Aug 21, 2015 at 6:16 PM, Ismael Juma  wrote:

> My preference would be to do `2` because it reduces the number of tools we
> need to know. If we want to clone the repo for the generated site, we can
> use the same tools as we do for the code repo and we can watch for changes
> on GitHub, etc.
>
> Ismael
>
> On Fri, Aug 21, 2015 at 1:34 PM, Manikumar Reddy 
> wrote:
>
> > Hi All,
> >
> > Can we finalize the  approach? So that we can proceed further.
> >
> > 1. Gwen's suggestion + existing svn repo
> > 2. Gwen's suggestion + new git repo for docs
> >
> > kumar
> >
> > On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy 
> > wrote:
> >
> > >   Also can we migrate svn repo to git repo? This will help us to fix
> > > occasional  doc changes/bug fixes through github PR.
> > >
> > > On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang 
> > wrote:
> > >
> > >> Gwen: I remembered it wrong. We would not need another round of
> voting.
> > >>
> > >> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira 
> > wrote:
> > >>
> > >> > Looking back at this thread, the +1 mention "same repo", so I'm not
> > >> sure a
> > >> > new vote is required.
> > >> >
> > >> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang 
> > >> wrote:
> > >> >
> > >> > > So I think we have two different approaches here. The original
> > >> proposal
> > >> > > from Aseem is to move website from SVN to a separate Git repo, and
> > >> hence
> > >> > > have separate commits on code / doc changes. For that we have
> > >> accumulated
> > >> > > enough binding +1s to move on; Gwen's proposal is to move website
> > into
> > >> > the
> > >> > > same repo under a different folder. If people feel they prefer
> this
> > >> over
> > >> > > the previous approach I would like to call for another round of
> > >> voting.
> > >> > >
> > >> > > Guozhang
> > >> > >
> > >> > > On Wed, Aug 19, 2015 at 10:24 AM, Ashish  >
> > >> > wrote:
> > >> > >
> > >> > > > +1 to what Gwen has suggested. This is what we follow in Flume.
> > >> > > >
> > >> > > > All the latest doc changes are in git, once ready you move
> changes
> > >> to
> > >> > > > svn to update website.
> > >> > > > The only catch is, when you need to update specific changes to
> > >> website
> > >> > > > outside release cycle, need to be a bit careful :)
> > >> > > >
> > >> > > > On Wed, Aug 19, 2015 at 9:06 AM, Gwen Shapira <
> g...@confluent.io>
> > >> > wrote:
> > >> > > > > Yeah, so the way this works in few other projects I worked on
> > is:
> > >> > > > >
> > >> > > > > * The code repo has a /docs directory with the latest revision
> > of
> > >> the
> > >> > > > docs
> > >> > > > > (not multiple versions, just one that matches the latest state
> > of
> > >> > code)
> > >> > > > > * When you submit a patch that requires doc modification, you
> > >> modify
> > >> > > all
> > >> > > > > relevant files in same patch and they get reviewed and
> committed
> > >> > > together
> > >> > > > > (ideally)
> > >> > > > > * When we release, we copy the docs matching the release and
> > >> commit
> > >> > to
> > >> > > > SVN
> > >> > > > > website. We also do this occasionally to fix bugs in earlier
> > docs.
> > >> > > > > * Release artifacts include a copy of the docs
> > >> > > > >
> > >> > 

Re: KAFKA-2364 migrate docs from SVN to git

2015-08-26 Thread Manikumar Reddy
Hi Guozhang,

  Our plan is to follow Gwen's suggested approach and migrate the existing
svn site repo to a new git repo.

  (1) Gwen's suggestion will help us maintain the latest docs in the Kafka
repo itself. We periodically need to copy these latest docs to the site
repo. I will submit a patch for this.

  (2) The svn repo -> git repo migration will help us integrate the site
repo with git tooling/GitHub. It will be easy to maintain the site repo and
its changes. So we have created a new git repo for docs and need committer
help to create a branch "asf-site".

   new git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git

  Hope this clears the confusion.

Kumar
I thought Gwen's suggestion was to use a separate folder in the same repo
for docs instead of a separate branch, Gwen can correct me if I was wrong?

Guozhang

On Mon, Aug 24, 2015 at 10:31 AM, Manikumar Reddy 
wrote:

> Hi,
>
>Infra team created git repo for kafka site docs.
>
>Gwen/Guozhang,
>    Need your help to create a branch "asf-site" and copy the existing
> svn contents to that branch.
>
> git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
>
>
>
https://issues.apache.org/jira/browse/INFRA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709630#comment-14709630
>
> Kumar
>
> On Fri, Aug 21, 2015 at 6:16 PM, Ismael Juma  wrote:
>
> > My preference would be to do `2` because it reduces the number of tools
> we
> > need to know. If we want to clone the repo for the generated site, we
can
> > use the same tools as we do for the code repo and we can watch for
> changes
> > on GitHub, etc.
> >
> > Ismael
> >
> > On Fri, Aug 21, 2015 at 1:34 PM, Manikumar Reddy 
> > wrote:
> >
> > > Hi All,
> > >
> > > Can we finalize the  approach? So that we can proceed further.
> > >
> > > 1. Gwen's suggestion + existing svn repo
> > > 2. Gwen's suggestion + new git repo for docs
> > >
> > > kumar
> > >
> > > On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy <
> ku...@nmsworks.co.in>
> > > wrote:
> > >
> > > >   Also can we migrate svn repo to git repo? This will help us to fix
> > > > occasional  doc changes/bug fixes through github PR.
> > > >
> > > > On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang 
> > > wrote:
> > > >
> > > >> Gwen: I remembered it wrong. We would not need another round of
> > voting.
> > > >>
> > > >> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira 
> > > wrote:
> > > >>
> > > >> > Looking back at this thread, the +1 mention "same repo", so I'm
> not
> > > >> sure a
> > > >> > new vote is required.
> > > >> >
> > > >> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang <
> wangg...@gmail.com>
> > > >> wrote:
> > > >> >
> > > >> > > So I think we have two different approaches here. The original
> > > >> proposal
> > > >> > > from Aseem is to move website from SVN to a separate Git repo,
> and
> > > >> hence
> > > >> > > have separate commits on code / doc changes. For that we have
> > > >> accumulated
> > > >> > > enough binding +1s to move on; Gwen's proposal is to move
> website
> > > into
> > > >> > the
> > > >> > > same repo under a different folder. If people feel they prefer
> > this
> > > >> over
> > > >> > > the previous approach I would like to call for another round of
> > > >> voting.
> > > >> > >
> > > >> > > Guozhang
> > > >> > >
> > > >> > > On Wed, Aug 19, 2015 at 10:24 AM, Ashish <
> paliwalash...@gmail.com
> > >
> > > >> > wrote:
> > > >> > >
> > > >> > > > +1 to what Gwen has suggested. This is what we follow in
> Flume.
> > > >> > > >
> > > >> > > > All the latest doc changes are in git, once ready you move
> > changes
> > > >> to
> > > >> > > > svn to update website.
> > > >> > > > The only catch is, when you need to update specific changes
to
> > > >> website
> > > >> > > > outside release cycle, need to be a bit careful :)

Copying latest version docs to kafka repo

2015-08-26 Thread Manikumar Reddy
Hi Kafka Devs,

   The current svn website has the following directory structure:

   082/
   083/
   code.html
   coding-guide.html
   committers.html
   contact.html
   contributing.html
   diagrams/
   documentation.html
   downloads.html
   images/
   includes/
   index.html

I will be copying the 083/ folder contents to the kafka/docs folder.
kafka/docs will contain only the latest version docs.
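A rough sketch of the copy step (the svn URL below is an assumption; adjust
it to wherever the site is actually checked out):

svn export --force https://svn.apache.org/repos/asf/kafka/site/083 kafka/docs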

Kumar


Re: Do not log value of configs that Kafka doesn't recognize

2016-08-17 Thread Manikumar Reddy
During server/client startup, we log all the supplied configs. Maybe we can
just mask the password-related config values for both valid and invalid
configs.
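A minimal sketch of what that masking could look like (this is not the
actual Kafka implementation; the "password" substring heuristic is an
assumption for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigLogSanitizer {

    // Returns a copy of the supplied configs with likely-sensitive values
    // hidden, suitable for logging at startup.
    public static Map<String, Object> forLogging(Map<String, ?> configs) {
        Map<String, Object> safe = new LinkedHashMap<>();
        for (Map.Entry<String, ?> entry : configs.entrySet()) {
            boolean sensitive = entry.getKey().toLowerCase().contains("password");
            safe.put(entry.getKey(), sensitive ? "[hidden]" : entry.getValue());
        }
        return safe;
    }
}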

On Wed, Aug 17, 2016 at 5:14 PM, Jaikiran Pai 
wrote:

> Any opinion about this proposed change?
>
> -Jaikiran
>
> On Tuesday 16 August 2016 02:28 PM, Jaikiran Pai wrote:
>
>> We are using 0.9.0.1 of Kafka (Java) libraries for our Kafka consumers
>> and producers. In one of our consumers, our consumer config had a SSL
>> specific property which ended up being used against a non-SSL Kafka broker
>> port. As a result, the logs ended up seeing messages like:
>>
>> 17:53:33,722  WARN [o.a.k.c.c.ConsumerConfig] - The configuration
>> *ssl.truststore.password = foobar* was supplied but isn't a known config.
>>
>> The log message is fine and makes sense, but can Kafka please not log the
>> values of the properties and instead just include the config name which it
>> considers as unknown? That way it won't ended up logging these potentially
>> sensitive values. I understand that only those with access to these log
>> files can end up seeing these values but even then some of our internal
>> processes forbid logging such sensitive information to the logs. This log
>> message will still end up being useful if only the config name is logged
>> without the value.
>>
>> Can I add this as a JIRA and provide a patch?
>>
>> -Jaikiran
>>
>
>


Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2016-08-18 Thread Manikumar Reddy
+1 (non-binding)

This feature helps us control the memory footprint and allows the consumer
to make progress when fetching large messages.
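For example, a consumer could cap the total fetch response size while still
letting a single oversized message through (a sketch only; the property names
come from the KIP, the values are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FetchLimitDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "fetch-limit-demo");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Upper bound on the total bytes returned in a single fetch response.
        props.put("fetch.max.bytes", "52428800");
        // Per-partition bound; a single message larger than this can still be
        // returned, so the consumer keeps making progress.
        props.put("max.partition.fetch.bytes", "1048576");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // subscribe and poll as usual...
        }
    }
}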

On Fri, Aug 19, 2016 at 10:32 AM, Gwen Shapira  wrote:

> +1 (binding)
>
> On Thu, Aug 18, 2016 at 1:47 PM, Andrey L. Neporada
>  wrote:
> > Hi all!
> > I’ve modified KIP-74 a little bit (as requested by Jason Gustafson & Jun
> Rao):
> > 1) provided more detailed explanation on memory usage (no functional
> changes)
> > 2) renamed “fetch.response.max.bytes” -> “fetch.max.bytes”
> >
> > Let’s continue voting in this thread.
> >
> > Thanks!
> > Andrey.
> >
> >> On 17 Aug 2016, at 00:02, Jun Rao  wrote:
> >>
> >> Andrey,
> >>
> >> Thanks for the KIP. +1
> >>
> >> Jun
> >>
> >> On Tue, Aug 16, 2016 at 1:32 PM, Andrey L. Neporada <
> >> anepor...@yandex-team.ru> wrote:
> >>
> >>> Hi!
> >>>
> >>> I would like to initiate the voting process for KIP-74:
> >>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> 74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
> >>>
> >>>
> >>> Thanks,
> >>> Andrey.
> >
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: WARN log message flooding broker logs for a pretty typical SSL setup

2016-09-05 Thread Manikumar Reddy
We don't need a JIRA for minor PRs. Just prefix the PR title with "MINOR:".
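For example, a hypothetical PR title following that convention:

MINOR: Lower "SSL peer is not authenticated" log level to DEBUG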

On Tue, Sep 6, 2016 at 9:16 AM, Jaikiran Pai 
wrote:

> Thanks Ismael, I'll raise a PR for this. As a process, is there a JIRA
> that's expected to be filed for this before I raise a PR or would this be
> OK without a JIRA?
>
> -Jaikiran
>
> On Monday 05 September 2016 03:55 PM, Ismael Juma wrote:
>
>> Hi Jaikiran,
>>
>> I agree that this is a valid configuration and the log level seems too
>> high
>> given that. The original motivation is explained in the PR:
>>
>> https://github.com/apache/kafka/pull/155/files#diff-fce430ae
>> 21a0c98d82da6d4aa551f824L603
>>
>> That is, help people figure out if client authentication was not setup
>> correctly, but it seems like a better way to do that is to set
>> `ssl.client.auth=required`. So I'd, personally, be fine with reducing the
>> log level to info or debug.
>>
>> Ismael
>>
>> On Sun, Sep 4, 2016 at 3:01 PM, Jaikiran Pai 
>> wrote:
>>
>> We just started enabling SSL for our Kafka brokers and (Java) clients and
>>> among some of the issues we are running into, one of them is the flooding
>>> of the server/broker Kafka logs where we are seeing these messages:
>>>
>>> [2016-09-02 08:07:13,773] WARN SSL peer is not authenticated, returning
>>> ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
>>> [2016-09-02 08:07:15,710] WARN SSL peer is not authenticated, returning
>>> ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
>>> [2016-09-02 08:07:15,711] WARN SSL peer is not authenticated, returning
>>> ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
>>> [2016-09-02 08:07:15,711] WARN SSL peer is not authenticated, returning
>>> ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
>>> [2016-09-02 08:07:15,712] WARN SSL peer is not authenticated, returning
>>> ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
>>> 
>>>
>>> They just keep going on and on. In our SSL setup, we have the broker
>>> configured with the keystore and the Java clients have been configured
>>> with
>>> a proper truststore and all works fine except for these messages flooding
>>> the logs. We don't have any ACLs setup nor have we enabled client auth
>>> check.
>>>
>>> Looking at the code which generates this WARN message
>>> https://github.com/apache/kafka/blob/trunk/clients/src/main/
>>> java/org/apache/kafka/common/network/SslTransportLayer.java#L638 and the
>>> fact that the setup we have (where we just enable server/broker cert
>>> validation) is, IMO, a valid scenario and not some exceptional/incorrect
>>> setup issue, I think this log message is something that can be removed
>>> from
>>> the code (or at least logged at a very lower level given the frequency at
>>> which this gets logged)
>>>
>>> Any thoughts on this?
>>>
>>> It's a pretty straightforward change and if this change is something that
>>> sounds right, I can go ahead and submit a PR.
>>>
>>> P.S: This is both on 0.9.0.1 and latest 0.10.0.1.
>>>
>>> -Jaikiran
>>>
>>>
>>>
>


Re: Not able to download tests jar of kafka and kafka-streams from maven repo.

2016-09-12 Thread Manikumar Reddy
Hi,

Kafka uses "test" as the classifier (the default is "tests") for test jars.
We can add a <classifier>test</classifier> parameter to the dependency tag to
resolve the error.

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>0.10.0.0</version>
    <type>test-jar</type>
    <scope>test</scope>
</dependency>

Thanks
Manikumar

On Tue, Sep 13, 2016 at 11:43 AM, Satish Duggana 
wrote:

> Hi,
>
> Below dependency is added in one of our repos to use EmbeddedKafkaCluster
> but dependency installation fails with an error mentioned later.
>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-streams</artifactId>
>     <version>0.10.0.0</version>
>     <type>test-jar</type>
>     <scope>test</scope>
> </dependency>
>
> This fails with an error below as
> https://repository.apache.org/content/repositories/
> snapshots/org/apache/kafka/kafka-streams/0.10.0.0/kafka-
> streams-0.10.0.0-tests.jar
> not available. But
> https://repository.apache.org/content/repositories/
> snapshots/org/apache/kafka/kafka-streams/0.10.0.0/kafka-
> streams-0.10.0.0-test.jar
> is available. You may need to fix POM to install right name which is
> kafka-streams-0.10.0.0-test.jar instead of kafka-streams-0.10.0.0-tests.
> jar
>
>
> [ERROR] Failed to execute goal on project schema-registry-avro: Could not
> resolve dependencies for project
> com.hortonworks.registries:schema-registry-avro:jar:0.1.0-SNAPSHOT: The
> following artifacts could not be resolved:
> org.apache.kafka:kafka-clients:jar:tests:0.10.0.0,
> org.apache.kafka:kafka-streams:jar:tests:0.10.0.0: Could not find artifact
> org.apache.kafka:kafka-clients:jar:tests:0.10.0.0 in central (
> http://repo1.maven.org/maven2/) -> [Help 1]
>
>
> JIRA is raised at https://issues.apache.org/jira/browse/KAFKA-4156.
>
> Thanks,
> Satish.
>


Re: Not able to download tests jar of kafka and kafka-streams from maven repo.

2016-09-12 Thread Manikumar Reddy
Adding the missing classifier parameter:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>0.10.0.0</version>
    <type>test-jar</type>
    <classifier>test</classifier>
    <scope>test</scope>
</dependency>
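To double-check that the artifact resolves under the "test" classifier, one
quick local check uses Maven's standard
groupId:artifactId:version:type:classifier syntax:

mvn dependency:get -Dartifact=org.apache.kafka:kafka-streams:0.10.0.0:test-jar:test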


On Tue, Sep 13, 2016 at 12:07 PM, Manikumar Reddy  wrote:

> Hi,
>
> Kafka uses "test" as the classifier (the default is "tests") for test jars.
> We can add a <classifier>test</classifier> parameter to the dependency tag to
> resolve the error.
>
> <dependency>
>     <groupId>org.apache.kafka</groupId>
>     <artifactId>kafka-streams</artifactId>
>     <version>0.10.0.0</version>
>     <type>test-jar</type>
>     <scope>test</scope>
> </dependency>
>
> Thanks
> Manikumar
>
> On Tue, Sep 13, 2016 at 11:43 AM, Satish Duggana  > wrote:
>
>> Hi,
>>
>> Below dependency is added in one of our repos to use EmbeddedKafkaCluster
>> but dependency installation fails with an error mentioned later.
>>
>> <dependency>
>>     <groupId>org.apache.kafka</groupId>
>>     <artifactId>kafka-streams</artifactId>
>>     <version>0.10.0.0</version>
>>     <type>test-jar</type>
>>     <scope>test</scope>
>> </dependency>
>>
>> This fails with an error below as
>> https://repository.apache.org/content/repositories/snapshots
>> /org/apache/kafka/kafka-streams/0.10.0.0/kafka-streams-0.10.0.0-tests.jar
>> not available. But
>> https://repository.apache.org/content/repositories/snapshots
>> /org/apache/kafka/kafka-streams/0.10.0.0/kafka-streams-0.10.0.0-test.jar
>> is available. You may need to fix POM to install right name which is
>> kafka-streams-0.10.0.0-test.jar instead of kafka-streams-0.10.0.0-tests.j
>> ar
>>
>>
>> [ERROR] Failed to execute goal on project schema-registry-avro: Could not
>> resolve dependencies for project
>> com.hortonworks.registries:schema-registry-avro:jar:0.1.0-SNAPSHOT: The
>> following artifacts could not be resolved:
>> org.apache.kafka:kafka-clients:jar:tests:0.10.0.0,
>> org.apache.kafka:kafka-streams:jar:tests:0.10.0.0: Could not find
>> artifact
>> org.apache.kafka:kafka-clients:jar:tests:0.10.0.0 in central (
>> http://repo1.maven.org/maven2/) -> [Help 1]
>>
>>
>> JIRA is raised at https://issues.apache.org/jira/browse/KAFKA-4156.
>>
>> Thanks,
>> Satish.
>>
>
>


Re: KAFKA-2364 migrate docs from SVN to git

2015-09-02 Thread Manikumar Reddy
Jun/Gwen/Guozhang,
   Need your help to complete this.

  (1) Copy latest docs to kafka repo:
https://github.com/apache/kafka/pull/171

  (2) svn site repo -> git site repo migration: need committer help to
create a branch "asf-site".

   new git site repo :
https://git-wip-us.apache.org/repos/asf/kafka-site.git

Kumar

On Wed, Aug 26, 2015 at 7:43 PM, Manikumar Reddy 
wrote:

> Hi Guozhang,
>
>   Our plan is to follow Gwen's suggested approach and migrate the existing
> svn site repo to new git repo.
>
>   (1) Gwen's suggestion will help to us maintain latest docs in Kafka repo
> itself.  We periodically need to copy these latest docs to site repo. I
> will submit patch for this.
>
>   (2)  svn repo -> git repo  migration will help us to integrate site repo
> to git tooling/github. It will be easy to maintain the site repo and
> changes.  So we have created new git repo for docs and need committer help
> to create a branch "asf-site".
>
>new git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
>
>   Hope this clears the confusion.
>
> Kumar
> I thought Gwen's suggestion was to use a separate folder in the same repo
> for docs instead of a separate branch, Gwen can correct me if I was wrong?
>
> Guozhang
>
> On Mon, Aug 24, 2015 at 10:31 AM, Manikumar Reddy 
> wrote:
>
> > Hi,
> >
> >Infra team created git repo for kafka site docs.
> >
> >Gwen/Guozhang,
> >    Need your help to create a branch "asf-site" and copy the existing
> > svn contents to that branch.
> >
> > git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
> >
> >
> >
> https://issues.apache.org/jira/browse/INFRA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709630#comment-14709630
> >
> > Kumar
> >
> > On Fri, Aug 21, 2015 at 6:16 PM, Ismael Juma  wrote:
> >
> > > My preference would be to do `2` because it reduces the number of tools
> > we
> > > need to know. If we want to clone the repo for the generated site, we
> can
> > > use the same tools as we do for the code repo and we can watch for
> > changes
> > > on GitHub, etc.
> > >
> > > Ismael
> > >
> > > On Fri, Aug 21, 2015 at 1:34 PM, Manikumar Reddy  >
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > Can we finalize the  approach? So that we can proceed further.
> > > >
> > > > 1. Gwen's suggestion + existing svn repo
> > > > 2. Gwen's suggestion + new git repo for docs
> > > >
> > > > kumar
> > > >
> > > > On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy <
> > ku...@nmsworks.co.in>
> > > > wrote:
> > > >
> > > > >   Also can we migrate svn repo to git repo? This will help us to
> fix
> > > > > occasional  doc changes/bug fixes through github PR.
> > > > >
> > > > > On Thu, Aug 20, 2015 at 4:04 AM, Guozhang Wang  >
> > > > wrote:
> > > > >
> > > > >> Gwen: I remembered it wrong. We would not need another round of
> > > voting.
> > > > >>
> > > > >> On Wed, Aug 19, 2015 at 3:06 PM, Gwen Shapira 
> > > > wrote:
> > > > >>
> > > > >> > Looking back at this thread, the +1 mention "same repo", so I'm
> > not
> > > > >> sure a
> > > > >> > new vote is required.
> > > > >> >
> > > > >> > On Wed, Aug 19, 2015 at 3:00 PM, Guozhang Wang <
> > wangg...@gmail.com>
> > > > >> wrote:
> > > > >> >
> > > > >> > > So I think we have two different approaches here. The original
> > > > >> proposal
> > > > >> > > from Aseem is to move website from SVN to a separate Git repo,
> > and
> > > > >> hence
> > > > >> > > have separate commits on code / doc changes. For that we have
> > > > >> accumulated
> > > > >> > > enough binding +1s to move on; Gwen's proposal is to move
> > website
> > > > into
> > > > >> > the
> > > > >> > > same repo under a different folder. If people feel they prefer
> > > this
> > > > >> over
> > > > >> > > the previous approach I would like to call for anot

Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Manikumar Reddy
+1 (non-binding). Verified the artifacts and the quick start.

On Wed, Sep 9, 2015 at 2:41 AM, Ashish  wrote:

> +1 (non-binding)
>
> Ran the build, works fine. All test cases passed
>
> On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao  wrote:
> > This is the first candidate for release of Apache Kafka 0.8.2.2. This
> only
> > fixes two critical issues (KAFKA-2189 and KAFKA-2308) related to snappy
> in
> > 0.8.2.1.
> >
> > Release Notes for the 0.8.2.2 release
> >
> https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, Sep 8, 7pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksum.
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/
> >
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/
> >
> > * scala-doc
> > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/scaladoc/
> >
> > * java-doc
> > https://people.apache.org/~junrao/kafka-0.8.2.2-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.8.2 branch) is the 0.8.2.2 tag
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=d01226cfdcb3d9daad8465234750fa515a1e7e4a
> >
> > /***
> >
> > Thanks,
> >
> > Jun
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>


Re: KAFKA-2364 migrate docs from SVN to git

2015-09-15 Thread Manikumar Reddy
Hi Gwen,

We need to create a new branch named "asf-site" in the new git repository [1].
This is a requirement from Apache Infra for git-based websites [2]. After
creating the new branch, we will copy the existing svn repo contents to the
new branch.


1. https://git-wip-us.apache.org/repos/asf/kafka-site.git
2. https://issues.apache.org/jira/browse/INFRA-10143
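A sketch of the one-time setup (the svn source URL below is an assumption;
adjust it to the actual site location):

git clone https://git-wip-us.apache.org/repos/asf/kafka-site.git
cd kafka-site
git checkout --orphan asf-site
svn export --force https://svn.apache.org/repos/asf/kafka/site .
git add .
git commit -m "Import existing site docs from svn"
git push origin asf-site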


Kumar

On Tue, Sep 15, 2015 at 2:19 AM, Gwen Shapira  wrote:

> Hi Manikumar,
>
> Sorry for huge delay!
>
> 1) This looks good, I'll get it in
>
> 2) I'm confused - do we need a new branch or a new repository? it looks
> like you already got a new repository, so why do we need a branch as well?
>
>
>
> On Wed, Sep 2, 2015 at 8:11 AM, Manikumar Reddy 
> wrote:
>
> > Jun/Gwen/Guozhang,
> >Need your help to complete this.
> >
> >   (1) Copy latest docs to kafka repo:
> > https://github.com/apache/kafka/pull/171
> >
> >   (2) svn site repo -> git site repo migration : need committer help to
> > create a branch "asf-site".
> >
> >    new git site repo :
> > https://git-wip-us.apache.org/repos/asf/kafka-site.git
> >
> > Kumar
> >
> > On Wed, Aug 26, 2015 at 7:43 PM, Manikumar Reddy 
> > wrote:
> >
> > > Hi Guozhang,
> > >
> > >   Our plan is to follow Gwen's suggested approach and migrate the
> > existing
> > > svn site repo to new git repo.
> > >
> > >   (1) Gwen's suggestion will help to us maintain latest docs in Kafka
> > repo
> > > itself.  We periodically need to copy these latest docs to site repo. I
> > > will submit patch for this.
> > >
> > >   (2)  svn repo -> git repo  migration will help us to integrate site
> > repo
> > > to git tooling/github. It will be easy to maintain the site repo and
> > > changes.  So we have created new git repo for docs and need committer
> > help
> > > to create a branch "asf-site".
> > >
> > >new git repo:
> https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > >
> > >   Hope this clears the confusion.
> > >
> > > Kumar
> > > I thought Gwen's suggestion was to use a separate folder in the same
> repo
> > > for docs instead of a separate branch, Gwen can correct me if I was
> > wrong?
> > >
> > > Guozhang
> > >
> > > On Mon, Aug 24, 2015 at 10:31 AM, Manikumar Reddy <
> ku...@nmsworks.co.in>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > >Infra team created git repo for kafka site docs.
> > > >
> > > >Gwen/Guozhang,
> > > >Need your help to create a branch "asf-site" and copy the
> > existing
> > > > svn contents to that branch.
> > > >
> > > > git repo: https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > >
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/browse/INFRA-10143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14709630#comment-14709630
> > > >
> > > > Kumar
> > > >
> > > > On Fri, Aug 21, 2015 at 6:16 PM, Ismael Juma 
> > wrote:
> > > >
> > > > > My preference would be to do `2` because it reduces the number of
> > tools
> > > > we
> > > > > need to know. If we want to clone the repo for the generated site,
> we
> > > can
> > > > > use the same tools as we do for the code repo and we can watch for
> > > > changes
> > > > > on GitHub, etc.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Fri, Aug 21, 2015 at 1:34 PM, Manikumar Reddy <
> > ku...@nmsworks.co.in
> > > >
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > Can we finalize the  approach? So that we can proceed further.
> > > > > >
> > > > > > 1. Gwen's suggestion + existing svn repo
> > > > > > 2. Gwen's suggestion + new git repo for docs
> > > > > >
> > > > > > kumar
> > > > > >
> > > > > > On Thu, Aug 20, 2015 at 11:48 PM, Manikumar Reddy <
> > > > ku...@nmsworks.co.in>
> > > > > > wrote:
> > > > > >
> > > > > > >   Also can we migrate svn

Re: [ANNOUNCE] New Committer Sriharsha Chintalapani

2015-09-21 Thread Manikumar Reddy
Congrats Harsha!

On Tue, Sep 22, 2015 at 9:48 AM, Dong Lin  wrote:

> Congratulations Sriharsha!
>
> Dong
>
> On Tue, Sep 22, 2015 at 4:17 AM, Guozhang Wang  wrote:
>
> > Congrats Sriharsha!
> >
> > Guozhang
> >
> > On Mon, Sep 21, 2015 at 9:10 PM, Jun Rao  wrote:
> >
> > > I am pleased to announce that the Apache Kafka PMC has voted to
> > > invite Sriharsha Chintalapani as a committer and Sriharsha has
> accepted.
> > >
> > > Sriharsha has contributed numerous patches to Kafka. The most
> significant
> > > one is the SSL support.
> > >
> > > Please join me on welcoming and congratulating Sriharsha.
> > >
> > > I look forward to your continued contributions and much more to come!
> > >
> > > Jun
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: KAFKA-2364 migrate docs from SVN to git

2015-10-02 Thread Manikumar Reddy
Thanks Gwen, I will
update you on the next steps.
On Oct 3, 2015 1:08 AM, "Gwen Shapira"  wrote:

> Hi,
>
> I created asf-git under https://git-wip-us.apache.org/repos/asf/kafka-site
> .
> git and pushed our existing docs in there.
> What do we need to do to get infra to show this in our website?
>
> Next steps:
> 1) Minor fix to PR 171
> 2) Merge PR 171
> 3) Get Apache to show our git site
> 4) Update wiki with "contributing to docs" process
>
> Gwen
>
>
>
> On Tue, Sep 15, 2015 at 8:40 AM, Manikumar Reddy 
> wrote:
>
> > Hi Gwen,
> >
> > We need to create a new branch named "asf-site" in the new git repository [1].
> > This is a requirement from Apache Infra for git-based websites [2]. After
> > creating the new branch, we will copy the existing svn repo contents to the
> > new branch.
> >
> >
> > 1. https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > 2. https://issues.apache.org/jira/browse/INFRA-10143
> >
> >
> > Kumar
> >
> > On Tue, Sep 15, 2015 at 2:19 AM, Gwen Shapira  wrote:
> >
> > > Hi Manikumar,
> > >
> > > Sorry for huge delay!
> > >
> > > 1) This looks good, I'll get it in
> > >
> > > 2) I'm confused - do we need a new branch or a new repository? it looks
> > > like you already got a new repository, so why do we need a branch as
> > well?
> > >
> > >
> > >
> > > On Wed, Sep 2, 2015 at 8:11 AM, Manikumar Reddy 
> > > wrote:
> > >
> > > > Jun/Gwen/Guozhang,
> > > >Need your help to complete this.
> > > >
> > > >   (1) Copy latest docs to kafka repo:
> > > > https://github.com/apache/kafka/pull/171
> > > >
> > > >   (2) svn site repo -> git site repo migration : need committer help
> to
> > > > create a branch "asf-site".
> > > >
> > > >new git site repo :
> > > > https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > >
> > > > Kumar
> > > >
> > > > On Wed, Aug 26, 2015 at 7:43 PM, Manikumar Reddy <
> ku...@nmsworks.co.in
> > >
> > > > wrote:
> > > >
> > > > > Hi Guozhang,
> > > > >
> > > > >   Our plan is to follow Gwen's suggested approach and migrate the
> > > > existing
> > > > > svn site repo to new git repo.
> > > > >
> > > > >   (1) Gwen's suggestion will help to us maintain latest docs in
> Kafka
> > > > repo
> > > > > itself.  We periodically need to copy these latest docs to site
> > repo. I
> > > > > will submit patch for this.
> > > > >
> > > > >   (2)  svn repo -> git repo  migration will help us to integrate
> site
> > > > repo
> > > > > to git tooling/github. It will be easy to maintain the site repo
> and
> > > > > changes.  So we have created new git repo for docs and need
> committer
> > > > help
> > > > > to create a branch "asf-site".
> > > > >
> > > > >new git repo:
> > > https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > > >
> > > > >   Hope this clears the confusion.
> > > > >
> > > > > Kumar
> > > > > I thought Gwen's suggestion was to use a separate folder in the same
> > > repo
> > > > > for docs instead of a separate branch, Gwen can correct me if I was
> > > > wrong?
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Mon, Aug 24, 2015 at 10:31 AM, Manikumar Reddy <
> > > ku...@nmsworks.co.in>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > >Infra team created git repo for kafka site docs.
> > > > > >
> > > > > >Gwen/Guozhang,
> > > > > >Need your help to create a branch "asf-site" and copy the
> > > > existing
> > > > > > svn contents to that branch.
> > > > > >
> > > > > > git repo:
> > https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apach

Re: KAFKA-2364 migrate docs from SVN to git

2015-10-03 Thread Manikumar Reddy
Thanks Gwen. I am working on the remaining steps. I will update you on the
progress.

Regards,
Mani

On Sat, Oct 3, 2015 at 7:27 PM, Gwen Shapira  wrote:

> OK, PR 171 is in, and the latest version of the docs is now in docs/
> directory of trunk!
>
> Next steps:
> 1. Follow up with infra on our github site
> 2. Update the docs contribution guide
> 3. Update the release guide (since we are releasing docs as part of our
> release artifacts)
>
> Mani, I assume you are on those?
> Anything I'm missing?
>
> Gwen
>
> On Fri, Oct 2, 2015 at 11:28 PM, Manikumar Reddy 
> wrote:
>
> > Thanks Gwen, I will
> > update the next steps.
> > On Oct 3, 2015 1:08 AM, "Gwen Shapira"  wrote:
> >
> > > Hi,
> > >
> > > I created asf-git under
> > > https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > and pushed our existing docs in there.
> > > What do we need to do to get infra to show this in our website?
> > >
> > > Next steps:
> > > 1) Minor fix to PR 171
> > > 2) Merge PR 171
> > > 3) Get Apache to show our git site
> > > 4) Update wiki with "contributing to docs" process
> > >
> > > Gwen
> > >
> > >
> > >
> > > On Tue, Sep 15, 2015 at 8:40 AM, Manikumar Reddy  >
> > > wrote:
> > >
> > > > Hi Gwen,
> > > >
> > > > We need to create new branch named "asf-site"  in new git
> > repository[1].
> > > > This is requirement from Apache Infra
> > > > for git based websites [2].  After creating new branch, we will the
> > copy
> > > > the existing to svn repo contents to
> > > > new branch.
> > > >
> > > >
> > > > 1. https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > > 2. https://issues.apache.org/jira/browse/INFRA-10143
> > > >
> > > >
> > > > Kumar
> > > >
> > > > On Tue, Sep 15, 2015 at 2:19 AM, Gwen Shapira 
> > wrote:
> > > >
> > > > > Hi Manikumar,
> > > > >
> > > > > Sorry for huge delay!
> > > > >
> > > > > 1) This looks good, I'll get it in
> > > > >
> > > > > 2) I'm confused - do we need a new branch or a new repository? it
> > looks
> > > > > like you already got a new repository, so why do we need a branch
> as
> > > > well?
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Sep 2, 2015 at 8:11 AM, Manikumar Reddy <
> > ku...@nmsworks.co.in>
> > > > > wrote:
> > > > >
> > > > > > Jun/Gwen/Guozhang,
> > > > > >Need your help to complete this.
> > > > > >
> > > > > >   (1) Copy latest docs to kafka repo:
> > > > > > https://github.com/apache/kafka/pull/171
> > > > > >
> > > > > >   (2) svn site repo -> git site repo migration : need committer
> > help
> > > to
> > > > > > create a branch "asf-site".
> > > > > >
> > > > > >new git site repo :
> > > > > > https://git-wip-us.apache.org/repos/asf/kafka-site.git
> > > > > >
> > > > > > Kumar
> > > > > >
> > > > > > On Wed, Aug 26, 2015 at 7:43 PM, Manikumar Reddy <
> > > ku...@nmsworks.co.in
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Guozhang,
> > > > > > >
> > > > > > >   Our plan is to follow Gwen's suggested approach and migrate
> the
> > > > > > existing
> > > > > > > svn site repo to new git repo.
> > > > > > >
> > > > > > >   (1) Gwen's suggestion will help us maintain the latest docs in
> > > Kafka
> > > > > > repo
> > > > > > > itself.  We periodically need to copy these latest docs to site
> > > > repo. I
> > > > > > > will submit patch for this.
> > > > > > >
> > > > > > >   (2)  svn repo -> git repo  migration will help us to
> integrate
> > > site
> > > > > > repo
> > > > > > > to git tooling/github. It will be easy to maintain the site
> repo
>

Re: KAFKA-2364 migrate docs from SVN to git

2015-10-05 Thread Manikumar Reddy
Hi Gwen,

The Kafka site is updated to use the Git repo. We can now push any site changes
to the git website repo.

1) "Contributing website changes" wiki page:
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes

2) "Website update process" added to Release Process wiki page:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Process

3) PR to update contributing.html:
https://github.com/apache/kafka-site/pull/1


Regards
Mani

On Sat, Oct 3, 2015 at 9:28 PM, Ismael Juma  wrote:

> On 3 Oct 2015 16:44, "Gwen Shapira"  wrote:
>
> > OK, PR 171 is in, and the latest version of the docs is now in docs/
> > directory of trunk!
>
> Awesome. :)
>
> > Next steps:
> > 1. Follow up with infra on our github site
>
> Follow-up issue filed:
> https://issues.apache.org/jira/browse/INFRA-10539. Geoffrey
> Corey assigned the issue to himself.
>
> > 2. Update the docs contribution guide
> > 3. Update the release guide (since we are releasing docs as part of our
> > release artifacts)
> >
> > Mani, I assume you are on those?
> > Anything I'm missing?
>
> I can't think of anything else at this point.
>
> Ismael
>


Re: KAFKA-2364 migrate docs from SVN to git

2015-10-06 Thread Manikumar Reddy
On Tue, Oct 6, 2015 at 1:34 PM, Ismael Juma  wrote:

> Thanks Mani. Regarding the release process changes, a couple of comments:
>
> 1. Under "bug-fix releases", you mention "major release directory" a couple
> of times. Is this right?
>

 Hmm, not sure. For bug-fix releases like 0.8.2.X, we refer to the docs of the
corresponding major release (the 0.8.2 release). In that sense, I used "major
release directory". I may be wrong.


> 2. "Auto-generate the configuration docs" is mentioned a couple of times,
> would it be worth including the command used to do this as well?
>

  Yes, updated the wiki page.


>
> Ismael
>
> On Tue, Oct 6, 2015 at 3:37 AM, Manikumar Reddy 
> wrote:
>
> > Hi Gwen,
> >
> > The Kafka site is updated to use the Git repo. We can now push any site changes
> > to
> > the git website repo.
> >
> > 1) "Contributing website changes" wiki page:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes
> >
> > 2) "Website update process" added to Release Process wiki page:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Process
> >
> > 3) PR to update contributing.html:
> > https://github.com/apache/kafka-site/pull/1
> >
> >
> > Regards
> > Mani
> >
> > On Sat, Oct 3, 2015 at 9:28 PM, Ismael Juma  wrote:
> >
> > > On 3 Oct 2015 16:44, "Gwen Shapira"  wrote:
> > >
> > > > OK, PR 171 is in, and the latest version of the docs is now in docs/
> > > > directory of trunk!
> > >
> > > Awesome. :)
> > >
> > > > Next steps:
> > > > 1. Follow up with infra on our github site
> > >
> > > Follow-up issue filed:
> > > https://issues.apache.org/jira/browse/INFRA-10539. Geoffrey
> > > Corey assigned the issue to himself.
> > >
> > > > 2. Update the docs contribution guide
> > > > 3. Update the release guide (since we are releasing docs as part of
> our
> > > > release artifacts)
> > > >
> > > > Mani, I assume you are on those?
> > > > Anything I'm missing?
> > >
> > > I can't think of anything else at this point.
> > >
> > > Ismael
> > >
> >
>


Re: automatic topic creation question

2014-09-26 Thread Manikumar Reddy
Hi,

I see a single partition for the newly created topic, and it has
> only one replica.  Is there a way to specify a replication factor greater
> than 1?
>

 You can set the default.replication.factor config property. This property is
used to
set the default replication factor for auto-created topics; for example:
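
An illustrative server.properties fragment (the values here are placeholders,
not recommendations):

# broker-side defaults applied to auto-created topics
auto.create.topics.enable=true
default.replication.factor=3
num.partitions=3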


Regards,
Manikumar


Re: [VOTE] 0.8.2-beta Release Candidate 1

2014-10-25 Thread Manikumar Reddy
+1 (non-binding). Verified the source and binary releases.

Regards,
Manikumar


Re: Welcome Kafka's newest committer

2014-11-19 Thread Manikumar Reddy
Congrats!

On Thu, Nov 20, 2014 at 8:04 AM, Jun Rao  wrote:

> Guozhang,
>
> Congratulations! Thanks a lot for all your work.
>
> Jun
>
> On Wed, Nov 19, 2014 at 4:05 PM, Neha Narkhede 
> wrote:
>
> > Hi everyone,
> >
> > I'm very happy to announce that the Kafka PMC has invited Guozhang Wang
> to
> > become a committer. Guozhang has made significant contributions to Kafka
> > over the past year, along with being very active on code reviews and the
> > mailing list.
> >
> > Please join me in welcoming him.
> >
> > Thanks,
> > Neha (on behalf of the Kafka PMC)
> >
>


Re: [DISCUSSION] adding the serializer api back to the new java producer

2014-11-25 Thread Manikumar Reddy
+1 for this change.

What about the deserializer class in 0.8.2? Say I am using the new producer with
Avro and the old consumer in combination;
then I need to provide a custom Decoder implementation for Avro, right?
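
Something like this rough sketch, assuming a fixed writer schema (real setups
would usually resolve schemas via a registry; the class below is hypothetical):

import kafka.serializer.Decoder;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;

// Hypothetical Avro decoder for the old consumer API
public class AvroGenericDecoder implements Decoder<GenericRecord> {
    private final Schema schema;

    public AvroGenericDecoder(Schema schema) {
        this.schema = schema;
    }

    @Override
    public GenericRecord fromBytes(byte[] bytes) {
        try {
            return new GenericDatumReader<GenericRecord>(schema)
                    .read(null, DecoderFactory.get().binaryDecoder(bytes, null));
        } catch (java.io.IOException e) {
            throw new RuntimeException("Failed to decode Avro message", e);
        }
    }
}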

On Tue, Nov 25, 2014 at 9:19 PM, Joe Stein  wrote:

> The serializer is an expected use of the producer/consumer now and think we
> should continue that support in the new client. As far as breaking the API
> it is why we released the 0.8.2-beta to help get through just these type of
> blocking issues in a way that the community at large could be involved in
> easier with a build/binaries to download and use from maven also.
>
> +1 on the change now prior to the 0.8.2 release.
>
> - Joe Stein
>
>
> On Mon, Nov 24, 2014 at 11:43 PM, Sriram Subramanian <
> srsubraman...@linkedin.com.invalid> wrote:
>
> > Looked at the patch. +1 from me.
> >
> > On 11/24/14 8:29 PM, "Gwen Shapira"  wrote:
> >
> > >As one of the people who spent too much time building Avro repositories,
> > >+1
> > >on bringing serializer API back.
> > >
> > >I think it will make the new producer easier to work with.
> > >
> > >Gwen
> > >
> > >On Mon, Nov 24, 2014 at 6:13 PM, Jay Kreps  wrote:
> > >
> > >> This is admittedly late in the release cycle to make a change. To add
> to
> > >> Jun's description the motivation was that we felt it would be better
> to
> > >> change that interface now rather than after the release if it needed
> to
> > >> change.
> > >>
> > >> The motivation for wanting to make a change was the ability to really
> be
> > >> able to develop support for Avro and other serialization formats. The
> > >> current status is pretty scattered--there is a schema repository on an
> > >>Avro
> > >> JIRA and another fork of that on github, and a bunch of people we have
> > >> talked to have done similar things for other serialization systems. It
> > >> would be nice if these things could be packaged in such a way that it
> > >>was
> > >> possible to just change a few configs in the producer and get rich
> > >>metadata
> > >> support for messages.
> > >>
> > >> As we were thinking this through we realized that the new api we were
> > >>about
> > >> to introduce was kind of not very compatable with this since it was
> just
> > >> byte[] oriented.
> > >>
> > >> You can always do this by adding some kind of wrapper api that wraps
> the
> > >> producer. But this puts us back in the position of trying to document
> > >>and
> > >> support multiple interfaces.
> > >>
> > >> This also opens up the possibility of adding a MessageValidator or
> > >> MessageInterceptor plug-in transparently so that you can do other
> custom
> > >> validation on the messages you are sending which obviously requires
> > >>access
> > >> to the original object not the byte array.
> > >>
> > >> This api doesn't prevent using byte[] by configuring the
> > >> ByteArraySerializer it works as it currently does.
> > >>
> > >> -Jay
> > >>
> > >> On Mon, Nov 24, 2014 at 5:58 PM, Jun Rao  wrote:
> > >>
> > >> > Hi, Everyone,
> > >> >
> > >> > I'd like to start a discussion on whether it makes sense to add the
> > >> > serializer api back to the new java producer. Currently, the new
> java
> > >> > producer takes a byte array for both the key and the value. While
> this
> > >> api
> > >> > is simple, it pushes the serialization logic into the application.
> > >>This
> > >> > makes it hard to reason about what type of data is being sent to
> Kafka
> > >> and
> > >> > also makes it hard to share an implementation of the serializer. For
> > >> > example, to support Avro, the serialization logic could be quite
> > >>involved
> > >> > since it might need to register the Avro schema in some remote
> > >>registry
> > >> and
> > >> > maintain a schema cache locally, etc. Without a serialization api,
> > >>it's
> > >> > impossible to share such an implementation so that people can easily
> > >> reuse.
> > >> > We sort of overlooked this implication during the initial discussion
> > >>of
> > >> the
> > >> > producer api.
> > >> >
> > >> > So, I'd like to propose an api change to the new producer by adding
> > >>back
> > >> > the serializer api similar to what we had in the old producer.
> > >>Specially,
> > >> > the proposed api changes are the following.
> > >> >
> > >> > First, we change KafkaProducer to take generic types K and V for the
> > >>key
> > >> > and the value, respectively.
> > >> >
> > >> > public class KafkaProducer<K,V> implements Producer<K,V> {
> > >> >
> > >> > public Future<RecordMetadata> send(ProducerRecord<K,V> record,
> > >> > Callback callback);
> > >> >
> > >> > public Future<RecordMetadata> send(ProducerRecord<K,V> record);
> > >> > }
> > >> >
> > >> > Second, we add two new configs, one for the key serializer and
> another
> > >> for
> > >> > the value serializer. Both serializers will default to the byte
> array
> > >> > implementation.
> > >> >
> > >> > public class ProducerConfig extends AbstractConfig {
> > >> >
> > >> > .define(KEY_SERIALIZER_CLASS_CONFIG, Type.CLASS,
> > >> > "org.apach

[jira] [Updated] (KAFKA-2159) offsets.topic.segment.bytes and offsets.topic.retention.minutes are ignored

2016-05-17 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2159:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> offsets.topic.segment.bytes and offsets.topic.retention.minutes are ignored
> ---
>
> Key: KAFKA-2159
> URL: https://issues.apache.org/jira/browse/KAFKA-2159
> Project: Kafka
>  Issue Type: Bug
>  Components: offset manager
>Affects Versions: 0.8.2.1
>Reporter: Rafał Boniecki
>Assignee: Manikumar Reddy
>  Labels: newbie
> Attachments: KAFKA-2159.patch, KAFKA-2159_2015-06-17_11:44:03.patch, 
> KAFKA-2159_2015-07-10_21:14:26.patch
>
>
> My broker configuration:
> {quote}offsets.topic.num.partitions=20
> offsets.topic.segment.bytes=10485760
> offsets.topic.retention.minutes=10080{quote}
> Describe of __consumer_offsets topic:
> {quote}Topic:__consumer_offsets   PartitionCount:20   
> ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact
>   Topic: __consumer_offsets   Partition: 0Leader: 112 
> Replicas: 112,212,312   Isr: 212,312,112
>   Topic: __consumer_offsets   Partition: 1Leader: 212 
> Replicas: 212,312,412   Isr: 212,312,412
>   Topic: __consumer_offsets   Partition: 2Leader: 312 
> Replicas: 312,412,512   Isr: 312,412,512
>   Topic: __consumer_offsets   Partition: 3Leader: 412 
> Replicas: 412,512,112   Isr: 412,512,112
>   Topic: __consumer_offsets   Partition: 4Leader: 512 
> Replicas: 512,112,212   Isr: 512,212,112
>   Topic: __consumer_offsets   Partition: 5Leader: 112 
> Replicas: 112,312,412   Isr: 312,412,112
>   Topic: __consumer_offsets   Partition: 6Leader: 212 
> Replicas: 212,412,512   Isr: 212,412,512
>   Topic: __consumer_offsets   Partition: 7Leader: 312 
> Replicas: 312,512,112   Isr: 312,512,112
>   Topic: __consumer_offsets   Partition: 8Leader: 412 
> Replicas: 412,112,212   Isr: 412,212,112
>   Topic: __consumer_offsets   Partition: 9Leader: 512 
> Replicas: 512,212,312   Isr: 512,212,312
>   Topic: __consumer_offsets   Partition: 10   Leader: 112 
> Replicas: 112,412,512   Isr: 412,512,112
>   Topic: __consumer_offsets   Partition: 11   Leader: 212 
> Replicas: 212,512,112   Isr: 212,512,112
>   Topic: __consumer_offsets   Partition: 12   Leader: 312 
> Replicas: 312,112,212   Isr: 312,212,112
>   Topic: __consumer_offsets   Partition: 13   Leader: 412 
> Replicas: 412,212,312   Isr: 412,212,312
>   Topic: __consumer_offsets   Partition: 14   Leader: 512 
> Replicas: 512,312,412   Isr: 512,312,412
>   Topic: __consumer_offsets   Partition: 15   Leader: 112 
> Replicas: 112,512,212   Isr: 512,212,112
>   Topic: __consumer_offsets   Partition: 16   Leader: 212 
> Replicas: 212,112,312   Isr: 212,312,112
>   Topic: __consumer_offsets   Partition: 17   Leader: 312 
> Replicas: 312,212,412   Isr: 312,212,412
>   Topic: __consumer_offsets   Partition: 18   Leader: 412 
> Replicas: 412,312,512   Isr: 412,312,512
>   Topic: __consumer_offsets   Partition: 19   Leader: 512 
> Replicas: 512,412,112   Isr: 512,412,112{quote}
> OffsetManager logs:
> {quote}2015-04-29 17:58:43:403 CEST DEBUG 
> [kafka-scheduler-3][kafka.server.OffsetManager] Compacting offsets cache.
> 2015-04-29 17:58:43:403 CEST DEBUG 
> [kafka-scheduler-3][kafka.server.OffsetManager] Found 1 stale offsets (older 
> than 8640 ms).
> 2015-04-29 17:58:43:404 CEST TRACE 
> [kafka-scheduler-3][kafka.server.OffsetManager] Removing stale offset and 
> metadata for [drafts,tasks,1]: OffsetAndMetadata[824,consumer_id = drafts, 
> time = 1430322433,0]
> 2015-04-29 17:58:43:404 CEST TRACE 
> [kafka-scheduler-3][kafka.server.OffsetManager] Marked 1 offsets in 
> [__consumer_offsets,2] for deletion.
> 2015-04-29 17:58:43:404 CEST DEBUG 
> [kafka-scheduler-3][kafka.server.OffsetManager] Removed 1 stale offsets in 1 
> milliseconds.{quote}
> Parameters are ignored and default values are used instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2800) Update outdated dependencies

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2800:
---
Status: Reopened  (was: Closed)

> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2800) Update outdated dependencies

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-2800.

Resolution: Fixed

> Update outdated dependencies
> 
>
> Key: KAFKA-2800
> URL: https://issues.apache.org/jira/browse/KAFKA-2800
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> See the relevant discussion here: 
> http://search-hadoop.com/m/uyzND1LAyyi2IB1wW1/Dependency+Updates&subj=Dependency+Updates



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3219) Long topic names mess up broker topic state

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3219:
---
Status: Reopened  (was: Closed)

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.0.0
>
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> creating using kafka-topics.sh --create.
> The problem does not seem to arise when topic is created through automatic 
> topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.<init>(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3219) Long topic names mess up broker topic state

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-3219.

Resolution: Fixed

> Long topic names mess up broker topic state
> ---
>
> Key: KAFKA-3219
> URL: https://issues.apache.org/jira/browse/KAFKA-3219
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Magnus Edenhill
>Assignee: Vahid Hashemian
> Fix For: 0.10.0.0
>
>
> Seems like the broker doesn't like topic names of 254 chars or more when 
> creating using kafka-topics.sh --create.
> The problem does not seem to arise when topic is created through automatic 
> topic creation.
> How to reproduce:
> {code}
> TOPIC=$(printf 'd%.0s' {1..254} ) ; bin/kafka-topics.sh --zookeeper 0 
> --create --topic $TOPIC --partitions 1 --replication-factor 1
> {code}
> {code}
> [2016-02-06 22:00:01,943] INFO [ReplicaFetcherManager on broker 3] Removed 
> fetcher for partitions 
> [dd,0]
>  (kafka.server.ReplicaFetcherManager)
> [2016-02-06 22:00:01,944] ERROR [KafkaApi-3] Error when handling request 
> {controller_id=3,controller_epoch=12,partition_states=[{topic=dd,partition=0,controller_epoch=12,leader=3,leader_epoch=0,isr=[3],zk_version=0,replicas=[3]}],live_leaders=[{id=3,host=eden,port=9093}]}
>  (kafka.server.KafkaApis)
> java.lang.NullPointerException
> at 
> scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:114)
> at scala.collection.mutable.ArrayOps$ofRef.length(ArrayOps.scala:114)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:32)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
> at kafka.log.Log.loadSegments(Log.scala:138)
> at kafka.log.Log.<init>(Log.scala:92)
> at kafka.log.LogManager.createLog(LogManager.scala:357)
> at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:96)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:176)
> at scala.collection.mutable.HashSet.foreach(HashSet.scala:79)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:176)
> at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:170)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:259)
> at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:267)
> at kafka.cluster.Partition.makeLeader(Partition.scala:170)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:696)
> at 
> kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:695)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at 
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:695)
> at 
> kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:641)
> at 
> kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:142)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:79)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2547:
---
Status: Reopened  (was: Closed)

> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a loss less way. This was pretty much the same code in 
> DynamicConfigManager with little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself to use this new class once 
> KAFKA-2211 is committed to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2016-05-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-2547.

Resolution: Fixed

> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a loss less way. This was pretty much the same code in 
> DynamicConfigManager with little bit of refactoring so it can be reused. We 
> now need to make DynamicConfigManager itself to use this new class once 
> KAFKA-2211 is committed to avoid code duplication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2651) Remove deprecated config alteration from TopicCommand in 0.9.1.0

2016-06-05 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2651:
---
Fix Version/s: 0.11.0.0

> Remove deprecated config alteration from TopicCommand in 0.9.1.0
> 
>
> Key: KAFKA-2651
> URL: https://issues.apache.org/jira/browse/KAFKA-2651
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>    Assignee: Manikumar Reddy
> Fix For: 0.11.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3802) log mtimes reset on broker restart

2016-06-15 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3802:
---
Status: Patch Available  (was: Open)

> log mtimes reset on broker restart
> --
>
> Key: KAFKA-3802
> URL: https://issues.apache.org/jira/browse/KAFKA-3802
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Andrew Otto
> Fix For: 0.10.0.1
>
>
> Folks over in 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201605.mbox/%3CCAO8=cz0ragjad1acx4geqcwj+rkd1gmdavkjwytwthkszfg...@mail.gmail.com%3E
>  are commenting about this issue.
> In 0.9, any data log file that was on
> disk before the broker restart has its mtime modified to the time of the
> restart.
> This causes problems with log retention, as all the files then look like
> they contain recent data to kafka.  We use the default log retention of 7
> days, but if all the files are touched at the same time, this can cause us
> to retain up to 2 weeks of log data, which can fill up our disks.
> This happens *most* of the time, but seemingly not all.  We have seen broker 
> restarts where mtimes were not changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-11 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3950:
---

The consumer.subscribe(Pattern p, ..) method is failing when the authorizer is
enabled.
Irrespective of the pattern supplied, Consumer.subscribe(Pattern p, ..) always
fetches metadata for all the topics (including internal topics) available in
the cluster.
Since the consumer tries to fetch all the topics in the cluster, it fails for
internal topics and for topics on which it does not have Describe permission.
Even if we grant permission to all the topics matching the pattern,
consumer.subscribe(Pattern p, ..) will still fail, complaining about internal
topics and other uninteresting topics.

The current Consumer.subscribe(Pattern p, ..) implementation fetches all the
topic metadata and then applies the pattern matching on the client side (see
the sketch below). One possible solution is to send the pattern string to the
server and fetch only the required topic metadata.

[~ijuma] [~hachikuji] I would like to know your views on this issue.
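
For reference, a minimal sketch of the client-side pattern subscription path
(the servers, group id and regex below are made-up placeholders, not from the
reported setup):

{code}
import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PatternSubscribeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "pattern-test");            // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // Although the caller only wants "test.*" topics, metadata for *all*
        // topics is fetched and the regex is applied on the client side,
        // which is what trips the authorization check described above.
        consumer.subscribe(Pattern.compile("test.*"), new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
        });
        consumer.close();
    }
}
{code}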

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>    Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Led to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-12 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15373115#comment-15373115
 ] 

Manikumar Reddy commented on KAFKA-3950:


I was also thinking along similar lines. Instead of removing the check, we can
move it to the else block and have a separate check for pattern subscriptions
(see the pseudo code below). With this, we can still throw an error for topics
that were meant to be included in your regex and ignore other unauthorized
topics.

I have a patch. I will test and raise a PR.

{code}
if (subscriptions.hasPatternSubscription()) {

    for (String topic : cluster.unauthorizedTopics()) {
        if (subscriptions.getSubscribedPattern().matcher(topic).matches())
            throw new TopicAuthorizationException(topic);
    }
    .
    .
} else if (!cluster.unauthorizedTopics().isEmpty()) {
    throw new TopicAuthorizationException(new HashSet<>(cluster.unauthorizedTopics()));
}
{code}

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Led to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3961) broker sends malformed response when switching from no compression to snappy/gzip

2016-07-13 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15375342#comment-15375342
 ] 

Manikumar Reddy commented on KAFKA-3961:


You can try the DumpLogSegments tool to verify the messages in the data files.
It will give the compression type for each message; see the example below.
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-DumpLogSegment
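
An illustrative invocation (adjust the segment path to your broker's log
directory; the path below is a placeholder):

{code}
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/mytopic-0/00000000000000000000.log \
  --print-data-log --deep-iteration
{code}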

> broker sends malformed response when switching from no compression to 
> snappy/gzip
> -
>
> Key: KAFKA-3961
> URL: https://issues.apache.org/jira/browse/KAFKA-3961
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
> Environment: docker container java:openjdk-8-jre on arch linux 
> 4.5.4-1-ARCH
>Reporter: Dieter Plaetinck
>
> Hi this is my first time using this tracker, so please bear with me (priority 
> seems to be major by default?)
> I should be allowed to switch back and forth between none/gzip/snappy 
> compression to the same topic/partition, right?
> (I couldn't find this explicitly anywhere but seems implied through the docs 
> and also from https://issues.apache.org/jira/browse/KAFKA-1499)
> When I try this, first I use no compression, then kill my producer, restart 
> it with snappy or gzip compression, send data to the same topic/partition 
> again, it seems the broker is sending a malformed response to my consumer.  
> At least that's what was suggested when i was reporting this problem in the 
> tracker for the client library I use 
> (https://github.com/Shopify/sarama/issues/698). Also noteworthy is that the 
> broker doesn't log anything when this happens.
> thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-07-15 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-3102.

Resolution: Duplicate

> Kafka server unable to connect to zookeeper
> ---
>
> Key: KAFKA-3102
> URL: https://issues.apache.org/jira/browse/KAFKA-3102
> Project: Kafka
>  Issue Type: Bug
>  Components: security
> Environment: RHEL 6
>Reporter: Mohit Anchlia
>
> Server disconnects from the zookeeper with the following log, and logs are 
> not indicative of any problem. It works without the security setup however. 
> I followed the security configuration steps from this site: 
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> In here find the list of principals, logs and Jaas file:
> 1) Jaas file 
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> 2) Principles from krb admin
> kadmin.local:  list_principals
> K/m...@example.com
> kadmin/ad...@example.com
> kadmin/chang...@example.com
> kadmin/ip-10-24-251-175.us-west-2.compute.inter...@example.com
> kafka/10.24.251@example.com
> krbtgt/example@example.com
> [2016-01-13 16:26:00,551] INFO starting (kafka.server.KafkaServer)
> [2016-01-13 16:26:00,557] INFO Connecting to zookeeper on localhost:2181 
> (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,718] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,721] INFO shutting down (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,727] INFO shut down completed (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,728] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,729] INFO shutting down (kafka.server.KafkaServer)
> "server.log" 156L, 6404C  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-07-15 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379135#comment-15379135
 ] 

Manikumar Reddy commented on KAFKA-3102:


[~liuxinjian] There is no Kafka issue here. This looks like a config error on
the issue reporter's side. If you have a specific issue/error, post it on the
Kafka users mailing list.


> Kafka server unable to connect to zookeeper
> ---
>
> Key: KAFKA-3102
> URL: https://issues.apache.org/jira/browse/KAFKA-3102
> Project: Kafka
>  Issue Type: Bug
>  Components: security
> Environment: RHEL 6
>Reporter: Mohit Anchlia
>
> Server disconnects from the zookeeper with the following log, and logs are 
> not indicative of any problem. It works without the security setup however. 
> I followed the security configuration steps from this site: 
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> In here find the list of principals, logs and Jaas file:
> 1) Jaas file 
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> 2) Principles from krb admin
> kadmin.local:  list_principals
> K/m...@example.com
> kadmin/ad...@example.com
> kadmin/chang...@example.com
> kadmin/ip-10-24-251-175.us-west-2.compute.inter...@example.com
> kafka/10.24.251@example.com
> krbtgt/example@example.com
> [2016-01-13 16:26:00,551] INFO starting (kafka.server.KafkaServer)
> [2016-01-13 16:26:00,557] INFO Connecting to zookeeper on localhost:2181 
> (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,718] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,721] INFO shutting down (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,727] INFO shut down completed (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,728] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,729] INFO shutting down (kafka.server.KafkaServer)
> "server.log" 156L, 6404C  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2213) Log cleaner should write compacted messages using configured compression type

2016-07-15 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2213:
---
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

This is a duplicate of KAFKA-3252.

> Log cleaner should write compacted messages using configured compression type
> -
>
> Key: KAFKA-2213
> URL: https://issues.apache.org/jira/browse/KAFKA-2213
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>    Assignee: Manikumar Reddy
> Attachments: KAFKA-2213.patch, KAFKA-2213_2015-05-30_00:23:01.patch, 
> KAFKA-2213_2015-06-17_16:05:53.patch, KAFKA-2213_2015-07-10_20:18:06.patch, 
> KAFKA-2213_2015-08-20_17:04:28.patch
>
>
> In KAFKA-1374 the log cleaner was improved to handle compressed messages. 
> There were a couple of follow-ups from that:
> * We write compacted messages using the original compression type in the 
> compressed message-set. We should instead append all retained messages with 
> the configured broker compression type of the topic.
> * While compressing messages we should ideally do some batching before 
> compression.
> * Investigate the use of the client compressor. (See the discussion in the 
> RBs for KAFKA-1374)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3102) Kafka server unable to connect to zookeeper

2016-07-15 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15379198#comment-15379198
 ] 

Manikumar Reddy commented on KAFKA-3102:


Subscribe to the user mailing list. Details are here: 
http://kafka.apache.org/contact.html


> Kafka server unable to connect to zookeeper
> ---
>
> Key: KAFKA-3102
> URL: https://issues.apache.org/jira/browse/KAFKA-3102
> Project: Kafka
>  Issue Type: Bug
>  Components: security
> Environment: RHEL 6
>Reporter: Mohit Anchlia
>
> Server disconnects from the zookeeper with the following log, and logs are 
> not indicative of any problem. It works without the security setup however. 
> I followed the security configuration steps from this site: 
> http://docs.confluent.io/2.0.0/kafka/sasl.html
> In here find the list of principals, logs and Jaas file:
> 1) Jaas file 
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> Client {
> com.sun.security.auth.module.Krb5LoginModule required
> useKeyTab=true
> storeKey=true
> keyTab="/mnt/kafka/kafka/kafka.keytab"
> principal="kafka/10.24.251@example.com";
> };
> 2) Principles from krb admin
> kadmin.local:  list_principals
> K/m...@example.com
> kadmin/ad...@example.com
> kadmin/chang...@example.com
> kadmin/ip-10-24-251-175.us-west-2.compute.inter...@example.com
> kafka/10.24.251@example.com
> krbtgt/example@example.com
> [2016-01-13 16:26:00,551] INFO starting (kafka.server.KafkaServer)
> [2016-01-13 16:26:00,557] INFO Connecting to zookeeper on localhost:2181 
> (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,718] FATAL Fatal error during KafkaServer startup. 
> Prepare to shutdown (kafka.server.KafkaServer)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,721] INFO shutting down (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,727] INFO shut down completed (kafka.server.KafkaServer)
> [2016-01-13 16:27:30,728] FATAL Fatal error during KafkaServerStartable 
> startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to 
> zookeeper server within timeout: 6000
> at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:155)
> at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:129)
> at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:89)
> at kafka.utils.ZkUtils$.apply(ZkUtils.scala:71)
> at kafka.server.KafkaServer.initZk(KafkaServer.scala:278)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:168)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at kafka.Kafka$.main(Kafka.scala:67)
> at kafka.Kafka.main(Kafka.scala)
> [2016-01-13 16:27:30,729] INFO shutting down (kafka.server.KafkaServer)
> "server.log" 156L, 6404C  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3980) JmxReporter uses excessive memory causing OutOfMemoryException

2016-07-20 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15387174#comment-15387174
 ] 

Manikumar Reddy commented on KAFKA-3980:


This may be due to an increase in client quota metric objects; we are
currently not clearing these metrics.
Are you restarting your producers with different clientIds?
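
As a rough diagnostic, something like this can count the per-client-id quota
MBeans inside the broker JVM. The kafka.server:type=Produce/Fetch object-name
patterns are my assumption for the 0.9-style quota metrics, and in practice
you would attach over remote JMX rather than run in-process:

{code}
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ClientMetricCount {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Quota sensors are registered per client-id under these groups (assumed)
        for (String type : new String[] {"Produce", "Fetch"}) {
            Set<ObjectName> names = server.queryNames(
                    new ObjectName("kafka.server:type=" + type + ",*"), null);
            System.out.println(type + " client-id MBeans: " + names.size());
        }
    }
}
{code}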

> JmxReporter uses excessive memory causing OutOfMemoryException
> --
>
> Key: KAFKA-3980
> URL: https://issues.apache.org/jira/browse/KAFKA-3980
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Andrew Jorgensen
>
> I have some nodes in a kafka cluster that occasionally will run out of memory 
> whenever I restart the producers. I was able to take a heap dump from both a 
> recently restarted Kafka node which weighed in at about 20 MB and a node that 
> has been running for 2 months is using over 700MB of memory. Looking at the 
> heap dump it looks like the JmxReporter is holding on to metrics and causing 
> them to build up over time. 
> !http://imgur.com/N6Cd0Ku.png!
> !http://imgur.com/kQBqA2j.png!
> The ultimate problem this causes is that there is a chance when I restart the 
> producers it will cause the node to experience a Java heap space exception 
> and OOM. The nodes then fail to start up correctly and write a -1 as the 
> leader number to the partitions they were responsible for effectively 
> resetting the offset and rendering that partition unavailable. The kafka 
> process then needs to go be restarted in order to re-assign the node to the 
> partition that it owns.
> I have a few questions:
> 1. I am not quite sure why there are so many client id entries in that 
> JmxReporter map.
> 2. Is there a way to have the JmxReporter release metrics after a set amount 
> of time or a way to turn certain high cardinality metrics like these off?
> I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3980) JmxReporter uses excessive memory causing OutOfMemoryException

2016-07-23 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390792#comment-15390792
 ] 

Manikumar Reddy commented on KAFKA-3980:


[~ajorgensen] Can you enable debug log and check Produce request and Fetch 
request logs? These logs can help us to identify these unusual circuitIds.

> JmxReporter uses excessive memory causing OutOfMemoryException
> --
>
> Key: KAFKA-3980
> URL: https://issues.apache.org/jira/browse/KAFKA-3980
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Andrew Jorgensen
>
> I have some nodes in a kafka cluster that occasionally will run out of memory 
> whenever I restart the producers. I was able to take a heap dump from both a 
> recently restarted Kafka node which weighed in at about 20 MB and a node that 
> has been running for 2 months is using over 700MB of memory. Looking at the 
> heap dump it looks like the JmxReporter is holding on to metrics and causing 
> them to build up over time. 
> !http://imgur.com/N6Cd0Ku.png!
> !http://imgur.com/kQBqA2j.png!
> The ultimate problem this causes is that there is a chance when I restart the 
> producers it will cause the node to experience a Java heap space exception 
> and OOM. The nodes then fail to start up correctly and write a -1 as the 
> leader number to the partitions they were responsible for effectively 
> resetting the offset and rendering that partition unavailable. The kafka 
> process then needs to go be restarted in order to re-assign the node to the 
> partition that it owns.
> I have a few questions:
> 1. I am not quite sure why there are so many client id entries in that 
> JmxReporter map.
> 2. Is there a way to have the JmxReporter release metrics after a set amount 
> of time or a way to turn certain high cardinality metrics like these off?
> I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-3980) JmxReporter uses excessive memory causing OutOfMemoryException

2016-07-23 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15390792#comment-15390792
 ] 

Manikumar Reddy edited comment on KAFKA-3980 at 7/23/16 6:27 PM:
-

[~ajorgensen] Can you enable debug log and check Produce request and Fetch 
request logs? These logs can help us to identify these unusual client-ids.


was (Author: omkreddy):
[~ajorgensen] Can you enable debug log and check Produce request and Fetch 
request logs? These logs can help us to identify these unusual circuitIds.

> JmxReporter uses excessive memory causing OutOfMemoryException
> --
>
> Key: KAFKA-3980
> URL: https://issues.apache.org/jira/browse/KAFKA-3980
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Andrew Jorgensen
>
> I have some nodes in a kafka cluster that occasionally will run out of memory 
> whenever I restart the producers. I was able to take a heap dump from both a 
> recently restarted Kafka node which weighed in at about 20 MB and a node that 
> has been running for 2 months is using over 700MB of memory. Looking at the 
> heap dump it looks like the JmxReporter is holding on to metrics and causing 
> them to build up over time. 
> !http://imgur.com/N6Cd0Ku.png!
> !http://imgur.com/kQBqA2j.png!
> The ultimate problem this causes is that there is a chance when I restart the 
> producers it will cause the node to experience a Java heap space exception 
> and OOM. The nodes then fail to start up correctly and write a -1 as the 
> leader number to the partitions they were responsible for effectively 
> resetting the offset and rendering that partition unavailable. The kafka 
> process then needs to go be restarted in order to re-assign the node to the 
> partition that it owns.
> I have a few questions:
> 1. I am not quite sure why there are so many client id entries in that 
> JmxReporter map.
> 2. Is there a way to have the JmxReporter release metrics after a set amount 
> of time or a way to turn certain high cardinality metrics like these off?
> I can provide any logs or heap dumps if more information is needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3992) InstanceAlreadyExistsException Error for Consumers Starting in Parallel

2016-07-26 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393962#comment-15393962
 ] 

Manikumar Reddy commented on KAFKA-3992:


It looks like you are giving the same client-id (consumerId) to multiple consumers. 
Because of this, we will miss some metrics.
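For example, a minimal sketch (assuming the new Java consumer; the broker address 
and id prefix are placeholders) that gives each consumer in the process its own 
client.id so the MBean names do not collide:
{code}
import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerFactory {
    private static final AtomicInteger IDS = new AtomicInteger();

    public static KafkaConsumer<String, String> newConsumer(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        // A distinct client.id per consumer keeps MBean names like
        // kafka.consumer:type=consumer-node-metrics,client-id=... unique.
        props.put("client.id", "my-app-consumer-" + IDS.getAndIncrement());
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<>(props);
    }
}
{code}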

> InstanceAlreadyExistsException Error for Consumers Starting in Parallel
> ---
>
> Key: KAFKA-3992
> URL: https://issues.apache.org/jira/browse/KAFKA-3992
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Alexander Cook
>
> I see the following error sometimes when I start multiple consumers at about 
> the same time in the same process (separate threads). Everything seems to 
> work fine afterwards, so should this not actually be an ERROR level message, 
> or could there be something going wrong that I don't see? 
> Let me know if I can provide any more info! 
> Error processing messages: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> org.apache.kafka.common.KafkaException: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>  
> Caused by: javax.management.InstanceAlreadyExistsException: 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> Here is the full stack trace: 
> M[?:com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples:-1]  - 
> Error processing messages: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
> org.apache.kafka.common.KafkaException: Error registering mbean 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>   at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:159)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:77)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.maybeRegisterConnectionMetrics(Selector.java:641)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:268)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:126)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:186)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:857)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
>   at 
> com.ibm.streamsx.messaging.kafka.KafkaConsumerV9.produceTuples(KafkaConsumerV9.java:129)
>   at 
> com.ibm.streamsx.messaging.kafka.KafkaConsumerV9$1.run(KafkaConsumerV9.java:70)
>   at java.lang.Thread.run(Thread.java:785)
>   at 
> com.ibm.streams.operator.internal.runtime.OperatorThreadFactory$2.run(OperatorThreadFactory.java:137)
> Caused by: javax.management.InstanceAlreadyExistsException: 
> kafka.consumer:type=consumer-node-metrics,client-id=consumer-1,node-id=node--1
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:449)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1910)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:978)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:912)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:336)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:534)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:157)
>   ... 18 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-29 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15399444#comment-15399444
 ] 

Manikumar Reddy commented on KAFKA-3950:


[~ijuma] Can we include this in the 0.10.0.1 release?
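For reference, the behaviour the patch aims at, sketched in plain Java with 
illustrative names rather than MirrorMaker's actual code: whatever the whitelist 
regex matches, internal topics such as __consumer_offsets stay out.
{code}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class WhitelistFilter {
    // Apply the user's whitelist regex, but never let it pull in
    // internal topics such as __consumer_offsets.
    public static List<String> select(List<String> allTopics, String whitelistRegex) {
        Pattern pattern = Pattern.compile(whitelistRegex);
        List<String> selected = new ArrayList<>();
        for (String topic : allTopics) {
            if (!topic.startsWith("__") && pattern.matcher(topic).matches()) {
                selected.add(topic);
            }
        }
        return selected;
    }
}
{code}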

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Reporter: Raghav Kumar Gautam
>    Assignee: Manikumar Reddy
>Priority: Critical
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-30 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3950:
---
Affects Version/s: 0.10.0.0

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Raghav Kumar Gautam
>    Assignee: Manikumar Reddy
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-30 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3950:
---
 Reviewer: Ismael Juma
Fix Version/s: 0.10.0.1

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
>Reporter: Raghav Kumar Gautam
>    Assignee: Manikumar Reddy
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3950) kafka mirror maker tool is not respecting whitelist option

2016-07-30 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-3950:
---
Affects Version/s: (was: 0.10.0.0)
   0.9.0.0

> kafka mirror maker tool is not respecting whitelist option
> --
>
> Key: KAFKA-3950
> URL: https://issues.apache.org/jira/browse/KAFKA-3950
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Raghav Kumar Gautam
>    Assignee: Manikumar Reddy
>Priority: Critical
> Fix For: 0.10.0.1
>
>
> A mirror maker launched like this:
> {code}
> /usr/bin/kinit -k -t /home/kfktest/hadoopqa/keytabs/kfktest.headless.keytab 
> kfkt...@example.com
> JAVA_HOME=/usr/jdk64/jdk1.8.0_77 JMX_PORT=9112 
> /usr/kafka/bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_consumer_12.properties
>  --producer.config 
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/config/mirror_producer_12.properties
>  --new.consumer --whitelist="test.*" >>  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/mirror_maker_12.log
>  2>&1 & echo pid:$! >  
> /usr/kafka/system_test/mirror_maker_testsuite/testcase_15001/logs/mirror_maker-12/entity_12_pid
> {code}
> Lead to TopicAuthorizationException:
> {code}
> WARN Error while fetching metadata with correlation id 44 : 
> {__consumer_offsets=TOPIC_AUTHORIZATION_FAILED} 
> (org.apache.kafka.clients.NetworkClient)
> [2016-06-20 13:24:49,983] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to 
> access topics: [__consumer_offsets]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-1612) Consumer offsets auto-commit before processing finishes

2016-08-10 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-1612.

Resolution: Won't Fix

Closing this issue in favor of the new consumer API.
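For anyone finding this ticket later: the new consumer makes the expected 
at-least-once behaviour straightforward. A minimal sketch, assuming the Java client 
(topic, group and broker address are placeholders):
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("enable.auto.commit", "false"); // no commits behind our back
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // a crash here means the batch is re-delivered
                }
                consumer.commitSync(); // offsets advance only after processing
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
{code}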

> Consumer offsets auto-commit before processing finishes
> ---
>
> Key: KAFKA-1612
> URL: https://issues.apache.org/jira/browse/KAFKA-1612
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.1.1
>Reporter: Gian Merlino
>Assignee: Neha Narkhede
>
> In a loop like this,
>   for (message <- kafkaStream) {
>  process(message)
>   }
> The consumer can commit offsets for the next message while "process" is 
> running. If the program crashes during "process", the next run will pick up 
> from the *next* message. The message in flight at the time of the crash will 
> never actually finish processing. Instead, I would have expected the high 
> level consumer to deliver messages at least once.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1884) Print metadata response errors

2015-04-17 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501112#comment-14501112
 ] 

Manikumar Reddy commented on KAFKA-1884:


[~guozhang] Can you review this trivial patch?
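For context while reviewing: the broker rejects the name below because of the '=' 
character. A client-side pre-check mirroring the rule quoted in the broker log is 
easy to sketch (plain Java; the 255-character length limit is an assumption, not 
taken from this ticket):
{code}
import java.util.regex.Pattern;

public class TopicNames {
    private static final int MAX_LENGTH = 255; // assumed limit
    // The rule from the broker log: ASCII alphanumerics, '.', '_' and '-'.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");

    // Rejects names like "TOPIC=" before they ever reach the broker.
    public static void validate(String topic) {
        if (topic.isEmpty() || topic.length() > MAX_LENGTH || !LEGAL.matcher(topic).matches()) {
            throw new IllegalArgumentException("Illegal topic name: " + topic);
        }
    }
}
{code}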

> Print metadata response errors
> --
>
> Key: KAFKA-1884
> URL: https://issues.apache.org/jira/browse/KAFKA-1884
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.0
>    Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
> Fix For: 0.8.3
>
> Attachments: KAFKA-1884.patch
>
>
> Print metadata response errors.
> producer logs:
> DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,406] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50845,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,416] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50845.
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50846,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,417] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50846.
> DEBUG [2015-01-20 12:46:13,417] NetworkClient: maybeUpdateMetadata(): Trying 
> to send metadata request to node -1
> DEBUG [2015-01-20 12:46:13,418] NetworkClient: maybeUpdateMetadata(): Sending 
> metadata request ClientRequest(expectResponse=true, payload=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=50847,client_id=my-producer},
>  body={topics=[TOPIC=]})) to node -1
> TRACE [2015-01-20 12:46:13,418] NetworkClient: handleMetadataResponse(): 
> Ignoring empty metadata response with correlation id 50847.
> Broker logs:
> [2015-01-20 12:46:14,074] ERROR [KafkaApi-0] error when handling request 
> Name: TopicMetadataRequest; Version: 0; CorrelationId: 51020; ClientId: 
> my-producer; Topics: TOPIC= (kafka.server.KafkaApis)
> kafka.common.InvalidTopicException: topic name TOPIC= is illegal, contains a 
> character other than ASCII alphanumerics, '.', '_' and '-'
>   at kafka.common.Topic$.validate(Topic.scala:42)
>   at 
> kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:186)
>   at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:177)
>   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:367)
>   at kafka.server.KafkaApis$$anonfun$5.apply(KafkaApis.scala:350)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>   at scala.collection.immutable.Set$Set1.foreach(Set.scala:74)
>   at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>   at 
> scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
>   at scala.collection.SetLike$class.map(SetLike.scala:93)
>   at scala.collection.AbstractSet.map(Set.scala:47)
>   at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:350)
>   at 
> kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:389)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:57)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
>   at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2131) Update new producer javadocs with correct documentation links

2015-04-17 Thread Manikumar Reddy (JIRA)
Manikumar Reddy created KAFKA-2131:
--

 Summary: Update new producer javadocs with correct documentation 
links
 Key: KAFKA-2131
 URL: https://issues.apache.org/jira/browse/KAFKA-2131
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 0.8.2.0
Reporter: Manikumar Reddy
Assignee: Manikumar Reddy
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2131) Update new producer javadocs with correct documentation links

2015-04-17 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2131:
---
Description: New producer javadocs are referring to the old producer 
documentation.

> Update new producer javadocs with correct documentation links
> -
>
> Key: KAFKA-2131
> URL: https://issues.apache.org/jira/browse/KAFKA-2131
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2.0
>    Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
>Priority: Trivial
>
> New producer javadocs are referring to the old producer documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2131) Update new producer javadocs with correct documentation links

2015-04-17 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2131:
---
Status: Patch Available  (was: Open)

> Update new producer javadocs with correct documentation links
> -
>
> Key: KAFKA-2131
> URL: https://issues.apache.org/jira/browse/KAFKA-2131
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2.0
>    Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
>Priority: Trivial
> Attachments: KAFKA-2131.patch
>
>
> New producer javadocs are referring to the old producer documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2131) Update new producer javadocs with correct documentation links

2015-04-17 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501144#comment-14501144
 ] 

Manikumar Reddy commented on KAFKA-2131:


Created reviewboard https://reviews.apache.org/r/33334/diff/
 against branch origin/trunk

> Update new producer javadocs with correct documentation links
> -
>
> Key: KAFKA-2131
> URL: https://issues.apache.org/jira/browse/KAFKA-2131
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2.0
>    Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
>Priority: Trivial
> Attachments: KAFKA-2131.patch
>
>
> New producer javadocs are referring to the old producer documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 33334: Patch for KAFKA-2131

2015-04-17 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33334/
---

Review request for kafka.


Bugs: KAFKA-2131
https://issues.apache.org/jira/browse/KAFKA-2131


Repository: kafka


Description
---

Update new producer javadocs with correct documentation


Diffs
-

  clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java 
b91e2c52ed0acb1faa85915097d97bafa28c413a 
  clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java 
ca1c7fedbde7f53d64426da3a1aa3aeeafd2e9ad 

Diff: https://reviews.apache.org/r/33334/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-2131) Update new producer javadocs with correct documentation links

2015-04-17 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2131:
---
Attachment: KAFKA-2131.patch

> Update new producer javadocs with correct documentation links
> -
>
> Key: KAFKA-2131
> URL: https://issues.apache.org/jira/browse/KAFKA-2131
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.8.2.0
>    Reporter: Manikumar Reddy
>Assignee: Manikumar Reddy
>Priority: Trivial
> Attachments: KAFKA-2131.patch
>
>
> New producer javadocs are referring to the old producer documentation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1581) Log cleaner should have an option to ignore messages without keys

2015-04-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501228#comment-14501228
 ] 

Manikumar Reddy commented on KAFKA-1581:


[~jjkoshy] I think this can be closed now. KAFKA-1755 fixed this issue.
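For anyone landing here later, the "ignore with a warning" behaviour discussed in 
this ticket amounts to something like the following sketch (hypothetical Message 
type, not Kafka's real cleaner code, which lives in 
core/src/main/scala/kafka/log/LogCleaner.scala):
{code}
// Hypothetical stand-in for Kafka's internal message class.
interface Message {
    boolean hasKey();
}

class KeylessMessagePolicy {
    // Drop key-less messages with a warning instead of aborting the cleaner.
    static boolean shouldRetain(Message message) {
        if (!message.hasKey()) {
            System.err.println("WARN: skipping key-less message during compaction");
            return false;
        }
        return true;
    }
}
{code}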

> Log cleaner should have an option to ignore messages without keys
> -
>
> Key: KAFKA-1581
> URL: https://issues.apache.org/jira/browse/KAFKA-1581
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>    Assignee: Manikumar Reddy
>
> Right now, there is a strict requirement that compacted topics contain only 
> messages with keys. This makes sense, but the issue with a hard requirement 
> is that if it fails the cleaner quits. We should probably allow ignoring 
> these messages (with a warning). Alternatively, we can catch this scenario 
> (instead of the hard requirement) and just skip compaction for that partition.
> This came up because I saw an invalid message (compressed and without a key) 
> in the offsets topic which broke both log compaction and the offset load 
> process. I filed KAFKA-1580 to prevent that from happening in the first place 
> but KAFKA-1580 is only for internal topics. In the general case (compacted 
> non-internal topics) we would not want the cleaners to exit permanently due 
> to an invalid (key-less) message in one of the partitions since that would 
> prevent compaction for other partitions as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1758) corrupt recovery file prevents startup

2015-04-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14501295#comment-14501295
 ] 

Manikumar Reddy commented on KAFKA-1758:


[~jkreps] Can I get a review for this simple patch?

> corrupt recovery file prevents startup
> --
>
> Key: KAFKA-1758
> URL: https://issues.apache.org/jira/browse/KAFKA-1758
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jason Rosenberg
>    Assignee: Manikumar Reddy
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1758.patch
>
>
> Hi,
> We recently had a kafka node go down suddenly. When it came back up, it 
> apparently had a corrupt recovery file, and refused to start up:
> {code}
> 2014-11-06 08:17:19,299  WARN [main] server.KafkaServer - Error starting up 
> KafkaServer
> java.lang.NumberFormatException: For input string: 
> "^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:481)
> at java.lang.Integer.parseInt(Integer.java:527)
> at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
> at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:76)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:106)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
> at kafka.log.LogManager.loadLogs(LogManager.scala:105)
> at kafka.log.LogManager.(LogManager.scala:57)
> at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
> {code}
> And the app is under a monitor (so it was repeatedly restarting and failing 
> with this error for several minutes before we got to it)…
> We moved the ‘recovery-point-offset-checkpoint’ file out of the way, and it 
> then restarted cleanly (but of course re-synced all its data from replicas, 
> so we had no data loss).
> Anyway, I’m wondering if that’s the expected behavior? Or should it not 
> declare it corrupt and then proceed automatically to an unclean restart?
> Should this NumberFormatException be handled a bit more gracefully?
> We saved the corrupt file if it’s worth inspecting (although I doubt it will 
> be useful!)….
> The corrupt files appeared to be all zeroes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30801: Patch for KAFKA-1758

2015-05-09 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30801/
---

(Updated May 9, 2015, 7:02 a.m.)


Review request for kafka.


Bugs: KAFKA-1758
https://issues.apache.org/jira/browse/KAFKA-1758


Repository: kafka


Description (updated)
---

Addressing Neha's comments


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogManager.scala 
e781ebac2677ebb22e0c1fef0cf7e5ad57c74ea4 

Diff: https://reviews.apache.org/r/30801/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Commented] (KAFKA-1758) corrupt recovery file prevents startup

2015-05-09 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14536267#comment-14536267
 ] 

Manikumar Reddy commented on KAFKA-1758:


Updated reviewboard https://reviews.apache.org/r/30801/diff/
 against branch origin/trunk

> corrupt recovery file prevents startup
> --
>
> Key: KAFKA-1758
> URL: https://issues.apache.org/jira/browse/KAFKA-1758
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jason Rosenberg
>    Assignee: Manikumar Reddy
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1758.patch, KAFKA-1758_2015-05-09_12:29:20.patch
>
>
> Hi,
> We recently had a kafka node go down suddenly. When it came back up, it 
> apparently had a corrupt recovery file, and refused to start up:
> {code}
> 2014-11-06 08:17:19,299  WARN [main] server.KafkaServer - Error starting up 
> KafkaServer
> java.lang.NumberFormatException: For input string: 
> "^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:481)
> at java.lang.Integer.parseInt(Integer.java:527)
> at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
> at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:76)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:106)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
> at kafka.log.LogManager.loadLogs(LogManager.scala:105)
> at kafka.log.LogManager.(LogManager.scala:57)
> at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
> {code}
> And the app is under a monitor (so it was repeatedly restarting and failing 
> with this error for several minutes before we got to it)…
> We moved the ‘recovery-point-offset-checkpoint’ file out of the way, and it 
> then restarted cleanly (but of course re-synced all its data from replicas, 
> so we had no data loss).
> Anyway, I’m wondering if that’s the expected behavior? Or should it not 
> declare it corrupt and then proceed automatically to an unclean restart?
> Should this NumberFormatException be handled a bit more gracefully?
> We saved the corrupt file if it’s worth inspecting (although I doubt it will 
> be useful!)….
> The corrupt files appeared to be all zeroes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-1758) corrupt recovery file prevents startup

2015-05-09 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1758:
---
Attachment: KAFKA-1758_2015-05-09_12:29:20.patch

> corrupt recovery file prevents startup
> --
>
> Key: KAFKA-1758
> URL: https://issues.apache.org/jira/browse/KAFKA-1758
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jason Rosenberg
>    Assignee: Manikumar Reddy
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: KAFKA-1758.patch, KAFKA-1758_2015-05-09_12:29:20.patch
>
>
> Hi,
> We recently had a kafka node go down suddenly. When it came back up, it 
> apparently had a corrupt recovery file, and refused to start up:
> {code}
> 2014-11-06 08:17:19,299  WARN [main] server.KafkaServer - Error starting up 
> KafkaServer
> java.lang.NumberFormatException: For input string: 
> "^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
> ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@"
> at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:481)
> at java.lang.Integer.parseInt(Integer.java:527)
> at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:229)
> at scala.collection.immutable.StringOps.toInt(StringOps.scala:31)
> at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:76)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:106)
> at 
> kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
> at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at 
> scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
> at kafka.log.LogManager.loadLogs(LogManager.scala:105)
> at kafka.log.LogManager.(LogManager.scala:57)
> at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
> {code}
> And the app is under a monitor (so it was repeatedly restarting and failing 
> with this error for several minutes before we got to it)…
> We moved the ‘recovery-point-offset-checkpoint’ file out of the way, and it 
> then restarted cleanly (but of course re-synced all its data from replicas, 
> so we had no data loss).
> Anyway, I’m wondering if that’s the expected behavior? Or should it not 
> declare it corrupt and then proceed automatically to an unclean restart?
> Should this NumberFormatException be handled a bit more gracefully?
> We saved the corrupt file if it’s worth inspecting (although I doubt it will 
> be useful!)….
> The corrupt files appeared to be all zeroes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 30801: Patch for KAFKA-1758

2015-05-09 Thread Manikumar Reddy O


> On April 26, 2015, 6:53 p.m., Neha Narkhede wrote:
> > core/src/main/scala/kafka/log/LogManager.scala, line 133
> > <https://reviews.apache.org/r/30801/diff/1/?file=858786#file858786line133>
> >
> > Is there a reason we are limiting this to only NumberFormatException? 
> > Seems like a fix that applies to all errors. 
> > 
> > Also, worth changing the error message to a more generic statement 
> > about the problem and the fix (resetting the recovery checkpoint to 0).

Yes, we can catch other exceptions as well. Updated the log message.
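Roughly, the defensive read under discussion looks like this sketch (plain Java 
with an assumed checkpoint layout of a version line, a count line, then one 
"topic partition offset" entry per line; the actual patch is in Scala):
{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SafeCheckpointReader {
    // Any parse or I/O failure (e.g. a file of all zeroes) falls back to an
    // empty map, i.e. a recovery point of 0, rather than preventing startup.
    public static Map<String, Long> read(Path file) {
        Map<String, Long> offsets = new HashMap<>();
        try {
            List<String> lines = Files.readAllLines(file);
            for (String line : lines.subList(2, lines.size())) { // skip version + count
                String[] parts = line.trim().split("\\s+");
                offsets.put(parts[0] + "-" + parts[1], Long.parseLong(parts[2]));
            }
        } catch (Exception e) { // NumberFormatException, IOException, ...
            System.err.println("WARN: corrupt checkpoint file " + file + ", resetting: " + e);
            offsets.clear();
        }
        return offsets;
    }
}
{code}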


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/30801/#review81622
---


On May 9, 2015, 7:02 a.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/30801/
> ---
> 
> (Updated May 9, 2015, 7:02 a.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1758
> https://issues.apache.org/jira/browse/KAFKA-1758
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing Neha's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/LogManager.scala 
> e781ebac2677ebb22e0c1fef0cf7e5ad57c74ea4 
> 
> Diff: https://reviews.apache.org/r/30801/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Manikumar Reddy O
> 
>



Re: Review Request 24214: Patch for KAFKA-1374

2015-05-18 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/
---

(Updated May 18, 2015, 5:29 p.m.)


Review request for kafka.


Bugs: KAFKA-1374
https://issues.apache.org/jira/browse/KAFKA-1374


Repository: kafka


Description (updated)
---

Addressing Joel's comments


Diffs (updated)
-

  core/src/main/scala/kafka/log/LogCleaner.scala 
abea8b251895a5cc0788c6e25b112a2935a3f631 
  core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
9dfe914991aaf82162e5e300c587c794555d5fd0 
  core/src/main/scala/kafka/message/MessageSet.scala 
28b56e68cfdbbf107dd7cbd248ffa8fa6bbcd13f 
  core/src/test/scala/kafka/tools/TestLogCleaning.scala 
844589427cb9337acd89a5239a98b811ee58118e 
  core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
3b5aa9dc3b7ac5893c1d281ae1326be0e9ed8aad 
  core/src/test/scala/unit/kafka/log/LogTest.scala 
76d3bfd378f32fd2b216b3ebdec86e2070491924 

Diff: https://reviews.apache.org/r/24214/diff/


Testing
---

/*TestLogCleaning stress test output for compressed messages*/

Producing 100000 messages...
Logging produce requests to 
/tmp/kafka-log-cleaner-produced-6014466306002699464.txt
Sleeping for 120 seconds...
Consuming messages...
Logging consumed messages to 
/tmp/kafka-log-cleaner-consumed-177538909590644701.txt
100000 rows of data produced, 13165 rows of data consumed (86.8% reduction).
De-duplicating and validating output files...
Validated 9005 values, 0 mismatches.

Producing 1000000 messages...
Logging produce requests to 
/tmp/kafka-log-cleaner-produced-3298578695475992991.txt
Sleeping for 120 seconds...
Consuming messages...
Logging consumed messages to 
/tmp/kafka-log-cleaner-consumed-7192293977610206930.txt
1000000 rows of data produced, 119926 rows of data consumed (88.0% reduction).
De-duplicating and validating output files...
Validated 89947 values, 0 mismatches.

Producing 10000000 messages...
Logging produce requests to 
/tmp/kafka-log-cleaner-produced-3336255463347572934.txt
Sleeping for 120 seconds...
Consuming messages...
Logging consumed messages to 
/tmp/kafka-log-cleaner-consumed-9149188270705707725.txt
10000000 rows of data produced, 1645281 rows of data consumed (83.5% reduction).
De-duplicating and validating output files...
Validated 899853 values, 0 mismatches.


/*TestLogCleaning stress test output for non-compressed messages*/

Producing 100000 messages...
Logging produce requests to 
/tmp/kafka-log-cleaner-produced-5174543709786189363.txt
Sleeping for 120 seconds...
Consuming messages...
Logging consumed messages to 
/tmp/kafka-log-cleaner-consumed-514345501144701.txt
100000 rows of data produced, 22775 rows of data consumed (77.2% reduction).
De-duplicating and validating output files...
Validated 17874 values, 0 mismatches.

Producing 1000000 messages...
Logging produce requests to 
/tmp/kafka-log-cleaner-produced-7814446915546169271.txt
Sleeping for 120 seconds...
Consuming messages...
Logging consumed messages to 
/tmp/kafka-log-cleaner-consumed-5172557663160447626.txt
1000000 rows of data produced, 129230 rows of data consumed (87.1% reduction).
De-duplicating and validating output files...
Validated 89947 values, 0 mismatches.

Producing 10000000 messages...
Logging produce requests to 
/tmp/kafka-log-cleaner-produced-6092986571905399164.txt
Sleeping for 120 seconds...
Consuming messages...
Logging consumed messages to 
/tmp/kafka-log-cleaner-consumed-63626021421841220.txt
10000000 rows of data produced, 1136608 rows of data consumed (88.6% reduction).
De-duplicating and validating output files...
Validated 899853 values, 0 mismatches.


Thanks,

Manikumar Reddy O



Re: Review Request 24214: Patch for KAFKA-1374

2015-05-18 Thread Manikumar Reddy O


> On May 12, 2015, 2:01 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/log/LogCleaner.scala, line 409
> > <https://reviews.apache.org/r/24214/diff/9/?file=824405#file824405line409>
> >
> > I would suggest one of two options over this (i.e., instead of two 
> > helper methods)
> > - Inline both here and get rid of those
> > - Have a single private helper (e.g., collectRetainedMessages)

Removed the helper methods.


> On May 12, 2015, 2:01 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/log/LogCleaner.scala, line 479
> > <https://reviews.apache.org/r/24214/diff/9/?file=824405#file824405line479>
> >
> > We should now compress with the compression codec of the topic 
> > (KAFKA-1499)

Will do as a separate JIRA.


> On May 12, 2015, 2:01 p.m., Joel Koshy wrote:
> > core/src/main/scala/kafka/log/LogCleaner.scala, line 498
> > <https://reviews.apache.org/r/24214/diff/9/?file=824405#file824405line498>
> >
> > We should instead do a trivial refactor in ByteBufferMessageSet to 
> > compress messages in a preallocated buffer. It would be preferable to avoid 
> > having this compression logic in different places.

Moved the compressMessages() method to the ByteBufferMessageSet class. Please let 
me know your thoughts.
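For illustration, the recompression step amounts to something like this generic 
sketch (plain java.util.zip, not Kafka's ByteBufferMessageSet API, and ignoring 
the preallocated-buffer refactor):
{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.List;
import java.util.zip.GZIPOutputStream;

public class Recompressor {
    // Re-pack only the messages that survived cleaning into one compressed blob.
    public static byte[] recompress(List<byte[]> retainedMessages) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
            for (byte[] message : retainedMessages) {
                gzip.write(message);
            }
        }
        return buffer.toByteArray();
    }
}
{code}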


- Manikumar Reddy


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/24214/#review83392
---


On May 18, 2015, 5:29 p.m., Manikumar Reddy O wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/24214/
> ---
> 
> (Updated May 18, 2015, 5:29 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1374
> https://issues.apache.org/jira/browse/KAFKA-1374
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Addressing Joel's comments
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/log/LogCleaner.scala 
> abea8b251895a5cc0788c6e25b112a2935a3f631 
>   core/src/main/scala/kafka/message/ByteBufferMessageSet.scala 
> 9dfe914991aaf82162e5e300c587c794555d5fd0 
>   core/src/main/scala/kafka/message/MessageSet.scala 
> 28b56e68cfdbbf107dd7cbd248ffa8fa6bbcd13f 
>   core/src/test/scala/kafka/tools/TestLogCleaning.scala 
> 844589427cb9337acd89a5239a98b811ee58118e 
>   core/src/test/scala/unit/kafka/log/LogCleanerIntegrationTest.scala 
> 3b5aa9dc3b7ac5893c1d281ae1326be0e9ed8aad 
>   core/src/test/scala/unit/kafka/log/LogTest.scala 
> 76d3bfd378f32fd2b216b3ebdec86e2070491924 
> 
> Diff: https://reviews.apache.org/r/24214/diff/
> 
> 
> Testing
> ---
> 
> /*TestLogCleaning stress test output for compressed messages*/
> 
> Producing 100000 messages...
> Logging produce requests to 
> /tmp/kafka-log-cleaner-produced-6014466306002699464.txt
> Sleeping for 120 seconds...
> Consuming messages...
> Logging consumed messages to 
> /tmp/kafka-log-cleaner-consumed-177538909590644701.txt
> 100000 rows of data produced, 13165 rows of data consumed (86.8% reduction).
> De-duplicating and validating output files...
> Validated 9005 values, 0 mismatches.
> 
> Producing 1000000 messages...
> Logging produce requests to 
> /tmp/kafka-log-cleaner-produced-3298578695475992991.txt
> Sleeping for 120 seconds...
> Consuming messages...
> Logging consumed messages to 
> /tmp/kafka-log-cleaner-consumed-7192293977610206930.txt
> 1000000 rows of data produced, 119926 rows of data consumed (88.0% reduction).
> De-duplicating and validating output files...
> Validated 89947 values, 0 mismatches.
> 
> Producing 10000000 messages...
> Logging produce requests to 
> /tmp/kafka-log-cleaner-produced-3336255463347572934.txt
> Sleeping for 120 seconds...
> Consuming messages...
> Logging consumed messages to 
> /tmp/kafka-log-cleaner-consumed-9149188270705707725.txt
> 10000000 rows of data produced, 1645281 rows of data consumed (83.5% 
> reduction).
> De-duplicating and validating output files...
> Validated 899853 values, 0 mismatches.
> 
> 
> /*TestLogCleaning stress test output for non-compressed messages*/
> 
> Producing 100000 messages...
> Logging produce requests to 
> /tmp/kafka-log-cleaner-produced-5174543709786189363.txt
> Sleeping for 120 seconds...
> Consuming messages...
> Logging consumed messages to 
> /tmp/kafka-log-cleaner-consumed-514345501144701.txt
> 100000 rows of data produced, 22775 rows of data consumed (77.2% reduction).
> De-duplicating

[jira] [Updated] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2015-05-18 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-1374:
---
Attachment: KAFKA-1374_2015-05-18_22:55:48.patch

> LogCleaner (compaction) does not support compressed topics
> --
>
> Key: KAFKA-1374
> URL: https://issues.apache.org/jira/browse/KAFKA-1374
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>    Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
> KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch, 
> KAFKA-1374_2014-10-03_18:49:16.patch, KAFKA-1374_2014-10-03_19:17:17.patch, 
> KAFKA-1374_2015-01-18_00:19:21.patch, KAFKA-1374_2015-05-18_22:55:48.patch
>
>
> This is a known issue, but opening a ticket to track.
> If you try to compact a topic that has compressed messages you will run into
> various exceptions - typically because during iteration we advance the
> position based on the decompressed size of the message. I have a bunch of
> stack traces, but it should be straightforward to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1374) LogCleaner (compaction) does not support compressed topics

2015-05-18 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548354#comment-14548354
 ] 

Manikumar Reddy commented on KAFKA-1374:


Updated reviewboard https://reviews.apache.org/r/24214/diff/
 against branch origin/trunk

> LogCleaner (compaction) does not support compressed topics
> --
>
> Key: KAFKA-1374
> URL: https://issues.apache.org/jira/browse/KAFKA-1374
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>    Assignee: Manikumar Reddy
>  Labels: newbie++
> Fix For: 0.8.3
>
> Attachments: KAFKA-1374.patch, KAFKA-1374_2014-08-09_16:18:55.patch, 
> KAFKA-1374_2014-08-12_22:23:06.patch, KAFKA-1374_2014-09-23_21:47:12.patch, 
> KAFKA-1374_2014-10-03_18:49:16.patch, KAFKA-1374_2014-10-03_19:17:17.patch, 
> KAFKA-1374_2015-01-18_00:19:21.patch, KAFKA-1374_2015-05-18_22:55:48.patch
>
>
> This is a known issue, but opening a ticket to track.
> If you try to compact a topic that has compressed messages you will run into
> various exceptions - typically because during iteration we advance the
> position based on the decompressed size of the message. I have a bunch of
> stack traces, but it should be straightforward to reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 34403: Patch for KAFKA-2198

2015-05-19 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34403/
---

Review request for kafka.


Bugs: KAFKA-2198
https://issues.apache.org/jira/browse/KAFKA-2198


Repository: kafka


Description
---

kafka-topics.sh: return non-zero error status on failures


Diffs
-

  core/src/main/scala/kafka/admin/TopicCommand.scala 
8e6f18633b25bf1beee3f813b28ef7aa7d779d7b 

Diff: https://reviews.apache.org/r/34403/diff/


Testing
---


Thanks,

Manikumar Reddy O



[jira] [Updated] (KAFKA-2198) kafka-topics.sh exits with 0 status on failures

2015-05-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2198:
---
Assignee: Manikumar Reddy
  Status: Patch Available  (was: Open)

> kafka-topics.sh exits with 0 status on failures
> ---
>
> Key: KAFKA-2198
> URL: https://issues.apache.org/jira/browse/KAFKA-2198
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.1
>Reporter: Bob Halley
>Assignee: Manikumar Reddy
> Attachments: KAFKA-2198.patch
>
>
> In the two failure cases below, kafka-topics.sh exits with status 0.  You 
> shouldn't need to parse output from the command to know if it failed or not.
> Case 1: Forgetting to add Kafka zookeeper chroot path to zookeeper spec
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicas=2 
> --zookeeper 10.0.0.1 && echo succeeded
> succeeded
> Case 2: Bad config option.  (Also, do we really need the java backtrace?  
> It's a lot of noise most of the time.)
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicasTYPO=2 
> --zookeeper 10.0.0.1/kafka && echo succeeded
> Error while executing topic command requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> java.lang.IllegalArgumentException: requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> at scala.Predef$.require(Predef.scala:233)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:183)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:182)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at kafka.log.LogConfig$.validateNames(LogConfig.scala:182)
> at kafka.log.LogConfig$.validate(LogConfig.scala:190)
> at 
> kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:205)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:103)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:100)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:100)
> at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
> succeeded



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2198) kafka-topics.sh exits with 0 status on failures

2015-05-19 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy updated KAFKA-2198:
---
Attachment: KAFKA-2198.patch

> kafka-topics.sh exits with 0 status on failures
> ---
>
> Key: KAFKA-2198
> URL: https://issues.apache.org/jira/browse/KAFKA-2198
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.1
>Reporter: Bob Halley
> Attachments: KAFKA-2198.patch
>
>
> In the two failure cases below, kafka-topics.sh exits with status 0.  You 
> shouldn't need to parse output from the command to know if it failed or not.
> Case 1: Forgetting to add Kafka zookeeper chroot path to zookeeper spec
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicas=2 
> --zookeeper 10.0.0.1 && echo succeeded
> succeeded
> Case 2: Bad config option.  (Also, do we really need the java backtrace?  
> It's a lot of noise most of the time.)
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicasTYPO=2 
> --zookeeper 10.0.0.1/kafka && echo succeeded
> Error while executing topic command requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> java.lang.IllegalArgumentException: requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> at scala.Predef$.require(Predef.scala:233)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:183)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:182)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at kafka.log.LogConfig$.validateNames(LogConfig.scala:182)
> at kafka.log.LogConfig$.validate(LogConfig.scala:190)
> at 
> kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:205)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:103)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:100)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:100)
> at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
> succeeded



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2198) kafka-topics.sh exits with 0 status on failures

2015-05-19 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550100#comment-14550100
 ] 

Manikumar Reddy commented on KAFKA-2198:


Created reviewboard https://reviews.apache.org/r/34403/diff/
 against branch origin/trunk

> kafka-topics.sh exits with 0 status on failures
> ---
>
> Key: KAFKA-2198
> URL: https://issues.apache.org/jira/browse/KAFKA-2198
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.1
>Reporter: Bob Halley
> Attachments: KAFKA-2198.patch
>
>
> In the two failure cases below, kafka-topics.sh exits with status 0.  You 
> shouldn't need to parse output from the command to know if it failed or not.
> Case 1: Forgetting to add Kafka zookeeper chroot path to zookeeper spec
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicas=2 
> --zookeeper 10.0.0.1 && echo succeeded
> succeeded
> Case 2: Bad config option.  (Also, do we really need the java backtrace?  
> It's a lot of noise most of the time.)
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicasTYPO=2 
> --zookeeper 10.0.0.1/kafka && echo succeeded
> Error while executing topic command requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> java.lang.IllegalArgumentException: requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> at scala.Predef$.require(Predef.scala:233)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:183)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:182)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at kafka.log.LogConfig$.validateNames(LogConfig.scala:182)
> at kafka.log.LogConfig$.validate(LogConfig.scala:190)
> at 
> kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:205)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:103)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:100)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:100)
> at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
> succeeded



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2198) kafka-topics.sh exits with 0 status on failures

2015-05-19 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550120#comment-14550120
 ] 

Manikumar Reddy commented on KAFKA-2198:


Case 1: The Zookeeper chroot path is optional. Currently the --zookeeper option 
accepts the formats 10.0.0.1, 10.0.0.1:2181, 10.0.0.1:2181/kafka and 
10.0.0.1/kafka.

Case 2: Uploaded a simple patch which returns a non-zero status on failures. I am 
not sure about printing the stack trace; let the committer decide about it.
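The shape of the fix, sketched in plain Java (the real change is in the Scala 
TopicCommand; names here are illustrative):
{code}
public class TopicCommandExit {
    public static void main(String[] args) {
        int exitCode = 0;
        try {
            run(args);
        } catch (Exception e) {
            System.err.println("Error while executing topic command: " + e.getMessage());
            exitCode = 1; // scripts can now detect failure without parsing output
        } finally {
            System.exit(exitCode);
        }
    }

    private static void run(String[] args) throws Exception {
        // ... parse options and dispatch to create/alter/delete/list ...
    }
}
{code}
With this, "kafka-topics.sh ... && echo succeeded" only prints on success.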

> kafka-topics.sh exits with 0 status on failures
> ---
>
> Key: KAFKA-2198
> URL: https://issues.apache.org/jira/browse/KAFKA-2198
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 0.8.2.1
>Reporter: Bob Halley
>Assignee: Manikumar Reddy
> Attachments: KAFKA-2198.patch
>
>
> In the two failure cases below, kafka-topics.sh exits with status 0.  You 
> shouldn't need to parse output from the command to know if it failed or not.
> Case 1: Forgetting to add Kafka zookeeper chroot path to zookeeper spec
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicas=2 
> --zookeeper 10.0.0.1 && echo succeeded
> succeeded
> Case 2: Bad config option.  (Also, do we really need the java backtrace?  
> It's a lot of noise most of the time.)
> $ kafka-topics.sh --alter --topic foo --config min.insync.replicasTYPO=2 
> --zookeeper 10.0.0.1/kafka && echo succeeded
> Error while executing topic command requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> java.lang.IllegalArgumentException: requirement failed: Unknown configuration 
> "min.insync.replicasTYPO".
> at scala.Predef$.require(Predef.scala:233)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:183)
> at kafka.log.LogConfig$$anonfun$validateNames$1.apply(LogConfig.scala:182)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at kafka.log.LogConfig$.validateNames(LogConfig.scala:182)
> at kafka.log.LogConfig$.validate(LogConfig.scala:190)
> at 
> kafka.admin.TopicCommand$.parseTopicConfigsToBeAdded(TopicCommand.scala:205)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:103)
> at 
> kafka.admin.TopicCommand$$anonfun$alterTopic$1.apply(TopicCommand.scala:100)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at kafka.admin.TopicCommand$.alterTopic(TopicCommand.scala:100)
> at kafka.admin.TopicCommand$.main(TopicCommand.scala:57)
> at kafka.admin.TopicCommand.main(TopicCommand.scala)
> succeeded



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2134) Producer blocked on metric publish

2015-05-19 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550190#comment-14550190
 ] 

Manikumar Reddy commented on KAFKA-2134:


The metricChange() method will be called during initialization of the producer. 
Can you post the complete thread dump for more info?
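If attaching jstack output is awkward, a dump can also be captured in-process; an 
illustrative helper, not part of the Kafka client API:
{code}
import java.util.Map;

public class ThreadDumper {
    // Print every live thread's state and stack, similar to jstack output.
    public static void dump() {
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            System.err.println("Thread: " + entry.getKey().getName()
                    + " (" + entry.getKey().getState() + ")");
            for (StackTraceElement frame : entry.getValue()) {
                System.err.println("    at " + frame);
            }
        }
    }
}
{code}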

> Producer blocked on metric publish
> --
>
> Key: KAFKA-2134
> URL: https://issues.apache.org/jira/browse/KAFKA-2134
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.1
> Environment: debian7, java8
>Reporter: Vamsi Subhash Achanta
>Assignee: Jun Rao
>Priority: Blocker
>
> Hi,
> We have a REST api to publish to a topic. Yesterday, we started noticing that 
> the producer is not able to produce messages at a good rate and the 
> CLOSE_WAITs of our producer REST app are very high. All the producer REST 
> requests are hence timing out.
> When we took the thread dump and analysed it, we noticed that the threads are 
> getting blocked on JmxReporter metricChange. Here is the attached stack trace.
> "dw-70 - POST /queues/queue_1/messages" #70 prio=5 os_prio=0 
> tid=0x7f043c8bd000 nid=0x54cf waiting for monitor entry 
> [0x7f04363c7000]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
> - waiting to lock <0x0005c1823860> (a java.lang.Object)
> at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:182)
> - locked <0x0007a5e526c8> (a 
> org.apache.kafka.common.metrics.Metrics)
> at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:165)
> - locked <0x0007a5e526e8> (a 
> org.apache.kafka.common.metrics.Sensor)
> When I looked at the code of the metricChange method, it uses a synchronised 
> block on an object resource, and it seems that the lock is held by another thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34403: Patch for KAFKA-2198

2015-05-19 Thread Manikumar Reddy O

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34403/
---

(Updated May 19, 2015, 12:59 p.m.)


Review request for kafka.


Bugs: KAFKA-2198
https://issues.apache.org/jira/browse/KAFKA-2198


Repository: kafka


Description
---

kafka-topics.sh: return non-zero error status on failures


Diffs (updated)
-

  core/src/main/scala/kafka/admin/TopicCommand.scala 
8e6f18633b25bf1beee3f813b28ef7aa7d779d7b 

Diff: https://reviews.apache.org/r/34403/diff/


Testing
---


Thanks,

Manikumar Reddy O


