Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-11-11 Thread Vito Jeng
Hi, Matthias,

Sorry for the late reply.

> I am wondering what the semantic impact/change is, if we introduce
> `RetryableStateStoreException` and `FatalStateStoreException` that both
> inherit from it. While I like the introduction of both from a high level
> point of view, I just want to make sure it's semantically sound and
> backward compatible. Atm, I think it's fine, but I want to point it out
> such that everybody can think about this, too, so we can verify that
> it's a natural evolving API change.

Thank you for pointing this out. This is really important for the public API.

While replying to you, I found that the KIP needs some modifications.
I will fix it ASAP, and then let's continue the discussion.

---
Vito


On Wed, Nov 7, 2018 at 7:06 AM Matthias J. Sax 
wrote:

> Hey Vito,
>
> I saw that you updated your PR, but did not reply to my last comments.
> Any thoughts?
>
>
> -Matthias
>
> On 10/19/18 10:34 AM, Matthias J. Sax wrote:
> > Glad to have you back Vito :)
> >
> > Some follow up thoughts:
> >
> >  - the current `InvalidStateStoreException` is documented as being
> > sometimes retry-able. From the JavaDocs:
> >
> >> These exceptions may be transient [...] Hence, it is valid to backoff
> and retry when handling this exception.
> >
> > I am wondering what the semantic impact/change is, if we introduce
> > `RetryableStateStoreException` and `FatalStateStoreException` that both
> > inherit from it. While I like the introduction of both from a high level
> > point of view, I just want to make sure it's semantically sound and
> > backward compatible. Atm, I think it's fine, but I want to point it out
> > such that everybody can think about this, too, so we can verify that
> > it's a natural evolving API change.
> >
> >  - StateStoreClosedException:
> >
> >> will be wrapped to StateStoreMigratedException or
> StateStoreNotAvailableException later.
> >
> > Can you clarify the cases (ie, when will it be wrapped with the one or
> > the other)?
> >
> >  - StateStoreIsEmptyException:
> >
> > I don't understand the semantic of this exception. Maybe it's a naming
> > issue?
> >
> >> will be wrapped to StateStoreMigratedException or
> StateStoreNotAvailableException later.
> >
> > Also, can you clarify the cases (ie, when will it be wrapped with the
> > one or the other)?
> >
> >
> > I am also wondering if we should introduce a fatal exception
> > `UnknownStateStoreException` to tell users that they passed in an unknown
> > store name?
> >
> >
> >
> > -Matthias
> >
> >
> >
> > On 10/17/18 8:14 PM, vito jeng wrote:
> >> Just opened a PR for further discussion:
> >> https://github.com/apache/kafka/pull/5814
> >>
> >> Any suggestion is welcome.
> >> Thanks!
> >>
> >> ---
> >> Vito
> >>
> >>
> >> On Thu, Oct 11, 2018 at 12:14 AM vito jeng  wrote:
> >>
> >>> Hi John,
> >>>
> >>> Thanks for reviewing the KIP.
> >>>
>  I didn't follow the addition of a new method to the QueryableStoreType
>  interface. Can you elaborate why this is necessary to support the new
>  exception types?
> >>>
> >>> To support the new exception types, I would check stream state in the
> >>> following classes:
> >>>   - CompositeReadOnlyKeyValueStore class
> >>>   - CompositeReadOnlySessionStore class
> >>>   - CompositeReadOnlyWindowStore class
> >>>   - DelegatingPeekingKeyValueIterator class
> >>>
> >>> It is also necessary to keep backward compatibility, so I plan to pass
> >>> the streams instance to the QueryableStoreType instance when
> >>> KafkaStreams#store() is invoked.
> >>> It looks like the simplest way, I think.
> >>>
> >>> This is why I added a new method to the QueryableStoreType interface. I
> >>> understand that we should try to avoid adding new public API methods;
> >>> however, at the moment I have no better ideas.
> >>>
> >>> Any thoughts?
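
For illustration, a rough sketch (in Java) of the shape being discussed above:
the accepts/create methods are the existing QueryableStoreType API, while the
extra setter is purely hypothetical and only mirrors the idea of handing the
KafkaStreams instance to the store wrappers; it is not the final KIP-216 API.

    // org.apache.kafka.streams.state.QueryableStoreType, as discussed above:
    public interface QueryableStoreType<T> {

        // existing methods
        boolean accepts(final StateStore stateStore);
        T create(final StateStoreProvider storeProvider, final String storeName);

        // hypothetical addition sketched from this discussion, so the composite
        // stores can inspect the KafkaStreams state when a query fails
        default void setStreams(final KafkaStreams streams) { }
    }
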
> >>>
> >>>
>  Also, looking over your KIP again, it seems valuable to introduce
>  "retriable store exception" and "fatal store exception" marker interfaces
>  that the various exceptions can mix in. It would be nice from a usability
>  perspective to be able to just log and retry on any "retriable" exception
>  and log and shutdown on any fatal exception.
> >>>
> >>> I agree that this is valuable to the user.
> >>> I'll update the KIP.
> >>>
> >>>
> >>> Thanks
> >>>
> >>>
> >>> ---
> >>> Vito
> >>>
> >>>
> >>> On Tue, Oct 9, 2018 at 2:30 AM John Roesler  wrote:
> >>>
>  Hi Vito,
> 
>  I'm glad to hear you're well again!
> 
>  I didn't follow the addition of a new method to the QueryableStoreType
>  interface. Can you elaborate why this is necessary to support the new
>  exception types?
> 
>  Also, looking over your KIP again, it seems valuable to introduce
>  "retriable store exception" and "fatal store exception" marker interfaces
>  that the various exceptions can mix in. It would be nice from a usability
>  perspective to be able to just log and retry on any "retriable" exception

Re: [DISCUSS]KIP-216: IQ should throw different exceptions for different errors

2018-11-11 Thread Vito Jeng
Hi, Matthias,

The KIP has been updated.

> - StateStoreClosedException:
>   will be wrapped to StateStoreMigratedException or
>   StateStoreNotAvailableException later.
> Can you clarify the cases (ie, when will it be wrapped with the one or
> the other)?

For example, in the implementation (CompositeReadOnlyKeyValueStore#get), we
get all stores first, and then call ReadOnlyKeyValueStore#get on every store
in the iteration to get the value.

When ReadOnlyKeyValueStore#get is called, a StateStoreClosedException will be
thrown if the state store is not open.
We need to catch the StateStoreClosedException and wrap it in a different
exception type, depending on the stream state (see the sketch below):
  * If the stream's state is CREATED, we wrap StateStoreClosedException
with StreamThreadNotStartedException. The user can retry until the state
reaches RUNNING.
  * If the stream's state is RUNNING / REBALANCING, the state store has
probably been migrated, so we wrap StateStoreClosedException with
StateStoreMigratedException. The user can rediscover the state store.
  * If the stream's state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR, the
stream thread is not available, so we wrap StateStoreClosedException with
StateStoreNotAvailableException. The user cannot retry when this exception
is thrown.
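
For illustration only, a minimal sketch (in Java) of the mapping above. The
method name and placement are hypothetical, KafkaStreams.State is the existing
enum, the exception types are the ones proposed in KIP-216, and their
(message, cause) constructors are assumed:

    // Hypothetical helper: wrap a StateStoreClosedException according to the
    // current KafkaStreams state, following the three cases listed above.
    InvalidStateStoreException wrapClosedException(final KafkaStreams.State state,
                                                   final StateStoreClosedException cause) {
        switch (state) {
            case CREATED:
                return new StreamThreadNotStartedException("Streams has not been started yet", cause);
            case RUNNING:
            case REBALANCING:
                return new StateStoreMigratedException("State store may have been migrated", cause);
            default: // PENDING_SHUTDOWN, NOT_RUNNING, ERROR
                return new StateStoreNotAvailableException("Streams is shutting down or not running", cause);
        }
    }
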


> - StateStoreIsEmptyException:
>  I don't understand the semantic of this exception. Maybe it's a naming
>  issue?

I think yes. :)
Would `EmptyStateStoreException` be better? (I have already updated it in the KIP.)


> - StateStoreIsEmptyException:
> will be wrapped to StateStoreMigratedException or
> StateStoreNotAvailableException later.
> Also, can you clarify the cases (ie, when will it be wrapped with the one
> or the other)?

For example, in the implementation (CompositeReadOnlyKeyValueStore#get), we
call StateStoreProvider#stores (WrappingStoreProvider#stores) to get all
stores. An EmptyStateStoreException will be thrown when no store can be found,
and then we need to catch it and wrap it in a different exception type, using
the same state-based rules as above:
  * If the stream's state is CREATED, we wrap EmptyStateStoreException with
StreamThreadNotStartedException. The user can retry until the state reaches
RUNNING.
  * If the stream's state is RUNNING / REBALANCING, the state store has
probably been migrated, so we wrap EmptyStateStoreException with
StateStoreMigratedException. The user can rediscover the state store.
  * If the stream's state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR, the
stream thread is not available, so we wrap EmptyStateStoreException with
StateStoreNotAvailableException. The user cannot retry when this exception is
thrown. (A sketch of how a caller might handle the retryable vs. fatal cases
follows.)
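
For illustration, a minimal sketch (in Java) of how a caller could react to
these categories, assuming the retryable/fatal split proposed in the KIP; the
store name, backoff, and helper method are made up for the example:

    // Hypothetical caller-side handling: retry on the retryable exceptions,
    // rethrow the fatal one. Assumes the KIP-216 exception types exist.
    ReadOnlyKeyValueStore<String, Long> waitForStore(final KafkaStreams streams)
            throws InterruptedException {
        while (true) {
            try {
                return streams.store("counts-store", QueryableStoreTypes.keyValueStore());
            } catch (StreamThreadNotStartedException | StateStoreMigratedException e) {
                Thread.sleep(100); // retryable: back off, then query / rediscover again
            } catch (StateStoreNotAvailableException e) {
                throw e;           // fatal: the streams instance is shutting down or dead
            }
        }
    }
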

I hope the above reply clarifies things.

The last point I have not yet replied to is:

> I am also wondering if we should introduce a fatal exception
> `UnknownStateStoreException` to tell users that they passed in an unknown
> store name?

Until now, the unknown state store case has not been considered in the KIP.
I believe it would be very useful for users.

Looking at the related code (WrappingStoreProvider#stores),
I found that I can't distinguish between a migrated state store and an
unknown state store.

Any thoughts?

---
Vito



On Sun, Nov 11, 2018 at 5:31 PM Vito Jeng  wrote:

> Hi, Matthias,
>
> Sorry for the late reply.
>
> > I am wondering what the semantic impact/change is, if we introduce
> > `RetryableStateStoreException` and `FatalStateStoreException` that both
> > inherit from it. While I like the introduction of both from a high level
> > point of view, I just want to make sure it's semantically sound and
> > backward compatible. Atm, I think it's fine, but I want to point it out
> > such that everybody can think about this, too, so we can verify that
> > it's a natural evolving API change.
>
> Thank you for pointing this out. This is really important for the public API.
>
> While replying to you, I found that the KIP needs some modifications.
> I will fix it ASAP, and then let's continue the discussion.
>
> ---
> Vito
>
>
> On Wed, Nov 7, 2018 at 7:06 AM Matthias J. Sax 
> wrote:
>
>> Hey Vito,
>>
>> I saw that you updated your PR, but did not reply to my last comments.
>> Any thoughts?
>>
>>
>> -Matthias
>>
>> On 10/19/18 10:34 AM, Matthias J. Sax wrote:
>> > Glad to have you back Vito :)
>> >
>> > Some follow up thoughts:
>> >
>> >  - the current `InvalidStateStoreException` is documented as being
>> > sometimes retry-able. From the JavaDocs:
>> >
>> >> These exceptions may be transient [...] Hence, it is valid to backoff
>> and retry when handling this exception.
>> >
>> > I am wondering what the semantic impact/change is, if we introduce
>> > `RetryableStateStoreException` and `FatalStateStoreException` that both
>> > inherit from it. While I like the introduction of both from a high level
>> > point of view, I just want to make sure it's semantically sound and
>> > backward compatible. Atm, I think it's fine, but I want to point it out
>> > such that everybody can think about this, too, so we can verify that
>> > it's a natural evolving API change.
>> >
>> >  - StateStoreClosedException:
>> >
>> >> will be wrapped to StateStoreMigratedException or
>> StateStoreNotAvailableException later.

Kafka message set V2

2018-11-11 Thread Victor Denisov
Hi,

I'm working on an implementation of a Kafka client for Go. It's a client
written fully in Go: https://github.com/segmentio/kafka-go

This library doesn't seem to support message set V2, and it throws errors
saying that the v2 message format can't be handled. However, I'm having
trouble getting a stable reproduction: the Kafka server may or may not send
message sets in v2.

Can you recommend how to make Kafka send messages in v2 format, so that I can
fix the issue and test it?

Thanks,
Victor.


Re: [DISCUSS] KIP-258: Allow to Store Record Timestamps in RocksDB

2018-11-11 Thread Matthias J. Sax
Adam,

I am still working on it. Was pulled into a lot of other tasks lately so
this was delayed. Also had some discussions about simplifying the
upgrade path with some colleagues and I am prototyping this atm. Hope to
update the KIP accordingly soon.

-Matthias

On 11/10/18 7:41 AM, Adam Bellemare wrote:
> Hello Matthias
> 
> I am curious as to the status of this KIP. TTL and expiry of records will
> be extremely useful for several of our business use-cases, as well as
> another KIP I had been working on.
> 
> Thanks
> 
> 
> 
> On Mon, Aug 13, 2018 at 10:29 AM Eno Thereska 
> wrote:
> 
>> Hi Matthias,
>>
>> Good stuff. Could you comment a bit on how future-proof this change is? For
>> example, if we want to store both event timestamp "and" processing time in
>> RocksDB, will we then need another interface (e.g. called
>> KeyValueWithTwoTimestampsStore)?
>>
>> Thanks
>> Eno
>>
>> On Thu, Aug 9, 2018 at 2:30 PM, Matthias J. Sax 
>> wrote:
>>
>>> Thanks for your input Guozhang and John.
>>>
>>> I see your point that the upgrade API is not simple. If you don't
>>> think it's valuable to make generic store upgrades possible (atm), we
>>> can make the API internal, too. The impact is that we only support a
>>> predefined set of upgrades (ie, KV to KVwithTs, Windowed to
>>> WindowedWithTS etc) for which we implement the internal interfaces.
>>>
>>> We can keep the design generic, so if we decide to make it public, we
>>> don't need to re-invent it. This will also have the advantage, that we
>>> can add upgrade pattern for other stores later, too.
>>>
>>> I also agree that the `StoreUpgradeBuilder` is a little ugly, but it
>>> was the only way I could find to design a generic upgrade interface. If
>>> we decide to hide all the upgrade stuff, `StoreUpgradeBuilder` would
>>> become an internal interface, I guess (I don't think we can remove it).
>>>
>>> I will wait for more feedback about this, and if nobody wants to keep it
>>> as a public API I will update the KIP accordingly. I will add some more
>>> clarifications for different upgrade patterns in the meantime and fix
>>> the typos/minor issues.
>>>
>>> About adding a new state UPGRADING: maybe we could do that. However, I
>>> find it particularly difficult to estimate when we should switch to
>>> RUNNING; thus, I am a little hesitant. Using store callbacks
>>> or just logging the progress, including some indication of the "lag",
>>> might actually be sufficient. Not sure what others think?
>>>
>>> About "value before timestamp": no real reason and I think it does not
>>> make any difference. Do you want to change it?
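
For illustration, a minimal sketch (in Java) of one possible timestamped-value
layout, here with the 8-byte timestamp placed before the serialized value; the
helper names are made up and this is not the layout decided in the KIP:

    import java.nio.ByteBuffer;

    // Pack/unpack a record timestamp together with the serialized value:
    // [ 8-byte big-endian timestamp ][ value bytes ].
    static byte[] packTimestampAndValue(final long timestamp, final byte[] value) {
        return ByteBuffer.allocate(8 + value.length).putLong(timestamp).put(value).array();
    }

    static long unpackTimestamp(final byte[] packed) {
        return ByteBuffer.wrap(packed).getLong();            // first 8 bytes
    }

    static byte[] unpackValue(final byte[] packed) {
        final byte[] value = new byte[packed.length - 8];
        ByteBuffer.wrap(packed, 8, value.length).get(value); // bytes after the timestamp
        return value;
    }
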
>>>
>>> About upgrade robustness: yes, we cannot control whether an instance fails.
>>> That is what I meant by "we need to write tests". The upgrade should be
>>> able to continue even if an instance goes down (and we must make sure
>>> that we don't end up in an invalid state that forces us to wipe out the
>>> whole store). Thus, we need to write system tests that fail instances
>>> during upgrade.
>>>
>>> For `in_place_offline` upgrade: I don't think we need this mode, because
>>> people can do this via a single rolling bounce.
>>>
>>>  - prepare code and switch KV-Store to KVwithTs-Store
>>>  - do a single rolling bounce (don't set any upgrade config)
>>>
>>> For this case, the `StoreUpgradeBuilder` (or `KVwithTs-Store` if we
>>> remove the `StoreUpgradeBuilder`) will detect that there is only an old
>>> local KV store w/o TS, will start to restore the new KVwithTs store,
>>> wipe out the old store and replace with the new store after restore is
>>> finished, and start processing only afterwards. (I guess we need to
>>> document this case -- will also add it to the KIP.)
>>>
>>>
>>>
>>> -Matthias
>>>
>>>
>>>
>>> On 8/9/18 1:10 PM, John Roesler wrote:
 Hi Matthias,

 I think this KIP is looking really good.

 I have a few thoughts to add to the others:

 1. You mentioned at one point users needing to configure
 `upgrade.mode="null"`. I think this was a typo and you meant to say they
 should remove the config. If they really have to set it to a string "null"
 or even set it to a null value but not remove it, it would be unfortunate.

 2. In response to Bill's comment #1, you said that "The idea is that the
 upgrade should be robust and not fail. We need to write according tests".
 I may have misunderstood the conversation, but I don't think it's within
 our power to say that an instance won't fail. What if one of my computers
 catches on fire? What if I'm deployed in the cloud and one instance
 disappears and is replaced by a new one? Or what if one instance goes AWOL
 for a long time and then suddenly returns? How will the upgrade process
 behave in light of such failures?

 3. Your thought about making in-place an offline mode is interesting, but
 it might be a bummer for on-prem users who wish to upgrade online, but

Re: [VOTE] 2.1.0 RC1

2018-11-11 Thread Jonathan Santilli
Hello,

+1

I have downloaded the release artifacts from
http://home.apache.org/~lindong/kafka-2.1.0-rc1/
Ran a 3-broker cluster (Java 8, 8u192b12).
Ran kafka-monitor for about 1 hour without problems.

Thanks,
--
Jonathan


On Fri, Nov 9, 2018 at 11:33 PM Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for feature release of Apache Kafka 2.1.0.
>
> This is a major version release of Apache Kafka. It includes 28 new KIPs
> and critical bug fixes. Please see the Kafka 2.1.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
>
> Here are a few notable highlights:
>
> - Java 11 support
> - Support for Zstandard, which achieves compression comparable to gzip with
> higher compression and especially decompression speeds (KIP-110)
> - Avoid expiring committed offsets for active consumer group (KIP-211)
> - Provide Intuitive User Timeouts in The Producer (KIP-91)
> - Kafka's replication protocol now supports improved fencing of zombies.
> Previously, under certain rare conditions, if a broker became partitioned
> from Zookeeper but not the rest of the cluster, then the logs of replicated
> partitions could diverge and cause data loss in the worst case (KIP-320)
> - Streams API improvements (KIP-319, KIP-321, KIP-330, KIP-353, KIP-356)
> - Admin script and admin client API improvements to simplify admin
> operation (KIP-231, KIP-308, KIP-322, KIP-324, KIP-338, KIP-340)
> - DNS handling improvements (KIP-235, KIP-302)
>
> Release notes for the 2.1.0 release:
> http://home.apache.org/~lindong/kafka-2.1.0-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, Nov 15, 12 pm PT ***
>
> * Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-2.1.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~lindong/kafka-2.1.0-rc1/javadoc/
>
> * Tag to be voted upon (off 2.1 branch) is the 2.1.0-rc1 tag:
> https://github.com/apache/kafka/tree/2.1.0-rc1
>
> * Documentation:
> http://kafka.apache.org/21/documentation.html
>
> * Protocol:
> http://kafka.apache.org/21/protocol.html
>
> * Successful Jenkins builds for the 2.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/50/
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
> Cheers,
> Dong
>


-- 
Santilli Jonathan


[jira] [Created] (KAFKA-7615) Support different topic name in source and destination server in Mirrormaker

2018-11-11 Thread Adeeti Kaushal (JIRA)
Adeeti Kaushal created KAFKA-7615:
-

 Summary: Support different topic name in source and destination 
server in Mirrormaker
 Key: KAFKA-7615
 URL: https://issues.apache.org/jira/browse/KAFKA-7615
 Project: Kafka
  Issue Type: New Feature
  Components: mirrormaker
Reporter: Adeeti Kaushal


Currently MirrorMaker only supports using the same topic name in the source and
destination brokers. Support for different topic names in the source and
destination brokers is needed.

 

source broker: topic name -> topicA

destination broker: topic name -> topicA_new



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7616) MockConsumer can return ConsumerRecords objects with a non-empty map but no records

2018-11-11 Thread JIRA
Stig Rohde Døssing created KAFKA-7616:
-

 Summary: MockConsumer can return ConsumerRecords objects with a 
non-empty map but no records
 Key: KAFKA-7616
 URL: https://issues.apache.org/jira/browse/KAFKA-7616
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.0.1
Reporter: Stig Rohde Døssing
Assignee: Stig Rohde Døssing


The ConsumerRecords returned from MockConsumer.poll can return false for
isEmpty while not containing any records. This behavior is because
MockConsumer.poll eagerly adds entries to the returned
Map<TopicPartition, List<ConsumerRecord>>, based on which partitions have been
added. If no records are returned for a partition, e.g. because the position
was too far ahead, the entry for that partition will still be there.

 

The MockConsumer should lazily add entries to the map as they are needed, since 
it is more in line with how the real consumer behaves.
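
For illustration, a minimal sketch (in Java) that could reproduce the symptom
described above; the topic name and offsets are arbitrary, and the printed
result reflects the pre-fix behavior described in this issue:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.MockConsumer;
    import org.apache.kafka.clients.consumer.OffsetResetStrategy;
    import org.apache.kafka.common.TopicPartition;

    public class MockConsumerEmptyDemo {
        public static void main(final String[] args) {
            MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
            TopicPartition tp = new TopicPartition("test-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));

            // One record at offset 0, but the position is moved past it, so the
            // poll returns no records for the partition.
            consumer.addRecord(new ConsumerRecord<>("test-topic", 0, 0L, "key", "value"));
            consumer.seek(tp, 1L);

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(0));
            // With the eager map population described above, this can print
            // "isEmpty=false, count=0".
            System.out.println("isEmpty=" + records.isEmpty() + ", count=" + records.count());
        }
    }
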



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-374: Add '--help' option to all available Kafka CLI commands

2018-11-11 Thread Daniele Ascione
+1 (non-binding)

On Fri, Nov 9, 2018, 02:09 Colin McCabe  wrote:

> +1 (binding)
>
>
>
> On Wed, Oct 31, 2018, at 05:42, Srinivas Reddy wrote:
> > Hi All,
> >
> > I would like to call for a vote on KIP-374:
> > https://cwiki.apache.org/confluence/x/FgSQBQ
> >
> > Summary:
> > Currently, the '--help' option is recognized by some Kafka commands
> > but not all. To provide a consistent user experience, it would
> > be nice to add a '--help' option to all Kafka commands.
> >
> > I'd appreciate any votes or feedback.
> >
> > --
> > Srinivas Reddy
> >
> > http://mrsrinivas.com/
> >
> >
> > (Sent via gmail web)
>
>


Re: [VOTE] KIP-374: Add '--help' option to all available Kafka CLI commands

2018-11-11 Thread Harsha Chintalapani
+1 (binding)

-Harsha
On Nov 11, 2018, 3:49 PM -0800, Daniele Ascione , wrote:
> +1 (non-binding)
>
> > On Fri, Nov 9, 2018, 02:09 Colin McCabe  wrote:
>
> > +1 (binding)
> >
> >
> >
> > On Wed, Oct 31, 2018, at 05:42, Srinivas Reddy wrote:
> > > Hi All,
> > >
> > > I would like to call for a vote on KIP-374:
> > > https://cwiki.apache.org/confluence/x/FgSQBQ
> > >
> > > Summary:
> > > Currently, the '--help' option is recognized by some Kafka commands
> > > but not all. To provide a consistent user experience, it would
> > > be nice to add a '--help' option to all Kafka commands.
> > >
> > > I'd appreciate any votes or feedback.
> > >
> > > --
> > > Srinivas Reddy
> > >
> > > http://mrsrinivas.com/
> > >
> > >
> > > (Sent via gmail web)
> >
> >


[jira] [Resolved] (KAFKA-7590) GETTING HUGE MESSAGE STRUCTURE THROUGH JMS CONNECTOR

2018-11-11 Thread Chenchu Lakshman kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chenchu Lakshman kumar resolved KAFKA-7590.
---
Resolution: Fixed

> GETTING HUGE MESSAGE STRUCTURE THROUGH JMS CONNECTOR
> 
>
> Key: KAFKA-7590
> URL: https://issues.apache.org/jira/browse/KAFKA-7590
> Project: Kafka
>  Issue Type: Test
>  Components: config, KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Chenchu Lakshman kumar
>Priority: Major
>
> Message
>  
> {"schema":{"type":"struct","fields":[
> {"type":"string","optional":false,"doc":"This field stores the value of 
> `Message.getJMSMessageID() 
> `_.","field":"messageID"}
> ,
> {"type":"string","optional":false,"doc":"This field stores the type of 
> message that was received. This corresponds to the subinterfaces of `Message 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html]>`_. 
> `BytesMessage 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/BytesMessage.html]>`_ = 
> `bytes`, `MapMessage 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/MapMessage.html]>`_ = `map`, 
> `ObjectMessage 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/ObjectMessage.html]>`_ = 
> `object`, `StreamMessage 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/StreamMessage.html]>`_ = 
> `stream` and `TextMessage 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/TextMessage.html]>`_ = 
> `text`. The corresponding field will be populated with the values from the 
> respective Message subinterface.","field":"messageType"}
> ,
> {"type":"int64","optional":false,"doc":"Data from the `getJMSTimestamp() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSTimestamp(])>`_
>  method.","field":"timestamp"}
> ,
> {"type":"int32","optional":false,"doc":"This field stores the value of 
> `Message.getJMSDeliveryMode() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSDeliveryMode(])>`_.","field":"deliveryMode"}
> ,
> {"type":"string","optional":true,"doc":"This field stores the value of 
> `Message.getJMSCorrelationID() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSCorrelationID(])>`_.","field":"correlationID"}
> ,{"type":"struct","fields":[
> {"type":"string","optional":false,"doc":"The type of JMS Destination, and 
> either ``queue`` or ``topic``.","field":"destinationType"}
> ,
> {"type":"string","optional":false,"doc":"The name of the destination. This 
> will be the value of `Queue.getQueueName() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Queue.html#getQueueName(])>`_ 
> or `Topic.getTopicName() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Topic.html#getTopicName(])>`_.","field":"name"}
> ],"optional":true,"name":"io.confluent.connect.jms.Destination","doc":"This 
> schema is used to represent a JMS Destination, and is either `queue 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Queue.html]>`_ or `topic 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Topic.html]>`_.","field":"replyTo"},{"type":"struct","fields":[
> {"type":"string","optional":false,"doc":"The type of JMS Destination, and 
> either ``queue`` or ``topic``.","field":"destinationType"}
> ,
> {"type":"string","optional":false,"doc":"The name of the destination. This 
> will be the value of `Queue.getQueueName() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Queue.html#getQueueName(])>`_ 
> or `Topic.getTopicName() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Topic.html#getTopicName(])>`_.","field":"name"}
> ],"optional":true,"name":"io.confluent.connect.jms.Destination","doc":"This 
> schema is used to represent a JMS Destination, and is either `queue 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Queue.html]>`_ or `topic 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Topic.html]>`_.","field":"destination"},
> {"type":"boolean","optional":false,"doc":"This field stores the value of 
> `Message.getJMSRedelivered() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSRedelivered(])>`_.","field":"redelivered"}
> ,
> {"type":"string","optional":true,"doc":"This field stores the value of 
> `Message.getJMSType() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSType(])>`_.","field":"type"}
> ,
> {"type":"int64","optional":false,"doc":"This field stores the value of 
> `Message.getJMSExpiration() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSExpiration(])>`_.","field":"expiration"}
> ,
> {"type":"int32","optional":false,"doc":"This field stores the value of 
> `Message.getJMSPriority() 
> <[http://docs.oracle.com/javaee/6/api/javax/jms/Message.html#getJMSPriority(])>`_.","field":"priority"}
> 

Re: [VOTE] KIP-374: Add '--help' option to all available Kafka CLI commands

2018-11-11 Thread Becket Qin
Thanks for the KIP. +1 (binding).

On Mon, Nov 12, 2018 at 9:59 AM Harsha Chintalapani  wrote:

> +1 (binding)
>
> -Harsha
> On Nov 11, 2018, 3:49 PM -0800, Daniele Ascione ,
> wrote:
> > +1 (non-binding)
> >
> > > On Fri, Nov 9, 2018, 02:09 Colin McCabe  wrote:
> >
> > > +1 (binding)
> > >
> > >
> > >
> > > On Wed, Oct 31, 2018, at 05:42, Srinivas Reddy wrote:
> > > > Hi All,
> > > >
> > > > I would like to call for a vote on KIP-374:
> > > > https://cwiki.apache.org/confluence/x/FgSQBQ
> > > >
> > > > Summary:
> > > > Currently, the '--help' option is recognized by some Kafka commands
> > > > but not all. To provide a consistent user experience, it would
> > > > be nice to add a '--help' option to all Kafka commands.
> > > >
> > > > I'd appreciate any votes or feedback.
> > > >
> > > > --
> > > > Srinivas Reddy
> > > >
> > > > http://mrsrinivas.com/
> > > >
> > > >
> > > > (Sent via gmail web)
> > >
> > >
>