> > I wanted to start a vote on approval of KIP-393
> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-393%3A+Time+windowed+serde+to+properly+deserialize+changelog+input+topic>
> > to fix the current time windowed serde so that it properly deserializes
> > changelog input topics. Let me know what you guys think.
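> >
> > For reference, here is a minimal sketch of reading a windowed changelog
> > topic with a window-size-aware deserializer. The topic name and window
> > size below are illustrative assumptions on my part, not part of the KIP:
> >
> >   import java.util.Collections;
> >   import java.util.Properties;
> >   import org.apache.kafka.clients.consumer.ConsumerConfig;
> >   import org.apache.kafka.clients.consumer.KafkaConsumer;
> >   import org.apache.kafka.common.serialization.Serdes;
> >   import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;
> >   import org.apache.kafka.streams.kstream.Windowed;
> >
> >   public class ChangelogInspector {
> >     public static void main(String[] args) {
> >       Properties props = new Properties();
> >       props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> >       props.put(ConsumerConfig.GROUP_ID_CONFIG, "changelog-inspector");
> >       long windowSizeMs = 60_000L; // hypothetical 1-minute windows
> >       // Passing the window size lets the deserializer reconstruct the
> >       // window end instead of guessing it from the key bytes alone.
> >       try (KafkaConsumer<Windowed<String>, Long> consumer = new KafkaConsumer<>(
> >           props,
> >           new TimeWindowedDeserializer<>(Serdes.String().deserializer(), windowSizeMs),
> >           Serdes.Long().deserializer())) {
> >         consumer.subscribe(Collections.singletonList("my-app-counts-changelog"));
> >         consumer.poll(java.time.Duration.ofSeconds(1))
> >             .forEach(r -> System.out.println(r.key() + " -> " + r.value()));
> >       }
> >     }
> >   }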
> >
> > Thanks,
> > Shawn
> >
>
>
> --
> -- Guozhang
>
--
Liquan Pei
Software Engineer, Confluent Inc
I understood that the community is busy working on the 2.0 release, but this
> KIP is really important for our internal use case. So if any of you have
> time, please focus on clarifying the use case and reaching agreement on
> the API. Really appreciate your time!
>
>
> Best,
>
> Boyang
>
>
>
--
Liquan Pei
Software Engineer, Confluent Inc
col#KIP-429:KafkaConsumerIncrementalRebalanceProtocol-RebalanceCallbackErrorHandling
>
> And the on-going PRs available for review:
>
> Part I: https://github.com/apache/kafka/pull/6528
> Part II: https://github.com/apache/kafka/pull/6778
>
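>
> For anyone following along, a rough sketch of the consumer-side callback
> shape under discussion; the onPartitionsLost handling follows the KIP's
> rebalance-callback-error-handling section, so treat this as an
> illustration rather than the final API:
>
>   import java.util.Collection;
>   import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
>   import org.apache.kafka.common.TopicPartition;
>
>   public class LoggingRebalanceListener implements ConsumerRebalanceListener {
>     @Override
>     public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
>       // With incremental rebalancing, only the partitions actually being
>       // migrated away are revoked, not the whole assignment.
>       System.out.println("Revoked: " + partitions);
>     }
>     @Override
>     public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
>       System.out.println("Newly assigned: " + partitions);
>     }
>     @Override
>     public void onPartitionsLost(Collection<TopicPartition> partitions) {
>       // Ownership was lost (e.g. the member missed a rebalance), so
>       // offsets for these partitions must not be committed.
>       System.out.println("Lost: " + partitions);
>     }
>   }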
>
> Thanks
> -- Guozhang
>
--
Liquan Pei
Software Engineer, Confluent Inc
> > >>>>>>> 60 consumers 1598621 bytes
> > >>>>>>> 70 consumers 1837359 bytes
> > >>>>>>> 80 consumers 2066934 bytes
> > >>>>>>> 90 consumers 2310970 bytes
> > >>>>>>> 100 consumers 2542735 bytes
> > >>>>>>>
> > >>>>>>> Note that the growth itself is pretty gradual. Plotting the points
> > >>>>>>> makes it look roughly linear w.r.t. the number of consumers:
> > >>>>>>>
> > >>>>>>> https://www.wolframalpha.com/input/?i=(1,+54739),+(5,+261524),+(10,+459804),+(20,+702499),+(30,+930525),+(40,+1115657),+(50,+1363112),+(60,+1598621),+(70,+1837359),+(80,+2066934),+(90,+2310970),+(100,+2542735)
> > >>>>>>>
> > >>>>>>> Also note that these numbers aren't averages or medians or anything
> > >>>>>>> like that. It's just the byte size from a given run. I did run them
> > >>>>>>> a few times and saw similar results.
> > >>>>>>>
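> > >>>>>>> As a back-of-the-envelope check (my own arithmetic on the numbers
> > >>>>>>> above, nothing more), a least-squares fit puts the growth at
> > >>>>>>> roughly 24 KB of group metadata per consumer:
> > >>>>>>>
> > >>>>>>>   long[][] pts = {{1, 54739}, {5, 261524}, {10, 459804}, {20, 702499},
> > >>>>>>>       {30, 930525}, {40, 1115657}, {50, 1363112}, {60, 1598621},
> > >>>>>>>       {70, 1837359}, {80, 2066934}, {90, 2310970}, {100, 2542735}};
> > >>>>>>>   double n = pts.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
> > >>>>>>>   for (long[] p : pts) {
> > >>>>>>>     sx += p[0]; sy += p[1];
> > >>>>>>>     sxx += (double) p[0] * p[0]; sxy += (double) p[0] * p[1];
> > >>>>>>>   }
> > >>>>>>>   double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
> > >>>>>>>   System.out.printf("~%.0f bytes per consumer%n", slope); // ~24000
> > >>>>>>>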
> > >>>>>>> Impact:
> > >>>>>>> Even after adding gzip to the __consumer_offsets topic with my
> > >>>>>>> pending KAFKA-3718 patch, the AwaitingSync phase of the group fails
> > >>>>>>> with RecordTooLargeException. This means the combined size of each
> > >>>>>>> member's subscriptions and assignments exceeded the
> > >>>>>>> KafkaConfig.messageMaxBytes of 1000012 bytes. The group ends up
> > >>>>>>> dying.
> > >>>>>>>
> > >>>>>>> Options:
> > >>>>>>> 1. Config change: reduce the number of consumers in the group. This
> > >>>>>>> isn't always a realistic answer in more strenuous use cases like
> > >>>>>>> MirrorMaker clusters or for auditing.
> > >>>>>>> 2. Config change: split the group into smaller groups which together
> > >>>>>>> will get full coverage of the topics. This gives each group member a
> > >>>>>>> smaller subscription. (ex: g1 has topics starting with a-m while g2
> > >>>>>>> has topics starting with n-z). This would be operationally painful
> > >>>>>>> to manage.
> > >>>>>>> 3. Config change: split the topics among members of the group. Again
> > >>>>>>> this gives each group member a smaller subscription. This would also
> > >>>>>>> be operationally painful to manage.
> > >>>>>>> 4. Config change: bump up KafkaConfig.messageMaxBytes (a topic-level
> > >>>>>>> config) and KafkaConfig.replicaFetchMaxBytes (a broker-level
> > >>>>>>> config). Applying messageMaxBytes to just the __consumer_offsets
> > >>>>>>> topic seems relatively harmless, but bumping up the broker-level
> > >>>>>>> replicaFetchMaxBytes would probably need more attention.
> > >>>>>>> 5. Config change: try different compression codecs. Based on 2
> > >>>>>>> minutes of googling, it seems like lz4 and snappy are faster than
> > >>>>>>> gzip but have worse compression, so this probably won't help.
> > >>>>>>> 6. Implementation change: support sending the regex over the wire
> > >>>>>>> instead of the fully expanded topic subscriptions. I think people
> > >>>>>>> said in the past that different languages have subtle differences
> > >>>>>>> in regex, so this doesn't play nicely with cross-language groups.
> > >>>>>>> 7. Implementation change: maybe we can reverse the mapping? Instead
> > >>>>>>> of mapping from member to subscriptions, we can map a subscription
> > >>>>>>> to a list of members (see the sketch after this list).
> > >>>>>>> 8. Implementation change: maybe we can try to break apart the
> > >>>>>>> subscription and assignments from the same SyncGroupRequest into
> > >>>>>>> multiple records? They can still go to the same message set and get
> > >>>>>>> appended together. This way the limit becomes the segment size,
> > >>>>>>> which shouldn't be a problem. This can be tricky to get right
> > >>>>>>> because we're currently keying these messages on the group, so I
> > >>>>>>> think records from the same rebalance might accidentally compact
> > >>>>>>> one another, but my understanding of compaction isn't that great.
> > >>>>>>>
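> > >>>>>>> To make option 7 concrete, a sketch of the reversed mapping; the
> > >>>>>>> names are made up for illustration and are not actual coordinator
> > >>>>>>> code:
> > >>>>>>>
> > >>>>>>>   import java.util.*;
> > >>>>>>>
> > >>>>>>>   // member -> subscribed topics (what we effectively store today)
> > >>>>>>>   Map<String, Set<String>> subscriptionByMember = new HashMap<>();
> > >>>>>>>   subscriptionByMember.put("consumer-1",
> > >>>>>>>       new HashSet<>(Arrays.asList("topic-a", "topic-b")));
> > >>>>>>>   subscriptionByMember.put("consumer-2",
> > >>>>>>>       new HashSet<>(Arrays.asList("topic-a", "topic-b")));
> > >>>>>>>
> > >>>>>>>   // reversed: identical subscriptions are stored once, with the
> > >>>>>>>   // member ids listed against them
> > >>>>>>>   Map<Set<String>, List<String>> membersBySubscription = new HashMap<>();
> > >>>>>>>   subscriptionByMember.forEach((member, topics) ->
> > >>>>>>>       membersBySubscription
> > >>>>>>>           .computeIfAbsent(topics, k -> new ArrayList<>())
> > >>>>>>>           .add(member));
> > >>>>>>>
> > >>>>>>>   // For a wildcard group where every member subscribes to the same
> > >>>>>>>   // large topic set, the topic list would be serialized once
> > >>>>>>>   // instead of once per member.
> > >>>>>>>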
> > >>>>>>> Todo:
> > >>>>>>> It would be interesting to rerun the tests with no compression just
> > >>>>>>> to see how much gzip is helping, but it's getting late. Maybe
> > >>>>>>> tomorrow?
> > >>>>>>>
> > >>>>>>> - Onur
> > >>>>>>>
> > >>>>>>
> > >>>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> --
> > >>>> -- Guozhang
> > >>>>
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> -- Guozhang
> > >>
> >
> >
>
--
Liquan Pei
Software Engineer, Confluent Inc
It seems that the links to images in the KIP are broken.
Liquan
On Tue, May 24, 2016 at 9:33 AM, parth brahmbhatt <
brahmbhatt.pa...@gmail.com> wrote:
> 110. What does getDelegationTokenAs mean?
> In the current proposal we only allow a user to get a delegation token for
> the identity that it authenticated as.
> could use the OffsetStorageReader to read previously-recorded offsets to
> more intelligently configure its tasks. This seems very straightforward,
> backward compatible, and non-intrusive.
>
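>
> For reference, a minimal sketch of what reading previously-recorded
> offsets looks like from a task today; the partition/offset key names
> ("file", "position") are illustrative:
>
>   import java.util.Collections;
>   import java.util.Map;
>   import org.apache.kafka.connect.storage.OffsetStorageReader;
>
>   // Inside SourceTask.start(...), using the inherited task context:
>   OffsetStorageReader reader = context.offsetStorageReader();
>   Map<String, Object> offset = reader.offset(
>       Collections.singletonMap("file", "/var/log/app.log"));
>   if (offset != null) {
>     Long position = (Long) offset.get("position");
>     // resume reading from 'position' instead of from the beginning
>   }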
> Is there any interest in this? If so, I can create an issue and work on a
> pull req
session.timeout.ms
> as 3 and heartbeat.interval.ms as 1 to the consumer, and polling happens
> for sure within 3. Can anyone help me out? Please let me know if any
> information is needed.
> I am using a 3-node Kafka cluster.
> Thanks,
> Sunny
>
--
Liquan Pei
Software Engineer, Confluent Inc
> feel there's something still to be discussed.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-51+-+List+Connectors+REST+API
> >
> > I'll obviously kick things off with a +1.
> >
> > -Ewen
> >
>
--
Liquan Pei
Department of Physics
University of Massachusetts Amherst
1551
> Email: victory_...@163.com
> ResearchLab Homepage: http://pasa-bigdata.nju.edu.cn
--
Liquan Pei
Software Engineer, Confluent Inc
201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecu
> >> > > > >> has an entry for the largest timestamp. Is that only for
> >> > > > >> restarting after a hard failure?
> >> > > > >>
> >> > > > >> 11. On broker startup, if a log segment misses the time index, do
> >> > > > >> we always rebuild it? This can happen when the broker is upgraded.
> >> > > > >>
> >> > > > >> 12. Related to Guozhang's question #1. It seems it's simpler to
> >> > > > >> add time index entries independent of the offset index, since an
> >> > > > >> index entry may not be added to the offset index and the time
> >> > > > >> index at the same time. Also, this allows the time index to be
> >> > > > >> rebuilt independently if needed.
> >> > > > >>
> >> > > > >> Thanks,
> >> > > > >>
> >> > > > >> Jun
> >> > > > >>
> >> > > > >>
> >> > > > >> On Wed, Apr 6, 2016 at 5:44 PM, Becket Qin <becket@gmail.com>
> >> > > > >> wrote:
> >> > > > >>
> >> > > > >> > Hi all,
> >> > > > >> >
> >> > > > >> > I updated KIP-33 based on the initial implementation. Per
> >> > > > >> > discussion on yesterday's KIP hangout, I would like to initiate
> >> > > > >> > the new vote thread for KIP-33.
> >> > > > >> >
> >> > > > >> > The KIP wiki:
> >> > > > >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index
> >> > > > >> >
> >> > > > >> > Here is a brief summary of the KIP:
> >> > > > >> > 1. We propose to add a time index for each log segment.
> >> > > > >> > 2. The time indices are going to be used for log retention, log
> >> > > > >> > rolling, and message search by timestamp.
> >> > > > >> >
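> >> > > > >> > To illustrate what the index in point 2 enables on the consumer
> >> > > > >> > side, a sketch of a timestamp-based lookup (this uses the
> >> > > > >> > offsetsForTimes API proposed separately, so treat it purely as
> >> > > > >> > an illustration of what the index makes possible):
> >> > > > >> >
> >> > > > >> >   import java.util.Collections;
> >> > > > >> >   import java.util.Map;
> >> > > > >> >   import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
> >> > > > >> >   import org.apache.kafka.common.TopicPartition;
> >> > > > >> >
> >> > > > >> >   // given an already-constructed, assigned KafkaConsumer 'consumer'
> >> > > > >> >   TopicPartition tp = new TopicPartition("my-topic", 0);
> >> > > > >> >   long oneHourAgo = System.currentTimeMillis() - 3600_000L;
> >> > > > >> >   Map<TopicPartition, OffsetAndTimestamp> result =
> >> > > > >> >       consumer.offsetsForTimes(Collections.singletonMap(tp, oneHourAgo));
> >> > > > >> >   OffsetAndTimestamp ot = result.get(tp);
> >> > > > >> >   if (ot != null) {
> >> > > > >> >     // seek to the first message with timestamp >= oneHourAgo
> >> > > > >> >     consumer.seek(tp, ot.offset());
> >> > > > >> >   }
> >> > > > >> >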
> >> > > > >> > There was an old voting thread which has some discussions on
> >> > > > >> > this KIP. The mail thread link is:
> >> > > > >> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201602.mbox/%3ccabtagwgoebukyapfpchmycjk2tepq3ngtuwnhtr2tjvsnc8...@mail.gmail.com%3E
> >> > > > >> >
> >> > > > >> > I have the following WIP patch for reference. It needs a few
> >> > > > >> > more unit tests and documentation. Other than that it should
> >> > > > >> > run fine.
> >> > > > >> >
> >> > > > >> > https://github.com/becketqin/kafka/commit/712357a3fbf1423e05f9eed7d2fed5b6fe6c37b7
> >> > > > >> >
> >> > > > >> > Thanks,
> >> > > > >> >
> >> > > > >> > Jiangjie (Becket) Qin
> >> > > > >> >
> >> > > > >>
> >> > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
> >
> > --
> > -- Guozhang
>
--
Liquan Pei
Software Engineer, Confluent Inc
eventually get a record where the key is A and the value is 5, i.e., the
> number of times the key A was seen in the input stream. However, this is
> not the case. What I receive on the count topic is:
> A:1
> A:2
> A:1
> A:2
> A:1
> A:2
> A:1
> A:2
> A:1
>
> Is
Hi
I would like to start a quick discussion on KIP-56
https://cwiki.apache.org/confluence/display/KAFKA/KIP-56%3A+Allow+cross+origin+HTTP+requests+on+all+HTTP+methods
This proposal is to allow cross origin HTTP requests on all HTTP methods.
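For example, a worker configuration could then enable CORS like this (the
property names are as proposed; the origin value is illustrative):

  # connect-distributed.properties
  # allow browser apps served from this origin to call the REST API
  access.control.allow.origin=http://ui.example.com
  # and on all methods, not just GET/POST
  access.control.allow.methods=GET,POST,PUT,DELETE,OPTIONS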
Thanks,
Liquan
--
Liquan Pei
Software Engineer
Hi
I would like to start vote on KIP-56.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-56%3A+Allow+cross+origin+HTTP+requests+on+all+HTTP+methods
Thanks,
--
Liquan Pei
Software Engineer, Confluent Inc
In Jira, Kafka is not visible in the list of projects in the first field
> of the ticket. I see the hundred-plus other Apache projects, but no Kafka
> :(
>
--
Liquan Pei
Software Engineer, Confluent Inc
tml
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> /**
>
> Thanks,
>
> Gwen
>
--
Liquan Pei
Software Engineer, Confluent Inc
contributed
> numerous patches to Kafka. His most significant contribution is Kafka
> Connect, which was released a few days ago as part of 0.9.
>
> Please join me in welcoming and congratulating Ewen.
>
> Ewen, we look forward to your continued contributions to the Kafka
> community!
Engineer
> Tink AB
>
> Email: jens.ran...@tink.se
> Phone: +46 708 84 18 32
> Web: www.tink.se
>
> Facebook <https://www.facebook.com/#!/tink.se> Linkedin
> <
> http://www.linkedin.com/company/2735919?trk=vsrp_companies_res_photo&trkInfo=VSRPsearchId%3A1057023381369207406670%2CVSRPtargetId%3A2735919%2CVSRPcmpt%3Aprimary
> >
> Twitter <https://twitter.com/tink>
>
--
Liquan Pei
Department of Physics
University of Massachusetts Amherst
code contributions, Boyang
> > > has also helped review even more PRs and KIPs than his own.
> > >
> > > Thanks for all the contributions Boyang! And look forward to more
> > > collaborations with you on Apache Kafka.
> > >
> > >
> > > -- Guozhang, on behalf of the Apache Kafka PMC
> > >
> >
>
--
Liquan Pei
Software Engineer, Confluent Inc
Liquan Pei created KAFKA-7023:
-
Summary: Kafka Streams RocksDB bulk loading config may not be
honored with customized RocksDBConfigSetter
Key: KAFKA-7023
URL: https://issues.apache.org/jira/browse/KAFKA-7023
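For context, a customized setter is registered via rocksdb.config.setter
and looks roughly like the sketch below (the option shown is illustrative;
the point of this ticket is that such a setter runs when a store is opened
and may clobber the bulk loading options Streams applies during restore):

{code}
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

public class MyRocksDBConfigSetter implements RocksDBConfigSetter {
  @Override
  public void setConfig(String storeName, Options options, Map<String, Object> configs) {
    // Applied to every RocksDB store the application opens.
    options.setMaxWriteBufferNumber(4);
  }
}
{code}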
Liquan Pei created KAFKA-7103:
-
Summary: Use bulkloading for RocksDBSegmentedBytesStore during init
Key: KAFKA-7103
URL: https://issues.apache.org/jira/browse/KAFKA-7103
Project: Kafka
Issue
Liquan Pei created KAFKA-7105:
-
Summary: Refactor RocksDBSegmentsBatchingRestoreCallback and
RocksDBBatchingRestoreCallback into a single class
Key: KAFKA-7105
URL: https://issues.apache.org/jira/browse/KAFKA-7105
[
https://issues.apache.org/jira/browse/KAFKA-3734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3734:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> Exceptions in SourceTask.commit() meth
[
https://issues.apache.org/jira/browse/KAFKA-3739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3739:
-
Assignee: Liquan Pei
> Add no-arg constructor for library provided ser
[
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3742:
-
Assignee: Liquan Pei
> Can't run connect-distributed with -dae
[
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3742:
--
Summary: Can't run connect-distributed.sh with -daemon flag (was: Can't
run connect-distri
[
https://issues.apache.org/jira/browse/KAFKA-3742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3742:
--
Description:
Running on Ubuntu 14.04. Discovered while experimenting with various different kafka
[
https://issues.apache.org/jira/browse/KAFKA-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-2894:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> WorkerSinkTask doesn't handle r
Liquan Pei created KAFKA-3782:
-
Summary: Transient failure with
kafkatest.tests.connect.connect_distributed_test.ConnectDistributedTest.test_bounce.clean=True
Key: KAFKA-3782
URL: https://issues.apache.org/jira
[
https://issues.apache.org/jira/browse/KAFKA-3782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3782:
--
Description:
For commit 946ae60
max() arg is an empty sequence
Traceback (most recent call last
[
https://issues.apache.org/jira/browse/KAFKA-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3820:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> Provide utilities for tracking sou
[
https://issues.apache.org/jira/browse/KAFKA-3829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3829:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> Warn that kafka-connect group.id must
[
https://issues.apache.org/jira/browse/KAFKA-3868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3868:
-
Assignee: Liquan Pei
> New producer metric record-size-avg does not provide average record s
Liquan Pei created KAFKA-3920:
-
Summary: Add Schema source connector to Kafka Connect
Key: KAFKA-3920
URL: https://issues.apache.org/jira/browse/KAFKA-3920
Project: Kafka
Issue Type: Improvement
Liquan Pei created KAFKA-4002:
-
Summary: task.open() should be invoked in the case that 0 partitions
are assigned to the task.
Key: KAFKA-4002
URL: https://issues.apache.org/jira/browse/KAFKA-4002
Project: Kafka
[
https://issues.apache.org/jira/browse/KAFKA-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3316:
--
Affects Version/s: 0.10.0.0
Status: Patch Available (was: In Progress)
> Add Conn
[
https://issues.apache.org/jira/browse/KAFKA-3316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3316 started by Liquan Pei.
-
> Add Connect REST API to list available connector clas
[
https://issues.apache.org/jira/browse/KAFKA-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3462 started by Liquan Pei.
-
> Allow SinkTasks to disable consumer offset com
Liquan Pei created KAFKA-3462:
-
Summary: Allow SinkTasks to disable consumer offset commit
Key: KAFKA-3462
URL: https://issues.apache.org/jira/browse/KAFKA-3462
Project: Kafka
Issue Type: Bug
[
https://issues.apache.org/jira/browse/KAFKA-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3462:
--
Description: SinkTasks should be able to disable consumer offset commit if
they manage offsets in the
[
https://issues.apache.org/jira/browse/KAFKA-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3462:
--
Status: Patch Available (was: In Progress)
> Allow SinkTasks to disable consumer offset com
[
https://issues.apache.org/jira/browse/KAFKA-3500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3500:
--
Environment: (was: In some cases, the key or value for the offset map
can be null. However, it
Liquan Pei created KAFKA-3500:
-
Summary: KafkaOffsetBackingStore set method needs to handle null
Key: KAFKA-3500
URL: https://issues.apache.org/jira/browse/KAFKA-3500
Project: Kafka
Issue Type
Liquan Pei created KAFKA-3520:
-
Summary: System tests of config validate and list connectors REST
APIs
Key: KAFKA-3520
URL: https://issues.apache.org/jira/browse/KAFKA-3520
Project: Kafka
Issue
[
https://issues.apache.org/jira/browse/KAFKA-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3520 started by Liquan Pei.
-
> System tests of config validate and list connectors REST A
[
https://issues.apache.org/jira/browse/KAFKA-3520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3520:
--
Status: Patch Available (was: In Progress)
> System tests of config validate and list connectors R
[
https://issues.apache.org/jira/browse/KAFKA-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3526:
--
Description:
In the response of
PUT /connector-plugins/{name}/config/validate,
The value.value
Liquan Pei created KAFKA-3526:
-
Summary: REST APIs return object representation instead of string
for config values, default values and recommended values
Key: KAFKA-3526
URL: https://issues.apache.org/jira/browse
[
https://issues.apache.org/jira/browse/KAFKA-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3526:
--
Description:
In the response of
{code}
PUT /connector-plugins/{name}/config/validate
{code}
The
[
https://issues.apache.org/jira/browse/KAFKA-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3526:
--
Status: Patch Available (was: In Progress)
> REST APIs return object representation instead of str
[
https://issues.apache.org/jira/browse/KAFKA-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3526 started by Liquan Pei.
-
> REST APIs return object representation instead of string for config val
[
https://issues.apache.org/jira/browse/KAFKA-3530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3530:
-
Assignee: Liquan Pei
> Making the broker-list option consistent across all to
[
https://issues.apache.org/jira/browse/KAFKA-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15233722#comment-15233722
]
Liquan Pei commented on KAFKA-3527:
---
[~jasong35] Do you mind if I take
[
https://issues.apache.org/jira/browse/KAFKA-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3552:
-
Assignee: Liquan Pei (was: Neha Narkhede)
> New Consumer: java.lang.OutOfMemoryError: Dir
[
https://issues.apache.org/jira/browse/KAFKA-3552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15238372#comment-15238372
]
Liquan Pei commented on KAFKA-3552:
---
Hi Kanak,
Can you share with us the cons
[
https://issues.apache.org/jira/browse/KAFKA-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3421:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> Update docs with new connector featu
[
https://issues.apache.org/jira/browse/KAFKA-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3421 started by Liquan Pei.
-
> Update docs with new connector featu
[
https://issues.apache.org/jira/browse/KAFKA-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3571:
-
Assignee: Liquan Pei
> Traits for utilities like ConsumerGroupComm
[
https://issues.apache.org/jira/browse/KAFKA-3421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3421:
--
Status: Patch Available (was: In Progress)
> Update docs with new connector featu
[
https://issues.apache.org/jira/browse/KAFKA-3573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15244768#comment-15244768
]
Liquan Pei commented on KAFKA-3573:
---
Thanks for working on this. Currently, there
Liquan Pei created KAFKA-3578:
-
Summary: Allow cross origin HTTP requests on all HTTP methods
Key: KAFKA-3578
URL: https://issues.apache.org/jira/browse/KAFKA-3578
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3578:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> Allow cross origin HTTP requests on
Liquan Pei created KAFKA-3583:
-
Summary: Docs on pause/resume/restart APIs.
Key: KAFKA-3583
URL: https://issues.apache.org/jira/browse/KAFKA-3583
Project: Kafka
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/KAFKA-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3582:
-
Assignee: Liquan Pei
> remove references to Copycat from connect property fi
[
https://issues.apache.org/jira/browse/KAFKA-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3582 started by Liquan Pei.
-
> remove references to Copycat from connect property fi
[
https://issues.apache.org/jira/browse/KAFKA-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3582:
--
Status: Patch Available (was: In Progress)
> remove references to Copycat from connect property fi
[
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3459:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> Returning zero task configurations fro
[
https://issues.apache.org/jira/browse/KAFKA-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3035:
-
Assignee: Liquan Pei
> Transient: kafka.api.PlaintextConsumerTest > testAutoOffsetReset
[
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3459 started by Liquan Pei.
-
> Returning zero task configurations from a connector does not properly clean
[
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3459:
--
Status: Patch Available (was: In Progress)
> Returning zero task configurations from a connector d
Liquan Pei created KAFKA-3606:
-
Summary: Traverse CLASSPATH during herder start to list connectors
Key: KAFKA-3606
URL: https://issues.apache.org/jira/browse/KAFKA-3606
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei resolved KAFKA-3605.
---
Resolution: Fixed
> Connector REST endpoint allows incorrectly overriding the connector n
[
https://issues.apache.org/jira/browse/KAFKA-3605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reopened KAFKA-3605:
---
> Connector REST endpoint allows incorrectly overriding the connector n
[
https://issues.apache.org/jira/browse/KAFKA-3457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3457:
-
Assignee: Liquan Pei
> KafkaConsumer.committed(...) hangs forever if port number is wr
[
https://issues.apache.org/jira/browse/KAFKA-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3527:
-
Assignee: Liquan Pei (was: Jason Gustafson)
> Consumer commitAsync should not expose inter
[
https://issues.apache.org/jira/browse/KAFKA-2479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei resolved KAFKA-2479.
---
Resolution: Fixed
> Add CopycatExceptions to indicate transient and permanent errors i
[
https://issues.apache.org/jira/browse/KAFKA-3556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3556:
-
Assignee: Liquan Pei
> Improve group coordinator metr
Liquan Pei created KAFKA-3611:
-
Summary: Remove WARNs when using reflections
Key: KAFKA-3611
URL: https://issues.apache.org/jira/browse/KAFKA-3611
Project: Kafka
Issue Type: Improvement
[
https://issues.apache.org/jira/browse/KAFKA-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3611 stopped by Liquan Pei.
-
> Remove WARNs when using reflecti
[
https://issues.apache.org/jira/browse/KAFKA-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3611:
--
Status: Patch Available (was: In Progress)
> Remove WARNs when using reflecti
[
https://issues.apache.org/jira/browse/KAFKA-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Work on KAFKA-3611 started by Liquan Pei.
-
> Remove WARNs when using reflecti
Liquan Pei created KAFKA-3615:
-
Summary: Exclude test jars in CLASSPATH of kafka-run-class.sh
Key: KAFKA-3615
URL: https://issues.apache.org/jira/browse/KAFKA-3615
Project: Kafka
Issue Type
[
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255889#comment-15255889
]
Liquan Pei commented on KAFKA-3615:
---
[~enothereska] Kafka Connect provides a REST
[
https://issues.apache.org/jira/browse/KAFKA-3626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3626:
-
Assignee: Liquan Pei
> Transient failure in testGetAllTopicMetad
[
https://issues.apache.org/jira/browse/KAFKA-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3627:
-
Assignee: Liquan Pei (was: Neha Narkhede)
> New consumer doesn't run delayed tasks whi
[
https://issues.apache.org/jira/browse/KAFKA-3637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3637:
-
Assignee: Liquan Pei
> Add method that checks if streams are initiali
[
https://issues.apache.org/jira/browse/KAFKA-3213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3213:
-
Assignee: Liquan Pei (was: Ewen Cheslack-Postava)
> [CONNECT] It looks like we are not back
[
https://issues.apache.org/jira/browse/KAFKA-3582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei closed KAFKA-3582.
-
> remove references to Copycat from connect property fi
[
https://issues.apache.org/jira/browse/KAFKA-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei closed KAFKA-3615.
-
> Exclude test jars in CLASSPATH of kafka-run-class
[
https://issues.apache.org/jira/browse/KAFKA-3606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei closed KAFKA-3606.
-
> Traverse CLASSPATH during herder start to list connect
[
https://issues.apache.org/jira/browse/KAFKA-3578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei closed KAFKA-3578.
-
> Allow cross origin HTTP requests on all HTTP meth
[
https://issues.apache.org/jira/browse/KAFKA-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei closed KAFKA-3611.
-
> Remove WARNs when using reflections
Liquan Pei created KAFKA-3654:
-
Summary: ConnectorConfig defs short circuit implementation config
checks
Key: KAFKA-3654
URL: https://issues.apache.org/jira/browse/KAFKA-3654
Project: Kafka
[
https://issues.apache.org/jira/browse/KAFKA-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3654:
--
Summary: ConnectorConfig short circuit implementation config validation
(was: ConnectorConfig defs
[
https://issues.apache.org/jira/browse/KAFKA-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3654:
--
Description: Right now if you call the validate endpoint with a config that
has an invalid value for
[
https://issues.apache.org/jira/browse/KAFKA-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei updated KAFKA-3654:
--
Summary: Config validation should validate both common and connector
specific configurations (was
[
https://issues.apache.org/jira/browse/KAFKA-3656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3656:
-
Assignee: Liquan Pei
> Avoid stressing system more when already under str
[
https://issues.apache.org/jira/browse/KAFKA-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Liquan Pei reassigned KAFKA-3649:
-
Assignee: Liquan Pei
> Add capability to query broker process for configuration propert