Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-18 Thread Jeyhun Karimov
Hi, I did not expect that Context would be this much of an issue. Instead of applying different semantics for different operators, I think we should remove this feature completely. Cheers, Jeyhun On Sat 18. Nov 2017 at 07:49, Jan Filipiak wrote: > Yes, the mail said only join so I wanted to clar

[jira] [Created] (KAFKA-6232) SaslSslAdminClientIntegrationTest sometimes fails

2017-11-18 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-6232: - Summary: SaslSslAdminClientIntegrationTest sometimes fails Key: KAFKA-6232 URL: https://issues.apache.org/jira/browse/KAFKA-6232 Project: Kafka Issue Type: Test

Interested in being a contributor

2017-11-18 Thread Panuwat Anawatmongkhon
Hi All, I am interested in being a contributor. Can anyone guide me? I am very new to contributing to open source. Thank you, Benz

Re: Interested in being a contributor

2017-11-18 Thread Ted Yu
Please read this: https://kafka.apache.org/contributing You can use this Filter to find issues for new contributor: https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open Cheers On Sat, Nov 18, 2017 at 2:12 AM, Panuwat Anawatmo

Re: [VOTE] KIP-159: Introducing Rich functions to Streams

2017-11-18 Thread Jan Filipiak
Hi, not an issue at all. IMO the approach that is on the table would be perfect. On 18.11.2017 10:58, Jeyhun Karimov wrote: Hi, I did not expect that Context would be this much of an issue. Instead of applying different semantics for different operators, I think we should remove this feature com

Re: [DISCUSS] KIP-213 Support non-key joining in KTable

2017-11-18 Thread Jan Filipiak
Hi Matthias, answers to the questions inline. On 16.11.2017 23:18, Matthias J. Sax wrote: Hi, I am just catching up on this discussion and did re-read the KIP and discussion thread. In contrast to you, I prefer the second approach with CombinedKey as return type for the following reasons: 1
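The CombinedKey return type discussed for KIP-213 can be sketched as a small wrapper pairing a foreign key with the original primary key, so a non-key join can repartition on the foreign key while still retaining the original key. This is only an illustrative sketch of the idea as discussed in the thread, not the KIP's final class definition; the field names and type parameters here are assumptions.

```java
import java.util.Objects;

// Hypothetical sketch of the CombinedKey idea from the KIP-213 discussion.
// Field names and structure are illustrative, not Kafka's actual API.
public final class CombinedKey<KF, KP> {
    public final KF foreignKey;   // key of the other table, used for repartitioning
    public final KP primaryKey;   // original key of this table, kept so results can be mapped back

    public CombinedKey(final KF foreignKey, final KP primaryKey) {
        this.foreignKey = Objects.requireNonNull(foreignKey);
        this.primaryKey = Objects.requireNonNull(primaryKey);
    }

    @Override
    public boolean equals(final Object o) {
        if (!(o instanceof CombinedKey)) return false;
        final CombinedKey<?, ?> other = (CombinedKey<?, ?>) o;
        return foreignKey.equals(other.foreignKey) && primaryKey.equals(other.primaryKey);
    }

    @Override
    public int hashCode() {
        return Objects.hash(foreignKey, primaryKey);
    }

    @Override
    public String toString() {
        return "CombinedKey(" + foreignKey + ", " + primaryKey + ")";
    }
}
```

Exposing both keys in the result type is what makes the "second approach" explicit: the caller can always recover which source record produced which joined record.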

Re: [DISCUSS] KIP-213 Support non-key joining in KTable

2017-11-18 Thread Jan Filipiak
On 17.11.2017 06:59, Guozhang Wang wrote: Thanks for the explanation Jan. On top of my head I'm leaning towards the "more intrusive" approach to resolve the race condition issue we discussed above. Matthias has some arguments for this approach already, so I would not re-iterate them here. To me

SessionKeySchema#segmentsToSearch()

2017-11-18 Thread Ted Yu
Hi, I was reading code for SessionKeySchema#segmentsToSearch() where: public List segmentsToSearch(final Segments segments, final long from, final long to) { return segments.segments(from, Long.MAX_VALUE); I wonder why the parameter to is ignored. WindowKeySchema#segmentsToSearch() pa
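One plausible answer to the question above (not stated in the snippet): session windows are positioned by when a session ends, and a session overlapping the query's lower bound may end arbitrarily late, so the upper bound is widened to Long.MAX_VALUE. A minimal self-contained model, with a TreeMap standing in for Kafka's Segments class (the real implementation differs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class SegmentSearchSketch {
    // Hypothetical stand-in for Kafka's Segments: segment id -> segment name.
    static List<String> segmentsInRange(final NavigableMap<Long, String> segments,
                                        final long from, final long to) {
        return new ArrayList<>(segments.subMap(from, true, to, true).values());
    }

    public static void main(String[] args) {
        final NavigableMap<Long, String> segments = new TreeMap<>();
        segments.put(0L, "seg-0");
        segments.put(1L, "seg-1");
        segments.put(2L, "seg-2");

        // Window store: both bounds apply, since a fixed-size window's segment is known.
        System.out.println(segmentsInRange(segments, 1L, 1L));              // [seg-1]

        // Session store: a session overlapping `from` may end arbitrarily late,
        // so every later segment must be searched as well.
        System.out.println(segmentsInRange(segments, 1L, Long.MAX_VALUE));  // [seg-1, seg-2]
    }
}
```

This would explain why WindowKeySchema passes both bounds through while SessionKeySchema opens the upper bound, though the thread itself is the authoritative place for the actual rationale.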

Re: SessionKeySchema#segmentsToSearch()

2017-11-18 Thread Ted Yu
This code: final Segment minSegment = segments .getMinSegmentGreaterThanEqualToTimestamp(timeFrom); final Segment maxSegment = segments .getMaxSegmentLessThanEqualToTimestamp(timeTo); Can be replaced with: final List searchSpace = keySchema.segmentsToSearch( segments, fr
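The equivalence behind the suggested refactor can be sketched with a NavigableSet standing in for the segment store: two boundary lookups followed by a scan produce the same result as a single range query, so collapsing them behind one method is behavior-preserving. The method names mirror those quoted above, but this model is hypothetical, not Kafka's internals.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableSet;

public class RangeQueryRefactor {
    // Two boundary lookups, mirroring getMinSegmentGreaterThanEqualToTimestamp /
    // getMaxSegmentLessThanEqualToTimestamp in the snippet above.
    static List<Long> viaBoundaries(final NavigableSet<Long> segments,
                                    final long timeFrom, final long timeTo) {
        final Long min = segments.ceiling(timeFrom);   // smallest segment >= timeFrom
        final Long max = segments.floor(timeTo);       // largest segment <= timeTo
        if (min == null || max == null || min > max) return List.of();
        return new ArrayList<>(segments.subSet(min, true, max, true));
    }

    // Single range query, analogous to keySchema.segmentsToSearch(segments, from, to).
    static List<Long> viaRange(final NavigableSet<Long> segments,
                               final long timeFrom, final long timeTo) {
        return new ArrayList<>(segments.subSet(timeFrom, true, timeTo, true));
    }
}
```

Besides being shorter, routing the lookup through one schema method keeps window- and session-specific bound handling in a single place.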

[GitHub] kafka pull request #4231: MINOR: Small cleanups/refactoring in kafka.control...

2017-11-18 Thread mimaison
GitHub user mimaison opened a pull request: https://github.com/apache/kafka/pull/4231 MINOR: Small cleanups/refactoring in kafka.controller - Updated logging to use string templates - Minor refactors - Fixed a few typos ### Committer Checklist (excluded from commit m

[jira] [Created] (KAFKA-6233) Removed unnecessary null check

2017-11-18 Thread sagar sukhadev chavan (JIRA)
sagar sukhadev chavan created KAFKA-6233: Summary: Removed unnecessary null check Key: KAFKA-6233 URL: https://issues.apache.org/jira/browse/KAFKA-6233 Project: Kafka Issue Type: New
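The specific check removed in KAFKA-6233 is not visible in the truncated snippet; as a generic illustration of the pattern, a null check is unnecessary when an earlier guard already guarantees the reference is non-null. The method and names below are purely hypothetical.

```java
public class NullCheckExample {
    // Redundant form: the second null check can never be false,
    // because the guard above already threw for null.
    static int lengthRedundant(final String value) {
        if (value == null) {
            throw new IllegalArgumentException("value must not be null");
        }
        if (value != null) {   // unnecessary: value is provably non-null here
            return value.length();
        }
        return 0;              // unreachable
    }

    // Cleaned-up form after removing the unnecessary check.
    static int lengthClean(final String value) {
        if (value == null) {
            throw new IllegalArgumentException("value must not be null");
        }
        return value.length();
    }
}
```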

[GitHub] kafka pull request #4232: KAFKA-6233 :Removed unnecessary null check

2017-11-18 Thread sagarchavan3172
GitHub user sagarchavan3172 opened a pull request: https://github.com/apache/kafka/pull/4232 KAFKA-6233 :Removed unnecessary null check *More detailed description of your change, if necessary. The PR title and PR message become the squashed commit message, so use a separate

Re: [DISCUSS] KIP-213 Support non-key joining in KTable

2017-11-18 Thread Jan Filipiak
-> I think the relationships between the different types used, K0, K1, KO, should be explained explicitly (all the information is there implicitly, but one needs to think hard to figure it out). I'm probably blind to this. Can you help me here? How would you formulate this? Thanks, Jan On 16.11.20

[GitHub] kafka pull request #4206: KAFKA-6122: Global Consumer should handle TimeoutE...

2017-11-18 Thread asfgit
Github user asfgit closed the pull request at: https://github.com/apache/kafka/pull/4206 ---

Build failed in Jenkins: kafka-trunk-jdk7 #2982

2017-11-18 Thread Apache Jenkins Server
See Changes: [wangguoz] KAFKA-6122: Global Consumer should handle TimeoutException -- [...truncated 385.74 KB...] kafka.security.auth.SimpleAclAuthorizerTest > testDistri

Jenkins build is back to normal : kafka-trunk-jdk9 #204

2017-11-18 Thread Apache Jenkins Server
See

Jenkins build is back to normal : kafka-trunk-jdk8 #2221

2017-11-18 Thread Apache Jenkins Server
See

[GitHub] kafka-site pull request #110: Add missing close parenthesis

2017-11-18 Thread renchaorevee
GitHub user renchaorevee opened a pull request: https://github.com/apache/kafka-site/pull/110 Add missing close parenthesis You can merge this pull request into a Git repository by running: $ git pull https://github.com/renchaorevee/kafka-site master Alternatively you can rev

[GitHub] kafka pull request #4233: KAFKA-6181 Examining log messages with {{--deep-it...

2017-11-18 Thread deorenikhil
GitHub user deorenikhil opened a pull request: https://github.com/apache/kafka/pull/4233 KAFKA-6181 Examining log messages with {{--deep-iteration}} should show superset of fields Printing log data on Kafka brokers using kafka.tools.DumpLogSegments --deep-iteration option doesn't p

[GitHub] kafka pull request #4234: KAFKA-6207 : Include start of record when RecordIs...

2017-11-18 Thread jawalesumit
GitHub user jawalesumit opened a pull request: https://github.com/apache/kafka/pull/4234 KAFKA-6207 : Include start of record when RecordIsTooLarge When a message is too large to be sent (at org.apache.kafka.clients.producer.KafkaProducer#doSend), the RecordTooLargeException should
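The improvement requested above can be sketched as a pre-flight size check that includes the start of the offending record in the error message, so oversized messages can be identified from logs. The real check lives inside KafkaProducer#doSend against max.request.size; the method below is an illustrative stand-in, not Kafka's API.

```java
import java.nio.charset.StandardCharsets;

public class RecordSizeCheck {
    // Hypothetical pre-flight check mirroring the producer's size validation.
    static void ensureRecordSize(final String topic, final byte[] key,
                                 final byte[] value, final int maxRequestSize) {
        final int size = (key == null ? 0 : key.length) + (value == null ? 0 : value.length);
        if (size > maxRequestSize) {
            // Include the start of the record so the offending message can be
            // identified, which is what KAFKA-6207 asks for.
            final String prefix = value == null ? "" :
                new String(value, 0, Math.min(32, value.length), StandardCharsets.UTF_8);
            throw new IllegalStateException(
                "Record for topic " + topic + " is " + size +
                " bytes, which exceeds max.request.size=" + maxRequestSize +
                "; record starts with: \"" + prefix + "\"");
        }
    }
}
```

Truncating the quoted payload to a short prefix keeps the error message bounded even when the record itself is many megabytes.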