[GitHub] kafka pull request #3398: allow transactions in producer perf script

2017-06-21 Thread tcrayford
Github user tcrayford closed the pull request at: https://github.com/apache/kafka/pull/3398 --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.

[GitHub] kafka pull request #3398: allow transactions in producer perf script

2017-06-21 Thread tcrayford
GitHub user tcrayford opened a pull request: https://github.com/apache/kafka/pull/3398 allow transactions in producer perf script allow the transactional producer to be enabled in `producer-perf.sh`, with a new flag `--use-transactions`
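A flag like `--use-transactions` would have to configure the perf producer for Kafka's transactional API. As an illustrative sketch only (the property keys below are the standard producer config names; how the PR actually wires them into the perf script is in the PR itself, and the broker address is a placeholder):

```java
import java.util.Properties;

// Minimal config a transactional KafkaProducer requires. The perf
// script's send loop would then wrap batches in
// initTransactions()/beginTransaction()/commitTransaction() calls.
public class TxnPerfConfig {
    static Properties transactionalProps(String txnId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
        // A transactional.id is mandatory for transactions; it also
        // implies idempotent sends.
        props.put("transactional.id", txnId);
        props.put("enable.idempotence", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties p = transactionalProps("perf-producer-txn");
        System.out.println(p.getProperty("transactional.id"));
    }
}
```

The key design point is that `transactional.id` must be stable across restarts of the same logical producer, so the broker can fence zombie instances.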

[GitHub] kafka pull request #1725: WIP KAFKA-3894: split log segment to avoid crashin...

2016-08-12 Thread tcrayford
GitHub user tcrayford opened a pull request: https://github.com/apache/kafka/pull/1725 WIP KAFKA-3894: split log segment to avoid crashing cleaner thread https://issues.apache.org/jira/browse/KAFKA-3894 This is a temporary PR, to see what Jenkins has to say about this work
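The idea behind KAFKA-3894 is that a cleaning pass can produce output too large for the assumptions the cleaner thread makes, so the cleaned records must be split across multiple segments instead of crashing. A toy sketch of that splitting decision (all names here are hypothetical, not Kafka's internal API):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: partition a stream of record sizes into "segments" whose
// cumulative size never exceeds a cap, rolling a new segment when the
// next record would overflow the current one.
public class SegmentSplit {
    static List<List<Integer>> split(List<Integer> recordSizes, int maxSegmentBytes) {
        List<List<Integer>> segments = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int bytes = 0;
        for (int size : recordSizes) {
            if (!current.isEmpty() && bytes + size > maxSegmentBytes) {
                segments.add(current); // roll a new segment
                current = new ArrayList<>();
                bytes = 0;
            }
            current.add(size);
            bytes += size;
        }
        if (!current.isEmpty()) segments.add(current);
        return segments;
    }

    public static void main(String[] args) {
        // Three 4-byte records with an 8-byte cap split into two segments.
        System.out.println(split(List.of(4, 4, 4), 8).size());
    }
}
```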

[GitHub] kafka pull request #1660: KAFKA-3933: always fully read deepIterator

2016-07-25 Thread tcrayford
GitHub user tcrayford reopened a pull request: https://github.com/apache/kafka/pull/1660 KAFKA-3933: always fully read deepIterator Avoids leaking native memory and hence crashing brokers on bootup due to running out of memory.
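The leak pattern here is general: a decompressing iterator that is abandoned part-way through can hold on to native buffers. A minimal sketch of the discipline the fix applies — read to the end, and close even on error — using the JDK's gzip streams rather than Kafka's actual deep iterator:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class DrainExample {
    // Compress a payload so we have something to decompress.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Read the stream to EOF inside try-with-resources, so the
    // decompressor's buffers are released even if reading throws --
    // the same discipline the PR applies to the deep iterator.
    static int drainAndClose(byte[] compressed) throws IOException {
        int total = 0;
        try (GZIPInputStream in =
                new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
        } // close() runs here on every path, freeing the Inflater
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[10_000];
        System.out.println(drainAndClose(gzip(payload))); // 10000
    }
}
```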

[GitHub] kafka pull request #1660: KAFKA-3933: always fully read deepIterator

2016-07-25 Thread tcrayford
Github user tcrayford closed the pull request at: https://github.com/apache/kafka/pull/1660

[GitHub] kafka pull request #1660: KAFKA-3933: always fully read deepIterator

2016-07-25 Thread tcrayford
GitHub user tcrayford opened a pull request: https://github.com/apache/kafka/pull/1660 KAFKA-3933: always fully read deepIterator Avoids leaking native memory and hence crashing brokers on bootup due to running out of memory.

[GitHub] kafka pull request #1614: KAFKA-3933: close deepIterator during log recovery

2016-07-25 Thread tcrayford
Github user tcrayford closed the pull request at: https://github.com/apache/kafka/pull/1614

[GitHub] kafka pull request #1614: KAFKA-3933: close deepIterator during log recovery

2016-07-12 Thread tcrayford
GitHub user tcrayford opened a pull request: https://github.com/apache/kafka/pull/1614 KAFKA-3933: close deepIterator during log recovery Avoids leaking native memory and hence crashing brokers on bootup due to running out of memory.

[GitHub] kafka pull request #1598: KAFKA-3933: close deepIterator during log recovery

2016-07-11 Thread tcrayford
Github user tcrayford closed the pull request at: https://github.com/apache/kafka/pull/1598

[GitHub] kafka pull request #1598: KAFKA-3933: close deepIterator during log recovery

2016-07-08 Thread tcrayford
GitHub user tcrayford opened a pull request: https://github.com/apache/kafka/pull/1598 KAFKA-3933: close deepIterator during log recovery Avoids leaking native memory and hence crashing brokers on bootup due to running out of memory.

[GitHub] kafka pull request: MINOR: document increased network bandwidth of...

2016-05-15 Thread tcrayford
GitHub user tcrayford opened a pull request: https://github.com/apache/kafka/pull/1389 MINOR: document increased network bandwidth of 0.10 under replication If you're pushing close to the network capacity, 0.10's additional 8 bytes per message can lead to overload of your network.
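A back-of-the-envelope estimate makes the doc change concrete, assuming the overhead is a flat 8 bytes per message as stated:

```java
// Estimate the extra bandwidth the 0.10 message format adds at a
// given message rate (assumption: a flat 8 bytes per message).
public class OverheadEstimate {
    static long extraBytesPerSec(long messagesPerSec) {
        return 8L * messagesPerSec;
    }

    public static void main(String[] args) {
        // At 1M msgs/s the format change costs ~8 MB/s of extra
        // produce and replication traffic, which matters on links
        // already near capacity.
        System.out.println(extraBytesPerSec(1_000_000)); // 8000000
    }
}
```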