problems in Kafka unit testing trunk

2018-11-27 Thread lk gen
When running ./gradlew test
on a CentOS machine with Gradle and Java set up,
using today's trunk version,

there are errors about too many open files, of the form:
"
kafka.admin.DeleteTopicTest > testDeletingPartiallyDeletedTopic FAILED
    org.apache.kafka.common.KafkaException: java.io.IOException: Too many open files
        at org.apache.kafka.common.network.Selector.<init>(Selector.java:160)
        at org.apache.kafka.common.network.Selector.<init>(Selector.java:212)
        at org.apache.kafka.common.network.Selector.<init>(Selector.java:225)
        at kafka.coordinator.transaction.TransactionMarkerChannelManager$.apply(TransactionMarkerChannelManager.scala:66)
        at kafka.coordinator.transaction.TransactionCoordinator$.apply(TransactionCoordinator.scala:62)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:279)
        at kafka.utils.TestUtils$.createServer(TestUtils.scala:135)
        at kafka.admin.DeleteTopicTest.$anonfun$createTestTopicAndCluster$2(DeleteTopicTest.scala:372)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233)
        at scala.collection.Iterator.foreach(Iterator.scala:937)
        at scala.collection.Iterator.foreach$(Iterator.scala:937)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
        at scala.collection.IterableLike.foreach(IterableLike.scala:70)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at scala.collection.TraversableLike.map(TraversableLike.scala:233)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:226)
        at scala.collection.AbstractTraversable.map(Traversable.scala:104)
        at kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:372)
        at kafka.admin.DeleteTopicTest.createTestTopicAndCluster(DeleteTopicTest.scala:366)
        at kafka.admin.DeleteTopicTest.testDeletingPartiallyDeletedTopic(DeleteTopicTest.scala:418)

    Caused by:
    java.io.IOException: Too many open files
        at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
        at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:130)
        at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:69)
        at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
        at java.nio.channels.Selector.open(Selector.java:227)
        at org.apache.kafka.common.network.Selector.<init>(Selector.java:158)
        ... 20 more

"

Is the environment I am using for the gradle test invalid? Are there
special settings required?
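
For reference, the failing suite can be re-run in isolation to check
whether the failure depends on the rest of the run. A sketch, using
Gradle's standard --tests filter:

$ ./gradlew core:test --tests kafka.admin.DeleteTopicTest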


Re: problems in Kafka unit testing trunk

2018-11-28 Thread lk gen
ulimit shows
$ ulimit -n
1024

Is it too small for Kafka?
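
For reference, 1024 is well below what the test suite appears to need; the
soft limit can be raised for the current shell before running the tests. A
sketch, assuming the hard limit allows it (otherwise
/etc/security/limits.conf must be raised first and the session restarted):

$ ulimit -n 65536   # raise the soft limit for this shell only
$ ulimit -n         # verify
65536
$ ./gradlew test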




On Wed, Nov 28, 2018 at 6:18 AM Dhruvil Shah wrote:

> The unit test itself does not seem to use too many files. What is the
> output for `ulimit -n` on your system? Running `lsof` might also be helpful
> to determine how many open files you have while Kafka is not running.
>
> - Dhruvil
>
> On Tue, Nov 27, 2018 at 9:20 AM lk gen wrote:
>
> > [original message quoted above]
>


Problem in CI for pull request

2018-11-28 Thread lk gen
Hi,

  I made a pull request and it passed CI on JDK 11 but failed on JDK 8

  I think the JDK 8 error may not be related to my commit but to an
environment problem on the CI.

  How can I rerun the CI for my pull request?

  The pull request is at
https://github.com/apache/kafka/pull/5960

The error states:

19:27:48 ERROR: H36 is offline; cannot locate JDK 1.8 (latest)
19:27:48 ERROR: H36 is offline; cannot locate Gradle 4.8.1
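
For reference, one generally applicable way to re-trigger a CI job that
builds on every push is an empty commit. A sketch, assuming the job is
commit-triggered; the branch name is hypothetical:

$ git commit --allow-empty -m "Trigger CI rebuild"
$ git push origin my-branch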


Thanks


Finding reviewers for a Kafka issue fix

2018-12-07 Thread lk gen
  I fixed a Kafka issue over a week ago with a CI-passing pull request,
but there are no reviewers.

  How are reviewers added/chosen for Kafka issue fixes?

  https://issues.apache.org/jira/browse/KAFKA-6988


Re: Finding reviewers for a Kafka issue fix

2018-12-07 Thread lk gen
As a Kafka development newbie, what is the process for selecting reviewers?
Is there some kind of list of reviewers? Is it possible to assign
reviewers without checking with them? Is there some kind of bulletin board
for finding reviewers?


On Fri, Dec 7, 2018 at 11:18 AM Gwen Shapira wrote:

> We normally self-select. I think in this case, the challenge is
> finding reviewers who are comfortable with windows...
> On Fri, Dec 7, 2018 at 10:17 PM lk gen wrote:
> > [original message quoted above]
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


consumer offsets topic cleanup policy

2018-12-30 Thread lk gen
Hi,

  The consumer offsets internal Kafka topic is always created with a
compact cleanup policy.

  If the consumer offsets topic policy is altered from compact to delete
in a specific installed environment, will it cause problems? Will the
consumer still work if the consumer offsets are set to delete?
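
For reference, such a change could be made with the stock tooling. A
sketch, assuming a Kafka 2.x installation, ZooKeeper on localhost:2181,
and the default internal topic name __consumer_offsets:

$ bin/kafka-configs.sh --zookeeper localhost:2181 \
    --entity-type topics --entity-name __consumer_offsets \
    --alter --add-config cleanup.policy=delete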

Thanks


Re: consumer offsets topic cleanup policy

2018-12-30 Thread lk gen
The original issue is that on Windows the compaction cleanup causes the
Kafka process to crash due to file handling. To avoid it, I tried to
disable the compaction cleanup, but that causes the consumer offsets log
to keep growing. Is there a way to work with ZooKeeper for consumer
offsets with the latest versions of Kafka and consumers, or some other way
to bypass the cleanup of compacted logs?
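
For reference, if the policy is switched to delete, the growth can at
least be bounded with a retention override (604800000 ms is 7 days). A
sketch, assuming the same 2.x tooling as above; as the reply below notes,
this risks deleting commit information that is still needed:

$ bin/kafka-configs.sh --zookeeper localhost:2181 \
    --entity-type topics --entity-name __consumer_offsets \
    --alter --add-config 'cleanup.policy=delete,retention.ms=604800000'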

On Sun, Dec 30, 2018 at 8:18 PM Gwen Shapira wrote:

> Depending on how many consumer groups and partitions you have and how often
> you commit, you risk either running out of disk space and/or deleting
> commit information that you will need.
> Either way, you will be storing lots of records you don't need.
>
>  Only do this if there is no other solution to whatever the real issue
> is...
>
> On Sun, Dec 30, 2018, 5:50 PM lk gen wrote:
> > [original message quoted above]
>


Re: consumer offsets topic cleanup policy

2018-12-31 Thread lk gen
There is an issue about log files that cannot be deleted in general:
https://issues.apache.org/jira/browse/KAFKA-1194
Someone suggested a solution for the deletion of regular topics (not
compacted topics such as consumer offsets), which would require making
consumer offsets a regular topic as well, in addition to the fix for
regular topics.
I also created a fix for the compaction part, but did not go on to make a
pull request and fully verify the effect.
Currently, Windows cannot be avoided in my scenario.


On Mon, Dec 31, 2018 at 8:36 AM Gwen Shapira wrote:

> Not really... If you don't clean-up, you have to delete or the logs will
> grow indefinitely.
>
> Is there a Jira for the windows issue?
> Also, is there a way to avoid windows until this is resolved? Docker
> containers perhaps?
>
> On Sun, Dec 30, 2018, 11:49 PM lk gen wrote:
> > [earlier messages quoted above]
>
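
For reference, a minimal sketch of the Docker workaround suggested above,
assuming the widely used wurstmeister images (container names, ports, and
the advertised listener are illustrative):

$ docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
$ docker run -d --name kafka -p 9092:9092 --link zookeeper \
    -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
    -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
    -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
    wurstmeister/kafka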
>