Does someone in builds@ have the ability to address this? (Restart the
Jenkins slaves so that the new ulimit settings will take effect.)
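
For anyone with shell access to the slaves, a quick way to confirm whether a
restart is still needed is to compare a fresh login shell against the limits
the already-running slave process inherited when it was started. A rough
sketch, assuming Linux slaves and that the agent runs from slave.jar (the
pgrep pattern is a guess):

# limits a fresh login shell gets
ulimit -a
# limits the already-running Jenkins slave process is still using
pid=$(pgrep -f slave.jar | head -1)
cat /proc/$pid/limits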

Thanks!

Patrick

On Wed, Jul 23, 2014 at 9:33 AM, Patrick Hunt <ph...@apache.org> wrote:
> Giri, do you mean me? I don't have access to that, as far as I can tell.
>
> Patrick
>
> On Wed, Jul 23, 2014 at 12:50 AM, Giridharan Kesavan
> <gkesa...@hortonworks.com> wrote:
>> The Jenkins slaves might need a restart.
>>
>> Could you please re-launch the slaves from the Jenkins UI configuration page?
>>
>> -giri
>>
>>
>> On Tue, Jul 22, 2014 at 2:25 PM, Patrick Hunt <ph...@apache.org> wrote:
>>>
>>> Thanks, Giri! Unfortunately it doesn't seem to have taken effect; I just
>>> kicked off a precommit build and I see:
>>>
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 386178
>>> max locked memory       (kbytes, -l) 64
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 4096
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 8192
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 386178
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>>
>>> More details here:
>>>
>>> https://builds.apache.org/view/S-Z/view/ZooKeeper/job/PreCommit-ZOOKEEPER-Build/2215/console
>>>
>>> Patrick
>>>
>>> On Tue, Jul 22, 2014 at 1:29 PM, Giridharan Kesavan
>>> <gkesa...@hortonworks.com> wrote:
>>> >
>>> >
>>> > jenkins@asf901:~$ ulimit -a
>>> > core file size          (blocks, -c) 0
>>> > data seg size           (kbytes, -d) unlimited
>>> > scheduling priority             (-e) 0
>>> > file size               (blocks, -f) unlimited
>>> > pending signals                 (-i) 386178
>>> > max locked memory       (kbytes, -l) 64
>>> > max memory size         (kbytes, -m) unlimited
>>> > open files                      (-n) 60000
>>> > pipe size            (512 bytes, -p) 8
>>> > POSIX message queues     (bytes, -q) 819200
>>> > real-time priority              (-r) 0
>>> > stack size              (kbytes, -s) 8192
>>> > cpu time               (seconds, -t) unlimited
>>> > max user processes              (-u) 10240
>>> > virtual memory          (kbytes, -v) unlimited
>>> > file locks                      (-x) unlimited
>>> >
>>> > Bumped up the open files and max user processes limits on all the slaves.
>>> >
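>>> > For reference, this kind of bump is typically persisted via pam_limits,
>>> > e.g. in /etc/security/limits.conf or a drop-in file, so it survives
>>> > reboots and applies to new login sessions. A hypothetical example only;
>>> > the file name and exact entries are assumptions, not what was actually
>>> > applied (the values just mirror the output above):
>>> >
>>> > # /etc/security/limits.d/jenkins.conf (hypothetical)
>>> > jenkins  soft  nofile  60000
>>> > jenkins  hard  nofile  60000
>>> > jenkins  soft  nproc   10240
>>> > jenkins  hard  nproc   10240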
>>> >
>>> >
>>> > -giri
>>> >
>>> >
>>> > On Tue, Jul 22, 2014 at 12:04 PM, Patrick Hunt <ph...@apache.org> wrote:
>>> >>
>>> >> Giri, any chance you can take a look at the ulimit issue on the H#
>>> >> machines? All the ZK precommit builds are failing as a result.
>>> >>
>>> >> I updated the precommit build last night to output "ulimit -a", and it
>>> >> shows the current open files limit is 4096. Can we bump that up or set
>>> >> the default to unlimited?
>>> >>
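>>> >> In the meantime the precommit script could also fail fast with a clear
>>> >> message when the limit is too low, rather than letting tests die
>>> >> mid-run with "Too many open files". A rough sketch only; the 10000
>>> >> threshold is an arbitrary assumption:
>>> >>
>>> >> # abort the build early if the open files limit looks too small
>>> >> required=10000
>>> >> current=$(ulimit -n)
>>> >> if [ "$current" != "unlimited" ] && [ "$current" -lt "$required" ]; then
>>> >>   echo "open files limit is $current, need at least $required" >&2
>>> >>   exit 1
>>> >> fi
>>> >>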
>>> >> Thanks!
>>> >>
>>> >> Patrick
>>> >>
>>> >> On Sun, Jul 20, 2014 at 10:32 PM, Rakesh R <rake...@huawei.com> wrote:
>>> >> > +1
>>> >> >
>>> >> >
>>> >> > Adding one more point: I also see the following error in the
>>> >> > pre-commit build.
>>> >> >
>>> >> >      [exec]     [junit] Exception in thread "CommitProcWorkThread-16" java.lang.NoClassDefFoundError: org/apache/zookeeper/server/ConnectionBean
>>> >> >      [exec]     [junit]         at org.apache.zookeeper.server.ServerCnxnFactory.registerConnection(ServerCnxnFactory.java:159)
>>> >> >      [exec]     [junit]         at org.apache.zookeeper.server.ZooKeeperServer.finishSessionInit(ZooKeeperServer.java:594)
>>> >> >      [exec]     [junit]         at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:198)
>>> >> >      [exec]     [junit]         at org.apache.zookeeper.server.quorum.CommitProcessor$CommitWorkRequest.doWork(CommitProcessor.java:295)
>>> >> >      [exec]     [junit]         at org.apache.zookeeper.server.WorkerService$ScheduledWorkRequest.run(WorkerService.java:161)
>>> >> >      [exec]     [junit]         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>>> >> >      [exec]     [junit]         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>>> >> >      [exec]     [junit]         at java.lang.Thread.run(Thread.java:662)
>>> >> >
>>> >> > -Rakesh
>>> >> >
>>> >> > -----Original Message-----
>>> >> > From: Flavio Junqueira [mailto:fpjunque...@yahoo.com.INVALID]
>>> >> > Sent: 21 July 2014 02:19
>>> >> > To: d...@zookeeper.apache.org
>>> >> > Cc: Andrew Bayer; builds@apache.org; Giridharan Kesavan
>>> >> > Subject: Re: ulimit changed with Apache Jenkins upgrade?
>>> >> >
>>> >> > +1
>>> >> >
>>> >> > On 18 Jul 2014, at 18:59, Patrick Hunt <ph...@apache.org> wrote:
>>> >> >
>>> >> >> Hi builds folks, is this a system-wide issue or something we should
>>> >> >> address ourselves? Thanks!
>>> >> >>
>>> >> >> Patrick
>>> >> >>
>>> >> >> ---------- Forwarded message ----------
>>> >> >> From: Patrick Hunt <ph...@apache.org>
>>> >> >> Date: Fri, Jul 18, 2014 at 10:38 AM
>>> >> >> Subject: ulimit changed with Apache Jenkins upgrade?
>>> >> >> To: Giridharan Kesavan <gkesa...@hortonworks.com>
>>> >> >> Cc: DevZooKeeper <d...@zookeeper.apache.org>, Andrew Bayer
>>> >> >> <and...@cloudera.com>
>>> >> >>
>>> >> >>
>>> >> >> Hi Giri, can you check that the new hosts (H#) have the ulimit set
>>> >> >> to the same values as on the original hadoop# hosts? I'm seeing new
>>> >> >> test failures with:
>>> >> >>
>>> >> >>     [exec]     [junit] java.io.FileNotFoundException:
>>> >> >> /home/jenkins/jenkins-slave/workspace/PreCommit-ZOOKEEPER-Build/trunk/build/test/tmp/test7610638215300246179.junit.dir/version-2/log.100000001
>>> >> >> (Too many open files)
>>> >> >>
>>> >> >> which we've not seen before. I believe this means we're running out
>>> >> >> of file descriptors?
>>> >> >>
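>>> >> >> One way to confirm that on a slave while the tests are running is
>>> >> >> to count the descriptors the test JVM actually has open and compare
>>> >> >> it to the limit it inherited. A rough sketch, assuming Linux and
>>> >> >> that the test JVM can be found by "junit" in its command line (that
>>> >> >> pgrep pattern is a guess):
>>> >> >>
>>> >> >> pid=$(pgrep -f junit | head -1)
>>> >> >> ls /proc/$pid/fd | wc -l
>>> >> >> grep 'Max open files' /proc/$pid/limits
>>> >> >>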
>>> >> >> Can you verify and address if possible?
>>> >> >>
>>> >> >> Thanks,
>>> >> >>
>>> >> >> Patrick
>>> >> >
>>> >
>>> >
>>> >
>>
>>
>>
