We do have one NFS share, for copying build artifacts to the RPM repository.
On Tuesday, 6 August 2019 at 09:08:48 UTC+2, Sverre Moe wrote:
I was mistaken. We did not use NFS.
The disk for JENKINS_HOME (Jenkins running on a VM) is an LVM disk.
On Monday, 29 July 2019 at 18:15:20 UTC+2, Ivan Fernandez Calvo wrote:
Check the CloudBees links; I helped write those KB articles when I was at
CloudBees :). I'm pretty sure the NFS is your pain and the root cause of all
your problems; if you can get rid of it, all the better.
On Monday, 29 July 2019 at 18:03:59 (UTC+2), slide wrote:
CloudBees (not my employer) has some resources on using NFS (generally the
recommendation is to NOT use NFS for JENKINS_HOME).
https://support.cloudbees.com/hc/en-us/articles/115000486312-CJP-Performance-Best-Practices-for-Linux#nfs
and
https://support.cloudbees.com/hc/en-us/articles/217479948-NF
Yes, we are using NFS for JENKINS_HOME.
On Monday, 29 July 2019 at 15:41:00 UTC+2, Ivan Fernandez Calvo wrote:
You have 83 threads in state IN_NATIVE, probably stuck in IO operations;
those 83 threads are blocking the other 382 threads. If you use NFS or a
similar device for your Jenkins HOME, this is probably your bottleneck; if
not, check the IO stats on the OS to see where the bottleneck is.
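As an illustration, here is a minimal sketch for counting thread states in a saved dump (assuming the dump was written to a file named jstack.txt and uses the "Thread NNN: (state = STATE)" format quoted later in this thread):

import re
from collections import Counter

# Count how many threads are in each state in a saved jstack dump.
# "jstack.txt" is a placeholder file name.
states = Counter()
with open("jstack.txt") as dump:
    for line in dump:
        match = re.search(r"\(state = ([A-Z_]+)\)", line)
        if match:
            states[match.group(1)] += 1

for state, count in states.most_common():
    print(f"{count:5d}  {state}")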
I was unable to determine anything from the stack output.
Here is the result:
https://fastthread.io/my-thread-report.jsp?p=c2hhcmVkLzIwMTkvMDcvMjkvLS1qc3RhY2sudHh0LS05LTE2LTI3
On Thursday, 18 July 2019 at 11:28:06 UTC+2, Sverre Moe wrote:
There is no such reference in my jstack output.
The output says no deadlock detected.
I will try that site for analyzing the jstack.
Even a normally running Jenkins has many BLOCKED threads. Whether that is
normal, I don't know.
We have a test Jenkins instance running on Java 11. That one does not have
In that dump I cannot see which thread is blocking the others. The jstack
output has a reference on each thread that says which thread is the blocker
of each thread (- locked <0x> a java.lang.Object); you can try to
analyze those thread dumps with https://fastthread.io/index.jsp or other similar tools.
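For example, a minimal sketch that maps each lock address to the thread holding it and the threads waiting on it (assuming the dump is saved as jstack.txt and uses the standard HotSpot format with quoted thread names and "- locked <0x...>" / "- waiting to lock <0x...>" lines):

import re
from collections import defaultdict

holders = {}                 # lock address -> thread holding it
waiters = defaultdict(list)  # lock address -> threads waiting for it
current = None

with open("jstack.txt") as dump:
    for line in dump:
        header = re.match(r'"([^"]+)"', line)
        if header:
            current = header.group(1)
            continue
        locked = re.search(r"- locked <(0x[0-9a-f]+)>", line)
        if locked and current:
            holders[locked.group(1)] = current
        waiting = re.search(r"- waiting to lock <(0x[0-9a-f]+)>", line)
        if waiting and current:
            waiters[waiting.group(1)].append(current)

for addr, blocked in waiters.items():
    print(f"{addr} held by {holders.get(addr, '<unknown>')}, "
          f"{len(blocked)} thread(s) waiting: {blocked}")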
I cannot see any specific plugins in the stacktrace.
There are several duplicate threads. Here are some of them.
The common denominator seems to be SSH.
Thread 29360: (state = BLOCKED)
- java.lang.Object.wait(long) @bci=0 (Compiled frame; information may be
imprecise)
- java.lang.Object.
Those BLOCKED threads should be related to some plugin or class; see the
stack trace in the thread dump to try to figure out which one it is, as that
seems to be the root cause of your problem.
I ran jstack on Jenkins, and many of the threads had state BLOCKED.
However, even after a restart most of the threads are BLOCKED, so I am not
sure if that is an issue here.
After a restart Jenkins starts with approximately 200 open threads.
When I had the problem with disconnected agents, the thread count reached 500.
It seems to be the monitoring that gets the agents disconnected.
Got this in my log file the last time they got disconnected.
Jul 17, 2019 11:58:22 AM
hudson.init.impl.InstallUncaughtExceptionHandler$DefaultUncaughtExceptionHandler
uncaughtException
SEVERE: A thread (Timer-3450/103166) died
We have had two blissful days of stable Jenkins. Today two nodes are
disconnected and they will not come back online.
What is strange is that it is the same two or three nodes every time.
Running disconnect on them through the URL
http://jenkins.example.com/jenkins/computer/NODE_NAME/disconnect, does no
I suspected it might be related, but was not sure.
The odd thing is that this just started being a problem a week ago. Nothing,
as far as I can see, has changed on the Jenkins server.
On Saturday, 13 July 2019 at 13:04:44 UTC+2, Ivan Fernandez Calvo wrote:
I saw that you have another question related to OOM errors in Jenkins. If it
is the same instance, this is your real issue with the agents; until you have
a stable Jenkins instance, the agent disconnections will be a side effect.
Hi,
You do not need to save the configuration to force the disconnection; you
can use the disconnection REST call URL, see
https://github.com/jenkinsci/ssh-slaves-plugin/blob/master/doc/TROUBLESHOOTING.md#force-disconnection
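For instance, a minimal sketch of that REST call in Python (the host, node name, user, and API token are placeholders, and the URL pattern is the one quoted elsewhere in this thread; depending on the Jenkins version and configuration, a CSRF crumb may also be required):

import base64
import urllib.request

# Placeholders: adjust to your instance, node, and credentials.
JENKINS = "http://jenkins.example.com/jenkins"
NODE = "NODE_NAME"
USER, TOKEN = "user", "api-token"

# POST to the node's disconnect URL (see the TROUBLESHOOTING.md link above).
url = f"{JENKINS}/computer/{NODE}/disconnect"
req = urllib.request.Request(url, data=b"", method="POST")
auth = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
req.add_header("Authorization", "Basic " + auth)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.reason)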
About the disconnection error, this trace is the last error after the
Also, when this happens, even after I have managed to relaunch the agent, no
build can run on it.
It stops at "Waiting for next available executor on ‘node-name’", even
though it is online.
The previous build I stopped is still on the executor. The only solution is
to restart Jenkins.
I don't actually have to do anything, just open Configure, Save, then
Relaunch Agent.
On Friday, 12 July 2019 at 13:30:05 UTC+2, Sverre Moe wrote:
Strange.
If I configure the agent, save, and then try to reconnect, it is able to
create a connection and is back online.
On Tuesday, 9 July 2019 at 13:20:55 UTC+2, Sverre Moe wrote:
On the build agents that get disconnected there is plenty of available disk
space.
When they are trying to connect, there is no remoting.jar Java process
running on the agent.
On Saturday, 6 July 2019 at 22:59:31 UTC+2, Karan Kaushik wrote:
Hi
We had been facing the same issue with a Jenkins agent. One thing I remember
doing was managing the space on the Jenkins agent; the disconnect could
happen due to no space remaining on the agent machine.
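A quick way to check that from a script on the agent (a minimal sketch; the path is a placeholder for the agent's remote root directory):

import shutil

# Placeholder path: use the agent's configured remote root directory.
total, used, free = shutil.disk_usage("/home/jenkins")
print(f"free: {free / 1024**3:.1f} GiB of {total / 1024**3:.1f} GiB")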