Hello everyone, 

I just wanted to get an opinion on the approach I took to setting up automation testing at my company, and where better than here. Essentially, I have a repository of .spec.js files containing Playwright tests, each represented as a job on my Jenkins server. They are set up in a pipeline architecture: an upstream job orchestrates, and each test is a downstream job, run in parallel sets of four.

The issue I was having with static agents was that if another pipeline was triggered, it created resource contention on my static agent: both pipelines used the same instance of the project located at a specific path, and both crashed. So my approach was to use Docker images and a Kubernetes cluster to isolate the environment of each job. I created a Docker image containing a Java installation, among other dependencies, so that it can act both as the inbound agent and as the testing environment. I also set up the Kubernetes plugin in Jenkins, with the Kubernetes node running on the same local VM where my static agent had been.

When I tested one job in the pipeline (meaning the upstream job calls one downstream job, copies its artifacts, and uploads them to Azure Blob Storage), it worked fine, albeit with longer wait times for agent provisioning on the Kubernetes pod and roughly four-minute waits for the declarative SCM checkout (I store my Jenkinsfile pipeline scripts in a separate repo, and each job fetches the appropriate script depending on whether it is upstream or downstream). So my architecture is now: each job, upstream and downstream, gets its own pod containing the dynamic Jenkins agent and my project repo, in which I run the required tests.
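For concreteness, the shape of one downstream job's Jenkinsfile is roughly the following. This is a sketch, not my actual script: the image name, spec file name, and artifact path are placeholders.

```groovy
// Sketch of one downstream job (scripted pipeline, Kubernetes plugin).
// 'playwright-java-agent:latest' and the file names are placeholders.
podTemplate(containers: [
    containerTemplate(
        name: 'jnlp',                      // inbound agent + test environment
        image: 'playwright-java-agent:latest'
    )
]) {
    node(POD_LABEL) {                      // runs inside the freshly created pod
        stage('Checkout') {
            checkout scm                   // project repo cloned into the pod
        }
        stage('Test') {
            sh 'npx playwright test contract-correspondence.spec.js'
        }
        stage('Publish') {
            // upstream copies these artifacts and uploads them to Azure Blob Storage
            archiveArtifacts artifacts: 'test-results/**'
        }
    }
}
```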

However, when I fully ran one of my pipelines with 12 downstream jobs (run in parallel sets of four), the already notably long agent-provisioning time grew even longer, and jobs eventually crashed before the downstream pipeline was initiated:

angular-jobs-contract-correspondence-110-k7wp5-3b5mt-8x4rg was marked offline: Connection was broken
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: java.nio.channels.ClosedChannelException
    at jenkins.agents.WebSocketAgents$Session.closed(WebSocketAgents.java:157)
Also: hudson.remoting.ProxyException: hudson.remoting.Channel$CallSiteStackTrace: Remote call to angular-jobs-contract-correspondence-110-k7wp5-3b5mt-8x4rg
    at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1826)
    at hudson.remoting.Request.call(Request.java:199)
    at hudson.remoting.Channel.call(Channel.java:1041)
    at hudson.EnvVars.getRemote(EnvVars.java:438)
    at hudson.model.Computer.getEnvironment(Computer.java:1203)
    at PluginClassLoader for kubernetes//org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave.checkHomeAndWarnIfNeeded(KubernetesSlave.java:521)
    at PluginClassLoader for kubernetes//org.csanchez.jenkins.plugins.kubernetes.KubernetesSlave.createLauncher(KubernetesSlave.java:490)
    at PluginClassLoader for workflow-durable-task-step//org.jenkinsci.plugins.workflow.support.steps.ExecutorStepExecution$PlaceholderTask$PlaceholderExecutable.run(ExecutorStepExecution.java:1051)
    at hudson.model.ResourceController.execute(ResourceController.java:101)
    at hudson.model.Executor.run(Executor.java:446)
Also: hudson.remoting.ProxyException: org.jenkinsci.plugins.workflow.actions.ErrorAction$ErrorId: 60e0a5e9-7bd2-48fe-b37b-b8f9e13ca408
Caused: hudson.remoting.ProxyException: hudson.remoting.RequestAbortedException: java.nio.channels.ClosedChannelException
    at hudson.remoting.Request.abort(Request.java:346)
Given this error in my downstream jobs, I am not sure whether the issue is network bandwidth/connectivity (my controller is hosted on a different server which is notoriously unstable, and the added dynamic Jenkins agents communicating with it may be causing some kind of overload), or whether the Kubernetes host is running out of resources and somehow causing the WebSocket to be closed. When I ran the single job, it completed successfully, albeit slower overall due to the time required to provision the agent, to start the first shell command in the downstream job, and to check out the declarative SCM script.
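One thing I am considering to separate the two causes: if the VM is being overcommitted by five simultaneous pods (upstream plus four downstream), giving the pod template explicit resource requests and limits should make the Kubernetes scheduler keep any pod it cannot fit in Pending, instead of letting the pods starve and drop their WebSocket connections. A sketch of what I mean, with placeholder image name and sizes:

```groovy
// Sketch: explicit requests/limits on the agent pod so the scheduler
// queues pods it cannot fit instead of overcommitting the VM.
// Image name and resource values are placeholders.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: playwright-java-agent:latest
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "1"
        memory: "2Gi"
''') {
    node(POD_LABEL) {
        sh 'npx playwright test'
    }
}
```

While a full pipeline is running I can also watch `kubectl top nodes` and `kubectl describe node <node>` (looking for MemoryPressure and evicted pods) to see whether the host is actually saturated; if it is not, the unstable controller link would seem the more likely suspect.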

I understand that what I just explained is confusing, and I am willing to explain further; I just need some assistance, as I am the only person in my company who has taught himself to use these technologies, so I would appreciate any input.

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Users" group.