Hi Les,

Thanks for your follow-up.


> > I have not found a reference in this regard, and would appreciate a 
> pointer. 
> > I will do the digging. 
>
> This would be a starting point - there is also a cli and a way to use 
> groovy to access the whole api. 
> https://wiki.jenkins-ci.org/display/JENKINS/Remote+access+API 
>

I also did more digging and found this thread to be 
useful: 
http://serverfault.com/questions/309848/how-can-i-check-the-build-status-of-a-jenkins-build-from-the-command-line
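
For anyone else reading the archives, something along these lines should do the trick for checking a build's status from the command line (a minimal Python sketch; the master URL and job name are placeholders for my POC setup, and it assumes security is not enabled, otherwise a user/API token would also be needed):

#!/usr/bin/env python
# Sketch: query the Jenkins remote JSON API for the status of a job's
# last build.  JENKINS_URL and JOB_NAME are placeholders, not real values.
import json
from urllib.request import urlopen

JENKINS_URL = "http://localhost:8080"   # assumption: master on localhost
JOB_NAME = "my-job"                     # assumption: example job name

url = "{0}/job/{1}/lastBuild/api/json".format(JENKINS_URL, JOB_NAME)
data = json.loads(urlopen(url).read().decode("utf-8"))

# 'result' is SUCCESS/FAILURE/ABORTED once the build has finished;
# while it is still running, 'result' is null and 'building' is true.
print("building:", data.get("building"), "result:", data.get("result"))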
 

> [...]
> > 
> > All have openjdk, Jenkins packages installed, all can ssh to each other 
> as 
> > the 'jenkins' user. 
>
> You shouldn't need the jenkins package installed on the slaves, and 
> they don't need to ssh to each other.  Just a user with ssh keys set 
> up so the master can execute commands and it will copy the slave jar 
> over itself. 
>

Due to the lack of clear documentation for a hands-off type of cluster setup, 
I have so far based what I have done mostly on the following:

   1. https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-OtherRequirements
   2. https://wiki.jenkins-ci.org/display/JENKINS/Distributed+builds#Distributedbuilds-Example%3AConfigurationonUnix

Up to now, I have been using the list in 2. as my checklist.  This is why 
I have installed Jenkins on all POC CI nodes so far:

"*On master, I have a little shell script that uses rsync to synchronize 
master's /var/jenkins to slaves (except /var/jenkins/workspace) I use this 
to replicate tools on all slaves.*"
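
For reference, that replication step would look something like the sketch below in Python (the slave hostnames are placeholders, and this is just my reading of the wiki's description):

#!/usr/bin/env python
# Sketch of the wiki's replication step: push /var/jenkins from the master
# to each slave, excluding the workspace directory.  Hostnames below are
# placeholders for my POC nodes.
import subprocess

SLAVES = ["ci-slave1", "ci-slave2"]     # assumption: example hostnames

for slave in SLAVES:
    subprocess.check_call([
        "rsync", "-az",
        "--exclude", "workspace/",
        "/var/jenkins/",
        "jenkins@{0}:/var/jenkins/".format(slave),
    ])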

Thanks for letting me know that installing Jenkins on slaves is 
unnecessary.  Let me make my system more KISS :-)
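
If I understand the SSH slave setup correctly, each node then only needs the 'jenkins' user with the master's public key and a JDK, since the master copies the slave jar over and launches it itself.  A quick sanity check from the master might look like this (again just a sketch, hostnames are placeholders):

#!/usr/bin/env python
# Sanity-check sketch: confirm each slave is reachable over key-based SSH
# as the 'jenkins' user and has a usable JVM.  No Jenkins package needed
# on the slaves themselves.  Hostnames are placeholders for my POC nodes.
import subprocess

SLAVES = ["ci-slave1", "ci-slave2"]     # assumption: example hostnames

for slave in SLAVES:
    subprocess.check_call(
        ["ssh", "jenkins@{0}".format(slave), "java", "-version"])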


> -- 
>   Les Mikesell 
>     lesmi...@gmail.com 
>

-- Zack 
