Have you tried "screen"?

On Fri, Nov 11, 2016 at 10:02 AM, bruce <badoug...@gmail.com> wrote:

> On Tue, Nov 8, 2016 at 1:33 PM, Rick Stevens <ri...@alldigital.com> wrote:
> > On 11/08/2016 10:00 AM, bruce wrote:
> >> Hey Rick!!
> >>
> >> Thanks for the reply...
> >>
> >> That was kind of going to be my thinking..
> >>
> >> I came across some apps that appear to be devOps related, one of which
> >> was ClusterSSH/Cluster SSH.
> >>
> >> As far as I can tell, it appears to let you set up the given
> >> IP address, the user to run the ssh connection as, and the ssh
> >> config files, so that the user can connect/access the given term
> >> sessions for the remote instances.
> >>
> >> The app also appears to allow you to then run different commands
> >> on the given systems. (Not sure if you can "package" the commands you
> >> want to run, so you can send a group of commands to different groups
> >> of systems.)
> >>
> >> I'm currently looking at information on this, as well as a few others.
> >>
> >> My use case has a bunch of VMs on DigitalOcean, so I need a way of
> >> "managing"/starting the processes on the machines - doing it manually
> >> ain't going to cut it when I have 40-50 to test, and if things work, it
> >> will easily scale to 300-500 where I have to spin up, run the stuff,
> >> and then spin them down..
> >>
> >> Actually, it would be good to have a GUI/tool that uses the
> >> DO/DigitalOcean API to create the droplets, run the jobs, take the
> >> snapshots, and destroy them again, to save costs.
> >>
> >> Whew!!
> >
> > Ok, what I suggested was really aimed at launching processes in the
> > background on a remote machine in a way where you could check on them.
> >
> > ClusterSSH (a.k.a. "cssh") is a different beastie. It's a GUI tool that
> > allows you to open parallel ssh sessions to a whole bunch of remote
> > machines simultaneously. We use it a lot, as we have about 300 machines
> > in our data center broken into "clusters" that do specific things.
> >
> > ClusterSSH opens a small terminal window for each machine so you can
> > see what's going on. You can enter commands for THAT machine in that
> > window as well. It also opens a "master" command line window, and
> > whatever you type into that master window gets sent to ALL of the open
> > windows. Fairly handy, but be REALLY careful, as sometimes (due to
> > network load, etc.) some keystrokes may NOT make it to all of the open
> > windows. This can be disastrous if you're, say, editing files and such.
> >
> > cssh does offer a way to specify a command to be sent to all of the
> > windows by using its "-a" option. I'd imagine it'd be something like:
> >
> >         cssh -a "screen -d -m -S firstsessionname 'command you wish to run on the VM'" user1@1.2.3.4 user1@5.6.7.8
> >
> > which should run that screen command on the two machines specified. I
> > can't speak to that too well. We don't typically use it like that--we
> > tend to use it in the interactive mode only.
> >
> > I can give you examples of cssh usage (such as an /etc/clusters file
> > and such) if you want to go down that road.
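> >
> > Just to give a rough idea now (the tags and hosts below are
> > placeholders), an /etc/clusters file is simply a tag followed by the
> > machines in that cluster, one cluster per line:
> >
> >         webfarm  user1@1.2.3.4 user1@5.6.7.8
> >         dbfarm   user1@9.10.11.12 user1@13.14.15.16
> >
> > and then "cssh webfarm" opens windows to every host tagged "webfarm".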
> >
> >> On Tue, Nov 8, 2016 at 12:12 PM, Rick Stevens <ri...@alldigital.com> wrote:
> >>> On 11/08/2016 04:02 AM, bruce wrote:
> >>>> Hi.
> >>>>
> >>>> Trying to get my head around what should be a basic/trivial process.
> >>>>
> >>>> I've got a remote VM. I can fire up a local term, and then ssh into
> >>>> the remote VM with no prob. I can then run the remote functions, all
> >>>> is good.
> >>>>
> >>>> However, I'd really like to have some process on the local side that
> >>>> would allow me to do all of the above from a shell/program running
> >>>> locally.
> >>>>
> >>>> Pseudo process:
> >>>> -spin up the remote term of user1@1.2.3.4
> >>>> -track the remote term/session - so I could "log into it" and see
> >>>>   what's going on with the initiated processes
> >>>> -perform some dir functions as user1 on the remote system
> >>>> -run appA as user1 on 1.2.3.4 (long running)
> >>>> -run appB as user1 on 1.2.3.4 (long running)
> >>>> -etc.
> >>>> -when the apps/processes are finished, shut down the "remote term"
> >>>>
> >>>> I'd prefer to be able to do all of this without actually having the
> >>>> "physical" local term be generated/displayed on the local desktop.
> >>>>
> >>>> I'm going to be running a bunch of long running apps on the cloud, so
> >>>> I'm trying to walk through the appropriate process/approach to
> >>>> handling this.
> >>>>
> >>>> Sites/Articles/thoughts are more than welcome.
> >>>
> >>> 1. Set up ssh keys so you don't need to use passwords between the
> >>> two systems (no interaction).
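> >>>
> >>> Something along these lines should do it (the key type and paths are
> >>> just one common choice):
> >>>
> >>>         ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
> >>>         ssh-copy-id -i ~/.ssh/id_ed25519.pub user1@1.2.3.4
> >>>
> >>> After that, "ssh user1@1.2.3.4" shouldn't prompt for a password.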
> >>>
> >>> 2. Launch your tasks on the remote VM using screen over ssh by doing
> >>> something like:
> >>>
> >>>         ssh user1@1.2.3.4 screen -d -m -S firstsessionname "command you wish to run on the VM"
> >>>         ssh user1@1.2.3.4 screen -d -m -S secondsessionname "second command you wish to run on the VM"
> >>>
> >>> 3. If you want to check on the tasks, log into the VM via ssh
> >>> interactively and check the various screen sessions. I recommend
> >>> setting the screen session names via the "-S" option so they're
> >>> easier to differentiate.
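> >>>
> >>> For example (the session name is just the one from step 2):
> >>>
> >>>         ssh user1@1.2.3.4
> >>>         screen -ls                   # list the running sessions
> >>>         screen -r firstsessionname   # attach to one of them
> >>>
> >>> and Ctrl-a d detaches again, leaving the job running.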
> >>>
> >>> That should do it. The items in step 2 could be done in a shell script
> >>> if you're lazy like me. :-)
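> >>>
> >>> A trivial (untested) sketch of such a script -- the host list and the
> >>> appA/appB commands are obviously placeholders for your own:
> >>>
> >>>         #!/bin/bash
> >>>         # start the long-running jobs in detached screen sessions
> >>>         HOSTS="user1@1.2.3.4 user1@5.6.7.8"
> >>>         for h in $HOSTS; do
> >>>             ssh "$h" screen -d -m -S appA appA
> >>>             ssh "$h" screen -d -m -S appB appB
> >>>         done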
>
> ------
> Hey Rick!!
>
> Thanks for the reply..
>
> Slowly slogging through this.
>
> My use case (for now) has DigitalOcean (DO) as the cloud provider. The
> project is working toward being able to spin up/down 500-1000
> droplets as needed.
>
> DO provides the API to create/delete the actual instances, so the cost
> of the actual cloud usage can be managed. This is trivial.
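>
> (For reference, creating/destroying a droplet is basically just curl
> against the v2 API -- the token, name, region, size and image slugs
> below are placeholders:
>
>     curl -X POST "https://api.digitalocean.com/v2/droplets" \
>       -H "Authorization: Bearer $DO_TOKEN" \
>       -H "Content-Type: application/json" \
>       -d '{"name":"worker-01","region":"nyc3","size":"512mb","image":"fedora-24-x64"}'
>       # add "ssh_keys":[...] so the droplets come up with your key installed
>
>     curl -X DELETE "https://api.digitalocean.com/v2/droplets/$DROPLET_ID" \
>       -H "Authorization: Bearer $DO_TOKEN"
>
> plus a bit of JSON parsing to collect the droplet IDs/IPs.)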
>
> However, on the devops side, it appears that ClusterSSH would/will be
> useful (needed) in order to manage/track the overall progress of the
> running apps on the droplets.
>
> I'm envisioning a process that allows:
>
> 1) The ability to use a "config" process to spin up the terms for the
>    required droplets
>    -droplet would have user1:passwd1 and use an ssh key
>    -droplet would have a known ipAddress
>    -as the process/droplets change, the required config file could be
>     script generated
>
> 2) The ability to spin down the terms
> 3) The ability to "see" the progress of the generated/viewed terms
>    -The assumption is that if the Screen process is used to run the
>     remote processes, then when the term is reconnected via
>     clusterSSH --ssh, the screen session can automatically be
>     attached to see the current status/process (a rough sketch of
>     what I mean follows this list)
> 4) As the term is spun up, the process needs to be able to invoke/run
>    the remote process
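>
> Roughly what I mean for item 3 (the session name is just an example):
>
>     # inside the term for a droplet, reattach the running session
>     screen -ls
>     screen -r appA     # or "screen -dr appA" if it's attached elsewhere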
>
> **Also/aside, I need to be able to generate a remote cmd to check if a
> given process is running on the droplet -- this should be trivial once
> the clusterSSH stuff is nailed down, using the same/similar process as
> for generating the remote Screen app/sessions on the remote droplets.
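>
> Something like either of these should do for that check (the app name
> is just a placeholder):
>
>     ssh user1@1.2.3.4 "pgrep -af appA"
>     ssh user1@1.2.3.4 "screen -ls | grep appA"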
>
> If the project is running 100-200 droplets, there's no way to check
> all the droplets within a GUI on the desktop, so there should be a way
> to "view" 20-30 at a time...
>
> So... whatever you have regarding the clusterSSH/Screen session part, hit
> me up.
>
> Or, if someone has thoughts/comments, feel free to post!
>
> Thanks
>
_______________________________________________
users mailing list -- users@lists.fedoraproject.org
To unsubscribe send an email to users-le...@lists.fedoraproject.org
