On Fri, Jul 09, 2021 at 11:02:38AM +0100, Richard W.M. Jones wrote:
> On Fri, Jul 09, 2021 at 12:52:22PM +0300, Roman Bolshakov wrote:
> > On Thu, Jul 08, 2021 at 09:11:08PM +0100, Richard W.M. Jones wrote:
> > > On Thu, Jul 08, 2021 at 10:32:25PM +0300, Roman Bolshakov wrote:
> > > > Hi all,
> > > > 
> > > > I'm working on a guestfs [1] connection plugin and am seeking design
> > > > advice.
> > > > 
> > > > libguestfs provides a set of command line tools that can be used to
> > > > operate on virtual machine disk images and modify their contents.
> > > > 
> > > > For every task, the connection plugin:
> > > > 
> > > >   1. Starts guestfish in --remote mode on a remote host over ssh
> > > >      and adds a disk (passed as a parameter to the guestfs
> > > >      connection).
> > > > 
> > > >   2. Runs the supermin appliance [2][3]. It typically takes two to
> > > >      four seconds to spin up the appliance VM.
> > > 
> > > Depending on the target, simply running something like
> > > 
> > >   guestfish -a /dev/null run
> > > 
> > > will create and cache an appliance in /var/tmp/.guestfs-$UID/ (and
> > > it's safe if two processes run in parallel).  Once the appliance is
> > > cached new libguestfs instances will use the cached appliance without
> > > any delay.
> > > 
> > > Doesn't this mechanism work?
> > 
> > Hi Rich,
> > 
> > Appliance caching indeed works. If I remove the cache, it takes around
> > 20 seconds to rebuild a new appliance, which is then used for new
> > libguestfs instances.
> > 
> > I was rather talking about the inherent latency caused by instance/VM
> > start. In the current implementation of the guestfs plugin, the
> > appliance is started before each task and stopped afterwards.
> 
> Oh I see, yes that's right.
> 
> > My intent is to find a way to run multiple ansible tasks on the same
> > libguestfs instance. That saves 2-4 seconds per task.
> 
> You shouldn't really reuse the same appliance across trust boundaries
> (eg. if processing two disks which are owned by different tenants of
> your cloud), since it means one tenant would be able to interfere with
> or extract secrets from the other tenant.  The 2-4 seconds is the
> price you pay here I'm afraid :-/
> 
> If all disks you are processing are owned by the same tenant then
> there's no worry about security.
> 

Right, I'm only trying to optimize access to the same disk by a set of
consecutive ansible tasks in the same playbook (a disk typically
belonging to a VM owned by a single user), so the trust boundaries are
preserved.
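The session-reuse idea described later in this thread (a long-running
guestfish --remote process whose PID is remembered across tasks and
killed when no longer needed) can be sketched as a small PID-file
helper. This is only an illustrative sketch, not part of any existing
plugin: the helper names and paths are invented, and a trivial
placeholder process stands in for "guestfish --remote -a <disk> run".

```python
import os
import signal
import subprocess
import sys
import tempfile

# Sketch: keep one long-running backend process per disk image,
# record its PID in a file, and let later tasks attach to the same
# process instead of paying the appliance start-up cost again.
# (Hypothetical helpers; nothing here is guestfish or Ansible API.)

def pidfile_for(disk):
    # One PID file per disk image keeps sessions separate per disk.
    name = "gfsession-" + os.path.basename(disk) + ".pid"
    return os.path.join(tempfile.gettempdir(), name)

def get_session(disk, command):
    """Return the PID of a running session for `disk`, starting
    `command` as a detached process if no live session exists."""
    path = pidfile_for(disk)
    if os.path.exists(path):
        pid = int(open(path).read())
        try:
            os.kill(pid, 0)        # probe: is the process still alive?
            return pid             # reuse the existing session
        except OSError:
            os.unlink(path)        # stale PID file; start afresh
    proc = subprocess.Popen(command, start_new_session=True)
    with open(path, "w") as f:
        f.write(str(proc.pid))
    return proc.pid

def close_session(disk):
    """Kill the session so a real VM can take over the disk image."""
    path = pidfile_for(disk)
    if os.path.exists(path):
        pid = int(open(path).read())
        try:
            os.kill(pid, signal.SIGTERM)
        except OSError:
            pass                   # already gone
        os.unlink(path)
```

With this shape, the first task that calls get_session pays the 2-4
second start-up cost, every subsequent task on the same disk gets the
same PID back, and a final cleanup task calls close_session before the
disk image is handed to a real VM.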

Thanks,
Roman

> > > 
> > > Nevertheless for virt-v2v we have something similar because virt-v2v
> > > is a long-running process that we want to start and query status from.
> > > My colleague wrote a wrapper (essentially a sort of daemon) which
> > > manages virt-v2v, and I guess may be useful to look at:
> > > 
> > > https://github.com/ManageIQ/manageiq-v2v-conversion_host/tree/master/wrapper
> > 
> > I'm doing something similar, except I'm running guestfish --remote
> > under nohup, remembering its PID and then interacting with it. If we
> > find a way to pass the PID associated with a connection from task to
> > task in ansible, and to kill it when it's no longer needed (so that a
> > real VM can be started with the disk image), then we can achieve very
> > fast and reliable task execution on disk images.
> 
> Rich.
> 
> -- 
> Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
> Read my programming and virtualization blog: http://rwmj.wordpress.com
> virt-p2v converts physical machines to virtual machines.  Boot with a
> live CD or over the network (PXE) and turn machines into KVM guests.
> http://libguestfs.org/virt-v2v
> 
