On 09/16/2013 07:16 PM, Marcus Sorensen wrote:
I agree, it makes things more complicated. However, updating libvirt
should be a maintenance-mode activity anyway; I've seen it lose track of VMs
across major libvirt upgrades and had to kill/restart all VMs to
re-register them with libvirt, so it's best to have the host in maintenance.

If I remember right, the issues with persistent storage definitions
were primarily related to failure scenarios, e.g. a hard power-off of a
host: the host would build up dozens of primary storage definitions
that it didn't need, creating unnecessary dependencies and potentially
unnecessary outages (see the other email about what happens when storage
goes down).
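
To illustrate the difference, a rough virsh sketch (the NFS server, paths and
pool name below are made up):

cat > /tmp/primary-pool.xml <<'EOF'
<pool type='netfs'>
  <name>example-primary</name>
  <source>
    <host name='nfs.example.com'/>
    <dir path='/export/primary'/>
  </source>
  <target>
    <path>/mnt/example-primary</path>
  </target>
</pool>
EOF

# Transient pool (what the 4.2 agent appears to use, if I read this thread
# right): active immediately, but the definition is gone as soon as
# libvirtd restarts -- which is this bug.
virsh pool-create /tmp/primary-pool.xml

# Persistent pool: the definition survives a libvirtd restart, but it also
# stays on the host until something cleans it up, which is how hosts ended
# up with dozens of stale primary storage definitions after failures.
virsh pool-define /tmp/primary-pool.xml
virsh pool-start example-primary
virsh pool-autostart example-primary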

If Edison has a patch then that's great. One idea I had was that if
the pool fails to register because the mount point already exists, we
could use an alternate mount point to register it. Old stuff would keep
working, and new stuff would continue.

No! That will break migrations to machines which don't have that mount point. QCOW2 backing files will also break in the long run.
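
The backing file path is recorded inside the QCOW2 image itself, so a volume
created while the pool was mounted under an alternate path keeps pointing
there (the paths below are only an example):

# Volume created on a host that registered the pool under /mnt/alt-mountpoint:
qemu-img create -f qcow2 -b /mnt/alt-mountpoint/template.qcow2 \
    /mnt/alt-mountpoint/volume.qcow2

# Any host that mounts the same storage somewhere else can no longer resolve
# the backing file, so the disk fails to open after a migration:
qemu-img info /mnt/alt-mountpoint/volume.qcow2
# backing file: /mnt/alt-mountpoint/template.qcow2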

Please do not do that.

Wido


On Mon, Sep 16, 2013 at 3:43 AM, Wei Zhou (JIRA) <j...@apache.org> wrote:

     [ https://issues.apache.org/jira/browse/CLOUDSTACK-3565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13768197#comment-13768197 ]

Wei Zhou commented on CLOUDSTACK-3565:
--------------------------------------

Marcus,

Putting the hosts into maintenance one by one makes the upgrade/change
complicated.
Edison changed the source to support re-creating the storage pool in the fix
for CLOUDSTACK-2729.
However, the test failed in my environment.

By the way, I did not see any other issue when I used CloudStack 4.0, where the
storage pools are persistent.
I do not know what issue would be caused by that.
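
A quick way to check how the pools on a host are registered (the UUID is taken
from the report below; the commented output is what you would expect to see if
the pool is transient):

virsh pool-info 41b632b5-40b3-3024-a38b-ea259c72579f
# Name:           41b632b5-40b3-3024-a38b-ea259c72579f
# State:          running
# Persistent:     no    <- transient, lost when libvirtd restarts
# Autostart:      no

# Persistent pools, active or not, would still be listed here after a restart:
virsh pool-list --all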

Restarting libvirtd service leading to destroy storage pool
-----------------------------------------------------------

                 Key: CLOUDSTACK-3565
                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3565
             Project: CloudStack
          Issue Type: Bug
      Security Level: Public(Anyone can view this level - this is the default.)
          Components: KVM
    Affects Versions: 4.2.0
         Environment: KVM
Branch 4.2
            Reporter: Rayees Namathponnan
            Assignee: Marcus Sorensen
            Priority: Blocker
              Labels: documentation
             Fix For: 4.2.0


Steps to reproduce
Step 1 : Create a CloudStack setup on KVM
Step 2 : From the KVM host check "virsh pool-list"
Step 3 : Stop and start the libvirtd service
Step 4 : Check "virsh pool-list"
Actual Result
"virsh pool-list" is blank after restarting the libvirtd service
[root@Rack2Host12 agent]# virsh pool-list
Name                 State      Autostart
-----------------------------------------
41b632b5-40b3-3024-a38b-ea259c72579f active     no
469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
[root@Rack2Host12 agent]# service cloudstack-agent stop
Stopping Cloud Agent:
[root@Rack2Host12 agent]# virsh pool-list
Name                 State      Autostart
-----------------------------------------
41b632b5-40b3-3024-a38b-ea259c72579f active     no
469da865-0712-4d4b-a4cf-a2d68f99f1b6 active     no
fff90cb5-06dd-33b3-8815-d78c08ca01d9 active     no
[root@Rack2Host12 agent]# virsh list
  Id    Name                           State
----------------------------------------------------
[root@Rack2Host12 agent]# service libvirtd stop
Stopping libvirtd daemon:                                  [  OK  ]
[root@Rack2Host12 agent]# service libvirtd start
Starting libvirtd daemon:                                  [  OK  ]
[root@Rack2Host12 agent]# virsh pool-list
Name                 State      Autostart
-----------------------------------------
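
A possible manual workaround until the agent handles this, assuming the pool
XML was dumped while the pool was still registered (UUID taken from the output
above):

# Before restarting libvirtd, save the pool definition:
virsh pool-dumpxml 41b632b5-40b3-3024-a38b-ea259c72579f > /tmp/pool.xml

# After the restart, re-create the transient pool from the dump:
virsh pool-create /tmp/pool.xml
virsh pool-list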

