Re: Advice on converting zone-wide to cluster-wide storage

2017-09-30 Thread Sateesh Chodapuneedi
Hi Andrija,
I’ve converted cluster-wide NFS-based storage pools to zone-wide in the past.

Basically there are two steps for NFS and Ceph:
1. DB update
2. If there is more than one cluster in that zone, un-manage and then re-manage 
all the clusters except the original cluster

In addition to Mike’s suggestion, you need to do the following:
• Set the ‘scope’ of the storage pool to ‘ZONE’ in the `cloud`.`storage_pool` table

Example SQL looks like the below, given that the hypervisor in my setup is 
VMware; substitute your storage pool's numeric id for <pool_id>:
mysql> update storage_pool set scope='ZONE', cluster_id=NULL, pod_id=NULL, 
hypervisor='VMware' where id=<pool_id>;
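
Before running the update, it may be worth capturing the pool's current row so 
the change can be reverted if needed. A quick check along these lines should do 
(just a sketch; id and name are the usual columns, and <pool_id> is again a 
placeholder):
mysql> select id, name, scope, cluster_id, pod_id, hypervisor from 
storage_pool where id=<pool_id>;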

With the DB update in place, the changes will be reflected in the UI as well.

After the DB update, it is important to un-manage and then re-manage the 
clusters (all except the original cluster to which this storage pool belongs) 
so that the hosts in the other clusters also connect to this storage pool, 
making it a full-fledged zone-wide storage pool.
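
For the un-manage/manage step, a CloudMonkey session along these lines is one 
way to do it; this is only a sketch, assuming the updateCluster API's 
managedstate parameter, and the cluster UUID is a placeholder:
$ cloudmonkey update cluster id=<cluster_uuid> managedstate=Unmanaged
$ cloudmonkey update cluster id=<cluster_uuid> managedstate=Managed
Repeat this for every cluster in the zone other than the pool's original 
cluster.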

Hope this helps you!

Regards,
Sateesh Ch,
CloudStack Development, Accelerite,
www.accelerite.com
@accelerite


-Original Message-
From: "Tutkowski, Mike" 
Reply-To: "dev@cloudstack.apache.org" 
Date: Friday, 29 September 2017 at 6:57 PM
To: "dev@cloudstack.apache.org" , 
"us...@cloudstack.apache.org" 
Subject: Re: Advice on converting zone-wide to cluster-wide storage

Hi Andrija,

I just took a look at the SolidFire logic around adding primary storage at 
the zone level versus the cluster scope.

I recommend you try this in development prior to production, but it looks 
like you can make the following changes for SolidFire:

• In cloud.storage_pool, enter the applicable value for pod_id (this should 
be null when being used as zone-wide storage and an integer when being used as 
cluster-scoped storage).
• In cloud.storage_pool, enter the applicable value for cluster_id (this 
should be null when being used as zone-wide storage and an integer when being 
used as cluster-scoped storage).
• In cloud.storage_pool, change the hypervisor_type from Any to (in your 
case) KVM.
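
As a rough sketch (untested, so again, please verify in development first), 
the full update for a zone-to-cluster conversion on KVM might look like the 
following; the scope value 'CLUSTER', the hypervisor column name, and the 
placeholder ids are my assumptions:
mysql> update storage_pool set scope='CLUSTER', pod_id=<pod_id>, 
cluster_id=<cluster_id>, hypervisor='KVM' where id=<pool_id>;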

Talk to you later!
Mike

On 9/29/17, 5:18 AM, "Andrija Panic"  wrote:

Hi all,

I was wondering if anyone has experience hacking the DB and converting
zone-wide primary storage to cluster-wide.

We have:
1 x NFS primary storage, zone-wide
1 x CEPH primary storage, zone-wide
1 x SOLIDFIRE primary storage, zone-wide
1 zone, 1 pod, 1 cluster, Advanced zone, and 1 NFS regular secondary
storage (SS not relevant here).

I'm assuming a few DB changes would do it (storage_pool table: scope,
cluster_id, pod_id fields), but I have not yet had time to really play
with it.

Any advice on whether this is OK to do in a production environment would be
very much appreciated.

We plan to expand to many more racks, so we might move from
single-everything (pod/cluster) to multiple pods/clusters, etc., and want to
design Primary Storage accordingly.

Thanks!

-- 

Andrija Panić






Re: Support of Shelve like Nova in OpenStack?

2017-09-30 Thread Rafael Weingärtner
You mean to stop a VM, take a snapshot of it, and then delete the VM?

Not all in a single feature, but you could do that with a script. We
have snapshots and recurring snapshots that you could use (even with a
running VM). When a VM is stopped, we also “remove”/free the resources
it was allocating on the hypervisor (so the resources “allocated” to a
VM are not always reserved). And of course, you can delete a VM. If you
are an administrator, you can delete and expunge it immediately;
otherwise, it goes through the normal deletion process, which may take
some time (for security reasons).
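
As a sketch, such a script could string the usual API calls together via
CloudMonkey; the call names below (stopVirtualMachine, listVolumes,
createSnapshot, destroyVirtualMachine) are standard, but the UUIDs are
placeholders and the exact flow is up to you:
$ cloudmonkey stop virtualmachine id=<vm_uuid>
$ cloudmonkey list volumes virtualmachineid=<vm_uuid>
$ cloudmonkey create snapshot volumeid=<root_volume_uuid>
$ cloudmonkey destroy virtualmachine id=<vm_uuid>
The snapshot can later be turned into a template or volume to “unshelve”
the VM.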

On Sat, Sep 30, 2017 at 5:22 AM, Leo Juan  wrote:

> Hi All,
>
> Does anyone know whether CloudStack supports a shelve function like
> nova shelve in OpenStack?
>
> "Shelving stops the instance and takes a snapshot of it. Then
> depending on the value of the shelved_offload_time config option, the
> instance is deleted from the hypervisor (0), never deleted (-1), or
> deleted after some period of time (> 0). Note that it's just
> destroying the backing instance on the hypervisor, the actual instance
> in the nova database is not deleted. Then you can later unshelve the
> instance"
>
> So that the resources of an unused VM can be released.
>
>Regards,
>
>   Leo Juan
>



-- 
Rafael Weingärtner


Re: Advice on converting zone-wide to cluster-wide storage

2017-09-30 Thread Tutkowski, Mike
Good points, Sateesh! Thanks for chiming in. :)
