I'm testing this in the lab, with no load yet.
> On Aug 23, 2018, at 2:30 AM, Matthew Thode wrote:
>
>> On 18-08-22 23:04:57, Satish Patel wrote:
>> Matthew,
>>
>> I haven't applied any patch yet, but I am noticing that in the cluster some
>> hosts migrate VMs super fast and some hosts migrat
Hi all
I'm deploying Queens on Ubuntu 18.04 with one controller, one network
controller and, for now, one compute node. I'm using ML2 with the linuxbridge
mechanism driver and a self-service type of network. This is a dual-stack
environment (v4 and v6).
IPv4 is working fine, NAT is OK, and packets flo
Matt,
I am going to override the following in the user_variables.yml file; in that
case, do I need to run the ./bootstrap-ansible.sh script?
## Nova service
nova_git_repo: https://git.openstack.org/openstack/nova
nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c #
HEAD of "stable/queens" as of 06.08.2018
On 18-08-23 14:33:44, Satish Patel wrote:
> Matt,
>
> I am going to override the following in the user_variables.yml file; in that
> case, do I need to run the ./bootstrap-ansible.sh script?
>
> ## Nova service
> nova_git_repo: https://git.openstack.org/openstack/nova
> nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c
Matt,
I've added "nova_git_install_branch:
a9c9285a5a68ab89a6543d143c364d90a01cd51c" in user_variables.yml and
ran the repo-build.yml playbook, but it didn't change anything.
I am inside the repo container and it's still showing the old timestamp on
all the nova files; I checked all the files and they don't seem to have chang
Looks like it needs all 3 lines in the user_variables.yml file. After
putting in all 3 lines, it works!!
## Nova service
nova_git_repo: https://git.openstack.org/openstack/nova
nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c #
HEAD of "stable/queens" as of 06.08.2018
nova_git_project_group
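For anyone following along, the override workflow can be sketched as below. Note the assumptions: the scratch path is illustrative, and I am guessing `nova_all` for the `nova_git_project_group` value (it was cut off above), so verify it against your OpenStack-Ansible release before using it.

```shell
# Sketch: append the three nova_git_* overrides to a scratch copy of
# user_variables.yml (the real file lives in /etc/openstack_deploy/).
# ASSUMPTION: nova_git_project_group is nova_all -- verify for your release.
VARS=/tmp/user_variables.yml
cat >> "$VARS" <<'EOF'
## Nova service
nova_git_repo: https://git.openstack.org/openstack/nova
nova_git_install_branch: a9c9285a5a68ab89a6543d143c364d90a01cd51c
nova_git_project_group: nova_all
EOF
# Count the nova_git_* overrides we just appended.
grep -c '^nova_git_' "$VARS"
```

After editing the real user_variables.yml, re-running the repo-build.yml playbook and then the nova install playbook should pick up the new SHA; as far as I can tell, bootstrap-ansible.sh should not be needed just for this, but double-check for your release.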
I have upgraded nova and all the nova components got upgraded, but my
live migration is still running at 8 Mbps. What else is wrong here?
I am using CentOS 7.5.
On Thu, Aug 23, 2018 at 3:26 PM Satish Patel wrote:
>
> Looks like it needs all 3 lines in the user_variables.yml file. After
> putting in all 3 l
I back up my volumes daily, using incremental backups to minimize
network traffic and storage consumption. I want to periodically remove
old backups, and during this pruning operation, avoid entering a state
where a volume has no recent backups. Ceph RBD appears to support this
workflow, but unfort
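The pruning constraint described here (delete old backups without ever leaving the volume with no recent, restorable backup) can be sketched as a tiny policy: keep the newest full backup plus the increments that depend on it, and mark everything older as deletable. This is purely an illustration of the policy, not Cinder's or Ceph's actual behavior; the dates and listing format are made up.

```shell
# Sketch of a pruning policy. Input: backup listing, oldest first, one line
# per backup in the form "<date> <full|incr>". Output: the backups that are
# safe to delete, i.e. everything older than the most recent full backup.
# The newest full backup and its increments are kept, so the volume always
# retains a restorable chain. If no full backup exists, nothing is printed.
prune_candidates() {
  awk '$2 == "full" { last_full = NR }
       { lines[NR] = $0 }
       END { for (i = 1; i < last_full; i++) print lines[i] }'
}

# Example listing (dates are illustrative):
printf '%s\n' \
  '2018-08-10 full' \
  '2018-08-11 incr' \
  '2018-08-20 full' \
  '2018-08-21 incr' | prune_candidates
# -> prints:
#    2018-08-10 full
#    2018-08-11 incr
```

Actually deleting would then mean mapping each printed line to a `cinder backup-delete` call; with the Ceph driver I'd test this on a scratch volume first.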
I have updated this bug here; something is wrong:
https://bugs.launchpad.net/nova/+bug/1786346
After the nova upgrade I compared these 3 files against
https://review.openstack.org/#/c/591761/ and I am not seeing any
change here, so it looks like this is not a complete patch.
Are you sure they pushed this chang
Hi Chris,
Unless I overlooked something, I don't see Cinder or Ceph versions posted.
Feel free to just post the codenames but give us some inkling.
On Thu, Aug 23, 2018 at 3:26 PM, Chris Martin wrote:
> I back up my volumes daily, using incremental backups to minimize
> network traffic and sto
Apologies -- I'm running the Pike release of Cinder and the Luminous release
of Ceph, deployed with OpenStack-Ansible and Ceph-Ansible respectively.
On Thu, Aug 23, 2018 at 8:27 PM, David Medberry wrote:
> Hi Chris,
>
> Unless I overlooked something, I don't see Cinder or Ceph versions posted.
>
> Feel free
Hi:
Sorry for bothering everyone. I have now upgraded my OpenStack to Queens, and
use the nova-placement-api to provide resources.
When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to update the
memory_mb allocation_ratio, it succeeds. But after some minutes, it reverts to
the old value automatic
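If I remember right, this happens because the nova-compute resource tracker periodically reports inventory to placement and overwrites values set directly through the API. Persisting the ratio in nova.conf on the compute node should survive those periodic updates. A minimal sketch, assuming a Queens-era config layout (the value itself is illustrative):

```ini
# nova.conf on the compute node -- the ratio value is illustrative
[DEFAULT]
ram_allocation_ratio = 1.5
```

Then restart nova-compute and watch whether placement keeps the new value.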
Forgive me, by mistake I grabbed the wrong commit and that was the reason I
didn't see any change after applying the patch.
It works after applying the correct version :) Thanks
On Thu, Aug 23, 2018 at 6:36 PM Satish Patel wrote:
>
> I have updated this bug here; something is wrong:
> https://bugs.launchpad.net
I am trying to set up a dedicated network for live migration, and for that I
did the following in nova.conf.
My dedicated network is 172.29.0.0/24:
live_migration_uri =
"qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa"
live_migration_tunnelled = False
live_migration_inbound_addr = "17
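For reference, a complete version of that fragment might look like the sketch below. In Queens these options live in the [libvirt] section, as far as I know; the inbound address is a placeholder I made up on the 172.29.0.0/24 range and must be each compute host's own address on the dedicated network.

```ini
# nova.conf, [libvirt] section -- 172.29.0.10 is a hypothetical per-host
# address on the dedicated 172.29.0.0/24 migration network
[libvirt]
live_migration_uri = qemu+ssh://nova@%s/system?no_verify=1&keyfile=/var/lib/nova/.ssh/id_rsa
live_migration_tunnelled = False
live_migration_inbound_addr = 172.29.0.10
```

With tunnelled migration disabled and an inbound address set, the migration traffic should flow over that network rather than the management network.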