-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Potential reuse of revoked Identity tokens
- ---
### Summary ###
An authorization token issued by the Identity service can be revoked;
revocation is designed to immediately make that token invalid for future use.
When the PKI or PKIZ token providers are
I can speak to the networking bits...
In OpenStack, DHCP doesn't imply inconsistent IP addresses. If you boot a
VM without a specific IP address, it keeps the IP chosen for it until you
destroy the VM. If you boot a VM with a specific (static) IP address, DHCP
serves that IP to the VM. You can dis
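As a rough sketch of booting with a specific address (image, flavor, net ID and
IP below are placeholders, not from this thread), the fixed IP can be passed on
the NIC at boot time:

  nova boot --image cirros --flavor m1.small \
    --nic net-id=<NET_ID>,v4-fixed-ip=192.168.1.50 my-vm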
Hi all,
We are moving from vSphere to OpenStack. Currently I'm trying to figure
out the easiest way to move my vmdk files over. They are already
integrated directly into our internal network.
Option 1:
Use provider networks (ie.
http://docs.openstack.org/networking-guide/scenario_provider_ovs.ht
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
Glance image signature uses an insecure hash algorithm (MD5)
- ---
### Summary ###
During the Liberty release the Glance project added a feature that
supports verifying images by their signature. There is a flaw in the
implementation that degrades v
IMHO, for Magnum and Nested Quota we need more discussion
before proceeding, because:
1. The main intent of hierarchical multi-tenancy is creating a hierarchy of
projects (so that it is easier for the cloud provider to manage different
projects) and the nested quota driver being able to validate a
Long story short, I want my VM to have dedicated CPUs with a 1-to-1
mapping, because I am running a VoIP application and need CPUs
dedicated to the guest VM. What should I do?
On Tue, Dec 15, 2015 at 12:34 PM, Arne Wiebalck wrote:
> Thanks for clarifying the terminology, Chris, that’s helpful!
>
> My
Thanks… from a user-experience point of view it is really important that we keep
the nested quota implementations in sync, so we don’t end up with different semantics.
Tim
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: 15 December 2015 18:44
To: OpenStack Development Mailing List (not for usage q
If you specify "vcpu_pin_set=2,3,6,7" in /etc/nova/nova.conf then nova will
limit the VMs to run on that subset of host CPUs. This involves pinning from
libvirt, but isn't really considered a "dedicated" instance in nova.
By default, instances that can run on all of the allowed host CPUs are
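As a concrete sketch of that setting, each compute node would carry something
like the following in its nova.conf (the CPU list is the one used later in this
thread), followed by a restart of nova-compute:

  [DEFAULT]
  vcpu_pin_set=2,3,6,7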
If I enable "NUMATopologyFilter", does Juno support pinning?
FYI, I am following this link:
http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
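For reference, enabling that filter is a scheduler setting on the controller;
roughly, with <existing filters> standing in for whatever list you already have,
and restarting nova-scheduler afterwards:

  scheduler_default_filters=<existing filters>,NUMATopologyFilter,AggregateInstanceExtraSpecsFilter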
On Tue, Dec 15, 2015 at 1:11 PM, Satish Patel wrote:
> @chris,
>
> I already have "hw:cpu_poli
@chris,
I already have "hw:cpu_policy": "dedicated"
[root@control ~(keystone_admin)]# nova flavor-show 8
+----------+-------+
| Property | Value |
Thanks for clarifying the terminology, Chris, that’s helpful!
My two points about performance were:
- without overcommit, an instance confined in a NUMA node does not profit from
1-to-1 pinning (at least from what we saw);
- an instance spanning multiple NUMA nodes needs to be aware of the topolo
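The usual way to give a guest that awareness is the hw:numa_nodes flavor extra
spec; a minimal sketch, with a placeholder flavor name:

  nova flavor-key m1.xlarge set hw:numa_nodes=2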
Vilobh,
Thanks for advancing this important topic. I took a look at what Tim referenced
about how Nova is implementing nested quotas, and it seems to me that’s something
we could fold into our design as well. Do you agree?
Adrian
On Dec 14, 2015, at 10:22 PM, Tim Bell <tim.b...@cern.ch> wrote:
I'm using VMware Integrated OpenStack and I am trying to connect using
Python.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver
import libcloud.security

def main():
    auth_usr = 'r...@example.com'
    auth_pass = 'mypass'
    auth_url = 'https:
What was configured was pinning to a set and that is reflected, no? Or is that
not referred to as “pinning”?
Anyway, for performance we didn’t see a difference between 1:1 pinning and
confining (?) the vCPUs to a set, as long as the instance is aware of the
underlying NUMA topology.
Cheers,
Arne
Actually no, I don't think that's right. When pinning is enabled each vCPU will
be affined to a single host CPU. What is showing below is what I would expect
if the instance was using non-dedicated CPUs.
To the original poster, you should be using
'hw:cpu_policy': 'dedicated'
in your flavor
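A sketch of setting that for the flavor shown elsewhere in this thread (not
verified against that deployment):

  nova flavor-key 8 set hw:cpu_policy=dedicated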
The pinning seems to have done what you asked for, but you probably
want to confine your vCPUs to NUMA nodes.
Cheers,
Arne
> On 15 Dec 2015, at 16:12, Satish Patel wrote:
>
> Sorry forgot to reply all :)
>
> This is what i am getting
>
> [root@compute-1 ~]# virsh vcpupin instance-0043
>
Sorry forgot to reply all :)
This is what I am getting:
[root@compute-1 ~]# virsh vcpupin instance-0043
VCPU: CPU Affinity
----------------------------------
   0: 2-3,6-7
   1: 2-3,6-7
Following is the NUMA info:
[root@compute-1 ~]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 3 5 6
The pinning we set up goes indeed into the <cputune> block:
—>
  <vcpu placement='static'>32</vcpu>
  <cputune>
    <shares>32768</shares>
    …
  </cputune>
<—
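If you want to double-check on the compute node, that section can be pulled out
of the live domain XML, e.g. (instance name is a placeholder):

  virsh dumpxml <instance> | grep -A 8 '<cputune>'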
What does “virsh vcpupin <instance>” give for your instance?
Cheers,
Arne
> On 15 Dec 2015, at 13:02, Satish Patel wrote:
>
> I am running JUNO version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
> on CentOS7.1
>
I am running the Juno version with qemu-kvm-ev-2.1.2-23.el7_1.9.1.x86_64
on CentOS 7.1.
I am trying to configure CPU pinning because my application is CPU
hungry. This is what I did.
In /etc/nova/nova.conf:
vcpu_pin_set=2,3,6,7
I have created a host aggregate with pinning=true and created a flavor
with
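For completeness, the aggregate/flavor wiring described above usually looks
roughly like the following (the aggregate name is a placeholder; the host,
metadata key and flavor ID are the ones from this thread, along the lines of
the blog post linked earlier):

  nova aggregate-create performance
  nova aggregate-set-metadata performance pinning=true
  nova aggregate-add-host performance compute-1
  nova flavor-key 8 set aggregate_instance_extra_specs:pinning=true
  nova flavor-key 8 set hw:cpu_policy=dedicated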