On 01/30/2017 09:48 AM, Matthew Booth wrote:
As noted here https://bugs.launchpad.net/mos/+bug/1591084 (a MOS bug, but also
confirmed upstream and in RHOS), this is broken. By the looks of it, it has
never worked and would require architectural changes to make it work.
I'm assuming this means th
On 02/01/2017 09:49 AM, Gustavo Randich wrote:
Hi, is there any way to query via Compute API the NUMA topology of a compute
node, and free ram/cpu of each NUMA cell?
Not that I know of, but might be a useful thing for the admin to have.
Chris
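As an aside, there is no Compute API call for this that I'm aware of, but with shell
access to the compute node you can pull roughly the same information from libvirt or
numactl directly (a sketch, assuming both tools are installed on the host):

    virsh capabilities     # see the <topology><cells> section for per-cell CPUs and memory
    numactl --hardware     # total and free memory per NUMA node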
On 03/16/2017 07:06 PM, Blair Bethwaite wrote:
Statement: breaks bin packing / have to match flavor dimensions to hardware
dimensions.
Comment: neither of these rings true to me given that most operators tend to
agree that memory is the first constraining resource dimension and it is
difficult
On 03/20/2017 04:24 PM, Blair Bethwaite wrote:
For me an interesting question to know the answer to here would be at what point
you have to stop resource sharing to guarantee your performance promises/SLAs
(disregarding memory over-provisioning). My gut says that unless you are also
doing all th
On 03/23/2017 11:01 AM, Jean-Philippe Methot wrote:
So basically, my question is, how does openstack actually manage ram allocation?
Will it ever take back the unused ram of a guest process? Can I force it to take
back that ram?
I don't think nova will automatically reclaim memory.
I'm pretty
As I understand it, the behaviour *should* be that any active nova-conductor or
nova-scheduler could possibly process any outstanding work item pulled from the
RPC queue. I don't think that nova-conductor and nova-scheduler need to be
co-located.
I think you might have found a bug though...I'
Hi,
In Mitaka nova introduced the "cpu_thread_policy" which can be specified in
flavor extra-specs. In the original spec, and in the original implementation,
not specifying the thread policy in the flavor was supposed to be equivalent to
specifying a policy of "prefer", and in both cases if t
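For reference, setting the policy explicitly on a flavor looks something like this
(the flavor name is just an example; hw:cpu_thread_policy only has an effect together
with hw:cpu_policy=dedicated):

    openstack flavor set m1.pinned \
        --property hw:cpu_policy=dedicated \
        --property hw:cpu_thread_policy=prefer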
On 05/22/2017 01:55 PM, Jay Pipes wrote:
On 05/22/2017 03:53 PM, Jonathan Proulx wrote:
To be clear on my view of the whole proposal
most of my Rescheduling that I've seen and want are of type "A" where
claim exceeds resources. At least I think they are type "A" and not
"C" unknown.
The exact
On 05/25/2017 01:53 PM, Marc Heckmann wrote:
On Mon, 2017-05-15 at 11:46 -0600, Chris Friesen wrote:
What do operators think we should do? I see two options, neither of
which is
really ideal:
1) Decide that the "new" behaviour has been out in the wild long
enough to
become t
On 06/07/2017 09:20 AM, Matt Riedemann wrote:
What I'm trying to do is continue to use ssh as the scheme since that's what
devstack sets up. So I set live_migration_scheme=ssh.
Within the libvirt driver, it starts with a URL like this for qemu:
qemu+%s://%s/system
And does a string replace on
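As a concrete example, with live_migration_scheme=ssh and a hypothetical destination
host named compute-2, the substituted URI would come out as:

    qemu+ssh://compute-2/system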
On 06/14/2017 10:31 AM, Matt Riedemann wrote:
On 6/14/2017 10:57 AM, Carlos Konstanski wrote:
Is there a way to obtain nova configuration settings at runtime without
resorting to SSHing onto the compute host and grepping nova.conf? For
instance a CLI call? At the moment I'm looking at cpu_allocat
On 07/12/2017 06:57 PM, Jay Pipes wrote:
On 07/04/2017 05:21 AM, Kekane, Abhishek wrote:
Hi operators,
I want to know how evacuation of resized instances is handled in a real
environment.
For example if the vm is in resized state and if the compute host on which the
vm is resized goes down, then
On 07/27/2017 08:44 AM, Ignazio Cassano wrote:
Hello All,
Instances created from images with the following metadata:
hw_disk_bus=scsi
hw_scsi_model=virtio-scsi
do not allow attaching a new volume.
Deleting the above metadata from the image and creating a new instance allows
new volume attachment.
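For reference, those properties are normally set on the image along these lines
(the image name is just an example):

    openstack image set \
        --property hw_disk_bus=scsi \
        --property hw_scsi_model=virtio-scsi \
        centos7-image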
On 07/27/2017 12:30 PM, Ignazio Cassano wrote:
The instance boots from a volume.
It fails attaching a second volume.
A guy suggested to use hw_disk_bus=virtio and it works.
Okay...I think this is a known issue with virtio-scsi:
https://bugs.launchpad.net/nova/+bug/1702999
Chris
On 08/01/2017 08:50 AM, Kimball, Conrad wrote:
- Are other operators routinely booting onto Cinder volumes instead of ephemeral
storage?
It's up to the end-user, but yes.
- What has been your experience with this; any advice?
It works fine. With Horizon you can do it in one step (select the
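From the CLI it's typically two steps, roughly like this (names, size and flavor
are examples; network options omitted):

    openstack volume create --image centos7-image --size 40 boot-vol
    openstack server create --volume boot-vol --flavor m1.small my-server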
On 08/01/2017 02:32 PM, Mike Lowe wrote:
Two things: first, 'info' does not show how much disk is used; 'du' does. Second,
the semantics count: copy is different than clone and flatten. Clone and flatten,
which should happen if you have things working correctly, is much faster than
copy. If you are us
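To illustrate the difference being described (pool/volume names are the usual
cinder defaults and just examples):

    rbd info volumes/volume-<uuid>    # size and layering info; a clone shows a "parent:" line until it is flattened
    rbd du volumes/volume-<uuid>      # provisioned vs. actually used space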
On 10/03/2017 11:12 AM, Clint Byrum wrote:
My personal opinion is that rebuild is an anti-pattern for cloud, and
should be frozen and deprecated. It does nothing but complicate Nova
and present challenges for scaling.
That said, if it must stay as a feature, I don't think updating the
user_data
On 10/18/2017 11:37 AM, Chris Apsey wrote:
All,
I'm working to add baremetal provisioning to an already-existing libvirt (kvm)
deployment. I was under the impression that our currently-existing endpoints
that already run nova-conductor/nova-scheduler/etc. can be modified to support
both kvm and
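One relevant detail (a sketch, not a full answer): the ironic virt driver is selected
per nova-compute service, so the usual pattern is a separate nova-compute process/host
for baremetal configured along these lines, while the existing kvm computes keep
compute_driver = libvirt.LibvirtDriver:

    [DEFAULT]
    compute_driver = ironic.IronicDriver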
On 11/02/2017 08:48 AM, Mike Lowe wrote:
After moving from CentOS 7.3 to 7.4, I’ve had trouble getting live migration to
work when a volume is attached. As it turns out when a live migration takes
place the libvirt driver rewrites portions of the xml definition for the
destination hypervisor
On 10/31/2017 01:13 AM, haad wrote:
Hi,
We have an OSA installation with 10-12 compute nodes running Mitaka on Ubuntu
16.04. As we initially did not prepare any long-term update strategy, we would
like to create one now. The plan would be to upgrade to a new OSA
release (Ocata/Pike/Queens) in the near
On 01/29/2018 07:47 AM, Jay Pipes wrote:
What I believe we can do is change the behaviour so that if a 0.0 value is found
in the nova.conf file on the nova-compute worker, then instead of defaulting to
16.0, the resource tracker would first look to see if the compute node was
associated with a h
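For context, the option being discussed is the per-compute setting in nova.conf;
16.0 is the historical default and 0.0 the "unset" sentinel:

    [DEFAULT]
    cpu_allocation_ratio = 16.0    # 0.0 means "no explicit value on this node"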
On 03/07/2018 10:44 AM, Tim Bell wrote:
I think nested quotas would give the same thing, i.e. you have a parent project
for the group and child projects for the users. This would not need user/group
quotas but continue with the ‘project owns resources’ approach.
Agreed, I think that if we suppo
Are you talking about downtime of instances (and the dataplane), or of the
OpenStack API and control plane?
And when you say "zero downtime" are you really talking about "five nines" or
similar? Because nothing is truly zero downtime.
If you care about HA then you'll need additional componen
On 06/04/2018 05:43 AM, Tobias Urdin wrote:
Hello,
I have received a question about a more specialized use case where we need to
isolate several hypervisors to a specific project. My first thought was
using nova flavors for only that project and adding extra-spec properties to
use a specific host
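A rough sketch of that flavor-plus-aggregate approach (all names and the extra-spec
value are placeholders; it relies on AggregateInstanceExtraSpecsFilter being in the
scheduler's enabled filters):

    openstack aggregate create project-x-hosts
    openstack aggregate add host project-x-hosts compute-7
    openstack aggregate set --property project=x project-x-hosts
    openstack flavor set project-x-flavor \
        --property aggregate_instance_extra_specs:project=x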
On 06/13/2018 07:58 AM, Blair Bethwaite wrote:
Is the collective wisdom to use LVM based instances for these use-cases? Putting
a host filesystem with qcow2 based disk images on it can't help
performance-wise... Though we have not used LVM based instance storage before,
are there any significant
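For reference, the LVM image backend is selected per compute node in nova.conf
(the volume group name is just an example and must exist beforehand):

    [libvirt]
    images_type = lvm
    images_volume_group = nova-local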
On 08/14/2018 10:33 AM, Tobias Urdin wrote:
My goal is that we will be able to swap to Storyboard during the Stein cycle but
considering that we have a low activity on
bugs my opinion is that we could do this swap very easily anytime soon as long
as everybody is in favor of it.
Please let me k
On 08/29/2018 10:02 AM, Jay Pipes wrote:
Also, I'd love to hear from anyone in the real world who has successfully
migrated (live or otherwise) an instance that "owns" expensive hardware
(accelerators, SR-IOV PFs, GPUs or otherwise).
I thought cold migration of instances with such devices was
On 08/30/2018 11:03 AM, Jeremy Stanley wrote:
The proposal is simple: create a new openstack-discuss mailing list
to cover all the above sorts of discussion and stop using the other
four.
Do we want to merge usage and development onto one list? That could be a busy
list for someone who's jus
On 07/17/2015 07:28 AM, Daniel P. Berrange wrote:
On Fri, Jul 17, 2015 at 06:58:46AM -0600, David Medberry wrote:
HI Daniel,
Yep found that all out.
Now I'm struggling through the NUMA mismatch. NUMA as there are two cpus.
The old CPU was a 10 core 20 thread thus 40 "cpus", {0-9,20-29} and the
On 09/09/2015 10:36 AM, David Medberry wrote:
Your users should also have reasonable quotas set. If they can boot thousands of
instances, you may have a quota issue to address. (No problem with the blueprint
or need to set an overall limit though--just that you should be able to address
this with
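For reference, instance/core/RAM quotas can be tightened per project with something
like this (values are examples only):

    openstack quota set --instances 50 --cores 200 --ram 409600 <project-id>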
On 09/10/2015 08:11 AM, Matt Fischer wrote:
While I think there is probably some value in rate limiting API calls, I think
your "user wants to launch x000 instances" is extremely limited. There's maybe 1
or 2 (or 0) operators that have that amount of spare capacity just sitting
around that they c
This is a long-standing issue. Nikola has been working on it in Liberty for the
CPU pinning case, not sure about the non-pinned case. And of course patching
back to Kilo hasn't been done yet.
Aubrey, what you're seeing is definitely a bug. There is an existing bug
https://bugs.launchpad.net
On 09/25/2015 12:34 PM, Steve Gordon wrote:
Nikola's reply got bounced because he isn't subscribed, but:
"""
Thanks Steve!
So the below is likely the same root cause as this bug:
https://launchpad.net/bugs/1461777
Which has been fixed in Liberty and backported to stable/kilo (see
https://rev
Hi,
We've recently run into an issue where the notifications.info rabbitmq queue is
perpetually growing, ultimately consuming significant amounts of memory.
How do others deal with this? Do you always have a consumer draining the queue?
Chris
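Two common workarounds, both sketches with example names/values: either stop
emitting notifications if nothing consumes them, or let RabbitMQ expire them.

    # nova.conf (older releases use notification_driver in [DEFAULT] instead)
    [oslo_messaging_notifications]
    driver = noop

    # or a TTL policy on the notification queues (TTL in milliseconds)
    rabbitmqctl set_policy --apply-to queues notifications-ttl "^notifications\." '{"message-ttl":600000}'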
On 11/04/2015 08:46 AM, Saverio Proto wrote:
Hello there,
I am using cinder with rbd, and most volumes are created from glance
images on rbd as well.
Because of ceph features, these volumes are CoW and only blocks
different from the original parent image are really written.
Today I am debugging
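To see the clone relationship being described (pool names are the usual glance/cinder
defaults and just examples):

    rbd info volumes/volume-<uuid> | grep parent   # a clone points back at the glance image snapshot
    rbd children images/<image-uuid>@snap          # volumes cloned from that image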
On 03/03/2016 03:20 PM, Kris G. Lindgren wrote:
I would be curious if specifying the cpu type would actually restrict
performance. As far as I know, this only restricts the cpu features presented
to a vm. You can present a vm that has the cpu instruction sets of a Pentium 3
– but runs and is as
Hi all,
Just thought I'd mention that if anyone has been seeing odd idle/system/user
results in /proc/stats or "top" in a guest with CONFIG_VIRT_CPU_ACCOUNTING_GEN
enabled (it's automatically selected by CONFIG_NO_HZ_FULL) that it's not your
imagination or anything you did wrong.
I recently
Hi all,
There's a proposal for improving the nova scheduler logs up at
https://review.openstack.org/#/c/306647/
If you would like to be able to more easily determine why no valid host was
found, please review the proposal and leave feedback.
Thanks,
Chris
On 05/11/2016 11:46 AM, Ronald Bradford wrote:
I have been curious as to why, as mentioned in the thread, virt_type=kvm but the
os-hypervisors API call states QEMU.
Arguably in both cases the hypervisor is qemu. When virt_type=kvm we simply
enable some additional acceleration.
So rather than ask
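For context, the setting in question is per compute node in nova.conf; libvirt (and
hence the os-hypervisors API) reports the hypervisor type as QEMU either way:

    [libvirt]
    virt_type = kvm    # qemu with KVM hardware acceleration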
On 05/11/2016 01:29 PM, Robert Starmer wrote:
I don't disagree, what we're really getting at is that any lookup (ask the
system what it's using on a particular instance, look at the config, look at the
output of a nova CLI request, query via Horizon), should all return the same
answer. So one i
On 05/23/2016 08:46 PM, John Griffith wrote:
On Mon, May 23, 2016 at 8:32 AM, Ivan Kolodyazhny wrote:
Hi developers and operators,
I would like to get any feedback from you about my idea before I'll start
work on spec.
In Nova, we've got max_concurrent_
On 05/24/2016 09:54 AM, Dan Smith wrote:
I like the idea of checking the md5 matches before each boot, as it
mirrors the check we do after downloading from glance. It's possible
that's very unlikely to spot anything that shouldn't already be worried
about by something else. It may just be my love o
On 07/28/2016 02:34 AM, Marcus Furlong wrote:
Hi,
I've been trying to find some information about using nova resize on
shared storage, without giving each compute node ssh access to every
other compute node. As the VM is on shared storage, the compute node
shouldn't need ssh access to another co
On 10/26/2016 06:07 AM, William Josefsson wrote:
Hi list,
I'm facing issues on Liberty/CentOS7 doing live migrations between two
hosts. The hosts are Haswell and Broadwell. However, there is nothing
feature-specific running on my VMs
Haswell -> Broadwell works
Broadwell -> Haswell fails with the err
On 10/27/2016 11:09 PM, William Josefsson wrote:
hi, I did 'virsh capabilities' on the Haswell, which turned out to
list model: Haswell-noTSX. So I set in nova.conf
cpu_model=Haswell-noTSX on both Haswell and Broadwell hosts and it
seems to work. I believe this is my smallest common denominator.
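For reference, that ends up looking like this in nova.conf on both hosts (cpu_model
only takes effect with cpu_mode=custom):

    [libvirt]
    cpu_mode = custom
    cpu_model = Haswell-noTSX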
Hi,
The OpenStack high availability guide seems to be a bit ambiguous about
whether RabbitMQ should be configured active/standby or
active/active...both methods are described.
Has anyone tried using active/active with mirrored queues as recommended
by the RabbitMQ developers? If so, what pr
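For reference, the mirrored-queue setup the RabbitMQ docs describe is usually applied
with a policy along these lines (policy name and pattern are examples):

    rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode":"all","ha-sync-mode":"automatic"}'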
On 09/11/2014 01:50 PM, James Dempsey wrote:
On 12/09/14 04:15, Chris Friesen wrote:
Hi,
The OpenStack high availability guide seems to be a bit ambiguous about
whether RabbitMQ should be configured active/standby or
active/active...both methods are described.
Has anyone tried using active
On 09/24/2014 02:43 PM, Scott Kohler wrote:
On 09/24/2014 04:29 PM, Abel Lopez wrote:
This is expected behavior, unfortunately.
I spoke to the ceph guys about this last year. When you delete an ‘image’ from
a pool, the monitors (IIRC) don’t instantly know where all the segments are
across all
On 03/19/2015 10:33 AM, Fox, Kevin M wrote:
We're running it both ways. We have clouds with dedicated storage nodes, and
clouds sharing storage/compute.
The storage/compute solution with ceph is working ok for us. But, that
particular cloud is 1gigabit only and seems very slow compared to our ot
On 03/29/2015 09:26 PM, Mike Dorman wrote:
Hi all,
I’m curious about how people deal with failures of compute nodes, as in total
failure when the box is gone for good. (Mainly care about KVM HV, but also
interested in more general cases as well.)
The particular situation we’re looking at: how
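For the "gone for good" case, the usual building block is evacuate, which rebuilds the
instance on another host once the dead node has been fenced/confirmed down (a sketch;
the placeholders are obvious):

    nova evacuate <server-uuid> <target-host>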
On 03/30/2015 02:47 PM, Jay Pipes wrote:
On 03/30/2015 10:42 AM, Chris Friesen wrote:
On 03/29/2015 09:26 PM, Mike Dorman wrote:
Hi all,
I’m curious about how people deal with failures of compute nodes,
as in total failure when the box is gone for good. (Mainly care
about KVM HV, but also
On 03/30/2015 04:57 PM, Jay Pipes wrote:
On 03/30/2015 06:42 PM, Chris Friesen wrote:
On 03/30/2015 02:47 PM, Jay Pipes wrote:
On 03/30/2015 10:42 AM, Chris Friesen wrote:
On 03/29/2015 09:26 PM, Mike Dorman wrote:
Hi all,
I’m curious about how people deal with failures of compute
nodes, as
On 03/30/2015 09:53 PM, Jay Pipes wrote:
On 03/30/2015 07:30 PM, Chris Friesen wrote:
On 03/30/2015 04:57 PM, Jay Pipes wrote:
On 03/30/2015 06:42 PM, Chris Friesen wrote:
On 03/30/2015 02:47 PM, Jay Pipes wrote:
On 03/30/2015 10:42 AM, Chris Friesen wrote:
On 03/29/2015 09:26 PM, Mike