Is there a straightforward way to determine where the time is going when
I run a command from novaclient?
For instance, if I run "nova list", that's going to run novaclient,
which will send a message to nova-api, which wakes up and does some
processing and sends a message to nova-conductor, wh
On 08/20/2014 07:21 AM, Jay Pipes wrote:
Hi Thierry, thanks for the reply. Comments inline. :)
On 08/20/2014 06:32 AM, Thierry Carrez wrote:
If we want to follow your model, we probably would have to dissolve
programs as they stand right now, and have blessed categories on one
side, and teams o
On 08/20/2014 09:54 PM, Clint Byrum wrote:
Excerpts from Jay Pipes's message of 2014-08-20 14:53:22 -0700:
On 08/20/2014 05:06 PM, Chris Friesen wrote:
On 08/20/2014 07:21 AM, Jay Pipes wrote:
Hi Thierry, thanks for the reply. Comments inline. :)
On 08/20/2014 06:32 AM, Thierry Carrez
On 08/25/2014 11:25 AM, Joe Cropper wrote:
I was thinking something simple such as only allowing the add
operation to succeed IFF no policies are found to be in violation...
and then nova wouldn't need to get into all the complexities you
mention?
Personally I would be in favour of this...nothi
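[Editor's sketch] The "succeed only if no policy is violated" check Joe describes could look roughly like the following toy Python; the function name, policy strings, and arguments are all illustrative, not Nova's actual API:

```python
def add_to_group(instance_host, group_policy, group_hosts):
    """Toy check for adding a running instance to a server group:
    the add succeeds only if the group's policy would still hold
    afterwards (the IFF condition from the thread)."""
    if group_policy == "anti-affinity":
        # every member must be on a distinct host
        ok = instance_host not in group_hosts
    elif group_policy == "affinity":
        # every member must share one host (trivially true for an
        # empty group)
        ok = not group_hosts or instance_host in group_hosts
    else:
        raise ValueError("unknown policy: %s" % group_policy)
    return ok
```

This sidesteps the complexities mentioned (migrating members to satisfy the policy) by simply rejecting any add that would violate it.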
On 08/28/2014 01:44 PM, Jay Pipes wrote:
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and no negative reviews is
dropped because of what looks like indifference.
I would posit
On 08/28/2014 02:25 PM, Jay Pipes wrote:
On 08/28/2014 04:05 PM, Chris Friesen wrote:
On 08/28/2014 01:44 PM, Jay Pipes wrote:
On 08/27/2014 09:04 PM, Dugger, Donald D wrote:
I understand that reviews are a burden and very hard but it seems wrong
that a BP with multiple positive reviews and
On 08/28/2014 03:02 PM, Jay Pipes wrote:
I understand your frustration about the silence, but the silence from
core team members may actually be a loud statement about where their
priorities are.
Or it could be that they haven't looked at it, aren't aware of it, or
haven't been paying attenti
On 08/28/2014 04:01 PM, Joe Gordon wrote:
On Thu, Aug 28, 2014 at 2:43 PM, Alan Kavanagh
mailto:alan.kavan...@ericsson.com>> wrote:
I share Donald's points here, I believe what would help is to
clearly describe in the Wiki the process and workflow for the BP
approval process and b
On 09/05/2014 03:52 AM, Daniel P. Berrange wrote:
So my biggest fear with a model where each team had their own full
Nova tree and did large pull requests, is that we'd suffer major
pain during the merging of large pull requests, especially if any
of the merges touched common code. It could mak
Hi,
I see that the OpenStack high availability guide is still recommending
the active/standby method of configuring RabbitMQ.
Has anyone tried using active/active with mirrored queues as recommended
by the RabbitMQ developers? If so, what problems did you run into?
Thanks,
Chris
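[Editor's sketch] The active/active setup the RabbitMQ developers recommend is enabled by declaring an HA policy on the cluster. A minimal sketch of the policy payload, assuming the management-plugin HTTP API is available (endpoint and vhost shown are assumptions):

```python
import json

def build_ha_policy(pattern="^", ha_mode="all"):
    """Build the JSON body for a RabbitMQ HA (mirrored-queue) policy.

    Equivalent in effect to:
      rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
    which mirrors every queue whose name matches `pattern` across
    all nodes in the cluster.
    """
    return json.dumps({
        "pattern": pattern,
        "definition": {"ha-mode": ha_mode},
        "apply-to": "queues",
    })

# The body would be PUT to /api/policies/%2F/ha-all on the
# management plugin (URL assumed; adjust the vhost as needed).
body = build_ha_policy()
```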
Hi,
I'm running Havana and I'm seeing some less-than-ideal behaviour on rabbitmq
failover. I'd like to figure out if this is expected behaviour or if something
is going wrong.
We're running rabbitmq in active/standby mode with DRBD storage. On the
controller the timeline looks like this:
07
On 09/10/2014 02:13 PM, Chris Friesen wrote:
As it stands, it seems that waiting for the RPC call to time out blocks
_report_state() from running again in report_interval seconds, which delays the
service update until the RPC timeout period expires.
Just noticed something...
In the case of
On 09/10/2014 02:44 AM, Daniel P. Berrange wrote:
On Tue, Sep 09, 2014 at 05:14:43PM -0700, Stefano Maffulli wrote:
I have the impression this idea has been circling around for a while but
for some reason or another (like lack of capabilities in gerrit and
other reasons) we never tried to impl
On 09/10/2014 04:11 PM, Jay Pipes wrote:
On 09/10/2014 05:55 PM, Chris Friesen wrote:
If each hypervisor team mostly only modifies their own code, why would
there be conflict?
As I see it, the only causes for conflict would be in the shared code,
and you'd still need to sort out the i
On 09/10/2014 04:16 PM, Russell Bryant wrote:
On Sep 10, 2014, at 2:03 PM, Joe Cropper
wrote:
I would like to craft up a blueprint proposal for Kilo to add two
simple extensions to the existing server group APIs that I believe
will make them infinitely more usable in any ‘real world’ scenari
On 09/11/2014 12:50 AM, Jesse Pretorius wrote:
On 10 September 2014 17:20, Chris Friesen mailto:chris.frie...@windriver.com>> wrote:
I see that the OpenStack high availability guide is still
recommending the active/standby method of configuring RabbitMQ.
Has anyone tried
On 09/11/2014 12:02 PM, Dan Prince wrote:
Maybe I'm impatient (I totally am!) but I see much of the review
slowdown as a result of the feedback loop times increasing over the
years. OpenStack has some really great CI and testing but I think our
focus on not breaking things actually has us painte
On 09/11/2014 03:01 PM, Jay Pipes wrote:
On 09/11/2014 04:51 PM, Matt Riedemann wrote:
On 9/10/2014 6:00 PM, Russell Bryant wrote:
On 09/10/2014 06:46 PM, Joe Cropper wrote:
Hmm, not sure I follow the concern, Russell. How is that any different
from putting a VM into the group when it’s boote
On 09/11/2014 04:22 PM, Joe Cropper wrote:
I would be a little wary about the DB level locking for stuff like that
— it’s certainly doable, but also comes at the expense of things
behaving ever-so-slightly different from DBMS to DBMS. Perhaps there
are multiple “logical efforts” here—i.e., addin
On 09/12/2014 04:59 PM, Joe Gordon wrote:
On Thu, Sep 11, 2014 at 2:18 AM, Daniel P. Berrange mailto:berra...@redhat.com>> wrote:
FYI, for Juno at least I really don't consider that even the libvirt
driver got acceptable review times in any sense. The pain of waiting
for reviews
Hi,
I'm running Havana, and I just tried a testcase involving doing six
simultaneous live-migrations.
It appears that the migrations succeeded, but two of the instances got stuck
with a status of "MIGRATING" because of RPC timeouts:
2014-09-16 20:35:07.376 12493 INFO nova.notifier [-] processi
On 11/8/2018 5:30 AM, Rambo wrote:
When I resize the instance, the compute node report that
"libvirtError: internal error: qemu unexpectedly closed the monitor:
2018-11-08T09:42:04.695681Z qemu-kvm: cannot set up guest memory
'pc.ram': Cannot allocate memory".Has anyone seen this situation?
On 11/18/2013 06:47 PM, Joshua Harlow wrote:
An idea related to this, what would need to be done to make the DB have
the exact state that a compute node is going through (and therefore the
scheduler would not make unreliable/racey decisions, even when there are
multiple schedulers). It's not like
On 11/19/2013 12:35 PM, Clint Byrum wrote:
Each scheduler process can own a different set of resources. If they
each grab instance requests in a round-robin fashion, then they will
fill their resources up in a relatively well balanced way until one
scheduler's resources are exhausted. At that ti
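[Editor's sketch] The scheme Clint describes — each scheduler process owning a disjoint slice of resources and taking requests round-robin — can be modelled as a toy in Python (all class and function names here are illustrative, not Nova code):

```python
import itertools

class Scheduler:
    """Toy model of one scheduler process owning a private set of hosts."""
    def __init__(self, name, hosts):
        self.name = name
        # free slots per owned host; private, so no shared-state races
        self.capacity = dict(hosts)
        self.placed = []

    def try_place(self, instance):
        for host, free in self.capacity.items():
            if free > 0:
                self.capacity[host] -= 1
                self.placed.append((instance, host))
                return True
        return False  # this scheduler's resources are exhausted

def dispatch_round_robin(schedulers, instances):
    """Hand requests to schedulers in turn, skipping exhausted ones."""
    ring = itertools.cycle(schedulers)
    unplaced = []
    for inst in instances:
        for _ in range(len(schedulers)):
            if next(ring).try_place(inst):
                break
        else:
            unplaced.append(inst)
    return unplaced
```

With equal capacities the load stays balanced until every scheduler's slice is exhausted, which is the point at which the thread's discussion picks up.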
On 11/19/2013 12:27 PM, Joshua Harlow wrote:
Personally I would prefer #3 from the below. #2 I think will still have to
deal with consistency issues, just switching away from a DB doesn't make
magical ponies and unicorns appear (in-fact it can potentially make the
problem worse if its done incorr
On 11/19/2013 01:51 PM, Clint Byrum wrote:
Excerpts from Chris Friesen's message of 2013-11-19 11:37:02 -0800:
On 11/19/2013 12:35 PM, Clint Byrum wrote:
Each scheduler process can own a different set of resources. If they
each grab instance requests in a round-robin fashion, then they will
fi
On 11/20/2013 10:06 AM, Soren Hansen wrote:
2013/11/18 Mike Spreitzer :
There were some concerns expressed at the summit about scheduler
scalability in Nova, and a little recollection of Boris' proposal to
keep the needed state in memory.
I also heard one guy say that he thinks Nova does not
On 11/21/2013 10:52 AM, Stephen Gran wrote:
On 21/11/13 15:49, Chris Friesen wrote:
On 11/21/2013 02:58 AM, Soren Hansen wrote:
2013/11/20 Chris Friesen :
What about a hybrid solution?
There is data that is only used by the scheduler--for performance
reasons
maybe it would make sense to store
On 11/21/2013 02:58 AM, Soren Hansen wrote:
2013/11/20 Chris Friesen :
What about a hybrid solution?
There is data that is only used by the scheduler--for performance reasons
maybe it would make sense to store that information in RAM as described at
https://blueprints.launchpad.net/nova/+spec
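[Editor's sketch] The hybrid Chris proposes — scheduler-only data held in RAM, durable data still written to the database — might look roughly like this toy (the class and field names are invented for illustration; a plain dict stands in for the database):

```python
class HybridHostState:
    """Toy hybrid store: hot, scheduler-only counters live in RAM and
    are rebuilt from compute-node updates after a restart; placement
    decisions that must survive are written through to a backing store."""
    def __init__(self, db):
        self.db = db          # durable: survives a scheduler restart
        self.free_ram = {}    # volatile: per-host scheduling state

    def update_from_compute(self, host, free_ram_mb):
        # fast path: no DB round trip for per-host state
        self.free_ram[host] = free_ram_mb

    def claim(self, host, ram_mb, instance_id):
        if self.free_ram.get(host, 0) < ram_mb:
            return False
        self.free_ram[host] -= ram_mb
        # durable record of the placement decision
        self.db[instance_id] = host
        return True
```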
On 11/22/2013 02:29 PM, Russell Bryant wrote:
I honestly don't understand why openstack@ and openstack-operators@ are
different lists. Perhaps openstack@ just needs better use of topic
tagging ...
Wouldn't openstack@ be the logical place for end-users to hang out,
while openstack-operators@
On 11/26/2013 07:48 PM, Vijay Venkatachalam wrote:
Hi,
Is there a way for the VM to identify that it is
getting booted in OpenStack?
As said in the below mail, once the VM knows it is
booting in OpenStack it will alter the boot sequence.
What does "getting bo
On 11/27/2013 11:50 AM, Zane Bitter wrote:
Even better would be if we had the keystone domain (instead of the
tenant id) incorporated into the endpoint in the keystone catalog and
then we could use the tenant^W project *name* in the URL and users would
never have to deal with UUIDs and invisible
Hi,
Just wondering why nova commit 52f6981 ("Evacuated instance disk not
deleted") wasn't backported to grizzly?
The symptoms of this bug are that if you evacuate a server off a compute
node that uses local storage then you can never move it back to that
compute node because the old files ar
On 11/28/2013 09:50 AM, Gary Kotton wrote:
One option worth thinking about is to introduce a new scheduling driver to
nova - this driver will interface with the external scheduler. This will
let us define the scheduling API, model etc, without being in the current
confines of Nova. This will als
Hi,
We're currently running Grizzly (going to Havana soon) and we're running
into an issue where if the active controller is ungracefully killed then
nova-compute on the compute node doesn't properly connect to the new
rabbitmq server on the newly-active controller node.
I saw a bugfix in Fo
On 11/29/2013 06:37 PM, David Koo wrote:
On Nov 29, 02:22:17 PM (Friday), Chris Friesen wrote:
We're currently running Grizzly (going to Havana soon) and we're
running into an issue where if the active controller is ungracefully
killed then nova-compute on the compute node doesn&
On 12/02/2013 02:31 PM, Vishvananda Ishaya wrote:
I'm going to reopen a can of worms, though. I think the most difficult part of
the forklift will be moving stuff out of the existing databases into
a new database.
Do we really need to move it to a new database for the forklift?
Chris
On 12/03/2013 04:08 AM, Daniel P. Berrange wrote:
On Tue, Dec 03, 2013 at 01:47:31AM -0800, Gary Kotton wrote:
Hi,
I think that this information should be used as part of the scheduling
decision, that is hosts that are to be selected should be excluded if they
do not have the necessary resources
Hi,
We've got a grizzly setup using quantum networking and libvirt/kvm with
VIR_MIGRATE_LIVE set.
I was live-migrating an instance back and forth between a couple of
compute nodes. It worked fine for maybe half a dozen migrations and
then after a migration I could no longer ping it.
It ap
On 12/12/2013 11:02 AM, Clint Byrum wrote:
So I'm asking, is there a standard way to determine whether or not a
nova-compute is definitely ready to have things scheduled on it? This
can be via an API, or even by observing something on the nova-compute
host itself. I just need a definitive signal
When we create a new instance via _build_instance() or
_build_and_run_instance(), in both cases we call instance_claim() to
reserve and test for resources.
During a cold migration I see us calling prep_resize() which calls
resize_claim().
How come we don't need to do something like this when
On 12/26/2013 01:56 AM, cosmos cosmos wrote:
Hello.
My name is Rucia, from Samsung SDS.
I ran into trouble with volume deleting.
I am developing support for big data storage such as Hadoop on LVM.
Deleting a Cinder LVM volume uses full disk I/O because of dd;
the high disk I/O affec
On 01/15/2014 06:00 PM, Fox, Kevin M wrote:
What about a configuration option on the volume for delete type? I can see some
possible options:
* None - Don't clear on delete. Its junk data for testing and I don't want to
wait.
* Zero - Return zero's from subsequent reads either by zeroing on de
On 01/15/2014 06:30 PM, Jay S Bryant wrote:
There is already an option that can be set in cinder.conf using
'volume_clear=none'
Is there a reason that that option is not sufficient?
That option would be for the cloud operator and since it would apply to
all volumes on that cinder node.
My i
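[Editor's note] For reference, the node-wide behaviour being discussed is controlled in cinder.conf; these are real Cinder options, but the values shown are only an example, and as the thread notes they apply to every volume on the node rather than per volume:

```ini
# cinder.conf -- applies to all volumes on this cinder node
[DEFAULT]
volume_clear = none        # or: zero, shred
volume_clear_size = 0      # MiB to clear on delete; 0 = whole volume
```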
On 01/15/2014 11:25 PM, Clint Byrum wrote:
Excerpts from Alan Kavanagh's message of 2014-01-15 19:11:03 -0800:
Hi Paul
I posted a query to Ironic which is related to this discussion. My thinking was I want to
ensure the case you note here (1) " a tenant can not read another tenants
disk..
Hi,
I'm trying to figure out how resource tracking is intended to work for
live migration and evacuation.
For a while I thought that maybe we were relying on the call to
ComputeManager._instance_update() in
ComputeManager.post_live_migration_at_destination(). However, in
ResourceTracker.upd
On 01/16/2014 04:22 PM, Clint Byrum wrote:
Excerpts from Fox, Kevin M's message of 2014-01-16 09:29:14 -0800:
Yeah, I think the evil firmware issue is separate and should be solved
separately.
Ideally, there should be a mode you can set the bare metal server into where
firmware updates are no
On 01/16/2014 05:12 PM, CARVER, PAUL wrote:
Jumping back to an earlier part of the discussion, it occurs to me
that this has broader implications. There's some discussion going on
under the heading of Neutron with regard to PCI passthrough. I
imagine it's under Neutron because of a desire to pro
On 01/17/2014 04:20 PM, Devananda van der Veen wrote:
tl;dr, We should not be recycling bare metal nodes between untrusted
tenants at this time. There's a broader discussion about firmware
security going on, which, I think, will take a while for the hardware
vendors to really address.
What can
On 01/24/2014 08:33 AM, CARVER, PAUL wrote:
I agree that I’d like to see a set of use cases for this. This is the
second time in as many days that I’ve heard about a desire to have such
a thing but I still don’t think I understand any use cases adequately.
In the physical world it makes perfect
On 01/28/2014 10:55 AM, Jani, Nrupal wrote:
While technically it is possible, we as a team can decide
about the final recommendation. Given that VFs are going to be used for
the high-performance VMs, mixing VMs with virtio & VFs may not be a good
option. Initially we can use PF interface for the
Has anyone ever considered adding the concept of transaction IDs to the
openstack REST API?
I'm envisioning a way to handle long-running transactions more cleanly.
For example:
1) A user sends a request to live-migrate an instance
2) Openstack acks the request and includes a "transaction I
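[Editor's sketch] The proposed flow — ack a long-running request immediately with a transaction ID, then let the caller poll that ID — can be sketched as a toy in Python (class and method names are invented for illustration):

```python
import uuid

class TransactionTracker:
    """Toy sketch of the proposal: the API acks a long-running request
    with a transaction ID right away, and the caller polls that ID for
    progress instead of watching the instance's status field."""
    def __init__(self):
        self._txns = {}

    def submit(self, operation):
        txn_id = str(uuid.uuid4())
        self._txns[txn_id] = {"operation": operation, "state": "pending"}
        return txn_id  # returned in the initial 202-style ack

    def advance(self, txn_id, state):
        # in practice this would be driven by RPC callbacks from the
        # services actually doing the work
        self._txns[txn_id]["state"] = state

    def status(self, txn_id):
        return self._txns[txn_id]["state"]
```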
On 02/03/2014 12:28 PM, Khanh-Toan Tran wrote:
Another though would be the need for Instance Group API [1].
Currently users can only request multiple instances of the same
flavors. These requests do not need LP to solve, just placing
instances one by one is sufficient. Therefore we need this API
On 02/03/2014 01:31 PM, Andrew Laski wrote:
On 02/03/14 at 01:10pm, Chris Friesen wrote:
Has anyone ever considered adding the concept of transaction IDs to
the openstack REST API?
I'm envisioning a way to handle long-running transactions more
cleanly. For example:
1) A user sends a re
On 02/10/2014 10:54 AM, Khanh-Toan Tran wrote:
Heat
may orchestrate the provisioning process, but eventually the instances will be
passed to Nova-scheduler (Gantt) as separated commands, which is exactly the
problem Solver Scheduler wants to correct. Therefore the Instance Group API is
needed, w
On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:
Second, there is nothing wrong with booting the instances (or
instantiating other
resources) as separate commands as long as we support some kind of
reservation token.
I'm not sure what reservation token would do, is it some kind of informing
the
On 02/20/2014 11:38 AM, Matt Riedemann wrote:
On 2/19/2014 4:05 PM, Matt Riedemann wrote:
The os-hosts OS API extension [1] showed up before I was working on the
project and I see that only the VMware and XenAPI drivers implement it,
but was wondering why the libvirt driver doesn't - either no
I'm looking at the live migration rollback code and I'm a bit confused.
When setting up a live migration we unconditionally run
ComputeManager.pre_live_migration() on the destination host to do
various things including setting up networks on the host.
If something goes wrong with the live mig
On 02/24/2014 04:01 PM, Morgan Fainberg wrote:
TL;DR, “don’t break the contract”. If we are seriously making
incompatible changes (and we will be regardless of the direction) the
only reasonable option is a new major version.
Agreed. I don't think we can possibly consider making
backwards-in
On 02/24/2014 04:59 PM, Sean Dague wrote:
So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't fit in the incremental nature of the project.
Does it necessarily need to be that way though? Mayb
On 02/24/2014 05:17 PM, Sean Dague wrote:
On 02/24/2014 06:13 PM, Chris Friesen wrote:
On 02/24/2014 04:59 PM, Sean Dague wrote:
So, that begs a new approach. Because I think at this point even if we
did put out Nova v3, there can never be a v4. It's too much, too big,
and doesn't
On 02/25/2014 05:15 AM, John Garbutt wrote:
On 24 February 2014 22:14, Chris Friesen wrote:
What happens if we have a shared-storage instance that we try to migrate and
fail and end up rolling back? Are we going to end up with messed-up
networking on the destination host because we never
I'm in the process of putting together a bug report and a patch for
properly handling resource tracking on live migration.
The change involves code that will run on the destination compute node
in order to properly account for the resources that the instance to be
migrated will consume.
Te
On 02/26/2014 04:50 PM, Dan Smith wrote:
So if we make backwards incompatible changes we really need a major
version bump. Minor versions don't cut it, because the expectation is
you have API stability within a major version.
I disagree. If the client declares support for it, I think we can ver
On 02/27/2014 08:43 AM, Dan Smith wrote:
So I think once we start returning different response codes, or
completely different structures (such as the tasks change will be), it
doesn't matter if we make the change in effect by invoking /v2 prefix
or /v3 prefix or we look for a header. Its a major
On 02/27/2014 06:00 PM, Alex Xu wrote:
Does that mean our code would look like the below?
if client_version > 2:
    ...
elif client_version > 3:
    ...
elif client_version > 4:
    ...
elif client_version > 5:
    ...
elif client_version > 6:
    ...
And we need to test each version... That looks bad...
I d
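[Editor's sketch] One way to avoid the ever-growing if/elif ladder quoted above is a dispatch table keyed by the minimum version each handler supports, so adding a version adds one entry rather than another branch. This is purely illustrative, not Nova's microversion machinery:

```python
import bisect

class VersionedMethod:
    """Map a client version to the newest handler whose minimum
    version it satisfies."""
    def __init__(self):
        self._versions = []   # sorted minimum versions
        self._handlers = {}

    def register(self, min_version, func):
        bisect.insort(self._versions, min_version)
        self._handlers[min_version] = func

    def dispatch(self, client_version, *args):
        # index of the largest registered version <= client_version
        idx = bisect.bisect_right(self._versions, client_version) - 1
        if idx < 0:
            raise ValueError("version %s unsupported" % client_version)
        return self._handlers[self._versions[idx]](*args)

show = VersionedMethod()
show.register(2, lambda: "v2 response")
show.register(4, lambda: "v4 response")
```

A v3 client falls through to the v2 handler; a v5 client gets the v4 one; anything below the oldest registered version is rejected.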
Hi,
I've been working with OpenStack for a while now but I'm still a bit
fuzzy on the precise meaning of some of the terminology.
It seems reasonably clear that a "node" is a computer running at least
one component of an Openstack system.
However, "nova service-list" talks about the "host"
On 02/28/2014 11:38 AM, Jiang, Yunhong wrote:
One reason of the confusion is, in some virt driver (maybe xenapi or
vmwareapi), one compute service manages multiple node.
Okay, so in the scenario above, is the nova-compute service running on a
"node" or a "host"? (And if it's a "host", then wha
On 03/03/2014 08:14 AM, Steve Gordon wrote:
I would be interested in your opinion on the impact of a V2
version release which had backwards incompatibility in only one
area - and that is input validation. So only apps/SDKs which are
currently misusing the API (I think the most common problem wou
How do I regenerate the doc/api_samples tests if I change the corresponding
template?
The instructions in nova/tests/functional/api_samples/README.rst say to run
"GENERATE_SAMPLES=True tox -epy27 nova.tests.unit.integrated", but that path
doesn't exist anymore.
I suspect the instructions sho
Hi,
I'm interested in the recent work around NUMA support for guest instances
(https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement), but
I'm having some difficulty figuring out what versions of libvirt and qemu are
required.
From the research that I've done it seems like q
I'm looking to add support for server groups to heat. I've got working
code, but I thought I'd post the overall design here in case people had
objections.
Basically, what I propose is to add a "class NovaServerGroup" resource.
Currently it would only support a "policy" property to store th
On 04/25/2014 11:01 AM, Mike Spreitzer wrote:
Zane Bitter wrote on 04/25/2014 12:36:00 PM:
> On 25/04/14 12:23, Chris Friesen wrote:
More important is Zane's following question.
> > The "Server" class would be extended with an optional "server_group"
&g
On 04/25/2014 12:00 PM, Zane Bitter wrote:
On 25/04/14 13:50, Chris Friesen wrote:
In the nova boot command we pass the group uuid like this:
--hint group=e4cf5dea-4831-49a1-867d-e263f2579dd0
If we were to make use of the scheduler hints, how would that look?
Something like this? (I'
On 04/26/2014 09:41 PM, Jay Lau wrote:
Just noticed this email, I have already filed a blueprint related to
this topic
https://blueprints.launchpad.net/heat/+spec/vm-instance-group-support
My idea is that can we add a new field such as "PlacemenetPolicy" to
AutoScalingGroup? If the value is affi
On 04/25/2014 03:15 PM, Jay Pipes wrote:
There are myriad problems with the above user experience and
implementation. Let me explain them.
1. The user isn't creating a "server group" when they issue a nova
server-group-create call. They are creating a policy and calling it a
group. Cognitive di
On 04/28/2014 06:58 AM, Steve Gordon wrote:
- Original Message -
Create two new options to nova boot:
--near-tag and --not-near-tag
The first would tell the scheduler to place the new VM near other
VMs having a particular "tag". The latter would tell the scheduler
to place the new
On 04/28/2014 11:22 AM, Dan Smith wrote:
2. There's no way to add an existing server to this "group".
In the original API there was a way to add existing servers to the
group. This didn't make it into the code that was submitted. It is
however supported by the instance group db API in nova.
On 04/30/2014 03:41 PM, Mike Spreitzer wrote:
Chris Friesen wrote on 04/28/2014 10:44:46 AM:
> Using a property of a heat resource
> to trigger the creation of a nova resource would not fit that model.
For the sake of your argument, let's pretend that the new ASG blueprint
has
On 05/05/2014 10:51 AM, Steve Gordon wrote:
In addition extra specifications may denote the passthrough of additional
devices, adding another dimension. This seems likely to be the case in the use
case outlined in the original thread [1].
Thanks,
Steve
[1] http://lists.openstack.org/piperma
On 05/05/2014 11:40 AM, Solly Ross wrote:
One thing that I was discussing with @jaypipes and @dansmith over
on IRC was the possibility of breaking flavors down into separate
components -- i.e have a disk flavor, a CPU flavor, and a RAM flavor.
This way, you still get the control of the size of yo
On 05/05/2014 12:18 PM, Chris Friesen wrote:
As a simplifying view you could keep the existing flavors which group
all of them, while still allowing instances to specify each one
separately if desired.
Also, if we're allowing the cpu/memory/disk to be specified
independently at instance
On 06/03/2014 07:29 AM, Jay Pipes wrote:
Hi Stackers,
tl;dr
=
Move CPU and RAM allocation ratio definition out of the Nova scheduler
and into the resource tracker. Remove the calculations for overcommit
out of the core_filter and ram_filter scheduler pieces.
Makes sense to me.
Chris
On 06/07/2014 12:30 AM, Joe Cropper wrote:
Hi Folks,
I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
"extra_specs" concept?
It appears there are entries for things like pci_stats, stats and
recently added extra
On 06/09/2014 07:59 AM, Jay Pipes wrote:
On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:
Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.
Pray tell, what is that valid use case?
I find it useful
On 06/12/2014 09:36 AM, Sean Dague wrote:
This is what I mean by functional testing. If we were directly hitting a
real database on a set of in tree project tests, I think you could
discover issues like this. Neutron was headed down that path.
But if we're talking about a devstack / tempest run
Hi,
I'm looking for the community viewpoint on whether there is any chance
of license contamination between mysql and nova. I realize that lawyers
would need to be involved for a proper ruling, but I'm curious about the
view of the developers on the list.
Suppose someone creates a modified
In nova/virt/libvirt/driver.py the _compare_cpu() function checks the
cpu features using self._conn.compareCPU().
Is this actually valid? The kvm processes don't seem to have the
"--cpu" option specified, so we should get a compatible subset of cpu
features from qemu. If that's the case then
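[Editor's sketch] Conceptually, what the compareCPU check decides is whether the features the guest CPU relies on are a subset of what the destination provides. A toy stand-in for that subset test (this is not the libvirt API, just an illustration of the concern in the snippet):

```python
def cpu_compatible(required_features, dest_features):
    """Toy version of the check: the guest is migratable only if every
    CPU feature it relies on is available on the destination host."""
    missing = set(required_features) - set(dest_features)
    return (len(missing) == 0, sorted(missing))

ok, missing = cpu_compatible({"sse4.2", "aes"}, {"sse4.2", "aes", "avx"})
```

The question in the thread is effectively which feature set to use as `required_features`: the host's full set, or the compatible subset qemu actually exposes to the guest.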
On 06/12/2014 01:30 PM, Mike Bayer wrote:
the GPL is excepted in the case of MySQL and other MySQL products
released by Oracle (can you imagine such a sentence being
written.), see
http://www.mysql.com/about/legal/licensing/foss-exception/.
Okay, good start. mysql itself is out of the pic
On 06/16/2014 03:33 AM, Thierry Carrez wrote:
David Kranz wrote:
[...]
There is a different way to do this. We could adopt the same methodology
we have now around gating, but applied to each project on its own
branch. These project branches would be integrated into master at some
frequency or wh
Hi,
I was just testing the current icehouse code and came across some
behaviour that looked suspicious.
I have two nodes, an all-in-one and a compute node. I was not using
shared instance storage.
I created a volume from an image and then booted an instance from the
volume. Once the imag
Hi,
I'm using havana and recent we ran into an issue with heat related to
character sets.
In heat/db/sqlalchemy/api.py in user_creds_get() we call
_decrypt() on an encrypted password stored in the database and then try
to convert the result to unicode. Today we hit a case where this
errore
On 03/08/2014 02:23 AM, ChangBo Guo wrote:
Are you using libvirt driver ?
As I remember, the way to check if compute nodes with shared storage
is : create a temporary file from source node , then check the file
from dest node , by accessing file system from operating system level.
And boot
On 03/10/2014 02:02 PM, Ben Nemec wrote:
We just had a discussion about this in #openstack-oslo too. See the
discussion starting at 2014-03-10T16:32:26
http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2014-03-10.log
In that discussion dhellmann said, "I wonder if we
On 03/10/2014 02:58 PM, Jay Pipes wrote:
On Mon, 2014-03-10 at 16:30 -0400, Shawn Hartsock wrote:
While I understand the general argument about pets versus cattle. The
question is, would you be willing to poke a few holes in the strict
"cattle" abstraction for the sake of pragmatism. Few shops a
On 03/11/2014 05:50 PM, Clint Byrum wrote:
But MySQL can't possibly know what you _meant_ when you were inserting
data. So, if you _assumed_ that the database was UTF-8, and inserted
UTF-8 with all of those things accidentally set for latin1, then you
will have UTF-8 in your db, but MySQL will t
Are there any tools that people can recommend for live debugging of
openstack services?
I'm looking for a mechanism where I could take a running system that
isn't behaving the way I expect and somehow poke around inside the
program while it keeps running. (Sort of like tracepoints in gdb.)
Hi,
I'm trying to run InstanceGroup.get_hosts() on a havana installation
that uses postgres. When I run the code, I get the following error:
RemoteError: Remote error: ProgrammingError (ProgrammingError) operator
does not exist: timestamp without time zone ~ unknown
2014-03-14 09:58:57.193
Hi,
I'm curious why the specified git commit chose to fix the anti-affinity
race condition by aborting the boot and triggering a reschedule.
It seems to me that it would have been more elegant for the scheduler to
do a database transaction that would atomically check that the chosen
host was
On 03/15/2014 04:29 AM, Sean Dague wrote:
On 03/15/2014 02:49 AM, Chris Friesen wrote:
Hi,
I'm trying to run InstanceGroup.get_hosts() on a havana installation
that uses postgres. When I run the code, I get the following error:
RemoteError: Remote error: ProgrammingError (Programming