ince they had been documented here in the official ceilometer
> measurements
> page<http://docs.openstack.org/developer/ceilometer/measurements.html>.
> Thanks for your enlightenment!
>
>
> On Thu, Jul 18, 2013 at 8:43 PM, Eoghan Glynn wrote:
>
> >
> > H
. I have enabled instance
> usage auditing in my nova.conf <http://pastebin.ubuntu.com/5887592/>. Is
> there anyway I could get these meters(memory and disk utilization) for VM's
> provisioned using OpenStack?
>
> Thanks for your efforts.
>
>
> On Thu, J
Hey Jobin,
Thanks for your perceptive question.
The reason is that the conduits for gathering CPU metering and memory
metering are quite different in ceilometer currently:
* cpu/cpu_util are derived by polling the libvirt daemon
* memory is derived from the "compute.instance.exists" notificati
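For illustration, the polling side boils down to reading the cumulative
cpu_time counter from libvirt; a minimal sketch using libvirt-python (not
ceilometer's actual pollster code, and the instance name is hypothetical):

    import libvirt

    # Open a read-only connection to the local hypervisor and look up
    # one instance by its libvirt domain name (hypothetical name here).
    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    # dom.info() returns [state, maxMem, memory, nrVirtCpu, cpuTime];
    # cpuTime is the cumulative CPU time in nanoseconds, which is the
    # counter the cpu meter reports and cpu_util is derived from.
    cpu_time_ns = dom.info()[4]
    print('cumulative cpu_time: %d ns' % cpu_time_ns)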
+1
Thanks,
Eoghan
- Original Message -
> G'day,
>
>
> Would anyone be interested in morning runs (5K) during the Summit in PDX next
> week?
>
> If you are, let's meet in the lobby of the Portland Hilton on Sixth Avenue at
> 0600 on Monday and 0700 from Tuesday to Friday.
>
> Some of
> Here's a first pass at a proposal for unifying StackTach/Ceilometer
> and other instrumentation/metering/monitoring efforts.
>
> It's v1, so bend, spindle, mutilate as needed ... but send feedback!
>
> http://wiki.openstack.org/UnifiedInstrumentationMetering
Thanks for putting this together S
> > What would be the best way to achieve this? A small sqlite DB
> > per-agent, or even simpler just a pickled dict? The latter would
> > avoid the complexity of DB versioning and migration.
>
> At the risk of repeating myself, can I stress again how much we don't
> need to transform cumulative
> > if you have:
> >
> > Time | Value
> > 0 | 10
> > 1 | 30
> > 2 | 50
> > 3 | 80
> > 4 | 100
> >
> > If your delta-pollster is down at 1 and 2, you restart at 3,
> > therefore at 4 you'll send "20" as usage (100 minus 80).
> > So you miss the delta between 10 (time 0) and 80 (time 3)
> > (ther
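To make the arithmetic concrete, here is a minimal sketch (not ceilometer
code) of deriving total usage from cumulative samples using the
sum-of-local-maxima idea, so that a counter reset does not discard what was
accumulated before it:

    def total_usage(samples):
        """samples: (time, value) pairs from a cumulative counter,
        in increasing time order. A drop in value is treated as a
        counter reset, so the running maximum is banked before
        continuing (the "sum of local maxima" approach)."""
        total = 0
        prev = None
        for _time, value in samples:
            if prev is not None and value < prev:
                total += prev   # bank the usage accumulated before the reset
            prev = value
        return total + (prev or 0)

    # No reset: 10, 30, 50, 80, 100 -> 100, even if some polls were missed.
    print(total_usage([(0, 10), (1, 30), (2, 50), (3, 80), (4, 100)]))
    # Reset between t=2 and t=3: 50 is banked, then 20 more -> 70.
    print(total_usage([(0, 10), (1, 30), (2, 50), (3, 5), (4, 20)]))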
> > Would we also have some 'misses' with the cumulative approach
> > when the ceilometer agent was down?
>
> No, unless the counter resets several times while your agent is down.
> But delta has the same issue.
>
> > If I understood the (\Sigma local maxima)-first idea correctly,
> > the
> If your pollster is not running to compute delta and you have
> no state stored, you'll miss a part of what has been used.
Would we also have some 'misses' with the cumulative approach
when the ceilometer agent was down?
If I understood the (\Sigma local maxima)-first idea correctly,
the
> > I don't think (max - min) would suffice to give an accurate
> > measure of the actual CPU time used, as the counter may have
> > reset multiple times in the course of the requested duration.
>
> > It is, because /max in the API should be aware of the fact that a
> reset can occur and computes acco
> Not at all. It means the CPU time consumed is reset to 0, but
> that's not an issue in itself; the API should be capable of
> dealing with that if you ask for the total usage.
Would that total usage be much more apparent if we started
metering the delta between CPU times on subsequent polling
peri
Hi Yawei Wu,
The root of the confusion is the fact that the cpu meter is reporting
the cumulative cpu_time stat from libvirt. This libvirt counter is
reset when the associated qemu process is restarted (an artifact
of how cpuacct works).
So when you stop/start or suspend/resume, a fresh qemu process
> I am testing ceilometer in my devstack virtual machine. Although I
> can see the meter data model in the mongodb, I am confused about
> some of the terminology when I test its Web API. I am not very clear about
> the "resource" in the "GET /v1/resources", or the "source" in the "GET
> /v1/sources/(sourc
> My point was that if a user is currently configured to have a quota
> of 50 VMs, and the default is currently configured to be 20 VMs then
> there is a difference between "configuring the user to have a quota
> of 20" and "configuring a user to have the default quota".The
> first is just a s
> Isn't that just replacing one custom limit with another?
>
> A true reset to the defaults would see the user stay in step with any
> changes to the default values.
Do you mean configured changes to the defaults?
AFAIK 'nova quota-defaults' returns the current set of defaults,
which seems to
> Hi All,
>
>
>
> I would like to open a discussion on a topic: users should have an
> option to reset the tenant’s quotas (to the default).
Hi Vijaya,
I don't think a new nova command is needed for this use-case,
just add a simple custom script:
nova quota-update `nova quota-defaults $1 | t
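A sketch of the same idea in Python, wrapping the two nova CLI calls above
(the output parsing and the underscore-to-dash flag mapping are assumptions
about the Essex-era client; adjust for your version):

    import subprocess
    import sys

    tenant = sys.argv[1]
    # Read the configured defaults, e.g. "| instances | 10 |" table rows.
    defaults = subprocess.check_output(
        ['nova', 'quota-defaults', tenant], universal_newlines=True)
    for line in defaults.splitlines():
        cols = [c.strip() for c in line.split('|') if c.strip()]
        if len(cols) == 2 and cols[1].isdigit():
            name, value = cols
            # Re-apply each default as the tenant's explicit quota.
            subprocess.check_call(
                ['nova', 'quota-update',
                 '--%s' % name.replace('_', '-'), value, tenant])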
Thanks Yunhong for pointing this issue out and submitting a patch
in quick order.
Your reasoning for switching from 'if offset' to 'if offset is None',
in order to avoid including the offset==0 case, makes perfect sense.
You'll just have to propose the change first to openstack-common,
from where it
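For anyone following along, the distinction being fixed is just Python
truthiness; a tiny illustration (not the actual patch):

    # `if offset:` cannot tell "no offset supplied" (None) from an
    # explicit offset of 0 -- both are falsy. `offset is None` can.
    for offset in (None, 0, 5):
        print(offset, bool(offset), offset is None)
    # None  False  True
    # 0     False  False
    # 5     True   False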
2012-09-04 13:09:16 TRACE nova.api.openstack rv = list(rv)
> 2012-09-04 13:09:16 TRACE nova.api.openstack File
> "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 304, in
> __iter__
> 2012-09-04 13:09:16 TRACE nova.api.openstack self.done()
> 2012-09-04 13:09:16 TRACE no
> I think its great that we're having this discussion.
+1, excellent discussion in terms of both tone & content.
> In the hope that its informative, I'd like to give some info on
> issues
> we're looking at when moving our Glance deployment to Folsom. A lot
> of
> this is in common with Ryan,
> > I have Installed Nova Volume in the Openstack Essex Controller. But
> > When I restart the nova-volume service, I get the following error
> [...]
> > 2012-09-03 12:26:22 TRACE nova OperationalError: (OperationalError)
> > (1054, "Unknown column 'volumes.instance_id' in 'field list'")
>
>
>
> I have Installed Nova Volume in the Openstack Essex Controller. But
> When I restart the nova-volume service, I get the following error
[...]
> 2012-09-03 12:26:22 TRACE nova OperationalError: (OperationalError)
> (1054, "Unknown column 'volumes.instance_id' in 'field list'")
Hi Trinath,
One
> While trying to create a VM instance on openstack, the boot command
> (nova boot) returns the following error:
> ---
> ERROR: The server has either erred or is incapable of performing the
> requested operation. (HTTP 500)
> ---
> everything seems to be working (nova services are starting).
>
>
Can you provide relevant glance-api and -registry log excerpts?
Also probably best to track this as a glance question[1] or bug[2].
Cheers,
Eoghan
[1] https://answers.launchpad.net/glance
[2] https://bugs.launchpad.net/glance
- Original Message -
>
> Hello
>
> I'm getting this error
Hi Jorge,
What version are you testing against?
I recently got a series of patches onto master that addressed a bunch
of issues in the EC2 CreateImage support, so that it now works smoothly
with volume-backed nova instances:
https://review.openstack.org/9732
https://review.openstack.org/98
> > Would that address your requirement?
>
> I think so. If these acted as a hard limit in conjunction with
> existing quota constraints, I think it would do the trick.
I've raised this a nova blueprint, so let's see if it gets any traction:
https://blueprints.launchpad.net/nova/+spec/flavo
lementation lean.
>
> Kiall
> On Jul 20, 2012 3:48 PM, "Eoghan Glynn" < egl...@redhat.com > wrote:
>
>
>
> > The harder part is that we need to be able to specify
> > independent/orthogonal quota constraints on different flavors. It
> > would
> The harder part is that we need to be able to specify
> independent/orthogonal quota constraints on different flavors. It
> would be really useful to be able to say basically, you can have 2TB
> of memory from this flavor, and 4TB of memory from that flavor. This
> would allow saying something l
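As a strawman for what that could look like (this is a sketch of the
proposal, not an existing nova feature; names and units are made up):

    # Hypothetical per-tenant caps: total RAM allowed per flavor, in MB.
    flavor_ram_cap_mb = {
        'bigmem': 2 * 1024 * 1024,   # 2TB of memory from this flavor
        'hugemem': 4 * 1024 * 1024,  # 4TB of memory from that flavor
    }

    def within_flavor_quota(flavor, requested_mb, used_mb_by_flavor):
        """Check a request against the per-flavor RAM cap, if one is set."""
        cap = flavor_ram_cap_mb.get(flavor)
        if cap is None:
            return True   # no flavor-specific constraint configured
        return used_mb_by_flavor.get(flavor, 0) + requested_mb <= cap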
> We're running a system with a really wide variety of node types. This
> variety (nodes with 24GB, 48GB, GPU nodes, and 1TB mem nodes) causes
> some real trouble with quotas. Basically, for any tenant that is going
> to use the large memory nodes (even in smaller slices), we need to set
> quotas
> Right - examining the current state isn't a good way to determine
> what happened with one particular request. This is exactly one of
> the reasons some providers create Jobs for all actions. Checking the
> resource "later" to see why something bad happened is fragile since
> other operations mig
> Note that I do distinguish between a 'real' async op (where you
> really return little more than a 202) and one that returns a
> skeleton of the resource being created - like instance.create() does
> now.
So the latter approach at least provides a way to poll on the resource
status, so as to fi
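i.e. something along the lines of the following novaclient-style polling
loop (a hedged sketch; the client handle and the status values are
assumptions):

    import time

    def wait_for_server(client, server_id, timeout=300, interval=5):
        """Poll the skeleton resource until it reaches a terminal state."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = client.servers.get(server_id)
            if server.status in ('ACTIVE', 'ERROR'):
                return server
            time.sleep(interval)
        raise RuntimeError('timed out waiting on server %s' % server_id)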
Thanks for the quick response ...
> Very basic things, not much other than the Jenkins Slave service and
> SSH. Nothing that should cause conflicts that you are seeing. We
> also intentionally only run one test run per slave at a time.
Interesting, seems the alternate explanation of a lag-on-
Folks,
A question for the CI side-of-the-house ...
What else is running on the Jenkins slaves, concurrently with the gating CI
tests?
The background is the intermittent glance service launch failure - the recently
added strace-on-failure logic reveals the issue to be an EADDRINUSE when the
reg
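For reference, the failure mode is easy to reproduce with two sockets; a
second bind to a port that already has a listener fails with EADDRINUSE
(illustrative snippet only):

    import errno
    import socket

    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(('127.0.0.1', 0))        # kernel picks a free port
    first.listen(1)
    port = first.getsockname()[1]

    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(('127.0.0.1', port))
    except socket.error as err:
        assert err.errno == errno.EADDRINUSE
    finally:
        second.close()
        first.close()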
Hi Folks,
I've been looking into the (currently broken) EC2 CreateImage API support
and just wanted to get a sanity check on the following line of reasoning:
- EC2 CreateImage should *only* apply to booted-from-volume nova servers,
for fidelity with the EC2 limitation to EBS-based instances (
Hi Folks,
I wanted to use strace(1) to get to the bottom of the glance service
launch failures that have been plaguing Smokestack and Jenkins in the
past few weeks:
https://review.openstack.org/8722
However I just realized that Ubuntu from Maverick onward no longer allows
ptrace to attach t
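(The restriction in question is the Yama ptrace_scope sysctl; a quick check,
assuming the Ubuntu default of 1, which only permits tracing direct
children:)

    # Yama's ptrace_scope controls who may ptrace-attach: 0 allows
    # attaching to any process you own, 1 (the Ubuntu default since
    # Maverick) only allows tracing direct children, which is what
    # blocks "strace -p <pid>". Setting it back to 0 (via sysctl
    # kernel.yama.ptrace_scope=0) restores the old behaviour.
    with open('/proc/sys/kernel/yama/ptrace_scope') as f:
        scope = int(f.read().strip())
    print('ptrace attach to arbitrary own processes allowed: %s' % (scope == 0))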
data that must be replicated somehow across the system. I
> > don't think we can really ensure no collisions mapping from uuid ->
> > ec2_id deterministically, and I don't see a clear path forward when
> > we do get a collision.
> >
> > Vish
> >
>
a clear path forward when
> we do get a collision.
>
> Vish
>
> On May 8, 2012, at 12:24 AM, Michael Still wrote:
>
> > On 04/05/12 20:31, Eoghan Glynn wrote:
> >
> > Sorry for the slow reply, I've been trapped in meetings.
> >
> > [snip]
>
> > Current warts:
> > ...
> > - maintaining amazon ec2 ids across regions requires twiddling the
> > nova database where this mapping is stored
>
>
> Hi Mikal,
>
> We discussed that nova s3_images table earlier in the week on IRC.
> Now at the time, I wasn't fully clear on the mechanics of
> Current warts:
> ...
> - maintaining amazon ec2 ids across regions requires twiddling the
> nova database where this mapping is stored
Hi Mikal,
We discussed that nova s3_images table earlier in the week on IRC.
Now at the time, I wasn't fully clear on the mechanics of the glance
UUID ->
> Should you mix this into Keystone? Seems kind of wrong to mix
> identity management with quotas?
This was discussed at several sessions at the design summit, so I
brought it up at the keystone 'state of the nation' session to
get a feel for the keystone community's disposition to the idea.
There
Hi Andrei,
The underlying issue is starvation of the storage space used to store
image content (as opposed to the image metadata, which takes up very
little space).
The reason the killed image isn't showing up in the output of glance index
is that non-viable images are sanitized from the list.
I
> https://review.openstack.org/#/c/6847/
Nice!
> * Migrations added during Folsom release cycle could be compacted
> during "E" release cycle. TBD if/when we do the next compaction.
An alternative idea would be to do the compaction *prior* to the
Folsom release instead of after, so that the c
- Original Message -
> > Kevin, should we start copying openstack-common tests to client
> > projects? Or just make sure to not count openstack-common code in
> > the
> > code coverage numbers for client projects?
>
> That's a tough one. If we copy in the tests, they end up being some
> There's something like 7 pages of open reviews on gerrit. The project
> has a good kind of problem with so many people trying to contribute.
> The question now is how to scale the development processes to handle
> that growth.
>
> It was nice to see a number of discussions at the summit in th
> We've just upgraded Gerrit to version 2.3. There are a lot of
> changes
> behind the scenes that we've been looking forward to (like being able
> to
> store data in innodb rather than myisam tables for extra data
> longevity). And there are a few visible changes that may be of
> interest
> to
Thanks for the response Caitlin,
> The versioning/dedup ring we are working on at Nexenta will support
> both 1 and 3. I'll be presenting at the Summit on this.
Great, I'll look forward to your presentation.
> The ultimate goal of distributed dedup is scenario #1. Only the
> client software ca
Folks,
From previous posts on the ML, it seems there are a couple of
efforts in train to add distributed content deduping to Swift.
My question is whether either or both of these approaches involve
active client participation in enabling duplicate chunk
detection?
One could see a spectrum rangin
> I try to assign quota to individual users, to control how many
> instances
> each user can run concurrently. But I don't see a doc describing how
> to
> do that. I use diablo release.
> Any help or doc pointer will be greatly appreciated.
Quotas apply at the nova project/tenant granularity, a
> APPENDIX B: Outstanding issues
> ...
> 2) How do we fit the existing 'copy_from' functionality in?
Is the v2 API retaining some equivalent of the existing
x-image-meta-location header, to allow an externally-stored
image to be registered with glance?
e.g. via an image field specified on create
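For concreteness, the v1 behaviour being asked about looks roughly like
this (a sketch with python-requests; endpoint, token and image URL are
placeholders):

    import requests

    TOKEN = 'replace-with-a-keystone-token'
    headers = {
        'X-Auth-Token': TOKEN,
        'x-image-meta-name': 'externally-stored-image',
        'x-image-meta-disk-format': 'qcow2',
        'x-image-meta-container-format': 'bare',
        # Register the image without uploading any data: glance just
        # records where the bits already live.
        'x-image-meta-location': 'http://example.com/images/image.qcow2',
    }
    resp = requests.post('http://glance.example.com:9292/v1/images',
                         headers=headers)
    resp.raise_for_status()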
> > > Eoghan Glynn wrote:
> > >
> > > - how is the mapping between project and quota-class established?
> > > I was expecting a project_quota_class_association table or
> > > some-such in the nova DB. Is this association maintained by
>
> > Eoghan Glynn wrote:
> >
> > - how is the mapping between project and quota-class established?
> > I was expecting a project_quota_class_association table or
> > some-such in the nova DB. Is this association maintained by
> > keystone instead?
>
> COMMUNITY STATISTICS
>
>
>
> • Activity on the main branch of OpenStack repositories, lines of
> code added and removed per developer during week 7 of 2012 (from
> Mon Mar 19 00:00:00 UTC 2012 to Mon March 26 00:00:00 UTC 2012)
Hi Stefano,
Assuming you're using git-log to gener
> > Presumably we'd also need some additional logic in the
> > quota-classes API
> > extension to allow tenant-to-quota-class mappings be established
> > and torn down?
>
> Well, yeah :)
Cool, captured in https://bugs.launchpad.net/nova/+bug/969537
I'll propose a patch early next week.
Cheers
> Eoghan Glynn wrote:
> > A couple of quick questions on how this quota class mechanism is
> > intended to work ...
> >
> > - how is the mapping between project and quota-class established?
> > I was expecting a project_quota_class_association table or
> &
> I wanted to let everyone know about a quota classes blueprint I've
> submitted; you can find the details here:
>
> * https://blueprints.launchpad.net/nova/+spec/quota-classes
> * http://wiki.openstack.org/QuotaClass
>
> I've already implemented this blueprint and pushed to Gerrit, but
> have
> Done.
>
> https://bugs.launchpad.net/glance/+bug/962998
Thanks, fixed here: https://review.openstack.org/5727
Cheers,
Eoghan
Hi Juerg,
That's because 'owner' is not supported as an explicit parameter to 'glance
add'.
So as a result the CLI treats it as a generic image property, and passes
this to the API service via the header:
x-image-meta-property-owner: 2
The 'x-image-meta-property-' prefix is used to distinguis
> Kevin Mitchell wrote:
> I recently got the quota classes stuff merged into master (after the RC
> branch for Essex was cut, of course). After I had completed that work,
> I started thinking about quotas in general, and I think there's a better
> way to organize how we do quotas in the first pla
Thanks Jay for the feedback and background info, comments inline ...
> > Eoghan Glynn wrote:
> > So the question is whether there's already a means to achieve this
> > in one fell swoop?
>
> Jay Pipes wrote:
> Well, Horizon, in the launch instance modal dialog
Folks,
One thing that's been on my wishlist since hitting a bunch of
quota exceeded issues when first running Tempest and also on
the Fedora17 openstack test day.
It's the ability to easily see the remaining headroom for each
per-project quota, e.g.
$ nova-manage quota --headroom --project=admin
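i.e. for each quota-controlled resource, report limit minus current usage;
the calculation itself is trivial (the figures below are made up):

    limits = {'instances': 10, 'cores': 20, 'ram': 51200}   # configured quota
    in_use = {'instances': 7, 'cores': 14, 'ram': 35840}    # current usage

    for resource in sorted(limits):
        headroom = limits[resource] - in_use.get(resource, 0)
        print('%-10s limit=%-6d used=%-6d headroom=%d'
              % (resource, limits[resource], in_use.get(resource, 0), headroom))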
> Yes, it does make perfect sense. Kind thanks for the explanation.
>
> However, what is still unclear is which config items that pertain to
> other apps must still be present in (i.e. duplicated in) glance-api.conf
> (e.g. image_cache_driver, etc.)
This is probably something we should document m
Florian,
The key point in the split between glance-api.conf, glance-registry.conf,
glance-cache.conf etc. is the glance application intended to consume that
config.
This follows directly from the naming:
bin/glance-api by default consumes glance-api.conf
bin/glance-registry by default consume
Does /etc/glance/policy.json exist?
Is it readable?
- Original Message -
> From: ".。o 0 O泡泡" <501640...@qq.com>
> To: "openstack"
> Sent: Wednesday, 7 March, 2012 2:06:50 PM
> Subject: [Openstack] can not start glance-api in glance E4
>
>
> hi all:
>
> In glance E4, when I enter fol
> 1. Add catalog_name=compute to tempest.conf
> 2. Change "name" to "type" in rest_client.py
Yep, easiest to just apply this patch:
git fetch https://review.openstack.org/p/openstack/tempest
refs/changes/59/4259/1 && git format-patch -1 --stdout FETCH_HEAD
Cheers,
Eoghan
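For anyone hitting the same thing: the keystone service catalog entries
carry both a deployment-chosen "name" and a standard "type", so looking the
endpoint up by type is the portable approach the patch moves to.
Illustrative data only:

    catalog = [
        {'type': 'compute', 'name': 'nova',
         'endpoints': [{'publicURL': 'http://cloud.example.com:8774/v2/t'}]},
        {'type': 'image', 'name': 'glance',
         'endpoints': [{'publicURL': 'http://cloud.example.com:9292/v1'}]},
    ]

    def endpoint_for(catalog, service_type):
        for service in catalog:
            if service['type'] == service_type:
                return service['endpoints'][0]['publicURL']
        raise KeyError('no endpoint of type %s' % service_type)

    print(endpoint_for(catalog, 'compute'))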
> Right now I'm having trouble setting up and running the tests against
> devstack (a configuration problem on my part I suppose, but I blame
> it on the lack of documentation).
Hi Ionut,
I'm currently working on getting tempest running on Fedora 16.
I'll follow up here with a recipe once I have
This is great news Dean, thank you!
I'll try using your patch to get tempest running on F16,
and I'll get back to you with any issues I encounter.
Cheers,
Eoghan
- Original Message -
> From: "Dean Troyer"
> To: openstack@lists.launchpad.net
> Sent: Wednesday, 22 February, 2012 5:04:03
> Deltacloud already has support for OpenStack:
>
> http://deltacloud.apache.org/drivers.html
Yep, though the existing support is a thin extension over the
original deltacloud Rackspace driver, so is limited to the 1.0
version of the openstack compute API.
However work is under way on a new
> I'm not good with WSGI. I have a foolish question to ask.
> Which part of the source code handles the receiving of the uploaded
> data?
>
> As far as I know, the upload data is in body_file from webob. I
> traced the webob
> code but it made my head spin.
>
> ---> send chunked data -> |
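A minimal sketch of the receiving side (not glance's actual upload path): a
WSGI app that reads the request body in chunks via webob's
Request.body_file, which wraps environ['wsgi.input'], instead of buffering
the whole image in memory:

    from webob import Request, Response

    CHUNK_SIZE = 64 * 1024

    def upload_app(environ, start_response):
        req = Request(environ)
        received = 0
        while True:
            chunk = req.body_file.read(CHUNK_SIZE)
            if not chunk:
                break
            received += len(chunk)   # a real server would write to a store here
        return Response('received %d bytes' % received)(environ, start_response)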
So, what say ye?
> I'd like to request an Essex feature freeze exception for this
> blueprint:
>
> https://blueprints.launchpad.net/glance/+spec/retrieve-image-from
>
> as implemented by the following patch:
>
> https://review.openstack.org/#change,4096
>
> The blueprint was raised in r
Folks,
I'd like to request an Essex feature freeze exception for this blueprint:
https://blueprints.launchpad.net/glance/+spec/retrieve-image-from
as implemented by the following patch:
https://review.openstack.org/#change,4096
The blueprint was raised in response to a late-breaking feature
> > Yep, that's pretty much exactly the implementation we were hoping
> > might exist. If it can be built that would be phenomenal. Any
> > thoughts on whether that might be possible before E4 closes, or
> > will
> > it have to wait until Folsom?
>
> I'll propose a blueprint and see if I can get
can get it approved for E4.
My feeling is that it should be do-able in that timeframe.
Cheers,
Eoghan
> > From: Eoghan Glynn [mailto:egl...@redhat.com]
> >
> > A-ha, I see what you mean.
> >
> > AFAIK that mode of upload separate to the image POST is no
> BTW, does anybody know who is taking care of it for Debian?
Apparently Janoš Guljaš was looking at packaging
it for Debian.
But apparently the original maintainer of the python-sendfile package
is uncontactable, so a "team upload" (Debian Python Modules Team) would
be needed
Cheers,
Eoghan
Folks,
Just a quick heads-up that this review[1] if accepted will result in
glance taking a soft dependency on pysendfile.
The import is conditional, so where pysendfile is unavailable on a
particular distro, the 'glance add' command will simply fallback to
the pre-existing chunk-at-a-time logic
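The fallback pattern looks roughly like this (a sketch, not the actual
review; assumes the pysendfile 2.x API where sendfile() returns the number
of bytes sent):

    try:
        from sendfile import sendfile   # provided by pysendfile, if installed
    except ImportError:
        sendfile = None

    def copy_image(dst, src, length, chunk=64 * 1024):
        """Copy `length` bytes from src to dst, zero-copy when possible."""
        if sendfile is not None:
            offset = 0
            while offset < length:
                sent = sendfile(dst.fileno(), src.fileno(), offset,
                                min(chunk, length - offset))
                if sent == 0:
                    break
                offset += sent
        else:
            # pre-existing chunk-at-a-time fallback
            remaining = length
            while remaining > 0:
                data = src.read(min(chunk, remaining))
                if not data:
                    break
                dst.write(data)
                remaining -= len(data)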
ation that's accessible to the
glance API service.
Would that address your use-case, Gabriel?
Cheers,
Eoghan
> On Feb 7, 2012, at 2:09 PM, Eoghan Glynn wrote:
>
> >
> >
> >> The Horizon team is looking at adding a first-pass implementation
> >> of
>
> The Horizon team is looking at adding a first-pass implementation of
> image upload before the Essex release, and we'd really like to
> bypass the problems associated with, say, passing a 700MB Ubuntu
> image through the user's browser to a web server and then across to
> Glance...
>
> So the
Hey Jay,
I'll take this one (assuming no-one else was thinking of grabbing it?).
Cheers,
Eoghan
- Original Message -
> From: "Jay Pipes"
> To: openstack@lists.launchpad.net
> Sent: Tuesday, 7 February, 2012 2:37:17 AM
> Subject: [Openstack] [GLANCE] Easy blueprint for a new contribut
Hi Reynolds,
I've been looking into your interesting idea around sendfile()[1]
usage, here are a few initial thoughts:
- There's potentially even more speed-up to be harnessed in serving
out images from the filesystem store via sendfile(), than from using
it client-side on the initial uploa
f github:
https://review.openstack.org/3421
but if you publish the WADL at a well-known path under docs.openstack.org,
that would be much better.
Cheers,
Eoghan
- Original Message -
> From: "Anne Gentle"
> To: "Eoghan Glynn"
> Cc: openstack@lists.launchpad.ne
> So I was wondering whether there was an intention to publish a v1.1 WADL ...
Follow-up question: would it be nasty to serve out that WADL directly from
github?
e.g
https://github.com/openstack/compute-api/blob//openstack-compute-api-1.1/src/os-compute-1.1.wadl
Hi Folks,
The describedby links in nova/api/openstack/compute/versions.py
contain broken hrefs to a v1.1 WADL document[1] and PDF[2].
Looks like a copy'n'paste from the corresponding 1.0 versions of the
WADL[3] and PDF[4], both of which are present and correct.
So I was wondering whether there