Really not trying to derail, but
"Mark McLoughlin" said:
>
> [..]
>> > Also, these global objects force us to do a bunch of hacks in unit
>> > tests. We need to do tricks to ensure the object is initialized as
>> > we want. We also need to save and restore its state between runs.
>>
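As an aside, the save/restore trick being described usually looks something
like the following. This is only a minimal sketch against a hypothetical
module-level CONF dict, not nova's actual test code:

    import copy
    import unittest

    CONF = {'verbose': False, 'sql_connection': 'sqlite://'}  # global state

    class ConfigPreservingTestCase(unittest.TestCase):
        def setUp(self):
            # Snapshot the global so one test cannot leak state into the next.
            self._saved_conf = copy.deepcopy(CONF)

        def tearDown(self):
            # Undo whatever the test mutated by restoring the snapshot.
            CONF.clear()
            CONF.update(self._saved_conf)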
Out of curiosity, why prefer keystone for centrally managing quota groups
rather than an admin api in nova? From my perspective, a nova admin api would
save a data migration and preserve nova-manage backwards compatibility.
Also, since quota clearly isn't an auth-n thing, is keystone way more a
What problems are caching strategies supposed to solve?
On the nova compute side, it seems like streamlining db access and
api-view tables would solve any performance problems caching would
address, while keeping the stale data management problem small.
"Sandy Walsh" said:
> o/
>
> Vek and mys[...]t have to
> reinvent the wheel or hit the db at all.
>
> In addition to looking into caching technologies/approaches we're gluing
> together some tools for finding those bottlenecks. Our first step will
> be finding them, then squashing them ... however.
>
> -S
>
>
> On 03/22/2012 06:25 PM, Mark Washenberger wrote:
>> What problems are caching strategies supposed to solve?
optimization, I'm sure, will go equally far.
>
> Thanks again for the great feedback ... keep it comin'!
>
> -S
>
>
> On 03/22/2012 11:53 PM, Mark Washenberger wrote:
>> Working on this independently, I created a branch with some simple
>> performance lo
"Johannes Erdfelt" said:
>
> MySQL isn't exactly slow and Nova doesn't have particularly large
> tables. It looks like the slowness is coming from the network and how
> many queries are being made.
>
> Avoiding joins would mean even more queries, which looks like it would
> slow it down even
vm on a different hypervisor. Not sure why that
is the case! In any case it is trivial: ~3 ms for the first ping, ~0.3 ms
for subsequent pings.
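To make the query-count point concrete, here is a toy sqlite3 sketch (the
schema is hypothetical) of one JOIN versus an extra query per row. At ~0.3 ms
of round trip per query, the N+1 form costs roughly N * 0.3 ms more:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.executescript("""
        CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT);
        CREATE TABLE instance_metadata (instance_id INTEGER, key TEXT, value TEXT);
        INSERT INTO instances VALUES (1, 'compute1');
        INSERT INTO instances VALUES (2, 'compute2');
        INSERT INTO instance_metadata VALUES (1, 'color', 'blue');
    """)

    # One round trip: the join pulls instances and their metadata together.
    joined = conn.execute("""
        SELECT i.id, i.host, m.key, m.value
        FROM instances i
        LEFT JOIN instance_metadata m ON m.instance_id = i.id
    """).fetchall()

    # N+1 round trips: one query for the list, then one more per instance.
    rows = conn.execute("SELECT id, host FROM instances").fetchall()
    for instance_id, _host in rows:
        conn.execute("SELECT key, value FROM instance_metadata"
                     " WHERE instance_id = ?", (instance_id,)).fetchall()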
"Sandy Walsh" said:
> Was the db on a separate server or loopback?
>
> On 03/23/2012 05:26 PM, Mark Washenberger wrote:
>>
> How many compute nodes do you have, how many VMs do you have, are you
> creating/destroying/migrating VMs, volumes, networks?
>
> Thanks,
>
> Yun
>
> On Fri, Mar 23, 2012 at 4:26 PM, Mark Washenberger
> wrote:
>>
>>
>> "Johannes Erdfelt" said:
>
Hi Yun,
This proposal looks very good to me. I am glad you included in it the
requirement that hard deletes can take place in any vm/task/power state.
I however feel that a similar requirement exists for revert resize. It should
be possible to issue a RevertResize command for any task_state (a
"Jay Pipes" said:
> On 05/29/2012 04:04 AM, Mark McLoughlin wrote:
>> Adopting this pattern across all projects will actually help
>> openstack-common more generally. For example, Russell is moving the RPC
>> code into openstack-common and it has a bunch of configuration options.
>> If it can as
sounds like a great place to address the other
rpc-specific concerns we've talked about. Otherwise I guess we're
stuck where I thought we were, where the bar needs to be set pretty
high to initially land in os-common.
"Mark McLoughlin" said:
> Hi Mark,
>
> On Thu, 2
> http://wiki.openstack.org/CommonLibrary#Incubation
Once an api is in incubation, if you make a change to it, you are expected to
update all the other openstack projects (not just core projects?) to make them
work with the new api. Am I understanding this requirement correctly? If so,
how is t
"Mark McLoughlin" said:
> On Tue, 2012-06-05 at 12:21 -0400, Mark Washenberger wrote:
>> > http://wiki.openstack.org/CommonLibrary#Incubation
>>
>> Once an api is in incubation, if you make a change to it, you are
>> expected to update all the
about
your blueprints and determine what is going into common and in what form?
I think I can be less disruptive if I'm involved in these discussions much
earlier.
"Mark McLoughlin" said:
> On Tue, 2012-06-05 at 17:25 -0400, Mark Washenberger wrote:
>>
>> "M
"Sean Dague" said:
> On 06/12/2012 05:53 PM, Dan Prince wrote:
>
>>> Here's my current suggested path forward, which I'd like comments on:
>>> * keep the existing nova.utils deprecation functions (don't remove
>>> them)
>>
>> My take is why keep a 200-300 line set of functions and tests
I'm tending to agree with Sandy's comments.
I think we all agree that we have a mess with the database stubbing that is
going on. And I'm confident that the db fake would make that mess more
manageable.
But the way I see the mess, it comes from having a giant flat db interface and
really large
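For illustration, the kind of in-memory db fake under discussion can be as
small as this sketch. The method names here are hypothetical, not nova's
actual db interface:

    import copy
    import itertools

    class FakeDB(object):
        def __init__(self):
            self._instances = {}
            self._ids = itertools.count(1)

        def instance_create(self, context, values):
            instance = copy.deepcopy(values)
            instance['id'] = next(self._ids)
            self._instances[instance['id']] = instance
            return copy.deepcopy(instance)

        def instance_get(self, context, instance_id):
            # Deep-copy on the way out so tests cannot mutate "db" state.
            return copy.deepcopy(self._instances[instance_id])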
ind this restriction, could you talk about it a
bit? I just want to understand the reasoning behind this choice.
Thanks!
Mark Washenberger
Rackspace Hosting
Software Developer
mark.washenber...@rackspace.com
Can you talk a little more about how you want to apply this failure
notification? That is, what is the case where you are going to use the
information that an operation failed? In my head I have an idea of getting code
simplicity dividends from an "everything succeeds" approach to some of our
o
t for the same
> reasons, but
> it can get in the way.
>
> -S
>
>
> From: openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net
> [openstack-bounces+sandy.walsh=rackspace@lists.launchpad.net] on behalf of
> Mark Washenberger [mark.washenber...@rackspace.com]
expected to support).
Context:
https://blueprints.launchpad.net/nova/+spec/trusted-computing-pools
http://wiki.openstack.org/TrustedComputingPools
https://review.openstack.org/1899
Mark Washenberger
Rackspace Hosting
Software Developer
mark.washenber...@rackspace.com
ces+fred.yang=intel@lists.launchpad.net
>> [mailto:openstack-bounces+fred.yang=intel@lists.launchpad.net] On
>> Behalf Of Vishvananda Ishaya
>> Sent: Friday, December 09, 2011 11:33 AM
>> To: Michael Pittaro
>> Cc: OpenStack Mailing List; Mark Washenberger
>>
Fred,
I can see the plugin-like behavior of the approach you have taken. However,
there are a few components of it that could be improved in order to avoid
adding extra complexity to the scheduler and to nova.
IMO, the parts that add complexity are the additional integrity caching
service, the
"Johannes Erdfelt" said:
> On Thu, Dec 15, 2011, Kevin L. Mitchell wrote:
>> 2. However, I violently disagree with the idea that the DB layer
>> must return dicts. It does not, even if you start talking about
>> allowing use of other kinds of databases. We can, and should
The only thing I see that ties us to sqlalchemy is using the model objects
directly. But I think there are actually three choices here: sqlalchemy
objects, dicts, and regular objects. Well really there are four, if we include
sqlalchemy objects that try to act like dicts :-). My preference order
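For concreteness, the "plain dicts" option could be as simple as copying
column values out at the db API boundary. A sketch assuming SQLAlchemy
declarative models, with an illustrative table:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        host = Column(String(255))

    def to_dict(model):
        # Copy column values out so callers never hold live (session-bound)
        # SQLAlchemy objects.
        return dict((c.name, getattr(model, c.name))
                    for c in model.__table__.columns)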
"Johannes Erdfelt" said:
> I'm not saying you need to do it, but this is something that doesn't
> have an obvious design and implementation. It would be easier to
> understand and discuss with some real meat behind it.
From what Monsyne Dragon and Jonathan LaCour have said on this topic, I f
to overstep.
So I ask: Is there a consensus among nova-core that the approach given in the
blueprint needs to be changed? Or the other way around, is there a consensus
approving of this approach?
Thanks
Mark Washenberger
Rackspace Hosting
Software Developer
Openstack-common could be great. There are lots of use cases that make a lot of
sense to put in openstack common. Configuration loading, context, some aspects
of logging, wsgi middleware, some parts of utils--those seem to me like great
opportunities to save time and effort, both writing and rea
perform nodes filtering twice if no trust req. specified by the
> instance
>
> Above patches can all be turned on/off by FLAGS control without embedding code
> into existing nova code
>
> Suggestion?
>
> -Fred
>
>
>
>
>> -----Original Message-----
"Gabe Westmaas" said:
> I think both of these approaches are valid, and that speaks to the fact that
> there isn't really a relationship between the two concepts.
Absolutely. An availability zone is about partitioning user instance
infrastructure and exposing that partitioning scheme to the api user f
> Remember that for many deployments, the entire system will be a single
> "zone", so
> whatever term is used should make sense in a singular sense. That rules out
> names
> such as 'slice' or 'fragment'.
I think this is a slightly outdated concept of zones.
The key to scalability in nova
Someone might have already said this (sure wish the listserv sent me mail
faster), but we tried out PyMysql and it was exceptionally slow, even under
almost no load.
I have a branch in my github that I was using to test out unblocking the
database access. For my cases I found that it was unbloc
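For anyone curious, one unblocking approach from around then was pushing the
blocking call into eventlet's OS thread pool. A minimal sketch under that
assumption, not the branch referenced above:

    from eventlet import tpool
    import time

    def slow_query():
        # Stand-in for a blocking MySQLdb call that would otherwise stall
        # the whole eventlet hub.
        time.sleep(0.1)
        return ['instance-1', 'instance-2']

    def instance_get_all_nonblocking():
        # tpool.execute runs the callable in a real OS thread, so other
        # green threads keep running while the query blocks.
        return tpool.execute(slow_query)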
While we are on the topic of api performance and the database, I have a
few thoughts I'd like to share.
TL;DR:
- we should consider refactoring our wsgi server to leverage multiple
processors (a rough sketch follows below)
- we could leverage compute-cell database responsibility separation
to speed up our api database perfo
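The multi-process wsgi idea, roughly sketched: fork several workers that all
accept() on one shared listening socket and let the kernel spread connections
across them. The worker count and app here are illustrative only:

    import multiprocessing

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello\n']

    def worker(sock):
        # Each process runs its own eventlet hub against the shared socket.
        wsgi.server(sock, app)

    if __name__ == '__main__':
        sock = eventlet.listen(('0.0.0.0', 8080))
        for _ in range(4):
            multiprocessing.Process(target=worker, args=(sock,)).start()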
"Eric Windisch" said:
>> an rpc implementation that writes to disk and returns,
>
> A what? I'm not sure what problem you're looking to solve here or what you
> think
> the RPC mechanism should do. Perhaps you're speaking of a Kombu or AMQP
> specific
> improvement?
>
> There is no absolute
> We need an unstable trunk:
I could not possibly disagree more. Trunk is about releasability and stability.
As developers we need a stable well-protected trunk so that we can actually
work successfully in parallel on our own branches. My ideal for trunk is that
when it comes time for tagging a
> This is what we're working on, and what Justin is proposing, Mark.
>
> Basically, in Drizzle-land, people propose a merge into trunk, Hudson
> picks up that proposal, pulls the brnach into lp:drizzle/staging,
> builds Drizzle on all supported platforms (>12 OS/distro combos), then
> runs all aut
I think it is commendable to identify bugs even if you can't fix them at the
time. I hope that we don't create incentives to ignore bugs you find during
development just to get your own merge through.
But I'm worried about staleness and usefulness with known bugs. If the known
bugs test cases a
This is great stuff. It sounds like there is a real distinction to be made
between the data central to the apis and the user-defined properties. Also, as
time and compatibility allow, we should probably change what we were calling
metadata to be called properties or somesuch.
"Jay Pipes" said:
Are we using the name metadata to describe a different feature than the one
that exists in the CloudServers api?
It seems like a different feature for the user-properties metadata to have
meaning to the api other than "store this information so I can read it later".
"Justin Santa Barbara" said
> [W]e
> shouldn't be overloading that functionality by performing some action based on
> user-defined metadata.
That is exactly what I've been trying to say, but you have stated it much more
succinctly. Thanks!
My specific concern is with quotas. If the current osapi metadata is overloaded
wit
Each time I call random.seed() on my box, it grabs another 256 bits from
/dev/urandom (verified by strace).
I feel like we can just rely on the old standby [random.choice(pwchars) for i
in xrange(pwlength)], peppering a few random.seed() calls in periodically to
skip onto a new pseudorandom loo
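Spelled out, the standby pattern is just this (character set and length are
illustrative). Note that random.SystemRandom reads the kernel CSPRNG directly
and sidesteps the reseeding question entirely:

    import random
    import string

    pwchars = string.ascii_letters + string.digits
    pwlength = 12

    rng = random.SystemRandom()
    password = ''.join(rng.choice(pwchars) for _ in range(pwlength))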
strace.
>
> Anyway, my focus is on users that don't want you setting passwords into
> their boxes (especially after reading this thread). Is bypassing password
> generation in scope, or should I open a new bug?
>
>
>
> On Wed, Mar 2, 2011 at 5:57 PM, Mark Washenber
> However, if we don't have documentation of the decision, then I vote that it
> never happened, and instance ids are strings. We've always been at war with
> Eastasia, and all ids have always been strings.
This approach might help us in fixing some of the nastier bits of the openstack
api image
> 1) Continue to add fields to the instances table (or compute_nodes
> table) for these main attributes like cpu_arch, etc.
> 2) Use the custom key/value table (instance_metadata) to store these
> attribute names and their values
> 3) Do both 1) and 2)
I've no particular preference here, but if we
> 1. FLAG --auto_assign_floating_ip (default=False)
> 2. Optional parameter "auto_assign_floating_ip" in existing "create" method
> 3. OpenStack API add floating_ip - allocate_floating_ip, associate_floating_ip
>
> What way is more suitable at this time?
I think primarily #1, with some degree of
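For reference, option #1 is nearly a one-liner in the python-gflags style nova
used at the time; the wiring below is illustrative, not the actual patch:

    import sys

    import gflags

    FLAGS = gflags.FLAGS
    gflags.DEFINE_boolean('auto_assign_floating_ip', False,
                          'Autoassign a floating IP to each instance at boot')

    def maybe_assign_floating_ip(instance_id, auto_assign=None):
        # A per-request parameter (option #2) could override the
        # deployment-wide flag (option #1) when supplied.
        if auto_assign is None:
            auto_assign = FLAGS.auto_assign_floating_ip
        if auto_assign:
            pass  # allocate and associate a floating IP here

    if __name__ == '__main__':
        FLAGS(sys.argv)  # parses --[no]auto_assign_floating_ip
        maybe_assign_floating_ip('instance-1')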
Eldar,
I'm having some trouble finding the diff for your implementation of approach
#1. Any chance you can share it on the list?
Thanks
"Erik Carlin" said:
> Cool. Got it. Floating IPs or what Amazon calls Elastic IPs. How are you
> solving the cross L2 problem?
>
> Erik
>
> Sent fro
k,
there is an implementation of floating ips in Nova. In the implementation of
approach #1 we just care about auto assigning/deassigning. As I know, floating
ips are implemented like NAT from the network nodes.
2011/4/17 Mark Washenberger <mark.washenber...@rackspace.com>
ing == public, and move towards referring to
addresses based on their container network's label (which could be end up being
"public" or "private" but could instead be "whizzlegoober" or "secret" if
desired).
-tr3buchet
On Mon, Apr 18, 2011 at 3:53 PM, Mark
> Add support for floating IPs in the OpenStack API (diablo-2, Jun 30)
> https://blueprints.launchpad.net/nova/+spec/openstack-api-floating-ips
> This should ultimately be deferred to the NaaS API, but we probably need
> some support for this until that is finalized.
I wasn't around for the discus
Sandy,
If I understand the features correctly, their implementation in nova seems
straightforward. However, I am still a little curious about their necessity.
For load balancing, what is the difference between a single request for N
instances and N requests for a single instance each?
"Sandy W
I'm totally on board with this as a future revision of the OS api. However it
sounds like we need some sort of solution for 1.1.
> 1. We can't treat the InstanceID as a ReservationID since they do two
> different
> things. InstanceID's are unique per instance and ReservationID's might span N
> i
+1 for UUIDs.
If we agree on this approach, there is some difficulty incorporating it into
nova as Ed has identified. However, any other projects, especially those hoping
to be adopted as Openstack projects by the PPB, can probably switch to this
approach more immediately.
Just a thought.
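The UUID approach really is this small, which is part of its appeal: ids
become opaque strings generated with no central coordination:

    import uuid

    instance_id = str(uuid.uuid4())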
"Ed
I do not intend to suggest any disagreement on my part. In fact, I'm very happy
for Thorsten's input here to help motivate this feature. However:
> Looks like the
> implementation hasn't yet added support for that, but it will.
Aren't we being a bit presumptuous?
"Jorge Williams" said:
>
I don't know much about Cloudpipe and VPN, so I hope I don't hijack the thread.
However, regarding inject_file
> Another interesting situation is with inject_file compute APIs …
>
>
> at the API level there are no file/contents fields at all, only
> def inject_file(self, context, instance_id):
>
> b
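For what it's worth, the point about the missing fields suggests a signature
more like the sketch below. The parameter names are illustrative, not nova's
eventual interface:

    class ComputeAPI(object):
        def inject_file(self, context, instance_id, path, file_contents):
            """Ask the host of instance_id to write file_contents at path."""
            # A real implementation would dispatch this over RPC to the
            # compute node that owns the instance; elided here.
            raise NotImplementedError()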