HP MSA is supported by Cinder; use the following guidelines:
http://docs.openstack.org/trunk/config-reference/content/hp-msa-driver.html
You could install devstack and follow the above guide, or update the
above-defined HP MSA parameters as suggested by devstack at
http://devstack.org/configuration.
Hi,
Below is the beginning of a spec I'd like to get into Kilo. Before
going into detail, it occurred to me that a basic decision needs to be
made, so I'd like to get thoughts on the api Alternatives mentioned below.
Thanks,
Chuck Carlino (ChuckC)
==
Enab
Currently when booting multiple instances, the instance display names will be
something like 'test-1,test-2' if we set
multi_instance_display_name_template = %(name)s-%(count)s. Here is the problem:
if we need more instances
and want the instance names to start with 'test-3', there is no such way t
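For context, the option value above is plain Python %-style string interpolation, so the current naming behaviour is easy to reproduce (a minimal illustration using the same keys):

```python
# The value of multi_instance_display_name_template is a Python
# %-style format string; Nova substitutes the base name and a
# per-boot counter into it.
template = "%(name)s-%(count)s"

# Booting two instances named 'test' yields 'test-1' and 'test-2';
# the counter always restarts at 1, which is the limitation at issue.
names = [template % {"name": "test", "count": i} for i in (1, 2)]
print(names)
```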
Le 15/09/2014 20:20, Dugger, Donald D a écrit :
I’d like to propose that we defer the meeting this week and reconvene
next Tues, 9/23. I don’t think there’s much new to talk about right
now and we’re waiting for a write up on the claims process. I’d like
to get that write up when it’s read
Hi Ben,
Thanks very much for the information!
I moved the blueprint to:
https://blueprints.launchpad.net/oslo.i18n/+spec/more-gettext-support
Best Regards,
Peng Wu
On Tue, 2014-08-26 at 10:05 -0500, Ben Nemec wrote:
> Hi Peng,
>
> We're using the spec process described in
> https://wiki.o
On Tue, Sep 16, 2014 at 03:34:28PM +1200, Steve Baker wrote:
> On 16/09/14 03:24, Alexis Lee wrote:
> > For your amusement,
> >
> > https://github.com/lxsli/heat-viz
> >
> > This produces HTML which shows which StructuredDeployments (boxes)
> > depends_on each other (bold arrows). It also shows
On 09/15/2014 09:33 PM, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-09-15 12:05:09 -0700:
>> On 15/09/14 13:28, Clint Byrum wrote:
>>> Excerpts from Flavio Percoco's message of 2014-09-15 00:57:05 -0700:
On 09/12/2014 07:13 PM, Clint Byrum wrote:
> Excerpts from Thierr
Miguel Angel Ajo Pelayo wrote:
> During the ipset implementation, we designed a refactor [1] to clean up
> the firewall driver a bit, and move all the ipset low-level knowledge
> down into the IpsetManager.
>
> I'd like to see this merged for J, and, it's a bit of an urgent matter
> to decide, b
Folks,
I have had discussions with some folks individually about this, but I would
like to bring this to a broader audience.
I have been playing with security groups, and the notion of a 'default'
security group seems to create some nuisances/issues.
Here is a list of things I have noticed so far:
Hi All,
While trying to create a volume using my cinder driver (on devstack), I get
the below issue w.r.t. num_attempts.
Am I missing any configuration here?
2014-09-16 13:20:37.837 TRACE oslo.messaging.rpc.dispatcher
File
"/opt/stack/cinder/cinder/scheduler/filter_schedule
On Tue, Sep 16, 2014 at 07:30:26AM +1000, Michael Still wrote:
> On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant wrote:
> > On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
> >> On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
> >>> Just an observation from the last week or so...
>
On Tue, Sep 16, 2014 at 12:31 PM, Manickam, Kanagaraj <
[email protected]> wrote:
> HP MSA is supported by Cinder; use the following guidelines:
>
> http://docs.openstack.org/trunk/config-reference/content/hp-msa-driver.html
>
>
>
> you could install devstack and follow the above wiki
I think bug days are a good idea. We've had them sporadically in the
past, but never weekly. We stopped mostly because people stopped
showing up.
If we think we have critical mass again, or if it makes more sense to
run one during the RC period, then let's do it.
So... Who would show up for a bug
Hi, I already tried many things but it was not working.
By default lvm multi-backends is enabled in juno devstack.
Then I went through the devstack juno code and
*enabling only single backend:*
I did not find an exact solution, so after installing devstack, I am changing
cinder.conf for
single backend and r
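For reference, a single-backend cinder.conf of the kind being described might look like the following. This is only a sketch; the backend section name, volume group, and backend name are assumptions, not the poster's actual settings:

```ini
[DEFAULT]
# List exactly one backend section to disable multi-backend behaviour.
enabled_backends = lvm-1

[lvm-1]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
volume_group = stack-volumes
volume_backend_name = LVM_iSCSI
```

After editing, the cinder services need to be restarted for the change to take effect.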
On 09/16/2014 01:10 AM, Clint Byrum wrote:
> Excerpts from Sean Dague's message of 2014-09-15 16:02:04 -0700:
>> On 09/15/2014 07:00 PM, Mark Washenberger wrote:
>>> Hi there logging experts,
>>>
>>> We've recently had a little disagreement in the glance team about the
>>> appropriate log levels fo
On Tue, Sep 16, 2014 at 2:18 PM, Nikesh Kumar Mahalka <
[email protected]> wrote:
> Hi, I already tried many things but it was not working.
> By default lvm multi-backends is enabled in juno devstack.
>
> Then I went through the devstack juno code and
>
> *enabling only single backend:*
> I didno
On 9/16/14, 11:12 AM, "Daniel P. Berrange" wrote:
>On Tue, Sep 16, 2014 at 07:30:26AM +1000, Michael Still wrote:
>> On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant
>>wrote:
>> > On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
>> >> On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wr
On Tue, Sep 16, 2014 at 06:29:53PM +1000, Michael Still wrote:
> I think bug days are a good idea. We've had them sporadically in the
> past, but never weekly. We stopped mostly because people stopped
> showing up.
>
> If we think we have critical mass again, or if it makes more sense to
> run one
Michael Still wrote:
> Yes, that was my point. I don't mind us debating how to rearrange
> hypervisor drivers. However, if we think that will solve all our
> problems we are confused.
>
> So, how do we get people to start taking bugs / gate failures more seriously?
I think we need to build a cros
Hi.
It is time to start identifying (and working on) release-critical bugs in
nova before we ship RC1.
My initial position is that any critical bug is release critical.
There are currently critical bugs not targeted to rc1, but that should
change in the next day or so. If we're not interested in fix
Looks like your glance upload is sufficiently slow that you're hitting
a timeout. Check your glance daemon logs to see if you can figure out
why it is slow.
On 15 September 2014 07:24, Nikesh Kumar Mahalka
wrote:
> Hi, I deployed an Icehouse devstack on ubuntu 14.04.
> When I am running tempest tes
> -Original Message-
> From: Flavio Percoco [mailto:[email protected]]
> Sent: 16 September 2014 10:08
> To: [email protected]
> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
> level
> guidelines
>
> On 09/16/2014 01:10 AM, Clint Byrum wrote:
>
Hi All,
If I set *max_attempts = 1* in /etc/cinder/cinder.conf, then volume
creation via the 3rd-party driver succeeds.
However, this setting does not work for volume deletion.
The volume gets deleted in OpenStack, but the driver code is never executed.
I made the above changes based on the error stack
This is a great idea, and will be hugely useful for new people in
OpenStack, like me.
Thank you!
On 16 September 2014 03:31, Ricardo Carrillo Cruz <
[email protected]> wrote:
> This is awesome, thanks for this guys!
>
> Regards
>
> 2014-09-16 7:09 GMT+02:00 Angelo Matarazzo :
>
>>
Hello All!
Oslo team is pleased to announce the new Oslo database handling library
release - oslo.db 0.5.0
List of changes:
$ git log --oneline --no-merges 0.4.0..0.5.0
c785bee Updated from global requirements
ac05c2a Imported Translations from Transifex
57f499e Add a check for SQLite transactio
On 09/16/2014 06:44 AM, Kuvaja, Erno wrote:
>> -Original Message-
>> From: Flavio Percoco [mailto:[email protected]]
>> Sent: 16 September 2014 10:08
>> To: [email protected]
>> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
>> level
>> guidelines
On 09/16/2014 05:44 AM, Thierry Carrez wrote:
> Michael Still wrote:
>> Yes, that was my point. I don't mind us debating how to rearrange
>> hypervisor drivers. However, if we think that will solve all our
>> problems we are confused.
>>
>> So, how do we get people to start taking bugs / gate failu
On 09/16/2014 03:57 AM, Thierry Carrez wrote:
> Miguel Angel Ajo Pelayo wrote:
>> During the ipset implementation, we designed a refactor [1] to clean up
>> the firewall driver a bit, and move all the ipset low-level knowledge
>> down into the IpsetManager.
>>
>> I'd like to see this merged for J,
On Tue, Sep 16, 2014 at 06:29:53PM +1000, Michael Still wrote:
> I think bug days are a good idea. We've had them sporadically in the
> past, but never weekly. We stopped mostly because people stopped
> showing up.
>
> If we think we have critical mass again, or if it makes more sense to
> run one
On 09/16/2014 04:29 AM, Michael Still wrote:
I think bug days are a good idea. We've had them sporadically in the
past, but never weekly. We stopped mostly because people stopped
showing up.
If we think we have critical mass again, or if it makes more sense to
run one during the RC period, then
On 09/16/2014 04:12 AM, Daniel P. Berrange wrote:
On Tue, Sep 16, 2014 at 07:30:26AM +1000, Michael Still wrote:
On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant wrote:
On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
On Sun, Sep 14, 2014 at 07:07:13AM +1000, Michael Still wrote:
Just an ob
On 09/15/2014 05:33 PM, Clint Byrum wrote:
> Excerpts from Zane Bitter's message of 2014-09-15 09:31:33 -0700:
>> On 14/09/14 11:09, Clint Byrum wrote:
>>> Excerpts from Gauvain Pocentek's message of 2014-09-04 22:29:05 -0700:
Hi,
A bit of background: I'm working on the publication o
On 09/16/2014 09:49 AM, Ryan Brown wrote:
>
> (From Zane's other message)
>>
>> I think the first supported release is probably the right information
> to add.
>>
>
> I feel like for anything with nonzero upgrade effort (and upgrading your
> openstack install takes significantly more than 0 ef
> -Original Message-
> From: Sean Dague [mailto:[email protected]]
> Sent: 16 September 2014 12:40
> To: [email protected]
> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
> level
> guidelines
>
> On 09/16/2014 06:44 AM, Kuvaja, Erno wrote:
> >> ---
Hi All,
Many of us this week are either swamped or travelling. Because we do not have
the numbers, I'm postponing the meeting until next week.
P
Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
A similar problem has been discussed before.
There is no definitive answer, and currently it seems we cannot simply disable
it since the G version.
However, we can add some ALLOW rules to bypass the rules inside the
iptables chains.
I hope there will be more flexibility to control the security groups in the
On 09/16/2014 09:39 AM, Jay Pipes wrote:
> On 09/16/2014 04:12 AM, Daniel P. Berrange wrote:
>> On Tue, Sep 16, 2014 at 07:30:26AM +1000, Michael Still wrote:
>>> On Tue, Sep 16, 2014 at 12:30 AM, Russell Bryant
>>> wrote:
On 09/15/2014 05:42 AM, Daniel P. Berrange wrote:
> On Sun, Sep 14
Hi Boahua,
Thanks for sharing your thoughts. The issues seen are not related to
"access"; they are all related to the API layer, so having ALLOW-all etc. does
not fix/work around the problems I mentioned.
Please do share if you have something more to add.
Fawad Khaliq
On Tue, Sep 16, 2014 at 7:28 PM,
On 09/16/2014 10:16 AM, Kuvaja, Erno wrote:
>> -Original Message-
>> From: Sean Dague [mailto:[email protected]]
>> Sent: 16 September 2014 12:40
>> To: [email protected]
>> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
>> level
>> guidelines
>>
>>
Right, graphing those sorts of variables has always been part of our test plan.
What I’ve done so far was just some pilot tests, and I realize now that I
wasn’t very clear on that point. I wanted to get a rough idea of where the
Redis driver sat in case there were any obvious bug fixes that need
Hi,
I deployed a juno devstack setup for a cinder volume driver.
I changed cinder.conf file and tempest.conf file for single backend and
restarted cinder services.
Now I ran the tempest test as below:
/opt/stack/tempest/run_tempest.sh tempest.api.volume.test_volumes_snapshots
I am getting below outpu
Nice work.
We discussed similar work weeks ago.
The idea is to generate the dot file from a heat template, and then
draw figures from the dot file.
Even in the reverse direction, we can generate a heat template from a dot
based file.
It seems the community is eager to see some heat template v
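The forward direction mentioned here (heat template to dot file) can be sketched quite compactly. This is only an illustration, not the tooling under discussion; it assumes the template has already been parsed into a dict with HOT-style `resources` and `depends_on` entries:

```python
# Sketch: emit a Graphviz DOT digraph from the depends_on entries
# of an already-parsed heat template dict. The template shape here
# is an assumption for illustration.
def template_to_dot(template):
    lines = ["digraph heat {"]
    for name, res in template.get("resources", {}).items():
        lines.append('  "%s";' % name)
        deps = res.get("depends_on", [])
        if isinstance(deps, str):
            deps = [deps]  # HOT allows a single string or a list
        for dep in deps:
            lines.append('  "%s" -> "%s";' % (name, dep))
    lines.append("}")
    return "\n".join(lines)

tmpl = {"resources": {
    "server": {"type": "OS::Nova::Server", "depends_on": "network"},
    "network": {"type": "OS::Neutron::Net"},
}}
print(template_to_dot(tmpl))
```

The output can be fed straight to `dot -Tpng` to draw the figure; the reverse direction would amount to parsing the DOT edges back into `depends_on` lists.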
Hi Fawad,
Yes, you're right.
I mentioned that not to answer the exact question, but to add some
lines around it.
I do hope we can provide the capability in the API layer, and make the
security group more intuitive for users.
On Tue, Sep 16, 2014 at 10:45 PM, Fawad Khaliq wrote:
> Hi Boa
Hi crew, as promised I’ve continued to work through the performance test
plan. I’ve started a wiki page for the next batch of tests and results:
https://wiki.openstack.org/wiki/Zaqar/Performance/PubSub/Redis
I am currently running the same tests again with 2x web heads, and will
update the wiki p
On Mon, Sep 15, 2014 at 7:04 PM, Rochelle.RochelleGrober
wrote:
> +1000
> This is *great*. Not only for newbies, but refreshers, learning different
> approaches, putting faces to the signatures, etc. And Mock best practices is
> a brilliant starting place for developers.
Yes!
> I'd like to v
> -Original Message-
> From: Sean Dague [mailto:[email protected]]
> Sent: 16 September 2014 15:56
> To: [email protected]
> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
> level
> guidelines
>
> On 09/16/2014 10:16 AM, Kuvaja, Erno wrote:
> >> ---
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Neutron ARP cache poisoning vulnerability
- ---
### Summary ###
The Neutron firewall driver 'iptables_firewall' does not prevent ARP
cache poisoning, as this driver is currently only capable of MAC address
and IP address based anti-spoofing rules. How
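The gap described above is that iptables matches at the IP layer, while ARP spoofing has to be blocked at L2, e.g. with ebtables' ARP matching. The sketch below only builds the rule string for illustration; the tap device name and fixed IP are made up, and this is not presented as Neutron's actual fix:

```python
# Sketch: build an ebtables rule that drops ARP packets whose
# claimed source IP differs from the port's fixed IP. The device
# name and IP here are illustrative placeholders.
def arp_protect_rule(tap_dev, fixed_ip):
    return ("ebtables -A FORWARD -i %s -p ARP "
            "--arp-ip-src ! %s -j DROP" % (tap_dev, fixed_ip))

print(arp_protect_rule("tapXXXXXXXX", "10.0.0.5"))
```

A per-port rule of this shape rejects gratuitous ARP replies that try to claim another instance's address.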
Hi,
There is current work in review to use conntrack to terminate these
connections [1][2] much like you suggested. I hope to get this in to
RC1 but it needs another iteration.
For Kilo, I'd like to explore stateless forwarding for floating ips.
Since conntrack is the root of the security issue
On 09/16/2014 12:07 PM, Kuvaja, Erno wrote:
>> -Original Message-
>> From: Sean Dague [mailto:[email protected]]
>> Sent: 16 September 2014 15:56
>> To: [email protected]
>> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
>> level
>> guidelines
>>
>>
> -Original Message-
> From: Sean Dague [mailto:[email protected]]
> Sent: 16 September 2014 17:31
> To: [email protected]
> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
> level
> guidelines
>
> On 09/16/2014 12:07 PM, Kuvaja, Erno wrote:
> >> ---
On 09/15/2014 08:28 PM, Nathan Kinder wrote:
On 09/12/2014 12:46 AM, Angus Lees wrote:
On Thu, 11 Sep 2014 03:21:52 PM Steven Hardy wrote:
On Wed, Sep 10, 2014 at 08:46:45PM -0400, Jamie Lennox wrote:
For service to service communication there are two types.
1) using the user's token like nov
On 09/16/2014 11:10 AM, Nikesh Kumar Mahalka wrote:
Hi,
I deployed a juno devstack setup for a cinder volume driver.
I changed cinder.conf file and tempest.conf file for single backend and
restarted cinder services.
Now I ran the tempest test as below:
/opt/stack/tempest/run_tempest.sh tempest.api.v
On 09/16/2014 11:23 AM, Kurt Griffiths wrote:
Hi crew, as promised I’ve continued to work through the performance test
plan. I’ve started a wiki page for the next batch of tests and results:
https://wiki.openstack.org/wiki/Zaqar/Performance/PubSub/Redis
I am currently running the same tests aga
On 09/16/2014 10:16 AM, Kuvaja, Erno wrote:
> In my point of view it makes life
much easier if we have information where the request failed
The request did not fail. The HTTP request succeeded and Glance returned
a 404 Not Found. If the caller was expecting an image to be there, but
it wasn't
Hi neutrons,
We've been discussing various ways of doing cloud upgrades.
One of the safe and viable solutions seems to be moving existing resources
to a new cloud deployed with a new version of OpenStack.
By saying 'moving' I mean replication of all resources and wiring
everything together in a new
Hi Stackers!
I'm working on moving Brick out of Cinder for K release.
There're a lot of open questions for now:
- Should we move it to oslo or somewhere on stackforge?
- Better architecture of it to fit all Cinder and Nova requirements
- etc.
Before starting discussion, I've created so
On Sep 15, 2014 8:20 AM, "James Slagle" wrote:
>
> On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy wrote:
> > All,
> >
> > Starting this thread as a follow-up to a strongly negative reaction by the
> > Ironic PTL to my patches[1] adding initial Heat->Ironic integration, and
> > subsequent very deta
On Mon, Sep 15, 2014 at 9:50 AM, Clint Byrum wrote:
> Excerpts from Steven Hardy's message of 2014-09-15 04:44:24 -0700:
>>
>>
> First, Ironic is hidden under Nova as far as TripleO is concerned. So
> mucking with the servers underneath Nova during deployment is a difficult
> proposition. Would I
On Mon, Sep 15, 2014 at 10:51 AM, Jay Faulkner wrote:
> Steven,
>
> It's important to note that two of the blueprints you reference:
>
> https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
> https://blueprints.launchpad.net/ironic/+spec/drac-hw-discovery
>
> are both very unlikely to land
On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy wrote:
> For example, today, I've been looking at the steps required for driving
> autodiscovery:
>
> https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
>
> Driving this process looks a lot like application orchestration:
>
> 1. Take some input
On Sep 15, 2014 8:31 PM, "Jay Pipes" wrote:
>
> On 09/15/2014 08:07 PM, Jeremy Stanley wrote:
>>
>> On 2014-09-15 17:59:10 -0400 (-0400), Jay Pipes wrote:
>> [...]
>>>
>>> Sometimes it's pretty hard to determine whether something in the
>>> E-R check page is due to something in the infra scripts,
I'm pleased to announce the latest release of python-neutronclient,
version 2.3.8. This will be the last release before the Juno release
of OpenStack. The main change we were waiting for was the CLI changes
for L3 HA. In addition, the following changes are a part of this
release:
19527c4 Narrow dow
> -Original Message-
> From: Jay Pipes [mailto:[email protected]]
> Sent: 16 September 2014 18:10
> To: [email protected]
> Subject: Re: [openstack-dev] [glance][all] Help with interpreting the log
> level
> guidelines
>
> On 09/16/2014 10:16 AM, Kuvaja, Erno wrote:
>
On Mon, Sep 15, 2014 at 1:08 PM, Steven Hardy wrote:
> On Mon, Sep 15, 2014 at 05:51:43PM +, Jay Faulkner wrote:
>> Steven,
>>
>> It's important to note that two of the blueprints you reference:
>>
>> https://blueprints.launchpad.net/ironic/+spec/drac-raid-mgmt
>> https://blueprints.launchpad.
Hi,
Neutron would like to move the distributed virtual router (DVR)
tempest job, currently in the experimental queue, to the check queue
[1]. It will still be non-voting for the time being. Could infra
have a look? We feel that running this on all Neutron patches is
important to maintain the st
On 16/09/14 02:49, Qiming Teng wrote:
Nice. What would be even nicer is a change to python-heatclient so that
heat resource-list has an option to output in dotfile format.
+1.
It would also be interesting to check if the dependency analysis is
capable of exploding a resource-group. Say I have
On 09/16/2014 02:11 PM, Carl Baldwin wrote:
> Hi,
>
> Neutron would like to move the distributed virtual router (DVR)
> tempest job, currently in the experimental queue, to the check queue
> [1]. It will still be non-voting for the time being. Could infra
> have a look? We feel that running thi
On 16/09/14 13:56, Devananda van der Veen wrote:
On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy wrote:
For example, today, I've been looking at the steps required for driving
autodiscovery:
https://etherpad.openstack.org/p/Ironic-PoCDiscovery-Juno
Driving this process looks a lot like applicat
On 16/09/14 13:54, Devananda van der Veen wrote:
On Sep 15, 2014 8:20 AM, "James Slagle" wrote:
>
>On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy wrote:
> >
> >The initial assumption is that there is some discovery step (either
> >automatic or static generation of a manifest of nodes), that ca
Thanks for the reminder! I’ll make note of that. In these tests the
clients are hitting Nginx (which is acting as a load balancer) so I could
try disabling keep-alive there and seeing what happens. So far I just used
the default that was written into the conf when the package was installed
("keepal
On Tue, Sep 16, 2014 at 11:57 AM, Zane Bitter wrote:
> On 16/09/14 13:54, Devananda van der Veen wrote:
>>
>> On Sep 15, 2014 8:20 AM, "James Slagle" wrote:
>>>
>>> >
>>> >On Mon, Sep 15, 2014 at 7:44 AM, Steven Hardy wrote:
> >
> >The initial assumption is that there is some disc
On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter wrote:
> On 16/09/14 13:56, Devananda van der Veen wrote:
>>
>> On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy wrote:
>>>
>>> For example, today, I've been looking at the steps required for driving
>>> autodiscovery:
>>>
>>> https://etherpad.openstack.
Results are now posted for all workloads for 2x web heads and 1x Redis
proc (Configuration 2). Stats are also available for the write-heavy
workload with 2x webheads and 2x redis procs (Configuration 3). The latter
results look promising, and I suspect the setup could easily handle a
significantly
On 16/09/14 15:24, Devananda van der Veen wrote:
On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter wrote:
On 16/09/14 13:56, Devananda van der Veen wrote:
On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy wrote:
For example, today, I've been looking at the steps required for driving
autodiscovery:
On Tue, Sep 16, 2014 at 7:24 AM, Sean Dague wrote:
> On 09/16/2014 03:57 AM, Thierry Carrez wrote:
>> Miguel Angel Ajo Pelayo wrote:
>>> During the ipset implementation, we designed a refactor [1] to clean up
>>> the firewall driver a bit, and move all the ipset low-level knowledge
>>> down into the
Now that I've replied to individual emails, let me try to summarize my
thoughts on why Heat feels like the wrong tool for the task that I
think you're trying to accomplish. This discussion has been really
helpful for me in understanding why that is, and I think, at a really
high level, it is becaus
On Tue, Sep 16, 2014 at 12:42 PM, Zane Bitter wrote:
> On 16/09/14 15:24, Devananda van der Veen wrote:
>>
>> On Tue, Sep 16, 2014 at 11:44 AM, Zane Bitter wrote:
>>>
>>> On 16/09/14 13:56, Devananda van der Veen wrote:
On Mon, Sep 15, 2014 at 9:00 AM, Steven Hardy wrote:
>
>>
Hi,
I'm running Havana, and I just tried a testcase involving doing six
simultaneous live-migrations.
It appears that the migrations succeeded, but two of the instances got stuck
with a status of "MIGRATING" because of RPC timeouts:
2014-09-16 20:35:07.376 12493 INFO nova.notifier [-] processi
The project infrastructure team will be taking the Gerrit service on
review.openstack.org offline briefly from 20:30 to 21:00 UTC this
Friday, September 19 in an effort to move the newly-approved Shared
File Systems program repositories from stackforge into the openstack
namespace. The specific lis
Based on my reading of the wiki page about this it sounds like it should
be a sub-project of the Storage program. While it is targeted for use
by multiple projects, it's pretty specific to interacting with Cinder,
right? If so, it seems like Oslo wouldn't be a good fit. We'd just end
up adding a
On 09/16/2014 11:55 PM, Ben Nemec wrote:
> Based on my reading of the wiki page about this it sounds like it should
> be a sub-project of the Storage program. While it is targeted for use
> by multiple projects, it's pretty specific to interacting with Cinder,
> right? If so, it seems like Oslo w
Phase one for dealing with Federation can be done with CORS support
solely for Keystone/Horizon integration:
1. Horizon Login page creates Javascript to do AJAX call to Keystone
2. Keystone generates a token
3. Javascript reads token out of response and sends it to Horizon.
This should supp
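Steps 1-3 above map onto the Identity v3 API roughly as follows. This is a sketch of the request the Javascript would make, with placeholder credentials and endpoint; on success the token is returned in the X-Subject-Token response header:

```python
# Sketch of the browser-side flow in Python: build a Keystone v3
# password-auth request body. Credentials and endpoint here are
# placeholders for illustration.
import json

def build_auth_request(user, password, domain="default"):
    return {"auth": {"identity": {
        "methods": ["password"],
        "password": {"user": {
            "name": user,
            "domain": {"id": domain},
            "password": password,
        }},
    }}}

body = json.dumps(build_auth_request("demo", "secret"))
# POST body to <keystone>/v3/auth/tokens; read the token from the
# X-Subject-Token header of the response and hand it to Horizon.
```

This is exactly the call that needs the CORS headers: without them the browser will refuse to let the login page's Javascript read the cross-origin response.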
On Tue, Sep 16, 2014 at 07:09:36AM +, Carlino, Chuck wrote:
> Hi,
>
> Below is the beginning of a spec I'd like to get into Kilo. Before
> going into detail, it occurred to me that a basic decision needs to be
> made, so I'd like to get thoughts on the api Alternatives mentioned below.
>
> T
On 09/16/2014 02:04 PM, Kuvaja, Erno wrote:
-Original Message- From: Jay Pipes
[mailto:[email protected]] Sent: 16 September 2014 18:10 To:
[email protected] Subject: Re: [openstack-dev]
[glance][all] Help with interpreting the log level guidelines
On 09/16/2014 10:16 AM
On Mon, Sep 15, 2014 at 6:38 AM, Elizabeth K. Joseph
wrote:
> The OpenStack Infrastructure (Infra) team is hosting our weekly
> meeting on Tuesday September 16th, at 19:00 UTC in #openstack-meeting
Minutes and log from the meeting today available here:
Minutes:
http://eavesdrop.openstack.org/me
This is generally the right plan. The hard parts are in getting people to
deploy it correctly and securely, and handling fallback cases for lack of
browser support, etc.
What we really don't want to do is to encourage people to set
"Access-Control-Allow-Origin: *" type headers or other such non
Originally I wrote the connector side of brick to be the LUN discovery
shared code between Cinder and Nova.
I tried to make a patch in Havana that would do this, but it
didn't make it in.
The upside to brick not making it in Nova is that it has given us some
time to rethink things a bit.
All results have been posted for the 2x web head + 2x redis tests (C3). I
also rearranged the wiki page to make it easier to compare the impact of
adding more capacity on the backend.
In C3, The load generator’s CPUs were fully saturated, while there was
still plenty of headroom on the web heads (
On 2014-09-16 7:03 PM, Walter A. Boring IV wrote:
The upside to brick not making it in Nova is that it has given us some
time to rethink things a bit. What I would actually
like to see happen now is to create a new cinder/storage agent instead
of just a brick library. The agent would run on e
Hi,
Inline:
On Tue, Sep 16, 2014 at 1:00 AM, Fawad Khaliq wrote:
> Folks,
>
> I have had discussions with some folks individually about this, but I would
> like to bring this to a broader audience.
>
> I have been playing with security groups, and the notion of a 'default'
> security group seems
CORS for all of OpenStack is possible once the oslo middleware lands*, but
as you note it's only one of many elements to be considered when exposing
the APIs to browsers. There is no current support for CSRF protection in
the OpenStack APIs, for example. I believe that sort of functionality
belongs
As far as I know there is no way to disable default security groups, but I
think this BP can solve the problem:
https://blueprints.launchpad.net/neutron/+spec/default-rules-for-default-security-group
On 2014-09-17 07:44:42, "Aaron Rosen" wrote:
Hi,
Inline:
On Tue, Sep 16, 2014 at 1:00 AM, Fawad
There is already a bug: https://bugs.launchpad.net/neutron/+bug/1334926 for
this problem; meanwhile the security group has the same problem, for which I
have reported a bug:
https://bugs.launchpad.net/neutron/+bug/1335375
On 2014-09-16 01:46:11, "Martinx - ジェームズ" wrote:
Hey stackers,
Let me ask so
In our environment using VXLAN/GRE would make it difficult to keep some of
the features we currently offer our customers. So for a while now I've been
looking at the DVR code, blueprints and Google drive docs and other than it
being the way the code was written I can't find anything indicating why
I think VLAN should also be supported later. Tunnels should not be a
prerequisite for the DVR feature.
-- Original --
From: "Steve Wormley";
Date: Wed, Sep 17, 2014 10:29 AM
To: "openstack-dev";
Subject: [openstack-dev] [neutron] DVR Tunnel Desig
On 09/16/2014 06:59 PM, Gabriel Hurley wrote:
This is generally the right plan. The hard parts are in getting people to
deploy it correctly and securely, and handling fallback cases for lack of
browser support, etc.
Do we really care about Browser support? I mean, are we really going to
have
On 09/16/2014 08:56 PM, Richard Jones wrote:
CORS for all of OpenStack is possible once the oslo middleware lands*,
but as you note it's only one of many elements to be considered when
exposing the APIs to browsers. There is no current support for CSRF
protection in the OpenStack APIs, for exam
> Dina has been doing a great job and has been very helpful during the
> Juno cycle, and her help is very valuable. She's been doing a lot of
> reviews and has been very active in our community.
+1
cheers,
gord
__
Hi Folks,
Neutron DVR meeting will be cancelled on Sept 17th 2014.
We will resume next week.
Swaminathan Vasudevan
Systems Software Engineer (TC)
HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.