Hi all,
Can keystone-to-keystone federation be deployed on CentOS? I have noticed that
all the documentation describes deployment on Ubuntu. If it can, are there any
documents about deploying k2k on CentOS?
Hi Xinni,
There is no need to push a tag manually for official deliverables.
You can propose a patch to the openstack/releases repository.
The Horizon PTL or release liaison (currently Ivan holds both roles) can
confirm it and the release team will approve it.
Once it is approved, a release tag will be added and a
Hi Xinni,
Please send me a list of the packages which should be released.
In general, the release-* groups are different from the core-* groups. We
should discuss how to move forward with this.
Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
On Wed, Apr 4, 2018 at 8:34 AM, Xinni Ge wrote:
> Hi Ivan and other Hori
Hi Ivan and other Horizon team members,
Thanks for adding us to the xstatic-core group.
But I still need your opinion and help to release the newly-added xstatic
packages to the PyPI index.
The current `xstatic-core` group doesn't have the permission to PUSH SIGNED
TAG, and I cannot release the first non-tr
Hi team,
In our last weekly meeting, the High Precision Time Synchronization Card use
case was introduced for the first time. The following link contains a
summary/description of this use case. Please take a look and don't hesitate to
ask any questions. :)
https://etherpad.openstack.org/p/clock-driver
Regards,
On Tue, 03 Apr 2018 18:53:33 -0400, Doug Hellmann wrote:
Excerpts from melanie witt's message of 2018-04-03 15:30:07 -0700:
On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote:
On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:
Thanks to jichenjc for fixing the pep8 failures I was seein
Note: this is the fifteenth edition of a weekly update of what happens in
TripleO.
The goal is to provide a short reading (less than 5 minutes) to learn where
we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/
Excerpts from melanie witt's message of 2018-04-03 15:30:07 -0700:
> On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote:
> > On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:
> >> Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
> >> I'd decided they were specific to
Excerpts from Michael Still's message of 2018-04-03 22:23:10 +0000:
> I think the bit I am lost on is the concept of running pep8 "under" a
> version of python. Is this an artifact of what version of pep8 I have
> installed somehow?
>
> If the py3 pep8 is stricter, couldn't we just move to only th
On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote:
On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:
Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
I'd decided they were specific to my local dev environment given no one
else was seeing them.
As I said in the p
On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote:
Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
I'd decided they were specific to my local dev environment given no one
else was seeing them.
As I said in the patch that fixed the issue [1], I think it's worth
expl
I think the bit I am lost on is the concept of running pep8 "under" a
version of python. Is this an artifact of what version of pep8 I have
installed somehow?
If the py3 pep8 is stricter, couldn't we just move to only that one?
Michael
On Wed., 4 Apr. 2018, 8:19 am Kevin L. Mitchell wrote:
>
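For context on "under" here: tox installs flake8/pep8 into a virtualenv built
with a specific interpreter, and the checker parses source files with that
interpreter's grammar, so a py2 run and a py3 run can report different
failures. A minimal sketch of the difference, assuming flake8 is installed for
both interpreters (the interpreter names and target directory are
illustrative):

    import subprocess

    # Run the same style check under two interpreters. flake8 parses
    # source with the AST of the interpreter it runs on, so the two
    # runs can flag different things (e.g. py3-only grammar rules).
    for interpreter in ("python2", "python3"):
        result = subprocess.run(
            [interpreter, "-m", "flake8", "nova/"],
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,
        )
        print(interpreter, "->",
              "clean" if result.returncode == 0 else "failures")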
On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote:
> Thanks to jichenjc for fixing the pep8 failures I was seeing on
> master. I'd decided they were specific to my local dev environment
> given no one else was seeing them.
>
> As I said in the patch that fixed the issue [1], I think it's worth
On Mon, Apr 2, 2018 at 6:28 PM, Brian Rosmaita
wrote:
> These need to be reviewed in master:
> - https://review.openstack.org/#/c/50/
> - https://review.openstack.org/#/c/556292/
Thanks for the reviews. The requested changes have been made and Zuul
has given a +1, so ready for reviews again!
Excerpts from Michael Still's message of 2018-04-04 07:54:59 +1000:
> Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
> decided they were specific to my local dev environment given no one else
> was seeing them.
>
> As I said in the patch that fixed the issue [1], I thi
Hello!
Another meeting late tonight/tomorrow depending on where in the world you
live :) 0800 UTC Wednesday.
Here is the agenda if you have anything to add [1]. Or if you want to add
your name to the ping list it is there as well!
See you all soon!
-Kendall (diablo_rojo)
[1] https://wiki.opens
Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
decided they were specific to my local dev environment given no one else
was seeing them.
As I said in the patch that fixed the issue [1], I think it's worth
exploring how these got through the gate in the first place. Ther
Hello everyone,
During the recent holiday weekend some of our channels experienced IRC
trolling/vandalism. In particular, the meetbot was used to start meetings titled
'maintenance', which updated the channel topic to 'maintenance'. The individual
or bot doing this then used this as the pret
Thanks to everybody who has commented on the Cyborg/Nova scheduling spec
(https://review.openstack.org/#/c/554717/).
As you may have noted, some issues were raised (*1), discussed (*2) and
a potential solution was offered (*3). I have tried to synthesize the
new solution from the Nova team here:
Hello from Infra.
It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider.
On Tue, 3 Apr 2018 at 13:53 Dan Prince wrote:
> On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena wrote:
> >
> >> Greetings folks,
> >>
> >> During the last PTG we spent time discussing some ideas around an All-In-One
> >> installer, using 100% of the TripleO bits to deploy a single node OpenStack
Thank you Melanie for the complimentary nomination, to the cores for
welcoming me into the fold, and especially to all (cores and non, Nova
and otherwise) who have mentored me along the way thus far. I hope to
live up to your example and continue to pay it forward.
-efried
On 04/03/2018 02:20 PM
On Mon, 26 Mar 2018 19:00:06 -0700, Melanie Witt wrote:
Howdy everyone,
I'd like to propose that we add Eric Fried to the nova-core team.
Eric has been instrumental to the placement effort with his work on
nested resource providers and has been actively contributing to many
other areas of opens
On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena wrote:
>
>> Greetings folks,
>>
>> During the last PTG we spent time discussing some ideas around an All-In-One
>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>> very similar to what we have today with the containerized
Excerpts from Eric Fried's message of 2018-03-31 16:12:22 -0500:
> Hi Doug, I made this [2] for you. I tested it locally with oslo.config
> master, and whereas I started off with a slightly different set of
> errors than you show at [1], they were in the same suites. Since I
> didn't want to tox
The new pbr version is now in upper-constraints, so it should be getting
exercised in CI going forward. Please report any issues to #openstack-oslo.
On 03/26/2018 11:56 AM, Ben Nemec wrote:
Hi,
Since this will potentially affect the majority of OpenStack projects, I
wanted to give everyone s
On Tue, Apr 3, 2018 at 9:23 AM, James Slagle wrote:
> On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince wrote:
>> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi wrote:
>>> Greetings folks,
>>>
>>> During the last PTG we spent time discussing some ideas around an All-In-One
>>> installer, using 100% of
On Thu, Mar 29, 2018 at 9:05 PM, Jeffrey Zhang wrote:
> cool. kolla will try to implement it.
Cool!
For reference, openstack-ansible already retooled their log collection
to copy the database instead of generating the report [1].
[1]: https://review.openstack.org/#/c/557921/
David Moreau Simar
html: https://anticdent.org/tc-report-18-14.html
If the [logs of
#openstack-tc](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/index.html)
are any indicator of reality (they are not), then the only things
that happened in the past week are that the next OpenStack release
got a name, and
On Tue, 2018-04-03 at 12:04 -0400, Zane Bitter wrote:
> On 03/04/18 06:28, Stephen Finucane wrote:
> > On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:
> > > On 28/03/18 10:31, Stephen Finucane wrote:
> > > > As noted last week [1], we're trying to move away from pbr's autodoc
> > > > feature
On 04/03/2018 11:51 AM, Michael Bayer wrote:
On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes wrote:
On 04/03/2018 11:07 AM, Michael Bayer wrote:
Yes.
b. oslo.db script to run generically, yes or no?
No. Just have TripleO install galera_innoptimizer and run it in a cron job.
OK, here are
On 03/04/18 06:28, Stephen Finucane wrote:
On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:
On 28/03/18 10:31, Stephen Finucane wrote:
As noted last week [1], we're trying to move away from pbr's autodoc
feature as part of the new docs PTI. To that end, I've created
sphinxcontrib-apidoc, w
On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes wrote:
> On 04/03/2018 11:07 AM, Michael Bayer wrote:
>>
>
> Yes.
>
>> b. oslo.db script to run generically, yes or no?
>
>
> No. Just have TripleO install galera_innoptimizer and run it in a cron job.
OK, here are the issues I have with galera_innoptim
On 04/03/2018 04:25 AM, Xiong, Huan wrote:
Hi,
I'm using a cloud benchmarking tool [1], which creates a *single* nova
client object in the main thread and invokes methods on that object from
different worker threads. I found that it generates malformed requests at
random (my system has python-novaclient 10.1
On 04/03/2018 11:07 AM, Michael Bayer wrote:
The MySQL / MariaDB variants we use nowadays default to
innodb_file_per_table=ON and we also set this flag to ON in installer
tools like TripleO. The reason we like file per table is so that
we don't grow an enormous ibdata file that can't be shrun
----- Original Message -----
> On Tue, 3 Apr 2018 at 10:00 Javier Pena <jp...@redhat.com> wrote:
> > > Greetings folks,
> > >
> > > During the last PTG we spent time discussing some ideas around an All-In-One
> > > installer, using 100% of the TripleO bits to deploy a single node
Stackers,
Today, a few of us had a chat to discuss changes to the Placement REST
API [1] that will allow multiple clients to safely update a single
consumer's set of resource allocations. This email is to summarize the
decisions coming out of that chat.
Note that Ed is currently updating the
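For a rough sense of the direction, the sketch below shows a generation-aware
allocation update against the Placement API. It is a hypothetical illustration
only: the consumer_generation field and microversion number reflect the idea
under discussion rather than a finalized API, and the endpoint, UUIDs, and
project/user IDs are placeholders.

    import requests

    PLACEMENT = "http://placement.example.com"  # placeholder endpoint
    CONSUMER = "11111111-2222-3333-4444-555555555555"  # consumer (instance) uuid

    payload = {
        "allocations": {
            # resource provider uuid -> resources claimed from it
            "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee": {
                "resources": {"VCPU": 2, "MEMORY_MB": 2048},
            },
        },
        "project_id": "example-project",
        "user_id": "example-user",
        # The generation this client saw when it read the allocations;
        # lets the server detect a concurrent writer.
        "consumer_generation": 1,
    }

    resp = requests.put(
        "{}/allocations/{}".format(PLACEMENT, CONSUMER),
        json=payload,
        headers={"OpenStack-API-Version": "placement 1.28"},  # illustrative
    )
    if resp.status_code == 409:
        # Another client updated this consumer first: re-read the
        # allocations (and the new generation) and retry.
        pass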
The MySQL / MariaDB variants we use nowadays default to
innodb_file_per_table=ON and we also set this flag to ON in installer
tools like TripleO. The reason we like file per table is so that
we don't grow an enormous ibdata file that can't be shrunk without
rebuilding the database. Instead, we
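To make the file-per-table point concrete: when innodb_file_per_table=ON,
rebuilding a single table compacts its .ibd file and returns the space to the
OS, which is what a periodic optimizer job automates. A hedged sketch using
SQLAlchemy (the connection URL and table name are placeholders):

    from sqlalchemy import create_engine, text

    # Placeholder DSN; any MySQL/MariaDB driver behaves the same here.
    engine = create_engine("mysql+pymysql://user:secret@localhost/nova")

    with engine.connect() as conn:
        # Confirm the server stores each table in its own .ibd file.
        name, value = conn.execute(
            text("SHOW VARIABLES LIKE 'innodb_file_per_table'")).fetchone()
        print(name, "=", value)

        # On InnoDB, OPTIMIZE TABLE maps to a table rebuild, which
        # shrinks the per-table file on disk.
        conn.execute(text("OPTIMIZE TABLE instances"))  # example table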
On Tue, 3 Apr 2018 at 10:00 Javier Pena wrote:
>
> > Greetings folks,
> >
> > During the last PTG we spent time discussing some ideas around an All-In-One
> > installer, using 100% of the TripleO bits to deploy a single node OpenStack
> > very similar to what we have today with the container
I'd really love to see this going forward; I fit perfectly into the category of
people who usually don't test stuff on TripleO because it can get too complex
and will take a lot of time to deploy, so this seems like a perfect solution
for that.
Thanks for putting this forward.
On Tue, Apr 3, 2018 at 11:00
> Greetings folks,
>
> During the last PTG we spent time discussing some ideas around an All-In-One
> installer, using 100% of the TripleO bits to deploy a single node OpenStack
> very similar to what we have today with the containerized undercloud and
> what we also have with other tools like Pa
On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince wrote:
> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi wrote:
>> Greetings folks,
>>
>> During the last PTG we spent time discussing some ideas around an All-In-One
>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>> very s
Hi,
These patches probably solve the issue; it would be great if someone could review them:
https://review.openstack.org/#/c/557005/
and
https://review.openstack.org/#/c/557006/
Thanks,
Előd
On 2018-04-01 05:55, Tony Breeds wrote:
On Sat, Mar 31, 2018 at 06:17:41AM +0000, A mailing list for the OpenStack
Hi Minwook,
Thanks for the explanation, I understand the reasons for not running these
checks on a regular basis in Zabbix or other monitoring tools. It makes sense.
However, I don’t want to re-invent the wheel and add functionality to Vitrage
that already exists in other projects.
How about u
On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:
> On 28/03/18 10:31, Stephen Finucane wrote:
> > As noted last week [1], we're trying to move away from pbr's autodoc
> > feature as part of the new docs PTI. To that end, I've created
> > sphinxcontrib-apidoc, which should do what pbr was previ
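For projects making the switch, the wiring in a docs conf.py looks roughly like
the sketch below; the module and output paths are placeholders for a real tree
layout:

    # docs/source/conf.py -- sketch of driving sphinxcontrib-apidoc,
    # which generates the .rst stubs pbr's autodoc used to produce.
    extensions = [
        'sphinx.ext.autodoc',
        'sphinxcontrib.apidoc',
    ]

    # Package sources, relative to this conf.py (placeholder path).
    apidoc_module_dir = '../../mypackage'
    # Where generated stub pages are written inside the doc tree.
    apidoc_output_dir = 'reference/api'
    # Subpaths of the package to skip.
    apidoc_excluded_paths = ['tests']
    # Emit one page per module rather than one per package.
    apidoc_separate_modules = True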
Hi,
I'm using a cloud benchmarking tool [1], which creates a *single* nova
client object in the main thread and invokes methods on that object from
different worker threads. I found that it generates malformed requests at
random (my system has python-novaclient 10.1.0 installed). The root cause was
that
som
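A common workaround for this class of problem, sketched below, is to stop
sharing the client object: keep one keystoneauth session (designed for
concurrent use) and build a novaclient instance per worker thread. The
credentials and endpoint are placeholders:

    import threading

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials/endpoint.
    auth = v3.Password(
        auth_url="http://keystone.example.com/v3",
        username="demo", password="secret", project_name="demo",
        user_domain_id="default", project_domain_id="default",
    )
    sess = session.Session(auth=auth)

    _local = threading.local()

    def get_nova():
        # One client per thread; a shared novaclient instance holds
        # mutable request state that is unsafe across threads.
        if not hasattr(_local, "nova"):
            _local.nova = nova_client.Client("2.1", session=sess)
        return _local.nova

    def worker():
        print(get_nova().servers.list())

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()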
Hey,
On 30.03.2018 16:26, Kashyap Chamarthy wrote:
[...]
Taking the DistroSupportMatrix into the picture, for the sake of discussion,
how about the following NEXT_MIN versions for the "Solar" release:
(a) libvirt: 3.2.0 (released on 23-Feb-2017)
[...]
(b) QEMU: 2.9.0 (released on 20-Apr-2017)
[..
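For reference, nova expresses these floors as version tuples (the
MIN_*_VERSION constants in the libvirt driver); below is a simplified
stand-in for the comparison, using the values proposed above:

    # Proposed floors from the thread, as version tuples in the style
    # of nova's libvirt driver constants; the check itself is a
    # simplified stand-in, not nova's actual startup code.
    NEXT_MIN_LIBVIRT_VERSION = (3, 2, 0)  # released 23-Feb-2017
    NEXT_MIN_QEMU_VERSION = (2, 9, 0)     # released 20-Apr-2017

    def meets_minimum(current, minimum):
        # Tuples compare element-wise: (3, 2, 0) >= (3, 2, 0) is True,
        # (2, 5, 0) >= (3, 2, 0) is False.
        return tuple(current) >= tuple(minimum)

    assert meets_minimum((4, 0, 0), NEXT_MIN_LIBVIRT_VERSION)
    assert not meets_minimum((2, 5, 0), NEXT_MIN_QEMU_VERSION)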
Hi Training Team,
Our next training in Vancouver[1] is quickly approaching and we still have a
lot of work to do.
In order to sync up, I created a Doodle poll[2] with hours that are somewhat
inconvenient, but can work around the globe. Please respond to the poll so we
can set up a call to check