Erik McCormick wrote:
I've been running Ceph-backed Cinder since, I think, Icehouse. It's
really more of a function of your backend or the hypervisor than Cinder
itself. That being said, it's been probably my smallest OpenStack pain
point over the years.
I can't imagine what sort of concurrency
I recall that the UC is inviting WG chairs to join the UC IRC meeting to
share WG updates on high-level status/activities. Is this something
similar?
Should the WG chair attend the UC meeting instead of setting up another
separate meeting?
On Wednesday, May 31, 2017, MCCABE, JAMEY A wrote:
I've been running Ceph-backed Cinder since, I think, Icehouse. It's really
more of a function of your backend or the hypervisor than Cinder itself.
That being said, it's been probably my smallest OpenStack pain point over
the years.
I can't imagine what sort of concurrency issues you'd run into sh
We have run Ceph-backed Cinder from Liberty through Newton. With the exception
of a libvirt 2.x bug that should now be fixed, Cinder really hasn't caused us
any problems.
Sent from my iPad
> On May 31, 2017, at 6:12 PM, Joshua Harlow wrote:
>
> Hi folks,
>
> So I was having some back and for
This is a request for any operators out there who configure nova to set:
[cinder]
cross_az_attach=False
To check out these two bug fixes:
1. https://review.openstack.org/#/c/366724/
This is a case where nova is creating the volume during boot from volume
and providing an AZ to cinder during
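For anyone following along, a hedged sketch of the kind of request that
exercises this code path (the flavor, AZ, and image names below are made up):

    # Boot-from-volume where nova itself creates the volume; with
    # cross_az_attach=False, nova passes the instance's AZ to cinder.
    nova boot bfv-test \
      --flavor m1.small \
      --availability-zone az1 \
      --block-device source=image,id=<image-uuid>,dest=volume,size=10,bootindex=0

Whether that volume create succeeds then depends on cinder knowing about a
matching AZ, which is roughly what the fixes above are dealing with.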
Hi folks,
So I was having some back and forth internally about whether cinder is ready
for usage, and wanted to get other operators' thoughts on how their cinder
experiences have been going, any trials and tribulations.
For context, we are running on liberty (yes I know, working on getting
that to new
Hi Lauren -
We had a bit of discussion and the following top picks were recommended:
* Manila on CephFS at CERN:
https://www.openstack.org/videos/boston-2017/manila-on-cephfs-at-cern-the-short-way-to-production
* Containers as a Service on GPU Cloud:
https://www.openstack.org/videos/bost
On 05/31/2017 05:52 AM, federica fanzago wrote:
Hello operators,
we have a problem with the placement service after updating our cloud from
Mitaka to the Ocata release.
We started from a Mitaka cloud and followed these steps: updated
the cloud controller from Mitaka to Newton, ran the dbsync
No prob. Thanks for replying.
On May 31, 2017 10:11 AM, "Gustavo Randich" wrote:
> Hi Kevin, I confirm that applying the patch the problem is fixed.
>
> Sorry for the inconvenience.
>
>
> On Tue, May 30, 2017 at 9:36 PM, Kevin Benton wrote:
>
>> Do you have that patch already in your environmen
Working group (WG) chairs or delegates, please enter your name (and WG name)
and what times you could meet at this poll:
https://beta.doodle.com/poll/6k36zgre9ttciwqz#table
As background and to share progress:
* We started and generally confirmed the desire to have a regular cross WG
stat
Hi Kevin, I confirm that applying the patch the problem is fixed.
Sorry for the inconvenience.
On Tue, May 30, 2017 at 9:36 PM, Kevin Benton wrote:
> Do you have that patch already in your environment? If not, can you
> confirm it fixes the issue?
>
> On Tue, May 30, 2017 at 9:49 AM, Gustavo R
Thanks Gordon! I've found what I need!
Regarding the output problem, for now I execute
    gnocchi-api -p 8041 &>> /var/log/gnocchi-uwsgi.log &
and that solves the problem. Will try to run the API part with uwsgi
options (see the sketch after this message).
--
Best regards,
Mate200
On Tue, 2017-05-30 at 20:11 +0000, gordon chung wrote
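On the uwsgi options mentioned above, a minimal sketch, assuming your
gnocchi-api script can be served as a WSGI file (the path and worker counts
are guesses; adjust for your install):

    # Serve the gnocchi API under uwsgi and send output to a log file
    uwsgi --http :8041 \
      --wsgi-file $(which gnocchi-api) \
      --processes 4 --threads 2 \
      --logto /var/log/gnocchi-uwsgi.log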
Hello operators,
we have a problem with the placement service after updating our cloud from
Mitaka to the Ocata release.
We started from a Mitaka cloud and followed these steps: updated
the cloud controller from Mitaka to Newton, ran the dbsync, updated from
Newton to Ocata adding at this st
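For anyone hitting the same placement problem, a hedged sketch of the
control-plane steps on that Newton-to-Ocata path (the ordering and cell
setup details are assumptions; check the release notes for your deployment):

    # Newton: sync both schemas and do the initial cells v2 setup
    nova-manage api_db sync
    nova-manage db sync
    nova-manage cell_v2 simple_cell_setup
    # Ocata: verify the placement service is registered and reachable
    nova-status upgrade check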