Hi Mike,
Thanks for the update. I will keep a keen eye on the progress. Once you get to
the point where you think you have fixed the stability problems, let me know if
you need somebody to help test.
Nick
Is there an equivalent of 'ceph health', but for OSDs?
Like a warning about slowness or trouble with communication between OSDs?
I've spent a good amount of time debugging what looked like stuck PGs,
but it turned out to be a bad NIC, and it was only apparent once I
saw some OSD logs like
2016-02-0
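As far as I know there is no single per-OSD equivalent of 'ceph health', but a
few existing commands come close. A minimal sketch, using osd.0 as a
placeholder id (the admin-socket commands must run on the node hosting that
OSD):

  # cluster-wide per-OSD commit/apply latency, a quick slowness check
  ceph osd perf

  # per-daemon internals via the admin socket
  ceph daemon osd.0 perf dump            # internal performance counters
  ceph daemon osd.0 dump_ops_in_flight   # operations currently pending
  ceph daemon osd.0 dump_historic_ops    # recent slow ops with timings

None of these will flag a bad NIC directly, but latency outliers in
'ceph osd perf' tend to point at the affected host.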
Hello Community
I need some guidance on how I can reduce OpenStack instance boot time using
Ceph.
We are using Ceph storage with OpenStack (Cinder, Glance and Nova). All
OpenStack images and instances are stored on Ceph, in different pools (the
glance and nova pools respectively).
I assume that Ceph
On Fri, Feb 5, 2016 at 10:19 PM, Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> Am 06.02.2016 um 07:15 schrieb Yan, Zheng:
>>> On Feb 6, 2016, at 13:41, Michael Metz-Martini | SpeedPartner GmbH
>>> wrote:
>>> Am 04.02.2016 um 15:38 schrieb Yan, Zheng:
> On Feb 4, 2016, at 17:00, M
On Mon, Feb 8, 2016 at 3:25 AM, Mariusz Gronczewski
wrote:
> Is there an equivalent of 'ceph health', but for OSDs?
>
> Like a warning about slowness or trouble with communication between OSDs?
>
> I've spent a good amount of time debugging what looked like stuck PGs,
> but it turned out to be a bad
Hi,
I've been developing a new array type of erasure code.
I'd be glad if you could send me a few pointers on the two following items:
(1) The required CRUSH map for an array code, e.g. RAID-DP or MSR erasure code.
It is different from a normal RS(n, k) or LRC code. For example, for RAID-DP
or RDP (n, k) er
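For what it's worth, a hedged sketch of the kind of erasure rule an array code
might need, assuming the stock CRUSH rule syntax and 'host' as the failure
domain (the rule name and numbers are placeholders, not something from your
code):

  rule array_code_rule {
          ruleset 1
          type erasure
          min_size 3
          max_size 20
          step set_chooseleaf_tries 5
          step take default
          step chooseleaf indep 0 type host
          step emit
  }

For codes like RAID-DP/RDP that care about which chunk lands on which device,
you may additionally need custom placement via multiple 'step take'/'step
choose' passes, which plain RS(n, k) rules don't require.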
I've been testing the performance of Ceph by storing objects through
RGW. This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and
4 RGW instances. Initially the storage time was holding reasonably
steady, but it has started to rise recently as shown in the attached chart.
The tes
On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote:
>
> I've been testing the performance of Ceph by storing objects through RGW.
> This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW
> instances. Initially the storage time was holding reasonably steady, but it
> has started to
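One thing worth ruling out, since the object count keeps growing in a test
like this: an unsharded bucket index. In Hammer, a bucket's index lives in a
single RADOS object unless sharding is enabled, and writes slow down as it
grows. A sketch, assuming a bucket named 'testbucket' and an RGW config
section named [client.radosgw.gateway]:

  # inspect the bucket's object count and index layout
  radosgw-admin bucket stats --bucket=testbucket

  # ceph.conf on the RGW hosts: shard the index of newly created
  # buckets (existing buckets are unaffected; restart RGW after)
  [client.radosgw.gateway]
  rgw override bucket index max shards = 64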
I want to know about a plain upgrade from hammer to infernalis (not systemd, no
deployment tools, only our own simple "start-stop-daemon" scripts under
Gentoo), and I see no recommendations. Can I simply restart mon+mds+osd node by
node, or do I need some strict global per-service restart order?
PS "setuser mat
On Mon, Feb 8, 2016 at 10:00 AM, Dzianis Kahanovich
wrote:
> I want to know about a plain upgrade from hammer to infernalis (not systemd,
> no deployment tools, only our own simple "start-stop-daemon" scripts under
> Gentoo), and I see no recommendations. Can I simply restart mon+mds+osd node
> by node, or do I need
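The Infernalis release notes cover this case: upgrade and restart the monitors
first, then the OSDs, then MDS/RGW. The daemons now run as user 'ceph', so
before restarting each daemon you either fix ownership or keep running as
root. A sketch (paths are the defaults):

  # stop the daemons on the node, then either fix ownership...
  chown -R ceph:ceph /var/lib/ceph

  # ...or keep the old ownership and add to ceph.conf:
  [global]
  setuser match path = /var/lib/ceph/$type/$cluster-$id

With 'setuser match path', a daemon keeps using whatever user owns its data
directory, so root-owned stores keep working unchanged.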
I didn't find any other good K names, but I'm not sure anything would top
kraken anyway, so I didn't look too hard. :)
For L, the options I found were
luminous (flying squid)
longfin (squid)
long barrel (squid)
liliput (octopus)
Any other suggestions?
sage
On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote:
> I didn't find any other good K names, but I'm not sure anything would top
> kraken anyway, so I didn't look too hard. :)
>
> For L, the options I found were
>
> luminous (flying squid)
> longfin (squid)
> long barrel
On Mon, 8 Feb 2016, Karol Mroz wrote:
> On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote:
> > I didn't find any other good K names, but I'm not sure anything would top
> > kraken anyway, so I didn't look too hard. :)
> >
> > For L, the options I found were
> >
> > luminous (flying
I like Luminous. :)
Mark
On 02/08/2016 12:36 PM, Sage Weil wrote:
I didn't find any other good K names, but I'm not sure anything would top
kraken anyway, so I didn't look too hard. :)
For L, the options I found were
luminous (flying squid)
longfin (squid)
long barrel
Too bad K isn't an LTS. It would be fun to release the Kraken many times.
I like liliput
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Feb 8, 2016 at 11:36 AM, Sage Weil wrote:
> I did
I'm not sure what's normal, but I'm on OpenStack Juno with Ceph 0.94.5 using
separate pools for nova, glance, and cinder. It takes 16 seconds to start an
instance (el7 minimal).
Everything is on 10GbE and I'm using cache tiering, which I'm sure speeds
things up. I can personally verify that COW is work
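If you want to check the same thing yourself: a cloned ephemeral disk shows
its Glance image as its parent. A sketch, assuming the default 'vms'/'images'
pool names ('<instance-uuid>' stands in for a real id):

  rbd info vms/<instance-uuid>_disk
  # a COW clone shows a line like:
  #   parent: images/<image-uuid>@snap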
If Nova and Glance are properly configured, it should only require a quick
clone of the Glance image to create your Nova ephemeral image. Have you
double-checked your configuration against the documentation [1]? What version
of OpenStack are you using?
To answer your questions:
> - From Ceph
Your Glance images need to be raw as well. A QCOW image will be
copied/converted.
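For reference, roughly the settings involved, sketched against the Juno-era
docs (the pool names, the cinder user, and the secret UUID are assumptions to
adapt to your setup):

  # glance-api.conf
  [DEFAULT]
  show_image_direct_url = True

  # nova.conf
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt-secret-uuid>

And upload images as raw so the clone can happen:

  glance image-create --name el7-minimal --disk-format raw \
      --container-format bare --file el7.raw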
On 2/8/2016 3:33 PM, Jason Dillaman wrote:
If Nova and Glance are properly configured, it should only require a quick
clone of the Glance image to create your Nova ephemeral image. Have you
double-checked your c
On 08/02/2016 20:09, Robert LeBlanc wrote:
> Too bad K isn't an LTS. It would be fun to release the Kraken many times.
Kraken is an awesome release name!
How I will miss being able to say/write to our clients that we just
released the Kraken on their infra :-/
Lionel
Hello,
I'm quite concerned by this (and the silence from the devs); however, there
are a number of people doing similar things (at least with Hammer), and you'd
think they would have been bitten by this if it were a systemic bug.
More below.
On Sat, 6 Feb 2016 11:31:51 +0100 Udo Waechter wrote: