Re: [ceph-users] CentOS 7 iscsi gateway using lrbd

2016-02-08 Thread Nick Fisk
Hi Mike, Thanks for the update. I will keep a keen eye on the progress. Once you get to the point you think you have fixed the stability problems, let me know if you need somebody to help test. Nick > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On B

[ceph-users] How to monitor health and connectivity of OSD

2016-02-08 Thread Mariusz Gronczewski
Is there an equivalent of 'ceph health' but for OSDs? Like a warning about slowness or trouble with communication between OSDs? I've spent a good amount of time debugging what looked like stuck pgs, but it turned out to be a bad NIC, and it was only apparent once I saw some OSD logs like 2016-02-0
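There is no single per-OSD equivalent of 'ceph health', but `ceph osd perf` reports per-OSD commit/apply latencies and can be polled to flag outliers. A minimal sketch, assuming JSON output in the shape `ceph osd perf --format json` returns (the sample data and the 100 ms threshold below are hypothetical, not from the thread):

```python
import json

# Hypothetical sample of `ceph osd perf --format json` output;
# a real cluster returns one entry per OSD.
sample = json.loads("""
{"osd_perf_infos": [
  {"id": 0, "perf_stats": {"apply_latency_ms": 3,   "commit_latency_ms": 2}},
  {"id": 1, "perf_stats": {"apply_latency_ms": 250, "commit_latency_ms": 180}},
  {"id": 2, "perf_stats": {"apply_latency_ms": 5,   "commit_latency_ms": 4}}
]}
""")

def slow_osds(perf, threshold_ms=100):
    """Return IDs of OSDs whose apply or commit latency exceeds threshold_ms."""
    return [o["id"] for o in perf["osd_perf_infos"]
            if o["perf_stats"]["apply_latency_ms"] > threshold_ms
            or o["perf_stats"]["commit_latency_ms"] > threshold_ms]

print(slow_osds(sample))  # -> [1]
```

A cron job or monitoring check feeding this function real `ceph osd perf` output would have surfaced the bad-NIC OSD long before the stuck-pg symptoms did.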

[ceph-users] Tips for faster openstack instance boot

2016-02-08 Thread Vickey Singh
Hello Community, I need some guidance on how I can reduce OpenStack instance boot time using Ceph. We are using Ceph storage with OpenStack (Cinder, Glance and Nova). All OpenStack images and instances are being stored on Ceph in different pools, the glance and nova pools respectively. I assume that Ceph
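Boot time in this setup usually hinges on whether Nova can create the ephemeral disk as an RBD copy-on-write clone of the Glance image, rather than downloading and converting it. A sketch of the relevant settings, assuming the conventional pool and user names from the Ceph/OpenStack integration docs (adjust to the actual deployment):

```
# glance-api.conf -- expose RBD image locations so Nova can clone them
[DEFAULT]
show_image_direct_url = True

[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance

# nova.conf -- boot ephemeral disks directly as RBD clones
[libvirt]
images_type = rbd
images_rbd_pool = vms
rbd_user = cinder
rbd_secret_uuid = <your-libvirt-secret-uuid>
```

Note that copy-on-write cloning only applies to raw-format Glance images; a QCOW2 image gets downloaded and converted on every boot, which dominates the start time.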

Re: [ceph-users] mds0: Client X failing to respond to capability release

2016-02-08 Thread Gregory Farnum
On Fri, Feb 5, 2016 at 10:19 PM, Michael Metz-Martini | SpeedPartner GmbH wrote: > Hi, > > Am 06.02.2016 um 07:15 schrieb Yan, Zheng: >>> On Feb 6, 2016, at 13:41, Michael Metz-Martini | SpeedPartner GmbH >>> wrote: >>> Am 04.02.2016 um 15:38 schrieb Yan, Zheng: > On Feb 4, 2016, at 17:00, M

Re: [ceph-users] How to monitor health and connectivity of OSD

2016-02-08 Thread Gregory Farnum
On Mon, Feb 8, 2016 at 3:25 AM, Mariusz Gronczewski wrote: > Is there an equivalent of 'ceph health' but for OSD ? > > Like warning about slowness or troubles with communication between OSDs? > > I've spent good amount of time debugging what looked like stuck pgs > only but it turned out to be bad

[ceph-users] Need help on benchmarking new erasure coding

2016-02-08 Thread Syed Hussain
Hi, I've been developing a new array type of erasure code. I'll be glad if you can send me a few pointers for the two following items: (1) The required CRUSH map for an array code, e.g. RAID-DP or an MSR erasure code. It is different from a normal RS(n, k) or LRC code. For example, for RAID-DP or RDP (n, k) er
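For a custom erasure-code profile, the CRUSH rule mainly controls where the n chunks land; the intra-chunk math lives in the erasure-code plugin. A sketch of a rule spreading chunks two-per-host across four hosts (the names and counts are illustrative, not a vetted RAID-DP/MSR layout):

```
rule array_ec_rule {
    ruleset 1
    type erasure
    min_size 3
    max_size 20
    step set_chooseleaf_tries 5
    step take default
    step choose indep 4 type host        # pick 4 distinct hosts
    step chooseleaf indep 2 type osd     # 2 chunks per host -> 8 chunks total
    step emit
}
```

Grouping chunks per host this way is what lets an array code keep its "row/column" locality while still surviving the loss of a whole host.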

[ceph-users] Increasing time to save RGW objects

2016-02-08 Thread Kris Jurka
I've been testing the performance of ceph by storing objects through RGW. This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW instances. Initially the storage time was holding reasonably steady, but it has started to rise recently as shown in the attached chart. The tes
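One common cause of PUT latency that climbs with object count is the bucket index object for a single bucket growing without bound. Hammer can shard the index of newly created buckets; a hedged ceph.conf sketch (the section name and shard count are examples, and the setting only affects buckets created after the change):

```
[client.radosgw.gateway]
# split each new bucket's index across 64 RADOS objects
rgw override bucket index max shards = 64
```

Whether this applies here depends on whether the test writes everything into one bucket; spreading objects over many buckets is the other standard mitigation.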

Re: [ceph-users] Increasing time to save RGW objects

2016-02-08 Thread Gregory Farnum
On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: > > I've been testing the performance of ceph by storing objects through RGW. > This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW > instances. Initially the storage time was holding reasonably steady, but it > has started to

[ceph-users] plain upgrade hammer to infernalis?

2016-02-08 Thread Dzianis Kahanovich
I want to know about a plain (not systemd, no deployment tools, only my own simple "start-stop-daemon" scripts under Gentoo) upgrade from hammer to infernalis, and I see no recommendations. Can I simply restart mon+mds+osd node-by-node, or do I need some strict global per-service restart order? PS "setuser mat
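The main non-systemd wrinkle in the hammer-to-infernalis jump is that daemons now run as user `ceph` instead of root. The release notes offer two paths: chown the data directories, or keep running as root via ceph.conf. A sketch of the latter:

```
[global]
# Keep running daemons as root while the data directories are still
# owned by root (alternative: chown -R ceph:ceph /var/lib/ceph
# during the upgrade and omit this setting).
setuser match path = /var/lib/ceph/$type/$cluster-$id
```

The release notes also call for upgrading and restarting monitors before OSDs, so a strictly node-by-node restart of mixed daemons is not the documented order.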

Re: [ceph-users] plain upgrade hammer to infernalis?

2016-02-08 Thread Gregory Farnum
On Mon, Feb 8, 2016 at 10:00 AM, Dzianis Kahanovich wrote: > I want to know about plain (not systemd, no deployment tools, only own simple > "start-stop-daemon" scripts under Gentoo) upgrade hammer to infernalis and see > no recommendations. Can I simple node-by-node mon+mds+osd's restart or need

[ceph-users] K is for Kraken

2016-02-08 Thread Sage Weil
I didn't find any other good K names, but I'm not sure anything would top kraken anyway, so I didn't look too hard. :) For L, the options I found were luminous (flying squid) longfin (squid) long barrel (squid) liliput (octopus) Any other suggestions? sage

Re: [ceph-users] K is for Kraken

2016-02-08 Thread Karol Mroz
On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote: > I didn't find any other good K names, but I'm not sure anything would top > kraken anyway, so I didn't look too hard. :) > > For L, the options I found were > > luminous (flying squid) > longfin (squid) > long barrel

Re: [ceph-users] K is for Kraken

2016-02-08 Thread Sage Weil
On Mon, 8 Feb 2016, Karol Mroz wrote: > On Mon, Feb 08, 2016 at 01:36:57PM -0500, Sage Weil wrote: > > I didn't find any other good K names, but I'm not sure anything would top > > kraken anyway, so I didn't look too hard. :) > > > > For L, the options I found were > > > > luminous (flying

Re: [ceph-users] K is for Kraken

2016-02-08 Thread Mark Nelson
I like Luminous. :) Mark On 02/08/2016 12:36 PM, Sage Weil wrote: I didn't find any other good K names, but I'm not sure anything would top kraken anyway, so I didn't look too hard. :) For L, the options I found were luminous (flying squid) longfin (squid) long barrel

Re: [ceph-users] K is for Kraken

2016-02-08 Thread Robert LeBlanc
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Too bad K isn't an LTS. It would be fun to release the Kraken many times. I like liliput - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Mon, Feb 8, 2016 at 11:36 AM, Sage Weil wrote: > I did

Re: [ceph-users] Tips for faster openstack instance boot

2016-02-08 Thread Heath Albritton
I'm not sure what's normal, but I'm on Openstack Juno with ceph .94.5 using separate pools for nova, glance, and cinder. Takes 16 seconds to start an instance (el7 minimal). Everything is on 10GE and I'm using cache tiering, which I'm sure speeds things up. Can personally verify that COW is work

Re: [ceph-users] Tips for faster openstack instance boot

2016-02-08 Thread Jason Dillaman
If Nova and Glance are properly configured, it should only require a quick clone of the Glance image to create your Nova ephemeral image. Have you double-checked your configuration against the documentation [1]? What version of OpenStack are you using? To answer your questions: > - From Ceph

Re: [ceph-users] Tips for faster openstack instance boot

2016-02-08 Thread Jeff Bailey
Your Glance images also need to be raw. A QCOW image will be copied/converted. On 2/8/2016 3:33 PM, Jason Dillaman wrote: If Nova and Glance are properly configured, it should only require a quick clone of the Glance image to create your Nova ephemeral image. Have you double-checked your c

Re: [ceph-users] K is for Kraken

2016-02-08 Thread Lionel Bouton
On 08/02/2016 20:09, Robert LeBlanc wrote: > Too bad K isn't an LTS. It would be fun to release the Kraken many times. Kraken is an awesome release name! How I will miss being able to say/write to our clients that we just released the Kraken on their infra :-/ Lionel

Re: [ceph-users] SSD-Cache Tier + RBD-Cache = Filesystem corruption?

2016-02-08 Thread Christian Balzer
Hello, I'm quite concerned by this (and the silence from the devs), however there are a number of people doing similar things (at least with Hammer) and you'd think they would have been bitten by this if it were a systemic bug. More below. On Sat, 6 Feb 2016 11:31:51 +0100 Udo Waechter wrote: