Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang
Subject: Re: [ceph-users] which kernel version can help avoid kernel client deadlock From: chaofa...@owtware.com Date: Thu, 30 Jul 2015 13:16:16 +0800 CC: idryo...@gmail.com; ceph-users@lists.ceph.com To: zhangz.da...@outlook.com On Jul 30, 2015, at 12:48 PM, Z Zhang wrote: We also hit the s

[ceph-users] How to identify MDS client failing to respond to capability release?

2015-07-30 Thread Oliver Schulz
Hello Ceph Experts, lately, "ceph status" on our cluster often states: mds0: Client CLIENT_ID failing to respond to capability release How can I identify which client is at fault (hostname or IP address) from the CLIENT_ID? What could be the source of the "failing to respond to capability

Re: [ceph-users] How to identify MDS client failing to respond to capability release?

2015-07-30 Thread John Spray
For sufficiently recent clients we do this for you (clients send some metadata like hostname, which is used in the MDS to generate an easier-to-understand identifier). To do it by hand, use the admin socket command "ceph daemon mds.<name> session ls", and look out for the client IP addresses
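Matching the CLIENT_ID from the health warning against that session listing can be done with a few lines of scripting. A minimal sketch, assuming the `session ls` JSON carries `id`, `inst`, and `client_metadata.hostname` fields (field names are taken from one version's output and may differ on yours):

```python
import json

# Hypothetical excerpt of `ceph daemon mds.<name> session ls` output;
# treat the field names as assumptions, not a stable interface.
sample = '''
[
  {"id": 4305,
   "inst": "client.4305 192.168.1.21:0/1234567890",
   "client_metadata": {"hostname": "node21"}}
]
'''

def sessions_by_client_id(raw):
    """Map client IDs to (IP, hostname) pairs parsed from session ls JSON."""
    out = {}
    for s in json.loads(raw):
        # "inst" looks like "client.<id> <ip>:<port>/<nonce>"; keep the IP.
        addr = s.get("inst", "?:").split()[-1].split(":")[0]
        host = s.get("client_metadata", {}).get("hostname", "?")
        out[s["id"]] = (addr, host)
    return out

print(sessions_by_client_id(sample))  # {4305: ('192.168.1.21', 'node21')}
```

With that mapping in hand, the CLIENT_ID from `ceph status` resolves directly to a hostname or IP.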

[ceph-users] mount rbd image with iscsi

2015-07-30 Thread Daleep Bais
Hi, I am trying to mount an RBD image using iSCSI, following this URL: http://www.sebastien-han.fr/blog/2014/07/07/start-with-the-rbd-support-for-tgt/ However, I don't get the rbd flag when I give the command sudo tgtadm

Re: [ceph-users] Unable to mount Format 2 striped RBD image

2015-07-30 Thread Daleep Bais
Hi Ilya, I had used the below command to create the rbd image rbd -p fool create strpimg --image-format 2 --order 22 --size 2048M --stripe-unit 65536 --stripe-count 3 --image-feature striping --image-shared I am confused by your question about why I use sysfs instead of the rbd CLI tool. Can you please help
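As an aside, the object layout those flags request can be worked out by hand. This is a sketch of the standard RADOS striping arithmetic (stripe units distributed round-robin across `--stripe-count` objects within each object set); it assumes the flags behave as documented and is illustrative only:

```python
def locate(byte, stripe_unit=65536, stripe_count=3, order=22):
    """Return (object_no, offset_in_object) for a byte offset, using the
    parameters from the rbd create command above (order 22 = 4 MiB objects).
    Sketch of the standard round-robin RADOS striping layout."""
    object_size = 1 << order
    units_per_object = object_size // stripe_unit
    set_units = stripe_count * units_per_object  # stripe units per object set
    unit = byte // stripe_unit
    object_set, within = divmod(unit, set_units)
    object_no = object_set * stripe_count + within % stripe_count
    offset = (within // stripe_count) * stripe_unit + byte % stripe_unit
    return object_no, offset

print(locate(0))       # (0, 0)
print(locate(65536))   # (1, 0)   second stripe unit lands on the next object
print(locate(196608))  # (0, 65536)  fourth unit wraps back to object 0
```

The kernel client of that era could not map images using this striping feature, which is why the rbd CLI reports it rather than the sysfs interface.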

Re: [ceph-users] rbd-fuse Transport endpoint is not connected

2015-07-30 Thread Ilya Dryomov
On Wed, Jul 29, 2015 at 11:42 PM, pixelfairy wrote: > copied ceph.conf from the servers. hope this helps. should this be > considered an unsupported feature? > > # rbd-fuse /cmnt -c /etc/ceph/ceph.conf -d > FUSE library version: 2.9.2 > nullpath_ok: 0 > nopath: 0 > utime_omit_ok: 0 > unique: 1, >

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Ilya Dryomov
On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang wrote: > > > Subject: Re: [ceph-users] which kernel version can help avoid kernel client > deadlock > From: chaofa...@owtware.com > Date: Thu, 30 Jul 2015 13:16:16 +0800 > CC: idryo...@gmail.com; ceph-users@lists.ceph.com >

Re: [ceph-users] Unable to mount Format 2 striped RBD image

2015-07-30 Thread Ilya Dryomov
On Thu, Jul 30, 2015 at 11:15 AM, Daleep Bais wrote: > hi Ilya, > > I had used the below command to create the rbd image > > rbd -p fool create strpimg --image-format 2 --order 22 --size 2048M > --stripe-unit 65536 --stripe-count 3 --image-feature striping --image-shared > > I am confused when you

[ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Sebastian Köhler
Hello, it seems that there are no Debian Squeeze packages in the repository for the current Hammer version. Is this an oversight or is there another reason those are not provided? Sebastian

[ceph-users] Crash and question

2015-07-30 Thread Khalid Ahsein
Hello everybody, I’ve been running a ceph cluster for 4 months, configured with two monitors: 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for system 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for system Last night I encountered an issue with the crash of t

Re: [ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Christian Balzer
Hello, On Thu, 30 Jul 2015 08:49:16 + Sebastian Köhler wrote: > Hello, > > it seems that there are no Debian Squeeze packages in the repository for > the current Hammer version. Is this an oversight or is there another > reason those are not provided? > Most likely because it's 2 versions

Re: [ceph-users] Crash and question

2015-07-30 Thread Christian Balzer
Hello, On Thu, 30 Jul 2015 10:55:30 +0200 Khalid Ahsein wrote: > Hello everybody, > > I’m running since 4 months a ceph cluster configured with two monitors : > > 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor - RAID-1 for > system 1 host : 16GB RAM - 12x 4TB disks - 12 OSD - 1 monitor

Re: [ceph-users] Squeeze packages for 0.94.2

2015-07-30 Thread Sebastian Köhler
July 30 2015 11:05 AM, "Christian Balzer" wrote: > Is there any reason you can't use Wheezy or Jessie? Our cluster is running on trusty, however nearly all our clients are running on squeeze and cannot be updated for compatibility reasons in the short term. Packages of older Hammer versions wer

Re: [ceph-users] How to identify MDS client failing to respond to capability release?

2015-07-30 Thread Oliver Schulz
Hi John, thanks a lot - I was indeed able to identify the machine in question. As for the kernel, we'll certainly update to a newer kernel (3.16 and later 3.19) for the Ubuntu 14.04 clients. For the 12.04 clients, we'll have to see, but these machines will be phased out over time anyhow. I'd lik

Re: [ceph-users] A cache tier issue with rate only at 20MB/s when data move from cold pool to hot pool

2015-07-30 Thread Kenneth Waegeman
On 06/16/2015 01:17 PM, Kenneth Waegeman wrote: Hi! We also see this at our site: When we cat a large file from cephfs to /dev/null, we get about 10MB/s data transfer. I also do not see a system resource bottleneck. Our cluster consists of 14 servers with each 16 disks, together forming an EC

Re: [ceph-users] Crash and question

2015-07-30 Thread Khalid Ahsein
Good morning Christian, thank you for your quick response. So I need to upgrade to 64 GB or 96 GB to be more secure? And sorry, I thought that 2 monitors was the minimum. We will work to add a new host quickly. About osd_pool_default_min_size, should I change something for the future? thank yo

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang
> Date: Thu, 30 Jul 2015 11:37:37 +0300 > Subject: Re: [ceph-users] which kernel version can help avoid kernel client > deadlock > From: idryo...@gmail.com > To: zhangz.da...@outlook.com > CC: chaofa...@owtware.com; ceph-users@lists.ceph.com > > On Thu, Jul 30, 2015 at 10:29 AM, Z Zhang wrote:

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Ilya Dryomov
On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang wrote: > >> Date: Thu, 30 Jul 2015 11:37:37 +0300 >> Subject: Re: [ceph-users] which kernel version can help avoid kernel >> client deadlock >> From: idryo...@gmail.com >> To: zhangz.da...@outlook.com >> CC: chaofa...@owtware.com; ceph-users@lists.ceph.com

Re: [ceph-users] Crash and question

2015-07-30 Thread Khalid Ahsein
Hi, I tried to add a new monitor, but now I am unable to use the ceph command. After doing ceph-deploy mon create myhostname I got: # ceph status 2015-07-30 10:42:39.682038 7f7b16d90700 0 librados: client.admin authentication error (1) Operation not permitted Error connecting to cluster: Permi

Re: [ceph-users] Weird behaviour of cephfs with samba

2015-07-30 Thread Jörg Henne
Gregory Farnum writes: > > You can mount subtrees with the -r option to ceph-fuse. Yay! That did the trick to properly mount via fuse. And I can confirm that the directory list results are now stable both locally and via samba. > Once you've started it up you should find a file like > "client.a

Re: [ceph-users] rbd-fuse Transport endpoint is not connected

2015-07-30 Thread Eric Eastman
It is great having access to features that are not fully production ready, but it would be nice to know which Ceph features are ready and which are not. Just as the Ceph File System is clearly marked as not yet fully ready for production, it would be nice if rbd-fuse could be marked as not read

[ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Sage Weil
As time marches on it becomes increasingly difficult to maintain proper builds and packages for older distros. For example, as we make the systemd transition, maintaining the kludgey sysvinit and udev support for centos6/rhel6 is a pain in the butt and eats up time and energy to maintain and t

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan “Zviratko” Schermer
I understand your reasons, but dropping support for an LTS release like this is not right. You should lege artis support every distribution the LTS release could have ever been installed on - that’s what the LTS label is for and what we rely on once we build a project on top of it. CentOS 6 in partic

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jon Meacham
If hammer and firefly bugfix releases will still be packaged for these distros, I don't see a problem with this. Anyone who is operating an existing LTS deployment on CentOS 6, etc. will continue to receive fixes for said LTS release. Jon

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Mark Nelson
Hi Jan, From my reading of Sage's email, hammer would continue to be supported on older distros, but new development would not target those releases. Was that your impression as well? As a former system administrator I feel your pain. Upgrading to new distros is a ton of work and incurs a t

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Stijn De Weirdt
I would certainly like all client libs and/or kernel modules to stay tested and supported on these OSes for future ceph releases. Not sure how much work that is, but at least the client side shouldn't be affected by the init move. stijn On 07/30/2015 04:43 PM, Marc wrote: Hi, much like de

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Marc
Hi, much like debian already has, I would suggest to not make systemd a dependency for Ceph (or anything for that matter). The reason being here that we desperately need sysvinit until the systemd forks are ready which offer the systemd init system without all those slapped-on appendages that

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan Schermer
Not at all. We have this: http://ceph.com/docs/master/releases/ I would expect that whatever distribution I install Ceph LTS release on will be supported for the time specified. That means if I install Hammer on CentOS 6 now it will stay supported until 3Q/2016. Of course if in the meantime the d

[ceph-users] Ceph Tech Talk Today!

2015-07-30 Thread Patrick McGarry
Hey cephers, Just sending a friendly reminder that our online CephFS Tech Talk is happening today at 13:00 EDT (17:00 UTC). Please stop by and hear a technical deep dive on CephFS and ask any questions you might have. Thanks! http://ceph.com/ceph-tech-talks/ direct link to the video conference:

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Asif Murad Khan
I don't prefer it. You have to maintain those releases up to their EOL. On Thu, Jul 30, 2015 at 8:48 PM, Stijn De Weirdt wrote: > i would certainly like that all client libs and/or kernel modules stay > tested and supported on these OSes for future ceph releases. not sure how > much work that

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Jan Schermer
It is possible I misunderstood Sage’s message - I apologize if that’s the case. This is what made me uncertain: >>> - We would probably continue building hammer and firefly packages for >>> future bugfix point releases. Decision for new releases (Infernalis, Jewel, K*) regarding distro support sh

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Udo Lembke
Hi, dropping debian wheezy would be quite fast - till now there aren't even packages for jessie?! Dropping squeeze I understand, but wheezy at this time? Udo On 30.07.2015 15:54, Sage Weil wrote: > As time marches on it becomes increasingly difficult to maintain proper > builds and packages for older

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Brian Kroth
Sage Weil 2015-07-30 06:54: As time marches on it becomes increasingly difficult to maintain proper builds and packages for older distros. For example, as we make the systemd transition, maintaining the kludgey sysvinit and udev support for centos6/rhel6 is a pain in the butt and eats up time a

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Johannes Formann
I agree. For the existing stable series the distribution support should be continued. But for new releases (infernalis, jewel...) I see no problem dropping the older versions of the distributions. greetings Johannes > Am 30.07.2015 um 16:39 schrieb Jon Meacham : > > If hammer and firefly bugf

[ceph-users] Check networking first?

2015-07-30 Thread Quentin Hartman
Just wanted to drop a note to the group that I had my cluster go sideways yesterday, and the root of the problem was networking again. Using iperf I discovered that one of my nodes was only moving data at 1.7Mb/s. Moving that node to a different switch port with a different cable has resolved the

Re: [ceph-users] Check networking first?

2015-07-30 Thread Mark Nelson
Thanks for posting this! We see issues like this more often than you'd think. It's really important too because if you don't figure it out the natural inclination is to blame Ceph! :) Mark On 07/30/2015 12:50 PM, Quentin Hartman wrote: Just wanted to drop a note to the group that I had my c

Re: [ceph-users] Recovery question

2015-07-30 Thread Peter Hinman
For the record, I have been able to recover. Thank you very much for the guidance. I hate searching the web and finding only partial information on threads like this, so I'm going to document and post what I've learned as best I can in hopes that it will help someone else out in the future.

Re: [ceph-users] questions on editing crushmap for ceph cache tier

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 You are close... I've done it by creating a new SSD root in the CRUSH map, then put the SSD OSDs into a -ssd entry. I then created a new crush rule to choose from the SSD root, then have the tiering pool use that rule. If you look at the example in
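Robert's recipe above can be sketched as a decompiled crushmap fragment. The bucket names, ids, and weights here are placeholders, and the exact syntax varies with the Ceph release, so treat this as illustrative rather than a drop-in map:

```
# Hypothetical fragment of a decompiled CRUSH map: a separate root for SSD OSDs
root ssd {
        id -10                          # placeholder bucket id
        alg straw
        hash 0                          # rjenkins1
        item node1-ssd weight 1.000
        item node2-ssd weight 1.000
}

rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd                   # choose only from the SSD root
        step chooseleaf firstn 0 type host
        step emit
}
```

After compiling and injecting the modified map, the cache-tier pool would be pointed at the rule with something like `ceph osd pool set hot-pool crush_ruleset 1` (pool name assumed).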

Re: [ceph-users] Recovery question

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I'm glad you were able to recover. I'm sure you learned a lot about Ceph through the exercise (always seems to be the case for me with things). I'll look forward to your report so that we can include it in our operations manual, just in case. - -

Re: [ceph-users] Elastic-sized RBD planned?

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I'll take a stab at this. I don't think it will be a feature that you will find in Ceph, due to the fact that Ceph doesn't really understand what is going on inside the RBD. There are too many technologies that can use RBD for it to be feasible to

Re: [ceph-users] dropping old distros: el6, precise 12.04, debian wheezy?

2015-07-30 Thread Robert LeBlanc
-BEGIN PGP SIGNED MESSAGE- Hash: SHA256 I agree that for the distros and version in question, Ceph releases already released on them should provide bug support until EoL of Ceph or the distro version, whichever is shorter. Since we are so far into Infernalis and Jewel development cycle, wo

Re: [ceph-users] ceph-mon cpu usage

2015-07-30 Thread Spillmann, Dieter
I saw this behavior when the servers are not in time sync. Check your ntp settings. Dieter From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Quentin Hartman <qhart...@direwolfdigital.com> Date: Wednesday, July 29, 2015 at 5:47 PM To: Luis Periquito <periqu...

Re: [ceph-users] ceph-mon cpu usage

2015-07-30 Thread Quentin Hartman
Thanks for the suggestion. NTP is fine in my case. Turns out it was a networking problem that wasn't triggering error counters on the NICs so it took a bit to track it down. QH On Thu, Jul 30, 2015 at 4:16 PM, Spillmann, Dieter < dieter.spillm...@arris.com> wrote: > I saw this behavior when the

[ceph-users] RGW + civetweb + SSL

2015-07-30 Thread Italo Santos
Hello, I’d like to know if someone knows how to set up an SSL implementation of RGW with civetweb. The only “documentation” that I found about it is a “bug” - http://tracker.ceph.com/issues/11239 - and I’d like to know if this kind of implementation really works. Regards. Italo Santos http
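For reference, the setup discussed in that tracker issue amounts to a `rgw frontends` line in ceph.conf. A sketch, with the section name and certificate path as placeholders (the `s` suffix on the port tells civetweb to serve SSL, and the PEM file is expected to contain both the private key and the certificate):

```
[client.radosgw.gateway]
rgw frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
```

Whether this works on a given Hammer build is exactly what the tracker issue was probing, so test it before relying on it.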

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-30 Thread Z Zhang
> Date: Thu, 30 Jul 2015 13:11:11 +0300 > Subject: Re: [ceph-users] which kernel version can help avoid kernel client > deadlock > From: idryo...@gmail.com > To: zhangz.da...@outlook.com > CC: chaofa...@owtware.com; ceph-users@lists.ceph.com > > On Thu, Jul 30, 2015 at 12:46 PM, Z Zhang wrote:

Re: [ceph-users] questions on editing crushmap for ceph cache tier

2015-07-30 Thread van
> On Jul 31, 2015, at 2:55 AM, Robert LeBlanc wrote: > > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA256 > > You are close... > > I've done it by creating a new SSD root in the CRUSH map, then put the > SSD OSDs into a -ssd entry. I then created a new crush rule to choose > from the SSD roo

Re: [ceph-users] Check networking first?

2015-07-30 Thread Stijn De Weirdt
wouldn't it be nice if ceph did something like this in the background (some sort of network-scrub)? debugging network issues like this is not that easy (you can't expect admins to install e.g. perfsonar on all nodes and/or clients) something like: every X min, each service X picks a service Y on another h
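The round-robin probing Stijn describes could be sketched as follows. `scrub_pairs` only decides who probes whom each round; the actual measurement (bandwidth/latency via iperf or similar) is a hypothetical callback left out of this sketch:

```python
import random

def scrub_pairs(hosts, rng=None):
    """One 'network-scrub' round: pair each host with one random peer,
    so every host's links get exercised over repeated rounds.
    The probe itself (e.g. an iperf run between the pair) is assumed
    to be driven by whatever consumes these pairs."""
    rng = rng or random.Random()
    return [(h, rng.choice([p for p in hosts if p != h])) for h in hosts]

# One round over three hypothetical OSD hosts:
print(scrub_pairs(["osd1", "osd2", "osd3"], random.Random(0)))
```

Run every X minutes and logged, outliers like the 1.7Mb/s node from Quentin's report would surface without anyone installing a full monitoring stack.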