Re: [ceph-users] Ceph Thin Provisioning on OpenStack Instances

2016-04-01 Thread Luis Periquito
You want to enable the "show_image_direct_url = True" option. Full configuration information can be found at http://docs.ceph.com/docs/master/rbd/rbd-openstack/ On Thu, Mar 31, 2016 at 10:49 PM, Mario Codeniera wrote: > Hi, > > Has anyone done thin provisioning on OpenStack instances (virtual
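For reference, that option lives in glance-api.conf; a minimal sketch based on the rbd-openstack document linked above (section placement can vary between OpenStack releases):

    [DEFAULT]
    # Expose the RBD location of images so Cinder/Nova can make
    # copy-on-write clones (thin provisioning) instead of full copies.
    show_image_direct_url = True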

Re: [ceph-users] Frozen Client Mounts

2016-04-01 Thread Diego Castro
Hello Oliver, this issue has proved very hard to reproduce; I couldn't make it happen again. My best guess is something with Azure's network, since last week (when it happened a lot) there was ongoing maintenance. Here are the outputs: $ ceph -s cluster 25736883-dbf1-4d7a-8796-50e36f9de7a6 health

Re: [ceph-users] ceph pg query hangs for ever

2016-04-01 Thread Wido den Hollander
> On 1 April 2016 at 1:28, Goncalo Borges wrote: > > > Hi Mart, Wido... > > A disclaimer: Not really an expert, just a regular site admin sharing my > experience. > Thanks! > At the beginning of the thread you give the impression that only osd.68 has > problems dealing with the problematic PG

Re: [ceph-users] Frozen Client Mounts

2016-04-01 Thread Oliver Dzombic
Hi Diego, OK, so this is a new scenario. Before, you said it's "until I put some load on it". Now you say you can't reproduce it, and mention that it happened during (known) network maintenance. So I agree with you: we can assume that your problems were caused by network issues. That's also

Re: [ceph-users] Frozen Client Mounts

2016-04-01 Thread Diego Castro
Hello Oliver, sorry if I wasn't clear in my first post. I agree with you that a network issue isn't desirable, but should it crash mounted clients? I mean, shouldn't the client be smart enough to retry the connection? My point is that public cloud environments don't have the same availability as a loca

Re: [ceph-users] Frozen Client Mounts

2016-04-01 Thread Oliver Dzombic
Hi Diego, you can think of the network connection as your HDD cables. So if you get interruptions there, it's like pulling the HDD cables out of your server/computer and putting them back. You can easily check how much your server/computer will like that with your local HDDs ;-) And

Re: [ceph-users] Frozen Client Mounts

2016-04-01 Thread Diego Castro
OK, I got it. Will having a stable network save the system from a node crash? What happens if an OSD goes down? Will the clients suffer from frozen mounts and things like that? Just asking dumb questions to see if I'm on the right path, since AFAIK Ceph is meant to be a highly available/fault-tolerant

Re: [ceph-users] Latest ceph branch for using Infiniband/RoCE

2016-04-01 Thread kefu chai
sorry, I should have copied the list. On Sat, Apr 2, 2016 at 12:53 AM, kefu chai wrote: > wenda, > > On Wed, Mar 30, 2016 at 1:55 AM, Wenda Ni wrote: >> Dear all, >> >> We are trying to leverage RDMA as the underlying data transfer protocol to run >> Ceph. A quick survey led us to XioMessenger. >> >> When clo
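For anyone following this thread: in the experimental xio branches the messenger was selected via ceph.conf. A minimal sketch, assuming the XioMessenger work discussed here (the option name comes from those experimental branches and may have changed since):

    [global]
    # Experimental: replace the default TCP messenger with the
    # Accelio (RDMA) transport from the XioMessenger branches.
    ms_type = xio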

[ceph-users] OSDs keep going down

2016-04-01 Thread Nate Curry
I am having some issues with my newly set up cluster. I am able to get all of my 32 OSDs to start after setting up udev rules for my journal partitions, but they keep going down. It did seem like half of them would stay up at first, but when I checked this morning I found only 1/4 of them were u

Re: [ceph-users] Frozen Client Mounts

2016-04-01 Thread Oliver Dzombic
Hi Diego, IF an OSD goes down and IF there are read/write requests in flight on it, THEN you will again have the "pull the plug" event. That means, again, a read-only filesystem mount / IO errors on that specific VM. --- I already asked the same question here on the mailing list, but ther

[ceph-users] Using device mapper with journal on separate partition

2016-04-01 Thread Andrus, Brian Contractor
All, I am trying to use ceph-deploy to create an OSD on a multipath device, but put the journal partition on the SSD the system boots from. I have created a partition on the SSD (/dev/sda5), but ceph-deploy does not seem to like it. I am trying: ceph-deploy osd create ceph01:/dev/mapper/mpathb:/d
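For context, the pre-Luminous ceph-deploy syntax is host:data-device:journal-device. A sketch of the intended invocation, using the devices named in the post above; the sgdisk step is an assumption about why ceph-disk rejects a hand-made journal partition (it expects the Ceph journal GPT typecode):

    # Tag partition 5 on /dev/sda with the Ceph journal type GUID
    # so ceph-disk will accept it as an existing journal (assumption).
    sgdisk --typecode=5:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sda

    # Data on the multipath device, journal on the SSD partition.
    ceph-deploy osd create ceph01:/dev/mapper/mpathb:/dev/sda5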

Re: [ceph-users] ceph pg query hangs for ever

2016-04-01 Thread Florian Haas
On Fri, Apr 1, 2016 at 2:48 PM, Wido den Hollander wrote: > Somehow the PG got corrupted on one of the OSDs and it kept crashing on a > single > object. Vaguely reminds me of the E2BIG from that one issue way-back-when in Dumpling (https://www.hastexo.com/resources/hints-and-kinks/fun-extended

Re: [ceph-users] understand "client rmw"

2016-04-01 Thread Gregory Farnum
On Wed, Mar 30, 2016 at 11:56 PM, Zhongyan Gu wrote: > Hi ceph experts, > I know rmw means read-modify-write. I just don't understand what "client > rmw" stands for. Can anybody tell me what it is and in what scenario this > kind of request will be generated? What context is this question in? I

Re: [ceph-users] Ceph.conf

2016-04-01 Thread Gregory Farnum
On Wed, Mar 30, 2016 at 9:27 PM, Adrian Saul wrote: > > > It is the monitors that Ceph clients/daemons connect to initially to > join the cluster. Not quite. The clients will use the "mon initial members" value to populate their search set, but other config options can fill it as well
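For illustration, the two settings Greg is contrasting typically look like this in ceph.conf (a minimal sketch; the names and addresses here are made up):

    [global]
    # Monitors the client assumes exist before it has a monmap;
    # these seed the initial search set.
    mon_initial_members = mon-a, mon-b, mon-c
    # Addresses the client will actually try to contact.
    mon_host = 192.168.0.1, 192.168.0.2, 192.168.0.3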

Re: [ceph-users] Latest ceph branch for using Infiniband/RoCE

2016-04-01 Thread Shinobu Kinjo
At the Ceph Day back on June 10th, 2014, there was a presentation by Mellanox, and it was really good. They used the xio-firefly branch. The following link should help you. https://www.cohortfs.com/sites/default/files/ceph%20day-boston-2014-06-10-matt-benjamin-cohortfs-mlx-xio-v5ez.pdf

Re: [ceph-users] PG Stuck active+undersized+degraded+inconsistent

2016-04-01 Thread Bob R
Calvin, What does your crushmap look like? (ceph osd tree) I find it strange that 1023 PGs are undersized when only one OSD failed. Bob On Thu, Mar 31, 2016 at 9:27 AM, Calvin Morrow wrote: > > > On Wed, Mar 30, 2016 at 5:24 PM Christian Balzer wrote: > >> On Wed, 30 Mar 2016 15:50:07 + Ca

Re: [ceph-users] OSDs keep going down

2016-04-01 Thread Bob R
Check your firewall rules. On Fri, Apr 1, 2016 at 10:28 AM, Nate Curry wrote: > I am having some issues with my newly set up cluster. I am able to get all > of my 32 OSDs to start after setting up udev rules for my journal > partitions, but they keep going down. It did seem like half of them woul
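For anyone hitting the same symptom: monitors listen on TCP 6789 and OSDs use the 6800-7300 range, so on a firewalld host the fix might look like this (a sketch; the zone name is an assumption):

    # Open the monitor port and the OSD port range, then reload.
    firewall-cmd --zone=public --permanent --add-port=6789/tcp
    firewall-cmd --zone=public --permanent --add-port=6800-7300/tcp
    firewall-cmd --reload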

Re: [ceph-users] PG Stuck active+undersized+degraded+inconsistent

2016-04-01 Thread Calvin Morrow
On Fri, Apr 1, 2016 at 4:42 PM Bob R wrote: > Calvin, > > What does your crushmap look like? > ceph osd tree
[root@soi-ceph1 ~]# ceph osd tree
# id  weight  type name       up/down  reweight
-1    163.8   root default
-2    54.6    host soi-ceph1
0     2.73    osd.0           up       1
5     2.73    osd.5           up       1
10    2.73    osd.10          up       1
15    2.73    osd.

Re: [ceph-users] OSDs keep going down

2016-04-01 Thread Nate Curry
That was it. I had recently rebuilt the OSD hosts and completely forgot to configure the firewall. Thanks, *Nate Curry*