Re: [ceph-users] CephFS read IO caching, where it is happening?

2017-02-07 Thread Shinobu Kinjo
On Wed, Feb 8, 2017 at 3:05 PM, Ahmed Khuraidah wrote: > Hi Shinobu, I am using SUSE packages as part of their latest SUSE Enterprise Storage 4 and following its documentation (method of deployment: ceph-deploy). But I was able to reproduce this issue on Ubuntu 14.04 with the Ceph repositories (als

Re: [ceph-users] CephFS read IO caching, where it is happening?

2017-02-07 Thread Ahmed Khuraidah
Hi Shinobu, I am using SUSE packages as part of their latest SUSE Enterprise Storage 4 and following its documentation (method of deployment: ceph-deploy). But I was able to reproduce this issue on Ubuntu 14.04 with the Ceph repositories (also latest Jewel and ceph-deploy) as well. On Wed, Feb 8, 2017 at 3:

Re: [ceph-users] New mailing list: opensuse-c...@opensuse.org

2017-02-07 Thread Tim Serong
On 02/08/2017 01:36 PM, Tim Serong wrote: > Hi All, We've just created a new opensuse-c...@opensuse.org mailing list. The purpose of this list is discussion of Ceph specifically on openSUSE. For example, topics such as the following would all be welcome: * Maintainership of projects

[ceph-users] New mailing list: opensuse-c...@opensuse.org

2017-02-07 Thread Tim Serong
Hi All, We've just created a new opensuse-c...@opensuse.org mailing list. The purpose of this list is discussion of Ceph specifically on openSUSE. For example, topics such as the following would all be welcome: * Maintainership of projects on OBS under https://build.opensuse.org/project/show/f

Re: [ceph-users] CephFS read IO caching, where it is happening?

2017-02-07 Thread Shinobu Kinjo
Are you using open-source Ceph packages or SUSE ones? On Sat, Feb 4, 2017 at 3:54 PM, Ahmed Khuraidah wrote: > I have opened a ticket on http://tracker.ceph.com/: http://tracker.ceph.com/issues/18816. My client and server kernels are the same; here is the info: # lsb_release -a LSB Version:

[ceph-users] ceph-monstore-tool rebuild assert error

2017-02-07 Thread Sean Sullivan
I have a Hammer cluster that died a while ago (Hammer 0.94.9) consisting of 3 monitors and 630 OSDs spread across 21 storage hosts. The cluster's monitors all died due to leveldb corruption and the cluster was shut down. I was finally given word that I could try to revive the cluster this week! https:/
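The mon-store rebuild being attempted here roughly follows the documented "recovery using OSDs" procedure; a minimal sketch, assuming the OSD data paths are local and the scratch directory and keyring path are placeholders (the rebuild subcommand may also require a newer ceph-monstore-tool build than Hammer ships):

    ms=/tmp/mon-store
    mkdir -p $ms
    # pull cluster maps out of every local OSD into a scratch mon store
    for osd in /var/lib/ceph/osd/ceph-*; do
        ceph-objectstore-tool --data-path $osd --op update-mon-db --mon-store-path $ms
    done
    # rebuild the monitor store, regenerating auth entries from the admin keyring
    ceph-monstore-tool $ms rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring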

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-07 Thread Nick Fisk
Yeah, it's probably just the fact that they have more PGs, so they will hold more data and thus serve more IO. As they have a fixed IO limit, they will always hit that limit first and become the bottleneck. The main problem with reducing the filestore queue is that I believe you will start to
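For reference, the filestore queue can be shrunk at runtime with injectargs; the values below are purely illustrative, not recommendations:

    # a smaller filestore queue means less IO buffered below the prioritised OSD op queue
    ceph tell osd.* injectargs '--filestore_queue_max_ops 50'
    ceph tell osd.* injectargs '--filestore_queue_max_bytes 52428800'   # 50 MB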

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-07 Thread Steve Taylor
Thanks, Nick. One other data point that has come up is that nearly all of the blocked requests that are waiting on subops are waiting for OSDs with more PGs than the others. My test cluster has 184 OSDs, 177 of which are 3TB and 7 of which are 4TB. The cluster is well balanced based on OSD capacity,
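A quick way to confirm the PG-count imbalance described above (output columns vary slightly by release):

    ceph osd df tree                 # per-OSD size, utilisation and PG count
    ceph pg dump pgs_brief | head    # which OSDs each PG is mapped to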

Re: [ceph-users] osd being down and out

2017-02-07 Thread David Turner
The noup and/or noin flags could be useful for this. Depending on why you want to prevent it rejoining the cluster, you would use one or the other, or both. David Turner | Cloud Operations E
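A minimal sketch of the flag usage being suggested (these are cluster-wide flags, cleared the same way they are set):

    ceph osd set noin     # booting OSDs come up but stay out until you decide otherwise
    ceph osd set noup     # booting OSDs are not even marked up
    # ...investigate or replace hardware...
    ceph osd unset noin
    ceph osd unset noup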

Re: [ceph-users] Ceph pool resize

2017-02-07 Thread Vikhyat Umrao
On Tue, Feb 7, 2017 at 12:15 PM, Patrick McGarry wrote: > Moving this to ceph-users. On Mon, Feb 6, 2017 at 3:51 PM, nigel davies wrote: >> Hey, I am helping to run a small two-node Ceph cluster. We have recently bought a 3rd storage node and the management want to i
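For reference, the size bump under discussion comes down to two pool settings plus watching recovery; "rbd" below is only an example pool name:

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
    ceph -s               # watch backfill/recovery until the cluster is healthy again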

[ceph-users] Workaround for XFS lockup resulting in down OSDs

2017-02-07 Thread Thorvald Natvig
Hi, We've encountered a small "kernel feature" in XFS using Filestore. We have a workaround, and would like to share in case others have the same problem. Under high load, on slow storage, with lots of dirty buffers and low memory, there's a design choice with unfortunate side-effects if you have
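The dirty-buffer pressure described above is governed by the usual VM writeback sysctls; this is a generic illustration of those knobs (byte values are placeholders), not the workaround from the thread itself:

    sysctl -w vm.dirty_background_bytes=67108864   # start background writeback at 64 MB of dirty data
    sysctl -w vm.dirty_bytes=268435456             # hard-throttle writers at 256 MB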

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-07 Thread Nick Fisk
Hi Steve, From what I understand, the issue is not with the queueing in Ceph, which is correctly moving client IO to the front of the queue. The problem lies below what Ceph controls, i.e. the scheduler and disk layer in Linux. Once the IOs leave Ceph it's a bit of a free-for-all, and the
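The scheduler and disk-layer knobs referred to here live outside Ceph, plus a pair of OSD options that only take effect under CFQ; the device name and values are placeholders:

    echo cfq > /sys/block/sdb/queue/scheduler    # CFQ is the stock scheduler that honours IO priorities
    ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'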

Re: [ceph-users] Ceph pool resize

2017-02-07 Thread Patrick McGarry
Moving this to ceph-users. On Mon, Feb 6, 2017 at 3:51 PM, nigel davies wrote: > Hey, I am helping to run a small two-node Ceph cluster. We have recently bought a 3rd storage node and the management want to increase the replication from two to three. As soon as I changed the poo

[ceph-users] Latency between datacenters

2017-02-07 Thread Daniel Picolli Biazus
Hi guys, I have been planning to deploy a Ceph cluster with the following hardware: *OSDs:* 4 servers, Xeon D-1520 / 32 GB RAM / 5 x 6TB SAS 2 (6 OSD daemons per server). *Monitors/RADOS Gateways:* 5 servers, Xeon D-1520 / 32 GB RAM / 2 x 1TB SAS 2 (5 MON daemons / 4 RGW daemons). Usage: Object Storage o

Re: [ceph-users] osd being down and out

2017-02-07 Thread Patrick McGarry
Moving this to ceph-users where it belongs. On Tue, Feb 7, 2017 at 8:33 AM, nigel davies wrote: > Hey, is there any way to set Ceph so that if an OSD goes down and comes back up, Ceph will not put it back in service? Thanks. -- Best Regards, Patrick McGarry, Director, Ceph Community |

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-07 Thread Steve Taylor
As I look at more of these stuck ops, it looks like more of them are actually waiting on subops than on osdmap updates, so maybe there is still some headway to be made with the weighted priority queue settings. I do see OSDs waiting for map updates all the time, but they aren’t blocking things a
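The weighted priority queue settings mentioned here are ceph.conf options in Jewel and need an OSD restart to take effect; a minimal sketch:

    [osd]
    osd op queue = wpq
    osd op queue cut off = high   # push more background work below client IO in the queue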

Re: [ceph-users] osd_snap_trim_sleep keeps locks PG during sleep?

2017-02-07 Thread Steve Taylor
Sorry, I lost the previous thread on this. I apologize for the resulting incomplete reply. The issue that we’re having with Jewel, as David Turner mentioned, is that we can’t seem to throttle snap trimming sufficiently to prevent it from blocking I/O requests. On further investigation, I encoun
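For reference, the snap-trim throttles being tuned look like this (values are illustrative; as the thread title notes, in Jewel the sleep is taken while the PG lock is held, which is exactly the problem being discussed):

    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'
    ceph tell osd.* injectargs '--osd_snap_trim_priority 1'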

Re: [ceph-users] EC pool migrations

2017-02-07 Thread David Turner
If you successfully get every object into the cache tier and then flush it to the new pool, you've copied every object in your cluster twice. And as you mentioned, you can't guarantee that the flush will do what you need. I don't have much experience with RGW, but would it work to write a loop
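The sort of loop being floated might look like the sketch below at the rados level; the pool names are placeholders, and this ignores RGW bucket indexes and metadata entirely, so it only illustrates the idea:

    # stream every object from the old pool into the new one (assumes simple object names)
    rados -p old-ec-pool ls | while IFS= read -r obj; do
        rados -p old-ec-pool get "$obj" - | rados -p new-ec-pool put "$obj" -
    done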

Re: [ceph-users] EC pool migrations

2017-02-07 Thread Blair Bethwaite
On 7 February 2017 at 23:50, Blair Bethwaite wrote: > 1) insert a large enough temporary replicated pool as a cache tier 2) somehow force promotion of every object into the cache (I don't see any way to do that other than actually reading them - but at least some creative scripting could do that
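Steps 1) and 2) above could be sketched roughly as follows; the pool names, PG counts and the read-everything loop are all placeholders/assumptions:

    ceph osd pool create cache-tmp 2048 2048 replicated
    ceph osd tier add rgw-ec-pool cache-tmp
    ceph osd tier cache-mode cache-tmp writeback
    ceph osd tier set-overlay rgw-ec-pool cache-tmp
    # crude "promotion": read every object so it lands in the cache tier
    rados -p rgw-ec-pool ls | while IFS= read -r obj; do
        rados -p rgw-ec-pool get "$obj" /dev/null
    done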

Re: [ceph-users] ceph mon unable to reach quorum

2017-02-07 Thread lee_yiu_ch...@yahoo.com
lee_yiu_ch...@yahoo.com wrote on 18/1/2017 11:17: Dear all, I have a Ceph installation (dev site) with two nodes, each running a mon daemon and an osd daemon. (Yes, I know running a cluster of two mons is bad, but I have no choice since I only have two nodes.) Now, the two nodes have been migrated to anoth
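If the quorum problem comes down to the monmap still holding the old addresses, the usual monmap surgery (with the mons stopped) looks roughly like this; the mon names and the IP are placeholders:

    ceph-mon -i node1 --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap
    monmaptool --rm node2 /tmp/monmap
    monmaptool --add node2 10.0.0.2:6789 /tmp/monmap
    ceph-mon -i node1 --inject-monmap /tmp/monmap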

[ceph-users] EC pool migrations

2017-02-07 Thread Blair Bethwaite
Hi all, Wondering if anyone has come up with a quick and minimal impact way of moving data between erasure coded pools? We want to shrink an existing EC pool (also changing the EC profile at the same time) that backs our main RGW buckets. Thus far the only successful way I've found of managing the

Re: [ceph-users] "Numerical argument out of domain" error occurs during rbd export-diff | rbd import-diff

2017-02-07 Thread Bernhard J . M . Grün
Hello, I just created a bug report for this: http://tracker.ceph.com/issues/18844 Being unable to import-diff already-exported diffs could result in data loss, so I thought it would be wise to create a bug report for it. Best regards, Bernhard J. M. Grün -- Kind regards
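For context, the export-diff/import-diff round trip that fails in the report looks like this; the image, snapshot and pool names are placeholders:

    rbd snap create rbd/image@snap2
    rbd export-diff --from-snap snap1 rbd/image@snap2 image.snap1-snap2.diff
    rbd import-diff image.snap1-snap2.diff backup/image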

Re: [ceph-users] Ceph -s require_jewel_osds pops up and disappears

2017-02-07 Thread Bernhard J . M . Grün
Hi, I also had that flickering indicator. The solution for me was quite simple: I had forgotten to restart one of the monitors after the upgrade (this is not done automatically, on CentOS 7 at least). Hope this helps. Bernhard. Götz Reinicke wrote on Tue, 7 Feb 2017 at 11:39: > Hi, Ceph -s
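On CentOS 7 with systemd the missed restart amounts to something like the following (the mon id is usually the short hostname, assumed here):

    systemctl restart ceph-mon@$(hostname -s)
    ceph mon stat        # confirm the mon has rejoined quorum
    ceph -s              # the health output should stop flickering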

Re: [ceph-users] virt-install into rbd hangs during Anaconda package installation

2017-02-07 Thread Tracy Reed
On Tue, Feb 07, 2017 at 12:25:08AM PST, koukou73gr spake thusly: > On 2017-02-07 10:11, Tracy Reed wrote: >> Weird. Now the VMs that were hung in interruptible wait state have disappeared. No idea why. > Have you tried the same procedure but with local storage instead? Yes. I have loca

[ceph-users] Ceph -s require_jewel_osds pops up and disappears

2017-02-07 Thread Götz Reinicke
Hi, ceph -s shows require_jewel_osds flashing on and off like a direction indicator. I recently did an upgrade from CentOS 7.2 to 7.3 and Ceph 10.2.3 to 10.2.5. Maybe I forgot to set an option? I thought I did a "ceph osd set require_jewel_osds" as described in the release notes https://ceph.com/geen-cat
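To check whether the flag actually took and whether every OSD is on Jewel, something like the following would do (pool/host specifics aside):

    ceph osd dump | grep flags          # look for require_jewel_osds in the flags line
    ceph tell osd.* version             # every OSD should report 10.2.x
    ceph osd set require_jewel_osds     # safe to (re)issue once all OSDs run Jewel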

Re: [ceph-users] virt-install into rbd hangs during Anaconda package installation

2017-02-07 Thread koukou73gr
On 2017-02-07 10:11, Tracy Reed wrote: > Weird. Now the VMs that were hung in interruptible wait state have disappeared. No idea why. Have you tried the same procedure but with local storage instead? -K.

Re: [ceph-users] virt-install into rbd hangs during Anaconda package installation

2017-02-07 Thread Tracy Reed
Weird. Now the VMs that were hung in interruptible wait state have disappeared. No idea why. Additional information: ceph-mds-10.2.3-0.el7.x86_64 python-cephfs-10.2.3-0.el7.x86_64 ceph-osd-10.2.3-0.el7.x86_64 ceph-radosgw-10.2.3-0.el7.x86_64 libcephfs1-10.2.3-0.el7.x86_64 ceph-common-10.2.3-0