Re: [ceph-users] replace dead SSD journal

2015-05-05 Thread Matthew Monaco
On 05/05/2015 08:55 AM, Andrija Panic wrote: > Hi, small update: in 3 months we lost 5 out of 6 Samsung 128GB 850 PROs (just a few days between each SSD death) - can't believe it - NOT due to wearing out... I really hope we got a defective series from the supplier... That's ridiculous
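For anyone who finds this thread later, a minimal sketch of the usual FileStore procedure for pointing an OSD at a replacement journal device (the OSD id, partition UUID, and init commands are placeholders and vary by distro; if the dead journal still held unflushed writes, rebuilding the OSD may be the only safe option):

    # stop the OSD that was using the failed journal (init syntax differs by distro)
    service ceph stop osd.12
    # repoint the journal symlink in the OSD data dir at the new SSD partition
    ln -sf /dev/disk/by-partuuid/NEW-JOURNAL-UUID /var/lib/ceph/osd/ceph-12/journal
    # initialize a fresh journal on the new device, then bring the OSD back up
    ceph-osd -i 12 --mkjournal
    service ceph start osd.12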

Re: [ceph-users] ext4 external journal - anyone tried this?

2015-05-02 Thread Matthew Monaco
On 05/02/2015 02:53 AM, Matthew Monaco wrote: > It looks like you can get a pretty good performance benefit from using ext4 with an "external" SSD journal. Has anyone tried this with ceph? Take, for example, a system with a 3:1 HDD to SSD ratio. What are some

[ceph-users] ext4 external journal - anyone tried this?

2015-05-02 Thread Matthew Monaco
It looks like you can get a pretty good performance benefit from using ext4 with an "external" SSD journal. Has anyone tried this with ceph? Take, for example, a system with a 3:1 HDD to SSD ratio. What are some of your thoughts? 3 ceph journal partitions + 3 ext4 journal partitions per SSD for th
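Roughly the layout I'm picturing, sketched with made-up device names (sdf is the SSD, sdf4 one of its ext4-journal partitions, sdb one of the HDDs; the three ceph journal partitions stay as raw partitions on the same SSD):

    # turn a small SSD partition into an external ext4 journal device
    mke2fs -O journal_dev /dev/sdf4
    # create the HDD filesystem pointing at that external journal
    mkfs.ext4 -J device=/dev/sdf4 /dev/sdb1
    # or convert an existing (unmounted) filesystem instead
    tune2fs -O ^has_journal /dev/sdb1
    tune2fs -J device=/dev/sdf4 /dev/sdb1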

Re: [ceph-users] rbd: incorrect metadata

2015-04-14 Thread Matthew Monaco
On 04/14/2015 08:45 AM, Jason Dillaman wrote: > The C++ librados API uses STL strings so it can properly handle embedded NULLs. You can make a backup copy of rbd_children using 'rados cp'. However, if you don't care about the snapshots and you've already flattened all the images, you cou
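For reference, the backup suggested above is a one-liner; the pool name 'volumes' is just a guess for an OpenStack deployment:

    # copy the rbd_children index object to a spare object in the same pool
    rados -p volumes cp rbd_children rbd_children.bak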

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Matthew Monaco
On 04/13/2015 03:17 PM, Jason Dillaman wrote: > Yes, when you flatten an image, the snapshots will remain associated with the original parent. This is a side effect of how librbd handles CoW with clones. There is an open RBD feature request to add support for flattening snapshots as well
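A short sketch of what that means in practice, with placeholder names (volumes/base@snap1 is the protected parent snapshot, volumes/clone1 a child cloned from it):

    # copy up parent blocks so the clone itself no longer needs the parent
    rbd flatten volumes/clone1
    # snapshots taken of the clone before the flatten still reference the parent,
    # so unprotecting can keep failing until those snapshots are removed
    rbd snap unprotect volumes/base@snap1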

Re: [ceph-users] rbd: incorrect metadata

2015-04-13 Thread Matthew Monaco
On 04/13/2015 07:51 AM, Jason Dillaman wrote: > Can you add "debug rbd = 20" to your config, re-run the rbd command, and paste a link to the generated client log file? I set both the rbd and rados log levels to 20: VOL=volume-61241645-e20d-4fe8-9ce3-c161c3d34d55 SNAP="$VOL"@snapshot-d535f359-503a-4eaf-9
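For completeness, the same debug settings can also be passed as one-off overrides on the command line instead of editing ceph.conf ('children' stands in for whichever rbd command is failing; the pool name and log path are assumptions):

    # crank up client-side librbd/librados logging for this invocation only
    rbd -p volumes --debug-rbd=20 --debug-rados=20 --log-file=/tmp/rbd-debug.log children "$SNAP"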

[ceph-users] rbd: incorrect metadata

2015-04-12 Thread Matthew Monaco
I have a pool used for RBD in a bit of an inconsistent state. Somehow, through OpenStack, the data associated with a child volume was deleted. If I try to unprotect the snapshot, librbd complains there is at least one child. If I try to list out the children, librbd errors out on looking up the ima
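For anyone hitting something similar: the metadata librbd consults here can be inspected directly, since the parent/child relationships live in the pool's rbd_children object (pool name 'volumes' is an assumption):

    # dump the parent->child mappings to see which (deleted) child is still recorded
    rados -p volumes listomapvals rbd_children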

Re: [ceph-users] requests are blocked > 32 sec woes

2015-02-09 Thread Matthew Monaco
journal partition and the rest a btrfs partition). I just don't get the "by contrast." If the OSD is btrfs+rotational, then why doesn't putting the journal on an SSD help (as much?) if writes are returned after journaling? > On Sun, Feb 8, 2015 at 8:48 PM, Matthew Monaco w

[ceph-users] requests are blocked > 32 sec woes

2015-02-08 Thread Matthew Monaco
Hello! *** Shameless plug: Sage, I'm working with Dirk Grunwald on this cluster; I believe some of the members of your thesis committee were students of his =) We have a modest cluster at CU Boulder and are frequently plagued by "requests are blocked" issues. I'd greatly appreciate any insight or
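A few places to start when that warning fires (just a sketch; osd.12 is an example id):

    # which requests are blocked, and on which OSDs
    ceph health detail
    # per-OSD commit/apply latency, to spot a slow journal or disk
    ceph osd perf
    # drill into the slowest recent ops on a suspect OSD (run on that OSD's host)
    ceph daemon osd.12 dump_historic_ops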