[ceph-users] Orphan PG

2015-06-06 Thread Marek Dohojda
I recently started with Ceph and overall have had very few issues. However, during the process of cluster creation I must have done something wrong which created orphan PG groups. I suspect it broke when I removed an OSD right after the initial creation, but I am guessing. Currently here is the Ceph o
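
For reference, a minimal set of commands for locating PGs in this state; the PG ID 0.21 is taken from later in the thread, and the exact output will differ per cluster:

# ceph health detail            # lists each stale/inactive/unclean PG by ID
# ceph pg dump_stuck stale      # PGs whose OSDs have stopped reporting
# ceph pg dump_stuck inactive   # PGs that cannot serve I/O
# ceph pg map 0.21              # which OSDs the PG maps to; empty up/acting sets mean no OSD hosts it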

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
hed and it keeps trying. On Sun, Jun 7, 2015 at 12:18 AM, Alex Muntada wrote: > Marek Dohojda: > > One of the Stuck Inactive is 0.21 and here is the output of ceph pg map >> >> #ceph pg map 0.21 >> osdmap e579 pg 0.21 (0.21) -> up [] acting [] >> >> #

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
health OK. > > Running ceph health detail should list those OSDs. Do you have any? > On 07/06/2015 16:16, "Marek Dohojda" > wrote: > > Thank you. Unfortunately this won't work because 0.21 is already in the >> creating state: >> ~# ceph pg force_creat
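
The force_create_pg command truncated above would look roughly like the following; as the thread notes, it can sit in the "creating" state indefinitely when the PG maps to no OSDs:

# ceph pg force_create_pg 0.21   # ask the monitors to recreate the PG from scratch
# ceph pg 0.21 query             # inspect its state; may stay in "creating" while up/acting are empty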

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
files, or in any display of OSDs. On Sun, Jun 7, 2015 at 8:41 AM, Marek Dohojda wrote: > I think this is the issue. If you look at ceph health detail you will see that > 0.21 and others are orphaned: > HEALTH_WARN 65 pgs stale; 22 pgs stuck inactive; 65 pgs stuck stale; 22 > pgs stuck unclean;

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
Unfortunately nothing. It did its thing, re-balanced, and left me with the same thing in the end. BTW, thank you very much for the time and suggestion, I really appreciate it. ceph health detail HEALTH_WARN 65 pgs stale; 22 pgs stuck inactive; 65 pgs stuck stale; 22 pgs stuck unclean; too many PGs p

Re: [ceph-users] Orphan PG

2015-06-07 Thread Marek Dohojda
referencing non-existent OSDs. On Sun, Jun 7, 2015 at 2:00 PM, Marek Dohojda wrote: > Unfortunately nothing. It did its thing, re-balanced, and left me with > the same thing in the end. BTW, thank you very much for the time and > suggestion, I really appreciate it. > > cep
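
If the underlying problem is PG/CRUSH state still pointing at an OSD that was only partially deleted, the usual Hammer-era cleanup is the sequence below; osd.2 is a placeholder ID, and this should only be run against an OSD that genuinely no longer exists:

# ceph osd out 2               # mark it out (a no-op if it is already gone)
# ceph osd crush remove osd.2  # remove it from the CRUSH map
# ceph auth del osd.2          # delete its cephx key
# ceph osd rm 2                # remove it from the OSD map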

[ceph-users] Fwd: Too many PGs

2015-06-15 Thread Marek Dohojda
I hate to bug, but I truly hope someone has an answer to the below. Thank you kindly! -- Forwarded message -- From: Marek Dohojda Date: Wed, Jun 10, 2015 at 7:49 AM Subject: Too many PGs To: ceph-users-requ...@lists.ceph.com Hello, I am running “Hammer” Ceph and I am getting
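
The "too many PGs per OSD" warning in Hammer fires roughly when (total PGs x replica count) divided by the number of OSDs exceeds mon_pg_warn_max_per_osd (300 by default). A rough way to check the numbers behind it, using an illustrative pool name:

# ceph osd pool get rbd pg_num   # PG count for one pool (repeat per pool)
# ceph osd pool get rbd size     # replica count
# ceph osd ls | wc -l            # number of OSDs in the cluster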

Re: [ceph-users] Fwd: Too many PGs

2015-06-16 Thread Marek Dohojda
Thanks & Regards > Somnath > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Marek Dohojda > Sent: Monday, June 15, 2015 1:05 PM > To: ceph-users@lists.ceph.com > Subject: [ceph-users] Fwd: Too many PGs > > I hate to bug, but

Re: [ceph-users] Is it safe to increase pg number in a production environment

2015-08-04 Thread Marek Dohojda
I did this not that long ago. My original PG estimates were wrong and I had to increase them. After increasing the PG numbers, Ceph rebalanced, and that took a while. To be honest, in my case the slowdown wasn’t really visible, but it took a while. My strong suggestion to you woul
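
The increase itself is two settings per pool, and it is pgp_num that actually triggers the data movement; raising both in modest steps keeps the rebalance described above manageable (pool name and values are examples only):

# ceph osd pool set rbd pg_num 256    # create the new placement groups
# ceph osd pool set rbd pgp_num 256   # let data rebalance onto them
# ceph -s                             # watch backfill/recovery progress between steps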

Re: [ceph-users] Is it safe to increase pg number in a production environment

2015-08-05 Thread Marek Dohojda
reallocation in my case took over an hour to accomplish. > On Aug 4, 2015, at 7:43 PM, Jevon Qiao wrote: > > Thank you and Samuel for the prompt response. > On 5/8/15 00:52, Marek Dohojda wrote: >> I have done this not that long ago. My original PG estimates were wrong and &g

[ceph-users] Performance question

2015-11-23 Thread Marek Dohojda
I have a Hammer Ceph cluster on 7 nodes with 14 OSDs in total, 7 of which are SSD and 7 of which are SAS 10K drives. I typically get about 100MB/s IO rates on this cluster. I have a simple question: is 100MB/s what I should expect from my configuration, or should it be higher? I am not sure if I sho
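
A common way to get a baseline that is independent of the VM workload is rados bench against each pool; the pool name and thread count below are examples, and --no-cleanup keeps the objects so a read test can follow:

# rados bench -p sas-pool 60 write -t 16 --no-cleanup
# rados bench -p sas-pool 60 seq -t 16
# rados -p sas-pool cleanup      # remove the benchmark objects afterwards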

Re: [ceph-users] Performance question

2015-11-23 Thread Marek Dohojda
No, SSD and SAS are in two separate pools. On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang wrote: > On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda > wrote: > > I have a Hammer Ceph cluster on 7 nodes with total 14 OSDs. 7 of which > are > > SSD and 7 of which are SAS 10K dri

Re: [ceph-users] Performance question

2015-11-23 Thread Marek Dohojda
Sorry, I should have specified that SAS is the 100 MB :), but to be honest SSD isn't much faster. On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang wrote: > On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda > wrote: > > No, SSD and SAS are in two separate pools. > > > > On

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
e device – it might be better to use the SSDs > for journaling since you are not getting better performance with SSDs? > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf > Of Marek Dohojda > Sent: Monday, November 23, 2015 10:24 PM >

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
ts the performance of the platform. With rados > bench you can specify how many threads you want to use. > > Regards, > > Mart > > On 11/24/2015 04:37 PM, Marek Dohojda wrote: > > Yeah they are; that is one thing I was planning on changing. What I a

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
ench as a baseline, I would expect more performance with 7 > X 10K spinners journaled to SSDs. The fact that SSDs did not perform much > better may point to a bottleneck elsewhere – network perhaps? > > From: Marek Dohojda [mailto:mdoho...@altitudedigital.com] > Sent: Tuesday, No
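
A quick way to rule the network in or out is a raw throughput test between two OSD hosts while the cluster is otherwise quiet (iperf is assumed to be available; the host name is a placeholder). A single 1GbE link tops out around 110-120 MB/s, which would fit the numbers reported here:

# iperf -s                    # on one OSD node
# iperf -c osd-node-1 -t 30   # from a second node, against the first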

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
like whilst you are running rados bench, are the disks getting maxed out? > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf > Of Marek Dohojda > Sent: 24 November 2015 16:27 > To: Alan Johnson > > Cc: ceph-users@lists.ceph.com >
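
Whether the disks themselves are saturated during a rados bench run can be checked on each OSD host with iostat (from the sysstat package):

# iostat -x 2    # watch %util and await per device while the benchmark runs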

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
ers to 3X rather than 6X > > From: Marek Dohojda [mailto:mdoho...@altitudedigital.com] > Sent: Tuesday, November 24, 2015 1:24 PM > To: Nick Fisk > Cc: Alan Johnson; ceph-users@lists.ceph.com > > Subject: Re: [ceph-users] Performance question > >

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
will be? There may be other things that > can be done. > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf > Of Marek Dohojda > Sent: 24 November 2015 18:32 > To: Alan Johnson > Cc: ceph-users@lists.ceph.com; Nick Fisk > > Subject:

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
> > > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf > Of Marek Dohojda > Sent: 24 November 2015 18:47 > To: Nick Fisk > > Cc: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] Performance question > > I du

Re: [ceph-users] Performance question

2015-11-24 Thread Marek Dohojda
suggesting is that you create Nx5GB partitions on the > SSDs (where N is the number of OSDs you want to have fast journals for), > and use the rest of the space for OSDs that would form the SSD pool. > > Bill > > On Tue, Nov 24, 2015 at 10:56 AM, Marek Dohojda < > m
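
Roughly, with ceph-disk (the standard deployment tool at the time), that layout might look like the sketch below; device names and partition numbers are placeholders, and ceph-disk can also carve journal partitions itself if handed the whole SSD:

# ceph-disk prepare /dev/sdc /dev/sda1   # SAS OSD with its journal on 5GB SSD partition sda1
# ceph-disk prepare /dev/sdd /dev/sda2   # second SAS OSD, second journal partition
# ceph-disk prepare /dev/sda5            # remaining SSD space as an OSD for the SSD pool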

[ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda
I am looking through Google, and I am not seeing a good guide on how to put an OSD on a partition (GPT) of a disk. I see lots of options for a file system, or a single physical drive, but not a partition. http://dachary.org/?p=2548 This is the only thing I found, but that
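
For what it is worth, ceph-disk in the Hammer/Infernalis era will generally accept an existing GPT partition as the data device, which avoids the fully manual route described in the linked post; a minimal sketch, assuming /dev/sdb3 is the target partition:

# ceph-disk prepare /dev/sdb3    # creates the filesystem (XFS by default) and registers a new OSD
# ceph-disk activate /dev/sdb3   # mounts it and starts the ceph-osd daemon
# ceph osd tree                  # confirm the new OSD shows up and is up/in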

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda
. Down the road I will have more SSDs, but this won’t happen until the new budget hits and I can get more servers. > On Dec 1, 2015, at 12:11 PM, Wido den Hollander wrote: > > On 12/01/2015 07:29 PM, Marek Dohojda wrote: >> I am looking through Google, and I am not seeing a good

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda
h-users-boun...@lists.ceph.com >> On Behalf Of >> Marek Dohojda >> Sent: 01 December 2015 19:34 >> To: Wido den Hollander >> Cc: ceph-users@lists.ceph.com >

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda

Re: [ceph-users] OSD on a partition

2015-12-01 Thread Marek Dohojda

[ceph-users] Migrating from one Ceph cluster to another

2016-06-08 Thread Marek Dohojda
I have a Ceph cluster (Hammer) and I just built a new cluster (Infernalis). This cluster contains VM boxes based on KVM. What I would like to do is move all the data from one Ceph cluster to the other. However, the only way I could find from my Google searches would be to move each image to a local d
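
Besides staging through local disk, rbd can pipe an export from one cluster straight into an import on the other, as long as both clusters' conf and keyring files are reachable from a single host; a minimal sketch with illustrative paths and image names (snapshot first, or shut the VM down, so the copy is consistent):

# rbd -c /etc/ceph/old.conf snap create rbd/vm-disk1@migrate
# rbd -c /etc/ceph/old.conf export rbd/vm-disk1@migrate - | rbd -c /etc/ceph/new.conf import - rbd/vm-disk1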