Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-16 Thread Goncalo Borges
Hello Stefan... Those 64 PGs belong to the default rbd pool that is created automatically. Can you please give us the output of # ceph osd pool ls detail # ceph pg dump_stuck ? The degraded / stale status means that the PGs cannot be replicated according to your policies. My guess is that you sim
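
For reference, the diagnostic sequence Goncalo asks for would look roughly like this (a sketch against a default Hammer install, not output from Stefan's cluster):

    # show size / min_size and the crush ruleset for each pool
    ceph osd pool ls detail
    # list PGs stuck in the states ceph status is reporting
    ceph pg dump_stuck stale
    ceph pg dump_stuck unclean
    # confirm all three OSDs are up and in, and how they map into the CRUSH tree
    ceph osd tree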

Re: [ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-16 Thread Jonas Björklund
On Wed, 16 Sep 2015, Stefan Eriksson wrote: I have a completely new cluster for testing; it's three servers, all of which are monitors and OSD hosts, and they each have one disk. The issue is that ceph status shows: 64 stale+undersized+degraded+peered health: health HEALTH_WARN clock

Re: [ceph-users] C example of using libradosstriper?

2015-09-16 Thread 张冬卯
Hi, src/tools/rados.c has some striper rados snippets, and I have this little project using striper rados; see: https://github.com/thesues/striprados. I hope it helps. Dongmao Zhang. On 2015-09-17 01:05, Paul Mansfield wrote: > Hello, > I'm using the C interface librados striper and am looking
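
For anyone wanting to try the referenced project quickly, a build sketch (the source file and target names are guesses from the repository name; librados and libradosstriper development packages are assumed to be installed):

    git clone https://github.com/thesues/striprados
    cd striprados
    # libradosstriper layers on top of librados, so link both
    gcc -o striprados striprados.c -lrados -lradosstriper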

Re: [ceph-users] rados bench seq throttling

2015-09-16 Thread Deneau, Tom
> -----Original Message----- > From: Gregory Farnum [mailto:gfar...@redhat.com] > Sent: Monday, September 14, 2015 5:32 PM > To: Deneau, Tom > Cc: ceph-users > Subject: Re: [ceph-users] rados bench seq throttling > > On Thu, Sep 10, 2015 at 1:02 PM, Deneau, Tom wrote: > > Running 9.0.3 rados be
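
For context, the kind of test being discussed typically runs like this (pool name, duration, and concurrency are examples; the write pass keeps its objects so the seq pass has data to read):

    # write phase; --no-cleanup keeps the benchmark objects
    rados bench -p rbd 60 write -t 16 --no-cleanup
    # sequential read phase; -t sets the number of concurrent ops
    rados bench -p rbd 60 seq -t 16
    # delete the benchmark objects when done
    rados -p rbd cleanup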

[ceph-users] benefit of using stripingv2

2015-09-16 Thread Corin Langosch
Hi guys, AFAIK rbd always splits the image into chunks of size 2^order (2^22 bytes = 4 MB by default). What's the benefit of specifying the feature flag "STRIPINGV2"? I couldn't find any documentation about it except http://ceph.com/docs/master/man/8/rbd/#striping, which doesn't explain the benefits (or
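
For illustration, the striping parameters are set at image creation time; a hedged sketch (sizes and names are arbitrary examples, not recommendations):

    # default layout: 4 MB objects, stripe unit = object size, stripe count = 1
    rbd create --size 10240 --image-format 2 rbd/plain
    # STRIPINGV2 layout: 64 KB stripe unit fanned across 8 objects, so a burst
    # of small sequential writes is spread over 8 objects/OSDs instead of 1
    rbd create --size 10240 --image-format 2 --stripe-unit 65536 --stripe-count 8 rbd/striped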

Re: [ceph-users] Receiving "failed to parse date for auth header"

2015-09-16 Thread Ramon Marco Navarro
That worked. Thank you! On Fri, Sep 4, 2015 at 11:31 PM Ilya Dryomov wrote: > On Fri, Sep 4, 2015 at 12:42 PM, Ramon Marco Navarro wrote: > > Good day everyone! > > I'm having a problem using aws-java-sdk to connect to Ceph using radosgw. I am reading a "NOTICE: failed to parse dat

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread Robert LeBlanc
My understanding of growing file systems is the same as yours: they can only grow at the end, not the beginning. In addition to that, having partition 2 before partition 1 just cries out to me to have it fixed, but that is just aesthetics. Because the weigh

[ceph-users] C example of using libradosstriper?

2015-09-16 Thread Paul Mansfield
Hello, I'm using the C interface librados striper and am looking for examples on how to use it. Please can someone point me to any useful code snippets? All I've found so far is the source code :-( Thanks very much Paul

[ceph-users] can't get cluster to become healthy. "stale+undersized+degraded+peered"

2015-09-16 Thread Stefan Eriksson
I have a completely new cluster for testing; it's three servers, all of which are monitors and OSD hosts, and they each have one disk. The issue is that ceph status shows: 64 stale+undersized+degraded+peered health: health HEALTH_WARN clock skew detected on mon.ceph01-osd03
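
Since the status output already shows a clock-skew warning, that is worth ruling out first; a sketch (the NTP server is an assumption, use whatever your site runs):

    # show which monitors are skewed and by how much
    ceph health detail
    # one-off sync on each mon host, then keep ntpd/chrony running
    ntpdate -u pool.ntp.org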

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson
Christian, Thanks for the feedback. I guess I'm wondering about step 4, "clobber partition, leaving data intact, and grow partition and the file system as needed". My understanding of xfs_growfs is that the free space must be at the end of the existing file system. In this case the existing part
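
That understanding matches how xfs_growfs behaves. When the free space does sit after the data partition, the sequence is roughly this (a sketch for an assumed /dev/sdb with the OSD data on partition 2 as osd.12; destructive if mistyped, so verify sector numbers first):

    # note the current start sector of the data partition
    sgdisk --print /dev/sdb
    # recreate partition 2 at the same start sector, extended to the end of the disk
    sgdisk --delete=2 --new=2:START:0 /dev/sdb    # START = the original start sector
    partprobe /dev/sdb
    # XFS can then grow online, but only into space beyond its own end
    xfs_growfs /var/lib/ceph/osd/ceph-12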

Re: [ceph-users] Hammer reduce recovery impact

2015-09-16 Thread Robert LeBlanc
I was out of the office for a few days. We have some more hosts to add. I'll send some logs for examination. - Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Fri, Sep 11, 2015 at 12:45 AM, Guan
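
For reference, the knobs usually involved when throttling recovery and backfill impact on Hammer look like this (values are illustrative, not recommendations for this cluster):

    # apply at runtime to all OSDs
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1 --osd-recovery-op-priority 1'
    # persist the same values in ceph.conf under [osd]:
    #   osd max backfills = 1
    #   osd recovery max active = 1
    #   osd recovery op priority = 1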

[ceph-users] ceph osd won't boot, resource shortage?

2015-09-16 Thread Peter Sabaini
Hi all, I'm having trouble adding OSDs to a storage node; I've got about 28 OSDs running, but adding more fails. Typical log excerpt: 2015-09-16 13:55:58.083797 7f3e7b821800 1 journal _open /var/lib/ceph/osd/ceph-28/journal fd 20: 21474836480 bytes, block size 4096 bytes, directio = 1, aio = 1
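
The journal line in the log shows aio enabled, and with ~28 OSDs per host the kernel's async-IO and file-descriptor limits are plausible ceilings; a hedged check, not a confirmed diagnosis:

    # each aio journal consumes kernel aio contexts; check the system-wide cap
    sysctl fs.aio-max-nr
    # raise it if it is near exhaustion (value is an example)
    sysctl -w fs.aio-max-nr=1048576
    # file-descriptor limits for the ceph-osd processes are the other usual suspect
    ulimit -n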

Re: [ceph-users] Deploy osd with btrfs not success.

2015-09-16 Thread Ilya Dryomov
On Wed, Sep 16, 2015 at 2:06 PM, darko wrote: > Sorry if this was asked already. Is there an "optimal" file system one > should use for ceph? See http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/#filesystems. Thanks, Ilya

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread Christian Balzer
Hello, On Wed, 16 Sep 2015 07:21:26 -0500 John-Paul Robinson wrote: > The move journal, partition resize, grow file system approach would > work nicely if the spare capacity were at the end of the disk. > That shouldn't matter, you can "safely" lose your journal in controlled circumstances. T
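
Losing the journal "in controlled circumstances" usually means flushing it first; a sketch for an assumed osd.12 (init syntax varies by distro in the Hammer era):

    # stop the OSD so the journal is quiescent
    service ceph stop osd.12
    # write everything still in the journal out to the data store
    ceph-osd -i 12 --flush-journal
    # ... move or repartition the journal device here ...
    # create a fresh journal, then restart
    ceph-osd -i 12 --mkjournal
    service ceph start osd.12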

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson (Campus)
So I just realized I had described the partition error incorrectly in my initial post. The journal was placed at the 800GB mark leaving the 2TB data partition at the end of the disk. (See my follow-up to Lionel for details.) I'm working to correct that so I have a single large partition the siz

Re: [ceph-users] question on reusing OSD

2015-09-16 Thread John-Paul Robinson
The move journal, partition resize, grow file system approach would work nicely if the spare capacity were at the end of the disk. Unfortunately, the gdisk (0.8.1) end-of-disk location bug caused the journal to be placed at the 800GB mark, leaving the largest remaining partition at the end of

Re: [ceph-users] Deploy osd with btrfs not success.

2015-09-16 Thread Simon Hallam
This may help: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-September/004295.html Cheers, Simon

Re: [ceph-users] Deploy osd with btrfs not success.

2015-09-16 Thread Ilya Dryomov
On Wed, Sep 16, 2015 at 12:57 PM, Vickie ch wrote: > Hi cephers, > Has anyone ever created an osd with btrfs in Hammer 0.94.3? I can create a > btrfs partition successfully, but once I use "ceph-deploy" I always get an > error like below. Another question: there is no "-f" parameter with mkfs. >
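
On the missing "-f": ceph-disk/ceph-deploy read the mkfs flags from ceph.conf, so one way to pass it through (a sketch; hostnames are placeholders, and the option names are from the OSD config reference):

    # ask the OSD tooling to use btrfs and hand -f to mkfs.btrfs,
    # which lets it overwrite an existing filesystem signature
    printf '[osd]\nosd mkfs type = btrfs\nosd mkfs options btrfs = -f\n' >> ceph.conf
    # push the updated conf to the nodes, then retry the osd create
    ceph-deploy --overwrite-conf config push node1 node2 node3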

[ceph-users] Deploy osd with btrfs not success.

2015-09-16 Thread Vickie ch
Hi cephers, Has anyone ever created an osd with btrfs in Hammer 0.94.3? I can create a btrfs partition successfully, but once I use "ceph-deploy" I always get an error like below. Another question: there is no "-f" parameter with mkfs. Any suggestion is appreciated.

Re: [ceph-users] Recommended way of leveraging multiple disks by Ceph

2015-09-16 Thread Max A. Krasilnikov
Hello! On Tue, Sep 15, 2015 at 04:16:47PM +, fangzhe.chang wrote: > Hi, > I'd like to run Ceph on a few machines, each of which has multiple disks. The > disks are heterogeneous: some are rotational disks of larger capacities while > others are smaller solid state disks. What are t
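
In the pre-Luminous era being discussed, the usual answer is one OSD per disk plus separate CRUSH roots so SSD-backed and HDD-backed pools don't mix; a rough sketch (bucket, rule, and pool names here are made up):

    # a separate CRUSH root for the SSDs
    ceph osd crush add-bucket ssd-root root
    # place SSD-backed host buckets (or individual OSDs) under it
    ceph osd crush move node1-ssd root=ssd-root
    # a rule that selects only from the ssd root, replicating across hosts
    ceph osd crush rule create-simple ssd-rule ssd-root host
    # a pool bound to that rule
    ceph osd pool create fast 128 128 replicated ssd-rule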