Re: [ceph-users] AWS SDK and MultiPart Problem

2014-12-12 Thread Georgios Dimitrakakis
How silly of me!!! I've just noticed that the file isn't writable by Apache! I'll be back with the logs... G. I'd be more than happy to provide you with all the info, but for some unknown reason my radosgw.log is empty. This is the part that I have in ceph.conf [client.radosgw.gatewa
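A sketch of the fix described above (the log path and the `apache` user name are assumptions from the thread; Debian-based systems would use `www-data`):

```shell
# Make the radosgw log writable by the webserver user running the gateway
sudo touch /var/log/ceph/radosgw.log
sudo chown apache:apache /var/log/ceph/radosgw.log
sudo chmod 644 /var/log/ceph/radosgw.log
```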

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Wido den Hollander
On 12/12/2014 01:17 PM, Max Power wrote: >> Wido den Hollander wrote on 12 December 2014 at 12:53: >> It depends. Kernel RBD does not support discard/trim yet. Qemu does >> under certain situations and with special configuration. > > Ah, thank you. So this is my problem. I use rbd w

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Robert Sander
On 12.12.2014 12:48, Max Power wrote: > It would be great to shrink the used space. Is there a way to achieve this? Or > have I done something wrong? In a professional environment you may be able to live > with > filesystems that only grow. But on my small home-cluster this really is a > problem. As Wi

Re: [ceph-users] system metrics monitoring

2014-12-12 Thread Thomas Foster
You can also try Sensu.. On Dec 12, 2014 1:05 AM, "pragya jain" wrote: > hello sir! > > According to TomiTakussaari/riak_zabbix > > Currently supported Zabbix keys: > > riak.ring_num_partitions > riak.memory_total > riak.memory_processes_used > riak

Re: [ceph-users] AWS SDK and MultiPart Problem

2014-12-12 Thread Yehuda Sadeh
In any case, I pushed earlier today another fix to the same branch that replaces the slash with a tilde. Let me know if that one works for you. Thanks, Yehuda On Fri, Dec 12, 2014 at 5:59 AM, Georgios Dimitrakakis wrote: > How silly of me!!! > > I've just noticed that the file isn't writable by

[ceph-users] pgs stuck degraded, unclean, undersized

2014-12-12 Thread Lindsay Mathieson
Sending a new thread as I can't see my own to reply. Solved the stuck pgs by deleting the cephfs and the pools I created for it. Health returned to ok instantly. Side Note: I had to guess the command "ceph fs rm" as I could not find docs on it anywhere, and just doing "ceph fs" gives: Invali
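For reference, the removal sequence described above looks roughly like this (filesystem and pool names are placeholders; the confirmation flags are from memory of the giant-era CLI, so double-check against `ceph fs rm -h`):

```shell
# The MDS must be stopped before the filesystem can be removed
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
```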

[ceph-users] pgs stuck degraded, unclean, undersized

2014-12-12 Thread Lindsay Mathieson
Whereabouts to go with this? ceph -s cluster f67ef302-5c31-425d-b0fe-cdc0738f7a62 health HEALTH_WARN 256 pgs degraded; 256 pgs stuck degraded; 256 pgs stuck unclean; 256 pgs stuck undersized; 256 pgs undersized; recovery 10418/447808 objects degraded (2.326%) monmap e7: 3 mons at
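A sketch of the usual first checks for this state (pool name `rbd` and a two-host cluster are assumptions; undersized PGs typically mean CRUSH cannot find enough hosts to satisfy the pool's `size`):

```shell
ceph osd pool get rbd size     # replica count the pool wants
ceph osd pool set rbd size 2   # e.g. drop to 2 copies on a two-host cluster
# The degraded figure in the status is just objects degraded / total:
awk 'BEGIN { printf "%.3f%%\n", 100 * 10418 / 447808 }'
# → 2.326%
```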

Re: [ceph-users] xfsprogs missing in rhel6 repository

2014-12-12 Thread mdw
On Fri, Dec 12, 2014 at 04:57:29PM +, Lukac, Erik wrote: > Hi Guys, > > xfsprogs is missing in http://ceph.com/rpm-giant/rhel6/x86_64/ > Unfortunately it is not available in standard-rhel. > Could you please add it as in firefly AND update repodata? > > Thanks in advance > > Erik Um. Maybe

Re: [ceph-users] unable to repair PG

2014-12-12 Thread Gregory Farnum
What version of Ceph are you running? Is this a replicated or erasure-coded pool? On Fri, Dec 12, 2014 at 1:11 AM, Luis Periquito wrote: > Hi Greg, > > thanks for your help. It's always highly appreciated. :) > > On Thu, Dec 11, 2014 at 6:41 PM, Gregory Farnum wrote: >> >> On Thu, Dec 11, 2014 a
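For context, the usual repair sequence on a replicated pool looks like this (the PG id `2.1f` is a placeholder):

```shell
ceph health detail | grep inconsistent   # find the affected PG id
ceph pg deep-scrub 2.1f                  # re-verify the PG's objects
ceph pg repair 2.1f                      # copy the authoritative replica over the bad one
ceph -w                                  # watch for the scrub/repair result
```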

Re: [ceph-users] Missing some pools after manual deployment

2014-12-12 Thread Gregory Farnum
On Fri, Dec 12, 2014 at 11:06 AM, Patrick Darley wrote: > Hi there, > > I am using a custom Linux OS, with ceph v0.89. > > > I have been following the monitor bootstrap instructions [1]. > > I have a problem in that the OS is firmly on the systemd bandwagon > and lacks support to run the provided

[ceph-users] Missing some pools after manual deployment

2014-12-12 Thread Patrick Darley
Hi there, I am using a custom Linux OS, with ceph v0.89. I have been following the monitor bootstrap instructions [1]. I have a problem in that the OS is firmly on the systemd bandwagon and lacks support to run the provided init.d script that runs the nodes. I have tried using the systemd scr
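A minimal systemd unit for a monitor, as one possible workaround for the missing init.d support (the binary path and the `-f` foreground flag match the v0.89-era daemon as far as I recall; treat this as a sketch, not a shipped unit file):

```ini
[Unit]
Description=Ceph monitor daemon (sketch)
After=network-online.target

[Service]
ExecStart=/usr/bin/ceph-mon -f --cluster ceph --id mon-hostname
Restart=on-failure

[Install]
WantedBy=multi-user.target
```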

Re: [ceph-users] tgt / rbd performance

2014-12-12 Thread Mike Christie
On 12/11/2014 11:39 AM, ano nym wrote: > > there is a ceph pool on a hp dl360g5 with 25 sas 10k (sda-sdy) on a > msa70 which gives me about 600 MB/s continuous write speed with rados > write bench. tgt on the server with rbd backend uses this pool. mounting > local(host) with iscsiadm, sdz is the v
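For comparison, a typical way to export an RBD image through tgt looks like this (assuming tgt was built with the rbd backing store; target name, pool, and image are placeholders):

```shell
tgtadm --lld iscsi --op new --mode target --tid 1 \
       --targetname iqn.2014-12.com.example:rbd
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       --bstype rbd --backing-store pool/image
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address ALL
```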

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Ilya Dryomov
On Fri, Dec 12, 2014 at 2:53 PM, Wido den Hollander wrote: > On 12/12/2014 12:48 PM, Max Power wrote: >> I am new to Ceph and am starting to discover its features. I used ext4 partitions >> (also mounted with -o discard) to place several OSDs on them. Then I created >> an >> erasure-coded pool in this c

[ceph-users] xfsprogs missing in rhel6 repository

2014-12-12 Thread Lukac, Erik
Hi Guys, xfsprogs is missing in http://ceph.com/rpm-giant/rhel6/x86_64/ Unfortunately it is not available in standard-rhel. Could you please add it as in firefly AND update repodata? Thanks in advance Erik --

Re: [ceph-users] Empty Rados log

2014-12-12 Thread Georgios Dimitrakakis
This is very silly of me... The file wasn't writable by apache. I am writing it down for future reference. G. Hi all! I have a CEPH installation with radosgw and the radosgw.log in the /var/log/ceph directory is empty. In the ceph.conf I have log file = /var/log/ceph/radosgw.log debug ms =
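For future reference, the relevant ceph.conf fragment looks like this (section name taken from the thread; the file itself must also be writable by the user the gateway runs as, e.g. apache):

```ini
[client.radosgw.gateway]
log file = /var/log/ceph/radosgw.log
debug rgw = 20
debug ms = 1
```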

Re: [ceph-users] AWS SDK and MultiPart Problem

2014-12-12 Thread Georgios Dimitrakakis
I'd be more than happy to provide you with all the info, but for some unknown reason my radosgw.log is empty. This is the part that I have in ceph.conf [client.radosgw.gateway] host = xxx keyring = /etc/ceph/keyring.radosgw.gateway rgw socket path = /tmp/radosgw.sock rgw dns name = xxx.example.co

Re: [ceph-users] AWS SDK and MultiPart Problem

2014-12-12 Thread Yehuda Sadeh
Ok, I've been digging a bit more. I don't have full radosgw logs for the issue, so if you could provide it (debug rgw = 20), it might help. However, as it is now, I think the issue is with the way the client library is signing the requests. Instead of using the undecoded uploadId, it uses the encod
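A toy demonstration of why the encoding matters (this is not real S3 request signing; key and strings are made up): an HMAC over the decoded and the percent-encoded form of the same uploadId differs, so client and server must canonicalize identically before signing.

```shell
secret="examplekey"   # hypothetical secret key
sig() { printf '%s' "$1" | openssl dgst -sha1 -hmac "$secret" -binary | openssl base64; }
sig 'uploadId=2~abc/def'     # signature over the decoded form
sig 'uploadId=2~abc%2Fdef'   # signature over the encoded form -- different result
```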

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Sebastien Han
Discard works with virtio-scsi controllers for disks in QEMU. Just use discard=unmap in the disk section (scsi disk). > On 12 Dec 2014, at 13:17, Max Power > wrote: > >> Wido den Hollander wrote on 12 December 2014 at 12:53: >> It depends. Kernel RBD does not support discard/tri
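In libvirt terms that corresponds to something like the following domain XML fragment (pool/image names are placeholders; `discard='unmap'` on the driver element requires a reasonably recent libvirt/QEMU):

```xml
<controller type='scsi' model='virtio-scsi'/>
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='pool/image'/>
  <target dev='sda' bus='scsi'/>
</disk>
```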

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Max Power
> Wido den Hollander wrote on 12 December 2014 at 12:53: > It depends. Kernel RBD does not support discard/trim yet. Qemu does > under certain situations and with special configuration. Ah, thank you. So this is my problem. I use rbd with the kernel modules. I think I should port my

Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Wido den Hollander
On 12/12/2014 12:48 PM, Max Power wrote: > I am new to Ceph and am starting to discover its features. I used ext4 partitions > (also mounted with -o discard) to place several OSDs on them. Then I created an > erasure-coded pool in this cluster. On top of this there is the rados block > device which holds

[ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Max Power
I am new to Ceph and am starting to discover its features. I used ext4 partitions (also mounted with -o discard) to place several OSDs on them. Then I created an erasure-coded pool in this cluster. On top of this there is the rados block device which also holds an ext4 filesystem (of course mounted with -
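Once a configuration that passes discards through is in place, reclaiming space from inside the filesystem works either continuously or on demand (device and mount point are placeholders):

```shell
mount -o discard /dev/sdb1 /mnt   # issue a discard on every delete, or:
fstrim -v /mnt                    # one-shot trim of a mounted filesystem
```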

[ceph-users] ceph & blk-mq

2014-12-12 Thread Dzianis Kahanovich
On recent kernels, the IO schedulers for most HDDs (in my case actual spinning disks with a SCSI interface - IDE/SATA in AHCI mode & Megaraid) can be replaced by the blk-mq per-CPU queue. I have even put one node with a 3.18 kernel into this mode (Megaraid, scsi_mod.use_blk_mq=Y) and planned to switch all nodes (include AHCI
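A quick way to confirm the mode after booting with `scsi_mod.use_blk_mq=Y` (sysfs paths as found on ~3.18 kernels; host and device numbers are placeholders):

```shell
cat /sys/class/scsi_host/host0/use_blk_mq   # Y when SCSI runs on blk-mq
cat /sys/block/sda/queue/scheduler          # blk-mq devices report "none"
```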

Re: [ceph-users] AWS SDK and MultiPart Problem

2014-12-12 Thread Georgios Dimitrakakis
Dear Yehuda, I have installed the patched version as you can see: $ radosgw --version ceph version 0.80.7-1-gbd43759 (bd43759f6e76fa827e2534fa4e61547779ee10a5) $ ceph --version ceph version 0.80.7-1-gbd43759 (bd43759f6e76fa827e2534fa4e61547779ee10a5) $ sudo yum info ceph-radosgw Installed

Re: [ceph-users] unable to repair PG

2014-12-12 Thread Luis Periquito
Hi Greg, thanks for your help. It's always highly appreciated. :) On Thu, Dec 11, 2014 at 6:41 PM, Gregory Farnum wrote: > On Thu, Dec 11, 2014 at 2:57 AM, Luis Periquito > wrote: > > Hi, > > > > I've stopped OSD.16, removed the PG from the local filesystem and started > > the OSD again. After