Re: [ceph-users] RadosGW and S3-compatible clients for PC and OSX

2013-04-24 Thread Igor Laskovy
Ok. I will try, thanks. One further question - do I need to manually start /etc/init.d/radosgw every time this host is rebooted? Why is it not part of "service ceph -a start"? On Tue, Apr 23, 2013 at 11:05 PM, Lorieri wrote: > I've made some tests again with s3cmd > > you need to have a
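
A quick sketch (assuming the stock sysvinit script shipped with this release; distro specifics may differ) of enabling the gateway to start on boot instead of launching it by hand:

    # radosgw has its own init script and is not covered by "service ceph -a start"
    sudo /etc/init.d/radosgw start
    # register it so it comes back automatically after a reboot
    sudo update-rc.d radosgw defaults      # Debian/Ubuntu
    # sudo chkconfig radosgw on            # RHEL/CentOS equivalent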

Re: [ceph-users] mds "laggy"

2013-04-24 Thread Varun Chandramouli
Hi All, I am running the MapReduce wordcount code (on a Ceph cluster consisting of 2 VMs) on a data set of roughly 5,000 files (approx. 10 GB in total). Periodically, ceph health reports that the MDS is laggy/unresponsive, and I get messages like the following: 13/04/24 10:41:00 INFO m

Re: [ceph-users] rbd command error "librbd::ImageCtx: error finding header"

2013-04-24 Thread Dennis Chen
Hi guys, the "librbd: Error listing snapshots: (95) Operation not supported" issue has been resolved. The root cause was that when I deployed the OSD, I only copied /usr/local/bin/* from the MON-MDS node to the OSD and did not include /usr/lib/rados-classes. So I re-transferred the /usr/lib/rados-classes folder
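
If it helps anyone else, a minimal sketch of what the fix amounts to (hypothetical host name osd1; paths and init-script name as assumed from the message above):

    # copy the binaries and the RADOS class plugins (cls_rbd lives here) to the OSD node
    rsync -av /usr/local/bin/ osd1:/usr/local/bin/
    rsync -av /usr/lib/rados-classes/ osd1:/usr/lib/rados-classes/
    # restart the local OSD daemons so the classes can be loaded
    ssh osd1 'service ceph restart osd'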

[ceph-users] Best solution for shared FS on Ceph for web clusters

2013-04-24 Thread Maik Kulbe
Hi, I'm currently looking into several options for using Ceph in a small to mid-size web cluster. I've ruled out CephFS as it is sadly not stable enough. Then I went with RBD and different approaches. OCFS2 on RBD did the job well but had extreme performance issues when two processes were

[ceph-users] Ceph 0.56.4 - OSD low request

2013-04-24 Thread MinhTien MinhTien
Dear all, I have 2 servers; each server has 1 RAID card: -- RAID 6: 54TB, divided into 4 OSDs (formatted ext4) -- RAID 0: 248G, journal for the 4 OSDs (ext4). My config file: [global] auth supported = cephx auth cluster required = cephx auth service required
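
(The config excerpt above is cut off; for reference, a minimal sketch of the usual cephx block in ceph.conf for this release. The poster's actual values may differ.)

    [global]
        auth cluster required = cephx
        auth service required = cephx
        auth client required = cephx
        # "auth supported = cephx" is the older spelling that covers all three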

Re: [ceph-users] Best solution for shared FS on Ceph for web clusters

2013-04-24 Thread Gandalf Corvotempesta
2013/4/24 Maik Kulbe : > At the moment I'm trying a solution that uses RBD with a normal FS like EXT4 > or ZFS and where two servers export that block device via NFS (with heartbeat > for redundancy and failover) but that involves problems with file system > consistency. If you don't need load balan
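
A rough sketch of the RBD-plus-NFS approach under discussion (hypothetical pool/image names and export network; the heartbeat failover piece is omitted):

    # on the active NFS head: map the image, make a filesystem once, mount and export it
    rbd map webdata --pool rbd          # device shows up as /dev/rbd0 (or /dev/rbd/rbd/webdata)
    mkfs.ext4 /dev/rbd0                 # first time only
    mkdir -p /export/web && mount /dev/rbd0 /export/web
    echo "/export/web 10.0.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
    exportfs -ra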

[ceph-users] RBD single process read performance

2013-04-24 Thread Wido den Hollander
Hi, I've been working with a Ceph 0.56.4 setup and I've been seeing some RBD read performance issues with single processes/threads. The setup is: - 36 OSDs (2TB WD RE drives) - 9 hosts (4 OSDs per host) - 120GB Intel SSD as a journal per host - 32GB RAM per host - Quad Core Xeon CPU (E3-1220 V2 @
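
Two knobs worth checking for single-threaded sequential reads, as a sketch (values are examples; which one applies depends on whether librbd or the kernel client is in use):

    # librbd (e.g. qemu): enable the client-side cache in ceph.conf
    [client]
        rbd cache = true
        rbd cache size = 67108864       # 64 MB, example value

    # kernel RBD: raise the block-device readahead (rbd0 is an example device)
    echo 4096 > /sys/block/rbd0/queue/read_ahead_kb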

Re: [ceph-users] RBD single process read performance

2013-04-24 Thread Mark Nelson
On 04/24/2013 06:17 AM, Wido den Hollander wrote: Hi, I've been working with a Ceph 0.56.4 setup and I've been seeing some RBD read performance issues with single processes/threads. The setup is: - 36 OSDs (2TB WD RE drives) - 9 hosts (4 OSDs per host) - 120GB Intel SSD as a journal per host - 32GB

Re: [ceph-users] Best solution for shared FS on Ceph for web clusters

2013-04-24 Thread Mark Nelson
On 04/24/2013 05:18 AM, Maik Kulbe wrote: Hi, I'm currently looking into several options for using Ceph in a small to mid-size web cluster. I've ruled out CephFS as it is sadly not stable enough. Then I went with RBD and different approaches. OCFS2 on RBD did the job well but had extreme pe

Re: [ceph-users] Best solution for shared FS on Ceph for web clusters

2013-04-24 Thread Maik Kulbe
On 04/24/2013 05:18 AM, Maik Kulbe wrote: > Hi, > > I'm currently looking into several options for using Ceph in a small > to mid-size web cluster. > > I've ruled out CephFS as it is sadly not stable enough. > > Then I went with RBD and different approaches. OCFS2 on RBD did the job > well but

Re: [ceph-users] mds "laggy"

2013-04-24 Thread Noah Watkins
Varun, what version of Ceph are you running? Can you confirm whether the MDS daemon (ceph-mds) is still running or has crashed when the MDS becomes laggy/unresponsive? If it has crashed, check the MDS log for a crash report. There were a couple of Hadoop workloads that caused the MDS to misbehave fo
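
A quick sketch of the checks being asked for (daemon name "a" and the default log path are assumptions):

    ceph mds stat                        # what the monitors think: active, laggy, or down
    ceph -s                              # overall cluster health
    pgrep -l ceph-mds                    # is the process still alive on the MDS host?
    less /var/log/ceph/ceph-mds.a.log    # look for a backtrace near the end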

[ceph-users] "Recommended" cache size on MDS

2013-04-24 Thread Kevin Decherf
Hey world, I know that this question is tricky since it depends on the cluster size and object profiles. For those who are using CephFS, what is your working cache size on your cluster? What problems have you encountered with this configuration? And for Inktank, do you have any recommendation o

Re: [ceph-users] "Recommended" cache size on MDS

2013-04-24 Thread Gregory Farnum
On Wed, Apr 24, 2013 at 8:39 AM, Kevin Decherf wrote: > Hey world, > > I know that this question is tricky since it depends on the cluster size > and object profiles. > > For those who are using CephFS, what is your working cache size on your > cluster? What problems have you encountered with thi
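
For reference, the setting in question is "mds cache size" (a count of cached inodes, default 100000). A sketch of raising it in ceph.conf; the right value depends on available RAM and the working set:

    [mds]
        mds cache size = 500000    # example value; MDS memory use grows with the number of cached inodes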

Re: [ceph-users] RBD single process read performance

2013-04-24 Thread Wido den Hollander
On 04/24/2013 02:23 PM, Mark Nelson wrote: On 04/24/2013 06:17 AM, Wido den Hollander wrote: Hi, I've been working with a Ceph 0.56.4 setup and I've been seeing some RBD read performance issues with single processes/threads. The setup is: - 36 OSDs (2TB WD RE drives) - 9 hosts (4 OSDs per host) -

[ceph-users] cuttlefish countdown

2013-04-24 Thread Sage Weil
Hi everyone- We are down to a handful of urgent bugs (3!) and a cuttlefish release date that is less than a week away. Thank you to everyone who has been involved in coding, testing, and stabilizing this release. We are close! If you would like to test the current release candidate, your effo

Re: [ceph-users] cephfs bandwidth issue

2013-04-24 Thread Elso Andras
Hi, I use the kernel module. I found only one mount parameter for readahead: rsize. But it didn't help. > These settings are a bit silly. I think what you've got there is > logically equivalent to having the stripe_unit and object_size both > set to 512KB, but I'm not certain. 512KB is also a bit small
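
For reference, rsize is passed as a kernel-client mount option; a sketch with example values (the monitor address and secret file path are placeholders):

    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret,rsize=524288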

[ceph-users] bad crc message in error logs

2013-04-24 Thread James Harper
I'm seeing a few messages like this on my OSD logfiles: 2013-04-25 00:00:08.174869 e3ca2b70 0 bad crc in data 1652929673 != exp 2156854821 2013-04-25 00:00:08.179749 e3ca2b70 0 -- 192.168.200.191:6882/30908 >> 192.168.200.197:0/3338580093 pipe(0xc70e1c0 sd=24 :6882 s=0 pgs=0 cs=0 l=0).accept

[ceph-users] Libceph - socket error on read / write.

2013-04-24 Thread MinhTien MinhTien
Dear all, I use Ceph 0.56.4 on CentOS 6.3 with kernel 3.8.8-1.el6.elrepo.x86_64. I have tried multiple kernel versions, but I still encounter the following: libceph: osd3 172.30.33.2:6810 socket closed (con state OPEN) libceph: osd3 172.30.33.2:6810 socket error on write libceph: osd3 172.30.33.2:6810 s

[ceph-users] Journal Information

2013-04-24 Thread Mandell Degerness
Given a partition, is there a command which can be run to validate if the partition is used as a journal of an OSD and, if so, what OSD it belongs to?

Re: [ceph-users] Ceph error: active+clean+scrubbing+deep

2013-04-24 Thread David Zafman
I'm not sure what the point of running with replication set to 1 is, but a new feature adds ceph commands to turn off scrubbing. Check ceph --help to see if you have a version that has this: ceph osd set <flag> / ceph osd unset <flag>. You might want to turn off both kinds of scrubbing. ceph osd set noscru
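
The commands being referred to, as a sketch (available in newer releases; check ceph --help on your version):

    # disable both kinds of scrubbing cluster-wide
    ceph osd set noscrub
    ceph osd set nodeep-scrub
    # re-enable them later
    ceph osd unset noscrub
    ceph osd unset nodeep-scrub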

Re: [ceph-users] Journal Information

2013-04-24 Thread Mike Dawson
Mandell, Not sure if you can start with a partition to see which OSD it belongs to, but you can start with the OSDs to see which journal partition belongs to each: ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep osd_journal | grep -v size - Mike On 4/24/2013 9:05 PM, Man
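
Building on that, a sketch for walking every local OSD and printing its journal path (default admin-socket and data-directory locations are assumed; the "journal" symlink is present when ceph-disk prepared the OSD):

    for sock in /var/run/ceph/ceph-osd.*.asok; do
        echo "== $sock"
        ceph --admin-daemon "$sock" config show | grep osd_journal | grep -v size
    done
    # or, per data directory (OSD id 0 as an example):
    readlink /var/lib/ceph/osd/ceph-0/journal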

Re: [ceph-users] Ceph error: active+clean+scrubbing+deep

2013-04-24 Thread MinhTien MinhTien
Hi David Zafman, I use Ceph 0.56.4 and I don't have the command "ceph osd set nodeep-scrub". Is this command only available in version 0.60? Thanks. TienBm Skype: tien.bm0805 On Thu, Apr 25, 2013 at 8:12 AM, David Zafman wrote: > > I'm not sure what the point of running with replication set to 1 is, but a > new f

Re: [ceph-users] mds "laggy"

2013-04-24 Thread Varun Chandramouli
The Ceph version was a 0.58 build I cloned from the GitHub master branch (0.58-500-gaf3b163 (af3b16349a49a8aee401e27c1b71fd704b31297c)). The MDS daemon had crashed when it became laggy; I restarted it and the MR code continued to execute. I am unable to see any MDS logs in /var/log though. Should I be enabl

Re: [ceph-users] mds "laggy"

2013-04-24 Thread Noah Watkins
You may need to be root to look at the logs in /var/log/ceph. Turning up logging is helpful, too. Is the bug reproducible? It'd be great if you could get a core dump file for the crashed MDS process. -Noah On Apr 24, 2013, at 9:53 PM, Varun Chandramouli wrote: > Ceph version was a 0.58 build
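
A sketch of turning up MDS logging and allowing core dumps before reproducing (values and paths are examples):

    # in ceph.conf on the MDS host
    [mds]
        debug mds = 20
        debug ms = 1

    # allow core dumps, then restart the daemon and re-run the workload
    ulimit -c unlimited
    service ceph restart mds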