Re: [ceph-users] see warning logs in ceph.log

2014-07-06 Thread hua peng
Your PGs are not active+clean, so no I/O is possible. Are your OSDs running? $ sudo ceph -s That should give you more information about what to do. Wido Thanks. This is the info output; I saw the OSDs are running. Can you help more? Thanks. root@ceph2:~# ceph -s health HEALTH_WARN 192 pgs stale …
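A few standard commands narrow down why PGs report stale; a minimal sketch, nothing here is specific to this cluster:

$ ceph health detail   # lists the stale PGs and the OSDs they map to
$ ceph osd tree        # shows which OSDs are up/down and where they sit in the CRUSH tree
$ ceph osd stat        # quick count of OSDs that are up and in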

Re: [ceph-users] see warning logs in ceph.log

2014-07-06 Thread Wido den Hollander
On 07/07/2014 08:38 AM, hua peng wrote: When creating an image, I see the warnings in ceph.log, but what do they mean? I am new to Ceph. Thanks. 2014-07-07 14:33:04.838406 mon.0 172.17.6.176:6789/0 31 : [INF] pgmap v2060: 192 pgs: 192 stale+active+clean; 1221 MB data, 66451 MB used, 228 GB / 308 GB avail …
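A rough sketch of checking whether the OSD daemons are actually up on a node, assuming a sysvinit-style deployment of that era (upstart-based installs use different invocations, and osd.0 is only a placeholder):

$ ps aux | grep ceph-osd          # any ceph-osd processes alive on this host?
$ sudo service ceph status        # status of all daemons configured locally
$ sudo service ceph start osd.0   # start a specific OSD that is down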

[ceph-users] see warning logs in ceph.log

2014-07-06 Thread hua peng
When creating an image, I see the warnings in ceph.log, but what do they mean? I am new to Ceph. Thanks. 2014-07-07 14:33:04.838406 mon.0 172.17.6.176:6789/0 31 : [INF] pgmap v2060: 192 pgs: 192 stale+active+clean; 1221 MB data, 66451 MB used, 228 GB / 308 GB avail 2014-07-07 14:34:39.635483 osd.…
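The same messages can also be followed live while the image is being created; a minimal sketch, assuming the default log location on the monitor host:

$ ceph -w                          # stream cluster log events (pgmap, osdmap, health changes)
$ tail -f /var/log/ceph/ceph.log   # the same entries as written to the monitor's cluster log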

Re: [ceph-users] Question about placing different pools on different osds

2014-07-06 Thread Wido den Hollander
On 07/06/2014 11:29 PM, Erik Logtenberg wrote: Hi, I have some OSDs on HDDs and some on SSDs, just like the example in these docs: http://ceph.com/docs/firefly/rados/operations/crush-map/ Now I'd like to place an erasure-coded pool on the HDDs and a replicated (cache) pool on the SSDs.
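A minimal sketch of wiring the two pools together as a cache tier, with hypothetical pool names ecpool (erasure-coded, on the HDDs) and cachepool (replicated, on the SSDs):

$ ceph osd tier add ecpool cachepool              # attach the SSD pool as a tier of the EC pool
$ ceph osd tier cache-mode cachepool writeback    # absorb writes in the cache tier
$ ceph osd tier set-overlay ecpool cachepool      # redirect client I/O through the cache
$ ceph osd pool set cachepool hit_set_type bloom  # hit sets are required for cache tiering

Clients then keep addressing ecpool, and the SSD pool transparently fronts it.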

[ceph-users] About conditional PUT on ETag

2014-07-06 Thread Ray Lv
Hi there, We’re extending the use case of RADOSGW to store some lightweight metadata which is accessed concurrently by multiple clients. The metadata is stored as an RGW object in Ceph. The payload data is a JSON array encoded as a string. The following is a sample payload: [ { "id": "foo", …
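For illustration only, a conditional overwrite at the HTTP level would look roughly like this, assuming the gateway honored an If-Match precondition on PUT (whether RGW actually does is the open question here); the endpoint, bucket, key and ETag are made up, and the S3 authentication headers are omitted for brevity:

$ curl -X PUT "http://rgw.example.com/mybucket/metadata.json" \
       -H 'If-Match: "5d41402abc4b2a76b9719d911017c592"' \
       -H 'Content-Type: application/json' \
       --data '[{"id":"foo"}]'
# expectation: 200 if the stored ETag still matches, 412 Precondition Failed if another client won the race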

[ceph-users] Question about placing different pools on different osds

2014-07-06 Thread Erik Logtenberg
Hi, I have some OSDs on HDDs and some on SSDs, just like the example in these docs: http://ceph.com/docs/firefly/rados/operations/crush-map/ Now I'd like to place an erasure-coded pool on the HDDs and a replicated (cache) pool on the SSDs. In order to do that, I have to split the crush map …
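The usual round trip for splitting the CRUSH map by hand looks roughly like this (a sketch; the rule numbers and pool names are placeholders):

$ ceph osd getcrushmap -o crush.bin     # dump the compiled map
$ crushtool -d crush.bin -o crush.txt   # decompile to editable text
$ vi crush.txt                          # add separate roots/rules for the hdd and ssd OSDs
$ crushtool -c crush.txt -o crush.new   # recompile
$ ceph osd setcrushmap -i crush.new     # inject the edited map
$ ceph osd pool set ecpool crush_ruleset 1      # point each pool at its rule
$ ceph osd pool set cachepool crush_ruleset 2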

Re: [ceph-users] write performance per disk

2014-07-06 Thread VELARTIS Philipp Dürhammer
Hi, yes, I did a test now with 16 instances with 16 and 32 threads each. The absolute maximum was 1100 MB/s, but the network was still not saturated. All disks had the same load at about 110 MB/s; the maximum I got from the disks using direct access was 170 MB/s writes... this is not too bad a value …
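For a cluster-level comparison, a write benchmark can be run directly against a pool with rados bench; a sketch, with the pool name as a placeholder:

$ rados bench -p testpool 60 write -t 16   # 60-second write test, 16 concurrent operations
$ rados bench -p testpool 60 write -t 32   # same test at 32 concurrent operations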

[ceph-users] Ceph Setup Woes

2014-07-06 Thread Chris @ VeeroTech.net
Hello all, Looking for some support on installation. I've followed the installation guide located on the main website. I have a virtual server that I will be using for the admin node and first monitor (ceph-master), and three physical servers for OSD usage, two of which will be used for monitors (ceph1, c…
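For reference, the quick-start flow from the admin node looks roughly like this; a sketch using the hostnames from the post, with the remaining hostnames and the disk paths as placeholders:

$ ceph-deploy new ceph-master                  # write the initial cluster config with the first mon
$ ceph-deploy install ceph-master ceph1 ceph2 ceph3
$ ceph-deploy mon create-initial
$ ceph-deploy osd create ceph1:/dev/sdb ceph2:/dev/sdb ceph3:/dev/sdb
$ ceph-deploy admin ceph-master ceph1 ceph2 ceph3   # push the config and admin keyring out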

Re: [ceph-users] Combining MDS Nodes

2014-07-06 Thread Wido den Hollander
On 07/06/2014 02:42 PM, Lazuardi Nasution wrote: Hi, Is it possible to combine MDS and MON or OSD inside the same node? Which one is better, MON with MDS or OSD with MDS? Yes, not a problem. Be aware that the MDS can be memory-hungry depending on your active data set. There is no golden rule for …
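When the MDS shares a node with a MON or OSDs, its memory use can be bounded somewhat by capping the metadata cache; a ceph.conf sketch, where 100000 inodes is only an example value:

[mds]
    mds cache size = 100000   # number of inodes kept in cache; lower it to trade performance for memory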

[ceph-users] Combining MDS Nodes

2014-07-06 Thread Lazuardi Nasution
Hi, Is it possible to combine MDS and MON or OSD inside the same node? Which one is better, MON with MDS or OSD with MDS? How do I configure OSD and MDS to allow two kinds of public network connections, standard (1 GbE) and jumbo (10 GbE)? I want to take advantage of jumbo frames for some support…
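One common split over two links is to keep client traffic on the 1 GbE public network and OSD replication on the 10 GbE jumbo-frame network; a ceph.conf sketch with placeholder subnets:

[global]
    public network  = 192.168.1.0/24   # 1 GbE: clients, MON and MDS traffic
    cluster network = 10.10.10.0/24    # 10 GbE with jumbo frames: OSD replication and recovery traffic

Note that MON and MDS daemons only use the public network; only OSD-to-OSD traffic moves to the cluster network.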