When creating an image, I see these warnings in ceph.log, but what do they
mean? I am new to Ceph. Thanks.
2014-07-07 14:33:04.838406 mon.0 172.17.6.176:6789/0 31 : [INF] pgmap
v2060: 192 pgs: 192 stale+active+clean; 1221 MB data, 66451 MB used, 228
GB / 308 GB avail
2014-07-07 14:34:39.635483 osd.
Your PGs are not active+clean, so no I/O is possible.
Are your OSDs running?
$ sudo ceph -s
That should give you more information about what to do.
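If the PGs stay stale, something along these lines should show which PGs are
stuck and how many OSDs are actually up (a hedged sketch; these are standard
Ceph CLI commands, adjust to your release):

$ sudo ceph health detail
$ sudo ceph pg dump_stuck stale
$ sudo ceph osd stat

The last command prints how many OSDs exist versus how many are up and in,
which is usually the quickest way to spot missing daemons.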
Wido
Thanks. This is the info output; I see the OSDs are running.
Can you help more? Thanks.
root@ceph2:~# ceph -s
health HEALTH_WARN 192 pgs st
You'd have to see why the other daemons are not running. Try:
$ ceph osd tree
And start the missing OSDs.
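As a hedged example, starting a stopped OSD depends on your distribution and
init system; "osd.0" / "id=0" below is just a placeholder for whichever OSD
shows as down in the tree:

$ sudo start ceph-osd id=0          # Ubuntu with upstart
$ sudo service ceph start osd.0     # sysvinit
$ sudo systemctl start ceph-osd@0   # systemd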
But I have already removed them with ceph osd rm. So what should I do now?
Thanks.
Hi,
I have 135 degraded pgs in the system. How can I remove them?
They are in a test environment, and the data is not important.
Thanks for the kind help.
root@ceph2:~# ceph osd tree
# id    weight  type name       up/down reweight
-1 0.8398 root default
-2 0.8398 host ceph2
0
Hi,
I have resolved it by running these commands:
root@ceph2:~# ceph osd crush rm osd.0
removed item id 0 name 'osd.0' from crush map
root@ceph2:~# ceph osd crush rm osd.1
removed item id 1 name 'osd.1' from crush map
root@ceph2:~# ceph osd crush rm osd.2
removed item id 2 name 'osd.2' from crush map
root@cep
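For completeness, in a throwaway test cluster the rest of the cleanup usually
looks roughly like the following (a hedged sketch: the osd ids and the "rbd"
pool name are assumptions based on the output above, and deleting a pool
destroys all of its data):

root@ceph2:~# ceph auth del osd.0
root@ceph2:~# ceph osd rm 0
root@ceph2:~# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
root@ceph2:~# ceph osd pool create rbd 128

Removing the dead OSDs from both the CRUSH map and the OSD map, then
recreating the affected pools, gets rid of the stale/degraded PGs because the
cluster no longer expects data on OSDs that no longer exist.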
What is the IO throughput (MB/s) for each of the test cases?
Thanks.
On 14-7-9 6:57 PM, Xabier Elkano wrote:
Hi,
I was doing some tests in my cluster with the fio tool: one fio instance
with 70 jobs, each job writing 1GB of random data with a 4K block size. I did
this test with 3 variations:
1- Creating 70 images