Hello,
After installing Ceph I tried to watch it with ceph -w:
2015-10-28 14:54:08.035995 mon.0 [INF] pgmap v82: 192 pgs: 104 active+degraded+remapped, 88 creating+incomplete; 0 bytes data, 36775 MB used, 113 GB / 156 GB avail
2015-10-28 14:54:12.327050 mon.0 [INF] pgmap v83: 192 pgs: 104 act[...]
Hello,
$ ceph osd stat
osdmap e18: 2 osds: 2 up, 2 in
This is what it shows.
Does this mean I need to add a third OSD? I just used the default setup.
Thanks.
On 2015/10/28 Wednesday 19:53, Gurjar, Unmesh wrote:
Are all the OSDs being reported as 'up' and 'in'? This can be checked by
executing 'ceph osd stat'.
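For reference, whether two OSDs suffice depends on the pool's replica count rather than a fixed minimum; a quick way to check it (assuming the default "rbd" pool; the output shown is the usual Firefly-era default) is:

$ ceph osd pool get rbd size
size: 3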
On 29 October 2015 at 10:38, Lindsay Mathieson wrote:
On 29 October 2015 at 10:29, Wah Peng <wah_p...@yahoo.com.sg> wrote:
$ ceph osd stat
osdmap e18: 2 osds: 2 up, 2 in
This is what it shows.
Does this mean I need to add a third OSD? I just used the default setup.
If you went with the default setup [...]
On 2015/10/29 Thursday 8:55, Robert LeBlanc wrote:
Please paste 'ceph osd tree'.
Robert LeBlanc
Sent from a mobile device, please excuse any typos.
On Oct 28, 2015 6:54 PM, "Wah Peng" <wah_p...@yahoo.com.sg> wrote:
Hello,
Just did it, but still no good health. Can you help?
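For reference, on a small test cluster the requested output looks roughly like the sketch below (hypothetical IDs and weights); if both OSDs hang under a single host entry, the default rule, which places each replica on a different host, cannot be satisfied:

$ ceph osd tree
# id    weight  type name       up/down reweight
-1      2       root default
-2      2               host ceph3
0       1                       osd.0   up      1
1       1                       osd.1   up      1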
Wow, this sounds hard to me. Can you show the details?
Thanks a lot.
On 2015/10/29 Thursday 9:01, Robert LeBlanc wrote:
You need to change the CRUSH map to select osd instead of host.
Robert LeBlanc
Sent from a mobile device, please excuse any typos.
On Oct 28, 2015 7:00 PM, "Wah Peng" <wah_p...@yahoo.com.sg> wrote:
On Behalf Of Wah Peng
Sent: 2015-10-29 9:14
To: Robert LeBlanc
Cc: Lindsay Mathieson; Gurjar, Unmesh; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] creating+incomplete issues
Wow, this sounds hard to me. Can you show the details?
Thanks a lot.
On 2015/10/29 Thursday 9:01, Robert LeBlanc wrote:
You need to change the CRUSH map to select osd instead of host.
http://docs.ceph.com/docs/master/rados/operations/crush-map/#editing-a-crush-map
Robert LeBlanc
Sent from a mobile device, please excuse any typos.
On Oct 28, 2015 7:46 PM, "Wah Peng" <wah_p...@yahoo.com.sg> wrote:
Is there an existing ceph sub-command for this, instead of changing the
config file? :)
On 2015/10/29 Thursday 9:24, Robert LeBlanc wrote:
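For reference, a minimal sketch of the decompile/edit/recompile cycle that doc page describes (file names here are placeholders):

$ ceph osd getcrushmap -o crush.bin      # dump the compiled CRUSH map
$ crushtool -d crush.bin -o crush.txt    # decompile to editable text
(edit crush.txt: change "step chooseleaf firstn 0 type host" to "type osd")
$ crushtool -c crush.txt -o crush.new    # recompile
$ ceph osd setcrushmap -i crush.new      # inject the new map

As for a sub-command route: creating a fresh rule with 'ceph osd crush rule create-simple <name> default osd' and pointing the pool at it with 'ceph osd pool set <pool> crush_ruleset <id>' should achieve the same result without hand-editing, though the rule name and pool here are placeholders.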
I have changed this line:
step chooseleaf firstn 0 type osd
(the type changed from "host" to "osd").
Now the health looks fine:
$ ceph health
HEALTH_OK
Thanks for all the help.
On 2015/10/29 Thursday 10:35, Wah Peng wrote:
Hello,
This shows the content of the crush-map file; what con[...]
Hello,
Do you know why this happens? I did it following the official
documentation.
$ sudo rbd map foo --name client.admin
rbd: add failed: (5) Input/output error
The OS kernel:
$ uname -a
Linux ceph.yygamedev.com 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
Confirm the rbd module is loaded (sudo modprobe rbd) on
the ceph-client node and give it a retry.
If you still encounter the issue, post back the snippet of error logs in syslog
or dmesg to take it forward.
Regards,
Unmesh G.
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] [...]
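For reference, the suggested check amounts to something like:

$ sudo modprobe rbd
$ lsmod | grep rbd                       # confirm the module is present
$ sudo rbd map foo --name client.admin
$ dmesg | tail                           # look for rbd/libceph errors if it fails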
$ ceph -v
ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
thanks.
On 2015/10/29 Thursday 18:23, Ilya Dryomov wrote:
What's your ceph version and what does dmesg say? 3.2 is *way* too
old, you are probably missing more than one required feature bit. See
http://docs.ceph.com/docs/ma[...]
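For reference, a kernel missing feature bits typically logs lines like "libceph: ... feature set mismatch ..." in dmesg; one commonly cited workaround for very old kernels (an assumption here, and it gives up newer placement behaviour) is reverting the CRUSH tunables:

$ dmesg | grep -i 'feature set mismatch'
$ ceph osd crush tunables legacy         # restrict the cluster to old feature bits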
Hello,
For a production application (for example, OpenStack's block storage), is
it better to store data with two replicas or three? Do two replicas give
better performance at lower cost?
Thanks.
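For reference, the replica count is a per-pool setting, so the trade-off can be made per workload; a minimal sketch, with the pool name "volumes" as an assumption:

$ ceph osd pool set volumes size 2       # two copies: less capacity used, cheaper writes
$ ceph osd pool set volumes min_size 1   # keep serving I/O with a single copy left

Two replicas do cost less and write a little faster, but a single failure leaves only one copy of the data.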
Hello,
I followed these steps trying to roll back a snapshot, but it seems to
fail; the files are lost.
root@ceph3:/mnt/ceph-block-device# rbd snap create rbd/bar@snap1
root@ceph3:/mnt/ceph-block-device# rbd snap ls rbd/bar
SNAPID NAME SIZE
2 snap1 10240 MB
root@ceph3:/mnt/ceph-block-device# rbd snap rollback rbd/bar@snap1
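For reference, rolling back an image while it is still mapped and mounted will look like corruption or data loss to the filesystem; a minimal sketch of a safer sequence, assuming the mount point above and the default /dev/rbd/<pool>/<image> device path:

root@ceph3:~# umount /mnt/ceph-block-device      # stop using the image first
root@ceph3:~# rbd snap rollback rbd/bar@snap1
root@ceph3:~# mount /dev/rbd/rbd/bar /mnt/ceph-block-device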
Hello,
what's the disadvantage of setting the PG number too large or too small
relative to the OSD number?
Thanks.
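For reference, a commonly used rule of thumb is roughly (OSD count x 100) / replica count, rounded up to a power of two; pg_num can be raised (but not lowered) on a live pool:

$ ceph osd pool set rbd pg_num 128
$ ceph osd pool set rbd pgp_num 128      # must follow pg_num for rebalancing to start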
Why does data loss happen? Thanks.
On 2015/11/13 Friday 16:13, Vickie ch wrote:
On the other hand, if the PG number is too large while the OSD number is
too small, there is a chance of data loss.