Hi, I tried to download the Firefly RPM package, but found two RPMs in
different folders. What is the difference between 0.87 and 0.80.7?
http://ceph.com/rpm/el6/x86_64/ceph-0.87-0.el6.x86_64.rpm
http://ceph.com/rpm-firefly/el6/x86_64/ceph-0.80.7-0.el6.x86_64.rpm
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
.8@-1(probing).data_health(0) update_stats avail 93% total 302379456 used 3150804 avail 283868652
2014-06-12 17:12:10.267453 7f1f619fa700 0 mon.8@-1(probing).data_health(0) update_stats avail 93% total 302379456 used 3150808 avail 283868648
2014-06-12 17:13:10.267622 7f1f619f
Thanks, Bonin. Do you have 48 OSDs in total, or 48 OSDs on each storage node?
Do you think "kernel.pid_max = 4194303" is reasonable, given that it increases
the value a lot from the default OS setting?
Wei Cao (Buddy)
-Original Message-
From: Maciej Bonin [mailto:maciej.bo...@m247.
Hi, what is the recommended value for /proc/sys/kernel/pid_max? Is 32768 enough
for a Ceph cluster with 4 nodes (40 1 TB OSDs on each node)? My Ceph nodes
already run into a "create thread fail" problem in the OSD log, whose root
cause is pid_max.
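For reference, a minimal sketch of how pid_max is usually raised; 4194303 is
close to the kernel's upper bound on 64-bit, and the right value for a given
node is a judgment call:
# apply immediately (runtime only)
sysctl -w kernel.pid_max=4194303
# persist across reboots
echo "kernel.pid_max = 4194303" >> /etc/sysctl.conf
sysctl -p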
Wei
Hi,
Is the "osd ops threads" parameter still valid in Firefly? I did not find any
info related to it in the ceph.com online documentation.
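A sketch of one way to check whether an option is still recognized by a running
daemon in your build (osd.0 and the internal spelling osd_op_threads are
assumptions on my part):
# query the option through the OSD's admin socket; an unknown option returns an error
ceph daemon osd.0 config get osd_op_threads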
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users
7f2ce92cb700 time 2014-06-05 10:27:54.703693
Wei Cao (Buddy)
-Original Message-
From: Cao, Buddy
Sent: Thursday, June 5, 2014 11:19 PM
To: 'Sage Weil'
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] ceph osd down and out
Sage,
Yes, I already set the max o
Cao (Buddy)
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Thursday, June 5, 2014 11:11 PM
To: Cao, Buddy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph osd down and out
This usually happens on larger clusters when you hit the max fd limit.
Add
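A sketch of the usual way that limit gets raised, assuming a sysvinit/Upstart-
managed cluster; the value is illustrative, not a recommendation:
# in ceph.conf, applied by the init script when it launches each daemon
[global]
    max open files = 131072
# or, for daemons started by hand, raise the shell limit first
ulimit -n 131072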
3 keyvaluestore
1/ 3 journal
0/ 5 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
-2/-2 (syslog threshold)
-1/-1 (stderr threshold)
max_recent 1
max_new 1000
log_file /var/log/ceph/ceph-osd.0.log
--- end dump of recent events ---
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
Some of the OSDs in my environment keep trying to connect to the monitors/Ceph
nodes, but get connection refused and go down/out. It gets even worse when I
try to initialize 100+ OSDs (800 GB HDD for each OSD): most of the OSDs run
into the same problem connecting to the monitor. I checked the monitor sta
Hello, one of my OSD logs keeps printing the message below; do you know what it
is?
2014-06-02 19:01:18.222089 7f246ac1d700 0 xfsfilestorebackend(/var/lib/ceph/osd/osd10) set_extsize: FSSETXATTR: (22) Invalid argument
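If I remember correctly, this comes from the XFS extent-size hint the filestore
tries to set on object files; a hedged sketch of how to confirm and silence it,
assuming the filestore_xfs_extsize option exists in your build (osd.10 is
inferred from the path above):
# check whether the running OSD knows the option
ceph daemon osd.10 config get filestore_xfs_extsize
# if it does, disable the hint in ceph.conf and restart the OSD
[osd]
    filestore xfs extsize = false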
Wei Cao
___
ceph-users mailing
2014-05-30 19:02:38.458643 7fa53a9527a0 0 ceph version 0.80 (b78644e7dee100e48dfeca32c9270a6b210d3003), process ceph-osd, pid 5479
2014-05-30 19:02:38.458744 7fa53a9527a0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/osd11: (2) No such file or directory
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Thanks, Ashish. It returns the result below:
[root@jf-n1 ~]# stop ceph-mon id=0
stop: Unknown job: ceph-mon
[root@jf-n1 ~]#
Wei Cao (Buddy)
From: Ashish Chandra [mailto:mail.ashishchan...@gmail.com]
Sent: Friday, May 30, 2014 5:16 PM
To: Cao, Buddy
Cc: ceph-users@lists.ceph.com
Subject: Re
I tried to stop a monitor via "service ceph -a stop mon.3"; however, after a
while, the monitor is still in the cluster. I only see documentation for
"remove monitor"; can Ceph support just stopping a monitor?
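A sketch of the two init styles commonly seen around Firefly (which one applies
depends on how the node was deployed; mon.3 is taken from the question). Note
that stopping a monitor only takes it down; it stays in the monmap until it is
explicitly removed:
# sysvinit-style: run on the monitor's own host, without -a
service ceph stop mon.3
# Upstart-style (e.g. ceph-deploy on Ubuntu)
stop ceph-mon id=3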
___
ceph-users mailing list
ceph-users@lists.ceph.com
2.168.123.115:6789/0}, election epoch 40, quorum 0,2,3,4 1,12,2,0
mdsmap e4: 1/1/1 up {0=0=up:active}
osdmap e126: 20 osds: 15 up, 15 in
pgmap v323: 2880 pgs, 3 pools, 1884 bytes data, 20 objects
604 MB used, 76055 MB / 76660 MB avail
2880 active+clean
Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
In Firefly, I added the lines below to the [global] section in ceph.conf;
however, after creating the cluster, the pg num of the default pools
(metadata/data/rbd) is still over 900, not 375. Any suggestions?
osd pool default pg num = 375
osd pool default pgp num = 375
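If I remember correctly, the initial data/metadata/rbd pools are sized by the
monitor from "osd pg bits" and the OSD count at cluster-creation time, so "osd
pool default pg num" mainly affects pools created afterwards. A sketch of how
to check what actually took effect (mon.0 and the option spelling are
assumptions on my part):
# pg count the pre-created pools ended up with
ceph osd pool get data pg_num
ceph osd pool get rbd pg_num
# confirm the monitor actually picked up the configured default
ceph daemon mon.0 config get osd_pool_default_pg_num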
___
Thanks Peng, it works!
Wei Cao (Buddy)
-Original Message-
From: xan.peng [mailto:xanp...@gmail.com]
Sent: Friday, May 16, 2014 3:34 PM
To: Cao, Buddy
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] "ceph pg dump summary -f json" question
Weird. Maybe you can check the source code
pool 5      3   0      0   0            262   4   4
pool 6      0   0      0   0              0   0   0
pool 7      0   0      0   0              0   0   0
pool 8      0   0      0   0              0   0   0
sum     35755   0  56480   149680760857
Sage, does Firefly require manually setting "ulimit -n" when adding a new
storage node with 16 OSDs (500 GB disks)?
Wei Cao (Buddy)
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Thursday, May 15, 2014 10:49 PM
To: Cao, Buddy
Cc: ceph-us...@ceph.com
Subject:
Hi there,
"ceph pg dump summary -f json" does not return as much data as "ceph pg dump
summary"; is there any way to get the full JSON-format data for "ceph pg dump
summary"?
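For reference, a sketch of invocations that generally return the complete dump
as JSON (not necessarily the variant Peng suggested):
# full pg dump, all sections, as JSON
ceph pg dump --format=json
# pretty-printed, if your version supports it
ceph pg dump --format=json-pretty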
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
        8.77     10.99    1701.90    1465844   226897024
sda     4.55      0.60    1331.50      80010   177516216
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
timeout 10 $BINDIR/ceph -c $conf --name=osd.$id --keyring=$osd_keyring osd crush create-or-move -- $id ${osd_weight:-${defaultweight:-1}} $osd_location || :"
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.c
Hi,
I noticed that after creating a Ceph cluster, the rulesets for the default
pools (data, metadata, rbd) are 0, 1, and 2 respectively. After the cluster is
created, is there any impact if I change the default rulesets to other
rulesets?
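A sketch of how a pool's ruleset is typically switched in Firefly (ruleset 3 is
an illustrative id); the main impact to expect is data movement if the new rule
maps PGs to different OSDs:
# point the data pool at a different crush ruleset
ceph osd pool set data crush_ruleset 3
# confirm the change
ceph osd pool get data crush_ruleset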
Wei Cao (Buddy)
___
ceph-users
BTW, I'd like to know: after I change "from rack" to "from host", if I add more
racks with hosts/OSDs to the cluster, will Ceph choose the OSDs for a PG from
only one zone, or will it choose randomly from several different zones?
Wei Cao (Buddy)
-Original Mess
Thanks so much, Gregory, it solved the problem!
Wei Cao (Buddy)
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Wednesday, May 14, 2014 2:00 AM
To: Cao, Buddy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] crushmap question
You just use a type other than
If I add a new rack to the crushmap, the pg status finally gets to
active+clean. However, my customer has ONLY one rack in their environment, so
it is hard for me to work around this by asking them to set up several racks.
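For context, a minimal sketch of a replicated rule whose failure domain is the
host rather than the rack, which is the usual way to let a single-rack cluster
reach active+clean (the rule name, ruleset id, and root name are illustrative):
rule replicated_single_rack {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        # assumes the crush root is named "default"
        step take default
        # pick each replica from a different host instead of a different rack
        step chooseleaf firstn 0 type host
        step emit
}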
Wei Cao (Buddy)
___
ceph-users mail
Thanks Sage, that clears up my confusion, especially about the OSD
journal/data/keyring. And it's good to know ceph-disk is the right tool to use.
Wei Cao (Buddy)
-Original Message-
From: Sage Weil [mailto:s...@inktank.com]
Sent: Tuesday, May 6, 2014 10:18 PM
To: Cao, Buddy
Cc: ceph-us...@ceph.com
corresponding rulesets. Will stuck unclean PGs or other odd statuses appear?
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Given the change from mkcephfs to ceph-deploy, I feel ceph.conf is no longer
the recommended way to manage Ceph configuration. Is that true? If so, how do I
get at the configuration previously kept in ceph.conf, e.g., data drive,
journal drive, [osd] settings, etc.?
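Whatever the recommended workflow is, the effective settings can still be read
back from a running daemon over its admin socket; a sketch (osd.0 is
illustrative):
# dump every effective option of a running OSD, including osd_data and osd_journal
ceph daemon osd.0 config show
# narrow to the paths previously kept in ceph.conf
ceph daemon osd.0 config show | egrep 'osd_data|osd_journal'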
Wei Cao (Buddy
memory ceph.conf
On 30.04.2014 09:38, Cao, Buddy wrote:
> Thanks Robert. The auto-created ceph.conf file in local working directory is
> too simple, almost nothing inside it. How do I know the osd.x created by
> ceph-deploy, and populate these kinda necessary information into ceph.conf?
This inf
Do you think "osd journal size=0" would cause any problems?
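My understanding from the documentation of that era (hedged) is that "osd
journal size = 0" is intended only for the case where the journal is a raw
block device, in which case the whole device is used; with a file-based journal
an explicit size is expected, e.g.:
[osd]
    # journal kept as a plain file: give it an explicit size in MB (value illustrative)
    osd journal size = 5120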
Wei Cao (Buddy)
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Wednesday, April 30, 2014 3:48 PM
To: Cao, Buddy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] mkcephfs questions
I f
Thanks, Robert. The auto-created ceph.conf file in the local working directory
is too simple; there is almost nothing inside it. How do I know which osd.x
were created by ceph-deploy, and how do I populate that kind of necessary
information into ceph.conf?
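A sketch of how the OSD ids and their hosts can be recovered from the cluster
itself rather than from ceph.conf:
# lists every osd.x with its host, weight, and up/down state
ceph osd tree
# per-OSD details (uuid, addresses, state) are also in the osdmap
ceph osd dump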
Wei Cao (Buddy)
-Original Message-
From: ceph-users-boun
Thanks for your reply, Haomai. There is no /etc/ceph/ceph.conf on any of the
Ceph nodes; that is why I raised the question in the first place.
Isn't that what ceph-deploy is supposed to do? Some people tell me ceph-deploy
doesn't distribute ceph.conf to the Ceph nodes but keeps it in memory; is that
true?
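As far as I know, ceph-deploy keeps the generated ceph.conf in its local
working directory rather than "in memory", and it can push the file out to the
nodes; a sketch (hostnames are placeholders):
# run from the ceph-deploy working directory that contains ceph.conf
ceph-deploy config push node1 node2 node3
# "ceph-deploy admin <host>" copies ceph.conf together with the admin keyring
ceph-deploy admin node1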
Wei
Thanks for your reply, Haomai. What I don't understand is why the number of
stuck unclean PGs stays the same after 12 hours. Is that common behavior or
not?
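A sketch of commands that usually help narrow this down (the pg id is a
placeholder):
# list the pgs currently stuck unclean
ceph pg dump_stuck unclean
# ask one of them why it is not clean
ceph pg 2.1f query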
Wei Cao (Buddy)
-Original Message-
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Wednesday, April 30, 2014 1
Hi,
After I set up a Ceph cluster via ceph-deploy, how do I get the ceph.conf? I
did not see any ceph.conf generated on the storage nodes; is it in memory?
Wei Cao (Buddy)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com
                osd.9                   up      1
-13     1       host vsm4_sata_zone_a
17      1               osd.17          up      1
-23     0.0     zone zone_c_sata
-28     0.0     zone zone_b_sata
Wei Cao (Bu