Hi,
I have also seen the same logs on my OSDs, but the steps I followed were:
Setup: 4 OSD nodes (node1, node2, node3, node4), each with 8 OSDs.
Node1 got rebooted, but an OSD on node2 went down.
Logs from monitor:
2015-02-02 20:28:28.766087 7fbaabea4700 1 mon.rack1-ram-6@0(leader).osd e248 pr
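For what it's worth (not part of the original report), a quick way to confirm which OSDs the monitors actually marked down, and on which host, before digging into the logs:

    ceph osd tree               # up/down state of every OSD, grouped by host
    ceph osd dump | grep down   # the osdmap entries for OSDs marked down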
Hi Mika,
The command below will set the ruleset for a pool (note that the pool name is required):
ceph osd pool set <pool-name> crush_ruleset 1
For more info: http://ceph.com/docs/master/rados/operations/crush-map/
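A concrete sketch, assuming a pool named rbd (substitute your own pool name and ruleset id):

    ceph osd pool set rbd crush_ruleset 1   # point the pool at CRUSH ruleset 1
    ceph osd pool get rbd crush_ruleset     # verify the change took effect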
Thanks
Sahana
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vickie ch
Sent: Tuesday, February 03, 2015
Sahana Lokeshappa
Test Development Engineer I
SanDisk Corporation
3rd Floor, Bagmane Laurel, Bagmane Tech Park
C V Raman nagar, Bangalore 560093
T: +918042422283
sahana.lokesha...@sandisk.com
From: Ta Ba Tuan [mailto:tua...@vccloud.vn]
Sent: Friday, November 07, 2014 5:18 PM
To: Sahana Lokeshappa; ceph-users
which are in a degraded state. The objects included in that PG are in a degraded state.
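Not from the original message, but the usual first look at which PGs (and how many objects) are degraded:

    ceph health detail           # summarizes degraded PGs and object counts
    ceph pg dump_stuck unclean   # lists PGs stuck outside active+clean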
Thanks
Sahana Lokeshappa
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ta Ba Tuan
Sent: Friday, November 07, 2014 2:49 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to
state
Thanks
Sahana Lokeshappa
Test Development Engineer I
SanDisk Corporation
3rd Floor, Bagmane Laurel, Bagmane Tech Park
C V Raman nagar, Bangalore 560093
T: +918042422283
sahana.lokesha...@sandisk.com
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Beha
restarting OSDs. It didn't work.
Thanks
Sahana Lokeshappa
From: Craig Lewis [mailto:cle...@centraldesktop.com]
Sent
Replies inline:
Sahana Lokeshappa
-----Original Message-----
From: Sage Weil [mailto:sw...@redhat.com]
Sent: Wednesday
Hi All,
Can anyone help me out here?
Sahana Lokeshappa
From: Varada Kari
Sent: Monday, September 22, 2014 11:52 PM
To: Sage Weil; Sahana Lokeshappa; ceph-us...@ceph.com;
ceph-commun...@lists.ceph.com
Subject: RE: [Ceph-community] Pgs are in stale+down+peering state
last acting [12,25,23]
Please, can anyone explain why the PGs are in this state?
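For anyone else hitting this, the first diagnostics I know of (the pgid below is a placeholder for one of the stale+down+peering PGs):

    ceph pg dump_stuck stale   # list the PGs stuck in stale
    ceph pg <pgid> query       # shows why peering is blocked for that PG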
Sahana Lokeshappa
Hi Santhosh,
Copy the updated ceph.conf and keyrings from the admin node to all cluster nodes (they live in /etc/ceph/). If you are using ceph-deploy, run this command from the admin node:
ceph-deploy --overwrite-conf admin cluster-node1 cluster-node2
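If ceph-deploy is not in use, a plain copy does the same thing. A minimal sketch, assuming the standard file names and the node names from the example above:

    # run from the admin node; pushes the config and admin keyring out
    for node in cluster-node1 cluster-node2; do
        scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring ${node}:/etc/ceph/
    done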
Sahana Lokeshappa
hreadPool::WorkThreadSharded::entry()+0x10) [0xa4c420]
10: (()+0x8182) [0x7f920f579182]
11: (clone()+0x6d) [0x7f920d91a30d]
Raised tracker: http://tracker.ceph.com/issues/8887
Logs are attached to the tracker.
Thanks
Sahana Lokeshappa
19:01 ceph-osd.1.log.1.gz
-rw-r--r-- 1 root root        0 Jul 17 06:39 ceph-osd.2.log
-rw-r--r-- 1 root root 17969054 Jul 16 19:01 ceph-osd.2.log.1.gz
Because of this, I lost logs until I restarted the OSDs.
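For what it's worth, a full restart shouldn't be needed just to get logging back. Two lighter options, assuming osd.2 from the listing above and a version whose admin socket supports log reopen:

    ceph daemon osd.2 log reopen   # ask the daemon to reopen its log file
    killall -q -1 ceph-osd         # or SIGHUP, as the stock logrotate script does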
Thanks
Sahana Lokeshappa
From: Ceph-community [mailto:ceph-community-boun...@lists.ceph.com] On Behalf Of
8'2116 lcod 1551'2200 mlcod 0'0 active+clean] agent_stop
-3018> 2014-06-19 08:28:27.037570 7fa95b007700 10 osd.23 pg_epoch: 1551 pg[4.53( v 1551'2203 (0'0,1551'2203] local-les=1500 n=461 ec=108 les/c 1500/1510 1499/1499/1499) [23,18,8] r=0 lpr=1499 luod=0'0 c