Also please post the ceph osdmap (ceph osd getmap -o /tmp/map will write
the osdmap to /tmp/map).
-Sam

On Mon, Apr 15, 2013 at 10:09 AM, Samuel Just <[email protected]> wrote:
> Can you post the output of ceph osd tree?
> -Sam
>
> On Mon, Apr 15, 2013 at 9:52 AM, Jeppesen, Nelson
> <[email protected]> wrote:
>> Thanks for the help, but how do I track down this issue? If data is
>> inaccessible, that's a very bad thing given that this is production.
>>
>> # ceph osd dump | grep pool
>> pool 13 '.rgw.buckets' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 4800 pgp_num 4800 last_change 1198 owner 0
>> pool 14 '.rgw' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 242 owner 18446744073709551615
>> pool 15 '.rgw.gc' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 243 owner 18446744073709551615
>> pool 16 '.rgw.control' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 244 owner 18446744073709551615
>> pool 17 '.users.uid' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 246 owner 0
>> pool 18 '.users.email' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 248 owner 0
>> pool 19 '.users' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 250 owner 0
>> pool 20 '.usage' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 256 owner 18446744073709551615
>> pool 21 '.users.swift' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 1138 owner 0
>>
>> Nelson Jeppesen
>>    Disney Technology Solutions and Services
>>    Phone 206-588-5001
>>
>> -----Original Message-----
>> From: Gregory Farnum [mailto:[email protected]]
>> Sent: Monday, April 15, 2013 9:34 AM
>> To: Jeppesen, Nelson
>> Cc: [email protected]
>> Subject: Re: [ceph-users] ceph -w question
>>
>> "Incomplete" means that there are fewer than the minimum number of copies
>> of the placement group (by default, half of the requested size, rounded up).
>> In general, rebooting one node shouldn't do that unless you've changed the
>> minimum size on the pool, and it does mean that data in those PGs is
>> inaccessible.
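[Editor's note: the "half of the requested size, rounded up" default Greg describes can be sanity-checked with a short sketch; default_min_copies is a hypothetical helper for illustration, not a Ceph API.]

```python
import math

def default_min_copies(rep_size: int) -> int:
    # Hypothetical helper: the minimum number of copies a PG needs
    # to serve I/O under the default rule described above (half the
    # requested replica count, rounded up).
    return math.ceil(rep_size / 2)

# The pools in this thread use rep size 2, so a single surviving
# copy should keep a PG out of the 'incomplete' state:
assert default_min_copies(2) == 1
assert default_min_copies(3) == 2
```

So with rep size 2, losing one replica (e.g. during a single-node reboot) would normally leave PGs degraded rather than incomplete, which is why the behavior reported here is surprising.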
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>
>>
>> On Mon, Apr 15, 2013 at 9:01 AM, Jeppesen, Nelson 
>> <[email protected]> wrote:
>>> When I reboot any node in my prod environment with no activity, I see
>>> incomplete PGs. Is that a concern? Does that mean some data is unavailable?
>>> Thank you.
>>>
>>>
>>>
>>> # ceph -v
>>>
>>> ceph version 0.56.4 (63b0f854d1cef490624de5d6cf9039735c7de5ca)
>>>
>>>
>>>
>>> # ceph -w
>>>
>>> 2013-04-15 08:57:27.712065 mon.0 [INF] pgmap v585220: 4864 pgs: 4443 active+clean, 1 active+degraded, 420 incomplete; 3177 GB data, 6504 GB used, 38186 GB / 44691 GB avail; 252/8168154 degraded (0.003%)
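[Editor's note: reading the pgmap line above by hand, 420 of the 4864 PGs are incomplete (about 8.6%). A throwaway sketch (not a Ceph tool) of pulling the state counts out of that summary line:]

```python
import re

# The PG-state portion of the 'ceph -w' pgmap summary quoted above.
line = ("pgmap v585220: 4864 pgs: 4443 active+clean, "
        "1 active+degraded, 420 incomplete")

# Map each PG state to its count, e.g. {'incomplete': 420, ...}.
states = {m.group(2): int(m.group(1))
          for m in re.finditer(r"(\d+) ([a-z+]+)", line.split("pgs:")[1])}

assert states["incomplete"] == 420
assert sum(states.values()) == 4864  # all 4864 PGs accounted for
```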
>>>
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> [email protected]
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>