On Tue, Jul 1, 2014 at 11:57 AM, Brian Lovett
<brian.lov...@prosperent.com> wrote:
> Gregory Farnum <greg@...> writes:
>
>> ...and one more time, because apparently my brain's out to lunch today:
>>
>> ceph osd tree
>>
>> *sigh*
>>
>
> haha, we all have those days.
>
> [root@monitor01 ceph]# ceph osd tree
> # id    weight  type name       up/down reweight
> -1      14.48   root default
> -2      7.24            host ceph01
> 0       2.72                    osd.0   up      1
> 1       0.9                     osd.1   up      1
> 2       0.9                     osd.2   up      1
> 3       2.72                    osd.3   up      1
> -3      7.24            host ceph02
> 4       2.72                    osd.4   up      1
> 5       0.9                     osd.5   up      1
> 6       0.9                     osd.6   up      1
> 7       2.72                    osd.7   up      1
>
> I notice that the weights are all over the place. I was planning on the
> following once I got things going.
>
> 6 1TB SSD OSDs (across 3 hosts) as a writeback cache pool, and 6 3TB SATA
> OSDs behind them in another pool for data that isn't accessed as often.
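
A cache-tier layout along those lines is usually wired up with something
like the commands below. This is only a rough sketch: the pool names
("ssd-cache", "sata-cold") and pg counts are placeholders, and each pool
still needs a CRUSH rule that maps it to the right set of OSDs.

    # create the backing (SATA) and cache (SSD) pools -- pg counts are examples
    ceph osd pool create sata-cold 256 256
    ceph osd pool create ssd-cache 128 128

    # put the SSD pool in front of the SATA pool as a writeback cache
    ceph osd tier add sata-cold ssd-cache
    ceph osd tier cache-mode ssd-cache writeback
    ceph osd tier set-overlay sata-cold ssd-cache

    # the cache pool needs a hit set so the tiering agent can track object usage
    ceph osd pool set ssd-cache hit_set_type bloom
    ceph osd pool set ssd-cache hit_set_count 1
    ceph osd pool set ssd-cache hit_set_period 3600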

So those disks are actually different sizes, in proportion to their
weights? It could be having an impact on this, although it *shouldn't*
be an issue. And your tree looks like it's correct, which leaves me
thinking that something is off about your crush rules. :/
Anyway, having looked at that, what are your crush rules? ("ceph osd
crush dump" will provide those and some other useful data in JSON
format. I checked the command this time.)
And can you run "ceph pg dump" and put the output on pastebin for viewing?
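For example, roughly (the output file names are arbitrary, and
--format=json-pretty is the usual ceph CLI option for readable JSON):

    # CRUSH map, rules included, as readable JSON
    ceph osd crush dump --format=json-pretty > crush-dump.json

    # shows which crush ruleset each pool is using
    ceph osd dump | grep pool

    # full placement-group state, suitable for pastebin
    ceph pg dump > pg-dump.txt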
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com