[ceph-users] learning about increasing osd / pg_num for a pool

2016-02-12 Thread John Hogenmiller (yt)
I started a cluster with 9 OSDs across 3 nodes. Then I expanded it to 419 OSDs across 7 nodes. Along the way, I increased the pg_num/pgp_num in the rbd pool. Thanks to help earlier on this list, I was able to do that. Tonight I started to do some perf testing and quickly realized that I never upda
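The common rule of thumb for sizing pg_num (targeting roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two) can be sketched as below. The replica count of 3 and the 100-PGs-per-OSD target are assumptions, not values stated in the message:

```python
import math

def target_pg_count(num_osds, replicas=3, pgs_per_osd=100):
    """Rough Ceph sizing guideline: aim for ~pgs_per_osd placement groups
    per OSD, divided by the replication factor, rounded up to the next
    power of two."""
    raw = num_osds * pgs_per_osd / replicas
    return 2 ** math.ceil(math.log2(raw))

# For the 419-OSD cluster described above, assuming 3x replication:
print(target_pg_count(419))  # -> 16384
```

Growing a cluster from 9 to 419 OSDs moves the suggested value by several powers of two, which is why the pool's pg_num has to be revisited after each expansion.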

[ceph-users] ceph-disk activate fails (after 33 osd drives)

2016-02-12 Thread John Hogenmiller (yt)
I have 7 servers, each containing 60 x 6TB drives in JBOD mode. When I first started, I only activated a couple of drives on 3 nodes as Ceph OSDs. Yesterday, I went to expand to the remaining nodes as well as prepare and activate all the drives. ceph-disk prepare worked just fine. However, ceph-disk
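For context, the prepare/activate sequence described above typically looks like the following. This is a hedged sketch against a live cluster, not output from the poster's environment; the device names are placeholders:

```shell
# Prepare a raw disk as an OSD: partitions it and writes the initial
# filesystem and metadata (device name is an example).
ceph-disk prepare /dev/sdb

# Activate the prepared data partition so the OSD daemon starts and
# joins the cluster. This is the step the poster reports failing
# after roughly 33 drives.
ceph-disk activate /dev/sdb1

# List disk/partition state to see which OSDs prepared or activated.
ceph-disk list
```

A failure that appears only after a few dozen OSDs per host is often a resource ceiling (e.g. kernel limits on threads, file handles, or aio contexts) rather than a problem with any individual drive, which is worth checking before suspecting the disks themselves.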