Hello Frédéric,
Thank you very much for the input. I would like to ask for some feedback
from you, as well as the ceph-users list at large.
The PGCalc tool was created to help steer new Ceph users in the right
direction, but it's certainly difficult to account for every possible
scenario.
> In your clone of pgcalc, the newly created pool
> didn't follow my values in the "Add Pool" dialog. For example, no matter
> what I fill in "Pool Name", I always get "newPool" as the name.
>
> By the way, where can I find the git repository of pgcalc? I can't find it.
Hello John,
Apologies for the error. We will be working to correct it, but in the
interim, you can use http://linuxkidd.com/ceph/pgcalc.html
Thanks,
Michael J. Kidd
Sr. Software Maintenance Engineer
Red Hat Ceph Storage
+1 919-442-8878
On Wed, Jan 11, 2017 at 12:03 AM, 林自均 wrote:
> Hi all,
Hello Martin,
The proper way is to perform the following process:
For all Pools utilizing the same bucket of OSDs:
(Pool1_pg_num * Pool1_size) + (Pool2_pg_num * Pool2_size) + ... + (Pool(n)_pg_num * Pool(n)_size)
--------------------------------------------------------------------------------------------------
                                      Total OSD count

This gives the average number of PG copies each OSD carries.
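A quick sketch of that calculation in Python, purely illustrative (the pool
names, pg_num/size values, and OSD count below are made-up examples, not
recommendations):

    # Per-OSD PG load for pools sharing the same bucket of OSDs.
    # All values here are hypothetical.
    pools = {
        "rbd":         {"pg_num": 2048, "size": 3},
        "cephfs_data": {"pg_num": 1024, "size": 3},
    }
    osd_count = 60

    pg_copies = sum(p["pg_num"] * p["size"] for p in pools.values())
    print(f"PG copies per OSD: {pg_copies / osd_count:.1f}")  # aim for ~100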
Bryan,
If you can read the disk that was osd.102, you may wish to attempt this
process to recover your data:
https://ceph.com/community/incomplete-pgs-oh-my/
Good luck!
Michael J. Kidd
Sr. Software Maintenance Engineer
Red Hat Ceph Storage
On Mon, Jan 4, 2016 at 8:32 AM, Bryan Wright wrote:
Hello Alexander,
One other point on your email: you indicate you'd like each OSD to have
~100 PGs, but depending on your pool size, it seems you may have forgotten
about the additional PG copies associated with replication itself.
Assuming 3x replication in your environment:
70,000 * 3 = 210,000 PG copies to distribute across your OSDs
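To make that concrete with a hypothetical OSD count: on a 600-OSD cluster,
that would be 210,000 / 600 = 350 PG copies per OSD, well above the ~100
target, so pg_num would need to come down accordingly.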
For Firefly / Giant installs, I've had success with the following:
yum install ceph ceph-common --disablerepo=base --disablerepo=epel
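(The --disablerepo flags are there because the base and EPEL repositories
carried their own, older ceph / ceph-common packages at the time, which
could otherwise win dependency resolution over the ceph.com repo.)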
Let us know if this works for you as well.
Thanks,
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Apr 8, 2015 at 8:5
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
- by Red Hat
On Wed, Apr 8, 2015 at 9:07 PM, Michael Kidd wrote:
> For Firefly / Giant installs, I've had success with the following:
>
> yum install ceph ceph-common --disablerepo=base --disablerepo=epel
>
>
This indicates you have multiple networks on the new mon host, but no
definition in your ceph.conf as to which network is public.
In your ceph.conf, add:
public network = 192.168.1.0/24
cluster network = 192.168.2.0/24
(Fix the subnet definitions for your environment)
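If it helps to sanity-check which of the host's addresses falls inside the
public network, here is a minimal Python illustration of the matching logic
(not Ceph's actual code; the addresses are hypothetical, matching the
example subnets above):

    import ipaddress

    # Hypothetical addresses on the new mon host.
    public_net = ipaddress.ip_network("192.168.1.0/24")  # from ceph.conf
    mon_host_addrs = ["192.168.1.15", "192.168.2.15"]

    for addr in mon_host_addrs:
        if ipaddress.ip_address(addr) in public_net:
            print(f"{addr} is on the public network; the mon binds here")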
Then, re-try your new mon