Hi Konstantin,
Mon, 23 Dec 2019 13:47:55 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/18/19 2:16 PM, Lars Täuber wrote:
> > the situation after moving the PGs with osdmaptool is not really better
> > than without:
> >
> > $ ceph osd df class hdd
> > […]
> > MIN/MAX VAR: 0.86/1.08 STDDEV: 2.04 […]
On 12/18/19 2:16 PM, Lars Täuber wrote:
the situation after moving the PGs with osdmaptool is not really better than
without:
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.86/1.08 STDDEV: 2.04
The OSD with the fewest PGs has 66 of them, the one with the most has 83.
Is this the expected result?
Hi Konstantin,
the situation after moving the PGs with osdmaptool is not really better than
without:
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.86/1.08 STDDEV: 2.04
The OSD with the fewest PGs has 66 of them, the one with the most has 83.
Is this the expected result? I'm unsure how much […]
Hi Konstantin,
the cluster has finished its backfilling.
I got this situation:
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.86/1.08 STDDEV: 2.05
Now I created a new upmap.sh and sourced it. The cluster is busy again, moving
about 3% of its objects.
I'll report the result.
Thanks for all your hints.
[…]
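For readers following the thread, the offline upmap cycle being discussed looks roughly like this; pool name and deviation value are taken from the messages above, so treat it as a sketch rather than the exact commands that were run:
$ ceph osd getmap -o om                        # dump the current binary osdmap
$ osdmaptool om --upmap upmap.sh --upmap-pool=cephfs_data --upmap-deviation=0
                                               # write proposed pg-upmap-items commands to upmap.sh
$ source upmap.sh                              # apply the proposals to the cluster
$ ceph -s                                      # watch the misplaced-object percentage decrease
$ ceph osd df class hdd                        # re-check the PG spread once backfill has finished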
Mon, 16 Dec 2019 15:38:30 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 3:25 PM, Lars Täuber wrote:
> > Here it comes.
>
> Maybe there is some bug in osdmaptool: when the number of defined pools is
> less than one, do_upmap is not actually executed.
>
> Try like this:
>
> `osdmaptool osdmap.om --upmap upmap.sh --upmap-pool=cephfs_data
> --upmap-pool=cephfs_metadata --upmap-deviation=0 […]`
On 12/16/19 3:25 PM, Lars Täuber wrote:
Here it comes.
Maybe there is some bug in osdmaptool: when the number of defined pools is
less than one, do_upmap is not actually executed.
Try like this:
`osdmaptool osdmap.om --upmap upmap.sh --upmap-pool=cephfs_data
--upmap-pool=cephfs_metadata --upmap-deviation=0 […]`
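If the tool does propose changes, the generated file should simply contain one `ceph osd pg-upmap-items ...` command per adjusted PG, so a quick sanity check before applying it could look like this (file name as in the command above; purely illustrative):
$ head -n 3 upmap.sh   # expect lines like: ceph osd pg-upmap-items <pgid> <from-osd> <to-osd> ...
$ wc -l upmap.sh       # number of proposed remappings
$ source upmap.sh      # apply them once they look plausible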
Mon, 16 Dec 2019 15:17:37 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 2:42 PM, Lars Täuber wrote:
> > There seems to be a bug in Nautilus.
> >
> > I'm thinking about increasing the number of PGs for the data pool again,
> > because the average number of PGs per OSD is now 76.8.
> > What do you say? […]
On 12/16/19 2:42 PM, Lars Täuber wrote:
There seems to be a bug in Nautilus.
I'm thinking about increasing the number of PGs for the data pool again, because
the average number of PGs per OSD is now 76.8.
What do you say?
Maybe a bug in Nautilus, maybe in osdmaptool.
Please upload your binary […]
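Assuming the request is for the binary osdmap (the file osdmaptool works on), it can be exported and summarized like this; the file name is arbitrary:
$ ceph osd getmap -o osdmap.bin          # binary osdmap of the current epoch
$ osdmaptool osdmap.bin --print | head   # human-readable summary (epoch, pools, flags)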
Hi Konstantin,
the number of PGs for the metadata pool is now 16.
The number of PGs for the data pool is now 512.
I removed the backward-compatible weight-set.
But now:
$ ceph osd getmap -o om ; osdmaptool om --upmap upmap.sh
--upmap-pool=cephfs_data --upmap-deviation=0
got osdmap epoch 94013
osdmaptool: osdmap file 'om' […]
On 12/4/19 4:04 PM, Lars Täuber wrote:
So I just wait for the remapping and merging being done and see what happens.
Thanks so far!
Also don't forget to call `ceph osd crush weight-set rm-compat`.
And stop mgr balancer `ceph balancer off`.
After your rebalance is complete you can try:
`ceph […]`
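A sketch of that preparation sequence, assembled from the commands quoted above (the step after "you can try:" is cut off here, so the final offline osdmaptool run is only an assumption):
$ ceph balancer off                      # stop the mgr balancer so it does not fight manual upmaps
$ ceph osd crush weight-set ls           # check whether a compat weight-set exists
$ ceph osd crush weight-set rm-compat    # remove the backward-compatible weight-set
# once the resulting rebalance has finished, presumably re-run the offline upmap:
$ ceph osd getmap -o om ; osdmaptool om --upmap upmap.sh --upmap-pool=cephfs_data --upmap-deviation=0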
Hi Konstantin,
thanks for your suggestions.
> Lars, you have too many PGs for these OSDs. I suggest disabling the PG
> autoscaler and:
>
> - reducing the number of PGs for the cephfs_metadata pool to something like 16 PGs.
Done.
>
> - reducing the number of PGs for cephfs_data to something like 512.
Done.
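The corresponding commands would presumably be along these lines (Nautilus can reduce pg_num via PG merging; pool names as used in this thread):
$ ceph osd pool set cephfs_metadata pg_autoscale_mode off
$ ceph osd pool set cephfs_data pg_autoscale_mode off
$ ceph osd pool set cephfs_metadata pg_num 16
$ ceph osd pool set cephfs_data pg_num 512
$ ceph osd pool ls detail                # confirm the new pg_num targets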
On 12/3/19 1:30 PM, Lars Täuber wrote:
here it comes:
$ ceph osd df tree
ID CLASS WEIGHT    REWEIGHT SIZE    RAW USE DATA    OMAP   META    AVAIL  %USE  VAR  PGS STATUS TYPE NAME
-1       195.40730        - 195 TiB 130 TiB 128 TiB 58 GiB 476 GiB 66 TiB 66.45 1.00   -        root default […]
Hi,
I have set
upmap_max_iterations 2,
but without any impact.
In my opinion the issue is that the evaluation of the OSDs' data load is not
working.
Or can you explain why osdmaptool does not report anything to do?
Regards
Thomas
On 03.12.2019 at 08:26, Harald Staub wrote:
> Hi all
>
> Something to try:
> ceph config set mgr mgr/balancer/upmap_max_iterations 20 […]
Hi all
Something to try:
ceph config set mgr mgr/balancer/upmap_max_iterations 20
(Default is 100.)
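For reference, the value in effect can be checked before and after changing it, e.g.:
$ ceph config get mgr mgr/balancer/upmap_max_iterations   # current value
$ ceph config set mgr mgr/balancer/upmap_max_iterations 20
$ ceph balancer status                                    # confirm the balancer is active and in upmap mode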
Cheers
Harry
On 03.12.19 08:02, Lars Täuber wrote:
BTW: The osdmaptool doesn't see anything to do either:
$ ceph osd getmap -o om
$ osdmaptool om --upmap /tmp/upmap.sh --upmap-pool cephfs_data […]
BTW: The osdmaptool doesn't see anything to do either:
$ ceph osd getmap -o om
$ osdmaptool om --upmap /tmp/upmap.sh --upmap-pool cephfs_data
osdmaptool: osdmap file 'om'
writing upmap command output to: /tmp/upmap.sh
checking for upmap cleanups
upmap, max-count 100, max deviation 0.01
limiting […]
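When the tool proposes nothing, one thing worth checking is whether upmap entries already exist in the osdmap and how the mgr itself scores the current distribution, for example:
$ ceph osd dump | grep -c pg_upmap_items   # number of existing pg-upmap entries
$ ceph balancer eval                       # mgr balancer's score for the current distribution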
Hi Konstantin,
Tue, 3 Dec 2019 10:01:34 +0700
Konstantin Shalygin ==> Lars Täuber ,
ceph-users@ceph.io :
> Please paste your `ceph osd df tree`, `ceph osd pool ls detail`, `ceph
> osd crush rule dump`.
here it comes:
$ ceph osd df tree
ID CLASS WEIGHT    REWEIGHT SIZE    RAW USE DATA    OMAP […]
On 12/2/19 5:55 PM, Lars Täuber wrote:
Here we have a similar situation.
After adding some OSDs to the cluster the PGs are not equally distributed over
the OSDs.
The balancing mode is set to upmap.
The docs https://docs.ceph.com/docs/master/rados/operations/balancer/#modes say:
"This CRUSH mode will optimize the placement of […]"
Hi there!
Here we have a similar situation.
After adding some OSDs to the cluster the PGs are not equally distributed over
the OSDs.
The balancing mode is set to upmap.
The docs https://docs.ceph.com/docs/master/rados/operations/balancer/#modes say:
"This CRUSH mode will optimize the placement o
Hi,
the following upmap run reports "no upmaps proposed":
root@ld3955:/home# osdmaptool om --upmap hdd-upmap.sh
--upmap-pool=hdb_backup --upmap-deviation 0
osdmaptool: osdmap file 'om'
writing upmap command output to: hdd-upmap.sh
checking for upmap cleanups
upmap, max-count 100, max deviation 0
limiting […]
On 11/19/19 4:01 PM, Thomas Schneider wrote:
If Ceph is not capable of managing rebalancing automatically, how can I
proceed to rebalance the data manually?
Use offline upmap for your target pool:
ceph osd getmap -o om; osdmaptool om --upmap upmap.sh
--upmap-pool=hdb_backup --upmap-deviation […]
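Spelled out as a runnable sketch (the end of the suggested command is cut off above, so the deviation value and the apply step are assumptions):
$ ceph osd getmap -o om
$ osdmaptool om --upmap upmap.sh --upmap-pool=hdb_backup --upmap-deviation=0   # deviation value assumed
$ source upmap.sh          # apply the proposed pg-upmap-items entries
$ ceph osd df tree         # re-check the PG distribution afterwards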
Hello Paul,
thanks for your analysis.
I want to share more statistics of my cluster to follow up on your
response "You have way too few PGs in one of the roots".
Here are the pool details:
root@ld3955:~# ceph osd pool ls detail
pool 11 'hdb_backup' replicated size 3 min_size 2 crush_rule 1
object_hash […]
You have way too few PGs in one of the roots. Many OSDs have so few
PGs that you should see a lot of health warnings because of it.
The other root has a factor-of-5 difference in disk size, which isn't ideal either.
Paul
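As a rough illustration of the rule of thumb behind that remark (on the order of 100 PGs per OSD, divided by the replica count; the numbers below are made up, not taken from this cluster):
$ echo $(( 30 * 100 / 3 ))   # 30 OSDs, ~100 PGs each, size 3 -> 1000
# round to the nearest power of two, i.e. pg_num 1024 for that pool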