Yes, increasing the PG count for the data pool is what you will want to do
when you add OSDs to your cluster.
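As a sketch of what that looks like (the pool name "cephfs_data" below is an assumption; use whatever `ceph osd pool ls` shows for your CephFS data pool):

~]# ceph osd pool set cephfs_data pg_num 512
~]# ceph osd pool set cephfs_data pgp_num 512

pgp_num has to be raised to match pg_num before the new PGs are actually used for placement, and the change will trigger another round of backfill.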
On Wed, Nov 22, 2017, 9:25 AM gjprabu wrote:
Hi David,
Thanks, we will check the osd weight settings, and we are not using rbd so
we will delete it. As per the pg calculation, for 8 osds we should keep 512 pgs,
but in our case we unfortunately set 256 for metadata and 256 for data. Is it
okay now to increase the pg count on the data pool alone?
Your rbd pool can be removed (unless you're planning to use it), which will
delete those PGs from your cluster/OSDs. Also, all of your backfilling has
finished and settled. Now you just need to work on balancing the weights of
the OSDs in your cluster.
There are multiple ways to balance the usage
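For instance (the names and weights below are only illustrative), the unused rbd pool can be dropped and the fullest OSDs nudged down either automatically or by hand:

~]# ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
~]# ceph osd reweight-by-utilization

or a single OSD can be adjusted with `ceph osd crush reweight osd.N <weight>`.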
Hi David,
This is our current status.
~]# ceph status
    cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
     health HEALTH_WARN
            mds0: Client integ-hm3 failing to respond to cache pressure
            mds0: Client integ-hm9-bkp failing to respond to cache pressure
What is your current `ceph status` and `ceph df`? The status of your
cluster has likely changed a bit in the last week.
On Mon, Nov 20, 2017 at 6:00 AM gjprabu wrote:
Hi David,
Sorry for the late reply. The OSD sync has completed, but the fourth OSD's
available size still keeps reducing. Is there any option to check or fix this?
ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
 0 3.29749 1.0      3376G 2320G 1056G 68.71 1.10 144
On Mon, Nov 13, 2017 at 4:57 AM, David Turner wrote:
You cannot reduce the PG count for a pool. So there isn't anything you can
really do for this unless you create a new FS with better PG counts and
migrate your data into it.
The problem with having more PGs than you need is in the memory footprint
for the osd daemon. There are warning thresholds
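As a rough sanity check with the numbers from this thread: 256 (data) + 256 (metadata) PGs at size 2 is (256 + 256) * 2 = 1024 PG copies spread across 8 OSDs, i.e. about 128 PGs per OSD; the actual per-OSD count is the PGS column of `ceph osd df`.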
Hi David,
Thanks for your valuable reply. Once the backfilling for the new osd
completes, we will consider increasing the replica value asap. Is it possible
to decrease the metadata pg count? If the pg count for metadata is set to the
same value as the data count, what kind of issue could that cause?
What's the output of `ceph df` to see if your PG counts are good or not?
Like everyone else has said, the space on the original osds can't be
expected to free up until the backfill from adding the new osd has finished.
You don't have anything in your cluster health to indicate that your
cluster wi
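While the backfill runs, its progress can be followed with the usual status commands, for example:

~]# ceph -s
~]# ceph df

The degraded/misplaced counts should trend toward zero, and the space on the original osds is released as each PG finishes moving.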
I think that more pgs help to distribute the data more evenly, but I
don't know if it's recommended with a low number of OSDs. I remember reading
somewhere in the docs a guideline for the max number of pgs per OSD, but it was
from a really old ceph version, so maybe things have changed.
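For reference, the usual rule of thumb (the one behind the online PG calculator) is roughly: total PGs ≈ (number of OSDs * 100) / replica size, rounded up to a power of two. With 8 OSDs and size 2 that gives 8 * 100 / 2 = 400, rounded up to 512, which matches the 512 figure mentioned earlier in the thread.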
On 11/12/2017 12:39 PM, gjprabu wrote:
Hi Cassiano,
Thanks for your valuable feedback; we will wait for some time till the new
osd sync completes. Also, will increasing the pg count solve the issue? In our
setup the pg number for the data and metadata pools is 250. Is this correct
for 7 OSDs with 2 replicas? Also currently st
I am also not an expert, but it looks like you have big data volumes on
few PGs. From what I've seen, the pg data is only deleted from the old
OSD when it has been completely copied to the new osd.
So, if one pg has 100G, for example, the space will only be released once it
is fully copied to the new OSD.
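If you want to see which PGs are still being moved, something along these lines works with the standard CLI:

~]# ceph pg dump pgs_brief | grep backfill
~]# ceph health detail

Each pg listed there should drop out of the backfill states (and free its old copy) as it finishes.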
Hi,

Thanks Sebastian. If anybody can help on this issue it will be highly appreciated.

Regards,
Prabu GJ
On Sun, 12 Nov 2017 19:14:02 +0530 Sébastien VIGNERON wrote:
Hi,

If anybody can help on this issue it will be highly appreciated.

Regards,
Prabu GJ
On Sun, 12 Nov 2017 19:14:02 +0530 Sébastien VIGNERON wrote:
I’m not an expert either, so if someone on the list has some ideas on this
problem, don’t be shy, share them with us.
For now, I only have the hypothesis that the OSD space will be recovered as
soon as the recovery process is complete.
Hope everything will get back in order soon (before reaching the 95% full ratio).
Hi,
Have you tried to query the pg state for some of the stuck or undersized pgs?
Maybe some OSD daemons are not behaving right and are blocking the reconstruction.
ceph pg 3.be query
ceph pg 4.d4 query
ceph pg 4.8c query
http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-pg/
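If it's not obvious which pg IDs to query, the stuck ones can be listed first with the standard commands:

ceph health detail
ceph pg dump_stuck unclean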
Cordialement / Best regards
Hi Sebastien,
Thanks for your reply. Yes, there are undersized pgs and recovery is in
process, because we added a new osd after getting the "2 OSDs near full"
warning. Yes, the newly added osd is rebalancing the data.
[root@intcfs-osd6 ~]# ceph osd df
ID WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
0
Hi,
Can you share:
- your placement rules: ceph osd crush rule dump
- your CEPH version: ceph versions
- your pools definitions: ceph osd pool ls detail
With these we can determine whether your pgs are stuck because of a
misconfiguration or something else.
You seem to have some undersized pgs an
Hi Team,
We have a ceph setup with 6 OSDs and we got an alert that 2 OSDs are near
full. We faced issues like slow access to ceph from the clients, so I added a
7th OSD, and still 2 OSDs are showing near full (osd.0 and osd.4). I have
restarted the ceph service on osd.0 and osd.4. Kindly check