Hi,
I am trying to get some statistics via the Python API but fail to run the
equivalent of "ceph df detail".
On the command line I get:
# ceph -f json df |jq .pools[0]
{
"name": "rbd",
"id": 1,
"stats": {
"stored": 27410520278,
"objects": 6781,
"kb_used": 80382849,
"byte
Which ceph version is this? Since Nautilus you can decrease pg numbers
(or let pg-autoscaler do that for you).
Quoting "Szabo, Istvan (Agoda)":
Hi,
Originally this pool was created with 512 PGs, which leaves a couple of
OSDs holding 500 PGs 😲
What are the safe steps to copy over this pool?
The
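For reference, the in-place PG reduction mentioned above can also be driven from the Python API. A minimal sketch, assuming a pool named "rbd" and the default ceph.conf location (both are placeholders, adjust as needed):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon(cmd):
    # mon_command() takes the command as a JSON string and returns
    # (return code, output buffer, error string).
    ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
    if ret != 0:
        raise RuntimeError(errs)
    return out

# Lower the PG count; since Nautilus the cluster merges PGs in the
# background until pg_num reaches the new target.
mon({'prefix': 'osd pool set', 'pool': 'rbd', 'var': 'pg_num', 'val': '256'})

# Or hand the decision to the autoscaler instead:
mon({'prefix': 'osd pool set', 'pool': 'rbd',
     'var': 'pg_autoscale_mode', 'val': 'on'})

cluster.shutdown()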
Alright. I wouldn't expect any issues but you never know, so it's
definitely a good idea to keep it around for a while.
Zitat von "Szabo, Istvan (Agoda)" :
Luminous
I've done that already; now I'm monitoring whether there are any issues.
If not, I'll remove the old pool.
-----Original Message-----
From:
Hello cephers,
I run Nautilus (14.2.15)
Here is my context: each night a script takes a snapshot of each RBD
volume in a pool (all the disks of the hosted VMs) on my Ceph production
cluster. Then each snapshot is exported (rbd export-diff | rbd
import-diff over SSH) towards my Ceph backup clus
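The export/import pipeline described here is essentially one pipe per image; below is a rough Python sketch of a single nightly iteration. Pool, image, snapshot and host names are hypothetical placeholders, and it assumes the previous snapshot still exists on both clusters:

import subprocess

POOL, IMAGE = 'vms', 'vm-disk-1'                  # hypothetical names
PREV, CUR = 'backup-20210114', 'backup-20210115'  # previous and current snapshots

# rbd export-diff writes the delta between the two snapshots to stdout;
# rbd import-diff on the backup cluster replays it onto its copy of the image.
export = subprocess.Popen(
    ['rbd', 'export-diff', '--from-snap', PREV, f'{POOL}/{IMAGE}@{CUR}', '-'],
    stdout=subprocess.PIPE)
imp = subprocess.Popen(
    ['ssh', 'backup-host', 'rbd', 'import-diff', '-', f'{POOL}/{IMAGE}'],
    stdin=export.stdout)
export.stdout.close()  # let import-diff see EOF when export-diff finishes
if imp.wait() != 0 or export.wait() != 0:
    raise RuntimeError(f'diff replication of {IMAGE} failed')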
Hi,
can you stat the object? If yes, does it have a newer timestamp than
your rebuild?
rados -p <pool> stat rbd_object_map.781776950f8dd5.b740
Quoting Rafael Diaz Maurin:
Hello cephers,
I run Nautilus (14.2.15)
Here is my context: each night a script takes a snapshot of each
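The suggested stat is also available through the Python bindings: Ioctx.stat() returns the object's size and mtime, which can be compared against the time of the rebuild. A sketch, with the pool name as a placeholder and the (abbreviated) object name taken from the thread:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('rbd')  # placeholder: the pool holding the image
try:
    size, mtime = ioctx.stat('rbd_object_map.781776950f8dd5.b740')
    print('size:', size, 'mtime:', mtime)
except rados.ObjectNotFound:
    print('object map object does not exist')
finally:
    ioctx.close()
    cluster.shutdown()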
Hi,
Originally this pool was created with 512 PGs, which leaves a couple of OSDs
holding 500 PGs 😲
What are the safe steps to copy over this pool?
These are the files in this pool:
default.realm
period_config.f320e60d-8cff-4824-878e-c316423cc519
periods.18d63a25-8a50-4e17-9561-d452621f62fa.latest_epoch
de
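For a small metadata pool like .rgw.root, one common route is rados cppool into a freshly created pool with a sane pg_num, followed by a rename (with radosgw stopped in the meantime). A rough Python equivalent over the rados bindings is sketched below; pool names are hypothetical, and note that it copies object data only, not omap entries, xattrs or snapshots:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

src = cluster.open_ioctx('.rgw.root')       # old, oversized pool
dst = cluster.open_ioctx('.rgw.root.new')   # new pool, created with fewer PGs

for obj in src.list_objects():
    # The objects in .rgw.root are tiny realm/period/zone blobs,
    # so a single bounded read per object is enough here.
    data = src.read(obj.key, 4 * 1024 * 1024)
    dst.write_full(obj.key, data)
    print('copied', obj.key)

src.close()
dst.close()
cluster.shutdown()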
Luminous
I've done that already; now I'm monitoring whether there are any issues. If
not, I'll remove the old pool.
-----Original Message-----
From: Eugen Block
Sent: Friday, January 15, 2021 3:47 PM
To: ceph-users@ceph.io
Subject: [Suspicious newsletter] [ceph-users] Re: .rgw.root was created with a
lot of PGs
On Fri, Jan 15, 2021 at 4:36 AM Rafael Diaz Maurin
wrote:
>
> Hello cephers,
>
> I run Nautilus (14.2.15)
>
> Here is my context: each night a script takes a snapshot of each RBD volume
> in a pool (all the disks of the hosted VMs) on my Ceph production cluster.
> Then each snapshot is exporte
On 15/01/2021 at 15:39, Jason Dillaman wrote:
4. But the error is still here:
2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map:
failed to load object map rbd_object_map.781776950f8dd5.b740
You would need to rebuild the snapshot object map. First find
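For completeness, the rebuild itself is done per snapshot with the rbd object-map rebuild command. A minimal sketch driving it from Python; image and snapshot names are hypothetical placeholders:

import subprocess

# List the snapshots to find the one whose object map is broken ...
subprocess.run(['rbd', 'snap', 'ls', 'rbd/vm-disk-1'], check=True)

# ... then rebuild the object map of that snapshot.
subprocess.run(['rbd', 'object-map', 'rebuild', 'rbd/vm-disk-1@backup-20210115'],
               check=True)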
On Fri, Jan 15, 2021 at 10:12 AM Rafael Diaz Maurin
wrote:
>
> On 15/01/2021 at 15:39, Jason Dillaman wrote:
>
> 4. But the error is still here:
> 2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map:
> failed to load object map rbd_object_map.781776950f8dd5.000
On 15/01/2021 at 16:29, Jason Dillaman wrote:
On Fri, Jan 15, 2021 at 10:12 AM Rafael Diaz Maurin
wrote:
On 15/01/2021 at 15:39, Jason Dillaman wrote:
4. But the error is still here:
2021-01-15 09:33:58.775 7fa088e350c0 -1 librbd::DiffIterate: diff_object_map:
failed to load object map r
On 15.01.21 at 09:24, Robert Sander wrote:
> Hi,
>
> I am trying to get some statistics via the Python API but fail to run the
> equivalent of "ceph df detail".
>
>
> ...snip...
> cluster.mon_command(json.dumps({'prefix': 'df detail', 'format': 'json'}),
> b'')
> (-22, '', u'command n
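The -22 (EINVAL) means the monitor does not recognize "df detail" as a command prefix. If I read the mon command table correctly, the prefix is just "df" and detail travels as a separate optional field, so the call would look like the sketch below (conffile path as a placeholder):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# 'detail' is its own argument, not part of the prefix.
cmd = json.dumps({'prefix': 'df', 'detail': 'detail', 'format': 'json'})
ret, outbuf, outs = cluster.mon_command(cmd, b'')
if ret != 0:
    raise RuntimeError(outs)

for pool in json.loads(outbuf)['pools']:
    print(pool['name'], pool['stats'])

cluster.shutdown()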
Hello Seena,
Which version of Ceph are you using?
IIRC there was a bug in an older Luminous release which caused an empty list...
HTH
Mehmet
On 19 December 2020 at 19:47:10 CET, Seena Fallah wrote:
>Hi,
>
>I'm facing something strange! One of the PGs in my pool got
>inconsistent
>and when I run `rados
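The quoted command is cut off; presumably it is rados list-inconsistent-obj, which, as far as I recall, only returns entries recorded by the most recent (deep-)scrub of that PG. A sketch, with a hypothetical PG id:

import json
import subprocess

PGID = '3.1f'  # hypothetical placeholder for the inconsistent PG

out = subprocess.run(
    ['rados', 'list-inconsistent-obj', PGID, '--format=json'],
    check=True, capture_output=True, text=True).stdout
print(json.dumps(json.loads(out), indent=2))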
All of my daemons are 14.2.24
On Sat, Jan 16, 2021 at 2:39 AM wrote:
> Hello Seena,
>
> Which version of Ceph are you using?
>
> IIRC there was a bug in an older Luminous release which caused an empty list...
>
> HTH
> Mehmet
>
> On 19 December 2020 at 19:47:10 CET, Seena Fallah <
> seenafal...@gma