prune -a" but
this is a fresh install.
Can I continue safely to deploy or is this a problem ?
Thanks
Patrick
___
causes the high number of tombstones.
Regards,
Nima
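Not from this thread, but for context: tombstones linger in an OSD's RocksDB
until the affected key ranges are compacted, so a manual compaction of the
busy OSDs is a common way to clear them (osd.3 below is only a placeholder):

# ceph tell osd.3 compact

The same can be triggered locally through the admin socket with
'ceph daemon osd.3 compact'.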
On Tue, Apr 29, 2025, 2:33 PM Enrico Bocchi wrote:
Hello Nima,
Unsure if you have found the root cause of the problem in the meantime.
Off the top of my head, in case it is useful:
- Quincy changes the default OSD scheduler to mclock_scheduler
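For reference (my addition, not part of the original reply), the active
scheduler can be checked per OSD and, to rule it out, switched back to the
pre-Quincy wpq queue; osd.0 is a placeholder and the change only takes effect
after an OSD restart:

# ceph config get osd osd_op_queue
# ceph tell osd.0 config get osd_op_queue
# ceph config set osd osd_op_queue wpq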
perf dump:
        "avgcount": 39,
        "sum": 0.004502210,
        "avgtime": 0.000115441
    },
    "alloc_wal_max_lat": 0.0,
    "alloc_db_max_lat": 0.000113831,
    "alloc_slow_max_lat": 0.000301347
},
config show:
    "bluestore_rocksdb_cf": "true",
    "bluestore_rocksdb_cfs": ...
Hi Kasper,
Would you mind sharing the output of `perf dump` and `config show` from the
daemon socket of one of the OSDs reporting BlueFS spillover? I am interested in
the bluefs part of the former and in the bluestore_rocksdb options of the
latter.
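For anyone following along, both dumps come from the OSD's admin socket,
roughly as below (osd.12 stands in for an affected OSD; the jq filter is just
a convenience):

# ceph daemon osd.12 perf dump | jq .bluefs
# ceph daemon osd.12 config show | grep bluestore_rocksdb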
The warning about slow ops in bluestore is a d
bucket, because I am not sure if I can stop the
resharding if the PGs become laggy.
Cheers
Boris
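Not from Boris's message, but for context, manual resharding and its controls
look roughly like this (BUCKET and the shard count are placeholders; how far a
reshard can be cancelled once it is running depends on the release):

# radosgw-admin reshard status --bucket=BUCKET
# radosgw-admin bucket reshard --bucket=BUCKET --num-shards=211
# radosgw-admin reshard list
# radosgw-admin reshard cancel --bucket=BUCKET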
___
Dear All,
We will be having a Ceph science/research/big cluster call on Tuesday
March 25th.
This is an informal open call of community members mostly from
hpc/htc/research/big cluster environments, though anyone is welcome,
where we discuss whatever is on our minds regarding Ceph. Updates, ...
See you on Tuesday!
Enrico
___
https://tracker.ceph.com/issues/69105
Regards
Christian
___
e filesystem into production again?
Dietmar
___
ore frequently now.
Kindest regards,
Ivan Clayson
--
Ivan Clayson
Scientific Computing Officer
Room 2N249
Structural Studies
MRC Laboratory of Molecular Biology
Francis Crick Ave, Cambridge
CB2 0QH
___
ration in the future and to clean up the error text.
On Wed, Mar 10, 2021 at 8:37 AM Enrico Bocchi wrote:
Hello Jason,
# rados -p volumes listomapvals rbd_trash
id_5afa5e5a07b8bc
value (71 bytes) :
0000  02 01 41 00 00 00 00 2b  00 00 00 76 6f 6c 75 6d  |..A....+...volum|
0010  65 2d 30 3...
lumes listomapvals rbd_trash"?
On Wed, Mar 10, 2021 at 8:03 AM Enrico Bocchi wrote:
Hello everyone,
We have an unpurgeable image living in the trash of one of our clusters:
# rbd --pool volumes trash ls
5afa5e5a07b8bc volume-02d959fe-a693-4acb-95e2-ca04b965389b
If we try to purge the whole trash i
understand how an image might get in this state.
Has anyone seen this before?
Many thanks!
Cheers,
Enrico
--
Enrico Bocchi
CERN European Laboratory for Particle Physics
IT - Storage Group - General Storage Services
Mailbox: G20500 - Office: 31-2-010
1211 Genève 23
Switzerland
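A side note on the mechanics discussed above (my sketch, not a resolution
posted in the thread): the trash is tracked as omap entries on the pool's
rbd_trash object, keyed by image id, which is why listomapvals shows the stuck
image there. Under that assumption, a last-resort manual cleanup of an
orphaned entry could look like this, to be used only once the image data
itself is confirmed gone:

# rados -p volumes listomapkeys rbd_trash
# rbd --pool volumes trash rm 5afa5e5a07b8bc
# rados -p volumes rmomapkey rbd_trash id_5afa5e5a07b8bc

The first two commands are the normal inspection and removal path; the
rmomapkey only deletes the dangling trash record, nothing else.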