Hello Chris, Igor,
I came here to say two things.
Firstly, thank you for this thread. I had not run perf dump or bluefs
stats before, and found them helpful in diagnosing the same problem you had.
Secondly, yes, 'ceph-volume lvm migrate' was effective (in Quincy 17.2.7)
at finalising the migration.
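For anyone finding this thread later, the diagnostics were roughly the
following (a sketch, assuming OSD id 12 and access to the admin socket on
that host; substitute your own id):
ceph daemon osd.12 perf dump | jq .bluefs
ceph daemon osd.12 bluefs stats
The first shows the bluefs counters (db_used_bytes and friends), the second
a summary of DB/WAL/slow device usage.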
Hi,
On 19/3/21 1:11 pm, Stefan Kooman wrote:
Is it going to continue to be supported? We use it (and
uncontainerised packages) for all our clusters, so I'd be a bit
alarmed if it was going to go away...
Just a reminder to all of you: please fill in the Ceph user survey and
make your voice heard.
I deployed Ceph 14.2.16 with ceph-ansible stable-4.0 a while back, and
want to test upgrading. So for now I am trying rolling_update.yml for the
latest 14.x (before trying stable-5.0 and 15.x), but I am getting some errors,
which seem to indicate empty or missing variables.
Initially monitor_interface w
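For context, that variable normally lives in group_vars, something like
this (a sketch; the interface name and network are placeholders for
whatever your hosts use):
# group_vars/all.yml
monitor_interface: eth0
public_network: 192.168.1.0/24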
Hi Dylan and Dimitri,
On 19/8/21 9:56 pm, Dimitri Savineau wrote:
If I'm right, I would suggest not storing your {group,host}_vars
directories inside the ceph-ansible sources.
A better solution would be to keep the variables and the inventory in
the same directory, like /etc/ansible in your
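Something like this layout, for example (paths illustrative only):
/etc/ansible/
  hosts
  group_vars/
    all.yml
    osds.yml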
On 12/1/24 22:32, Drew Weaver wrote:
So we were going to replace a Ceph cluster with some hardware we had
laying around using SATA HBAs but I was told that the only right way to
build Ceph in 2023 is with direct attach NVMe.
These kinds of statements make me at least ask questions. Dozens of 14
On 16/1/24 11:39, Anthony D'Atri wrote:
by “RBD for cloud”, do you mean VM / container general-purpose volumes
on which a filesystem is usually built? Or large archive / backup
volumes that are read and written sequentially without much concern for
latency or throughput?
General purpose volumes
manual/scripted, but it's not very hard to script it if you are unsure
about the amount of work the autoscaler will start at any given time.
--
May the most significant bit of your life be positive.
Since client impact has to be avoided if possible, we decided to let that
run for a couple of hours, then reevaluate the situation and maybe
increase the backfills a bit more.
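If we do bump them, it would be something along these lines (a sketch; the
value 2 is arbitrary, and with the mclock scheduler the recovery limits may
need to be overridden before this takes effect):
ceph config set osd osd_max_backfills 2
ceph config show osd.0 osd_max_backfills   # confirm the effective value on one OSD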
Thanks!
Quoting Gregory Orange:
We are in the middle of splitting 16k EC 8+3 PGs on 2600x 16TB OSDs
with NVMe RocksDB,
On 17/11/24 19:44, Roland Giesler wrote:
> On 2024/11/16 18:38, Anthony D'Atri wrote:
>> Disabling mclock as described here
>> https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/ might
>> help
>
> I cannot see any option that allows me to disable mclock...
It's not so much disabling it as switching the OSD op queue scheduler back to wpq.
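If you do want to go that way, it is something like the following (a
sketch; osd_op_queue only changes on an OSD restart, so plan for that):
ceph config set osd osd_op_queue wpq
# then restart the OSDs, e.g. one failure domain at a time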
On 15/11/24 17:11, Roland Giesler wrote:
> How do I determine the primary osd?
ceph pg map $pg
ceph pg $pg query | jq .info.stats.acting_primary
You can use jq and less to take a look at other values which might be
informative too.
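For example:
ceph pg $pg query | jq . | less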
Greg.
On 27/11/24 13:48, Marc wrote:
> How should I rewrite this to ceph.conf
>
> ceph config set mon mon_warn_on_insecure_global_id_reclaim false
> ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed false
The way to do it would be
ceph config set mon mon_warn_on_insecure_global_id_reclaim false
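If you specifically want it in ceph.conf rather than the config database,
my understanding is the equivalent would be something like this (untested
sketch):
[mon]
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false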
On 25/11/24 15:57, Stefan Kooman wrote:
> Update: The Ceph Developer Summit is nearing capacity for "Developers".
> There is still room for "Power Users" to register for the afternoon
> session. See below for details...
>
> However, it's unclear to me if you need to register for the "Power Users"
On 26/11/24 09:47, Stefan Kooman wrote:
> The dev event is full. So unable to register anymore and leave a note
> like you did. Hence my question.
I guess the contact email address on that page is worth a shot:
ceph-devsummit-2...@cern.ch
On 4/2/25 15:46, Jan Kasprzak wrote:
> I wonder whether it is possible to go the other way round as well,
> i.e. to remove a metadata device from an OSD and merge metadata back
> to the main storage?
Yes.
I actually used ceph-volume last time I did this, so I wonder if I
should be using a cephadm
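For reference, the ceph-volume form looks roughly like this (a sketch only;
OSD id, FSID and LV names are placeholders, the OSD needs to be stopped
first, and it is worth checking ceph-volume lvm migrate --help on your
release):
systemctl stop ceph-osd@12
ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> --from db wal --target <block-vg>/<block-lv>
systemctl start ceph-osd@12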
On 5/2/25 23:04, Gregory Farnum wrote:
> We are soliciting feedback on the impact these options will have on
> our various downstreams and direct upstream consumers. The CSC's
> *early and tentative* preference is for spring 2026
This looks like a good option from all of the context you provided.
On 18/12/24 02:30, Janek Bevendorff wrote:
> I did increase the pgp_num of a pool a while back, totally
> forgot about that. Due to the ongoing rebalancing it was stuck half way,
> but now suddenly started up again. The current PG number of that pool is
> not quite final yet, but definitely higher
https://ceph2024.sched.com/event/1ktWK/get-that-cluster-back-online-but-hurry-slowly-gregory-orange-pawsey-supercomputing-centre
On 24/1/25 06:45, Stillwell, Bryan wrote:
> ceph report 2>/dev/null | jq '(.osdmap_last_committed -
> .osdmap_first_committed)'
>
> This number should be between 500-1000 on a healthy cluster. I've seen
> this as high as 4.8 million before (roughly 50% of the data stored on
> the cluster ended up
On 28/1/25 19:33, Enrico Bocchi wrote:
> Also, unsure if a new Quincy release is expected (17.2.8 should be the
> latest of the Quincy series).
https://pad.ceph.com/p/csc-weekly-minutes
17.2.8 - EOL
That seems pretty clear to me. Also it's in line with expectations (one
more point release after $
On 24/12/24 04:37, Anthony D'Atri wrote:
> humans and everyone else mainly think base-2 units (TiB).
For my part, I see humans thinking in base-10 units. I'm glad we've at
least got terms for the base-2 ones, so it can be clear in technical
settings. At best, however, I see people having to clarify
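(For reference: 1 TiB = 2^40 bytes ≈ 1.0995 × 10^12 bytes, so a drive sold
as 16 TB is only about 14.55 TiB.)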
On 15/4/24 19:58, Ondřej Kukla wrote:
> If you have quite a large amount of data you can maybe try Chorus from CLYSO.
In March we completed a migration of 17PB of data between two local Ceph
clusters using Chorus. It took some work to prepare network
configurations and test it and increase
On 12/4/25 20:56, Tim Holloway wrote:
> Which brings up something I've wondered about for some time. Shouldn't
> it be possible for OSDs to be portable? That is, if a box goes bad, in
> theory I should be able to remove the drive and jack it into a hot-swap
> bay on another server and have that ser
we don't go a long time without noticing disks which are dying.
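On the portability question itself: as far as I understand it, yes, LVM-based
OSDs carry their metadata on the device, so on a non-cephadm host a moved OSD
can usually be brought up with something like this (a sketch, untested for
this scenario):
ceph-volume lvm activate --all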
--
Gregory Orange
System Administrator, Scientific Platforms Team
Pawsey Supercomputing Centre, CSIRO