Hi folks!
The Cephalocon 2024 recordings are available on the YouTube channel!
- Channel: https://www.youtube.com/@Cephstorage/videos
- Cephalocon 2024 playlist:
https://www.youtube.com/watch?v=ECkgu2zZzeQ&list=PLrBUGiINAakPfVfFfPQ5wLMQJFsLKTQCv
Thanks,
Matt
__
Hi folks, the perf meeting for today will be cancelled for Boxing Day!
Happy holidays!
We will (probably) resume next week, on the 2nd.
Thanks,
Matt
___
As you discovered, it looks like there are no upmap items in your
cluster right now. The `ceph osd dump` command will list them, in JSON
as you show, or you can `grep ^pg_upmap` without JSON as well (same
output, different format).
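For example, something like this (illustrative; assumes jq is available for the JSON view):

ceph osd dump -f json | jq '.pg_upmap_items'
ceph osd dump | grep ^pg_upmap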
I think the balancer would have been enabled by default in Nau
Hi folks, the perf meeting for today will be cancelled for US
Thanksgiving!
As a heads up, next week will also be cancelled for Cephalocon.
Thanks,
Matt
___
Hi folks, the perf meeting will be cancelled today, Mark is flying back
from a conference!
Thanks,
Matt
___
I would normally vouch for ZFS for this sort of thing, but the mix of
drive sizes will be... an inconvenience, at best. You could get
creative with the hierarchy (making raidz{2,3} of mirrors of same-sized
drives, or something), but it would be far from ideal. I use ZFS for my
own home machine
Hi folks!
Thanks for a great Ceph Day event in NYC! I wanted to make sure I posted
my slides before I forget (and encourage others to do the same). Feel
free to reach out in the Ceph Slack
https://ceph.io/en/community/connect/
How we Operate Ceph at Scale (DigitalOcean):
- https://do-matt-
Hi,
I would expect that almost every PG in the cluster is going to have to
move once you start standardizing CRUSH weights, and I wouldn't want to
move data twice. My plan would look something like:
- Make sure the cluster is healthy (no degraded PGs)
- Set nobackfill, norebalance flags to pr
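For illustration, the commands would look roughly like this (osd.12 and the weight are made-up placeholders):

ceph osd set nobackfill
ceph osd set norebalance
ceph osd crush reweight osd.12 1.81940   # repeat per OSD with the standardized weight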
We've had a specific set of drives that we've had to enable
bdev_enable_discard and bdev_async_discard for in order to maintain
acceptable performance on block clusters. I wrote the patch that Igor
mentioned in order to try and send more parallel discards to the
devices, but these ones in parti
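For reference, a rough sketch of setting those options (illustrative; depending on the release they may only take effect after an OSD restart):

ceph config set osd bdev_enable_discard true
ceph config set osd bdev_async_discard true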
We have had success using pgremapper[1] for this sort of thing, in both
index and data pool expansions.
1. Set nobackfill, norebalance
2. Add OSDs
3. pgremapper cancel-backfill
4. Unset flags
5. Slowly loop `pgremapper undo-upmaps` at our desired rate, or allow
the balancer to do this work
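Roughly, that sequence looks like the following (the pgremapper invocations are paraphrased and their exact arguments may differ from the tool's actual flags):

ceph osd set nobackfill && ceph osd set norebalance
# ... add the new OSDs ...
pgremapper cancel-backfill          # pins PGs to their current OSDs via upmaps
ceph osd unset nobackfill && ceph osd unset norebalance
# then loop pgremapper undo-upmaps at the desired rate, or let the balancer remove them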
There's s
sure what the difference will be from our case versus a single large
volume with a big snapshot.
On 2023-01-28 20:45, Victor Rodriguez wrote:
On 1/29/23 00:50, Matt Vandermeulen wrote:
I've observed a similar horror when upgrading a cluster from Luminous
to Nautilus, which had the same ef
I've observed a similar horror when upgrading a cluster from Luminous to
Nautilus, which had the same effect of an overwhelming amount of
snaptrim making the cluster unusable.
In our case, we held its hand by setting all OSDs to have zero max
trimming PGs, unsetting nosnaptrim, and then slowly
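For illustration, that hand-holding looks something like the following (assuming osd_max_trimming_pgs is the relevant knob; values are examples):

ceph config set osd osd_max_trimming_pgs 0
ceph osd unset nosnaptrim
ceph config set osd osd_max_trimming_pgs 1   # then raise gradually as the cluster keeps up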
We have been doing a zfs send piped to s3 uploads for backups. We use
awscli for that, since it can take a stream from stdin. We have never
considered using cephfs for that.
It ultimately ends up looking something like one of the following,
depending on full/incremental:
zfs send -wv $datas
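A sketch of both variants, with dataset, snapshot, and bucket names as placeholders:

zfs send -wv pool/dataset@snap | aws s3 cp - s3://backup-bucket/dataset/snap.zfs
zfs send -wv -i @prev pool/dataset@snap | aws s3 cp - s3://backup-bucket/dataset/prev-to-snap.zfs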
On 11/8/22 15:10, Mike Perez wrote:
Hi everyone,
Ceph Virtual 2022 is starting! Today's topic is Scale. We will hear
from Matt Vandermeulen about how Digital Ocean, a Ceph Foundation
Premier member, scales Ceph for their needs. Unfortunately, our other
scheduled presentation for today,
That output suggests that the mgr is configured to only listen on the
loopback address.
I don't think that's a default... does a `ceph config dump | grep mgr`
suggest it's been configured that way?
On 2022-10-10 10:56, Ackermann, Christoph wrote:
Hello list member
after subsequent installa
I think you're likely to get a lot of mixed opinions and experiences
with this question. I might suggest trying to grab a few samples from
different vendors, and making sure they meet your needs (throw some
workloads at them, qualify them), then make sure your vendors have a
reasonable lead ti
It sounds like this is from a PG merge, so I'm going to _guess_ that you
don't want to straight up cancel the current backfill and instead pause
it to catch your breath.
You can set `nobackfill` and/or `norebalance` which should pause the
backfill. Alternatively, use `ceph config set osd.* os
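For illustration, the pause looks like this; the config option is an assumption on my part about what is being throttled:

ceph osd set nobackfill
ceph osd set norebalance
ceph config set osd osd_max_backfills 1   # illustrative throttle instead of a full pause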
This might be easiest to think about in two steps: Draining hosts, and
doing a PG merge. You can do it in either order (though thinking about
it, doing the merge first will give you more cluster-wide resources to
do it faster).
Draining the hosts can be done in a few ways, too. If you want t
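As a rough sketch of the two steps (pool name, target pg_num, and hostname are placeholders):

ceph osd pool set <pool> pg_num <smaller_pg_num>   # PG merge; proceeds gradually
ceph osd crush reweight-subtree <hostname> 0       # drain a host by zeroing its CRUSH weight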
Yep, just change the CRUSH rule:
ceph osd pool set my_cephfs_metadata_pool crush_rule replicated_nvme
If you have a rule set called replicated_nvme, that'll set it on the
pool named my_cephfs_metadata_pool.
Of course this will cause a significant data movement.
If you need to add the rule,
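A minimal sketch of creating one, assuming a default root, a host failure domain, and an nvme device class:

ceph osd crush rule create-replicated replicated_nvme default host nvme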
It appears to have been, and we have an application that's pending an
internal review before we can submit... so we're hopeful that it has
been!
On 2021-12-10 15:21, Bobby wrote:
Hi all,
Has the CfP deadline for Cephalocon 2022 been extended to 19 December
2022? Please confirm if anyone kno
All the index data will be in OMAP, which you can see a listing of with
`ceph osd df tree`
Do you have large buckets (many, many objects in a single bucket) with
few shards? You may have to reshard one (or some) of your buckets.
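For example (bucket name and shard count are placeholders):

radosgw-admin bucket limit check                # spot buckets over the objects-per-shard limit
radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<N>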
It'll take some reading if you're using multisite, in order to
Hi Szabo,
For what it's worth, I have two clusters in a multisite that have never
appeared to be synced either, but I have never found a single object
that can't be found in both clusters.
There are always at least a few recovering shards, while the "data sync
source" is always "syncing" with