On Tue, Feb 14, 2023 at 12:05 AM 郑亮 wrote:
>
> Hi all,
>
> Does a cephfs subvolume have commands similar to rbd perf for querying the
> iops, bandwidth, and latency of an rbd image? `ceph fs perf stats` shows
> client-side metrics, not the metrics of the cephfs subvolume. What I want to
> get is the metrics at the subvolume level.
Can anyone please point me at a doc that explains the most efficient procedure
for renaming a Ceph node WITHOUT causing massive misplaced-object churn?
When my node came up with a new name, it properly joined the cluster and owned
the OSDs, but the original node with no devices remained. I expe
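Not a pointer to a doc, but a hedged sketch of the CRUSH-level steps this usually comes down to (hostnames below are placeholders): rename the host bucket in place so its OSDs keep their CRUSH position, rather than letting a new bucket be created under the new name.
# pause data movement while the CRUSH map is touched
ceph osd set norebalance
# rename the host bucket in place; its OSDs keep their CRUSH position
ceph osd crush rename-bucket <old-hostname> <new-hostname>
ceph osd unset norebalance
If the node has already rejoined under the new name and only an empty bucket for the old name is left over, removing it with `ceph osd crush rm <old-hostname>` should be all that is needed; with cephadm there may be some additional `ceph orch host` bookkeeping on top of this.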
That table is definitely a bit out of date. We've been doing some testing
with more recent podman versions and the only issues I'm aware of specific
to the podman version are https://tracker.ceph.com/issues/58532 and
https://tracker.ceph.com/issues/57018 (which are really the same issue
affecting t
Has anybody run into issues with Quincy and podman 4.2?
The podman 4.x series is not mentioned in
https://docs.ceph.com/en/quincy/cephadm/compatibility/ but
podman 3.x is no longer available in Alma Linux.
Vlad
Hi all,
Does a cephfs subvolume have commands similar to rbd perf for querying the
iops, bandwidth, and latency of an rbd image? `ceph fs perf stats` shows
client-side metrics, not the metrics of the cephfs subvolume. What I want to
get is the metrics at the subvolume level, like below.
[root@smd-expor
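For reference, these are the commands being compared here; as far as I know there is no direct per-subvolume counterpart, so take this only as a hedged sketch (the pool name is a placeholder):
# rbd side: per-image iops/bandwidth/latency counters
rbd perf image iostat <rbd-pool>
# cephfs side: client-level metrics (needs the stats mgr module)
ceph mgr module enable stats
ceph fs perf stats
# interactive per-client view
cephfs-top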
OK, I pushed the send button a little too soon..
But in datacenter failover mode the replication size changes to 2, and that's
why I believe the RATIO should be 2 instead of 4, or the Raw Capacity should be
doubled.
Am I wrong, or does a choice need to be made here?
From:
Hey Greg,
I'm just analyzing this issue, and it isn't strange that the reported cluster
size is half the total size (or the size of the smaller of the two datacenters),
because you shouldn't write more data to the cluster than the smallest
datacenter can handle. Second, when in datacenter failover mode, the cluster size
A "backtrace" is an xattr on the RADOS object storing data for a given
file, and it contains the file's (versioned) path from the root. So a
bad backtrace means there's something wrong with that — possibly just
that there's a bug in the version of the code that's checking it,
because they're genera
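If it helps to see what is actually being checked, the backtrace can be dumped and decoded by hand; a hedged sketch, where the data pool name and the file's first data object (inode number in hex, suffix .00000000) are placeholders:
# the backtrace lives in the "parent" xattr of the file's first data object
rados -p <cephfs-data-pool> getxattr <ino-hex>.00000000 parent > backtrace.bin
# decode the raw xattr into readable JSON
ceph-dencoder type inode_backtrace_t import backtrace.bin decode dump_json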
Hi all,
We have a cluster on 15.2.12. We are experiencing an unusual scenario in
S3. A user sends a PUT request to upload an object and RGW returns 200 as the
response status code. The object has been uploaded and can be downloaded,
but it does not exist in the bucket listing. We also tried to get the bucket
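A hedged starting point for an index/listing mismatch like this (the bucket name is a placeholder); --fix is deliberately left off until the output has been reviewed:
# compare the index's view of the bucket with its stats
radosgw-admin bucket stats --bucket=<bucket>
radosgw-admin bucket list --bucket=<bucket> --max-entries=10
# check index consistency; add --fix only after reviewing the output
radosgw-admin bucket check --bucket=<bucket> --check-objects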
On Mon, Feb 13, 2023 at 4:16 AM Sake Paulusma wrote:
>
> Hello,
>
> I configured a stretched cluster on two datacenters. It's working fine,
> except this weekend the Raw Capacity exceeded 50% and the error
> POOL_TARGET_SIZE_BYTES_OVERCOMMITTED showed up.
>
> The command "ceph df" is showing the
Years ago I moved from a replicated pool to an EC pool, but with downtime
of the service (and at that time I didn't have too much data).
Basically, after having stopped the radosgw services, I created the new pool
and moved the data from the old replicated pool to the new EC one:
ceph osd pool create
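Roughly, the kind of sequence being described; this is a hedged reconstruction, pool and profile names are only examples, and note that `rados cppool` has caveats (it does not copy omap data or snapshots), so treat it as an outline rather than a recipe:
# create an EC profile and a new EC data pool (k/m values are just an example)
ceph osd erasure-code-profile set rgw-ec-profile k=4 m=2
ceph osd pool create default.rgw.buckets.data.ec 64 64 erasure rgw-ec-profile
ceph osd pool application enable default.rgw.buckets.data.ec rgw
# with radosgw stopped, copy the objects into the new pool
rados cppool default.rgw.buckets.data default.rgw.buckets.data.ec
After the copy, RGW still needs to be pointed at the new pool (by renaming the pools or adjusting the zone placement) before the radosgw services are started again.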
On Mon, Feb 13, 2023 at 8:41 AM Boris Behrens wrote:
>
> I've tried it the other way around and had cat print all escaped chars,
> and then did the grep:
>
> # cat -A omapkeys_list | grep -aFn '/'
> 9844:/$
> 9845:/^@v913^@$
> 88010:M-^@1000_/^@$
> 128981:M-^@1001_/$
>
> Has anyone ever seen something like this?
On Mon, Feb 13, 2023 at 4:31 AM Boris Behrens wrote:
>
> Hi Casey,
>
>> changes to the user's default placement target/storage class don't
>> apply to existing buckets, only newly-created ones. a bucket's default
>> placement target/storage class can't be changed after creation
>
>
> so I can easi
Hi,
you need these keyrings each time you want to bootstrap one of these
daemons, so it's probably not a good idea to remove them.
Thanks,
On Fri, 10 Feb 2023 at 00:49, Zhongzhou Cai wrote:
> Hi,
>
> I'm on Ceph version 16.2.10, and I found there are a bunch of bootstrap
> keyrings (i.e., clien
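If one of them has already been removed, it can usually be re-exported from the mon's auth database rather than restored from a backup; a hedged example for the OSD bootstrap keyring:
# the bootstrap credentials still live in the mon's auth database
ceph auth get client.bootstrap-osd
# write the keyring back to its conventional location
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring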
On 2/10/23 08:50, Andrej Filipcic wrote:
FYI, the damage went away after a couple of days, not quite sure how.
Best,
Andrej
Hi,
there is mds damage on our cluster, version 17.2.5,
[
  {
    "damage_type": "backtrace",
    "id": 2287166658,
    "ino": 3298564401782,
    "path":
I've tried it the other way around and had cat print all escaped chars,
and then did the grep:
# cat -A omapkeys_list | grep -aFn '/'
9844:/$
9845:/^@v913^@$
88010:M-^@1000_/^@$
128981:M-^@1001_/$
Has anyone ever seen something like this?
On Mon, Feb 13, 2023 at 2:31 PM Boris Behrens wrote:
So here is some more weirdness:
I've piped a list of all omapkeys into a file: (redacted customer data with
placeholders in <>)
# grep -aFn '//' omapkeys_list
9844://
9845://v913
88010:�1000_//
128981:�1001_//
# grep -aFn '/' omapkeys_list
# vim omapkeys_list +88010 (copy pasted from terminal)
Hi,
I have one bucket that showed up with a large omap warning, but the number
of objects in the bucket does not align with the number of omap keys. The
bucket has been resharded to get rid of the "large omapkeys" warning.
I've counted all the omap keys of the bucket and came up with 33,383,622
(rados
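For anyone who wants to reproduce the counting, a hedged sketch of how the per-shard keys can be pulled (index pool, bucket id, and shard number are placeholders):
# find the bucket id and the number of index shards
radosgw-admin bucket stats --bucket=<bucket>
# count the omap keys of a single bucket index shard object
rados -p <index-pool> listomapkeys .dir.<bucket-id>.<shard> | wc -l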
The RATIO for cephfs.application-acc.data shouldn't be over 1.0; I believe this
is what triggered the error.
All weekend I was thinking about this issue, but I couldn't find an option to
correct it.
But minutes after posting I found a blog post about the autoscaler
(https://ceph.io/en/news/blog/2022/autosc
Hi list,
A little bit of background: we provide S3 buckets using RGW (running
quincy), but users are not allowed to manage their buckets, just read and
write objects in them. Buckets are created by an admin user, and read/write
permissions are given to end users using S3 bucket policies. We set th
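For context, a minimal sketch of the kind of bucket policy this refers to; the user and bucket names are placeholders and the exact action list will differ:
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/<end-user>"]},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::<bucket>", "arn:aws:s3:::<bucket>/*"]
  }]
}
EOF
# applied with the admin user's credentials
aws s3api put-bucket-policy --bucket <bucket> --policy file://policy.json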
Hello,
I configured a stretched cluster on two datacenters. It's working fine, except
this weekend the Raw Capacity exceeded 50% and the error
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED showed up.
The command "ceph df" is showing the correct cluster size, but "ceph osd pool
autoscale-status" is showi
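For what it's worth, that warning is driven by the pools' target_size_bytes / target_size_ratio settings; a hedged sketch of how to inspect and adjust them (the ratio value below is only illustrative, and the pool name is taken from later in the thread):
# see what the autoscaler expects each pool to grow to
ceph osd pool autoscale-status
# adjust or clear the target settings that cause the overcommit warning
ceph osd pool set cephfs.application-acc.data target_size_ratio 0.5
ceph osd pool set cephfs.application-acc.data target_size_bytes 0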
Hi!
Thank you 😊
Your message was very helpful!
The main reason "ceph df" went to "100% USAGE" was that the crush
rule said this:
"min_size": 2
"max_size": 2
And the new "size" was 3, so the rule would not apply to the pool.
After creating a new rule and setting the pools to th
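For reference, a hedged sketch of the rule inspection and replacement described above; rule and pool names are placeholders, and "default"/"host" are just the usual root and failure domain:
# inspect the old rule's min_size/max_size constraints
ceph osd crush rule dump <old-rule>
# create a replacement replicated rule and point the pool at it
ceph osd crush rule create-replicated <new-rule> default host
ceph osd pool set <pool> crush_rule <new-rule>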
Hi Casey,
> changes to the user's default placement target/storage class don't
> apply to existing buckets, only newly-created ones. a bucket's default
> placement target/storage class can't be changed after creation
>
so I can easily update the placement rules for this user and can migrate
existin
That doesn't really help; the startup log should contain information about
why the MDS is going into read-only mode. Here's an example from the
mailing list archive:
2020-07-30 18:14:44.835 7f646f33e700 -1 mds.0.159432 unhandled write error
(90) Message too long, force readonly...
2020-07-30 18:14:
On 2/13/23 06:31, farhad kh wrote:
Is it possible to recover data when two nodes with all physical disks are
lost for any reason?
You have one copy of each object on each node and each node runs a MON.
If two nodes fail then the cluster will cease to function as the
remaining MON will n