As a data point: we've been running Octopus (solely for CephFS) on
Ubuntu 20.04 with 5.4.0(-122) for some time now, with packages from
download.ceph.com.
On 11/05/2023 07.12, Szabo, Istvan (Agoda) wrote:
> I can answer my own question: even the official Ubuntu repo ships the
> Octopus version by default, so it certainly works with kernel 5.
On Thu, May 11, 2023 at 7:13 AM Szabo, Istvan (Agoda)
wrote:
>
> I can answer my own question: even the official Ubuntu repo ships the
> Octopus version by default, so it certainly works with kernel 5.
>
> https://packages.ubuntu.com/focal/allpackages
Dear Xiubo,
thanks for your reply.
> BTW, did you enable the async dirop? Currently this is disabled by
> default in 4.18.0-486.el8.x86_64.
I have never heard about that option until now. How do I check it, and how do
I disable it if necessary?
I'm in meetings pretty much all day and will t
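For reference, a minimal sketch of how one might check this, assuming the RHEL
kernel carries the upstream wsync/nowsync CephFS mount options (the option
names and the remount step are assumptions, not confirmed in this thread):

    # Assumption: async dirops are governed by the wsync/nowsync mount options.
    # Look for "wsync" or "nowsync" in the options of the CephFS mount:
    grep ceph /proc/mounts

    # If "nowsync" shows up, async dirops are enabled; remounting with "wsync"
    # (or adding it to the fstab options and remounting) should disable them:
    mount -o remount,wsync /mnt/cephfs    # /mnt/cephfs is a placeholder path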
Dear Gregory,
> I would start by looking at what xattrs exist and if there's an obvious bad
> one, deleting it.
I can't see any obvious bad ones, and I also can't just delete them; they are
required for ACLs. I'm not convinced that one of the xattrs that can be dumped
with 'getfattr -d -m ".*"'
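As a side note, a small sketch of how the xattrs could be dumped in full (the
path is a placeholder, and the CephFS virtual xattr is only an assumption about
what might be relevant here):

    # Dump every xattr namespace visible to the caller; run as root so that
    # trusted.* and the ACL-backed system.* attributes are included:
    getfattr -d -m ".*" --absolute-names /path/to/broken/file

    # CephFS virtual xattrs are not enumerated by -d and have to be asked for
    # by name, e.g.:
    getfattr -n ceph.file.layout --absolute-names /path/to/broken/file

    # The ACLs themselves are easier to read via getfacl than as raw xattrs:
    getfacl /path/to/broken/file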
Dear Gregory,
sorry, I forgot one question: I would like to create an intact copy of the file
and move the "broken" one to a different location, hopefully being able to
reproduce and debug the issue with the moved one. I need to preserve all the
info that Samba attached to the file. If I do a "cp -p"
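For what it's worth, "cp -p" only preserves mode, ownership and timestamps; a
sketch of copies that also keep the xattrs/ACLs Samba relies on, assuming GNU
cp and rsync (paths are placeholders):

    # Explicitly include xattrs in the preserved attributes:
    cp --preserve=mode,ownership,timestamps,xattr /path/to/broken/file /path/to/copy

    # or use archive mode, which implies --preserve=all:
    cp -a /path/to/broken/file /path/to/copy

    # rsync can do the same; -A keeps ACLs, -X keeps other xattrs:
    rsync -aAX /path/to/broken/file /path/to/copy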
As a krbd client user, I believe that 5.4 also introduces better support for
RBD features, including fast-diff
> On May 11, 2023, at 3:59 AM, Gerdriaan Mulder wrote:
>
> As a data point: we've been running Octopus (solely for CephFS) on Ubuntu
> 20.04 with 5.4.0(-122) for some time now, with packag
Dear Xiubo,
please see also my previous e-mail about the async dirop config.
I have a bit more log output from dmesg on the file server here:
https://pastebin.com/9Y0EPgDD . This covers a reboot after the one in my
previous e-mail as well as another failure at the end. When I checked around 16:30
Hi Everyone,
This email was originally posted to d...@ceph.io, but Marc mentioned that
he thought this would be useful to post on the user list so I'm
re-posting here as well.
David Orman mentioned in the CLT meeting this morning that there are a
number of people on the mailing list asking a
On Tue, May 9, 2023 at 3:11 PM Tarrago, Eli (RIS-BCT)
wrote:
>
> East and West Clusters have been upgraded to quincy, 17.2.6.
>
> We are still seeing replication failures. Deep diving the logs, I found the
> following interesting items.
>
> What is the best way to continue to troubleshoot this?
The ceph version is v16.2.10.
But I had already disabled the ratio warning with "ceph config set mon
mon_warn_pg_not_deep_scrubbed_ratio 0".
The output shows:
[root@smd-node01 deeproute]# ceph config get mon
mon_warn_pg_not_deep_scrubbed_ratio
0.00
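If that override was only meant as a test, a sketch of how to inspect and then
drop it again, assuming the usual "ceph config" CLI (the shipped default is
believed to be 0.75 of the deep-scrub interval):

    # Show the current value (0 here, which suppresses the "not deep-scrubbed
    # in time" grace-ratio warning):
    ceph config get mon mon_warn_pg_not_deep_scrubbed_ratio

    # Remove the override to fall back to the compiled-in default instead of
    # hard-coding a number:
    ceph config rm mon mon_warn_pg_not_deep_scrubbed_ratio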
Hello,
I have found out that the issue seems to be caused by this change:
https://github.com/ceph/ceph/pull/47207
When I commented out the change and restored the previous value, the dashboard
worked as expected.
Ondrej
Hey all,
I recently had two OSDs fail. The first one I just removed and expected
replication to recover the data for me. Replication froze, and I restarted the
OSDs after seeing heartbeat failures. That allowed replication to resume, but
one OSD's RocksDB became corrupted, showing this error when I try to b
Hi everyone,
I invite you to Ceph Days Vancouver on June 15, co-located with the OpenInfra
Summit. Ceph Days are one-day events dedicated to multiple breakout and BoF
sessions with a wide range of topics around Ceph.
You can receive a nice discount to attend Ceph Days and the OpenInfra Summit i
Hi folks,
Hi guys,
With the Quincy release, does anyone know how multisite sync deals with
multipart uploads? I mean the part objects of incomplete multipart uploads. Are
those objects also synced over, either with full sync or incremental sync? I
did a quick experiment and noticed that these objects a
sync doesn't distinguish between multipart and regular object uploads.
once a multipart upload completes, sync will replicate it as a single
object using an s3 GetObject request
replicating the parts individually would have some benefits. for
example, when sync retries are necessary, we might only
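A rough sketch of the kind of experiment described above, assuming the AWS CLI
against the primary zone and radosgw-admin on the secondary (bucket, key, file
names and the upload id are placeholders):

    # Start a multipart upload on the primary zone but never complete it:
    aws s3api create-multipart-upload --bucket test-bucket --key big-object
    aws s3api upload-part --bucket test-bucket --key big-object \
        --part-number 1 --upload-id <UploadId> --body part1.bin

    # The incomplete upload (and its parts) stays visible on the primary:
    aws s3api list-multipart-uploads --bucket test-bucket

    # On the secondary zone the object should only show up after
    # complete-multipart-upload, since sync fetches it with a single GetObject:
    radosgw-admin bucket stats --bucket=test-bucket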
The following PRs should address the issues (feel free to review):
the lib64 problem on centos8: https://github.com/ceph/ceph/pull/51453
missing dependencies on cephadm image:
https://github.com/ceph/ceph-container/pull/2117
"script put" documentation: https://github.com/ceph/ceph/pull/51422
On
Along the path you mentioned, it is fixed by changing the owner of
/var/lib/ceph from root to 167:167. The cluster was deployed with a non-root
user, and the file permissions are in a bit of a mess. After the change, a
systemctl daemon-reload and restart brings it up.
For another manager on the bootstrap host, jo
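For anyone hitting the same thing, a minimal sketch of the fix described above
(the fsid and daemon name are placeholders; 167:167 is the ceph uid/gid used
inside the containers):

    # Give the ceph container user ownership of the daemon data again:
    chown -R 167:167 /var/lib/ceph

    # Reload unit definitions and restart the affected daemon, e.g. the mgr:
    systemctl daemon-reload
    systemctl restart ceph-<fsid>@mgr.<hostname>.service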
On 5/10/23 19:35, Frank Schilder wrote:
> Hi Xiubo.
> IMO evicting the corresponding client could also resolve this issue
> instead of restarting the MDS.
Yes, it can get rid of the stuck caps release request, but it will also make
any process accessing the file system crash. After a client evicti
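For completeness, a sketch of the eviction being discussed (MDS rank and client
id are placeholders), with the caveat mentioned above that the evicted client
is blocklisted and processes using the file system on that host may fail:

    # List the sessions on the MDS that holds the stuck caps and note the id:
    ceph tell mds.0 client ls

    # Evict just that session; the client stays blocklisted until it
    # reconnects or the mount is re-established:
    ceph tell mds.0 client evict id=<client-id>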
On 5/11/23 18:26, Frank Schilder wrote:
> Dear Xiubo,
> thanks for your reply.
>> BTW, did you enable the async dirop? Currently this is disabled by
>> default in 4.18.0-486.el8.x86_64.
> I have never heard about that option until now. How do I check it, and how
> do I disable it if necessary?
> I'm in
On 5/11/23 20:12, Frank Schilder wrote:
> Dear Xiubo,
> please see also my previous e-mail about the async dirop config.
> I have a bit more log output from dmesg on the file server here:
> https://pastebin.com/9Y0EPgDD .
1. [Wed May 10 16:03:06 2023] ceph: corrupt snap message from mds1
2. [