I can answer my own question: even in the official Ubuntu repo, Octopus is the
default version, so it certainly works with kernel 5.
https://packages.ubuntu.com/focal/allpackages
-Original Message-
From: Szabo, Istvan (Agoda)
Sent: Thursday, May 11, 2023 11:20 AM
To: Ceph Use
Hi,
The Octopus documentation recommends kernel 4, but yesterday we migrated our test
cluster from CentOS 7/8 to Ubuntu 20.04.6 LTS with kernel 5.4.0-148 and it seems
to be working. I just want to make sure there aren't any caveats before I move to
prod.
Thank you
Hey Frank,
On 5/10/23 21:44, Frank Schilder wrote:
The kernel message that shows up on boot on the file server in text format:
May 10 13:56:59 rit-pfile01 kernel: WARNING: CPU: 3 PID: 34 at
fs/ceph/caps.c:689 ceph_add_cap+0x53e/0x550 [ceph]
May 10 13:56:59 rit-pfile01 kernel: Modules linked in
I don't know about Octopus, but in Quincy there's a config option
rbd_move_to_trash_on_remove (default: false).
I've set this to true on my instance.
I can move an image with snapshots to the trash, but I can't purge the image
from the trash without first deleting the snapshots.
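For illustration, this is roughly how it can be set and what I see afterwards
(pool and image names below are just placeholders):

# make "rbd rm" move images to the trash instead of deleting them
ceph config set client rbd_move_to_trash_on_remove true

rbd rm mypool/myimage     # the image ends up in the trash
rbd trash ls mypool       # shows the trashed image and its id
rbd trash purge mypool    # refuses while the image still has snapshots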
On Wed, May 10, 2023 at 2
Awesome! Thanks.
What is the default for RBD images then? Is the default to delete them and
not use the trash? Or do we need a configuration option to make Ceph use
the trash?
We are using Ceph Octopus.
On Wed, May 10, 2023 at 6:33 PM Reto Gysi wrote:
> Hi
>
> For me with ceph version 17.2.6 rbd
Hi
For me, with ceph version 17.2.6, rbd doesn't allow me to delete (I've
configured delete to only move the image to trash) or purge an image that
still has snapshots. I need to delete all the snapshots first.
from man page:
rbd rm image-spec
Delete an rbd image (including all data bl
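In practice the order that works for me is roughly this (pool/image/snapshot
names are placeholders):

rbd snap ls mypool/myimage                # list remaining snapshots
rbd snap unprotect mypool/myimage@snap1   # only needed if a snapshot is protected
rbd snap purge mypool/myimage             # remove all snapshots of the image
rbd rm mypool/myimage                     # now the image can be removed (or trashed)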
Hi Janek,
All this indicates is that you have some files with binary keys that
cannot be decoded as utf-8. Unfortunately, the rados python library
assumes that omap keys can be decoded this way. I have a ticket here:
https://tracker.ceph.com/issues/59716
I hope to have a fix soon.
On Thu, May
Hello guys,
We have a question regarding snapshot management: when a protected snapshot is
created, should it be deleted when its RBD image is removed from the system?
If not, how can we list orphaned snapshots in a pool?
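Not an authoritative answer, but a rough way to look for leftover snapshots
might be something like this (pool name is a placeholder):

# list the snapshots (and their protection status) of every image in the pool
for img in $(rbd ls mypool); do
    echo "== $img =="
    rbd snap ls mypool/$img
done

# images that were moved to the trash may still carry snapshots
rbd trash ls mypool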
in /var/lib/ceph// on the host with that mgr
reporting the error, there should be a unit.run file that shows what is
being done to start the mgr as well as a few files that get mounted into
the mgr on startup, notably the "config" and "keyring" files. That config
file should include the mon host ad
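For example, on a cephadm host that looks roughly like this (the fsid and mgr
daemon name are placeholders for whatever was created there):

ls /var/lib/ceph/<fsid>/mgr.<host>.<id>/
cat /var/lib/ceph/<fsid>/mgr.<host>.<id>/unit.run   # how the mgr container is started
cat /var/lib/ceph/<fsid>/mgr.<host>.<id>/config     # should contain the mon host addresses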
Hi Gregory,
Using the more complicated rados way, I found the path. I assume you are
referring to attributes I can read with getfattr. The output of a dump is:
# getfattr -d
/mnt/cephfs/shares/rit-oil/Projects/CSP/Chalk/CSP1.A.03/99_Personal\
folders/Eugenio/Tests/Eclipse/19_imbLab/19_IMBLAB.EGRI
Hi Cephers,
These are the topics that we just covered in today's meeting:
- *Issues recording our meetings in Jitsi (Mike Perez)*
- David Orman suggested using a self-hosted Jitsi instance:
https://jitsi.github.io/handbook/docs/devops-guide/. Tested on a
single container, wit
Hi,
This cluster was deployed by cephadm 17.2.5, containerized.
It ends up in this state (no active mgr):
[root@8cd2c0657c77 /]# ceph -s
  cluster:
    id:     ad3a132e-e9ee-11ed-8a19-043f72fb8bf9
    health: HEALTH_WARN
            6 hosts fail cephadm check
            no active mgr
            1/3 mons d
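Not a definitive fix, but the first things I would check on one of the mgr
hosts are roughly these (fsid and daemon name are placeholders):

cephadm ls                                        # daemons cephadm knows about on this host
systemctl status ceph-<fsid>@mgr.<host>.<id>      # is the mgr systemd unit running?
journalctl -u ceph-<fsid>@mgr.<host>.<id> -n 100  # recent mgr container logs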
On Wed, May 10, 2023 at 7:33 AM Frank Schilder wrote:
>
> Hi Gregory,
>
> thanks for your reply. Yes, I forgot, I can also inspect the rados head
> object. My bad.
>
> The empty xattr might come from a crash of the SAMBA daemon. We export to
> windows and this uses xattrs extensively to map to w
Hi Gregory,
thanks for your reply. Yes, I forgot, I can also inspect the rados head object.
My bad.
The empty xattr might come from a crash of the Samba daemon. We export to
Windows and this uses xattrs extensively to map to Windows ACLs. It might be
possible that a crash at an inconvenient mo
This is a very strange assert to be hitting. From a code skim my best
guess is the inode somehow has an xattr with no value, but that's just
a guess and I've no idea how it would happen.
Somebody recently pointed you at the (more complicated) way of
identifying an inode path by looking at its RADOS
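For anyone following along, the rados-based lookup usually goes roughly like
this (data pool name and inode number are placeholders; double-check the
details against the disaster-recovery docs):

# the backtrace is stored in the "parent" xattr of the inode's head object
# in the CephFS data pool (object name is <inode-hex>.00000000)
rados -p <cephfs-data-pool> getxattr <inode-hex>.00000000 parent > parent.bin
ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json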
Now I tested whether an MDS fail-over to a stand-by changes anything. Unfortunately,
it doesn't. The MDS ceph-23 failed over to ceph-10, and on this new MDS I
observe the same crash/cache corruption after the fail-over was completed:
# ceph tell "mds.ceph-10" dump inode 0x20011d3e5cb
2023-05-10T16:04:09.8
The kernel message that shows up on boot on the file server in text format:
May 10 13:56:59 rit-pfile01 kernel: WARNING: CPU: 3 PID: 34 at
fs/ceph/caps.c:689 ceph_add_cap+0x53e/0x550 [ceph]
May 10 13:56:59 rit-pfile01 kernel: Modules linked in: ceph libceph
dns_resolver nls_utf8 isofs cirrus drm
For the "mds dump inode" command I could find the crash in the log; see below.
Most of the log contents is the past OPS dump from the 3 MDS restarts that
happened. It contains the 1 last OPS before the crash and I can upload the
log if someone can use it. The crash stack trace somewhat trunc
Hi all,
I have an annoying problem with a specific ceph fs client. We have a file
server on which we re-export kernel mounts via samba (all mounts with noshare
option). On one of these re-exports we have recurring problems. Today I caught
it with
2023-05-10T13:39:50.963685+0200 mds.ceph-23 (md
I'm talking about BlueStore DB+WAL caching. It's good to know the cache
tier is deprecated now; I should check why.
It's not possible because I don't have enough slots on the servers. I'm
considering buying NVMe in PCIe form.
Now I'm trying to speed up the rep 2 pool for the file size between
10K-700K mill
Hi Xiubo.
> IMO evicting the corresponding client could also resolve this issue
> instead of restarting the MDS.
Yes, it can get rid of the stuck caps release request, but it will also make
any process accessing the file system crash. After a client eviction we usually
have to reboot the server
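For context, the eviction itself is simple enough (MDS name and session id
below are placeholders); it's the fallout on the client that is the problem:

ceph tell mds.<name> client ls              # find the session id of the stuck client
ceph tell mds.<name> client evict id=<id>   # evict that session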
Hello,
I have found that the issue seems to be caused by this change:
https://github.com/ceph/ceph/pull/47207
When I commented out the change and replaced it with the previous value, the
dashboard works as expected.
Ondrej
Adding a second host worked as well after adding the ceph.pub key to
the authorized_keys of the "deployer" user.
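For the record, that is roughly (hostname is a placeholder; adjust the user to
whatever --ssh-user you bootstrapped with):

ssh-copy-id -f -i /etc/ceph/ceph.pub deployer@<newhost>   # or append the key to authorized_keys by hand
ceph orch host add <newhost>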
Quoting Eugen Block:
I used the default to create a new user, so umask is 022. And the
/tmp/var/lib/ceph directory belongs to the root user. I haven't
tried to add another ho
Thanks Thomas!
I opened this tracker: https://tracker.ceph.com/issues/59697. It should cover the
missing dependencies for luarocks on the CentOS 8 container (feel free to
add anything missing there...).
I'm still trying to figure out the lib64 issue you found.
Regarding the "script put" issue - I will add tha
Hi,
I recommend creating your own (local) container registry to have full
control over the images. I also don't use the latest tag but always a
specific version tag; that's also what the docs suggest [1]. The docs
for an isolated environment [2] briefly describe how to set up your
local
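As a rough illustration of the version-pinning part (registry name, tag and IP
are placeholders):

# bootstrap against a specific image from the local registry
cephadm --image registry.local:5000/ceph/ceph:v17.2.6 bootstrap --mon-ip <ip>

# and keep the cluster pinned to that image for later daemons/upgrades
ceph config set global container_image registry.local:5000/ceph/ceph:v17.2.6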
What about this latency issue? From what I have read here on the mailing list,
this looks bad to me, until someone from Ceph/Red Hat says it is not.
https://tracker.ceph.com/issues/58530
https://www.mail-archive.com/ceph-users@ceph.io/msg19012.html
>
> We're happy to announce the 13th backport r
I used the default to create a new user, so umask is 022. And the
/tmp/var/lib/ceph directory belongs to the root user. I haven't tried
to add another host yet. I understood that in your case it already
failed during the initial bootstrap, but I can try to add one more host.
Quoting Ben:
Hey Zakhar,
You do need to restart OSDs to bring performance back to normal anyway,
don't you? So yeah, we're not aware of a better way so far - all the
information I have is from you and Nikola. And you both tell us about
the need for restart.
Apparently there is no need to restart every OSD