Hey Venky,
thank you very much for your response!
> It would help if you could enable debug log for ceph-mgr, repeat the
> steps you mention above and upload the log in the tracker.
I have already collected log files after enabling the debug log by `ceph config
set mgr mgr/snap_schedule/log_le
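For reference, a minimal sketch of how that debug logging can be enabled and
reverted afterwards; the exact option names here are assumptions and should be
checked against the running release:

ceph config set mgr mgr/snap_schedule/log_level debug   # module-level debug output
ceph config set mgr debug_mgr 20                        # raise general ceph-mgr verbosity
ceph config rm mgr debug_mgr                            # revert once the logs are captured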
Hey Manuel,
On Thu, Jan 27, 2022 at 8:57 PM Manuel Holtgrewe wrote:
>
> OK, reconstructed with another example:
>
> -- source file system --
>
> 0|0[root@gw-1 ~]# find /data/cephfs-2/test/x2 | xargs stat
> File: /data/cephfs-2/test/x2
> Size: 1 Blocks: 0 IO Block: 65536
Hi,
thanks for the reply.
Actually, when mounting the source and the remote fs on Linux with the kernel
driver (Rocky Linux 8.5 default kernel), I can `rsync`.
Is this to be expected?
Cheers,
On Fri, Jan 28, 2022 at 10:44 AM Venky Shankar wrote:
>
> Hey Manuel,
>
> On Thu, Jan 27, 2022 at 8:57 PM Manuel H
On Fri, Jan 28, 2022 at 3:20 PM Manuel Holtgrewe wrote:
>
> Hi,
>
> thanks for the reply.
>
> Actually, when mounting the source and the remote fs on Linux with the kernel
> driver (Rocky Linux 8.5 default kernel), I can `rsync`.
You are probably running rsync with --no-perms or a custom --chmod (or
one of --
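For context, a short sketch of what those flags mean (standard rsync semantics,
nothing specific to CephFS; SRC/ and DST/ are placeholders):

rsync -Wa SRC/ DST/
# -W (--whole-file) copies files whole instead of using the delta algorithm
# -a (--archive) expands to -rlptgoD; the -p part preserves permission bits,
#   -o/-g preserve owner/group, -t preserves mtimes
# --no-perms or --chmod=... would override that -p behaviour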
On Fri, Jan 28, 2022 at 3:03 PM Sebastian Mazza wrote:
>
> Hey Venky,
>
> thank you very much for your response!
>
> > It would help if you could enable debug log for ceph-mgr, repeat the
> > steps you mention above and upload the log in the tracker.
>
>
> I have already collected log files after
I'm running rsync "-Wa", see below for a reproduction from scratch
that actually syncs as root when no permissions are given on the
directories.
-- full mount options --
172.16.62.10,172.16.62.11,172.16.62.11,172.16.62.12,172.16.62.13,172.16.62.30:/
on /data/cephfs-2 type ceph
(rw,noatime,name=sa
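A rough sketch of the kind of reproduction described above; the paths and the
destination mount point below are illustrative, not taken from the thread:

# create a directory with no permission bits set on the source file system
mkdir -p /data/cephfs-2/test/x2
chmod 000 /data/cephfs-2/test/x2
# as root, sync it to the destination file system
rsync -Wa /data/cephfs-2/test/ /data/cephfs-3/test/
# kernel client: succeeds (root bypasses the mode bits);
# user-space client: may fail with permission errors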
Hi All,
We are trying to deploy a Ceph (16.2.7) cluster in production using
cephadm. Unfortunately, we encountered the following situation.
Description
The cephadm (v16.2.7) bootstrap by default chooses the container images
quay.io/ceph/ceph:v16 and docker.io/ceph/daemon-base:latest-pacific-devel.
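If the default images are the problem, one workaround is to pin the image
explicitly; a sketch, assuming cephadm's global --image option and the
container_image config key behave as documented for this release:

# bootstrap with a pinned point release instead of the floating v16 tag
cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap --mon-ip <mon-ip>
# after bootstrap, the image used for newly deployed daemons can be pinned too
ceph config set global container_image quay.io/ceph/ceph:v16.2.7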
Hey Venky,
I would be happy if you could create the issue.
From this link: https://www.filemail.com/d/skgyuyszdlgrkxw
you can download the log file as well as my description of the problem. The txt
also includes the most interesting lines of the log.
Cheers,
Sebastian
> On 28.01.2022, at 11:07, Venk
On Fri, Jan 28, 2022 at 3:42 PM Manuel Holtgrewe wrote:
>
> I'm running rsync "-Wa", see below for a reproduction from scratch
> that actually syncs as root when no permissions are given on the
> directories.
>
> -- full mount options --
>
> 172.16.62.10,172.16.62.11,172.16.62.11,172.16.62.12,172.
OK, so there is a difference in semantics between the kernel and the user
space driver?
Which one would you consider to be desired?
From what I can see, the kernel semantics (apparently: root can do
everything) would allow syncing between file systems no matter what.
With the current user space semant
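A quick way to see the difference being discussed, assuming the same mode-000
directory is mounted once via the kernel client and once via ceph-fuse (mount
points are illustrative):

# both commands run as root against a directory with mode 000
touch /mnt/cephfs-kernel/test/x2/probe   # kernel client: allowed
touch /mnt/cephfs-fuse/test/x2/probe     # user-space client: may be denied (EACCES)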
On Fri, Jan 28, 2022 at 4:22 PM Manuel Holtgrewe wrote:
>
> OK, so there is a difference in semantics between the kernel and the user
> space driver?
Right.
>
> Which one would you consider to be desired?
The kernel driver is probably doing the right thing.
>
> From what I can see, the kernel semant
Great.
No, thank *you* for such excellent software!
On Fri, Jan 28, 2022 at 1:20 PM Venky Shankar wrote:
>
> On Fri, Jan 28, 2022 at 4:22 PM Manuel Holtgrewe wrote:
> >
> > OK, so there is a difference in semantics between the kernel and the user
> > space driver?
>
> Right.
>
> >
> > Which one would
Hey folks - We’ve been using a hack to get bind mounts into our manager
containers for various reasons. We’ve realized that this quickly breaks down
when our “hacks” don’t exist inside “cephadm” in the manager container and we
execute a “ceph orch upgrade”. Is there an official way to add a bind
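One possibly supported route, assuming a cephadm release whose service specs
accept the extra_container_args field (that field may not exist in 16.2.7, so
treat this purely as a sketch):

# hypothetical mgr spec adding a bind mount via extra container arguments
cat > mgr-spec.yaml <<'EOF'
service_type: mgr
placement:
  count: 2
extra_container_args:
  - "-v"
  - "/opt/local/hacks:/opt/hacks:ro"
EOF
ceph orch apply -i mgr-spec.yaml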
>
> Hey folks - We’ve been using a hack to get bind mounts into our manager
> containers for various reasons. We’ve realized that this quickly breaks
> down when our “hacks” don’t exist inside “cephadm” in the manager
> container and we execute a “ceph orch upgrade”. Is there an official way
> to
Point 1 (Why are we running as root?):
All Ceph containers are instantiated as root (Privileged - for "reasons"), but
the daemons inside the container run as user 167 (the "ceph" user).
I don't understand your second point; if you're saying that the "container" is
what specifies mount points, that's incorr
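A quick sanity check of that, sketched with placeholder container names:

# list the processes inside the (hypothetically named) mgr container
podman top ceph-<fsid>-mgr-<host> user pid args
# confirm the uid/gid mapping of the "ceph" user inside the container
podman exec ceph-<fsid>-mgr-<host> id ceph    # expected: uid=167(ceph) gid=167(ceph)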
On 1/26/2022 1:18 AM, Sebastian Mazza wrote:
Hey Igor,
thank you for your response!
Do you suggest disabling the HDD write caching and/or the bluefs_buffered_io
for production clusters?
Generally, the upstream recommendation is to disable disk write caching; there were
multiple complaints it
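For reference, a minimal sketch of both knobs; the device name is a placeholder
and the defaults should be checked against the running release:

hdparm -W 0 /dev/sdX                             # disable the drive's volatile write cache
smartctl -g wcache /dev/sdX                      # verify the write-cache setting
ceph config set osd bluefs_buffered_io false     # toggle buffered IO for all OSDs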
Hello Vlad,
Just some insight into how CEPHADM_STRAY_DAEMON works: This health warning
is specifically designed to point out daemons in the cluster that cephadm
is not aware of/in control of. It does this by comparing the daemons it has
cached info on (this cached info is what you see in "ceph orc
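A rough way to compare those two views (the commands exist in pacific, but
treat the exact flags as assumptions):

ceph health detail        # names the daemon(s) cephadm currently considers stray
ceph orch ps              # the daemons cephadm knows about (its cached view)
ceph orch ps --refresh    # ask cephadm to re-scan hosts instead of using the cache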
Hmm, I'm not seeing anything that could be a cause in any of that output. I
did notice, however, from your "ceph orch ls" output that none of your
services have been refreshed since the 24th. Cephadm typically tries to
refresh these things every 10 minutes, so that signals that something is quite
wrong.
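A couple of low-risk things to try when the refresh looks stuck, sketched here
on the assumption that a standby mgr is available:

ceph orch ls              # the REFRESHED column shows how stale the cached state is
ceph mgr fail             # fail over to a standby mgr, which restarts the cephadm module
ceph -W cephadm           # watch the cephadm log channel for errors after the failover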
Dear all,
Recently, there were some very specific questions regarding
reinstalling an OSD node while keeping the disks intact. The
discussion revolved around corner cases. I think I have a very easy
case:
- vanilla cluster setup with ansible playbooks
- adopted by cephadm
- latest pacific 16.2.7
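For that easy case the sequence would presumably look like the sketch below;
the host name is a placeholder, and `ceph cephadm osd activate` should be
verified against 16.2.7:

# after reinstalling the OS and re-adding the host to the cluster
ceph orch host add osd-node-01
ceph cephadm osd activate osd-node-01    # scan for existing OSD volumes and recreate the daemons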
Hi!
> Hmm, I'm not seeing anything that could be a cause in any of that output. I
> did notice, however, from your "ceph orch ls" output that none of your
> services have been refreshed since the 24th. Cephadm typically tries to
> refresh these things every 10 minutes so that signals something is
I had a situation like this, and the only operation that solved it was a full
reboot of the cluster (it was due to a watchdog alarm), but when the
cluster returned, the stray OSDs were gone.
On Fri, 28 Jan 2022, 19:32 Adam King, wrote:
> Hello Vlad,
>
> Just some insight into how CEPHADM_STRAY_DAEMO