[ceph-users] Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread William Edwards

Hi,

I need to set options from 
https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume 
these should be placed in the 'client' section in ceph.conf.
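
Something like this is what I have in mind (just a sketch; the option names 
are from that reference page, the values are examples I'd tune later):

  [client]
  # number of inodes kept in the client's metadata cache
  client cache size = 16384
  # cap readahead for future reads (bytes)
  client readahead max bytes = 8388608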


The documentation for Nautilus says that ceph.conf should be placed when 
FUSE is used, see: 
https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ . However, 
ceph.conf is not mentioned on 
https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver . 
Therefore, the clients don't currently have an /etc/ceph/ceph.conf.


In contrast, the documentation for Pacific says that there **must** be a 
ceph.conf in any case: https://docs.ceph.com/en/latest/cephfs/mount 
-prerequisites/#general-pre-requisite-for-mounting-cephfs


Newer Ceph versions contain the command 'ceph config 
generate-minimal-conf'. I can deduce from the command's code what 
ceph.conf on the client should look like: 
https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423


L428: [global]
L429: fsid
L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1' 
entail; I guess I'll see)

L449: newline
L450 - L458: This is deduced from 
https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98 
. get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE 
or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in 
master; so I'm not sure which options have those flags.


So the resulting config would contain the global section with 'fsid' and 
'mon_host', my custom options in 'client', and possibly 'keyring'.
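
In other words, I'd expect the client-side ceph.conf to end up looking 
roughly like this (placeholder values, not my real fsid/monitors):

  [global]
  fsid = 00000000-0000-0000-0000-000000000000
  mon_host = 10.0.0.1,10.0.0.2,10.0.0.3
  # possibly:
  keyring = /etc/ceph/ceph.client.foo.keyring

  [client]
  client cache size = 16384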


Questions:

- Is it acceptable to use a ceph.conf on the kernel client when using a 
Nautilus cluster? It can be specified as the 'conf' mount option, but as 
the documentation barely mentions it for kernel clients, I'm not 100% 
sure.

- Is my evaluation of the 'minimal' config correct?
- Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF flags? 
/ Where are flags set?


The cluster is running Ceph 14.2.22. The clients are running Ceph 
12.2.11. All clients use the kernel client.


--
With kind regards,

William Edwards

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread Konstantin Shalygin
Hi,

> On 3 Feb 2022, at 14:01, William Edwards  wrote:
> 
> - Is it acceptable to use a ceph.conf on the kernel client when using a 
> Nautilus cluster? 


If you use the kernel client, you don't need ceph.conf.

Just set up fstab like this (this example is for an msgr2-only cluster), 
e.g. with a CentOS Stream kernel:

172.16.16.2:3300,172.16.16.3:3300,172.16.16.4:3300:/folder /srv/folder ceph 
name=client_name,secret=,dirstat,ms_mode=prefer-crc,_netdev
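
If you don't want the key itself in fstab, a secretfile should also work 
(a sketch; assumes the key is saved to a root-readable file):

  172.16.16.2:3300,172.16.16.3:3300,172.16.16.4:3300:/folder /srv/folder ceph name=client_name,secretfile=/etc/ceph/client_name.secret,dirstat,ms_mode=prefer-crc,_netdev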


Good luck,
k
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread William Edwards

Hi,

Konstantin Shalygin wrote on 2022-02-03 12:09:

Hi,


On 3 Feb 2022, at 14:01, William Edwards wrote:
- Is it acceptable to use a ceph.conf on the kernel client when
using a Nautilus cluster?


If you use the kernel client, you don't need ceph.conf.


That's what the documentation implies, but...



Just set up fstab like this (this example is for an msgr2-only cluster),
e.g. with a CentOS Stream kernel:

172.16.16.2:3300,172.16.16.3:3300,172.16.16.4:3300:/folder /srv/folder
ceph
name=client_name,secret=,dirstat,ms_mode=prefer-crc,_netdev


... the options I want to set from 
https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ aren't 
listed as possible mount options at 
https://docs.ceph.com/en/nautilus/man/8/mount.ceph/#options . 'conf' is.
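
For example, I'd expect something along these lines to work (untested 
sketch; the monitor address and client name are placeholders):

  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=foo,conf=/etc/ceph/ceph.conf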




Good luck,
k


--
With kind regards,

William Edwards

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

2022-02-03 Thread Adam King
Hi Arun,

As you pointed out in your message, those containers whose image name is
using a container digest are the same container as the two using the tag
(you know for sure because the image ids in "ceph orch ps" don't differ for
those daemons). The reason for this difference is that the first mgr and
mon are deployed directly by bootstrap while all the other daemons were
deployed by the cephadm mgr module later on. The cephadm mgr module handles
converting image tags to digests, so those first two daemons aren't
deployed using the digest name, but ultimately this should be irrelevant
because, as mentioned before, they're actually the same image in this case. So,
to sort of answer your question, there is only one image being used in the
cluster (the one specified with --image in bootstrap) and the only
difference between that first mgr and mon and all the other daemons is
purely superficial. There is no need to use upgrade on the cluster right
after deploying when using the --image flag. Unfortunately, I can't really
speak to this specific mon health warning. Maybe something related to the
config file passed? It's possible that simply redeploying that mon would have
brought it in line with the others and cleared the health warning. Not really
sure on that front.

- Adam King

On Thu, Feb 3, 2022 at 2:24 AM Arun Vinod  wrote:

> Hi Adam,
>
> Big thanks for the responses and for clarifying the global usage of the
> --image parameter. Even though I gave --image during bootstrap, only the mgr &
> mon daemons on the bootstrap host are created with that image, and the rest
> of the daemons are created with the daemon-base image, as I mentioned
> earlier.
>
> So, there are two images coming into action here. The first one can be
> controlled with the --image parameter in bootstrap (which worked when
> supplied in front of the bootstrap keyword).
> The second container image is controlled by the variable 'container_image',
> which is set to 'docker.io/ceph/daemon-base:latest-pacific-devel' by
> default.
> Even though it can be modified at runtime after bootstrap, the existing
> daemons will not be modified. But that case can be handled with the 'ceph
> orch upgrade' command, like you mentioned at first.
> However, we observed that if we mention this variable in the bootstrap
> config file, all the daemons are created with the mentioned image from
> bootstrap itself.
>
> So, the takeaway is: if we specify the first image using the '--image'
> argument to the bootstrap command and the second image using the variable
> 'container_image' in the bootstrap config file, all daemons will be created
> with the same image.
>
> So the question is: does cephadm really require two images?
>
> Also, one more observation: even though I gave the same image in both of
> the above provisions, I can see a difference among daemons of the same type
> created on different hosts (even though all daemons use a single image in
> effect).
>
> The following is the result of a cluster created on 3 hosts. The bootstrap
> command is below (the rest of the services are deployed using ceph orch):
>
> 'sudo cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap
> --skip-monitoring-stack --mon-ip 10.175.41.11 --cluster-network
> 10.175.42.0/24 --ssh-user ceph_deploy --ssh-private-key
> /home/ceph_deploy/.ssh/id_rsa --ssh-public-key
> /home/ceph_deploy/.ssh/id_rsa.pub --config
> /home/ceph_deploy/ceph_bootstrap/ceph.conf --initial-dashboard-password
> Qwe4Rt6D33 --dashboard-password-noupdate --no-minimize-config'
>
> [root@hcictrl01 stack_orchestrator]# ceph orch ls
> NAME        PORTS   RUNNING  REFRESHED  AGE  PLACEMENT
> crash               3/3      9m ago     15m  *
> mds.cephfs          3/3      9m ago     9m   hcictrl02;hcictrl03;hcictrl01;count:3
> mgr                 3/3      9m ago     13m  hcictrl02;hcictrl03;hcictrl01;count:3
> mon                 3/5      9m ago     15m  count:5
> osd                 8        9m ago     -
> rgw.rgw     ?:7480  3/3      9m ago     9m   hcictrl02;hcictrl03;hcictrl01;count:3
>
>
> [root@hcictrl01 stack_orchestrator]# ceph orch ps
> NAME                         HOST       PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
> crash.hcictrl01              hcictrl01         running (15m)  9m ago     15m    6983k        -  16.2.7   231fd40524c4  f6f866f4be92
> crash.hcictrl02              hcictrl02         running (14m)  9m ago     14m    6987k        -  16.2.7   231fd40524c4  1cb62e191c07
> crash.hcictrl03              hcictrl03         running (14m)  9m ago     14m    6995k        -  16.2.7   231fd40524c4  3e03f99065c0
> mds.cephfs.hcictrl01.vuamjy  hcictrl01         running (10m)  9m ago     10m    13.0M        -  16.2.7   231fd40524c4  9b3aeab68115
> mds.cephfs.hcictrl02.myohpi  hcictrl02         running (10m)  9m ago     10m    15.6M        -  16.2.7   231fd40524c4  5cded1208028
> mds.cephfs.hcictrl03.jziler  hcictrl03         running (10m)  9m ago     10m

[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread Jeff Layton
On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:
> Hi,
> 
> I need to set options from 
> https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume 
> these should be placed in the 'client' section in ceph.conf.
> 
> The documentation for Nautilus says that ceph.conf should be placed when 
> FUSE is used, see: 
> https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ . However, 
> ceph.conf is not mentioned on 
> https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver . 
> Therefore, the clients don't currently have an /etc/ceph/ceph.conf.
> 
> In contrast, the documentation for Pacific says that there **must** be a 
> ceph.conf in any case: https://docs.ceph.com/en/latest/cephfs/mount 
> -prerequisites/#general-pre-requisite-for-mounting-cephfs
> 
> Newer Ceph versions contain the command 'ceph config 
> generate-minimal-conf'. I can deduce from the command's code what 
> ceph.conf on the client should look like: 
> https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423
> 
> L428: [global]
> L429: fsid
> L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1' 
> entail; I guess I'll see)
> L449: newline
> L450 - L458: This is deduced from 
> https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98
>  
> . get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE 
> or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in 
> master; so I'm not sure which options have those flags.
> 
> So the resulting config would contain the global section with 'fsid' and 
> 'mon_host', my custom options in 'client', and possibly 'keyring'.
> 
> Questions:
> 
> - Is it acceptable to use a ceph.conf on the kernel client when using a 
> Nautilus cluster? It can be specified as the 'conf' mount option, but as 
> the documentation barely mentions it for kernel clients, I'm not 100% 
> sure.
> - Is my evaluation of the 'minimal' config correct?
> - Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF flags? 
> / Where are flags set?
> 
> The cluster is running Ceph 14.2.22. The clients are running Ceph 
> 12.2.11. All clients use the kernel client.
> 

The in-kernel client itself does not pay any attention to ceph.conf. The
mount helper program (mount.ceph) will look at the ceph configs and
keyrings to search for mon addresses and secrets for mounting if you
don't provide them in the device string and mount options.
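
For example, with /etc/ceph/ceph.conf and a keyring for client.foo in place
on the client, something like this lets the helper look up the secret for
you (a sketch; exact behaviour depends on the mount.ceph version):

  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=foo

But none of the client_* tunables from ceph.conf are applied to the mount
itself.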

-- 
Jeff Layton 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread William Edwards

Hi,

Jeff Layton wrote on 2022-02-03 14:45:

On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:

Hi,

I need to set options from
https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume
these should be placed in the 'client' section in ceph.conf.

The documentation for Nautilus says that ceph.conf should be placed when
FUSE is used, see:
https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ . However,
ceph.conf is not mentioned on
https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver .
Therefore, the clients don't currently have an /etc/ceph/ceph.conf.

In contrast, the documentation for Pacific says that there **must** be a
ceph.conf in any case: https://docs.ceph.com/en/latest/cephfs/mount
-prerequisites/#general-pre-requisite-for-mounting-cephfs

Newer Ceph versions contain the command 'ceph config
generate-minimal-conf'. I can deduce from the command's code what
ceph.conf on the client should look like:
https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423

L428: [global]
L429: fsid
L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1'
entail; I guess I'll see)
L449: newline
L450 - L458: This is deduced from
https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98
. get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE
or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in
master; so I'm not sure which options have those flags.

So the resulting config would contain the global section with 'fsid' and
'mon_host', my custom options in 'client', and possibly 'keyring'.

Questions:

- Is it acceptable to use a ceph.conf on the kernel client when using a
Nautilus cluster? It can be specified as the 'conf' mount option, but as
the documentation barely mentions it for kernel clients, I'm not 100%
sure.
- Is my evaluation of the 'minimal' config correct?
- Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF flags?
/ Where are flags set?

The cluster is running Ceph 14.2.22. The clients are running Ceph
12.2.11. All clients use the kernel client.



The in-kernel client itself does not pay any attention to ceph.conf. The
mount helper program (mount.ceph) will look at the ceph configs and
keyrings to search for mon addresses and secrets for mounting if you
don't provide them in the device string and mount options.


Are you saying that the options from 
https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ won't take 
effect when using the kernel client?


--
With kind regards,

William Edwards

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread Jeff Layton
On Thu, 2022-02-03 at 15:26 +0100, William Edwards wrote:
> Hi,
> 
> Jeff Layton wrote on 2022-02-03 14:45:
> > On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:
> > > Hi,
> > > 
> > > I need to set options from
> > > https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume
> > > these should be placed in the 'client' section in ceph.conf.
> > > 
> > > The documentation for Nautilus says that ceph.conf should be placed 
> > > when
> > > FUSE is used, see:
> > > https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ . 
> > > However,
> > > ceph.conf is not mentioned on
> > > https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver .
> > > Therefore, the clients don't currently have an /etc/ceph/ceph.conf.
> > > 
> > > In contrast, the documentation for Pacific says that there **must** be 
> > > a
> > > ceph.conf in any case: https://docs.ceph.com/en/latest/cephfs/mount
> > > -prerequisites/#general-pre-requisite-for-mounting-cephfs
> > > 
> > > Newer Ceph versions contain the command 'ceph config
> > > generate-minimal-conf'. I can deduce from the command's code what
> > > ceph.conf on the client should look like:
> > > https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423
> > > 
> > > L428: [global]
> > > L429: fsid
> > > L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1'
> > > entail; I guess I'll see)
> > > L449: newline
> > > L450 - L458: This is deduced from
> > > https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98
> > > . get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE
> > > or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in
> > > master; so I'm not sure which options have those flags.
> > > 
> > > So the resulting config would contain the global section with 'fsid' 
> > > and
> > > 'mon_host', my custom options in 'client', and possibly 'keyring'.
> > > 
> > > Questions:
> > > 
> > > - Is it acceptable to use a ceph.conf on the kernel client when using 
> > > a
> > > Nautilus cluster? It can be specified as the 'conf' mount option, but 
> > > as
> > > the documentation barely mentions it for kernel clients, I'm not 100%
> > > sure.
> > > - Is my evaluation of the 'minimal' config correct?
> > > - Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF 
> > > flags?
> > > / Where are flags set?
> > > 
> > > The cluster is running Ceph 14.2.22. The clients are running Ceph
> > > 12.2.11. All clients use the kernel client.
> > > 
> > 
> > The in-kernel client itself does not pay any attention to ceph.conf. 
> > The
> > mount helper program (mount.ceph) will look at that ceph configs and
> > keyrings to search for mon addresses and secrets for mounting if you
> > don't provide them in the device string and mount options.
> 
> Are you saying that the options from 
> https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ won't take 
> effect when using the kernel client?
> 

Yes. Those are ignored by the kernel client.
-- 
Jeff Layton 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Automatic OSD creation / Floating IP for ceph dashboard

2022-02-03 Thread Ricardo Alonso
Is there at least a way to blacklist certain devices, so ceph won't try to
add them?

Or should I completely disable ceph orch to stop it trying to add new
devices all the time?
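
For example, I was hoping a service spec along these lines (a rough,
untested sketch based on my reading of the drive-group docs; the exact
layout may differ between releases) would let me restrict OSD creation to
an explicit device list, or stop it entirely with 'unmanaged':

  service_type: osd
  service_id: only_these_disks
  unmanaged: true          # don't create OSDs automatically
  placement:
    host_pattern: '*'
  spec:
    data_devices:
      paths:
        - /dev/sdb
        - /dev/sdc

applied with 'ceph orch apply -i osd_spec.yaml'.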

On Mon, Jan 31, 2022 at 9:59 AM Ricardo Alonso wrote:

> Hey all,
>
> As much as I'm enjoying this discussion, it has strayed completely from my
> original question:
>
> 
>
> How to stop the automatic OSD creation from Ceph orchestrator?
>
>
> 
>
> The problem happens because using cinderlib, ovirt uses krbd (not librbd)
> and because of this, the kernel (and Ceph orch) sees the disk. If there's
> no partition, Ceph tries to add it as an OSD, but fails, leaving the
> cluster in WARN state.
>
> The solution stated in the manual doesn't work:
>
> # ceph orch apply osd --all-available-devices --unmanaged=true
>
>
> Besides this issue, Cinderlib is working pretty decent:
> - Disk creation/expansion works
> - live machine migration works
> - snapshot works
>
> Of course there are missing items, like:
>
> - live storage migration
> - disk moving from/to image storage domains (only copy works)
> - statistics from the pool ( like used/available space)
>
> But in general, it's production ready.
>
>
> /Ricardo
>
> On Mon, 31 Jan 2022, 09:37 Konstantin Shalygin,  wrote:
>
>> Hi,
>>
>> On 31 Jan 2022, at 11:38, Marc  wrote:
>>
>> This is incorrect. I am using live migration with Nautilus and stock
>> kernel on CentOS7
>>
>>
>>
>> Marc, I think that you are confusing live migration of virtual machines
>> [1] and live migration of RBD images [2] inside the cluster (between pools,
>> for example) when the client is running
>>
>>
>> [1] https://libvirt.org/migration.html
>> [2]
>> https://docs.ceph.com/en/latest/rbd/rbd-live-migration/#image-live-migration
>>
>> k
>>
>

-- 
Ricardo Alonso
ricardoalon...@gmail.com
+44 7340-546916 - UK
+55 (31) 4042-0266 - Brazil
Skype: ricardoalonso
GPG Fingerprint: FC7E 4A5F B7A4
87F4 6876 5325 D95F BFBF B7AC EE54
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread William Edwards

Hi,

Jeff Layton wrote on 2022-02-03 15:36:

On Thu, 2022-02-03 at 15:26 +0100, William Edwards wrote:

Hi,

Jeff Layton wrote on 2022-02-03 14:45:
> On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:
> > Hi,
> >
> > I need to set options from
> > https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume
> > these should be placed in the 'client' section in ceph.conf.
> >
> > The documentation for Nautilus says that ceph.conf should be placed
> > when
> > FUSE is used, see:
> > https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ .
> > However,
> > ceph.conf is not mentioned on
> > https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver .
> > Therefore, the clients don't currently have an /etc/ceph/ceph.conf.
> >
> > In contrast, the documentation for Pacific says that there **must** be
> > a
> > ceph.conf in any case: https://docs.ceph.com/en/latest/cephfs/mount
> > -prerequisites/#general-pre-requisite-for-mounting-cephfs
> >
> > Newer Ceph versions contain the command 'ceph config
> > generate-minimal-conf'. I can deduce from the command's code what
> > ceph.conf on the client should look like:
> > https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423
> >
> > L428: [global]
> > L429: fsid
> > L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1'
> > entail; I guess I'll see)
> > L449: newline
> > L450 - L458: This is deduced from
> > 
https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98
> > . get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE
> > or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in
> > master; so I'm not sure which options have those flags.
> >
> > So the resulting config would contain the global section with 'fsid'
> > and
> > 'mon_host', my custom options in 'client', and possibly 'keyring'.
> >
> > Questions:
> >
> > - Is it acceptable to use a ceph.conf on the kernel client when using
> > a
> > Nautilus cluster? It can be specified as the 'conf' mount option, but
> > as
> > the documentation barely mentions it for kernel clients, I'm not 100%
> > sure.
> > - Is my evaluation of the 'minimal' config correct?
> > - Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF
> > flags?
> > / Where are flags set?
> >
> > The cluster is running Ceph 14.2.22. The clients are running Ceph
> > 12.2.11. All clients use the kernel client.
> >
>
> The in-kernel client itself does not pay any attention to ceph.conf.
> The
> mount helper program (mount.ceph) will look at that ceph configs and
> keyrings to search for mon addresses and secrets for mounting if you
> don't provide them in the device string and mount options.

Are you saying that the options from
https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ won't take
effect when using the kernel client?



Yes. Those are ignored by the kernel client.


Thanks. I was hoping to set 'client cache size'. Is there any other way 
to set it when using the kernel client? I doubt switching to FUSE will 
help in solving the performance issue I'm trying to tackle (which is 
what I want to set 'client cache size' for :-) ).


--
With kind regards,

William Edwards

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

2022-02-03 Thread Jeff Layton
On Thu, 2022-02-03 at 16:52 +0100, William Edwards wrote:
> Hi,
> 
> Jeff Layton wrote on 2022-02-03 15:36:
> > On Thu, 2022-02-03 at 15:26 +0100, William Edwards wrote:
> > > Hi,
> > > 
> > > Jeff Layton wrote on 2022-02-03 14:45:
> > > > On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:
> > > > > Hi,
> > > > > 
> > > > > I need to set options from
> > > > > https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume
> > > > > these should be placed in the 'client' section in ceph.conf.
> > > > > 
> > > > > The documentation for Nautilus says that ceph.conf should be placed
> > > > > when
> > > > > FUSE is used, see:
> > > > > https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ .
> > > > > However,
> > > > > ceph.conf is not mentioned on
> > > > > https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver .
> > > > > Therefore, the clients don't currently have an /etc/ceph/ceph.conf.
> > > > > 
> > > > > In contrast, the documentation for Pacific says that there **must** be
> > > > > a
> > > > > ceph.conf in any case: https://docs.ceph.com/en/latest/cephfs/mount
> > > > > -prerequisites/#general-pre-requisite-for-mounting-cephfs
> > > > > 
> > > > > Newer Ceph versions contain the command 'ceph config
> > > > > generate-minimal-conf'. I can deduce from the command's code what
> > > > > ceph.conf on the client should look like:
> > > > > https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423
> > > > > 
> > > > > L428: [global]
> > > > > L429: fsid
> > > > > L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1'
> > > > > entail; I guess I'll see)
> > > > > L449: newline
> > > > > L450 - L458: This is deduced from
> > > > > https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98
> > > > > . get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE
> > > > > or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in
> > > > > master; so I'm not sure which options have those flags.
> > > > > 
> > > > > So the resulting config would contain the global section with 'fsid'
> > > > > and
> > > > > 'mon_host', my custom options in 'client', and possibly 'keyring'.
> > > > > 
> > > > > Questions:
> > > > > 
> > > > > - Is it acceptable to use a ceph.conf on the kernel client when using
> > > > > a
> > > > > Nautilus cluster? It can be specified as the 'conf' mount option, but
> > > > > as
> > > > > the documentation barely mentions it for kernel clients, I'm not 100%
> > > > > sure.
> > > > > - Is my evaluation of the 'minimal' config correct?
> > > > > - Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF
> > > > > flags?
> > > > > / Where are flags set?
> > > > > 
> > > > > The cluster is running Ceph 14.2.22. The clients are running Ceph
> > > > > 12.2.11. All clients use the kernel client.
> > > > > 
> > > > 
> > > > The in-kernel client itself does not pay any attention to ceph.conf.
> > > > The
> > > > mount helper program (mount.ceph) will look at that ceph configs and
> > > > keyrings to search for mon addresses and secrets for mounting if you
> > > > don't provide them in the device string and mount options.
> > > 
> > > Are you saying that the options from
> > > https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ won't take
> > > effect when using the kernel client?
> > > 
> > 
> > Yes. Those are ignored by the kernel client.
> 
> Thanks. I was hoping to set 'client cache size'. Is there any other way 
> to set it when using the kernel client? I doubt switching to FUSE will 
> help in solving the performance issue I'm trying to tackle (which is 
> what I want to set 'client cache size' for :-) ).
> 

No, not really.

We don't limit the amount of pagecache in use by a particular mount in
the kernel. If you want to limit the amount of pagecache in use, then
you have to tune generic VM settings like the /proc/sys/vm/dirty_*
settings.
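
For instance (example values only; the right numbers depend on how much RAM
the machine has and on the workload):

  sysctl -w vm.dirty_background_bytes=268435456   # start background writeback at 256M of dirty data
  sysctl -w vm.dirty_bytes=1073741824             # block writers once 1G of data is dirty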

Alternately you can investigate cgroups if you want to limit the amount
of memory a particular application is allowed to dirty at a time.
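
Roughly along these lines with cgroup v2 (a sketch; the path, the limit and
$APP_PID are placeholders):

  mkdir /sys/fs/cgroup/cephfs-app
  echo 4G > /sys/fs/cgroup/cephfs-app/memory.high   # reclaim/throttle above 4G
  echo $APP_PID > /sys/fs/cgroup/cephfs-app/cgroup.procs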
-- 
Jeff Layton 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

2022-02-03 Thread Arun Vinod
Hi Adam,

Thanks for reviewing the long output.

Like you said, it makes total sense now since the first mon and mgr are
created by cephadm bootstrap and the rest of the daemons by the mgr module.

But even if I gave the --image flag with bootstrap, the daemons created by the
mgr module are using the daemon-base image, in our case
'docker.io/ceph/daemon-base:latest-pacific-devel'.
I guess this is because the mgr daemon takes into consideration the
configuration parameter 'container_image', whose default value is
'docker.io/ceph/daemon-base:latest-pacific-devel'.

What we guess is that even if we provide the --image flag in cephadm bootstrap,
cephadm is not updating the variable container_image with this value.
Hence, all the remaining daemons are getting created using the
daemon-base image.

Below is the value of config 'container_image' after bootstrapping with
--image flag provided.

[root@hcictrl01 stack_orchestrator]# ceph-conf -D | grep -i container_image
container_image = docker.io/ceph/daemon-base:latest-pacific-devel

However, one workaround is to provide this config in the initial bootstrap
config file and pass it to cephadm bootstrap using the --config flag, which
updates the image name so that all the daemons are created with the same
image.
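
Concretely, the workaround is an extra line like this in the bootstrap
config file (shown under [global] here; where exactly the option belongs may
need adjusting):

  [global]
  container_image = quay.io/ceph/ceph:v16.2.7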

Also, the non-uniform behaviour of the first mon, even though it was created
using the same image, is quite surprising. I double-checked the
configuration of all mons and could not find a major difference between the
first and the remaining mons. I tried to reconfigure the first mon, which
ended up in the same corner. However, redeploying that specific mon with the
command 'ceph orch redeploy  quay.io/ceph/ceph:v16.2.7' caused the first mon
to also show the same warning as the rest, as it got redeployed by the mgr.

Are we expecting any difference between a mon deployed by cephadm
bootstrap and a mon deployed by the mgr, even if we're using the same image?
The only hint that there might be a difference between the first mon and the
rest of the mons is the absence of the warning on the first mon.

Thanks again Adam for checking this. Your insights into this will be highly
appreciated.

Thanks and Regards,
Arun Vinod
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Error-405!! Ceph( version 17.0.0 - Quincy)S3 bucket replication api not working

2022-02-03 Thread Casey Bodley
it looks like the PutBucketReplication API is disabled and returns 405
until you do some extra setup with radosgw-admin.
https://docs.ceph.com/en/pacific/radosgw/multisite-sync-policy/ should
provide some guidance there
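
roughly, the zonegroup-level setup from that page looks like this (a sketch;
double-check the flags against the docs for your build and run it on the
master zone):

  radosgw-admin sync group create --group-id=group1 --status=allowed
  radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror \
      --flow-type=symmetrical --zones=us-east,us-west
  radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
      --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
  radosgw-admin period update --commit

once a zonegroup-level policy exists, the per-bucket PutBucketReplication
calls should stop returning 405.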

this feature has been 'experimental' since it was added for the
octopus release, and it's unfortunate that nobody has worked on it
since. the feature doesn't have any regression tests, so things have
broken since it was written. some of the crash bugs are tracked in:

https://tracker.ceph.com/issues/48418
https://tracker.ceph.com/issues/48415
https://tracker.ceph.com/issues/52044

On Thu, Feb 3, 2022 at 12:19 PM Shraddha Ghatol wrote:
>
> Hello All,
>
>
> I would like to experiment with the S3 replication API, but I am getting a
> MethodNotAllowed error. I am using Quincy, Rados version 17.0.0.
>
>
> I am using POSTMAN to generate curl request. Other S3 APIs like tagging, 
> versioning are working fine on this setup.
>
>
> Is there anything I am missing?
>
> Following is the curl request :
>
>
>  curl --location --request PUT 
> 'ssc-vm-g4-rhev4-0621.colo.seagate.com:8000/shr-bucket?replication' \
> --header 'X-Amz-Content-Sha256: 
> beaead3198f7da1e70d03ab969765e0821b24fc913697e929e726aeaebf0eba3' \
> --header 'X-Amz-Date: 20220203T130542Z' \
> --header 'Authorization: AWS4-HMAC-SHA256 
> Credential=TGSNZ9CTOIFJ0K9AIJYF/20220203/us-east-1/s3/aws4_request, 
> SignedHeaders=host;x-amz-content-sha256;x-amz-date, 
> Signature=f180ae972982f0a3ee8299f7fc84d0416911bb20e4a31ea6f143e4a86a1cd93e' \
> --header 'Content-Type: text/plain' \
> --data-raw ' xmlns="http://s3.amazonaws.com/doc/2006-03-01/";>arn:aws:iam:::role/application_abc/component_xyz/S3Access1EnabledRule-1us-east1Disabledhem-bucketus-west'
>
> Note: Added rgw extension fields also -
> us-east
> hem-bucketus-west
>
> But getting following response-
>
> 
> 
> MethodNotAllowed
> tx08fc374d29349db5a-0061fbd325-16fcf-us-east
> 16fcf-us-east-us
> 
>
> Getting  following logs in log file::
>
> 2022-02-03T02:48:03.736-0700 7efc0bfab700  1 req 8851738751138730485 
> 0.0s handler->ERRORHANDLER: err_no=-2003 new_err_no=-2003
> 2022-02-03T02:48:03.736-0700 7efc0bfab700  2 req 8851738751138730485 
> 0.0s http status=405
> 2022-02-03T02:48:03.736-0700 7efc0bfab700  1 == req done 
> req=0x7efcb18de780 op status=0 http_status=405 latency=0.0s ==
>
> Please refer following attachments for reference-
> screenshot of ceph version and running instances for status.
> screenshot of error logs generated.
>
> Regards,
> Shraddha Ghatol
> shraddha.j.gha...@seagate.com
> shraddhagha...@gmail.com
>
>
>
> Seagate Internal
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

2022-02-03 Thread Adam King
>
> But, even if I gave --image flag with bootstrap the daemons created by mgr
> module are using the daemon-base image, in our case its '
> docker.io/ceph/daemon-base:latest-pacific-devel'.
> Which I guess is because, mgr daemon takes into consideration the
> configuration parameter 'container_image', whose default value is '
> docker.io/ceph/daemon-base:latest-pacific-devel'.
> What we guess is even if we provide --image flag in cephadm bootstrap,
> cephadm is not updating the variable container_image with this value.
> Hence, all the remaining daemons are getting created using
> daemon-base image.


This is not how it's supposed to work. If you provide "--image
" to bootstrap, all ceph daemons (including the mon/mgr deployed during
bootstrap AND the daemons deployed by the cephadm mgr module afterwards)
should be deployed with the image provided to the "--image" parameter.
You shouldn't need to set any config options or do anything extra to get
that to work. If you're providing "--image" to bootstrap and this is not
happening, there is a serious bug (not counting the fact that the bootstrap
mgr/mon show the tag while others show the digest; that's purely cosmetic).
If that's the case, could you post the full bootstrap output and the
contents of the config file you're passing to bootstrap so we can debug?
I've never seen this issue before
anywhere else so I have no way to recreate it (for me passing --image in
bootstrap causes all ceph daemons to be deployed with that image until I
explicitly specify another image through upgrade or other means).

Also, regarding the non-uniform behaviour of the first mon even if created
> using the same image is quite surprising. I double checked the
> configuration of all mon, and could not find a major difference between
> first and remaining mons. I tried to reconfigt the first mon which ended up
> in the same corner. However, redeploying the specific mon with command
> 'ceph orch redeploy  quay.io/ceph/ceph:v16.2.7, caused the first
> mon also showing the same warning as rest, as it got redeployed by the mgr.


Are we expecting any difference between the mon deployed by cephadm
> bootstrap and mon deployed by mgr, even if we'r using the same image?
> We have only the lack of warning in the first mon to state that there
> might be a difference in the first mon and rest of the mons.


I could maybe see some difference if you add specific config options as the
mon deployed during bootstrap is deployed with basic settings. Since we
can't infer config settings into the mon store until there is an existing
monitor this is sort of necessary and could maybe cause some differences
between that mon and others. This should be resolved by a redeploy of the
mon. Can you tell me if you're setting any mon related config options in
the conf you're providing to bootstrap (or if you've set any config options
elsewhere)? It may be that cephadm needs to actively redeploy the mon if
certain options are provided, and I can look into it if I know which
sorts of config options are causing the health warning. I haven't seen that
health warning in my own testing (on the bootstrap mon or those deployed by
the mgr module) so I'd need to know what's causing it to come about to come
up with a good fix.


- Adam King

On Thu, Feb 3, 2022 at 11:29 AM Arun Vinod  wrote:

> Hi Adam,
>
> Thanks for reviewing the long output.
>
> Like you said, it makes total sense now since the first mon and mgr are
> created by cephamd bootstrap and the rest of the dameons by the mgr module.
>
> But, even if I gave --image flag with bootstrap the daemons created by mgr
> module are using the daemon-base image, in our case its '
> docker.io/ceph/daemon-base:latest-pacific-devel'.
> Which I guess is because, mgr daemon takes into consideration the
> configuration parameter 'container_image', whose default value is '
> docker.io/ceph/daemon-base:latest-pacific-devel'.
>
> What we guess is even if we provide --image flag in cephadm bootstrap,
> cephadm is not updating the variable container_image with this value.
> Hence, all the remaining daemons are getting created using
> daemon-base image.
>
> Below is the value of config 'container_image' after bootstrapping with
> --image flag provided.
>
> [root@hcictrl01 stack_orchestrator]# ceph-conf -D | grep -i
> container_image
> container_image = docker.io/ceph/daemon-base:latest-pacific-devel
>
> However, one workaround is to provide this config in the initial bootstrap
> config file and present it to the cepham bootstrap using the flag --config,
> which updates the image name and all the daemons are getting created with
> the same image.
>
> Also, regarding the non-uniform behaviour of the first mon even if created
> using the same image is quite surprising. I double checked the
> configuration of all mon, and could not find a major difference between
> first and remaining mons. I tried to reconfigt the first mon which ended up
> in the same corner. Howe

[ceph-users] Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

2022-02-03 Thread Arun Vinod
Hi Adam,

Thanks for the update. In that case this looks like a bug like you
mentioned.

Here are the contents of the config file used for bootstrapping.

[global]

osd pool default size = 2

osd pool default min size = 1

osd pool default pg num = 8

osd pool default pgp num = 8

osd recovery delay start = 60

osd memory target = 1610612736

osd failsafe full ratio = 1.0

mon pg warn max object skew = 20

mon osd nearfull ratio = 0.8

mon osd backfillfull ratio = 0.87

mon osd full ratio = 0.95

mon max pg per osd = 400

debug asok = 0/0

debug auth = 0/0

debug buffer = 0/0

debug client = 0/0

debug context = 0/0

debug crush = 0/0
debug filer = 0/0
debug filestore = 0/0
debug finisher = 0/0
debug heartbeatmap = 0/0
debug journal = 0/0
debug journaler = 0/0
debug lockdep = 0/0
debug mds = 0/0
debug mds balancer = 0/0
debug mds locker = 0/0
debug mds log = 0/0
debug mds log expire = 0/0
debug mds migrator = 0/0
debug mon = 0/0
debug monc = 0/0
debug ms = 0/0
debug objclass = 0/0
debug objectcacher = 0/0
debug objecter = 0/0
debug optracker = 0/0
debug osd = 0/0
debug paxos = 0/0
debug perfcounter = 0/0
debug rados = 0/0
debug rbd = 0/0
debug rgw = 0/0
debug throttle = 0/0
debug timer = 0/0
debug tp = 0/0
[osd]
bluestore compression mode = passive
[mon]
mon osd allow primary affinity = true
mon allow pool delete = true
[client]
rbd cache = true
rbd cache writethrough until flush = true
rbd concurrent management ops = 20
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/ceph/client.$pid.log

Output of bootstrap command:

[root@hcictrl01 stack_orchestrator]# sudo cephadm --image
quay.io/ceph/ceph:v16.2.7 bootstrap --skip-monitoring-stack --mon-ip
10.175.41.11 --cluster-network 10.175.42.0/24 --ssh-user ceph_deploy
--ssh-private-key /home/ceph_deploy/.ssh/id_rsa
--ssh-public-key /home/ceph_deploy/.ssh/id_rsa.pub
--config /home/ceph_deploy/ceph_bootstrap/ceph.conf
--initial-dashboard-password J959ABCFRFGE --dashboard-password-noupdate
--no-minimize-config --skip-pull

Verifying podman|docker is present...

Verifying lvm2 is present...

Verifying time synchronization is in place...

Unit chronyd.service is enabled and running

Repeating the final host check...

podman (/bin/podman) version 3.3.1 is present

systemctl is present

lvcreate is present

Unit chronyd.service is enabled and running

Host looks OK

Cluster fsid: dba72000-8525-11ec-b1e7-0015171590ba

Verifying IP 10.175.41.11 port 3300 ...

Verifying IP 10.175.41.11 port 6789 ...

Mon IP `10.175.41.11` is in CIDR network `10.175.41.0/24`

Ceph version: ceph version 16.2.7
(dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

Extracting ceph user uid/gid from container image...

Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Setting mon public_network to 10.175.41.0/24
Setting cluster_network to 10.175.42.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Using provided ssh keys...
Adding host hcictrl01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:

 URL: https://hcictrl01.enclouden.com:8443/
User: admin
Password: J959ABCFRFGE

Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:

sudo /sbin/cephadm shell --fsid
dba72000-8525-11ec-b1e7-0015171590ba -c /etc/ceph/ceph.conf -k
/etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/pacific/mgr/telemetry/

Bootstrap complete.


List of containers created after bootstrap:

[root@hcictrl01 stack_orchestrator]# podman ps
CONTAINER ID  IMAGE                      COMMAND               CREATED        STATUS            PORTS  NAMES
c7bfdf3b5831  quay.io/ceph/ceph:v16.2.7  -n mon.hcictrl01 ...  7 minutes ago  Up 7 minutes ago         ceph-dba72000-8525-11ec-b1e7-0015171590ba-mon-hcictrl01
67c1e6f2ff1f  quay.io/ceph/ceph:v16.2.7  -n mgr.hcictrl01      7 minutes ago  Up 7 minutes ago         ceph-db

[ceph-users] The Return of Ceph Planet

2022-02-03 Thread Mike Perez
Hi everyone,

Ceph Planet was a collection of imported blog posts companies or
individuals using Ceph. I'm happy to announce that we're bringing this
back to the Ceph website.

If you have a Ceph category feed you would like added; please email me
your RSS feed URL. The URL should contain a ceph category as we don't
want to be pulling in any unrelated Ceph posts. Thanks!

--
Mike Perez

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

2022-02-03 Thread Adam King
Hi Arun,

A couple of questions. First, where did you pull your cephadm binary from
(the python file used for bootstrap)? I know we swapped everything over to
quay quite a bit ago (
https://github.com/ceph/ceph/commit/b291aa47825ece9fcfe9831546e1d8355b3202e4)
so I want to make sure that if I try to recreate this I have the same version
of the binary. Secondly, I'm curious what your reason is for supplying the
"--no-minimize-config" flag. Were you getting some unwanted behavior
without it?

I'll see if I can figure out what's going on here. Again, I've never seen
this before so it might be difficult for me to recreate but I'll see what I
can do. In the meantime, hopefully using the upgrade for a workaround is at
least okay for you.
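
(For reference, explicitly switching images later is just something like

  ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7

which is what I meant by using upgrade as the workaround.)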

- Adam King

On Thu, Feb 3, 2022 at 2:32 PM Arun Vinod  wrote:

> Hi Adam,
>
> Thanks for the update. In that case this looks like a bug like you
> mentioned.
>
> Here are the contents of the config file used for bootstrapping.
>
> [global]
>
> osd pool default size = 2
>
> osd pool default min size = 1
>
> osd pool default pg num = 8
>
> osd pool default pgp num = 8
>
> osd recovery delay start = 60
>
> osd memory target = 1610612736
>
> osd failsafe full ratio = 1.0
>
> mon pg warn max object skew = 20
>
> mon osd nearfull ratio = 0.8
>
> mon osd backfillfull ratio = 0.87
>
> mon osd full ratio = 0.95
>
> mon max pg per osd = 400
>
> debug asok = 0/0
>
> debug auth = 0/0
>
> debug buffer = 0/0
>
> debug client = 0/0
>
> debug context = 0/0
>
> debug crush = 0/0
> debug filer = 0/0
> debug filestore = 0/0
> debug finisher = 0/0
> debug heartbeatmap = 0/0
> debug journal = 0/0
> debug journaler = 0/0
> debug lockdep = 0/0
> debug mds = 0/0
> debug mds balancer = 0/0
> debug mds locker = 0/0
> debug mds log = 0/0
> debug mds log expire = 0/0
> debug mds migrator = 0/0
> debug mon = 0/0
> debug monc = 0/0
> debug ms = 0/0
> debug objclass = 0/0
> debug objectcacher = 0/0
> debug objecter = 0/0
> debug optracker = 0/0
> debug osd = 0/0
> debug paxos = 0/0
> debug perfcounter = 0/0
> debug rados = 0/0
> debug rbd = 0/0
> debug rgw = 0/0
> debug throttle = 0/0
> debug timer = 0/0
> debug tp = 0/0
> [osd]
> bluestore compression mode = passive
> [mon]
> mon osd allow primary affinity = true
> mon allow pool delete = true
> [client]
> rbd cache = true
> rbd cache writethrough until flush = true
> rbd concurrent management ops = 20
> admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
> log file = /var/log/ceph/client.$pid.log
>
> Output of bootstrap command:
>
> [root@hcictrl01 stack_orchestrator]# sudo cephadm --image
> quay.io/ceph/ceph:v16.2.7 bootstrap --skip-monitoring-stack --mon-ip
> 10.175.41.11 --cluster-network 10.175.42.0/24 --ssh-user ceph_deploy
> --ssh-private-key /home/ceph_deploy/.ssh/id_rsa
> --ssh-public-key /home/ceph_deploy/.ssh/id_rsa.pub
> --config /home/ceph_deploy/ceph_bootstrap/ceph.conf
> --initial-dashboard-password J959ABCFRFGE --dashboard-password-noupdate
> --no-minimize-config --skip-pull
>
> Verifying podman|docker is present...
>
> Verifying lvm2 is present...
>
> Verifying time synchronization is in place...
>
> Unit chronyd.service is enabled and running
>
> Repeating the final host check...
>
> podman (/bin/podman) version 3.3.1 is present
>
> systemctl is present
>
> lvcreate is present
>
> Unit chronyd.service is enabled and running
>
> Host looks OK
>
> Cluster fsid: dba72000-8525-11ec-b1e7-0015171590ba
>
> Verifying IP 10.175.41.11 port 3300 ...
>
> Verifying IP 10.175.41.11 port 6789 ...
>
> Mon IP `10.175.41.11` is in CIDR network `10.175.41.0/24`
> 
>
> Ceph version: ceph version 16.2.7
> (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
>
> Extracting ceph user uid/gid from container image...
>
> Creating initial keys...
> Creating initial monmap...
> Creating mon...
> Waiting for mon to start...
> Waiting for mon...
> mon is available
> Setting mon public_network to 10.175.41.0/24
> Setting cluster_network to 10.175.42.0/24
> Wrote config to /etc/ceph/ceph.conf
> Wrote keyring to /etc/ceph/ceph.client.admin.keyring
> Creating mgr...
> Verifying port 9283 ...
> Waiting for mgr to start...
> Waiting for mgr...
> mgr not available, waiting (1/15)...
> mgr not available, waiting (2/15)...
> mgr not available, waiting (3/15)...
> mgr not available, waiting (4/15)...
> mgr is available
> Enabling cephadm module...
> Waiting for the mgr to restart...
> Waiting for mgr epoch 5...
> mgr epoch 5 is available
> Setting orchestrator backend to cephadm...
> Using provided ssh keys...
> Adding host hcictrl01...
> Deploying mon service with default placement...
> Deploying mgr service with default placement...
> Deploying crash service with default placement...
> Enabling the dashboard module...
> Waiting for the mgr to restart...
> Waiting for mgr epoch 9...
> mgr epoch 9 is available
> Generating a dashboard self-signed certificate...
> Creating initial admin user...
> Fetching dashboard port number...
> C

[ceph-users] File access issue with root_squashed fs client

2022-02-03 Thread Nicola Mori

Dear Ceph users,

I'm facing a problem when setting the root_squash cap for my fs client:

  # ceph auth get client.wizardfs
  [client.wizardfs]
  key = . . .
  caps mds = "allow rw fsname=wizardfs root_squash"
  caps mon = "allow r fsname=wizardfs"
  caps osd = "allow rw tag cephfs data=wizardfs"

When I create a test file with one client machine mounting the fs 
everything works:


  [07:23 mori@farm mori]$ ls -l test.txt
  -rw-r--r-- 1 mori lsfuser 21 Feb  4 07:23 test.txt

  [07:23 mori@farm mori]$ cat test.txt
  Content of test file

But when accessing it from another machine using the same client, the file
is empty:


  [07:34 mori@farm-34 mori]$ ls -l test.txt
  -rw-r--r-- 1 mori lsfuser 0 Feb  4 07:23 test.txt

  [07:34 mori@farm-34 mori]$ cat test.txt

Removing root_squash from the mds client capabilities makes everything 
work as expected.


Both the machines used for the test are just fs clients running CentOS 7 
and Ceph Octopus 15.2.15, and not part of the cluster (which in turn 
consists of RockyLinux 8 machines managed by cephadm and running Ceph 
Pacific 16.2.7). Both clients mount the fs via autofs with the kernel
driver using this autofs configuration:


  ceph-test -fstype=ceph,name=wizardfs,noatime 172.16.253.2,172.16.253.1:/

I guess this could be a configuration problem but I can't figure out 
what I might be doing wrong here. So I'd greatly appreciate any help or 
suggestion.


Nicola Mori
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io