[ceph-users] Re: samba cephfs

2021-07-12 Thread Jan-Philipp Litza
That package probably contains the vfs_ceph module for Samba. However,
further down, the same page says:

> The above share configuration uses the Linux kernel CephFS client, which is 
> recommended for performance reasons.
> As an alternative, the Samba vfs_ceph module can also be used to communicate 
> with the Ceph cluster.

So when you use a kernel mount, you shouldn't need the package at all.
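For context, the two variants differ only in the share definition; a rough
sketch (the share name, paths and cephx user are made up, and the vfs_ceph
options are the ones documented in vfs_ceph(8)):

  # Either: kernel-mount variant. CephFS is already mounted at /mnt/cephfs,
  # so this is an ordinary Samba share and needs no extra package.
  [cephfs]
      path = /mnt/cephfs/share
      read only = no

  # Or: vfs_ceph variant. Samba talks to the cluster itself via libcephfs,
  # which is presumably what the samba-ceph package provides.
  [cephfs]
      # with vfs_ceph, path is interpreted relative to the CephFS root
      path = /share
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      kernel share modes = no
      read only = no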


[ceph-users] Re: samba cephfs

2021-07-12 Thread Magnus HAGDORN
We are using SL7 to export our CephFS via Samba to Windows. The
RHEL7/CentOS7/SL7 distros do not come with packages for the Samba CephFS
module. This is one of the reasons why we mount the file system locally using
the kernel CephFS module with the automounter and re-export it using vanilla
Samba. It works a treat.
magnus
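For anyone wanting to reproduce this, the kernel-mount half is just a normal
CephFS mount that vanilla Samba then re-exports; a rough sketch only (monitor
names, the cephx user, the secret file and paths are placeholders, and an
autofs map entry can replace the fstab line):

  # /etc/fstab -- kernel CephFS mount
  mon1,mon2,mon3:/  /mnt/cephfs  ceph  name=samba,secretfile=/etc/ceph/samba.secret,noatime,_netdev  0  0

  # /etc/samba/smb.conf -- plain share on top of the mount, no vfs_ceph needed
  [cephfs]
      path = /mnt/cephfs
      read only = no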


On Sun, 2021-07-11 at 22:34 +, Marc wrote:
>
> I wanted to have a look at using cephfs with samba, and found this
> suse page[1]
>
> But where does this samba-ceph package come from?
> Is there a manual for rhel?
>
>
>
> [1]
> https://documentation.suse.com/ses/6/html/ses-all/cha-ses-cifs.html
>
>
>
>
>


[ceph-users] Single ceph client usage with multiple ceph cluster

2021-07-12 Thread Ramanathan S
Hi Cephers,

We have two Ceph clusters in our lab. We are experimenting with using a
single server as a client for the two clusters. Can we use the same client
server to store the keyrings for the different clusters in the ceph.conf
file? Another query: can we use a single client with multiple VMs on it for
two different clusters?

Regards,
Ram.


[ceph-users] Installing ceph Octopus in centos 7

2021-07-12 Thread Michel Niyoyita
Dear Ceph users,

I would like to ask if it is possible to deploy Ceph Octopus on CentOS 7.

Waiting for your best reply.

Michel


[ceph-users] Re: samba cephfs

2021-07-12 Thread Marc
Oh thanks Magnus for clearing this up. I thought that there was some other 
fancy config.



Sent: Monday, 12 July 2021 9:40 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: samba cephfs

We are using SL7 to export our CephFS via Samba to Windows. The
RHEL7/CentOS7/SL7 distros do not come with packages for the Samba CephFS
module. This is one of the reasons why we mount the file system locally using
the kernel CephFS module with the automounter and re-export it using vanilla
Samba. It works a treat.
magnus


On Sun, 2021-07-11 at 22:34 +, Marc wrote:
>
> I wanted to have a look at using cephfs with samba, and found this
> suse page[1]
>
> But where does this samba-ceph package come from?
> Is there a manual for rhel?
>
>
>
> [1]
> https://documentation.suse.com/ses/6/html/ses-all/cha-ses-cifs.html
>
>
>
>
>


[ceph-users] Re: CEPHADM_HOST_CHECK_FAILED after reboot of nodes

2021-07-12 Thread mabi
I have now opened a bug report, as this must be a bug in cephadm:

https://tracker.ceph.com/issues/51629

Hopefully someone has time to look into that.

Thank you in advance.

‐‐‐ Original Message ‐‐‐

On Friday, July 9th, 2021 at 8:11 AM, mabi  wrote:

> Hello,
>
> I rebooted all 8 nodes of my Octopus 15.2.13 cluster, which runs on Ubuntu
> 20.04 LTS with cephadm, and since then cephadm sees 7 nodes as unreachable, as
> you can see below:
>
> [WRN] CEPHADM_HOST_CHECK_FAILED: 7 hosts fail cephadm check
>
> host ceph1d failed check: Can't communicate with remote host `ceph1d`, 
> possibly because python3 is not installed there: [Errno 32] Broken pipe
>
> host ceph1g failed check: Can't communicate with remote host `ceph1g`, 
> possibly because python3 is not installed there: [Errno 32] Broken pipe
>
> host ceph1c failed check: Can't communicate with remote host `ceph1c`, 
> possibly because python3 is not installed there: [Errno 32] Broken pipe
>
> host ceph1e failed check: Can't communicate with remote host `ceph1e`, 
> possibly because python3 is not installed there: [Errno 32] Broken pipe
>
> host ceph1f failed check: Can't communicate with remote host `ceph1f`, 
> possibly because python3 is not installed there: [Errno 32] Broken pipe
>
> host ceph1b failed check: Can't communicate with remote host `ceph1b`, 
> possibly because python3 is not installed there: [Errno 32] Broken pipe
>
> host ceph1h failed check: Failed to connect to ceph1h (ceph1h).
>
> Please make sure that the host is reachable and accepts connections using the 
> cephadm SSH key
>
> To add the cephadm SSH key to the host:
>
> > ceph cephadm get-pub-key > ~/ceph.pub
> >
> > ssh-copy-id -f -i ~/ceph.pub root@ceph1h
>
> To check that the host is reachable:
>
> > ceph cephadm get-ssh-config > ssh_config
> >
> > ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_private_key
> >
> > chmod 0600 ~/cephadm_private_key
> >
> > ssh -F ssh_config -i ~/cephadm_private_key root@ceph1h
>
> I checked and SSH is working and python3 is installed on all nodes.
>
> As you can see here "ceph orch host ls" also shows nodes as offline:
>
> ceph orch host ls
> HOST    ADDR    LABELS      STATUS
> ceph1a  ceph1a  _admin mon
> ceph1b  ceph1b  _admin mon  Offline
> ceph1c  ceph1c  _admin mon  Offline
> ceph1d  ceph1d              Offline
> ceph1e  ceph1e              Offline
> ceph1f  ceph1f              Offline
> ceph1g  ceph1g  mds         Offline
> ceph1h  ceph1h  mds         Offline
>
> Does anyone have a clue how I can fix that? cephadm seems to be broken...
>
> Thank you for your help.
>
> Regards,
>
> Mabi
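For what it's worth, a common first step in this situation (only a sketch, not
a confirmed fix; ceph1d stands in for any of the hosts above, and "ceph cephadm
check-host" assumes your release ships that command) is to re-run the host
check by hand and fail over the active mgr so the cephadm module re-opens its
SSH connections:

  ceph cephadm check-host ceph1d   # re-runs the check behind the warning
  ceph mgr fail                    # restart the active mgr; cephadm reconnects
  ceph health detail               # see whether the hosts come back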


[ceph-users] resharding and s3cmd empty listing

2021-07-12 Thread Jean-Sebastien Landry
Hi everyone, something strange here with bucket resharding vs. bucket 
listing.


I have a bucket with about 1M objects in it. I increased the bucket quota
from 1M to 2M, and manually resharded from 11 to 23 shards (dynamic
resharding is disabled).
Since then, the user can't list objects in some paths. The objects are
there, but the client can't list them.

Using this example: s3://bucket/dir1/dir2/dir3/dir4

s3cmd can't list the objects in dir2 and dir4, but rclone works and lists
all objects.

s3cmd doesn't give any errors; it just lists the path with no objects in it.

I reshard to 1, everything is ok, s3cmd can list all objects in all paths.
I reshard to 11, s3cmd works with dir2 but can't list the objects in dir4.
I reshard to 13, s3cmd can't list dir2 and dir4.
I reshard to 7, s3cmd works with all the paths.

s3cmd always works with dir1 and dir3, regardless of the shard number; the
problem is just with dir2 and dir4.
s3cmd, S3 Browser and "aws s3 ls" are problematic, while "aws s3api
list-objects" and rclone always work.


I did a "bucket check --fix --check-objects", scrub/deep-scrub of the 
index pgs, "bi list" looks good to me, charset & etags looks good too, 
s3cmd in debug mode doesn't report any error, no xml error, no http-4xx 
everything is http-200. I can't find anything suspicious in the 
haproxy/beast syslog. resharding process didn't give any error, 
everything is HEALTH_OK.
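For reference, the operations above correspond roughly to these commands (a
sketch only; the bucket name is a placeholder):

  radosgw-admin bucket reshard --bucket=mybucket --num-shards=23
  radosgw-admin bucket check --fix --check-objects --bucket=mybucket
  radosgw-admin bi list --bucket=mybucket > bi.json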


Maybe the next step is to look for an s3cmd/python bug, but I'm curious whether
someone here has ever experienced something like this.

Any thoughts are welcome :-)
Thanks!


[ceph-users] RBD clone to change data pool

2021-07-12 Thread Gilles Mocellin
Hello Cephers,

I'm disappointed. I thought I'd found a good way to migrate from one data pool
to another without too much downtime.

I use XFS on RBD, via krbd, to store backups (see another thread): XFS with
reflink and crc, which accelerates Veeam merges.
Also, I want to migrate from an EC k=3,m=2 pool to an EC k=8,m=2 pool on 13
nodes / 130 OSDs.

I cannot use rbd migration without downtime because of krbd. I tried it anyway,
and it was really slow.

But I saw the layering capability offered by cloning, and thought of this
method (sketched in commands below):
- unmount the filesystem / unmap the rbd device
- take a snapshot / protect it
- clone the snapshot to a new image, changing the data pool
- rename the old image, then rename the new image so it has the name of the
  original one
- map the new image and mount the filesystem

This is fast, because it's only COW.
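In rbd CLI terms, the method above would look roughly like this (a sketch only;
pool and image names are placeholders, ec_k8m2 is assumed to be the new EC data
pool, and the device and mount point will differ):

  umount /backups
  rbd unmap rbd/backup01
  rbd snap create rbd/backup01@migrate
  rbd snap protect rbd/backup01@migrate
  rbd clone --data-pool ec_k8m2 rbd/backup01@migrate rbd/backup01-new
  rbd rename rbd/backup01 rbd/backup01-old
  rbd rename rbd/backup01-new rbd/backup01
  rbd map rbd/backup01
  mount /dev/rbd0 /backups
  # later, once most of the old data has expired:
  rbd flatten rbd/backup01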

Starting from here, I thought that all new writes would go to the new data
pool, and while waiting for the backup retention to expire, I could migrate
without doing anything. To finalize, once there is not too much data left to
move, I would do an rbd flatten and then be able to delete the source image in
the old data pool.

But...

I have problems.
On one image, my backups constantly fail, or a retry is triggered. There is no
clear message, but I/O seems slow, perhaps with timeouts.

On another image, I got XFS kernel errors (on metadata) and the filesystem shut
down during the night (and during the backups).

Jul 10 01:32:28 fidcl-mrs4-vbr-repo-02 kernel: [6108174.156744] XFS (rbd1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x8a2d4a4f8 len 8 error 5
Jul 10 01:32:47 fidcl-mrs4-vbr-repo-02 kernel: [6108193.510924] XFS (rbd1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x8a2d4a4f8 len 8 error 5
Jul 10 01:32:58 fidcl-mrs4-vbr-repo-02 kernel: [6108204.696929] XFS (rbd1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x8a2d4a4f8 len 8 error 5
Jul 10 01:33:13 fidcl-mrs4-vbr-repo-02 kernel: [6108219.857228] XFS (rbd1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x8a2d4a4f8 len 8 error 5

I unmounted it and tried to remount it without success. xfs_repair tells me I
have to mount it to replay the journal, and if I cannot, to ignore the journal
and... lose data.

One last unexpected thing: on one image that is still mounted and seems to
work, I tried to launch a flatten operation, to see how long it would take,
whether it would manage to finish, and whether my backups would do better on
it.

But here is the output I get, though the operation seems to continue...

Image flatten: 2% complete...2021-07-12T23:22:55.998+0200 7f511a7fc700 -1 librbd::operation::FlattenRequest: 0x7f50fc000f20 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 0% complete...2021-07-12T23:23:32.142+0200 7f5119ffb700 -1 librbd::operation::FlattenRequest: 0x7f50fc0015d0 should_complete: encountered error: (85) Interrupted system call should be restarted
2021-07-12T23:23:47.382+0200 7f5119ffb700 -1 librbd::operation::FlattenRequest: 0x7f50fc0015d0 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 2% complete...2021-07-12T23:23:58.926+0200 7f5119ffb700 -1 librbd::operation::FlattenRequest: 0x7f50fc0015d0 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 0% complete...2021-07-12T23:24:01.318+0200 7f5119ffb700 -1 librbd::operation::FlattenRequest: 0x7f50fc0015d0 should_complete: encountered error: (85) Interrupted system call should be restarted
2021-07-12T23:24:07.422+0200 7f5119ffb700 -1 librbd::operation::FlattenRequest: 0x7f50fc0015d0 should_complete: encountered error: (85) Interrupted system call should be restarted

So either it's a bug, or layering/cloning/flattening is not supposed to work in
my context... Perhaps due to the change of data pool? Or to erasure coding?

I'm now stuck.
I have ~400 TB of data to move from an EC 3+2 pool to an EC 8+2 pool, and I
only see one solution: stopping my backups during the copy, which will take
weeks...

(No, I can't stay on EC 3+2; I've sold my management and my colleagues on the
idea that we'll have nearly 1 PB usable on that cluster.)

Thanks for reading, if you're still there!


[ceph-users] Re: samba cephfs

2021-07-12 Thread Anthony D'Atri
FWIW I’ve corresponded with someone else who has had more success with this 
route than with vfs_ceph, especially when using distributions for which it is 
not prepackaged.

> On Jul 12, 2021, at 7:09 AM, Marc  wrote:
> 
> Oh thanks Magnus for clearing this up. I thought that there was some other 
> fancy config.


[ceph-users] Re: Single ceph client usage with multiple ceph cluster

2021-07-12 Thread Anthony D'Atri

> Hi Cephers,
> 
> We have two Ceph clusters in our lab. We are experimenting with using a
> single server as a client for the two clusters. Can we use the same client
> server to store the keyrings for the different clusters in the ceph.conf
> file?

The keys are usually in their own files in /etc/ceph, not in ceph.conf.  Give 
each cluster’s files unique names — it’s not always obvious, but if referenced 
explicitly the filename is often fairly arbitrary.


> Another query: can we use a single client with multiple VMs on it for two
> different clusters?

Please describe your deployment in a bit more detail. I suspect that the answer
is that the VMs are the clients, not the server they run on.

For example, when you have, say, a QEMU/KVM based system like OpenStack, 
typically each VM has a qemu process that runs on the hypervisor system.  Ceph 
RBD volume attachments are done via librbd and libvirt, independently for each 
attached volume — which may easily be on different clusters.  The XML config 
for each such VM either references a ceph*conf file by name, or has the mon IP 
addresses etc. inlined.
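As an illustration, a libvirt disk definition with the monitor addresses
inlined looks something like this (a sketch; addresses, cephx user, pool/image
name and secret UUID are placeholders):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <auth username='libvirt'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <source protocol='rbd' name='vms/vm1-disk0'>
      <host name='192.168.1.11' port='6789'/>
      <host name='192.168.1.12' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

A second VM, or a second disk on the same VM, can point at a different cluster
simply by listing that cluster's monitors (and secret) instead.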



> 
> Regards,
> Ram.