[ceph-users] Re: MDS Performance and PG/PGP value

2022-10-13 Thread Yoann Moulin
eed it. The workloads are unpredictable. Thanks for your help. Best regards, -- Yoann Moulin EPFL IC-IT

[ceph-users] MDS Performance and PG/PGP value

2022-10-05 Thread Yoann Moulin
crubbing+deep 1 active+clean+scrubbing Is there any mechanism to increase the number of PGs automatically in such a situation, or is this something to do manually? Is 256 a good value in our case? We have 80 TB of data with more than 300M files. Thank you for your help, -- Yoann M
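
For context, a hedged sketch of the two usual ways to grow a pool's PG count on Nautilus and later; the pool name cephfs_metadata is an assumption, not taken from the excerpt:

  # let Ceph size pg_num itself (PG autoscaler, available since Nautilus)
  ceph osd pool set cephfs_metadata pg_autoscale_mode on

  # or raise it manually and let the cluster split PGs and rebalance
  ceph osd pool set cephfs_metadata pg_num 256
  ceph osd pool set cephfs_metadata pgp_num 256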

[ceph-users] Re: Cephfs IO halt on Node failure

2020-05-25 Thread Yoann Moulin
mudhan P :
>>>>>>> Hi,
>>>>>>>
>>>>>>> I am using a ceph Nautilus cluster with the below configuration.
>>>>>>>
>>>>>>> 3 nodes (Ubuntu 18.04), each has 12 OSDs, and mds, mon

[ceph-users] Re: librados : handle_auth_bad_method server allowed_methods [2] but i only support [2,1]

2020-04-02 Thread Yoann Moulin
ey = XX==
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"
caps osd = "allow *"
But if I use client.admin user, it works.
[client.admin]
key =

[ceph-users] librados : handle_auth_bad_method server allowed_methods [2] but i only support [2,1]

2020-04-02 Thread Yoann Moulin
700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
2020-04-02 12:44:59.900 7fd78a6a2700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2,1]
failed to fetch mon config (--no-mon-conf
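
When a non-admin keyring triggers this error while client.admin works, the usual things to re-check on the client side are the keyring file it actually reads and the cephx settings. A hedged sketch; the user name myuser and the paths are illustrative, not from the thread:

  # export the key for the user and point the client at it explicitly
  ceph auth get client.myuser -o /etc/ceph/ceph.client.myuser.keyring
  ceph --id myuser --keyring /etc/ceph/ceph.client.myuser.keyring -s

  # ceph.conf on the client should keep cephx enabled, e.g.:
  #   auth_client_required = cephx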

[ceph-users] Nautilus cephfs usage

2020-03-10 Thread Yoann Moulin
67 MiB 186 102 MiB 0 63 TiB N/A N/A 186 0 B 0 B
device_health_metrics 12 1.2 MiB 145 1.2 MiB 0 63 TiB
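
For reference, the commands that typically produce this kind of per-pool usage breakdown on Nautilus (a sketch; the thread's exact invocation is not shown in the excerpt):

  ceph df detail      # per-pool STORED / OBJECTS / USED / %USED / MAX AVAIL columns
  ceph fs status      # data and metadata pool usage as seen by CephFS
  rados df            # raw per-pool object and byte counts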

[ceph-users] Re: Monitors' election failed on VMs : e4 handle_auth_request failed to assign global_id

2020-03-10 Thread Yoann Moulin
st, Yoann
> On Tue, 10 Mar 2020, Paul Emmerich wrote:
>
>> On Tue, Mar 10, 2020 at 8:18 AM Yoann Moulin wrote:
>>> I have added 3 new monitors on 3 VMs and I'd like to stop the 3 old
>>> monitor daemons. But as soon as I stop the 3rd old monitor, the cluste
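
The usual sequence for retiring old monitors once the new ones are in quorum looks roughly like this; mon IDs are illustrative, and this is a sketch rather than the procedure discussed in the thread:

  # confirm all mons are currently in quorum before touching anything
  ceph quorum_status --format json-pretty

  # stop one old mon, wait for the cluster to settle, then remove it from the monmap
  systemctl stop ceph-mon@oldmon1
  ceph mon remove oldmon1

  # repeat for the remaining old mons one at a time, checking quorum in between,
  # and remember to update mon_host in ceph.conf on the clients afterwards
  ceph -s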

[ceph-users] Monitors' election failed on VMs : e4 handle_auth_request failed to assign global_id

2020-03-10 Thread Yoann Moulin
.icvm0017@3(probing) e4 handle_auth_request failed to assign global_id
Did I miss something? In attachment: some logs and ceph.conf. Thanks for your help. Best, -- Yoann Moulin EPFL IC-IT
# Please do not change this file directly since it is managed by Ansible and will be overwr

[ceph-users] cephfs_metadata: Large omap object found

2020-02-03 Thread Yoann Moulin
artemis@icitsrv5:~$ rados -p cephfs_metadata listxattr mds3_openfiles.0
artemis@icitsrv5:~$ rados -p cephfs_metadata getomapheader mds3_openfiles.0
header (42 bytes) :
13 00 00 00 63 65 70 68 20 66 73 20 76 6f 6c 75 |ceph fs volu|
0010 6d 65 20
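
For a "large omap object" warning on an MDS openfiles object, the usual way to size the problem and confirm what tripped it is something like the following; the object name comes from the excerpt, while the option names are the stock warning thresholds and are given here only as a sketch:

  # how many omap keys the flagged object actually holds
  rados -p cephfs_metadata listomapkeys mds3_openfiles.0 | wc -l

  # which PG/object raised the warning
  ceph health detail | grep -i "large omap"

  # the thresholds the OSDs compare against during deep scrub
  ceph config get osd osd_deep_scrub_large_omap_object_key_threshold
  ceph config get osd osd_deep_scrub_large_omap_object_size_threshold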

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-29 Thread Yoann Moulin
"data": "cephfs" } } "cephfs_metadata" { "cephfs": { "metadata": "cephfs" } } Thanks a lot, that has fixed my issue! Best, -- Yoann Moulin EPFL IC-IT ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-24 Thread Yoann Moulin
On 23.01.20 at 15:51, Ilya Dryomov wrote: On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote: Hello, On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook stable-4.0, I have an issue with cephfs. I can create a folder, I can create empty files, but cannot write data on

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-23 Thread Yoann Moulin
osd] allow class-read object_prefix rbd_children, allow rw pool=cephfs_data I opened a bug on the tracker: https://tracker.ceph.com/issues/43761 This is independent of the replication type of cephfs_data. Yup, this is what I understood. Yoann ____ From: Yoann

[ceph-users] Re: cephfs : write error: Operation not permitted

2020-01-23 Thread Yoann Moulin
clusters, not the same hw config (no SSD on the dslab2020 cluster) and cephfs_data on an 8+3 EC pool on Artemis (see the end of artemis.txt). In the attachment, I put the results of the commands I ran on both clusters, which do not show the same behavior. Best, Yoann

[ceph-users] cephfs : write error: Operation not permitted

2020-01-21 Thread Yoann Moulin
d = "allow rw tag cephfs pool=cephfs_data " I don't know where to look to get more information about that issue. Can anyone help me? Thanks Best regards, -- Yoann Moulin EPFL IC-IT
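
For caps of that shape, the usual way to generate and inspect them (rather than hand-editing the keyring) is ceph fs authorize; the client name and path below are illustrative, not from the thread, and the generated OSD cap is of the "allow rw tag cephfs data=<fsname>" form:

  # create a client with CephFS-style caps for the filesystem named "cephfs"
  ceph fs authorize cephfs client.k8s / rw

  # compare with what the failing client actually has
  ceph auth get client.k8s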

[ceph-users] Re: Full Flash NVME Cluster recommendation

2019-11-18 Thread Yoann Moulin
725b) look fast to me, don't they?
> What is your IO profile? Read/write split?
The IO profile will be mixed; this cephfs will be used as scratch storage for compute processes in our K8s clusters.
> It may be the case that EC is not the best fit for the workload you are
> tryin

[ceph-users] Re: Full Flash NVME Cluster recommendation

2019-11-18 Thread Yoann Moulin
40G NVME: Dell 6.4TB NVME PCI-E Drive (Samsung PM1725b), only 1 slot available
Each server is used in a k8s cluster to give access to GPUs and CPUs for X-learning labs. Ceph has to share the CPU and memory with the compute K8s cluster.
> 128GB of RAM per node ought to do, if you have less than 14 filesystems?
I plan to have only 1 filesystem. Thanks for all this useful information. Best regards, -- Yoann Moulin EPFL IC-IT

[ceph-users] Full Flash NVME Cluster recommendation

2019-11-15 Thread Yoann Moulin
up or mistakes to avoid? I use ceph-ansible to deploy all my clusters. Best regards, -- Yoann Moulin EPFL IC-IT

[ceph-users] Re: download.ceph.com repository changes

2019-09-18 Thread Yoann Moulin
the
>> repository, which would require rebuilding the repository again, so
>> that the metadata is updated with the difference in the binaries.
>>
>> Caveats: time intensive process, almost like cutting a new release,
>> which takes about a day (and sometimes longer). Error prone since the
>> process wouldn't be the same (one off, just when a version needs to be
>> removed).
>>
>> Pros: all urls for download.ceph.com and its structure are kept the same.
-- Yoann Moulin EPFL IC-IT

[ceph-users] Re: 14.2.4 Packages Avaliable

2019-09-17 Thread Yoann Moulin
reviously.
> 14.2.4 is a bug-fix release for https://tracker.ceph.com/issues/41660
>
> There are no other changes besides this fix
My reaction was not about this specific release but about this sentence: « Never install packages until there is an announcement. » And also this one: «

[ceph-users] Re: 14.2.4 Packages Avaliable

2019-09-17 Thread Yoann Moulin
ot ready to publish? I plan to install a new cluster with ceph-ansible; I don't pay attention to the release number as long as it's the latest package available on the official stable repo. Even for a short period, « almost ready but not completely tested » packages can really h

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-04 Thread Yoann Moulin
>>>>> Tue, 3 Sep 2019 11:28:20 +0200
>>>>> Yoann Moulin ==> ceph-users@ceph.io :
>>>>>> Is it better to put all WAL on one SSD and all DBs on the other one? Or
>>>>>> put WAL and DB of the first 5 OSDs on the first SSD an

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-04 Thread Yoann Moulin
On 04/09/2019 at 11:01, Lars Täuber wrote:
> Wed, 4 Sep 2019 10:32:56 +0200
> Yoann Moulin ==> ceph-users@ceph.io :
>> Hello,
>>
>>> Tue, 3 Sep 2019 11:28:20 +0200
>>> Yoann Moulin ==> ceph-users@ceph.io :
>>>> Is it better to put all WAL

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-04 Thread Yoann Moulin
Hello,
> Tue, 3 Sep 2019 11:28:20 +0200
> Yoann Moulin ==> ceph-users@ceph.io :
>> Is it better to put all WAL on one SSD and all DBs on the other one? Or put
>> WAL and DB of the first 5 OSDs on the first SSD and the 5 others on
>> the second one.
>
> I

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-03 Thread Yoann Moulin
not require a later recovery. It is mostly a read-only cluster to distribute public datasets over S3 inside our network; it is fine for me if write operations are not fully protected for a couple of days. All write operations are managed by us to update datasets. But as mentioned above, 8+3 m

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-03 Thread Yoann Moulin
the next versions allow accessing data with the EC numbers. I think it is still possible to set min_size = k in Nautilus but it is not recommended. Best, Yoann
> -Original message-
> From: Yoann Moulin
> Sent: Tuesday, 3 September 2019 11:28
> To: ceph-users@ceph.io
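
For an 8+3 EC pool the recommended floor is min_size = k+1 = 9; checking and, if one accepts the risk, lowering it looks like this. The pool name is illustrative, and this is a sketch rather than anything taken from the thread:

  # inspect the current value on the EC data pool
  ceph osd pool get cephfs_data min_size

  # k+1 (= 9 for an 8+3 profile) is the recommended setting; dropping to k
  # means accepting writes with no redundancy margin left during recovery
  ceph osd pool set cephfs_data min_size 9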

[ceph-users] Best osd scenario + ansible config?

2019-09-03 Thread Yoann Moulin
disk in a mixed case? It looks like I must configure LVM before running the playbook but I am not sure if I missed something. Can wal_vg and db_vg be identical (one VG per SSD shared with multiple OSDs)? Thanks for your help. Best regards, -- Yoann Moulin EPFL
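
A hedged sketch of the LVM pre-work that ceph-ansible's lvm_volumes scenario expects when one SSD carries the DB/WAL of several OSDs; device names, VG/LV names and sizes are illustrative only:

  # one VG on the shared SSD, then one small DB LV per OSD it will serve
  vgcreate vg_ssd0 /dev/nvme0n1
  for i in 0 1 2 3 4; do
    lvcreate -L 60G -n db_osd$i vg_ssd0
  done

  # data LVs stay on the HDDs, e.g. one VG/LV per spinning disk
  vgcreate vg_hdd0 /dev/sda
  lvcreate -l 100%FREE -n data_osd0 vg_hdd0

The lvm_volumes entries then reference those names via the data/data_vg and db/db_vg keys; when no separate wal/wal_vg is listed, the WAL lives alongside the DB on the same LV.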