[ceph-users] Re: Possible data corruption with 14.2.3 and 14.2.4

2019-11-18 Thread Igor Fedotov
Hi Simon, On 11/15/2019 6:02 PM, Simon Ironside wrote: Hi Igor, On 15/11/2019 14:22, Igor Fedotov wrote: Do you mean both standalone DB and (!!) standalone WAL devices/partitions by having SSD DB/WAL? No, 1x combined DB/WAL partition on an SSD and 1x data partition on an HDD per OSD. I.e. c
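
For anyone following this thread who wants to confirm which layout their own OSDs actually use, here is a minimal sketch. It assumes the Nautilus "ceph osd metadata -f json" output and its bluefs_dedicated_db / bluefs_dedicated_wal fields (both assumptions to verify against your cluster):

    import json, subprocess

    # Minimal sketch: report whether each OSD has a dedicated DB and/or WAL device.
    # Field names (bluefs_dedicated_db, bluefs_dedicated_wal) are assumed from
    # Nautilus "ceph osd metadata" output and are stored as "0"/"1" strings.
    meta = json.loads(subprocess.check_output(["ceph", "osd", "metadata", "-f", "json"]))
    for osd in meta:
        print("osd.{} dedicated_db={} dedicated_wal={}".format(
            osd.get("id"),
            osd.get("bluefs_dedicated_db", "?"),
            osd.get("bluefs_dedicated_wal", "?")))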

[ceph-users] Re: Possible data corruption with 14.2.3 and 14.2.4

2019-11-18 Thread Simon Ironside
Hi Igor, Thanks very much for providing all this detail. On 18/11/2019 10:43, Igor Fedotov wrote: - Check how full their DB devices are? For your case it makes sense to check this. And then safely wait for 14.2.5 if it's not full. bluefs.db_used_bytes / bluefs_db_total_bytes is only around 1
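
The ratio quoted here can be read straight off the OSD admin socket; a minimal sketch, assuming a locally running OSD (osd.0 is a placeholder) and the Nautilus perf dump layout with a bluefs section:

    import json, subprocess

    # Sketch: DB utilisation for one OSD from its admin socket perf counters.
    # Assumes "ceph daemon osd.N perf dump" exposes bluefs/db_used_bytes and
    # bluefs/db_total_bytes, as on Nautilus.
    osd = "osd.0"  # placeholder
    dump = json.loads(subprocess.check_output(["ceph", "daemon", osd, "perf", "dump"]))
    bluefs = dump["bluefs"]
    ratio = bluefs["db_used_bytes"] / float(bluefs["db_total_bytes"])
    print("{} DB used: {:.1f}%".format(osd, 100 * ratio))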

[ceph-users] Re: Full Flash NVMe Cluster recommendation

2019-11-18 Thread Darren Soothill
Hi Yoann, So I would not be putting 1 x 6.4TB device in but multiple smaller devices in each node. What CPU are you thinking of using? How many CPUs? If you have 1 PCIe card then it will only be connected to 1 CPU, so will you be able to use all of the performance of multiple CPUs? What networ
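
On the single-card point: which CPU socket a PCIe NVMe device hangs off can be read from sysfs. A small sketch, assuming a Linux host and a controller named nvme0 (placeholder); -1 means the platform reports no NUMA affinity:

    from pathlib import Path

    # Sketch: NUMA node (CPU socket) the NVMe controller's PCIe slot is attached to.
    # The sysfs path is an assumption based on the usual Linux layout.
    dev = "nvme0"  # placeholder controller name
    node = Path("/sys/class/nvme/{}/device/numa_node".format(dev)).read_text().strip()
    print("{} is attached to NUMA node {}".format(dev, node))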

[ceph-users] Re: Full Flash NVMe Cluster recommendation

2019-11-18 Thread Yoann Moulin
Hello Nathan, I'm going to deploy a new cluster soon based on 6.4TB NVMe PCIe cards; I will have only 1 NVMe card per node and 38 nodes. The use case is to offer CephFS volumes for a k8s platform, and I plan to use an EC pool 8+3 for the cephfs_data pool. Do you h
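
For the 8+3 plan across 38 single-NVMe nodes, the basic space and placement arithmetic (using only the figures quoted in the message, not a sizing recommendation) works out as follows:

    # Sketch: capacity/overhead arithmetic for an EC 8+3 cephfs_data pool.
    k, m = 8, 3
    nodes, tb_per_node = 38, 6.4

    raw_tb = nodes * tb_per_node
    usable_tb = raw_tb * k / (k + m)               # space efficiency is k/(k+m)
    print("overhead factor: {:.2f}x raw per usable byte".format((k + m) / k))
    print("usable: {:.1f} TB of {:.1f} TB raw (before full ratios)".format(usable_tb, raw_tb))
    print("hosts needed to place one PG with host failure domain: {}".format(k + m))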

[ceph-users] Re: Full Flash NVMe Cluster recommendation

2019-11-18 Thread Yoann Moulin
Hi Darren, >> I'm going to deploy a new cluster soon based on 6.4TB NVMe PCIe cards, I >> will have only 1 NVMe card per node and 38 nodes. >> >> The use case is to offer CephFS volumes for a k8s platform, I plan to use an >> EC pool 8+3 for the cephfs_data pool. >> >> Do you have rec

[ceph-users] Re: nfs ganesha rgw write errors

2019-11-18 Thread Daniel Gryniewicz
On 11/17/19 1:42 PM, Marc Roos wrote: Hi Daniel, I am able to mount the buckets with your config; however, when I try to write something, my logs get a lot of these errors: svc_732] nfs4_Errno_verbose :NFS4 :CRIT :Error I/O error in nfs4_write_cb converted to NFS4ERR_IO but was set non-retry

[ceph-users] Ssd cache question

2019-11-18 Thread Wesley Peng
Hello. For today's Ceph deployments, is an SSD cache pool a must for performance? Thank you. Regards

[ceph-users] Balancing PGs across OSDs

2019-11-18 Thread Thomas Schneider
Hi, in this blog post I find this statement: "So, in our ideal world so far (assuming equal size OSDs), every OSD now has the same number of PGs assigned." My issue is that across all pools the number of PGs per OSD is not equal. An
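
For context, the "same number of PGs" the blog post describes is just the cluster-wide average of PG replicas per OSD; a toy sketch with placeholder pool numbers (not taken from the cluster in this thread):

    # Sketch: average PG replicas per OSD, given each pool's pg_num and size.
    pools = [
        {"name": "rbd",         "pg_num": 1024, "size": 3},   # replicated, size 3
        {"name": "cephfs_data", "pg_num": 512,  "size": 11},  # EC 8+3 counts as 11 shards
    ]
    num_osds = 100  # placeholder

    pg_instances = sum(p["pg_num"] * p["size"] for p in pools)
    print("average PGs per OSD: {:.1f}".format(pg_instances / num_osds))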

[ceph-users] Ceph manager causing MGR active switch

2019-11-18 Thread Thomas Schneider
Hi, I can see the following error message regularly in the MGR log: 2019-11-18 14:25:48.847 7fd9e6a3a700  0 mgr[dashboard] [18/Nov/2019:14:25:48] ENGINE Error in HTTPServer.tick Traceback (most recent call last):   File "/usr/lib/python2.7/dist-packages/cherrypy/wsgiserver/__init__.py", line 2021, in
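
To correlate the tracebacks with the observed active-MGR switches, it can help to watch which daemon is active and when that last changed; a sketch assuming "ceph mgr dump -f json" exposes the active_name, active_change and standbys fields named below (verify on your release):

    import json, subprocess

    # Sketch: show the active mgr, when it last changed, and the standbys.
    # Field names are assumptions to check against your cluster's output.
    dump = json.loads(subprocess.check_output(["ceph", "mgr", "dump", "-f", "json"]))
    print("active mgr:", dump.get("active_name"), "since", dump.get("active_change"))
    print("standbys:", [s["name"] for s in dump.get("standbys", [])])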

[ceph-users] Re: Ssd cache question

2019-11-18 Thread Janne Johansson
or NVMe. On Mon, 18 Nov 2019 at 14:54, Wesley Peng wrote: > Hello > > For today's Ceph deployments, is an SSD cache pool a must for performance? Thank you. > > Regards

[ceph-users] Re: osdmaps not trimmed until ceph-mon's restarted (if cluster has a down osd)

2019-11-18 Thread Dan van der Ster
On Fri, Nov 15, 2019 at 4:45 PM Joao Eduardo Luis wrote: > > On 19/11/14 11:04AM, Gregory Farnum wrote: > > On Thu, Nov 14, 2019 at 8:14 AM Dan van der Ster > > wrote: > > > > > > Hi Joao, > > > > > > I might have found the reason why several of our clusters (and maybe > > > Bryan's too) are get
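
For anyone checking whether their own mons are sitting on an osdmap backlog, a minimal sketch (assuming "ceph report" exposes osdmap_first_committed / osdmap_last_committed, as on Nautilus); a gap that only grows while an OSD is down matches the behaviour described in this thread:

    import json, subprocess

    # Sketch: how many osdmap epochs the monitors are currently retaining.
    report = json.loads(subprocess.check_output(["ceph", "report"]))
    first = report["osdmap_first_committed"]
    last = report["osdmap_last_committed"]
    print("osdmaps retained: {} (epochs {}..{})".format(last - first, first, last))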

[ceph-users] add debian buster stable support for ceph-deploy

2019-11-18 Thread Jelle de Jong
Hello everybody, Can somebody add support for Debian buster to ceph-deploy: https://tracker.ceph.com/issues/42870 Highly appreciated. Regards, Jelle de Jong

[ceph-users] Re: add debian buster stable support for ceph-deploy

2019-11-18 Thread Paul Emmerich
We maintain an unofficial mirror for Buster packages: https://croit.io/2019/07/07/2019-07-07-debian-mirror Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Mon, Nov

[ceph-users] Re: Balancing PGs across OSDs

2019-11-18 Thread Paul Emmerich
You have way too few PGs in one of the roots. Many OSDs have so few PGs that you should see a lot of health warnings because of it. The other root has a factor 5 difference in disk size which isn't ideal either. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https
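
A quick way to see the spread being described is to sort the per-OSD PG counts; a sketch assuming the "ceph osd df -f json" layout with a nodes list carrying pgs and kb per OSD:

    import json, subprocess

    # Sketch: the ten OSDs carrying the fewest PGs, with their raw size.
    # Field names (nodes, pgs, kb) are assumed from the Nautilus JSON output.
    df = json.loads(subprocess.check_output(["ceph", "osd", "df", "-f", "json"]))
    for o in sorted(df["nodes"], key=lambda n: n["pgs"])[:10]:
        print("osd.{}  pgs={}  size={:.1f} GiB".format(o["id"], o["pgs"], o["kb"] / 1048576.0))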

[ceph-users] Re: add debian buster stable support for ceph-deploy

2019-11-18 Thread Daniel Swarbrick
It looks like Debian's own packaging efforts have finally kicked back into life: https://salsa.debian.org/ceph-team/ceph/commits/debian/14.2.4-1 Perhaps we can expect some official Debian packages again soon.

[ceph-users] Re: add debian buster stable support for ceph-deploy

2019-11-18 Thread Kevin Olbrich
I don't think the Debian teams will package them for buster, as the policy forbids that. Maybe backports (like v10 vs. v12 for stretch), but we will only know for sure when it's there. Kevin On Mon, 18 Nov 2019 at 20:48, Daniel Swarbrick < daniel.swarbr...@gmail.com> wrote: > It looks like De

[ceph-users] Re: add debian buster stable support for ceph-deploy

2019-11-18 Thread Daniel Swarbrick
Yes of course, these packages first have to make it through -testing before they can even be considered for buster-backports. FWIW, we have successfully been running cross-ported mimic packages from Ubuntu on buster for a few months now, rebuilt with the buster toolchain, along with a few minor pat

[ceph-users] Re: msgr2 not used on OSDs in some Nautilus clusters

2019-11-18 Thread Bryan Stillwell
I cranked up debug_ms to 20 on two of these clusters today and I still don't understand why some of the clusters use v2 and some just use v1. Here's the boot/peering process for the cluster which uses v2: 2019-11-18 16:46:03.027 7fabb6281dc0 0 osd.0 39101 done with init, starting boot proce
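
One quick cross-check alongside the debug_ms logs is to look at which address types the OSDs actually registered in the osdmap; a sketch assuming the Nautilus "ceph osd dump -f json" layout with public_addrs.addrvec per OSD:

    import json, subprocess

    # Sketch: count OSDs that advertise a msgr v2 address in the osdmap.
    # The public_addrs/addrvec structure is assumed from Nautilus output.
    dump = json.loads(subprocess.check_output(["ceph", "osd", "dump", "-f", "json"]))
    v2 = sum(1 for o in dump["osds"]
             if any(a["type"] == "v2" for a in o["public_addrs"]["addrvec"]))
    print("{}/{} OSDs advertise a v2 address".format(v2, len(dump["osds"])))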

[ceph-users] Re: Ssd cache question

2019-11-18 Thread Wesley Peng
Thanks Manuel for letting me know this. 18.11.2019, 22:11, "EDH - Manuel Rios Fernandez": Hi Wesley, it's a common assumption that an SSD cache will help in Ceph. Normally it produces other issues, also related to performance; there are a lot of mailing list threads about this. Our recommendation, in an RBD setup: don
