Re: [ceph-users] NFS

2019-10-17 Thread Brent Kennedy
Awesome, thank you for giving an overview of these features, sounds like the correct direction then! -Brent -Original Message- From: Daniel Gryniewicz Sent: Thursday, October 3, 2019 8:20 AM To: Brent Kennedy Cc: Marc Roos ; ceph-users Subject: Re: [ceph-users] NFS So, Ganesha is an

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-10-10 Thread David C
Thanks, Patrick. Looks like the fix is awaiting review, I guess my options are to hold tight for 14.2.5 or patch myself if I get desperate. I've seen this crash about 4 times over the past 96 hours, is there anything I can do to mitigate the issue in the meantime? On Wed, Oct 9, 2019 at 9:23 PM Pa

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-10-09 Thread Patrick Donnelly
Looks like this bug: https://tracker.ceph.com/issues/41148 On Wed, Oct 9, 2019 at 1:15 PM David C wrote: > > Hi Daniel > > Thanks for looking into this. I hadn't installed ceph-debuginfo, here's the > bt with line numbers: > > #0 operator uint64_t (this=0x10) at > /usr/src/debug/ceph-14.2.2/sr

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-10-09 Thread David C
Hi Daniel Thanks for looking into this. I hadn't installed ceph-debuginfo, here's the bt with line numbers: #0 operator uint64_t (this=0x10) at /usr/src/debug/ceph-14.2.2/src/include/object.h:123 #1 Client::fill_statx (this=this@entry=0x274b980, in=0x0, mask=mask@entry=341, stx=stx@entry=0x7fcc

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-10-07 Thread Daniel Gryniewicz
Client::fill_statx() is a fairly large function, so it's hard to know what's causing the crash. Can you get line numbers from your backtrace? Daniel On 10/7/19 9:59 AM, David C wrote: Hi All Further to my previous messages, I upgraded to libcephfs2-14.2.2-0.el7.x86_64 as suggested and thing

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-10-07 Thread David C
Hi All Further to my previous messages, I upgraded to libcephfs2-14.2.2-0.el7.x86_64 as suggested and things certainly seem a lot more stable, I have had some crashes though, could someone assist in debugging this latest crash please? (gdb) bt #0 0x7fce4e9fc1bb in Client::fill_statx(Inode*,

Re: [ceph-users] NFS

2019-10-03 Thread Marc Roos
the access key/irrelevant. -Original Message- Subject: Re: [ceph-users] NFS Hi Mark, Here's an example that should work--userx and usery are RGW users created in different tenants, like so: radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \
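The truncated radosgw-admin command above can be sketched out like this; the tenant and user names (tnt1, userx) come from the quoted message, while the key values and the second user are illustrative placeholders:

```shell
# Sketch: create RGW users under different tenants (key values are
# placeholders, not taken from the original mail)
radosgw-admin user create --tenant tnt1 --uid userx \
    --display-name "tnt1-userx" --access-key "userx-key" --secret "userx-secret"
radosgw-admin user create --tenant tnt2 --uid usery \
    --display-name "tnt2-usery" --access-key "usery-key" --secret "usery-secret"
```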

Re: [ceph-users] NFS

2019-10-03 Thread Daniel Gryniewicz
01 : ganesha.nfsd-4722[sigmgr] > config_errs_to_log :CONFIG :CRIT :Config File > (/etc/ganesha/ganesha.conf:209): 1 validation errors in block EXPORT > 03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] > config_errs_to_log :CONFIG :CRIT :Config File > (/etc/ganesha/ganesh

Re: [ceph-users] NFS

2019-10-03 Thread Matt Benjamin
1 : ganesha.nfsd-4722[sigmgr] > config_errs_to_log :CONFIG :CRIT :Config File > (/etc/ganesha/ganesha.conf:216): Errors processing block (FSAL) > 03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] > config_errs_to_log :CONFIG :CRIT :Config File > (/etc/ganesha/ganesha.con

Re: [ceph-users] NFS

2019-10-03 Thread Nathan Fish
nfsd-4722[sigmgr] > config_errs_to_log :CONFIG :CRIT :Config File > (/etc/ganesha/ganesha.conf:209): 1 validation errors in block EXPORT > 03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] > config_errs_to_log :CONFIG :CRIT :Config File > (/etc/ganesha/ganesha.co

Re: [ceph-users] NFS

2019-10-03 Thread Marc Roos
block EXPORT 03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:209): Errors processing block (EXPORT) -Original Message- Subject: Re: [ceph-users] NFS RGW NFS can support any NFS style of au

Re: [ceph-users] NFS

2019-10-03 Thread Matt Benjamin
RGW NFS can support any NFS style of authentication, but users will have the RGW access of their nfs-ganesha export. You can create exports with disjoint privileges, and, since recent Luminous and Nautilus releases, RGW tenants are supported. Matt On Tue, Oct 1, 2019 at 8:31 AM Marc Roos wrote: > > I think you can run into problems >

Re: [ceph-users] NFS

2019-10-03 Thread Daniel Gryniewicz
> as a NAS backend is new to me ( thus I might be over thinking this ) :) > > -Brent > > -Original Message- > From: Daniel Gryniewicz > Sent: Tuesday, October 1, 2019 8:20 AM > To: Marc Roos ; bkennedy ; > ceph-users > Subject: Re: [ceph-users] NFS > &

Re: [ceph-users] NFS

2019-10-01 Thread Brent Kennedy
; ceph-users Subject: Re: [ceph-users] NFS Ganesha can export CephFS or RGW. It cannot export anything else (like iscsi or RBD). Config for RGW looks like this: EXPORT { Export_ID=1; Path = "/"; Pseudo = "/rgw"; Access_Type = RW;

Re: [ceph-users] NFS

2019-10-01 Thread Marc Roos
= "/var/log/ganesha.log"; # enable = default; # } } -Original Message- Subject: Re: [ceph-users] NFS Ganesha can export CephFS or RGW. It cannot export anything else (like iscsi or RBD). Config for RGW looks like this: EXPORT { Export_ID=1;

Re: [ceph-users] NFS

2019-10-01 Thread Daniel Gryniewicz
Ganesha can export CephFS or RGW. It cannot export anything else (like iscsi or RBD). Config for RGW looks like this: EXPORT { Export_ID=1; Path = "/"; Pseudo = "/rgw"; Access_Type = RW; Protocols = 4; Transports = TCP; FSAL {
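The truncated EXPORT block above, filled out as a sketch based on Ganesha's RGW FSAL parameters — the user, key values, and RGW block settings are illustrative placeholders:

```
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/rgw";
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = RGW;
        User_Id = "userx";                  # placeholder RGW user
        Access_Key_Id = "userx-key";        # placeholder
        Secret_Access_Key = "userx-secret"; # placeholder
    }
}

RGW {
    # the ganesha host needs the same ceph config/keyring radosgw would use
    ceph_conf = "/etc/ceph/ceph.conf";
}
```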

Re: [ceph-users] NFS

2019-09-30 Thread Marc Roos
Just install these http://download.ceph.com/nfs-ganesha/ nfs-ganesha-rgw-2.7.1-0.1.el7.x86_64 nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64 libnfsidmap-0.25-19.el7.x86_64 nfs-ganesha-mem-2.7.1-0.1.el7.x86_64 nfs-ganesha-xfs-2.7.1-0.1.el7.x86_64 nfs-ganesha-2.7.1-0.1.el7.x86_64 nfs-ganesha-ceph-2.7.1-0.1.
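Installing those packages from the repo above can be sketched as follows on CentOS 7; the baseurl path is an assumption modeled on the download.ceph.com layout for that Ganesha version:

```shell
# Sketch: add the nfs-ganesha repo and install the Ceph/RGW FSALs
# (baseurl is an assumption — check download.ceph.com for your versions)
cat > /etc/yum.repos.d/nfs-ganesha.repo <<'EOF'
[nfs-ganesha]
name=nfs-ganesha
baseurl=http://download.ceph.com/nfs-ganesha/rpm-V2.7-stable/luminous/x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw
```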

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-07-19 Thread David C
Thanks, Jeff. I'll give 14.2.2 a go when it's released. On Wed, 17 Jul 2019, 22:29 Jeff Layton, wrote: > Ahh, I just noticed you were running nautilus on the client side. This > patch went into v14.2.2, so once you update to that you should be good > to go. > > -- Jeff > > On Wed, 2019-07-17 at

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-07-17 Thread Jeff Layton
Ahh, I just noticed you were running nautilus on the client side. This patch went into v14.2.2, so once you update to that you should be good to go. -- Jeff On Wed, 2019-07-17 at 17:10 -0400, Jeff Layton wrote: > This is almost certainly the same bug that is fixed here: > > https://github.com/ce

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-07-17 Thread Jeff Layton
This is almost certainly the same bug that is fixed here: https://github.com/ceph/ceph/pull/28324 It should get backported soon-ish but I'm not sure which luminous release it'll show up in. Cheers, Jeff On Wed, 2019-07-17 at 10:36 +0100, David C wrote: > Thanks for taking a look at this, Daniel

Re: [ceph-users] [Nfs-ganesha-devel] 2.7.3 with CEPH_FSAL Crashing

2019-07-17 Thread David C
Thanks for taking a look at this, Daniel. Below is the only interesting bit from the Ceph MDS log at the time of the crash but I suspect the slow requests are a result of the Ganesha crash rather than the cause of it. Copying the Ceph list in case anyone has any ideas. 2019-07-15 15:06:54.624007 7

Re: [ceph-users] Nfs-ganesha with rados_kv backend

2019-05-29 Thread Jeff Layton
On Wed, 2019-05-29 at 13:49 +, Stolte, Felix wrote: > Hi, > > is anyone running an active-passive nfs-ganesha cluster with cephfs backend > and using the rados_kv recovery backend? My setup runs fine, but takeover is > giving me a headache. On takeover I see the following messages in ganesha

Re: [ceph-users] NFS-Ganesha CEPH_FSAL | potential locking issue

2019-05-17 Thread David C
Thanks for your response on that, Jeff. Pretty sure this is nothing to do with Ceph or Ganesha, sorry for wasting your time. What I'm seeing is related to writeback on the client. I can mitigate the behaviour a bit by playing around with the vm.dirty* parameters. On Tue, Apr 16, 2019 at 7:07 PM
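The vm.dirty* tuning mentioned above can be sketched like this; the values are illustrative, not recommendations from the thread:

```shell
# Inspect the current writeback knobs on the NFS client
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_expire_centisecs

# Example: start background writeback earlier and cap dirty memory
# (illustrative values — tune for your own workload)
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
```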

Re: [ceph-users] NFS-Ganesha CEPH_FSAL | potential locking issue

2019-04-16 Thread Jeff Layton
On Tue, Apr 16, 2019 at 10:36 AM David C wrote: > > Hi All > > I have a single export of my cephfs using the ceph_fsal [1]. A CentOS 7 > machine mounts a sub-directory of the export [2] and is using it for the home > directory of a user (e.g everything under ~ is on the server). > > This works f
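The elided export config [1] for a single CephFS export would look roughly like this sketch, using the Ganesha CEPH FSAL parameter names; the cephx user and pseudo path are placeholders:

```
EXPORT {
    Export_ID = 2;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Protocols = 4;
    FSAL {
        Name = CEPH;
        User_Id = "admin";   # placeholder cephx user
    }
}
```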

Re: [ceph-users] NFS-Ganesha Mounts as a Read-Only Filesystem

2019-04-09 Thread Paul Emmerich
Looks like you are trying to write to the pseudo-root, mount /cephfs instead of /. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr. 31h 81247 München www.croit.io Tel: +49 89 1896585 90 On Sat, Apr 6, 2019 at 1:07 PM wrote
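In other words, mount the export's pseudo path rather than the pseudo-root; a sketch (the hostname is a placeholder):

```shell
# Wrong: the pseudo-root "/" is read-only
# mount -t nfs4 ganesha-host:/ /mnt

# Right: mount the CephFS export's pseudo path
mount -t nfs4 ganesha-host:/cephfs /mnt
```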

Re: [ceph-users] NFS-Ganesha Mounts as a Read-Only Filesystem

2019-04-08 Thread junk
Possibly the client doesn't like the server returning SecType = "none"; Maybe try SecType = "sys"? Leon L. Robinson > On 6 Apr 2019, at 12:06, > wrote: > > Hi all, > > I have recently setup a Ceph cluster and on request using CephFS (MDS > version: ceph version 13.2.5 (cbff874f9007f1869b

Re: [ceph-users] [Nfs-ganesha-devel] NFS-Ganesha CEPH_FSAL ceph.quota.max_bytes not enforced

2019-03-04 Thread David C
On Mon, Mar 4, 2019 at 5:53 PM Jeff Layton wrote: > > On Mon, 2019-03-04 at 17:26 +, David C wrote: > > Looks like you're right, Jeff. Just tried to write into the dir and am > > now getting the quota warning. So I guess it was the libcephfs cache > > as you say. That's fine for me, I don't n

Re: [ceph-users] [Nfs-ganesha-devel] NFS-Ganesha CEPH_FSAL ceph.quota.max_bytes not enforced

2019-03-04 Thread David C
Looks like you're right, Jeff. Just tried to write into the dir and am now getting the quota warning. So I guess it was the libcephfs cache as you say. That's fine for me, I don't need the quotas to be too strict, just a failsafe really. Interestingly, if I create a new dir, set the same 100MB quo
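For reference, a 100MB CephFS quota like the one described above is set as an xattr on the directory (the mount path is a placeholder); as the thread notes, libcephfs caching means enforcement may lag rather than being strict:

```shell
# Set a 100 MB quota on a directory of a mounted CephFS
setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/somedir

# Verify it
getfattr -n ceph.quota.max_bytes /mnt/cephfs/somedir
```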

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Marc Roos
://github.com/ceph/ceph/pull/19358 -Original Message- From: Erik McCormick [mailto:emccorm...@cirrusseven.com] Sent: dinsdag 9 oktober 2018 20:49 To: Alfredo Deza Cc: ceph-users Subject: Re: [ceph-users] nfs-ganesha version in Ceph repos On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick wrote: > > > > On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote: >> >> I had a similar problem: >> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html >> >> But even the recent 2.6.x releases were not working well for me (ma

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote: > I had a similar problem: > > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html > > But even the recent 2.6.x releases were not working well for me (many many > segfaults). I am on the master-branch (2.7.x) and that w

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote: > On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick > wrote: > > > > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick > > wrote: > > > > > > Hello, > > > > > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and > > > running into difficu

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Kevin Olbrich
I had a similar problem: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html But even the recent 2.6.x releases were not working well for me (many many segfaults). I am on the master-branch (2.7.x) and that works well with fewer crashes. Cluster is 13.2.1/.2 with nfs-ganes

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Alfredo Deza
On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick wrote: > > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick > wrote: > > > > Hello, > > > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and > > running into difficulties getting the current stable release running. > > The versions in t

Re: [ceph-users] nfs-ganesha version in Ceph repos

2018-10-09 Thread Erik McCormick
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick wrote: > > Hello, > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and > running into difficulties getting the current stable release running. > The versions in the Luminous repo is stuck at 2.6.1, whereas the > current stable version

Re: [ceph-users] NFS-ganesha with RGW

2018-05-30 Thread Matt Benjamin
Hi Josef, The main thing to make sure is that you have set up the host/vm running nfs-ganesha exactly as if it were going to run radosgw. For example, you need an appropriate keyring and ceph config. If radosgw starts and services requests, nfs-ganesha should too. With the debug settings you've

Re: [ceph-users] NFS-ganesha with RGW

2018-05-30 Thread Josef Zelenka
Hi, thanks for the quick reply. As for 1. I mentioned that I'm running Ubuntu 16.04, kernel 4.4.0-121 - as it seems the platform package (nfs-ganesha-ceph) does not include the rgw fsal. 2. Nfsd was running - after rebooting I managed to get ganesha to bind, rpcbind is running, though I still c

Re: [ceph-users] NFS-ganesha with RGW

2018-05-30 Thread Matt Benjamin
Hi Josef, 1. You do need the Ganesha fsal driver to be present; I don't know your platform and os version, so I couldn't look up what packages you might need to install (or if the platform package does not build the RGW fsal) 2. The most common reason for ganesha.nfsd to fail to bind to a port is

Re: [ceph-users] NFS-ganesha with RGW

2018-05-30 Thread Marc Roos
I think it is not working, I'm having the same problem. I'm on the ganesha mailing list and they have given me a patch for detailed logging on this issue, so they can determine what is going on. (Didn't have time to do this though) -Original Message- From: Josef Zelenka [mailto:jos

Re: [ceph-users] Nfs-ganesha 2.6 packages in ceph repo

2018-05-16 Thread Oliver Freyermuth
Hi David, thanks for the reply! Interesting that the package was not installed - it was for us, but the machines we run the nfs-ganesha servers on are also OSDs, so it might have been pulled in via ceph-packages for us. In any case, I'd say this means librados2 as dependency is missing either

Re: [ceph-users] Nfs-ganesha 2.6 packages in ceph repo

2018-05-16 Thread David C
Hi Oliver Thanks for following up. I just picked this up again today and it was indeed librados2...the package wasn't installed! It's working now, haven't tested much but I haven't noticed any problems yet. This is with nfs-ganesha-2.6.1-0.1.el7.x86_64, libcephfs2-12.2.5-0.el7.x86_64 and librados2
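A quick way to catch a missing runtime dependency like this before starting Ganesha — the FSAL path matches the error messages earlier in the thread:

```shell
# Any "not found" line means a missing shared-library dependency
ldd /usr/lib64/ganesha/libfsalceph.so | grep -i "not found"

# Confirm the Ceph client libraries are installed and version-matched
rpm -q librados2 libcephfs2 nfs-ganesha nfs-ganesha-ceph
```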

Re: [ceph-users] Nfs-ganesha 2.6 packages in ceph repo

2018-05-16 Thread Oliver Freyermuth
Hi David, did you already manage to check your librados2 version and manage to pin down the issue? Cheers, Oliver Am 11.05.2018 um 17:15 schrieb Oliver Freyermuth: > Hi David, > > Am 11.05.2018 um 16:55 schrieb David C: >> Hi Oliver >> >> Thanks for the detailed reponse! I've downgrad

Re: [ceph-users] Nfs-ganesha 2.6 packages in ceph repo

2018-05-11 Thread Oliver Freyermuth
Hi David, Am 11.05.2018 um 16:55 schrieb David C: > Hi Oliver > > Thanks for the detailed response! I've downgrad > still get a similar error: > > load_fsal :NFS STARTUP :CRIT :Could not dlopen > module:/usr/lib64/ganesha/libfsalceph.so Error:/lib64/libcephfs.so.2:

Re: [ceph-users] Nfs-ganesha 2.6 packages in ceph repo

2018-05-11 Thread David C
Hi Oliver Thanks for the detailed response! I've downgraded my libcephfs2 to 12.2.4 and still get a similar error: load_fsal :NFS STARTUP :CRIT :Could not dlopen module:/usr/lib64/ganesha/libfsalceph.so Error:/lib64/libcephfs.so.2: undefined symbol: _Z14common_ preinitRK18CephInitParameters18code_

Re: [ceph-users] Nfs-ganesha 2.6 packages in ceph repo

2018-05-10 Thread Oliver Freyermuth
Hi David, for what it's worth, we are running with nfs-ganesha 2.6.1 from Ceph repos on CentOS 7.4 with the following set of versions: libcephfs2-12.2.4-0.el7.x86_64 nfs-ganesha-2.6.1-0.1.el7.x86_64 nfs-ganesha-ceph-2.6.1-0.1.el7.x86_64 Of course, we plan to upgrade to 12.2.5 soon-ish... Am 11.

Re: [ceph-users] Nfs-ganesha rgw config for multi tenancy rgw users

2018-04-26 Thread Marc Roos
Anybody using ganesha with rgw and multi user? -Original Message- From: Marc Roos Sent: maandag 23 april 2018 5:33 To: ceph-users Subject: [ceph-users] Nfs-ganesha rgw config for multi tenancy rgw users I have problems exporting a bucket that really does exist. I have tried Path = "/t

Re: [ceph-users] NFS-Ganesha: Files disappearing?

2018-02-13 Thread Martin Emrich
When this happens, I see this log line from the rgw component in the FSAL: 2018-02-13 12:24:15.434086 7ff4e2ffd700  0 lookup_handle handle lookup failed <13234489286997512229,9160472602707183340>(need persistent handles) For a short time, I cannot stat the mentioned directories. After a minut

Re: [ceph-users] nfs-ganesha rpm build script has not been adapted for this -

2018-01-09 Thread Daniel Gryniewicz
This was fixed on next (for 2.6, currently in -rc1) but not backported to 2.5. Daniel On 01/09/2018 12:41 PM, Marc Roos wrote: The script has not been adapted for this - at the end http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/ nfs-ganesha-rgw-2.5.4-.el7.x86_64.rp

Re: [ceph-users] nfs-ganesha / cephfs issues

2017-10-01 Thread David
Cephfs does have repair tools but I wouldn't jump the gun, your metadata pool is probably fine. Unless you're getting health errors or seeing errors in your MDS log? Are you exporting a fuse or kernel mount with Ganesha (i.e using the vfs FSAL) or using the Ceph FSAL? Have you tried any tests dire

Re: [ceph-users] nfs-ganesha and rados gateway, Cannot find supported RGW runtime. Disabling RGW fsal build

2016-11-16 Thread Ken Dreyer
On Fri, Nov 4, 2016 at 2:14 AM, 于 姜 wrote: > ceph version 10.2.3 > ubuntu 14.04 server > nfs-ganesha 2.4.1 > ntirpc 1.4.3 > > cmake -DUSE_FSAL_RGW=ON ../src/ > > -- Found rgw libraries: /usr/lib > -- Could NOT find RGW: Found unsuitable version ".", but required is at > least "1.1" (found /usr) >
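The "unsuitable version" error from cmake usually means the librgw development package is missing at configure time; a hedged sketch (the package names are assumptions and vary by distro and Ceph release):

```shell
# Debian/Ubuntu (package names are an assumption for this release)
apt-get install librgw-dev librados-dev

# then re-run the configure step from the build directory
cmake -DUSE_FSAL_RGW=ON ../src/
```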

Re: [ceph-users] NFS gateway

2016-09-10 Thread jan hugo prins
Hi John, > Exporting kernel client mounts with the kernel NFS server is tested as > part of the regular testing we do on CephFS, so you should find it > pretty stable. This is definitely a legitimate way of putting a layer > of security between your application servers and your storage cluster. >

Re: [ceph-users] NFS gateway

2016-09-10 Thread jan hugo prins
I really think that doing async on big production environments is a no go. But it could very well explain the issues. Last week I started testing Ganesha and so far the results look promising. Jan Hugo. On 09/07/2016 06:31 PM, David wrote: > I have clients accessing CephFS over nfs (kernel nfs). I

Re: [ceph-users] NFS gateway

2016-09-10 Thread jan hugo prins
Hi Sean, Thanks for the advice. I'm currently looking at it. First results are promising. Jan Hugo On 09/07/2016 04:48 PM, Sean Redmond wrote: > Have you seen this : > > https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH > > -- Met vriendelijke groet / Best regards, Jan Hugo Pr

Re: [ceph-users] NFS gateway

2016-09-10 Thread jan hugo prins
Based on the advice of some people on this list I have started testing Ganesha-NFS in combination with Ceph. First results are very good and the product looks promising. When I want to use this I need to create a setup where different systems can mount different parts of the tree. How do I configur

Re: [ceph-users] NFS gateway

2016-09-07 Thread John Spray
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote: > Hi, > > One of the use-cases I'm currently testing is the possibility to replace > a NFS storage cluster using a Ceph cluster. > > The idea I have is to use a server as an intermediate gateway. On the > client side it will expose a NFS share

Re: [ceph-users] NFS gateway

2016-09-07 Thread David
I have clients accessing CephFS over nfs (kernel nfs). I was seeing slow writes with sync exports. I haven't had a chance to investigate and in the meantime I'm exporting with async (not recommended, but acceptable in my environment). I've been meaning to test out Ganesha for a while now @Sean, h
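The sync/async trade-off described above lives in /etc/exports on the kernel-NFS gateway; a sketch (paths and options are illustrative):

```
# Safe but slower: commit to CephFS before acknowledging each write
/export/cephfs  *(rw,sync,no_subtree_check,fsid=1)

# Faster, but risks data loss on a server crash (the workaround above)
#/export/cephfs *(rw,async,no_subtree_check,fsid=1)
```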

Re: [ceph-users] NFS gateway

2016-09-07 Thread Sean Redmond
Have you seen this : https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote: > Hi, > > One of the use-cases I'm currently testing is the possibility to replace > a NFS storage cluster using a Ceph cluster. > > The idea I have is to

Re: [ceph-users] nfs over rbd problem

2015-12-25 Thread Tyler Bishop
I didn't read the whole thing but if you're trying to do HA NFS, you need to run OCFS2 on your RBD and disable read/write caching on the rbd client. From: "Steve Anthony" To: ceph-users@lists.ceph.com Sent: Friday, December 25, 2015 12:39:01 AM Subject: Re: [ceph-user
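Disabling the librbd client cache, as suggested above for an image shared through a cluster filesystem, is a ceph.conf setting on the clients; a sketch:

```
[client]
# librbd writeback caching is unsafe when several clients access
# the same image through a cluster FS such as OCFS2
rbd cache = false
```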

Re: [ceph-users] nfs over rbd problem

2015-12-24 Thread Steve Anthony
I've run into many problems trying to run RBD/NFS Pacemaker like you describe on a two node cluster. In my case, most of the problems were a result of a) no quorum and b) no STONITH. If you're going to be running this setup in production, I *highly* recommend adding more nodes (if only to maintain

Re: [ceph-users] NFS interaction with RBD

2015-06-15 Thread Simon Leinen
Christian Schnidrig writes: > Well that’s strange. I wonder why our systems behave so differently. One point about our cluster (I work with Christian, who's still on vacation, and Jens-Christian) is that it has 124 OSDs and 2048 PGs (I think) in the pool used for these RBD volumes. As a result, e

Re: [ceph-users] NFS interaction with RBD

2015-06-15 Thread Simon Leinen
Trent Lloyd writes: > Jens-Christian Fischer writes: >> >> I think we (i.e. Christian) found the problem: >> We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as > he hit all disks, we started to experience these 120 second timeouts. We > realized that the QEMU process on

Re: [ceph-users] NFS interaction with RBD

2015-06-11 Thread Christian Schnidrig
Hi George In order to experience the error it was enough to simply run mkfs.xfs on all the volumes. In the meantime it became clear what the problem was: ~ ; cat /proc/183016/limits ... Max open files  1024  4096  files ... This can be changed by settin
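Checking and raising that per-process file-descriptor limit for QEMU can be sketched like this; the libvirt setting is an assumption about a libvirt-managed hypervisor, and the value is illustrative:

```shell
# Check the open-files limit of a running QEMU process
grep "Max open files" /proc/$(pgrep -f qemu | head -1)/limits

# For libvirt-managed guests, raise it in /etc/libvirt/qemu.conf:
#   max_files = 32768
# then restart libvirtd before restarting the guests.
```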

Re: [ceph-users] NFS interaction with RBD

2015-06-11 Thread Christian Schnidrig
Hi George Well that’s strange. I wonder why our systems behave so differently. We’ve got: Hypervisors running on Ubuntu 14.04. VMs with 9 ceph volumes: 2TB each. XFS instead of your ext4 Maybe the number of placement groups plays a major role as well. Jens-Christian may be able to give you th

Re: [ceph-users] NFS interaction with RBD

2015-05-29 Thread John-Paul Robinson
In the end this came down to one slow OSD. There were no hardware issues so have to just assume something gummed up during rebalancing and peering. I restarted the osd process after setting the cluster to noout. After the osd was restarted the rebalance completed and the cluster returned to heal

Re: [ceph-users] NFS interaction with RBD

2015-05-29 Thread Georgios Dimitrakakis
All, I've tried to recreate the issue without success! My configuration is the following: OS (Hypervisor + VM): CentOS 6.6 (2.6.32-504.1.3.el6.x86_64) QEMU: qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64 Ceph: ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047), 20x4TB OSDs equally distr

Re: [ceph-users] NFS interaction with RBD

2015-05-28 Thread Georgios Dimitrakakis
Thanks a million for the feedback Christian! I've tried to recreate the issue with 10 RBD volumes mounted on a single server without success! I've issued the "mkfs.xfs" command simultaneously (or at least as fast as I could do it in different terminals) without noticing any problems. Can you p

Re: [ceph-users] NFS interaction with RBD

2015-05-28 Thread John-Paul Robinson
To follow up on the original post, Further digging indicates this is a problem with RBD image access and is not related to NFS-RBD interaction as initially suspected. The nfsd is simply hanging as a result of a hung request to the XFS file system mounted on our RBD-NFS gateway. This hung XFS c

Re: [ceph-users] NFS interaction with RBD

2015-05-27 Thread Trent Lloyd
Jens-Christian Fischer writes: > > I think we (i.e. Christian) found the problem: > We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as he hit all disks, we started to experience these 120 second timeouts. We realized that the QEMU process on the hypervisor is opening a

Re: [ceph-users] NFS interaction with RBD

2015-05-27 Thread Jens-Christian Fischer
George, I will let Christian provide you the details. As far as I know, it was enough to just do a ‘ls’ on all of the attached drives. we are using Qemu 2.0: $ dpkg -l | grep qemu ii ipxe-qemu 1.0.0+git-2013.c3d1e78-2ubuntu1 all PXE boot firmware - ROM

Re: [ceph-users] NFS interaction with RBD

2015-05-26 Thread Georgios Dimitrakakis
Jens-Christian, how did you test that? Did you just try to write to them simultaneously? Any other tests that one can perform to verify that? In our installation we have a VM with 30 RBD volumes mounted which are all exported via NFS to other VMs. No one has complained for the moment but th

Re: [ceph-users] NFS interaction with RBD

2015-05-26 Thread Jens-Christian Fischer
I think we (i.e. Christian) found the problem: We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as he hit all disks, we started to experience these 120 second timeouts. We realized that the QEMU process on the hypervisor is opening a TCP connection to every OSD for every

Re: [ceph-users] NFS interaction with RBD

2015-05-24 Thread Christian Balzer
Hello, lets compare your case with John-Paul's. Different OS and Ceph versions (thus we can assume different NFS versions as well). The only common thing is that both of you added OSDs and are likely suffering from delays stemming from Ceph re-balancing or deep-scrubbing. Ceph logs will only pi

Re: [ceph-users] NFS interaction with RBD

2015-05-23 Thread Jens-Christian Fischer
We see something very similar on our Ceph cluster, starting as of today. We use a 16 node, 102 OSD Ceph installation as the basis for an Icehouse OpenStack cluster (we applied the RBD patches for live migration etc) On this cluster we have a big ownCloud installation (Sync & Share) that stores

Re: [ceph-users] NFS over CEPH - best practice

2014-05-13 Thread Dimitri Maziuk
On 5/13/2014 9:43 AM, Andrei Mikhailovsky wrote: Dima, do you have any examples / howtos for this? I would love to give it a go. Not really: I haven't done this myself. Google for "tgtd failover with heartbeat", you should find something useful. The setups I have are heartbeat (3.0.x) managi

Re: [ceph-users] NFS over CEPH - best practice

2014-05-13 Thread Andrei Mikhailovsky
Dima, do you have any examples / howtos for this? I would love to give it a go. Cheers - Original Message - From: "Dimitri Maziuk" To: ceph-users@lists.ceph.com Sent: Monday, 12 May, 2014 3:38:11 PM Subject: Re: [ceph-users] NFS over CEPH - best practice On 5/12/20

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Dimitri Maziuk
On 05/12/2014 01:17 PM, McNamara, Bradley wrote: > The underlying file system on the RBD needs to be a clustered file system, like OCFS2, GFS2, etc., and a cluster between the two, or more, iSCSI target servers needs to be created to manage the clustered file system. Looks like we aren't sure wha

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread McNamara, Bradley
Andrei Mikhailovsky Sent: Sunday, May 11, 2014 1:25 PM To: l...@consolejunkie.net Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] NFS over CEPH - best practice Sorry if these questions will sound stupid, but I was not able to find an answer by googling. 1. Does iSCSI protocol support having

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Leen Besselink
On Mon, May 12, 2014 at 12:08:24PM -0500, Dimitri Maziuk wrote: > PS. (now that I looked) see e.g. > http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/ > > > Dima Didn't you say you wanted multiple servers to write to the same LUN ?

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Leen Besselink
ts.ceph.com > Cc: "Andrei Mikhailovsky" > Sent: Sunday, 11 May, 2014 11:41:08 PM > Subject: Re: [ceph-users] NFS over CEPH - best practice > > On Sun, May 11, 2014 at 09:24:30PM +0100, Andrei Mikhailovsky wrote: > > Sorry if these questions will sound stupid, but

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Dimitri Maziuk
PS. (now that I looked) see e.g. http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/ Dima signature.asc Description: OpenPGP digital signature ___ ceph-users mailing list ceph-users@lists

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Dimitri Maziuk
On 5/12/2014 4:52 AM, Andrei Mikhailovsky wrote: Leen, thanks for explaining things. It does make sense now. Unfortunately, it does look like this technology would not fulfill my requirements as I do need to have an ability to perform maintenance without shutting down vms. I've no idea how muc

Re: [ceph-users] NFS over CEPH - best practice

2014-05-12 Thread Andrei Mikhailovsky
for all your help Andrei - Original Message - From: "Leen Besselink" To: ceph-users@lists.ceph.com Cc: "Andrei Mikhailovsky" Sent: Sunday, 11 May, 2014 11:41:08 PM Subject: Re: [ceph-users] NFS over CEPH - best practice On Sun, May 11, 2014 at 09:2

Re: [ceph-users] NFS over CEPH - best practice

2014-05-11 Thread Leen Besselink
You should test when you've built the setup. > Cheers > Hope that helps. > Andrei > - Original Message - > > From: "Leen Besselink" > To: ceph-users@lists.ceph.com > Sent: Saturday, 10 May, 2014 8:31:02 AM > Subject: Re: [ceph-users] NFS over CEPH

Re: [ceph-users] NFS over CEPH - best practice

2014-05-11 Thread Andrei Mikhailovsky
possible with iscsi? Cheers Andrei - Original Message - From: "Leen Besselink" To: ceph-users@lists.ceph.com Sent: Saturday, 10 May, 2014 8:31:02 AM Subject: Re: [ceph-users] NFS over CEPH - best practice On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote:

Re: [ceph-users] NFS over CEPH - best practice

2014-05-10 Thread Leen Besselink
"Leen Besselink" > To: ceph-users@lists.ceph.com > Sent: Thursday, 8 May, 2014 9:35:21 PM > Subject: Re: [ceph-users] NFS over CEPH - best practice > > On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote: > > Le 07/05/2014 15:23, Vlad Gorbunov a écrit :

Re: [ceph-users] NFS over CEPH - best practice

2014-05-09 Thread Maciej Bonin
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Andrei Mikhailovsky Sent: 09 May 2014 12:38 To: l...@consolejunkie.net Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] NFS over CEPH - best practice Ideally I would like to have a setup with 2+ iscsi servers, so that I

Re: [ceph-users] NFS over CEPH - best practice

2014-05-09 Thread Andrei Mikhailovsky
mounted on several servers. Would the suggested setup not work for my requirements? Andrei - Original Message - From: "Leen Besselink" To: ceph-users@lists.ceph.com Sent: Thursday, 8 May, 2014 9:35:21 PM Subject: Re: [ceph-users] NFS over CEPH - best practice On T

Re: [ceph-users] NFS over CEPH - best practice

2014-05-09 Thread Andrei Mikhailovsky
2014 12:26:17 AM Subject: Re: [ceph-users] NFS over CEPH - best practice On 07/05/14 19:46, Andrei Mikhailovsky wrote: > Hello guys, > > I would like to offer NFS service to the XenServer and VMWare > hypervisors for storing vm images. I am currently running ceph rbd with &

Re: [ceph-users] NFS over CEPH - best practice

2014-05-08 Thread Stuart Longland
On 07/05/14 19:46, Andrei Mikhailovsky wrote: > Hello guys, > > I would like to offer NFS service to the XenServer and VMWare > hypervisors for storing vm images. I am currently running ceph rbd with > kvm, which is working reasonably well. > > What would be the best way of running NFS services o

Re: [ceph-users] NFS over CEPH - best practice

2014-05-08 Thread Leen Besselink
On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote: > On 07/05/2014 15:23, Vlad Gorbunov wrote: > >It's easy to install tgtd with ceph support. ubuntu 12.04 for example: > > > >Connect ceph-extras repo: > >echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release > >-sc) ma

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Gilles Mocellin
On 07/05/2014 15:23, Vlad Gorbunov wrote: It's easy to install tgtd with ceph support. ubuntu 12.04 for example: Connect ceph-extras repo: echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list Install tgtd with rbd
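The install commands quoted in this thread can be pulled together into one sketch. The repository URL and the distribution come from the thread itself; the package name `tgt` and the final verification step are assumptions about what the ceph-extras build provides, so adjust them for your distribution:

```shell
# Sketch assembled from the commands quoted in the thread (Ubuntu 12.04).
# Assumption: the ceph-extras repo ships a tgt package built with rbd support.

# 1. Add the ceph-extras repository (command as given in the thread):
echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release -sc) main | \
    sudo tee /etc/apt/sources.list.d/ceph-extras.list

# 2. Install tgt from it:
sudo apt-get update
sudo apt-get install -y tgt

# 3. Check that this build knows about the rbd backing store:
sudo tgtadm --lld iscsi --mode system --op show | grep -i rbd
```

If the last command prints an rbd entry among the supported backing stores, targets can be defined directly against RBD images instead of going through a kernel-mapped block device.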

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Vladislav Gorbunov
" > To: "Sergey Malinin" > Cc: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com > Sent: Wednesday, 7 May, 2014 2:23:52 PM > > Subject: Re: [ceph-users] NFS over CEPH - best practice > > It's easy to install tgtd with ceph support. ubuntu 12.04 for

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
"Vlad Gorbunov" To: "Sergey Malinin" Cc: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com Sent: Wednesday, 7 May, 2014 2:23:52 PM Subject: Re: [ceph-users] NFS over CEPH - best practice It's easy to install tgtd with ceph support. ubuntu 12.04 for example: Conne

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Vlad Gorbunov
is there a howto somewhere describing the steps on how to setup iscsi multipathing over ceph? It looks like a good alternative to nfs Thanks From: "Vlad Gorbunov" To: "Andrei Mikhailovsky" Cc: ceph-users@lists.ceph.com Sent: Wednesday, 7 May, 2014 12:02:09 PM Subject: Re: [ce

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Sergey Malinin
like a good alternative to nfs > > Thanks > > From: "Vlad Gorbunov" mailto:vadi...@gmail.com)> > To: "Andrei Mikhailovsky" mailto:and...@arhont.com)> > Cc: ceph-users@lists.ceph.com (mailto:ceph-users@lists.ceph.com) > Sent: Wednesday, 7 May, 2014 12:02:09 PM &

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
Wednesday, 7 May, 2014 12:02:09 PM Subject: Re: [ceph-users] NFS over CEPH - best practice For XenServer or VMware it is better to use an iSCSI client to tgtd with ceph support. You can install tgtd on an OSD or monitor server and use multipath for failover. On Wed, May 7, 2014 at 9:47 PM, Andre

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Cedric Lemarchand
I am surprised that CephFS isn't proposed as an option, given that it removes the non-negligible block storage layer from the picture. I always feel uncomfortable stacking storage technologies or file systems (here NFS over XFS over iSCSI over RBD over RADOS) and try to stay as close as possible to the "KIS

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Vlad Gorbunov
For XenServer or VMware it is better to use an iSCSI client to tgtd with ceph support. You can install tgtd on an OSD or monitor server and use multipath for failover. On Wed, May 7, 2014 at 9:47 PM, Andrei Mikhailovsky wrote: > Hello guys, > I would like to offer NFS service to the XenServer and VMWa
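A minimal tgt configuration for the setup Vlad describes might look like the fragment below. The IQN, the pool/image name, and the initiator subnet are placeholders, and `bs-type rbd` assumes a tgt build that includes the rbd backing store (such as the ceph-extras packages discussed elsewhere in this thread). For the failover he mentions, the same target would be defined identically on a second tgtd host and the initiators pointed at both portals via multipath:

```
# /etc/tgt/targets.conf -- illustrative sketch only; all names are placeholders
<target iqn.2014-05.com.example:rbd.vmstore>
    driver iscsi
    bs-type rbd
    backing-store rbd/vmstore        # <pool>/<image>
    initiator-address 192.168.0.0/24
</target>
```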

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrija Panic
> *From: *"Wido den Hollander" > *To: *ceph-users@lists.ceph.com > *Sent: *Wednesday, 7 May, 2014 11:15:39 AM > *Subject: *Re: [ceph-users] NFS over CEPH - best practice > > On 05/07/2014 11:46 AM, Andrei Mikhailovsky wrote: > > Hello guys, > > > > I would like

Re: [ceph-users] NFS over CEPH - best practice

2014-05-07 Thread Andrei Mikhailovsky
- Original Message - From: "Wido den Hollander" To: ceph-users@lists.ceph.com Sent: Wednesday, 7 May, 2014 11:15:39 AM Subject: Re: [ceph-users] NFS over CEPH - best practice On 05/07/2014 11:46 AM, Andrei Mikhailovsky wrote: > Hello guys, > > I would like to offer NFS service to
