Awesome, thank you for giving an overview of these features; sounds like the
correct direction then!
-Brent
-Original Message-
From: Daniel Gryniewicz
Sent: Thursday, October 3, 2019 8:20 AM
To: Brent Kennedy
Cc: Marc Roos ; ceph-users
Subject: Re: [ceph-users] NFS
So, Ganesha is an
Thanks, Patrick. Looks like the fix is awaiting review; I guess my options
are to hold tight for 14.2.5 or patch it myself if I get desperate. I've seen
this crash about four times over the past 96 hours. Is there anything I can do
to mitigate the issue in the meantime?
On Wed, Oct 9, 2019 at 9:23 PM Pa
Looks like this bug: https://tracker.ceph.com/issues/41148
On Wed, Oct 9, 2019 at 1:15 PM David C wrote:
>
> Hi Daniel
>
> Thanks for looking into this. I hadn't installed ceph-debuginfo, here's the
> bt with line numbers:
>
> #0 operator uint64_t (this=0x10) at
> /usr/src/debug/ceph-14.2.2/sr
Hi Daniel
Thanks for looking into this. I hadn't installed ceph-debuginfo, here's the
bt with line numbers:
#0 operator uint64_t (this=0x10) at
/usr/src/debug/ceph-14.2.2/src/include/object.h:123
#1 Client::fill_statx (this=this@entry=0x274b980, in=0x0, mask=mask@entry=341,
stx=stx@entry=0x7fcc
Client::fill_statx() is a fairly large function, so it's hard to know
what's causing the crash. Can you get line numbers from your backtrace?
Daniel
On 10/7/19 9:59 AM, David C wrote:
Hi All
Further to my previous messages, I upgraded
to libcephfs2-14.2.2-0.el7.x86_64 as suggested and thing
Hi All
Further to my previous messages, I upgraded
to libcephfs2-14.2.2-0.el7.x86_64 as suggested and things certainly seem a
lot more stable. I have had some crashes though; could someone assist in
debugging this latest crash, please?
(gdb) bt
#0 0x7fce4e9fc1bb in Client::fill_statx(Inode*,
the access key/irrelevant.
-Original Message-
Subject: Re: [ceph-users] NFS
Hi Mark,
Here's an example that should work--userx and usery are RGW users
created in different tenants, like so:
radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \
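For reference, a complete pair of invocations would look roughly like this; the
access keys, secrets and names below are illustrative placeholders, using the
standard --access-key/--secret-key flags:

radosgw-admin user create --tenant tnt1 --uid userx \
    --display-name "tnt1-userx" --access-key userxacc --secret-key test123

radosgw-admin user create --tenant tnt2 --uid usery \
    --display-name "tnt2-usery" --access-key useryacc --secret-key test456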
03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:216): Errors processing block (FSAL)
03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:209): 1 validation errors in block EXPORT
03/10/2019 16:15:37 : epoch 5d8d274c : c01 : ganesha.nfsd-4722[sigmgr] config_errs_to_log :CONFIG :CRIT :Config File (/etc/ganesha/ganesha.conf:209): Errors processing block (EXPORT)
-Original Message-
Subject: Re: [ceph-users] NFS
RGW NFS can support any NFS style of au
RGW NFS can support any NFS style of authentication, but users will
have the RGW access of their nfs-ganesha export. You can create
exports with disjoint privileges and, since recent Luminous (L) and Nautilus (N) releases, exports for separate RGW tenants.
Matt
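To make the "disjoint privileges" point concrete, here is a minimal sketch of
two RGW exports, each bound to its own user and access mode. Name, User_Id,
Access_Key_Id and Secret_Access_Key are standard FSAL_RGW options, but the
tenant-qualified "tnt1$userx" form and all key values are assumptions to check
against your Ganesha version:

EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/userx";
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = RGW;
        User_Id = "tnt1$userx";
        Access_Key_Id = "userxacc";
        Secret_Access_Key = "test123";
    }
}

EXPORT
{
    Export_ID = 2;
    Path = "/";
    Pseudo = "/usery";
    Access_Type = RO;    # read-only export for the second user
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = RGW;
        User_Id = "tnt2$usery";
        Access_Key_Id = "useryacc";
        Secret_Access_Key = "test456";
    }
}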
On Tue, Oct 1, 2019 at 8:31 AM Marc Roos wrote:
>
> I think you can run into problems
>
> as a NAS backend is new to me ( thus I might be over thinking this ) :)
>
> -Brent
>
> -Original Message-
> From: Daniel Gryniewicz
> Sent: Tuesday, October 1, 2019 8:20 AM
> To: Marc Roos ; bkennedy ;
> ceph-users
> Subject: Re: [ceph-users] NFS
>
ceph-users
Subject: Re: [ceph-users] NFS
Ganesha can export CephFS or RGW. It cannot export anything else (like iscsi
or RBD). Config for RGW looks like this:
EXPORT
{
Export_ID=1;
Path = "/";
Pseudo = "/rgw";
Access_Type = RW;
= "/var/log/ganesha.log";
# enable = default;
# }
}
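The stray "/var/log/ganesha.log" line above appears to be the tail of a LOG
block that was cut off. A typical companion RGW and LOG section for such an
export looks roughly like this (a sketch; the instance name and paths are
placeholders):

RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
    name = "client.rgw.gateway1";    # hypothetical RGW instance name
    cluster = "ceph";
}

LOG {
    Default_Log_Level = INFO;
    Facility {
        name = FILE;
        destination = "/var/log/ganesha.log";
        enable = default;
    }
}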
-Original Message-
Subject: Re: [ceph-users] NFS
Ganesha can export CephFS or RGW. It cannot export anything else (like
iscsi or RBD). Config for RGW looks like this:
EXPORT
{
Export_ID=1;
Ganesha can export CephFS or RGW. It cannot export anything else (like
iscsi or RBD). Config for RGW looks like this:
EXPORT
{
Export_ID=1;
Path = "/";
Pseudo = "/rgw";
Access_Type = RW;
Protocols = 4;
Transports = TCP;
FSAL {
Just install these
http://download.ceph.com/nfs-ganesha/
nfs-ganesha-rgw-2.7.1-0.1.el7.x86_64
nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64
libnfsidmap-0.25-19.el7.x86_64
nfs-ganesha-mem-2.7.1-0.1.el7.x86_64
nfs-ganesha-xfs-2.7.1-0.1.el7.x86_64
nfs-ganesha-2.7.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.7.1-0.1.
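One way to consume those packages on CentOS 7 is a small repo file pointing at
download.ceph.com; a sketch only - the exact baseurl depends on the Ganesha and
Ceph releases you want, so adjust it to the directory layout you see on the
server:

cat > /etc/yum.repos.d/nfs-ganesha.repo <<'EOF'
[nfs-ganesha]
name=nfs-ganesha packages from download.ceph.com
baseurl=http://download.ceph.com/nfs-ganesha/rpm-V2.7-stable/luminous/x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y nfs-ganesha nfs-ganesha-ceph nfs-ganesha-rgw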
Thanks, Jeff. I'll give 14.2.2 a go when it's released.
On Wed, 17 Jul 2019, 22:29 Jeff Layton, wrote:
> Ahh, I just noticed you were running nautilus on the client side. This
> patch went into v14.2.2, so once you update to that you should be good
> to go.
>
> -- Jeff
>
> On Wed, 2019-07-17 at
Ahh, I just noticed you were running nautilus on the client side. This
patch went into v14.2.2, so once you update to that you should be good
to go.
-- Jeff
On Wed, 2019-07-17 at 17:10 -0400, Jeff Layton wrote:
> This is almost certainly the same bug that is fixed here:
>
> https://github.com/ce
This is almost certainly the same bug that is fixed here:
https://github.com/ceph/ceph/pull/28324
It should get backported soon-ish but I'm not sure which luminous
release it'll show up in.
Cheers,
Jeff
On Wed, 2019-07-17 at 10:36 +0100, David C wrote:
> Thanks for taking a look at this, Daniel
Thanks for taking a look at this, Daniel. Below is the only interesting bit
from the Ceph MDS log at the time of the crash but I suspect the slow
requests are a result of the Ganesha crash rather than the cause of it.
Copying the Ceph list in case anyone has any ideas.
2019-07-15 15:06:54.624007 7
On Wed, 2019-05-29 at 13:49 +, Stolte, Felix wrote:
> Hi,
>
> is anyone running an active-passive nfs-ganesha cluster with cephfs backend
> and using the rados_kv recovery backend? My setup runs fine, but takeover is
> giving me a headache. On takeover I see the following messages in ganesha
Thanks for your response on that, Jeff. Pretty sure this is nothing to do
with Ceph or Ganesha, sorry for wasting your time. What I'm seeing is
related to writeback on the client. I can mitigate the behaviour a bit by
playing around with the vm.dirty* parameters.
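For anyone hitting the same behaviour, the knobs in question are the usual
kernel writeback sysctls; for example (values are illustrative, not a
recommendation):

# start background writeback earlier and cap the amount of dirty page cache
sysctl -w vm.dirty_background_bytes=67108864    # 64 MB
sysctl -w vm.dirty_bytes=268435456              # 256 MB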
On Tue, Apr 16, 2019 at 7:07 PM
On Tue, Apr 16, 2019 at 10:36 AM David C wrote:
>
> Hi All
>
> I have a single export of my cephfs using the ceph_fsal [1]. A CentOS 7
> machine mounts a sub-directory of the export [2] and is using it for the home
> directory of a user (e.g everything under ~ is on the server).
>
> This works f
Looks like you are trying to write to the pseudo-root; mount /cephfs
instead of /.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
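In other words, mount the export by its pseudo path rather than "/"; something
like the following, where the hostname and mount point are placeholders:

mount -t nfs4 ganesha-server.example.com:/cephfs /mnt/cephfs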
On Sat, Apr 6, 2019 at 1:07 PM wrote
Possibly the client doesn't like the server returning SecType = "none";
Maybe try SecType = "sys"?
Leon L. Robinson
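SecType is set per export in ganesha.conf; a minimal sketch for a CephFS
export, with paths as placeholders:

EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    SecType = "sys";    # instead of "none"
    FSAL {
        Name = CEPH;
    }
}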
> On 6 Apr 2019, at 12:06,
> wrote:
>
> Hi all,
>
> I have recently setup a Ceph cluster and on request using CephFS (MDS
> version: ceph version 13.2.5 (cbff874f9007f1869b
On Mon, Mar 4, 2019 at 5:53 PM Jeff Layton wrote:
>
> On Mon, 2019-03-04 at 17:26 +, David C wrote:
> > Looks like you're right, Jeff. Just tried to write into the dir and am
> > now getting the quota warning. So I guess it was the libcephfs cache
> > as you say. That's fine for me, I don't n
Looks like you're right, Jeff. Just tried to write into the dir and am now
getting the quota warning. So I guess it was the libcephfs cache as you
say. That's fine for me, I don't need the quotas to be too strict, just a
failsafe really.
Interestingly, if I create a new dir, set the same 100MB quo
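For context, the 100MB quota being discussed is a CephFS directory quota, set
as an extended attribute on a directory of a mounted CephFS; a sketch, with the
path as a placeholder:

# set a 100 MB quota on a directory of a CephFS mount
setfattr -n ceph.quota.max_bytes -v 104857600 /mnt/cephfs/homedir
# read it back
getfattr -n ceph.quota.max_bytes /mnt/cephfs/homedir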
https://github.com/ceph/ceph/pull/19358
-Original Message-
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Tuesday, 9 October 2018 20:49
To: Alfredo Deza
Cc: ceph-users
Subject: Re: [ceph-users] nfs-ganesha version in Ceph repos
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote
On Tue, Oct 9, 2018 at 2:55 PM Erik McCormick
wrote:
>
>
>
> On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote:
>>
>> I had a similar problem:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>>
>> But even the recent 2.6.x releases were not working well for me (ma
On Tue, Oct 9, 2018, 2:17 PM Kevin Olbrich wrote:
> I had a similar problem:
>
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
>
> But even the recent 2.6.x releases were not working well for me (many many
> segfaults). I am on the master-branch (2.7.x) and that w
On Tue, Oct 9, 2018, 1:48 PM Alfredo Deza wrote:
> On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
> wrote:
> >
> > On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> > wrote:
> > >
> > > Hello,
> > >
> > > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> > > running into difficu
I had a similar problem:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-September/029698.html
But even the recent 2.6.x releases were not working well for me (many, many
segfaults). I am on the master branch (2.7.x) and that works well with
fewer crashes.
Cluster is 13.2.1/.2 with nfs-ganes
On Tue, Oct 9, 2018 at 1:39 PM Erik McCormick
wrote:
>
> On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
> wrote:
> >
> > Hello,
> >
> > I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> > running into difficulties getting the current stable release running.
> > The versions in t
On Tue, Oct 9, 2018 at 1:27 PM Erik McCormick
wrote:
>
> Hello,
>
> I'm trying to set up an nfs-ganesha server with the Ceph FSAL, and
> running into difficulties getting the current stable release running.
> The version in the Luminous repo is stuck at 2.6.1, whereas the
> current stable version
Hi Josef,
The main thing to make sure is that you have set up the host/vm
running nfs-ganesha exactly as if it were going to run radosgw. For
example, you need an appropriate keyring and ceph config. If radosgw
starts and services requests, nfs-ganesha should too.
With the debug settings you've
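As a sketch of what that usually means in practice (the client name, caps and
paths are illustrative, not from this thread): the nfs-ganesha host needs the
cluster's ceph.conf plus a keyring for the RGW client it runs as, e.g.:

# on an admin node: create a key for the gateway (caps as commonly used for radosgw)
ceph auth get-or-create client.rgw.gateway1 mon 'allow rw' osd 'allow rwx' \
    -o /etc/ceph/ceph.client.rgw.gateway1.keyring
# copy /etc/ceph/ceph.conf and this keyring to the nfs-ganesha host, and
# reference the same client name in the RGW block of ganesha.conf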
Hi, thanks for the quick reply. As for 1: I mentioned that I'm running
Ubuntu 16.04, kernel 4.4.0-121 - it seems the platform
package (nfs-ganesha-ceph) does not include the RGW FSAL.
2. Nfsd was running - after rebooting I managed to get Ganesha to bind;
rpcbind is running, though I still c
Hi Josef,
1. You do need the Ganesha fsal driver to be present; I don't know
your platform and os version, so I couldn't look up what packages you
might need to install (or if the platform package does not build the
RGW fsal)
2. The most common reason for ganesha.nfsd to fail to bind to a port
is
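A quick way to check whether something already owns the NFS port (typically
the kernel NFS server or an existing rpcbind registration), and to free it, is
sketched below; service names differ per distro:

ss -tlnp | grep -w 2049          # who is listening on the NFS port?
rpcinfo -p | grep nfs            # is a kernel nfsd registered with rpcbind?
# if the kernel NFS server owns it, stop it before starting ganesha
systemctl stop nfs-kernel-server      # "nfs-server" on RHEL/CentOS
systemctl disable nfs-kernel-server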
I think it is not working; I'm having the same problem. I'm on the
ganesha mailing list and they have given me a patch for detailed logging
on this issue, so they can determine what is going on. (Didn't have time
to do this though.)
-Original Message-
From: Josef Zelenka [mailto:jos
Hi David,
thanks for the reply!
Interesting that the package was not installed - it was for us, but the
machines we run the nfs-ganesha servers on are also OSDs, so it might have been
pulled in via ceph-packages for us.
In any case, I'd say this means librados2 as dependency is missing either
Hi Oliver
Thanks for following up. I just picked this up again today and it was
indeed librados2...the package wasn't installed! It's working now, haven't
tested much but I haven't noticed any problems yet. This is with
nfs-ganesha-2.6.1-0.1.el7.x86_64, libcephfs2-12.2.5-0.el7.x86_64 and
librados2
Hi David,
did you already manage to check your librados2 version and manage to pin down
the issue?
Cheers,
Oliver
On 11.05.2018 at 17:15, Oliver Freyermuth wrote:
> Hi David,
>
> On 11.05.2018 at 16:55, David C wrote:
>> Hi Oliver
>>
>> Thanks for the detailed response! I've downgrad
Hi David,
On 11.05.2018 at 16:55, David C wrote:
> Hi Oliver
>
> Thanks for the detailed response! I've downgraded my libcephfs2 to 12.2.4 and
> still get a similar error:
>
> load_fsal :NFS STARTUP :CRIT :Could not dlopen
> module:/usr/lib64/ganesha/libfsalceph.so Error:/lib64/libcephfs.so.2:
Hi Oliver
Thanks for the detailed response! I've downgraded my libcephfs2 to 12.2.4
and still get a similar error:
load_fsal :NFS STARTUP :CRIT :Could not dlopen
module:/usr/lib64/ganesha/libfsalceph.so
Error:/lib64/libcephfs.so.2: undefined symbol: _Z14common_preinitRK18CephInitParameters18code_
Hi David,
for what it's worth, we are running with nfs-ganesha 2.6.1 from Ceph repos on
CentOS 7.4 with the following set of versions:
libcephfs2-12.2.4-0.el7.x86_64
nfs-ganesha-2.6.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.6.1-0.1.el7.x86_64
Of course, we plan to upgrade to 12.2.5 soon-ish...
Am 11.
Anybody using ganesha with rgw and multi user?
-Original Message-
From: Marc Roos
Sent: Monday, 23 April 2018 5:33
To: ceph-users
Subject: [ceph-users] Nfs-ganesha rgw config for multi tenancy rgw users
I have problems exporting a bucket that really does exist. I have tried
Path = "/t
When this happens, I see this log line from the rgw component in the FSAL:
2018-02-13 12:24:15.434086 7ff4e2ffd700 0 lookup_handle handle lookup
failed <13234489286997512229,9160472602707183340>(need persistent handles)
For a short time, I cannot stat the mentioned directories. After a
minut
This was fixed on next (for 2.6, currently in -rc1) but not backported
to 2.5.
Daniel
On 01/09/2018 12:41 PM, Marc Roos wrote:
The script has not been adapted for this - at the end
http://download.ceph.com/nfs-ganesha/rpm-V2.5-stable/luminous/x86_64/
nfs-ganesha-rgw-2.5.4-.el7.x86_64.rp
CephFS does have repair tools, but I wouldn't jump the gun; your metadata
pool is probably fine. Unless you're getting health errors or seeing errors
in your MDS log?
Are you exporting a fuse or kernel mount with Ganesha (i.e. using the VFS
FSAL) or using the Ceph FSAL? Have you tried any tests dire
On Fri, Nov 4, 2016 at 2:14 AM, 于 姜 wrote:
> ceph version 10.2.3
> ubuntu 14.04 server
> nfs-ganesha 2.4.1
> ntirpc 1.4.3
>
> cmake -DUSE_FSAL_RGW=ON ../src/
>
> -- Found rgw libraries: /usr/lib
> -- Could NOT find RGW: Found unsuitable version ".", but required is at
> least "1.1" (found /usr)
>
Hi John,
> Exporting kernel client mounts with the kernel NFS server is tested as
> part of the regular testing we do on CephFS, so you should find it
> pretty stable. This is definitely a legitimate way of putting a layer
> of security between your application servers and your storage cluster.
>
I really think that doing async on big production environments is a no-go.
But it could very well explain the issues.
Last week I started testing Ganesha, and so far the results look promising.
Jan Hugo.
On 09/07/2016 06:31 PM, David wrote:
> I have clients accessing CephFS over nfs (kernel nfs). I
Hi Sean,
Thanks for the advice. I'm currently looking at it. First results are
promising.
Jan Hugo
On 09/07/2016 04:48 PM, Sean Redmond wrote:
> Have you seen this :
>
> https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH
>
>
--
Met vriendelijke groet / Best regards,
Jan Hugo Pr
Based on the advice of some people on this list I have started testing
Ganesha-NFS in combination with Ceph. First results are very good and
the product looks promising. When I want to use this I need to create a
setup where different systems can mount different parts of the tree. How
do I configur
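With nfs-ganesha this is typically done with one EXPORT block per subtree,
optionally restricted to particular clients via a CLIENT sub-block; a sketch
for the CephFS FSAL, with paths and networks as placeholders:

EXPORT
{
    Export_ID = 1;
    Path = "/projects/a";
    Pseudo = "/a";
    Access_Type = RW;
    Protocols = 4;
    FSAL { Name = CEPH; }
    CLIENT { Clients = 192.168.10.0/24; Access_Type = RW; }
}

EXPORT
{
    Export_ID = 2;
    Path = "/projects/b";
    Pseudo = "/b";
    Access_Type = RW;
    Protocols = 4;
    FSAL { Name = CEPH; }
    CLIENT { Clients = 192.168.20.0/24; Access_Type = RW; }
}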
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use-cases I'm currently testing is the possibility to replace
> a NFS storage cluster using a Ceph cluster.
>
> The idea I have is to use a server as an intermediate gateway. On the
> client side it will expose a NFS share
I have clients accessing CephFS over nfs (kernel nfs). I was seeing slow
writes with sync exports. I haven't had a chance to investigate and in the
meantime I'm exporting with async (not recommended, but acceptable in my
environment).
I've been meaning to test out Ganesha for a while now
@Sean, h
Have you seen this :
https://github.com/nfs-ganesha/nfs-ganesha/wiki/Fsalsupport#CEPH
On Wed, Sep 7, 2016 at 3:30 PM, jan hugo prins wrote:
> Hi,
>
> One of the use-cases I'm currently testing is the possibility to replace
> a NFS storage cluster using a Ceph cluster.
>
> The idea I have is to
I didn't read the whole thing, but if you're trying to do HA NFS, you need to run
OCFS2 on your RBD and disable read/write caching on the RBD client.
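On the caching point: the librbd client-side cache is controlled from ceph.conf
on the gateway hosts; a minimal sketch (section and option as commonly used -
adjust to the client id your iSCSI target runs as):

[client]
rbd cache = false    # disable librbd read/write caching on the gateway nodes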
From: "Steve Anthony"
To: ceph-users@lists.ceph.com
Sent: Friday, December 25, 2015 12:39:01 AM
Subject: Re: [ceph-user
I've run into many problems trying to run RBD/NFS Pacemaker like you
describe on a two node cluster. In my case, most of the problems were a
result of a) no quorum and b) no STONITH. If you're going to be running
this setup in production, I *highly* recommend adding more nodes (if
only to maintain
Christian Schnidrig writes:
> Well that’s strange. I wonder why our systems behave so differently.
One point about our cluster (I work with Christian, who's still on
vacation, and Jens-Christian) is that it has 124 OSDs and 2048 PGs (I
think) in the pool used for these RBD volumes. As a result, e
Trent Lloyd writes:
> Jens-Christian Fischer writes:
>>
>> I think we (i.e. Christian) found the problem:
>> We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as
> he hit all disks, we started to experience these 120 second timeouts. We
> realized that the QEMU process on
Hi George
In order to experience the error it was enough to simply run mkfs.xfs on all
the volumes.
In the meantime it became clear what the problem was:
~ ; cat /proc/183016/limits
...
Max open files            1024                 4096                 files
...
This can be changed by settin
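When the VMs are managed by libvirt, the usual place to raise the per-process
open-file limit for QEMU is libvirt's qemu.conf; a sketch, with the value
illustrative and the libvirt service name varying by distro:

# /etc/libvirt/qemu.conf
max_files = 32768

# restart libvirtd, then power-cycle the guests so QEMU picks up the new limit
systemctl restart libvirtd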
Hi George
Well that’s strange. I wonder why our systems behave so differently.
We’ve got:
Hypervisors running on Ubuntu 14.04.
VMs with 9 ceph volumes: 2TB each.
XFS instead of your ext4
Maybe the number of placement groups plays a major role as well. Jens-Christian
may be able to give you th
In the end this came down to one slow OSD. There were no hardware
issues, so I have to assume something gummed up during rebalancing and
peering.
I restarted the osd process after setting the cluster to noout. After
the osd was restarted the rebalance completed and the cluster returned
to heal
All,
I've tried to recreate the issue without success!
My configuration is the following:
OS (Hypervisor + VM): CentOS 6.6 (2.6.32-504.1.3.el6.x86_64)
QEMU: qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
Ceph: ceph version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047),
20x4TB OSDs equally distr
Thanks a million for the feedback Christian!
I've tried to recreate the issue with 10 RBD volumes mounted on a
single server without success!
I've issued the "mkfs.xfs" command simultaneously (or at least as fast
as I could do it in different terminals) without noticing any problems. Can
you p
To follow up on the original post,
Further digging indicates this is a problem with RBD image access and is
not related to NFS-RBD interaction as initially suspected. The nfsd is
simply hanging as a result of a hung request to the XFS file system
mounted on our RBD-NFS gateway. This hung XFS c
Jens-Christian Fischer writes:
>
> I think we (i.e. Christian) found the problem:
> We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as
he hit all disks, we started to experience these 120 second timeouts. We
realized that the QEMU process on the hypervisor is opening a
George,
I will let Christian provide you the details. As far as I know, it was enough
to just do a ‘ls’ on all of the attached drives.
we are using Qemu 2.0:
$ dpkg -l | grep qemu
ii  ipxe-qemu    1.0.0+git-2013.c3d1e78-2ubuntu1    all    PXE boot firmware - ROM
Jens-Christian,
how did you test that? Did you just try to write to them
simultaneously? Any other tests that one can perform to verify that?
In our installation we have a VM with 30 RBD volumes mounted which are
all exported via NFS to other VMs.
No one has complained for the moment but th
I think we (i.e. Christian) found the problem:
We created a test VM with 9 mounted RBD volumes (no NFS server). As soon as he
hit all disks, we started to experience these 120 second timeouts. We realized
that the QEMU process on the hypervisor is opening a TCP connection to every
OSD for every
Hello,
let's compare your case with John-Paul's.
Different OS and Ceph versions (thus we can assume different NFS versions
as well).
The only common thing is that both of you added OSDs and are likely
suffering from delays stemming from Ceph re-balancing or deep-scrubbing.
Ceph logs will only pi
We see something very similar on our Ceph cluster, starting as of today.
We use a 16 node, 102 OSD Ceph installation as the basis for an Icehouse
OpenStack cluster (we applied the RBD patches for live migration etc)
On this cluster we have a big ownCloud installation (Sync & Share) that stores
On 5/13/2014 9:43 AM, Andrei Mikhailovsky wrote:
Dima, do you have any examples / howtos for this? I would love to give
it a go.
Not really: I haven't done this myself. Google for "tgtd failover with
heartbeat", you should find something useful.
The setups I have are heartbeat (3.0.x) managi
Dima, do you have any examples / howtos for this? I would love to give it a go.
Cheers
- Original Message -
From: "Dimitri Maziuk"
To: ceph-users@lists.ceph.com
Sent: Monday, 12 May, 2014 3:38:11 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 5/12/20
On 05/12/2014 01:17 PM, McNamara, Bradley wrote:
> The underlying file system on the RBD needs to be a clustered file
system, like OCFS2, GFS2, etc., and a cluster between the two, or more,
iSCSI target servers needs to be created to manage the clustered file
system.
Looks like we aren't sure wha
Andrei
Mikhailovsky
Sent: Sunday, May 11, 2014 1:25 PM
To: l...@consolejunkie.net
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] NFS over CEPH - best practice
Sorry if these questions will sound stupid, but I was not able to find an
answer by googling.
1. Does iSCSI protocol support having
On Mon, May 12, 2014 at 12:08:24PM -0500, Dimitri Maziuk wrote:
> PS. (now that I looked) see e.g.
> http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/
>
>
> Dima
Didn't you say you wanted multiple servers to write to the same LUN ?
> To: ceph-users@lists.ceph.com
> Cc: "Andrei Mikhailovsky"
> Sent: Sunday, 11 May, 2014 11:41:08 PM
> Subject: Re: [ceph-users] NFS over CEPH - best practice
>
> On Sun, May 11, 2014 at 09:24:30PM +0100, Andrei Mikhailovsky wrote:
> > Sorry if these questions will sound stupid, but
PS. (now that I looked) see e.g.
http://blogs.mindspew-age.com/2012/04/05/adventures-in-high-availability-ha-iscsi-with-drbd-iscsi-and-pacemaker/
Dima
On 5/12/2014 4:52 AM, Andrei Mikhailovsky wrote:
Leen,
thanks for explaining things. It does make sense now.
Unfortunately, it does look like this technology would not fulfill my
requirements, as I do need the ability to perform maintenance
without shutting down VMs.
I've no idea how muc
for all your help
Andrei
- Original Message -
From: "Leen Besselink"
To: ceph-users@lists.ceph.com
Cc: "Andrei Mikhailovsky"
Sent: Sunday, 11 May, 2014 11:41:08 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Sun, May 11, 2014 at 09:2
ou should test when you've built the setup.
> Cheers
>
Hope that helps.
> Andrei
> - Original Message -
>
> From: "Leen Besselink"
> To: ceph-users@lists.ceph.com
> Sent: Saturday, 10 May, 2014 8:31:02 AM
> Subject: Re: [ceph-users] NFS over CEPH
possible with iscsi?
Cheers
Andrei
- Original Message -
From: "Leen Besselink"
To: ceph-users@lists.ceph.com
Sent: Saturday, 10 May, 2014 8:31:02 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote:
> From: "Leen Besselink"
> To: ceph-users@lists.ceph.com
> Sent: Thursday, 8 May, 2014 9:35:21 PM
> Subject: Re: [ceph-users] NFS over CEPH - best practice
>
> On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote:
> > On 07/05/2014 15:23, Vlad Gorbunov wrote:
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Andrei
Mikhailovsky
Sent: 09 May 2014 12:38
To: l...@consolejunkie.net
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] NFS over CEPH - best practice
Ideally I would like to have a setup with 2+ iscsi servers, so that I
mounted on several servers.
Would the suggested setup not work for my requirements?
Andrei
- Original Message -
From: "Leen Besselink"
To: ceph-users@lists.ceph.com
Sent: Thursday, 8 May, 2014 9:35:21 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On T
2014 12:26:17 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 07/05/14 19:46, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I would like to offer NFS service to the XenServer and VMWare
> hypervisors for storing vm images. I am currently running ceph rbd with
On 07/05/14 19:46, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I would like to offer NFS service to the XenServer and VMWare
> hypervisors for storing vm images. I am currently running ceph rbd with
> kvm, which is working reasonably well.
>
> What would be the best way of running NFS services o
On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote:
> On 07/05/2014 15:23, Vlad Gorbunov wrote:
> >It's easy to install tgtd with ceph support. ubuntu 12.04 for example:
> >
> >Connect ceph-extras repo:
> >echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release
> >-sc) ma
On 07/05/2014 15:23, Vlad Gorbunov wrote:
It's easy to install tgtd with ceph support. ubuntu 12.04 for example:
Connect ceph-extras repo:
echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release
-sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list
Install tgtd with rbd
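From there, installing tgt and declaring a target backed directly by an RBD
image looks roughly like this; the IQN, pool and image names are placeholders,
and the "bs-type rbd" backing store requires a tgt build with Ceph support such
as the one from ceph-extras:

sudo apt-get update && sudo apt-get install tgt

# /etc/tgt/conf.d/ceph-rbd.conf
<target iqn.2014-05.com.example:rbd-vol1>
    driver iscsi
    bs-type rbd
    backing-store rbd/vol1    # pool/image
</target>

sudo service tgt restart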
"
> To: "Sergey Malinin"
> Cc: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com
> Sent: Wednesday, 7 May, 2014 2:23:52 PM
>
> Subject: Re: [ceph-users] NFS over CEPH - best practice
>
> It's easy to install tgtd with ceph support. ubuntu 12.04 for
From: "Vlad Gorbunov"
To: "Sergey Malinin"
Cc: "Andrei Mikhailovsky" , ceph-users@lists.ceph.com
Sent: Wednesday, 7 May, 2014 2:23:52 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
It's easy to install tgtd with ceph support. ubuntu 12.04 for example:
Conne
is there a howto somewhere describing the steps on how to setup iscsi
multipathing over ceph? It looks like a good alternative to nfs
Thanks
From: "Vlad Gorbunov"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Wednesday, 7 May, 2014 12:02:09 PM
Subject: Re: [ce
ke a good alternative to nfs
>
> Thanks
>
> From: "Vlad Gorbunov" mailto:vadi...@gmail.com)>
> To: "Andrei Mikhailovsky" mailto:and...@arhont.com)>
> Cc: ceph-users@lists.ceph.com (mailto:ceph-users@lists.ceph.com)
> Sent: Wednesday, 7 May, 2014 12:02:09 PM
sday, 7 May, 2014 12:02:09 PM
Subject: Re: [ceph-users] NFS over CEPH - best practice
For XenServer or VMware it is better to use an iSCSI client against tgtd built with Ceph
support. You can install tgtd on an OSD or monitor server and use multipath for
failover.
On Wed, May 7, 2014 at 9:47 PM, Andre
I am surprised that CephFS isn't proposed as an option, in the way it
removes the not-negligible block storage layer from the picture. I
always feel uncomfortable stacking storage technologies or file systems
(here NFS over XFS over iSCSI over RBD over RADOS) and try to stay as
close as possible to the "KIS
For XenServer or VMware it is better to use an iSCSI client against tgtd built with Ceph
support. You can install tgtd on an OSD or monitor server and use multipath for
failover.
On Wed, May 7, 2014 at 9:47 PM, Andrei Mikhailovsky
wrote:
> Hello guys,
> I would like to offer NFS service to the XenServer and VMWa
> From: "Wido den Hollander"
> To: ceph-users@lists.ceph.com
> Sent: Wednesday, 7 May, 2014 11:15:39 AM
> Subject: Re: [ceph-users] NFS over CEPH - best practice
>
> On 05/07/2014 11:46 AM, Andrei Mikhailovsky wrote:
> > Hello guys,
> >
> > I would like
ssage -
From: "Wido den Hollander"
To: ceph-users@lists.ceph.com
Sent: Wednesday, 7 May, 2014 11:15:39 AM
Subject: Re: [ceph-users] NFS over CEPH - best practice
On 05/07/2014 11:46 AM, Andrei Mikhailovsky wrote:
> Hello guys,
>
> I would like to offer NFS service to