Re: [ceph-users] SSD MTBF

2014-09-30 Thread Kingsley Tart
On Tue, 2014-09-30 at 00:30 +0900, Christian Balzer wrote:
> On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote:
> 
> > On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote:
> > > 
> > > Given your SSDs, are they failing after more than 150TB have been
> > > written?
> > 
> > between 30 and 40 TB ...
> > 
> That's low. One wonders what is going on here, Samsung being overly
> optimistic or something else...

This isn't something I know much about, so please do correct me if I'm
wrong, but might this be something to do with the actual data size
versus the block size written on the SSD (i.e. write amplification)?
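
I believe the drive itself keeps counters for how much has actually been
written to the flash, so it might be worth comparing that against what
the host thinks it has written. Something like this should show it,
though I'm going from memory and the SMART attribute names vary by
vendor, so treat it as a rough sketch:

  # attribute names/IDs differ between vendors (241, 177, 233 are common)
  smartctl -A /dev/sda | \
      grep -i -E 'Total_LBAs_Written|Wear_Leveling_Count|Media_Wearout'

Multiplying Total_LBAs_Written by the logical sector size gives the bytes
written by the drive's own reckoning, e.g. 78,000,000,000 LBAs x 512
bytes is roughly 40 TB. If that figure is much higher than the data you
think you've written, write amplification would seem to be the culprit.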

-- 
Cheers,
Kingsley.



Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Kingsley Tart
On Wed, 2017-09-06 at 15:23 +, Sage Weil wrote:
> Hi everyone,
> 
> Traditionally, we have done a major named "stable" release twice a year, 
> and every other such release has been an "LTS" release, with fixes 
> backported for 1-2 years.
> 
> With kraken and luminous we missed our schedule by a lot: instead of 
> releasing in October and April we released in January and August.
> 
> A few observations:
[snip]

Firstly, I'd like to qualify my comments by saying that I haven't yet
tried Ceph[1], though I have been loosely following its progress. This
is partly because I've been busy doing other things.

[1] OK, this is a slight fib - I had a very brief play with it a few
years back but didn't really get anywhere with it and then got diverted
onto other things.

Unless I absolutely have to deploy now, I find myself doing this:

10 not long for new release, wait a bit
20 new release is here, but there's talk of a new one
30 goto 10

Having frequent minor updates and fixes is reassuring, but frequent
major version changes, with the "L" in "LTS" not being particularly
long, tend to put me off a bit, largely because I find the thought of
upgrading something so mission-critical quite daunting. I can't speak
from any Ceph experience on this one, obviously, but if there were an
easy rollback (even if it's never needed) without having to rebuild the
entire cluster, that would make me more willing to do it.

-- 
Cheers,
Kingsley.



Re: [ceph-users] CephFS

2017-01-17 Thread Kingsley Tart
How did you find the fuse client performed?

I'm more interested in the fuse client because I'd like to use CephFS
for shared volumes, and my understanding of the kernel client is that it
uses the volume as a block device.

Cheers,
Kingsley.

On Tue, 2017-01-17 at 11:46 +, Sean Redmond wrote:
> I found the kernel clients to perform better in my case. 
> 
> 
> I ran into a couple of issues with some metadata pool corruption and
> omap inconsistencies. That said, the repair tools are useful and I
> managed to get things back up and running.
> 
> 
> The community has been very responsive to any issues I have run into,
> which really increases my confidence in any open source project.
> 
> On Tue, Jan 17, 2017 at 6:39 AM, w...@42on.com wrote:
> 
> On 17 Jan 2017, at 03:47, Tu Holmes wrote the following:
> 
> > I could use either one. I'm just trying to get a feel for
> > how stable the technology is in general. 
> > 
> 
> 
> Stable. Multiple customers of mine run it in production with the
> kernel client and serious load on it. No major problems.
> 
> 
> Wido
> 
> > On Mon, Jan 16, 2017 at 3:19 PM, Sean Redmond wrote:
> > 
> > What's your use case? Do you plan on using kernel or
> > fuse clients?
> > 
> > On 16 Jan 2017 23:03, "Tu Holmes" wrote:
> > 
> > So what's the consensus on CephFS?
> > 
> > 
> > Is it ready for prime time or not?
> > 
> > 
> > //Tu
> > 



Re: [ceph-users] CephFS

2017-01-17 Thread Kingsley Tart
On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
> I think you're confusing CephFS kernel client and RBD kernel client.
> 
> The Linux kernel contains both:
> 
> * a module ceph.ko for accessing a CephFS
> * a module rbd.ko for accessing an RBD (Rados Block Device)
> 
> You can mount a CephFS using the kernel driver [0], or using a
> userspace helper for FUSE [1].
> 
> [0] http://docs.ceph.com/docs/master/cephfs/kernel/
> [1] http://docs.ceph.com/docs/master/cephfs/fuse/

Hi,

Thanks for your reply.

I specifically didn't want a block device because I would like to mount
the same volume on multiple machines to share the files, like you would
with NFS. This is why I thought ceph-fuse would be what I needed.
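
If I've understood the docs correctly, what I had in mind is roughly the
following on each client machine (untested on my part, and the monitor
address, keyring path and mount point are just placeholders):

  # install the userspace client (Debian/Ubuntu package name),
  # then mount the filesystem via FUSE
  apt-get install ceph-fuse
  mkdir -p /mnt/cephfs
  ceph-fuse -m 10.0.0.1:6789 --id admin \
      -k /etc/ceph/ceph.client.admin.keyring /mnt/cephfs

with the same mount repeated on every machine that needs the shared
files, much as you would with NFS.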

-- 
Cheers,
Kingsley.



Re: [ceph-users] CephFS

2017-01-17 Thread Kingsley Tart
Hi,

Are these all sharing the same volume?

Cheers,
Kingsley.

On Tue, 2017-01-17 at 12:19 -0500, Alex Evonosky wrote:
> for what it's worth, I have been using CephFS shared between six
> servers (all kernel mounted) with no issues. Running three monitors
> and two metadata servers (one as backup). This has been running great.
> 
> On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart wrote:
> On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
> > I think you're confusing CephFS kernel client and RBD kernel client.
> >
> > The Linux kernel contains both:
> >
> > * a module ceph.ko for accessing a CephFS
> > * a module rbd.ko for accessing an RBD (Rados Block Device)
> >
> > You can mount a CephFS using the kernel driver [0], or a userspace
> > helper for FUSE [1].
> >
> > [0] http://docs.ceph.com/docs/master/cephfs/kernel/
> > [1] http://docs.ceph.com/docs/master/cephfs/fuse/
> 
> Hi,
> 
> Thanks for your reply.
> 
> I specifically didn't want a block device because I would like to mount
> the same volume on multiple machines to share the files, like you would
> with NFS. This is why I thought ceph-fuse would be what I needed.
> 
> --
> Cheers,
> Kingsley.
> 


Re: [ceph-users] CephFS

2017-01-17 Thread Kingsley Tart
Oh that's good. I thought the kernel clients only supported block
devices. I guess that has changed since I last looked.
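
So, if I now have this right, the two kernel clients would be used quite
differently, something like this (an untested sketch on my part; the
monitor addresses, image name and paths are just placeholders):

  # CephFS via the kernel client: a shared filesystem,
  # mountable on many hosts at once
  mount -t ceph 10.0.0.1:6789,10.0.0.2:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

  # RBD via the kernel client: a plain block device, so normally
  # a single writer unless you layer a cluster filesystem on top
  rbd map rbd/myimage --id admin
  mkfs.xfs /dev/rbd0
  mount /dev/rbd0 /mnt/rbd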

Cheers,
Kingsley.

On Tue, 2017-01-17 at 12:29 -0500, Alex Evonosky wrote:
> example:
> 
> Each server's mount looks like this:
> 
> /bin/mount -t ceph -o name=admin,secret= 10.10.10.138,10.10.10.252,10.10.10.103:/ /media/network-storage
> 
> They all point to the monitor servers.
> 
> On Tue, Jan 17, 2017 at 12:27 PM, Alex Evonosky wrote:
> Yes, they are. I created one volume shared by all the
> webservers, so essentially it is acting like a NAS using NFS.
> All servers see the same data.
> 
> On Tue, Jan 17, 2017 at 12:26 PM, Kingsley Tart wrote:
> Hi,
> 
> Are these all sharing the same volume?
> 
> Cheers,
> Kingsley.
> 
> On Tue, 2017-01-17 at 12:19 -0500, Alex Evonosky wrote:
> > for what it's worth, I have been using CephFS shared between six
> > servers (all kernel mounted) with no issues. Running three monitors
> > and two metadata servers (one as backup). This has been running great.
> >
> > On Tue, Jan 17, 2017 at 12:14 PM, Kingsley Tart wrote:
> > On Tue, 2017-01-17 at 13:49 +0100, Loris Cuoghi wrote:
> > > I think you're confusing CephFS kernel client and RBD kernel client.
> > >
> > > The Linux kernel contains both:
> > >
> > > * a module ceph.ko for accessing a CephFS
> > > * a module rbd.ko for accessing an RBD (Rados Block Device)
> > >
> > > You can mount a CephFS using the kernel driver [0], or a userspace
> > > helper for FUSE [1].
> > >
> > > [0] http://docs.ceph.com/docs/master/cephfs/kernel/
> > > [1] http://docs.ceph.com/docs/master/cephfs/fuse/
> >
> > Hi,
> >
> > Thanks for your reply.
> >
> > I specifically didn't want a block device because I would like to mount
> > the same volume on multiple machines to share the files, like you would
> > with NFS. This is why I thought ceph-fuse would be what I needed.
> >
> > --
> > Cheers,
> > Kingsley.
> >



Re: [ceph-users] CephFS

2017-01-17 Thread Kingsley Tart
On Tue, 2017-01-17 at 19:04 +0100, Ilya Dryomov wrote:
> On Tue, Jan 17, 2017 at 6:49 PM, Kingsley Tart  wrote:
> > Oh that's good. I thought the kernel clients only supported block
> > devices. I guess that has changed since I last looked.
> 
> That has always been the case -- block device support came about a year
> after the filesystem was merged into the kernel ;)

Oh interesting. In that case, is there any reason you would ever want to
use ceph-fuse (assuming that the kernel is a new enough version)?

-- 
Cheers,
Kingsley.
