On Thu, 30 Oct 2014, Florian Haas wrote:
> Hi Sage,
>
> sorry to be late to this thread; I just caught this one as I was
> reviewing the Giant release notes. A few questions below:
>
> On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil wrote:
> > [...]
> > * ACLs: implemented, tested for kernel client.
On Thu, Oct 30, 2014 at 10:55 AM, Florian Haas wrote:
>> * ganesha NFS integration: implemented, no test coverage.
>
> I understood from a conversation I had with John in London that
> flock() and fcntl() support had recently been added to ceph-fuse, can
> this be expected to Just Work™ in Ganesha
Hi Sage,
sorry to be late to this thread; I just caught this one as I was
reviewing the Giant release notes. A few questions below:
On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil wrote:
> [...]
> * ACLs: implemented, tested for kernel client. not implemented for
> ceph-fuse.
> [...]
> * samba VFS
On 10/15/2014 08:43 AM, Amon Ott wrote:
On 14.10.2014 16:23, Sage Weil wrote:
On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
*
For the humble ceph user that I am, it is really hard to follow which
version of which product will get the changes I require.
Let me explain myself. I use ceph in my company, which is specialised in
disk recovery; my company needs a flexible, easy to maintain, trustworthy
way to store the data from the disks
On Wed, 15 Oct 2014, Amon Ott wrote:
> On 15.10.2014 14:11, Ric Wheeler wrote:
> > On 10/15/2014 08:43 AM, Amon Ott wrote:
> >> On 14.10.2014 16:23, Sage Weil wrote:
> >>> On Tue, 14 Oct 2014, Amon Ott wrote:
> On 13.10.2014 20:16, Sage Weil wrote:
> > We've been doing a lot of work
On 15.10.2014 14:11, Ric Wheeler wrote:
> On 10/15/2014 08:43 AM, Amon Ott wrote:
>> On 14.10.2014 16:23, Sage Weil wrote:
>>> On Tue, 14 Oct 2014, Amon Ott wrote:
On 13.10.2014 20:16, Sage Weil wrote:
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
* Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse
or libcephfs) clients are in good working order.
Thanks for all the work and speci
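(For reference, mounting CephFS with either client typically looks like the
sketch below; the monitor address, mount point, and key paths are placeholders,
not values from this thread.)

  # Kernel client (kernel 3.17 or later):
  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

  # Userspace client (ceph-fuse), reading /etc/ceph/ceph.conf by default:
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs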
On 14.10.2014 16:23, Sage Weil wrote:
> On Tue, 14 Oct 2014, Amon Ott wrote:
>> On 13.10.2014 20:16, Sage Weil wrote:
>>> We've been doing a lot of work on CephFS over the past few months. This
>>> is an update on the current state of things as of Giant.
>> ...
>>> * Either the kernel client (k
This sounds like any number of readdir bugs that Zheng has fixed over the
last 6 months.
sage
On Tue, 14 Oct 2014, Alphe Salas wrote:
> Hello Sage, last time I used CephFS it had a strange behaviour when used in
> conjunction with an NFS re-share of the CephFS mount point: I experienced a
> p
Hello Sage, last time I used CephFS it had a strange behaviour when
used in conjunction with an NFS re-share of the CephFS mount point: I
experienced a partial, random disappearance of the tree folders.
According to people on the mailing list it was a kernel module bug (not
using ceph-fuse) do
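(For context: an NFS re-share here means exporting an already-mounted CephFS
path again through the Linux kernel NFS server. A minimal /etc/exports line for
that kind of setup might look like the sketch below; the path, client network,
and fsid value are illustrative placeholders, not taken from this thread. An
explicit fsid= is generally needed when exporting a filesystem, such as a FUSE
mount, that has no stable device number.)

  /mnt/cephfs  192.168.0.0/24(rw,no_subtree_check,fsid=101)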
On Tue, 14 Oct 2014, Amon Ott wrote:
> On 13.10.2014 20:16, Sage Weil wrote:
> > We've been doing a lot of work on CephFS over the past few months. This
> > is an update on the current state of things as of Giant.
> ...
> > * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse
On 13.10.2014 20:16, Sage Weil wrote:
> We've been doing a lot of work on CephFS over the past few months. This
> is an update on the current state of things as of Giant.
...
> * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse
> or libcephfs) clients are in good working
On Tue, 14 Oct 2014, Thomas Lemarchand wrote:
> Thanks for this information.
>
> I plan to use CephFS on Giant, with production workload, knowing the
> risks and having a hot backup near. I hope to be able to provide useful
> feedback.
>
> My cluster is made of 7 servers (3mon, 3osd (27 osd in
Thanks for this information.
I plan to use CephFS on Giant, with production workload, knowing the
risks and having a hot backup near. I hope to be able to provide useful
feedback.
My cluster is made of 7 servers (3mon, 3osd (27 osd inside), 1mds). I
use ceph-fuse on clients.
You wrote about h
On 10/13/2014 4:56 PM, Sage Weil wrote:
On Mon, 13 Oct 2014, Eric Eastman wrote:
I would be interested in testing the Samba VFS and Ganesha NFS integration
with CephFS. Are there any notes on how to configure these two interfaces
with CephFS?
For ganesha I'm doing something like:
FSAL
{
CE
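(For reference, a minimal sketch of a Ganesha export using the Ceph FSAL might
look like the following. The export ID, Path, and Pseudo values are
illustrative placeholders, not Sage's actual configuration, and option names
should be checked against the ganesha.conf documentation for your Ganesha
version.)

  EXPORT
  {
      Export_ID = 1;
      Path = "/";
      Pseudo = "/cephfs";
      Access_Type = RW;
      FSAL {
          Name = CEPH;
      }
  }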
On Mon, 13 Oct 2014, Eric Eastman wrote:
> I would be interested in testing the Samba VFS and Ganesha NFS integration
> with CephFS. Are there any notes on how to configure these two interfaces
> with CephFS?
For samba, based on
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/samba.py#L1
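(For reference, the Samba side is usually wired up per share in smb.conf via
the ceph VFS module; a minimal sketch follows. The share name, path, and cephx
user are placeholders, and the ceph:* option names should be checked against
the vfs_ceph man page for your Samba version.)

  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      read only = no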
I would be interested in testing the Samba VFS and Ganesha NFS
integration with CephFS. Are there any notes on how to configure these
two interfaces with CephFS?
Eric
We've been doing a lot of work on CephFS over the past few months.
This
is an update on the current state of things as of G
On Mon, 13 Oct 2014, Wido den Hollander wrote:
> On 13-10-14 20:16, Sage Weil wrote:
> > With Giant, we are at a point where we would ask that everyone try
> > things out for any non-production workloads. We are very interested in
> > feedback around stability, usability, feature gaps, and performa
On 13-10-14 20:16, Sage Weil wrote:
> We've been doing a lot of work on CephFS over the past few months. This
> is an update on the current state of things as of Giant.
>
> What we've been working on:
>
> * better mds/cephfs health reports to the monitor
> * mds journal dump/repair tool
> * many kerne
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
What we've been working on:
* better mds/cephfs health reports to the monitor
* mds journal dump/repair tool
* many kernel and ceph-fuse/libcephfs client bug fixes
* file si