Hi Sage,
a couple of months ago (maybe last year) I was able to change the
assignment of directories and files in CephFS to different pools
back and forth (with 'cephfs set_layout' as well as with setfattr).
Now (with Ceph v0.81 and kernel 3.10 on the client side)
neither 'cephfs set_layout' nor
See
>
>
> https://github.com/ceph/ceph/blob/master/qa/workunits/fs/misc/layout_vxattrs.sh
>
> The nice part about this interface is that no new tools are necessary (just
> the standard 'attr' or 'setfattr' commands) and it is the same with both
> ceph-fuse
Hi Sage,
it seems the pools must be added to the MDS first:
ceph mds add_data_pool 3    # pool 3 = SSD-r2
ceph mds add_data_pool 4    # pool 4 = SAS-r2
After these commands the "setfattr -n ceph.dir.layout.pool" worked.
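For completeness, the setfattr side of it looks roughly like this (the mount
point and directory below are placeholders from my setup):

    # point a directory at the SSD pool; new files below it go to that pool
    setfattr -n ceph.dir.layout.pool -v SSD-r2 /mnt/cephfs/ssd-data

    # verify the layout that was set
    getfattr -n ceph.dir.layout /mnt/cephfs/ssd-data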
Thanks,
-Dieter
On Mon, Aug 18, 2014 at 10:19:08PM +0200, Kasper Dieter wrote:
Hi Sébastien,
On Thu, Aug 28, 2014 at 06:11:37PM +0200, Sebastien Han wrote:
> Hey all,
(...)
> We have been able to reproduce this on 3 distinct platforms with some
> deviations (because of the hardware) but the behaviour is the same.
> Any thoughts would be highly appreciated, only getting 3.2k
Hi Xiaoxi,
we are really running Ceph on CentOS-6.4
(6 server nodes, 3 client nodes, 160 OSDs).
We put a 3.8.13 kernel on top and installed the ceph-0.61.4 cluster with
mkcephfs,
because ceph-deploy still seems to be very buggy and has big dependencies on
the newest Python.
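(For reference, a sketch of a standard mkcephfs invocation; the config and
keyring paths below are only placeholders:)

    # on the admin node, with all hosts and OSDs described in ceph.conf
    mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring
    service ceph -a start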
ceph.ko
rbd.ko
and
> -----Original Message-----
> From: ceph-devel-ow...@vger.kernel.org
> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Kasper Dieter
> Sent: Wednesday, July 17, 2013 2:17 PM
> To: Chen, Xiaoxi
> Cc: ceph-de...@vger.kernel.org; ceph-us...@ceph.com
> Subject: Re: Any concern about Ceph on CentOS
>
subscribe
Thanks,
Dieter
On Mon, Aug 12, 2013 at 03:19:04PM +0200, Jeff Moskow wrote:
> Hi,
>
> The activity on our ceph cluster has gone up a lot. We are using exclusively
> RBD storage right now.
>
> Is there a tool/technique that could be used to find out which rbd images are
> receiving the most activity (something ...
On Thu, Aug 22, 2013 at 03:32:35PM +0200, raj kumar wrote:
>The Ceph cluster is running fine on CentOS 6.4.
>
>Now I would like to export the block device to a client using rbd.
>
>my question is,
>
>Raj
>
>
>
>On Fri, Aug 23, 2013 at 4:03 PM, Kasper Dieter wrote:
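(A sketch of the usual way to expose an RBD image to a client; pool name,
image name and size are placeholders, and the client needs a keyring for the
given user:)

    # on a node with an admin keyring: create the image (size in MB)
    rbd create vol1 --pool mypool --size 10240

    # on the client: load the kernel module and map the image
    modprobe rbd
    rbd map vol1 --pool mypool --id admin

    # the image now shows up as a block device, e.g. /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt/vol1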
On Wed, Aug 28, 2013 at 04:24:59PM +0200, Gandalf Corvotempesta wrote:
> 2013/6/20 Matthew Anderson :
> > Hi All,
> >
> > I've had a few conversations on IRC about getting RDMA support into Ceph and
> > thought I would give it a quick attempt to hopefully spur some interest.
> > What I would like t
Hi,
under
http://eu.ceph.com/docs/wip-rpm-doc/config-cluster/rbd-config-ref/
I found a good description of the RBD cache parameters.
But I am missing two pieces of information:
- by which component are these parameters evaluated, and
- when does this happen?
My assumption:
- the rbd_cache* parameters will be read by the MONs
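(To make the discussion concrete, this is roughly how these parameters appear
in ceph.conf; the [client] section is from the docs and the values are only
examples near the documented defaults. As far as I know they are read
client-side by librbd when an image is opened:)

    [client]
        rbd cache = true
        rbd cache size = 33554432           # 32 MiB
        rbd cache max dirty = 25165824      # 24 MiB
        rbd cache target dirty = 16777216   # 16 MiB
        rbd cache max dirty age = 1.0       # seconds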
Hi Greg,
on http://comments.gmane.org/gmane.comp.file-systems.ceph.user/1705
I found a statement from you regarding snapshots on CephFS:
---snip---
Filesystem snapshots exist and you can experiment with them on CephFS
(there's a hidden ".snaps" folder; you can create or remove snapshots
by creating ...
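(A minimal illustration of that mechanism, assuming a CephFS mount under
/mnt/cephfs and the default snapshot directory name '.snap':)

    # create a snapshot of /mnt/cephfs/mydir
    mkdir /mnt/cephfs/mydir/.snap/before-upgrade

    # list and remove snapshots of that directory
    ls    /mnt/cephfs/mydir/.snap
    rmdir /mnt/cephfs/mydir/.snap/before-upgrade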
On Wed, Nov 27, 2013 at 04:34:00PM +0100, Gregory Farnum wrote:
> On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson wrote:
> > On 11/27/2013 09:25 AM, Gregory Farnum wrote:
> >>
> >> On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
> >> wrote:
>
> The largest group of threads is those
On Thu, Sep 18, 2014 at 03:36:48PM +0200, Alexandre DERUMIER wrote:
> >>Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
>
> I know that Stefan Priebe runs full SSD clusters in production, and has done
> benchmarks.
> (As far as I remember, he has benched around 20k peak w
On Wed, Sep 24, 2014 at 08:49:21PM +0200, Alexandre DERUMIER wrote:
> >>What about writes with Giant?
>
> I'm around
> - 4k iops (4k random) with 1osd (1 node - 1 osd)
> - 8k iops (4k random) with 2 osd (1 node - 2 osd)
> - 16K iops (4k random) with 4 osd (2 nodes - 2 osd by node)
> - 22K iops
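(For comparability, the kind of fio job typically used for such 4k random-write
numbers against an RBD image; pool and image names are placeholders and the rbd
ioengine has to be available in the fio build:)

    fio --name=4k-randwrite --ioengine=rbd \
        --clientname=admin --pool=rbd --rbdname=test1 \
        --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
        --runtime=60 --time_based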
On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote:
> On 09/29/2014 03:58 AM, Dan Van Der Ster wrote:
> > Hi Emmanuel,
> > This is interesting, because we've had sales guys telling us that those
> > Samsung drives are definitely the best for a Ceph journal O_o !
>
> Our sales guys or Sam
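(Independent of the sales pitch, a quick way to check whether an SSD is usable
as a Ceph journal is a small O_DSYNC write test directly against the device;
/dev/sdX is a placeholder and the command overwrites data on it:)

    # sequential 4k writes with dsync, roughly the journal write pattern
    dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync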
When using "rbd create ... --image-format 2", in some cases this command is
rejected with EINVAL and the message "librbd: STRIPINGV2 and format 2 or later
required for non-default striping".
But in v0.61.9 "STRIPINGV2 and format 2" should be supported:
[root@rx37-3 ~]# rbd create --pool SSD-r2 --size
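(For context, a sketch of the two variants; the size and the second image name
are placeholders:)

    # plain format-2 image with default striping
    rbd create --pool SSD-r2 --size 10240 --image-format 2 t2-1

    # non-default striping additionally requires the STRIPINGV2 feature
    rbd create --pool SSD-r2 --size 10240 --image-format 2 \
        --stripe-unit 65536 --stripe-count 4 t2-2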
type 2 images.
>
>On Tue, Mar 11, 2014 at 7:16 PM, Kasper Dieter wrote:
>
utilities provide you create images on RADOS as
>block storage.
>
>On Tue, Mar 11, 2014 at 7:37 PM, Kasper Dieter wrote:
> >> wrote:
> >>
> >>> So, should I open a bug report?
> >>>
> >>> STRIPINGV2 feature was added in Ceph v0.53, and I'm running v0.61 and
> >>> using '--image-format 2' during 'rbd create'
;192.168.113.13:6789,192.168.113.14:6789,192.168.113.15:6789
name=admin,key=client.admin SSD-r2 t2-1", 99) = -1 EINVAL (Invalid argument)
-Dieter
From: Michael J. Kidd [mailto:michael.k...@inktank.com]
Sent: Wednesday, March 12, 2014 12:52 PM
To: Kasper, Dieter
Cc: Pankaj Laddha; Jean-Charle
Please see this email on ceph-devel:
---snip---
Date: Thu, 15 Aug 2013 14:30:24 +0200
From: Damien Churchill
To: "Kasper, Dieter"
CC: "ceph-de...@vger.kernel.org"
Subject: Re: rbd: format 2 support in rbd.ko ?
On 15 August 2013 12:42, Kasper Dieter wrote:
> When will ...
We have observed a very similar behavior.
In a 140-OSD cluster (newly created and idle) ~8000 PGs are available.
After adding two new pools (each with 2 PGs)
100 out of 140 OSDs are going down + out.
The cluster never recovers.
This problem can be reproduced every time with v0.67 and v0.72.
Wit
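(The reproduction itself is just the usual pool creation; pool names and PG
counts below are placeholders:)

    ceph osd pool create testpool-1 2048 2048
    ceph osd pool create testpool-2 2048 2048
    ceph -w     # shortly afterwards, OSDs are marked down and out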
On Thu, Mar 13, 2014 at 11:16:45AM +0100, Gandalf Corvotempesta wrote:
> 2014-03-13 10:53 GMT+01:00 Kasper Dieter :
> > After adding two new pools (each with 2 PGs)
> > 100 out of 140 OSDs are going down + out.
> > The cluster never recovers.
>
> In my case, clust
Hi Sage,
I'm a little bit confused about 'ceph-deploy' in 0.61:
. the 0.61 release notes say: "ceph-deploy: our new deployment tool to replace
'mkcephfs'"
. http://ceph.com/docs/master/rados/deployment/mkcephfs/
says "To deploy a test or development cluster, you can use the mkcephfs
tool
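(For reference, the ceph-deploy flow that is supposed to replace mkcephfs, as
far as I understand it; hostnames and disks are placeholders:)

    ceph-deploy new mon1 mon2 mon3
    ceph-deploy install mon1 mon2 mon3 osd1 osd2
    ceph-deploy mon create mon1 mon2 mon3
    ceph-deploy gatherkeys mon1
    ceph-deploy osd create osd1:sdb osd1:sdc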
On Wed, May 15, 2013 at 05:48:22PM +0200, Sage Weil wrote:
> On Wed, 15 May 2013, Kasper Dieter wrote:
> > Hi Sage,
> > I'm a little bit confused about 'ceph-deploy' in 0.61:
> >
> > . the 0.61 release note says: "ceph-deploy: our new deploym