Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread Ilya Dryomov
On Tue, Jul 28, 2015 at 9:17 AM, van wrote: > Hi, list, > > I found in the ceph FAQ that the ceph kernel client should not run on > machines belonging to the ceph cluster. > As the ceph FAQ mentioned, "In older kernels, Ceph can deadlock if you try to > mount CephFS or RBD client services on the same host th

Re: [ceph-users] State of nfs-ganesha CEPH fsal

2015-07-28 Thread Burkhard Linke
Hi, On 07/27/2015 05:42 PM, Gregory Farnum wrote: On Mon, Jul 27, 2015 at 4:33 PM, Burkhard Linke wrote: Hi, the nfs-ganesha documentation states: "... This FSAL links to a modified version of the CEPH library that has been extended to expose its distributed cluster and replication facilitie

[ceph-users] wrong documentation in add or rm mons

2015-07-28 Thread Makkelie, R (ITCDCC) - KLM
I followed this documentation to add monitors to my already existing cluster with 1 mon: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ When I follow this documentation, the new monitor assimilates the old monitor, so my monitor status is gone. But when I skip the "ceph mon add

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread van
Hi, Ilya, Thanks for your quick reply. Here is the link http://ceph.com/docs/cuttlefish/faq/ , under the "HOW CAN I GIVE CEPH A TRY?" section, which talks about the old kernel stuff. By the way, what's the main reason for using kernel 4.1, is there a

Re: [ceph-users] State of nfs-ganesha CEPH fsal

2015-07-28 Thread Gregory Farnum
On Tue, Jul 28, 2015 at 8:01 AM, Burkhard Linke wrote: > Hi, > > On 07/27/2015 05:42 PM, Gregory Farnum wrote: >> >> On Mon, Jul 27, 2015 at 4:33 PM, Burkhard Linke >> wrote: >>> >>> Hi, >>> >>> the nfs-ganesha documentation states: >>> >>> "... This FSAL links to a modified version of the CEPH l

[ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
We've built a Ceph cluster: 3 mon nodes (one of them combined with the mds), 3 OSD nodes (each one has 10 OSDs + 2 SSDs for journaling), a 24-port 10G switch, 10 gigabit for the public network, 20 gigabit bonding between OSDs, Ubuntu 12.04.05, Ceph 0.87.2

Re: [ceph-users] State of nfs-ganesha CEPH fsal

2015-07-28 Thread Haomai Wang
On Tue, Jul 28, 2015 at 4:47 PM, Gregory Farnum wrote: > On Tue, Jul 28, 2015 at 8:01 AM, Burkhard Linke > wrote: >> Hi, >> >> On 07/27/2015 05:42 PM, Gregory Farnum wrote: >>> >>> On Mon, Jul 27, 2015 at 4:33 PM, Burkhard Linke >>> wrote: Hi, the nfs-ganesha documentation st

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Johannes Formann
Hello, what is the "size" parameter of your pool? Some math to show the impact: size=3 means each write is written 6 times (3 copies, first to the journal, later to disk). Calculating with 1300 MB/s "client" bandwidth that means: 3 (size) * 1300 MB/s / 6 (SSD) => 650 MB/s per SSD 3 (size) * 1300 MB/s / 30
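
For reference, that arithmetic spelled out as a small shell sketch (the 6 journal SSDs and 30 OSDs come from the cluster description above; size=3 is the assumption made in this reply, while a later mail notes the pool actually uses size=2):

    CLIENT_BW=1300   # MB/s of aggregate client traffic
    SIZE=3           # replica count assumed in this reply
    # each client byte is written SIZE times to a journal SSD and SIZE times to a data disk
    echo "$(( CLIENT_BW * SIZE / 6 ))  MB/s per journal SSD"   # ~650
    echo "$(( CLIENT_BW * SIZE / 30 )) MB/s per data disk"     # ~130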

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Karan Singh
Hi What type of clients do you have? - Are they physical Linux machines or VMs mounting Ceph RBD or CephFS? - Or are they simply OpenStack / cloud instances using Ceph as cinder volumes or something like that? - Karan - > On 28 Jul 2015, at 11:53, Shneur Zalman Mattern wrote: > > We've built Ceph

Re: [ceph-users] State of nfs-ganesha CEPH fsal

2015-07-28 Thread Burkhard Linke
Hi, On 07/28/2015 11:08 AM, Haomai Wang wrote: On Tue, Jul 28, 2015 at 4:47 PM, Gregory Farnum wrote: On Tue, Jul 28, 2015 at 8:01 AM, Burkhard Linke wrote: *snipsnap* Can you give some details on that issues? I'm currently looking for a way to provide NFS based access to CephFS to our des

Re: [ceph-users] State of nfs-ganesha CEPH fsal

2015-07-28 Thread Haomai Wang
On Tue, Jul 28, 2015 at 5:28 PM, Burkhard Linke wrote: > Hi, > > On 07/28/2015 11:08 AM, Haomai Wang wrote: >> >> On Tue, Jul 28, 2015 at 4:47 PM, Gregory Farnum wrote: >>> >>> On Tue, Jul 28, 2015 at 8:01 AM, Burkhard Linke >>> wrote: > > > *snipsnap* Can you give some details on that

[ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
Hi, Johannes (that's my grandpa's name) The size is 2, do you really think the number of replicas can increase performance? On http://ceph.com/docs/master/architecture/ it is written: "Note: Striping is independent of object replicas. Since CRUSH replicates objects across OSDs, stripes get replic

[ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
Hi, Karan! Those are physical CentOS clients mounting CephFS via the kernel module (kernel 4.1.3) Thanks >Hi > >What type of clients do you have. > >- Are they Linux physical OR VM mounting Ceph RBD or CephFS ?? >- Or they are simply openstack / cloud instances using Ceph as cinder volumes >or someth
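
For reference, a kernel CephFS mount of this kind typically looks something like the sketch below (monitor address, mount point and secretfile path are placeholders, not taken from the thread):

    # kernel client mount, e.g. on kernel 4.1.x
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret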

Re: [ceph-users] hadoop on ceph

2015-07-28 Thread Gregory Farnum
On Mon, Jul 27, 2015 at 6:34 PM, Patrick McGarry wrote: > Moving this to the ceph-user list where it has a better chance of > being answered. > > > > On Mon, Jul 27, 2015 at 5:35 AM, jingxia@baifendian.com > wrote: >> Dear , >> I have questions to ask. >> The doc says hadoop on ceph but requi

[ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
Hi, But my question is why is the speed divided between clients? And how many OSD nodes, OSD daemons, and PGs do I have to add to / remove from ceph so that each cephfs client can write at its max network speed (10Gbit/s ~ 1.2GB/s)??? From: Johannes Formann Sent: Tuesday

Re: [ceph-users] OSD RAM usage values

2015-07-28 Thread Kenneth Waegeman
On 07/17/2015 02:50 PM, Gregory Farnum wrote: On Fri, Jul 17, 2015 at 1:13 PM, Kenneth Waegeman wrote: Hi all, I've read in the documentation that OSDs use around 512MB on a healthy cluster.(http://ceph.com/docs/master/start/hardware-recommendations/#ram) Now, our OSD's are all using around

[ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
Hi! And so, by your math I need to build size = osd, 30 replicas, for my cluster of 120TB - to get my demands. And 4TB real storage capacity at a price of 3000$ per 1TB? Joke? All the best, Shneur From: Johannes Formann Sent: Tuesday, July 28, 2015 12:46 PM

Re: [ceph-users] Weird behaviour of cephfs with samba

2015-07-28 Thread Gregory Farnum
On Mon, Jul 27, 2015 at 6:25 PM, Jörg Henne wrote: > Gregory Farnum writes: >> >> Yeah, I think there were some directory listing bugs in that version >> that Samba is probably running into. They're fixed in a newer kernel >> release (I'm not sure which one exactly, sorry). > > Ok, thanks, good t

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Johannes Formann
The speed is divided because it's fair :) You have reached the limit your hardware (I guess the SSDs) can deliver. For 2 clients each doing 1200 MB/s you'll basically have to double the number of OSDs. greetings Johannes > Am 28.07.2015 um 11:56 schrieb Shneur Zalman Mattern : > > Hi, > > But my que

Re: [ceph-users] OSD RAM usage values

2015-07-28 Thread Gregory Farnum
On Tue, Jul 28, 2015 at 11:00 AM, Kenneth Waegeman wrote: > > > On 07/17/2015 02:50 PM, Gregory Farnum wrote: >> >> On Fri, Jul 17, 2015 at 1:13 PM, Kenneth Waegeman >> wrote: >>> >>> Hi all, >>> >>> I've read in the documentation that OSDs use around 512MB on a healthy >>> cluster.(http://ceph.c

[ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
Oh, now I have to cry :-) not because it's not SSDs... it's SAS2 HDDs. Because I need to build something for 140 clients... 4200 OSDs :-( Looks like I can pick up my performance with SSDs, but I need a huge capacity ~ 2PB. Perhaps a tiering cache pool can save my money, but I've read here - that it'

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread John Spray
On 28/07/15 11:17, Shneur Zalman Mattern wrote: Oh, now I've to cry :-) not because it's not SSDs... it's SAS2 HDDs Because, I need to build something for 140 clients... 4200 OSDs :-( Looks like, I can pickup my performance by SSDs, but I need a huge capacity ~ 2PB Perhaps, tiering cache po

Re: [ceph-users] Weird behaviour of cephfs with samba

2015-07-28 Thread Dzianis Kahanovich
I use cephfs over the samba vfs and have some issues. 1) If I use >1 stacked vfs (ceph & scannedonly) I have problems with file ordering, but this is solved by the "dirsort" vfs ("vfs objects = scannedonly dirsort ceph"). A single "ceph" vfs looks good too (and I use it alone for fast internal shares), but you can
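
A minimal smb.conf sketch of the stacking described here (the share name and path are made up; the ceph:config_file / ceph:user_id options are the usual vfs_ceph knobs and are assumed to apply to this Samba build):

    [cephshare]
        path = /cephfs/share
        vfs objects = scannedonly dirsort ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no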

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread Ilya Dryomov
On Tue, Jul 28, 2015 at 11:19 AM, van wrote: > Hi, Ilya, > > Thanks for your quick reply. > > Here is the link http://ceph.com/docs/cuttlefish/faq/ , under the "HOW > CAN I GIVE CEPH A TRY?” section which talk about the old kernel stuff. > > By the way, what’s the main reason of using kerne

Re: [ceph-users] Weird behaviour of cephfs with samba

2015-07-28 Thread Dzianis Kahanovich
PS: I started to use these patches with samba 4.1. IMHO some of the problems may (or must) be solved not inside the vfs code but outside, in samba itself, but I still use both in samba 4.2.3 without verification. Dzianis Kahanovich wrote: > I use cephfs over samba vfs and have some issues. > > 1) If I use

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread John Spray
On 28/07/15 11:53, John Spray wrote: On 28/07/15 11:17, Shneur Zalman Mattern wrote: Oh, now I've to cry :-) not because it's not SSDs... it's SAS2 HDDs Because, I need to build something for 140 clients... 4200 OSDs :-( Looks like, I can pickup my performance by SSDs, but I need a huge

Re: [ceph-users] Unable to create new pool in cluster

2015-07-28 Thread Daleep Bais
Dear Kefu, Thanks.. It worked.. Appreciate your help.. TC On Sun, Jul 26, 2015 at 8:06 AM, kefu chai wrote: > On Sat, Jul 25, 2015 at 9:43 PM, Daleep Bais wrote: > > Hi All, > > > > I am unable to create new pool in my cluster. I have some existing pools. > > > > I get error : > > > > ceph o

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread van
Hi, Ilya, In the dmesg there are also a lot of libceph socket errors, which I think may be caused by my stopping the ceph service without unmapping the rbd. Here is a more than 1 lines log that contains more info, http://jmp.sh/NcokrfT Thanks for being willing to help. van ch

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread Ilya Dryomov
On Tue, Jul 28, 2015 at 2:46 PM, van wrote: > Hi, Ilya, > > In the dmesg, there is also a lot of libceph socket error, which I think > may be caused by my stopping ceph service without unmap rbd. Well, sure enough, if you kill all OSDs, the filesystem mounted on top of rbd device will get stuck

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Shneur Zalman Mattern
As I understand now, in this case (30 disks) the 10Gbit network is not the bottleneck! With another HW config ( + 5 OSD nodes = + 50 disks ) I'd get 3400 MB/s, and 3 clients could work at full bandwidth, yes? OK, let's try!!! Perhaps somebody has more suggestions for increasing per

Re: [ceph-users] Did maximum performance reached?

2015-07-28 Thread Udo Lembke
Hi, On 28.07.2015 12:02, Shneur Zalman Mattern wrote: > Hi! > > And so, in your math > I need to build size = osd, 30 replicas for my cluster of 120TB - to get my > demands 30 replicas is the wrong math! Fewer replicas = more speed (because of less writing), more replicas = less speed. For data

Re: [ceph-users] OSD RAM usage values

2015-07-28 Thread Dan van der Ster
On Tue, Jul 28, 2015 at 12:07 PM, Gregory Farnum wrote: > On Tue, Jul 28, 2015 at 11:00 AM, Kenneth Waegeman > wrote: >> >> >> On 07/17/2015 02:50 PM, Gregory Farnum wrote: >>> >>> On Fri, Jul 17, 2015 at 1:13 PM, Kenneth Waegeman >>> wrote: Hi all, I've read in the documenta

Re: [ceph-users] OSD RAM usage values

2015-07-28 Thread Mark Nelson
On 07/17/2015 07:50 AM, Gregory Farnum wrote: On Fri, Jul 17, 2015 at 1:13 PM, Kenneth Waegeman wrote: Hi all, I've read in the documentation that OSDs use around 512MB on a healthy cluster.(http://ceph.com/docs/master/start/hardware-recommendations/#ram) Now, our OSD's are all using around

[ceph-users] Updating OSD Parameters

2015-07-28 Thread Noah Mehl
When we update the following in ceph.conf: [osd] osd_recovery_max_active = 1 osd_max_backfills = 1 How do we make sure it takes effect? Do we have to restart all of the ceph OSDs and mons? Thanks! ~Noah ___ ceph-users mailing list ceph-users@
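
One way to see what a running OSD actually has in effect is its admin socket on the OSD host; a sketch (osd.0 is just an example id):

    # run on the host where osd.0 lives
    ceph daemon osd.0 config get osd_max_backfills
    ceph daemon osd.0 config get osd_recovery_max_active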

Re: [ceph-users] Updating OSD Parameters

2015-07-28 Thread Wido den Hollander
On 28-07-15 16:53, Noah Mehl wrote: > When we update the following in ceph.conf: > > [osd] > osd_recovery_max_active = 1 > osd_max_backfills = 1 > > How do we make sure it takes affect? Do we have to restart all of the > ceph osd’s and mon’s? On a client with client.admin keyring you exec

Re: [ceph-users] why are there "degraded" PGs when adding OSDs?

2015-07-28 Thread Samuel Just
If it wouldn't be too much trouble, I'd actually like the binary osdmap as well (it contains the crushmap, but also a bunch of other stuff). There is a command that lets you get old osdmaps from the mon by epoch as long as they haven't been trimmed. -Sam - Original Message - From: "Cha

Re: [ceph-users] Updating OSD Parameters

2015-07-28 Thread Noah Mehl
Wido, That’s awesome, I will look at this right now. Thanks! ~Noah > On Jul 28, 2015, at 11:02 AM, Wido den Hollander wrote: > > > > On 28-07-15 16:53, Noah Mehl wrote: >> When we update the following in ceph.conf: >> >> [osd] >> osd_recovery_max_active = 1 >> osd_max_backfills = 1 >> >

Re: [ceph-users] Ceph 0.94 (and lower) performance on >1 hosts ??

2015-07-28 Thread SCHAER Frederic
Hi again, So I have tried - changing the CPU frequency: either 1.6GHz or 2.4GHz on all cores - changing the memory configuration from "advanced ecc mode" to "performance mode", boosting the memory bandwidth from 35GB/s to 40GB/s - plugging in a second 10Gb/s link and setting up a ceph internal networ
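
For the second link, the usual public/cluster network split in ceph.conf looks roughly like this (the subnets are placeholders):

    [global]
        public network  = 192.168.1.0/24   # client-facing traffic
        cluster network = 192.168.2.0/24   # replication / backfill between OSDs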

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread van
> On Jul 28, 2015, at 7:57 PM, Ilya Dryomov wrote: > > On Tue, Jul 28, 2015 at 2:46 PM, van wrote: >> Hi, Ilya, >> >> In the dmesg, there is also a lot of libceph socket error, which I think >> may be caused by my stopping ceph service without unmap rbd. > > Well, sure enough, if you kill al

Re: [ceph-users] which kernel version can help avoid kernel client deadlock

2015-07-28 Thread Ilya Dryomov
On Tue, Jul 28, 2015 at 7:20 PM, van wrote: > >> On Jul 28, 2015, at 7:57 PM, Ilya Dryomov wrote: >> >> On Tue, Jul 28, 2015 at 2:46 PM, van wrote: >>> Hi, Ilya, >>> >>> In the dmesg, there is also a lot of libceph socket error, which I think >>> may be caused by my stopping ceph service withou

[ceph-users] RadosGW - radosgw-agent start error

2015-07-28 Thread Italo Santos
Hello everyone, I'm setting up a federated configuration of radosgw, but when I start the radosgw-agent I get the error below and I'd like to know if I'm doing something wrong…? See the error: root@cephgw0001:~# radosgw-agent -v -c /etc/ceph/radosgw-agent/default.conf 2015-07-28 17:02:03

[ceph-users] Configuring MemStore in Ceph

2015-07-28 Thread Aakanksha Pudipeddi-SSI
Hello, I am trying to set up a ceph cluster with a memstore backend. The problem is, it is always created with a fixed size (1GB). I made changes to the ceph.conf file as follows: osd_objectstore = memstore memstore_device_bytes = 5*1024*1024*1024 The resultant cluster still has 1GB allocated t
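
As an aside, it is not certain the config parser evaluates arithmetic expressions like 5*1024*1024*1024, so a literal byte count is the safer sketch (5 GiB written out):

    [osd]
        osd objectstore = memstore
        memstore device bytes = 5368709120   # 5 GiB as a plain integer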

Re: [ceph-users] Updating OSD Parameters

2015-07-28 Thread Nikhil Mitra (nikmitra)
I believe you can use ceph tell to inject it into a running cluster. From your admin node you should be able to run: ceph tell osd.* injectargs "--osd_recovery_max_active 1 --osd_max_backfills 1" Regards, Nikhil Mitra From: ceph-users on behalf of Noa
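
Spelled out as a shell command (single quotes stop the shell from touching the glob and keep the flags together; the values are the ones from the original mail):

    ceph tell 'osd.*' injectargs '--osd-recovery-max-active 1 --osd-max-backfills 1'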

Re: [ceph-users] Configuring MemStore in Ceph

2015-07-28 Thread Haomai Wang
Which version do you use? https://github.com/ceph/ceph/commit/c60f88ba8a6624099f576eaa5f1225c2fcaab41a should fix your problem On Wed, Jul 29, 2015 at 5:44 AM, Aakanksha Pudipeddi-SSI wrote: > Hello, > > > > I am trying to setup a ceph cluster with a memstore backend. The problem is, > it is alw

Re: [ceph-users] Configuring MemStore in Ceph

2015-07-28 Thread Aakanksha Pudipeddi-SSI
Hello Haomai, I am using v0.94.2. Thanks, Aakanksha -Original Message- From: Haomai Wang [mailto:haomaiw...@gmail.com] Sent: Tuesday, July 28, 2015 7:20 PM To: Aakanksha Pudipeddi-SSI Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] Configuring MemStore in Ceph Which version do you us

Re: [ceph-users] Configuring MemStore in Ceph

2015-07-28 Thread Haomai Wang
On Wed, Jul 29, 2015 at 10:21 AM, Aakanksha Pudipeddi-SSI wrote: > Hello Haomai, > > I am using v0.94.2. > > Thanks, > Aakanksha > > -Original Message- > From: Haomai Wang [mailto:haomaiw...@gmail.com] > Sent: Tuesday, July 28, 2015 7:20 PM > To: Aakanksha Pudipeddi-SSI > Cc: ceph-us...@ce