[ceph-users] ceph breizh meetup

2014-06-19 Thread eric mourgaya
Hi, The meeting will be in this building arkea

[ceph-users] Installing a test ceph cluster

2014-06-19 Thread Iban Cabrillo
Hi, I am a real newbie with Ceph. I was trying to deploy a ceph-test on SL6.2; package installation was OK. I have created an initial cluster with 3 machines (cephadm, ceph02 and ceph03); passwordless SSH using the ceph user is OK, using a config file: cephcloud.conf [global] auth_service_required

Re: [ceph-users] radosgw-agent SSL certificate verify failed

2014-06-19 Thread Fabrizio G. Ventola
I've done a dirty workaround editing by hand the code into /usr/lib/python2.7/dist-packages/requests/sessions.py. Is there any more orthodox method to do it? Fabrizio On 18 June 2014 17:00, Fabrizio G. Ventola wrote: > Hi everyone, > > I'm trying to sync data and metadata between zones of differ
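
A less invasive route than editing sessions.py, assuming the gateway certificate is signed by an internal CA (the bundle path below is a placeholder): the requests library honours the REQUESTS_CA_BUNDLE environment variable, so the agent can be pointed at the CA without disabling verification.

```shell
# Sketch: export the CA bundle so the requests library verifies the
# gateway certificate against it; /path/to/ca.pem is a placeholder.
export REQUESTS_CA_BUNDLE=/path/to/ca.pem
echo "CA bundle: $REQUESTS_CA_BUNDLE"
```

This keeps certificate verification enabled instead of bypassing it in library code.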

Re: [ceph-users] [Solved] Init scripts in Debian not working

2014-06-19 Thread Dieter Scholz
Hello, at the moment I'm trying to create a small test cluster with Ceph Firefly. I followed the documentation and everything works as expected. But when I try to start the Ceph daemons on the individual OSD-machines using the init scripts nothing happens. No daemon is started. No error message o

[ceph-users] understanding rados df statistics

2014-06-19 Thread george.ryall
Hi all, I'm struggling to understand some Ceph usage statistics and I was hoping someone might be able to explain them to me. If I run 'rados df' I get the following: # rados df pool name category KB objects clones degraded unfound rd rd K

Re: [ceph-users] Some easy questions

2014-06-19 Thread Gerard Toonstra
Thanks Craig, I learned a bit more in the meantime. On Tue, Jun 17, 2014 at 3:30 PM, Craig Lewis wrote: > > 3. You must use MDS from the start, because it's a metadata > > structure/directory that only gets populated when writing files through > > cephfs / FUSE. Otherwise, it doesn't even know

Re: [ceph-users] Cache tier pool in CephFS

2014-06-19 Thread Sherry Shahbazi
Hi Greg, Thanks for your prompt reply. I would appreciate it if you could also help me with the following issues: 1) After mounting a directory to a pool called cold-pool, I started to save data through CephFS. After removing all of the created files from CephFS, I could not remove objects from the cold

Re: [ceph-users] Installing a test ceph cluster

2014-06-19 Thread Iban Cabrillo
Hi, I'll respond to myself. I was not using the default ceph.conf file; mine was cephcloud.conf, and I do not know why this new cephcloud.conf was not passed (I do not know if this could be a bug). I had to run: sudo python /usr/sbin/ceph-create-keys -v -i "node_name" --cluster cephcloud In each mach
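
As the follow-up shows, a non-default cluster name has to be passed to every tool explicitly; a minimal sketch (node name is a placeholder, and the `run` wrapper prints each command instead of executing it):

```shell
run() { echo "+ $*"; }   # dry-run wrapper: print instead of execute

# A config named cephcloud.conf implies cluster name "cephcloud",
# which each command must be told about via --cluster:
run ceph-create-keys -v -i node_name --cluster cephcloud
run ceph --cluster cephcloud -s
```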

[ceph-users] switch pool from replicated to erasure coded

2014-06-19 Thread Pavel V. Kaygorodov
Hi! Maybe I have missed something in the docs, but is there a way to switch a pool from replicated to erasure coded? Or do I have to create a new pool and somehow manually transfer data from the old pool to the new one? Pavel. ___ ceph-users mailing list ceph-users

[ceph-users] Error in documentation

2014-06-19 Thread george.ryall
Hi, I've come across an error in the Ceph documentation; what's the proper way for me to report it so that it gets fixed? (on http://ceph.com/docs/master/rados/operations/pools/#set-the-number-of-object-replicas "ceph osd pool set-quota {pool-name} [max-objects {obj-count}] [max_bytes {bytes}]
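
For context, these are the two commands the page conflates (the pool name is a placeholder, and the `run` wrapper only prints each command so nothing touches a live cluster):

```shell
run() { echo "+ $*"; }   # dry-run wrapper: print instead of execute

# Set the number of object replicas, which is what the section heading promises:
run ceph osd pool set data size 3

# Set a pool quota, the unrelated command the page currently shows:
run ceph osd pool set-quota data max_objects 10000
```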

Re: [ceph-users] understanding rados df statistics

2014-06-19 Thread Gregory Farnum
The total used/available/capacity is calculated by running the syscall which "df" uses across all OSDs and summing the results. The "total data" is calculated by summing the sizes of the objects stored. It depends on how you've configured your system, but I'm guessing the markup is due to the (con

Re: [ceph-users] Cache tier pool in CephFS

2014-06-19 Thread Gregory Farnum
1) it will take time for the deleted objects to flush out of the cache pool and then be deleted in the cold pool. They will disappear eventually, though! 2) you can't delete pools which are in the MDSMap. On Thursday, June 19, 2014, Sherry Shahbazi wrote: > Hi Greg, > > Thanks for your prompt re

Re: [ceph-users] switch pool from replicated to erasure coded

2014-06-19 Thread Gregory Farnum
On Thursday, June 19, 2014, Pavel V. Kaygorodov wrote: > Hi! > > May be I have missed something in docs, but is there a way to switch a > pool from replicated to erasure coded? No. > Or I have to create a new pool an somehow manually transfer data from old > pool to new one? Yes. Please kee

Re: [ceph-users] switch pool from replicated to erasure coded

2014-06-19 Thread Loic Dachary
On 19/06/2014 14:06, Pavel V. Kaygorodov wrote: > Hi! > > May be I have missed something in docs, but is there a way to switch a pool > from replicated to erasure coded? > Or I have to create a new pool an somehow manually transfer data from old > pool to new one? Hi, There is no way to turn
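
The usual workaround is the one Pavel suspects: create the erasure-coded pool alongside the old one and copy the objects across. A sketch only (the pool name is a placeholder, `rados cppool` does not preserve snapshots, and clients should be quiesced during the copy); the `run` wrapper prints each command instead of executing it:

```shell
run() { echo "+ $*"; }   # dry-run wrapper: print instead of execute

POOL=mypool   # placeholder source pool
run ceph osd pool create "${POOL}.ec" 128 128 erasure
run rados cppool "$POOL" "${POOL}.ec"            # copy objects across
run ceph osd pool rename "$POOL" "${POOL}.old"   # keep the original for safety
run ceph osd pool rename "${POOL}.ec" "$POOL"
```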

Re: [ceph-users] understanding rados df statistics

2014-06-19 Thread george.ryall
Having looked at a sample of OSDs it appears that it is indeed the case that for every GB of data we have 9 GB of journal. Is this normal? Or are we not doing some journal/cluster management that we should be? George

Re: [ceph-users] Error in documentation

2014-06-19 Thread John Wilkins
I can address it, or if you want, you can fix it yourself: http://ceph.com/docs/master/start/documenting-ceph/ On Thu, Jun 19, 2014 at 5:46 AM, wrote: > Hi, > > I’ve come across an error in the Ceph documentation, what’s the proper way > for me to report it so that it gets fixed? > > > > (on >

Re: [ceph-users] Level DB with RADOS

2014-06-19 Thread Shesha Sreenivasamurthy
Thanks, What is the right GIT repo from where I can download (clone) the RADOS code in which OMAP uses LevelDB. I am a newbie hence the question. On Wed, Jun 18, 2014 at 7:28 PM, Gregory Farnum wrote: > On Wed, Jun 18, 2014 at 9:14 PM, Shesha Sreenivasamurthy > wrote: > > I am doing some resea

Re: [ceph-users] understanding rados df statistics

2014-06-19 Thread Gregory Farnum
Yeah, the journal is a fixed size; it won't grow! On Thursday, June 19, 2014, wrote: > Having looked at a sample of OSDs it appears that it is indeed the case > that for every GB of data we have 9 GB of Journal. Is this normal? Or are > we not doing some Journal/cluster management that we shoul
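
The arithmetic behind the 9:1 ratio can be illustrated with assumed figures (not taken from George's cluster): a fixed per-OSD journal dwarfs a small data set and becomes negligible as the data grows.

```shell
# Assumed figures for illustration only:
JOURNAL_GB_PER_OSD=5
NUM_OSDS=10
REPLICAS=3

raw_used_gb() {
    # replicated copies of the data plus the fixed journal space
    echo $(( $1 * REPLICAS + JOURNAL_GB_PER_OSD * NUM_OSDS ))
}

raw_used_gb 1      # 53 -> looks like ~53x overhead on a nearly empty cluster
raw_used_gb 1000   # 3050 -> close to the expected 3x once data dominates
```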

Re: [ceph-users] [Solved] Init scripts in Debian not working

2014-06-19 Thread Alfredo Deza
On Thu, Jun 19, 2014 at 6:32 AM, Dieter Scholz wrote: > Hello, > >> at the moment I'm trying to create a small test cluster with Ceph Firefly. >> I >> followed the documentation and everything works as expected. But when I >> try >> to start the Ceph daemons on the individual OSD-machines using th

[ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Pavel V. Kaygorodov
Hi! I want to make erasure-coded pool with k=3 and m=3. Also, I want to distribute data between two hosts, having 3 osd from host1 and 3 from host2. I have created a ruleset: rule ruleset_3_3 { ruleset 0 type replicated min_size 6 max_size 6 step take host

Re: [ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Loic Dachary
On 19/06/2014 18:17, Pavel V. Kaygorodov wrote: > Hi! > > I want to make erasure-coded pool with k=3 and m=3. Also, I want to > distribute data between two hosts, having 3 osd from host1 and 3 from host2. > I have created a ruleset: > > rule ruleset_3_3 { > ruleset 0 > type rep

Re: [ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Pavel V. Kaygorodov
This ruleset works well for replicated pools with size 6 (I have tested it on data and metadata pools, which I cannot delete). The erasure pool with k=3 and m=3 must have size 6? Pavel. > On 19/06/2014 18:17, Pavel V. Kaygorodov wrote: >> Hi! >> >> I want to make erasure-coded pool with k=3 a
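
Yes: an erasure-coded pool takes its size from the profile, so k=3, m=3 gives size 6. A quick sanity check of the arithmetic:

```shell
K=3; M=3                      # data chunks and coding chunks, from the thread
POOL_SIZE=$(( K + M ))        # each object is stored as k+m chunks -> 6
TOLERATED=$M                  # any m chunks may be lost -> 3
OVERHEAD=$(( (K + M) / K ))   # raw space per byte of usable data -> 2x
echo "size=$POOL_SIZE tolerated_failures=$TOLERATED overhead=${OVERHEAD}x"
```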

[ceph-users] /etc/init.d/rbdmap

2014-06-19 Thread Chad Seys
Hi all, Shouldn't /etc/init.d/rbdmap be in the librbd package rather than in "ceph"? Thanks, Chad.

[ceph-users] /etc/ceph/rbdmap

2014-06-19 Thread Chad Seys
Hi all, Also /etc/ceph/rbdmap in librbd1 rather than ceph? Thanks, Chad.

Re: [ceph-users] /etc/ceph/rbdmap

2014-06-19 Thread Sage Weil
On Thu, 19 Jun 2014, Chad Seys wrote: > Hi all, > Also /etc/ceph/rbdmap in librbd1 rather than ceph? This is for mapping kernel rbd devices on system startup, and belongs with ceph-common (which hasn't yet been, but soon will be, split out from ceph) along with the 'rbd' CLI utility. It isn't di

Re: [ceph-users] Error in documentation

2014-06-19 Thread Aaron Ten Clay
Perhaps the fix should include changing the parameters to use hyphens to work toward making the ceph CLI more consistent? On Thu, Jun 19, 2014 at 7:47 AM, John Wilkins wrote: > I can address it, or if you want, you can fix it yourself: > http://ceph.com/docs/master/start/documenting-ceph/ > > >

Re: [ceph-users] Some easy questions

2014-06-19 Thread Craig Lewis
> > >> Just to clarify. Suppose you insert an object into rados directly, you > won't be able to see that file > in cephfs anywhere, since it won't be listed in MDS. Correct? > > Meaning, you can start using CephFS+MDS at any point in time, but it will > only ever list objects/files > that were ins

Re: [ceph-users] /etc/ceph/rbdmap

2014-06-19 Thread Chad Seys
> This is for mapping kernel rbd devices on system startup, and belong with > ceph-common (which hasn't yet been but soon will be split out from ceph) Great! Yeah, I was hoping to map /dev/rbd without installing all the ceph daemons! > along with the 'rbd' cli utility. It isn't directly relat

Re: [ceph-users] RADOSGW + OpenStack basic question

2014-06-19 Thread Craig Lewis
Unfortunately, I can't help much. I'm just using the S3 interface for object storage. Looking back at the archives, this question does come up a lot, and there aren't a lot of replies. The best thread I see in the archive is http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-November/00628

Re: [ceph-users] understanding rados df statistics

2014-06-19 Thread John Wilkins
George, I'll look into writing up some additional detail. We do have a description for 'ceph df' here: http://ceph.com/docs/master/rados/operations/monitoring/#checking-a-cluster-s-usage-stats On Thu, Jun 19, 2014 at 8:07 AM, Gregory Farnum wrote: > Yeah, the journal is a fixed size; it won't

Re: [ceph-users] Taking down one OSD node (10 OSDs) for maintenance - best practice?

2014-06-19 Thread Alphe Salas Michels
Hello, the best practice is to simply shut down the whole cluster, starting from the clients, then the monitors, the MDS and the OSDs. You do your maintenance, then you bring everything back, starting from the monitors, MDS, OSDs, clients. Otherwise the missing OSDs will lead to a reconstruction of your cluste

Re: [ceph-users] Taking down one OSD node (10 OSDs) for maintenance - best practice?

2014-06-19 Thread Gregory Farnum
No, you definitely don't need to shut down the whole cluster. Just do a polite shutdown of the daemons, optionally with the noout flag that Wido mentioned. Software Engineer #42 @ http://inktank.com | http://ceph.com On Thu, Jun 19, 2014 at 1:55 PM, Alphe Salas Michels wrote: > Hello, the best p

Re: [ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Pavel V. Kaygorodov
> You need: > > type erasure > It works! Thanks a lot! Pavel. min_size 6 max_size 6 step take host1 step chooseleaf firstn 3 type osd step emit step take host2 step chooseleaf firstn 3 type osd step emit >
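
Pieced together from the quoted fragments, the working rule appears to read as follows (a reconstruction, so verify it against your own CRUSH map before compiling):

```
rule ruleset_3_3 {
    ruleset 0
    type erasure
    min_size 6
    max_size 6
    step take host1
    step chooseleaf firstn 3 type osd
    step emit
    step take host2
    step chooseleaf firstn 3 type osd
    step emit
}
```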

[ceph-users] Erasure coded pool suitable for MDS?

2014-06-19 Thread Erik Logtenberg
Hi, Are erasure coded pools suitable for use with MDS? I tried to give it a go by creating two new pools like so: # ceph osd pool create ecdata 128 128 erasure # ceph osd pool create ecmetadata 128 128 erasure Then looked up their IDs: # ceph osd lspools ..., 6 ecdata, 7 ecmetadata # ceph mds

Re: [ceph-users] RADOSGW + OpenStack basic question

2014-06-19 Thread Craig Lewis
There is a tool named s3cmd, I use that for minor things. The first time you run it, use `s3cmd --configure`. Most of my access to the cluster is using the Amazon S3 library. On Thu, Jun 19, 2014 at 1:07 PM, Vickey Singh wrote: > Hello Craig > > I want to use object storage only NOT cinder an
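
For reference, the radosgw-relevant part of the ~/.s3cfg that `s3cmd --configure` writes looks roughly like this (host names and keys are placeholders, not from the thread):

```
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = radosgw.example.com
host_bucket = %(bucket)s.radosgw.example.com
use_https = False
```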

Re: [ceph-users] Taking down one OSD node (10 OSDs) for maintenance - best practice?

2014-06-19 Thread David
Hi, Thanks all for answers - we actually already did this yesterday night , one OSD node at a time without disrupting service. We used the noout flag and also paused deep scrub which was running with nodeepscrub flag during the maintenance. Took down one node with 10 OSDs just through normal sh
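
The flag sequence David describes can be sketched as follows (the `run` wrapper prints commands instead of executing them, so this stays a dry run):

```shell
run() { echo "+ $*"; }   # dry-run wrapper: print instead of execute

run ceph osd set noout          # don't mark stopped OSDs out (no rebalancing)
run ceph osd set nodeep-scrub   # pause deep scrubbing during maintenance
# ... shut down the node, do the maintenance, bring it back up ...
run ceph osd unset nodeep-scrub
run ceph osd unset noout
```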

Re: [ceph-users] Erasure coded pool suitable for MDS?

2014-06-19 Thread Wido den Hollander
> On 19 Jun 2014 at 16:10, "Erik Logtenberg" wrote the > following: > > Hi, > > Are erasure coded pools suitable for use with MDS? > I don't think so. It does in-place updates of objects and that doesn't work with EC pools. > I tried to give it a go by creating two new pools

Re: [ceph-users] Erasure coded pool suitable for MDS?

2014-06-19 Thread Loic Dachary
On 19/06/2014 22:51, Wido den Hollander wrote: > > > > >> On 19 Jun 2014 at 16:10, "Erik Logtenberg" wrote the >> following: >> >> Hi, >> >> Are erasure coded pools suitable for use with MDS? >> > > I don't think so. It does in-place updates of objects and that doesn't work > wi

Re: [ceph-users] Permissions spontaneously changing in cephfs

2014-06-19 Thread Erik Logtenberg
I am using the kernel client. kernel: 3.14.4-100.fc19.x86_64 ceph: ceph-0.80.1-0.fc19.x86_64 Actually, I seem to be able to reproduce it quite reliably. I just reset my cephfs (fiddling with erasure coded pools which was no success), so just for kicks tried again with creating a directory. Exactl

Re: [ceph-users] question about feature set mismatch

2014-06-19 Thread Erik Logtenberg
Hi Ilya, Do you happen to know when this fix will be released? Is upgrading to a newer kernel (client side) still a solution/workaround too? If yes, which kernel version is required? Kind regards, Erik. > The "if there are any erasure code pools in the cluster, kernel clients > (both krbd and

Re: [ceph-users] Erasure coded pool suitable for MDS?

2014-06-19 Thread Erik Logtenberg
Hi Loic, That is a nice idea. And if I then use newfs against that replicated cache pool, it'll work reliably? Kind regards, Erik. On 06/19/2014 11:09 PM, Loic Dachary wrote: > > > On 19/06/2014 22:51, Wido den Hollander wrote: >> >> >> >> >>> On 19 Jun 2014 at 16:10, "Erik Logtenb

Re: [ceph-users] Permissions spontaneously changing in cephfs

2014-06-19 Thread Erik Logtenberg
Hi Zheng, Additionally, I notice that as long as I don't do anything with that directory, the permissions stay wrong. Previously I noticed that the permissions eventually got right by themselves, but I don't know what triggered it. Also, the permission problem is not just with the first ever cre

Re: [ceph-users] Cache tier pool in CephFS

2014-06-19 Thread Sherry Shahbazi
Hi Greg,  1) The problem is that when I start to delete objects some of them disappear from the pool but the rest would be kept forever! I had the same problem while I was not using the cache tier pool. I checked "ceph mds tell \* dumpcache" and it was clear! 2) I forgot to remove the pool from

Re: [ceph-users] /etc/ceph/rbdmap

2014-06-19 Thread Sage Weil
On Thu, 19 Jun 2014, Chad Seys wrote: > > This is for mapping kernel rbd devices on system startup, and belong with > > ceph-common (which hasn't yet been but soon will be split out from ceph) > > Great! Yeah, I was hoping to map /dev/rbd without installing all the ceph > daemons! The package c

Re: [ceph-users] what is the Recommandation configure for a ceph cluster with 10 servers without memory leak?

2014-06-19 Thread Uwe Grohnwaldt
Hi, first hint: use the release RPMs from ceph.com, not 0.79. Then test again. Best Regards, Uwe Grohnwaldt >

Re: [ceph-users] Permissions spontaneously changing in cephfs

2014-06-19 Thread Yan, Zheng
On Fri, Jun 20, 2014 at 6:13 AM, Erik Logtenberg wrote: > Hi Zheng, > > Additionally, I notice that as long as I don't do anything with that > directory, the permissions stay wrong. > > Previously I noticed that the permissions eventually got right by > themselves, but I don't know what triggered

Re: [ceph-users] Erasure coded pool suitable for MDS?

2014-06-19 Thread Loic Dachary
On 20/06/2014 00:06, Erik Logtenberg wrote: > Hi Loic, > > That is a nice idea. And if I then use newfs against that replicated > cache pool, it'll work reliably? It will not be limited by the erasure coded pool features, indeed. Cheers > > Kind regards, > > Erik. > > > On 06/19/2014 11:0