Re: [ceph-users] erasure code : number of chunks for a small cluster ?

2015-02-06 Thread Hector Martin
On 02/02/15 03:38, Udo Lembke wrote: > With 3 hosts only you can't survive a full node failure, because for > that you need > hosts >= k + m. Sure you can: k=2, m=1 with the failure domain set to host will survive a full host failure. Configuring an encoding that survives one full host failure or
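
A minimal sketch of the k=2, m=1 layout Hector describes, using the firefly-era profile syntax (the profile name "ec21" is an assumption for illustration):

    # Create a 2+1 erasure-code profile that spreads chunks across hosts,
    # so each of the 3 hosts holds exactly one chunk per object.
    ceph osd erasure-code-profile set ec21 \
        k=2 m=1 ruleset-failure-domain=host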

[ceph-users] 0.80.8 ReplicationPG Fail

2015-02-06 Thread Irek Fasikhov
This morning I found that some OSDs had dropped out of the tier cache pool. Maybe it's a coincidence, but at that point a rollback was in progress. 2015-02-05 23:23:18.231723 7fd747ff1700 -1 *** Caught signal (Segmentation fault) ** in thread 7fd747ff1700 ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7) 1: /u

[ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-06 Thread Karan Singh
Hello Community Members I am happy to introduce the first book on Ceph, titled “Learning Ceph”. Many folks from the publishing house, the technical reviewers, and I spent several months getting this book compiled and published. Finally the book is up for sale on , I hope you wou

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-06 Thread Stephan Seitz
Hi, on Tuesday, 03.02.2015, at 15:16 +, Colombo Marco wrote: > Hi all, > I have to build a new Ceph storage cluster; after I've read the > hardware recommendations and some mail from this mailing list I would > like to buy these servers: just FYI: SuperMicro already focuses on Ceph with

Re: [ceph-users] Introducing "Learning Ceph" : The First ever Book on Ceph

2015-02-06 Thread pixelfairy
Congrats! On page 17, Xen is spelled with an X, not a Z. On Fri, Feb 6, 2015 at 1:17 AM, Karan Singh wrote: > Hello Community Members > > I am happy to introduce the first book on Ceph with the title “Learning > Ceph”. > > Me and many folks from the publishing house together with technical > reviewer

Re: [ceph-users] updation of container and account while using Swift API

2015-02-06 Thread Abhishek L
Hi. Pragya Jain writes: > Hello all! > I have some basic questions about the process followed by Ceph > software when a user uses the Swift APIs to access its > storage. 1. According to my understanding, to keep the listing of objects > in containers and of containers in an account, Ceph software
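
For context, a hedged sketch of how a client sees those listings through radosgw's Swift-compatible API; the endpoint and the "test:swift" user are assumptions for illustration:

    # Authenticate against radosgw's Swift auth endpoint; the response
    # headers carry X-Storage-Url and X-Auth-Token.
    curl -s -D - -o /dev/null \
        -H "X-Auth-User: test:swift" \
        -H "X-Auth-Key: secret" \
        http://radosgw.example.com/auth/v1.0
    # A GET on the storage URL returns the account's container listing;
    # radosgw maintains these listings internally on each PUT/DELETE,
    # rather than requiring the client to update them.
    curl -s -H "X-Auth-Token: $TOKEN" "$STORAGE_URL"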

Re: [ceph-users] Ceph Supermicro hardware recommendation

2015-02-06 Thread Mohamed Pakkeer
Hi all, We are building an EC cluster with a cache tier for CephFS. We are planning to use the following 1U chassis along with Intel SSD DC S3700s for the cache tier. It has 10 * 2.5" slots. Could you recommend a suitable Intel processor and amount of RAM to drive 10 * SSDs? http://www.supermicro.com/prod
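
For reference, a minimal sketch of the cache-tier wiring such a box would back; the pool names and size threshold are assumptions for illustration:

    # Attach an SSD-backed pool as a writeback cache in front of the EC pool.
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    # Cap the cache (1 TiB here) so flushing starts before the SSDs fill up.
    ceph osd pool set cachepool target_max_bytes 1099511627776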

[ceph-users] Status of SAMBA VFS

2015-02-06 Thread Dennis Kramer (DT)
Hi, Is the Samba VFS module for CephFS actively maintained at this moment? I haven't seen many updates in the ceph/samba git repo. With regards,

Re: [ceph-users] erasure code : number of chunks for a small cluster ?

2015-02-06 Thread Udo Lembke
On 06.02.2015 09:06, Hector Martin wrote: > On 02/02/15 03:38, Udo Lembke wrote: >> With 3 hosts only you can't survive a full node failure, because for >> that you need >> hosts >= k + m. > > Sure you can. k=2, m=1 with the failure domain set to host will survive > a full host failure. > Hi,

Re: [ceph-users] erasure code : number of chunks for a small cluster ?

2015-02-06 Thread Hector Martin
On 06/02/15 21:07, Udo Lembke wrote: > On 06.02.2015 09:06, Hector Martin wrote: >> On 02/02/15 03:38, Udo Lembke wrote: >>> With 3 hosts only you can't survive a full node failure, because for >>> that you need >>> hosts >= k + m. >> >> Sure you can. k=2, m=1 with the failure domain set to host
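
Continuing the sketch from earlier in the thread: a pool built from that ec21 profile places one chunk per host, so any single host can fail and the surviving k=2 chunks still reconstruct every object (the pool name and PG count are assumptions):

    # Create an erasure-coded pool backed by the 2+1 profile.
    ceph osd pool create ecpool 128 128 erasure ec21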

Re: [ceph-users] erasure code : number of chunks for a small cluster ?

2015-02-06 Thread Alexandre DERUMIER
Oh, I hadn't thought about that. Thanks, Hector! - Original message - From: "Hector Martin" To: "ceph-users" Sent: Friday, 6 February 2015 09:06:29 Subject: Re: [ceph-users] erasure code : number of chunks for a small cluster ? On 02/02/15 03:38, Udo Lembke wrote: > With 3 hosts only you c

Re: [ceph-users] Status of SAMBA VFS

2015-02-06 Thread Sage Weil
On Fri, 6 Feb 2015, Dennis Kramer (DT) wrote: > Hi, > > Is the Samba VFS module for CephFS actively maintained at this moment? > I haven't seen many updates in the ceph/samba git repo. You should really ignore the ceph/samba fork; it isn't used. The Ceph VFS driver is upstream in Samba and main
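
A minimal smb.conf sketch for the upstream vfs_ceph module Sage points to; the share name and cephx user are assumptions for illustration:

    [cephfs]
        # Export the CephFS root through Samba via the in-tree VFS module.
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no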

Re: [ceph-users] Status of SAMBA VFS

2015-02-06 Thread Dennis Kramer (DT)
I've used the upstream module for our production CephFS cluster, but I've noticed a bug where timestamps aren't being updated correctly: modified files are being reset to the beginning of Unix time. It looks like this bug only manifests itself in applications like MS Office where extra metadata

[ceph-users] parsing ceph -s and how much free space, really?

2015-02-06 Thread pixelfairy
Here's output of 'ceph -s' from a KVM instance running as a Ceph node. All 3 nodes are monitors, each with six 4 GiB OSDs. mon_osd_full ratio: .611 mon_osd_nearfull ratio: .60 What's the 23689 MB used? Is that a buffer because of mon_osd_full ratio? Is there a way to query a pool for how much usable space
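
One hedged way to answer the usable-space question: 'ceph df' reports per-pool usage, and in newer releases its MAX AVAIL column already divides the raw free space by the pool's replication factor:

    # GLOBAL shows raw size/used/avail across the cluster; POOLS shows
    # per-pool usage, including a replication-aware MAX AVAIL figure.
    ceph df detail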

Re: [ceph-users] Status of SAMBA VFS

2015-02-06 Thread Gregory Farnum
On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT) wrote: > I've used the upstream module for our production CephFS cluster, but I've > noticed a bug where timestamps aren't being updated correctly. Modified > files are being reset to the beginning of Unix time. > > It looks like this bug only man

Re: [ceph-users] Status of SAMBA VFS

2015-02-06 Thread Dennis Kramer (DT)
On Fri, 6 Feb 2015, Gregory Farnum wrote: On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT) wrote: I've used the upstream module for our production CephFS cluster, but I've noticed a bug where timestamps aren't being updated correctly. Modified files are being reset to the beginning of Unix

[ceph-users] journal placement for small office?

2015-02-06 Thread pixelfairy
3 nodes, each with 2x1TB in a RAID (for /) and 6x4TB for storage. All of this will be used for block devices for KVM instances: typical office stuff. Databases, file servers, internal web servers, a couple dozen thin clients. Not using the object store or CephFS. I was thinking about putting the j
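
A minimal ceph.conf sketch of the journal-on-separate-device layout being weighed here; the partition path and journal size are assumptions for illustration:

    [osd]
        ; journal size in MB; 5 GiB is a common choice
        osd journal size = 5120
    [osd.0]
        ; point this OSD's journal at a partition on a faster device
        osd journal = /dev/sdg1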

[ceph-users] Replacing an OSD Drive

2015-02-06 Thread Gaylord Holder
When the time comes to replace an OSD, I've used the following procedure: 1) stop/down/out the OSD and replace the drive; 2) create the Ceph OSD directory: ceph-osd -i N --mkfs; 3) copy the OSD key out of the authorized keys list; 4) ceph osd crush rm osd.N; 5) ceph osd crush add osd.$i $osd_size root=
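
A hedged shell sketch of that procedure for a concrete OSD; the id, weight, host name, and init commands are assumptions and vary by release:

    ceph osd out 12
    /etc/init.d/ceph stop osd.12        # then physically swap the drive
    ceph-osd -i 12 --mkfs --mkkey
    # Register the freshly generated key with the monitors.
    ceph auth add osd.12 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-12/keyring
    # Re-seat the OSD in the CRUSH map at its weight and location.
    ceph osd crush rm osd.12
    ceph osd crush add osd.12 3.64 root=default host=node1
    /etc/init.d/ceph start osd.12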

[ceph-users] replica or erasure coding for small office?

2015-02-06 Thread pixelfairy
Is there any reliability trade-off with erasure coding vs. a replica size of 3? How would you get the most out of 6x4TB OSDs in 3 nodes?
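
A back-of-the-envelope sketch of the capacity side of that trade-off on 3 x 6 x 4 TB = 72 TB raw (numbers are illustrative only; the reliability difference is that replica 3 survives two host failures while k=2,m=1 survives just one):

    echo "replica 3:  $((72 / 3)) TB usable"                 # 24 TB
    echo "EC k=2,m=1: $(echo "72 * 2 / 3" | bc) TB usable"   # 48 TB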

Re: [ceph-users] Status of SAMBA VFS

2015-02-06 Thread Gregory Farnum
On Fri, Feb 6, 2015 at 7:11 AM, Dennis Kramer (DT) wrote: > > On Fri, 6 Feb 2015, Gregory Farnum wrote: > >> On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT) >> wrote: >>> >>> I've used the upstream module for our production CephFS cluster, but I've >>> noticed a bug where timestamps aren't bei

[ceph-users] Compilation problem

2015-02-06 Thread David J. Arias
Hello! I am the sysadmin for a small IT consulting enterprise in México. We are trying to integrate three servers running RHEL 5.9 into a new Ceph cluster. I downloaded the source code and tried compiling it, though I got stuck with the requirements for leveldb and libblkid. The versions installed
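
For that era of Ceph the autotools build below is roughly the path; a hedged sketch, assuming RHEL 5.9's stock leveldb and libblkid are too old and newer versions have been built and installed locally first:

    ./autogen.sh
    ./configure --without-tcmalloc    # skip tcmalloc if gperftools is unavailable
    make -j4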