On 02/02/15 03:38, Udo Lembke wrote:
> With 3 hosts only you can't survive a full node failure, because for
> that you need
> hosts >= k + m.
Sure you can. k=2, m=1 with the failure domain set to host will survive
a full host failure.
Configuring an encoding that survives one full host failure or
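For reference, setting that up looks roughly like this (a sketch only; the profile and pool names are made up, and on releases after firefly the option is spelled crush-failure-domain rather than ruleset-failure-domain):
  # 2 data chunks + 1 coding chunk, no two chunks on the same host
  ceph osd erasure-code-profile set ec21 k=2 m=1 ruleset-failure-domain=host
  # pool using that profile (the PG count of 128 is just an example)
  ceph osd pool create ecpool 128 128 erasure ec21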
This morning I found that some OSDs had dropped out of the cache tier pool. Maybe it's a
coincidence, but the rollback happened at that point.
2015-02-05 23:23:18.231723 7fd747ff1700 -1 *** Caught signal
(Segmentation fault) **
in thread 7fd747ff1700
ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7)
1: /u
Hello Community Members
I am happy to introduce the first book on Ceph, titled “Learning Ceph”.
Many folks from the publishing house, together with the technical reviewers,
and I spent several months getting this book compiled and published.
Finally, the book is up for sale on , I hope you wou
Hi,
On Tuesday, 03.02.2015, at 15:16 +, Colombo Marco wrote:
> Hi all,
> I have to build a new Ceph storage cluster. After I've read the
> hardware recommendations and some mails from this mailing list, I would
> like to buy these servers:
Just FYI:
SuperMicro already focuses on Ceph with
Congrats!
Page 17: Xen is spelled with an X, not a Z.
On Fri, Feb 6, 2015 at 1:17 AM, Karan Singh wrote:
> Hello Community Members
>
> I am happy to introduce the first book on Ceph, titled “Learning
> Ceph”.
>
> Many folks from the publishing house, together with the technical
> reviewer
Hi
pragya jain writes:
> Hello all!
> I have some basic questions about the process followed by Ceph
> software when a user uses the Swift API to access its storage.
> 1. According to my understanding, to keep the object listing in
> containers and the container listing in an account, Ceph software
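(For concreteness, this is the kind of Swift call being asked about, sketched with the python-swiftclient CLI against radosgw; the endpoint, user and key below are placeholders:)
  # list the containers in the account
  swift -A http://rgw.example.com/auth/v1.0 -U testuser:swift -K '<secret_key>' list
  # list the objects in one container
  swift -A http://rgw.example.com/auth/v1.0 -U testuser:swift -K '<secret_key>' list mycontainer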
Hi all,
We are building an EC cluster with a cache tier for CephFS. We are planning to
use the following 1U chassis along with Intel SSD DC S3700 drives for the cache tier.
It has 10 * 2.5" slots. Could you recommend a suitable Intel processor and
amount of RAM to cater for 10 SSDs?
http://www.supermicro.com/prod
Hi,
Is the Samba VFS module for CephFS actively maintained at this moment?
I haven't seen many updates in the ceph/samba git repo.
With regards,
On 06.02.2015 09:06, Hector Martin wrote:
> On 02/02/15 03:38, Udo Lembke wrote:
>> With 3 hosts only you can't survive a full node failure, because for
>> that you need
>> hosts >= k + m.
>
> Sure you can. k=2, m=1 with the failure domain set to host will survive
> a full host failure.
>
Hi,
On 06/02/15 21:07, Udo Lembke wrote:
> Am 06.02.2015 09:06, schrieb Hector Martin:
>> On 02/02/15 03:38, Udo Lembke wrote:
>>> With 3 hosts only you can't survive a full node failure, because for
>>> that you need
>>> hosts >= k + m.
>>
>> Sure you can. k=2, m=1 with the failure domain set to host
Oh, I hadn't thought about this.
Thanks, Hector!
----- Original Message -----
From: "Hector Martin"
To: "ceph-users"
Sent: Friday, 6 February 2015 09:06:29
Subject: Re: [ceph-users] erasure code : number of chunks for a small cluster ?
On 02/02/15 03:38, Udo Lembke wrote:
> With 3 hosts only you c
On Fri, 6 Feb 2015, Dennis Kramer (DT) wrote:
> Hi,
>
> Is the Samba VFS module for CephFS actively maintained at this moment?
> I haven't seen many updates in the ceph/samba git repo.
You should really ignore the ceph/samba fork; it isn't used. The Ceph VFS
driver is upstream in Samba and main
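For anyone trying it out, a minimal share using the upstream vfs_ceph module looks roughly like this in smb.conf (a sketch; the share name, path and cephx user are placeholders):
  [cephfs]
      path = /
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      kernel share modes = no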
I've used the upstream module for our production CephFS cluster, but I've
noticed a bug where timestamps aren't being updated correctly.
Modified files are being reset to the beginning of Unix time.
It looks like this bug only manifests itself in applications like MS Office
where extra metadata
Here's the output of 'ceph -s' from a KVM instance running as a Ceph node.
All 3 nodes are monitors, each with six 4 GB OSDs.
mon_osd_full_ratio: .611
mon_osd_nearfull_ratio: .60
What's the 23689 MB used? Is that a buffer because of mon_osd_full_ratio?
Is there a way to query a pool for how much usable space
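(A sketch of the usual way to check this, in case it helps; the output columns vary between releases:)
  # cluster-wide raw usage plus per-pool usage figures
  ceph df detail
  # per-pool object and byte counts as seen by rados
  rados df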
On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT) wrote:
> I've used the upstream module for our production CephFS cluster, but I've
> noticed a bug where timestamps aren't being updated correctly. Modified
> files are being reset to the beginning of Unix time.
>
> It looks like this bug only man
On Fri, 6 Feb 2015, Gregory Farnum wrote:
On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT) wrote:
I've used the upstream module for our production CephFS cluster, but I've
noticed a bug where timestamps aren't being updated correctly. Modified
files are being reset to the beginning of Unix
3 nodes, each with 2x1TB in a RAID (for /) and 6x4TB for storage. All
of this will be used for block devices for KVM instances. Typical
office stuff: databases, file servers, internal web servers, a couple
dozen thin clients. Not using the object store or CephFS.
I was thinking about putting the j
When the time comes to replace an OSD, I've used the following procedure (a rough shell sketch follows the list):
1) Stop/down/out the OSD and replace the drive
2) Create the Ceph OSD directory: ceph-osd -i N --mkfs
3) Copy the OSD key out of the authorized keys list
4) ceph osd crush rm osd.N
5) ceph osd crush add osd.N $osd_size root=
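Roughly, as shell (the OSD id, weight and CRUSH location below are placeholders, the init commands assume a sysvinit firefly-era install, and I use ceph auth add for the key step):
  OSD=12          # id of the OSD being replaced (placeholder)
  WEIGHT=3.64     # CRUSH weight for the new 4TB drive (placeholder)
  ceph osd out osd.$OSD
  service ceph stop osd.$OSD                  # then swap the physical drive
  ceph-osd -i $OSD --mkfs --mkkey             # recreate the data dir and key
  ceph auth add osd.$OSD osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-$OSD/keyring
  ceph osd crush rm osd.$OSD
  ceph osd crush add osd.$OSD $WEIGHT root=default host=$(hostname -s)
  service ceph start osd.$OSD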
Is there any reliability trade-off with erasure coding vs. a replica size of 3?
How would you get the most out of 6x4TB OSDs in 3 nodes?
On Fri, Feb 6, 2015 at 7:11 AM, Dennis Kramer (DT) wrote:
>
> On Fri, 6 Feb 2015, Gregory Farnum wrote:
>
>> On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT)
>> wrote:
>>>
>>> I've used the upstream module for our production CephFS cluster, but I've
>>> noticed a bug where timestamps aren't bei
Hello!
I am a sysadmin for a small IT consulting enterprise in México.
We are trying to integrate three servers running RHEL 5.9 into a new
Ceph cluster.
I downloaded the source code and tried compiling it, though I got stuck
with the requirements for leveldb and libblkid.
The versions installed
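For what it's worth, the source build of that era was the plain autotools flow; this sketch just assumes the leveldb and libblkid development headers are already installed and new enough, which is exactly the part in question on RHEL 5.9:
  # generic firefly-era source build, run from the extracted source tree
  ./autogen.sh    # only needed for a git checkout, not a release tarball
  ./configure
  make -j4        # adjust for your core count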