Hello,
Just a general question, really: what is the recommended node size for Ceph
storage clusters? The Ceph documentation does say to use more smaller
nodes rather than fewer large nodes, but what constitutes "large" in terms
of Ceph? Is it 16 OSDs, or more like 32 OSDs?
Where does Ceph tail off
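As a rough illustration of the failure-domain tradeoff, here is a back-of-envelope sketch (all numbers are illustrative assumptions, not from this thread):

    # same 1280 TB of raw capacity, two node sizes (4 TB drives assumed)
    10 nodes x 32 OSDs x 4 TB -> 1 node down = re-replicate up to 128 TB (10% of cluster)
    20 nodes x 16 OSDs x 4 TB -> 1 node down = re-replicate up to  64 TB  (5% of cluster)

Recovery traffic, and the window in which a second failure can cost data, both scale with the share of data behind each failure domain, which is presumably why the documentation favors more, smaller nodes.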
I agree; I have tried it before, and even with tmpfs and removing the
logs, USB sticks will last only a few months (3-4 at most).
---
Thanks,
Kenneth
Apollo Global Corp.
On 03/09/2014 02:23 PM, Kyle Bader wrote:
>> Is there an issue with IO performance?
>
> Ceph monitors store cluster maps a
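For reference, the tmpfs-for-logs setup mentioned above might look roughly like this (the path and size are assumptions, and the logs are lost on reboot):

    # /etc/fstab: keep Ceph logs in RAM to spare the USB stick
    tmpfs  /var/log/ceph  tmpfs  defaults,size=512m  0 0
    # apply without rebooting
    mount -a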
Hi, everyone!
I have installed Ceph RGW, but I don't know whether it was successful or not.
I first installed Ceph on 3 servers and installed an MDS. I could upload
files by mounting a disk to Ceph.
Then I installed RGW on this cluster.
Server OS: CentOS 6.4 x86_64
When I started radosgw, it showed:
[root@ceph65 ceph]# /e
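Assuming default settings, a quick way to tell whether the gateway actually came up is something like the following (the test user name is a placeholder):

    # is the process running at all?
    ps aux | grep radosgw
    # a working gateway answers S3 requests; an anonymous request to the root
    # should return a ListAllMyBucketsResult XML document
    curl http://localhost/
    # exercise the admin side by creating a test user
    radosgw-admin user create --uid=testuser --display-name="Test User"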
On 03/09/2014 12:19 PM, Pieter Koorts wrote:
Hello,
Just a general question, really: what is the recommended node size for
Ceph storage clusters? The Ceph documentation does say to use more
smaller nodes rather than fewer large nodes, but what constitutes
"large" in terms of Ceph? Is it 16 OSD
Hi All,
blkio weight sharing is not working on the Ceph block device (RBD): IOPS
numbers are not being divided according to the weight proportions but are
being divided equally.
I am using the CFQ I/O scheduler for RBD.
I did get it working once, but now I am not able to get it working since I
have upgraded c
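For comparison, a minimal CFQ/blkio setup looks roughly like this (the cgroup mount point and weights are assumptions; proportional blkio weights only take effect when the device really uses CFQ):

    # confirm/set the scheduler on the mapped RBD device
    cat /sys/block/rbd0/queue/scheduler
    echo cfq > /sys/block/rbd0/queue/scheduler
    # two cgroups with a 4:1 weight ratio (valid range is 100-1000)
    mkdir /sys/fs/cgroup/blkio/high /sys/fs/cgroup/blkio/low
    echo 800 > /sys/fs/cgroup/blkio/high/blkio.weight
    echo 200 > /sys/fs/cgroup/blkio/low/blkio.weight
    # move the competing workloads into the groups
    echo $PID_A > /sys/fs/cgroup/blkio/high/tasks
    echo $PID_B > /sys/fs/cgroup/blkio/low/tasks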
On Sat, 8 Mar 2014, Stefan Schwarz wrote:
> Hello,
>
> I've lost a complete server and an additional disk on another server
> overnight. I was able to fix everything except one PG that stays
> incomplete.
>
> I wasn't able to rescue the failed single disk. I already tried to get
> the cluste
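For anyone in the same spot, the usual diagnosis goes along these lines (the PG id and OSD id are placeholders):

    # see which PGs are stuck and on which OSDs they are waiting
    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg 2.5f query
    # if the failed disk is definitely unrecoverable, marking its OSD lost
    # lets the PG stop waiting for it (whatever was only on that disk is gone)
    ceph osd lost 12 --yes-i-really-mean-it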
>
> I agree; I have tried it before, and even with tmpfs and removing the logs, USB
> sticks will last only a few months (3-4 at most).
>
Not all USB sticks are the same. Some will last much, much longer than
others before they become read-only.
Is anyone making USB memory sticks built on the sa
Hi,
Has anyone written a book, ebook, etc. about Ceph, covering everything
from the basics to performance tuning?
Thanks,
Alejandro
On Thu, 2014-03-06 at 19:17 -0500, Alfredo Deza wrote:
> >> But what it means is that you kind of deployed monitors that have no
> >> idea how to communicate with the ones that were deployed before.
> > What's the best way to resolve this then?
This. I'd like to get it back to 1 monitor. Any i
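The documented path back to a single monitor is roughly the following (monitor names a/b/c are placeholders):

    # with quorum: simply remove the extra monitors
    ceph mon remove b
    ceph mon remove c
    # without quorum: stop the monitors and edit the monmap offline
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool /tmp/monmap --rm b --rm c
    ceph-mon -i a --inject-monmap /tmp/monmap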
Ceph is seriously badass, but my requirements are to create a cluster in which
I can host my customers' data in separate areas which are independently
encrypted, with passphrases which we as cloud admins do not have access to.
My current thoughts are:
1. Create an OSD per machine stretching ov
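One block-level way to do this is LUKS on top of RBD, so the passphrase stays with the customer (image names and sizes are made up for illustration):

    # one image per customer, encrypted client-side with the customer's passphrase
    rbd create customer1 --size 102400
    rbd map customer1
    cryptsetup luksFormat /dev/rbd0      # prompts for the customer's passphrase
    cryptsetup luksOpen /dev/rbd0 customer1-crypt
    mkfs.xfs /dev/mapper/customer1-crypt
    # the cluster itself only ever stores ciphertext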
Hi guys,
I need to add an additional server, which hosts several OSDs, to a running
Ceph cluster. When adding OSDs, Ceph does not automatically modify
ceph.conf, so I manually modified ceph.conf
and restarted the whole Ceph cluster with the command 'service ceph -a restart'. I
just confuse
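For what it's worth, a full cluster restart is usually unnecessary when adding OSDs; starting only the new daemons should be enough (the OSD ids are placeholders):

    # start just the newly added OSDs instead of 'service ceph -a restart'
    service ceph start osd.12
    service ceph start osd.13
    # confirm they joined and watch the data rebalance
    ceph osd tree
    ceph -w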
Why not have the application encrypt the data, or encrypt at the compute
server's file system? That way you don't have to manage keys.
Seth
On Mar 9, 2014, at 6:09 PM, "Mark s2c" <m...@stuff2cloud.com> wrote:
Ceph is seriously badass, but my requirements are to create a cluster in which
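Seth's application-level suggestion could be as simple as encrypting objects before upload, for example (a sketch; the file, bucket, and use of s3cmd are assumptions, and key handling is left out):

    # encrypt client-side, then push only ciphertext to the gateway
    openssl enc -aes-256-cbc -salt -in report.pdf -out report.pdf.enc
    s3cmd put report.pdf.enc s3://customer1-bucket/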
On 07.03.2014 16:56, Konrad Gutkowski wrote:
> Hi,
>
> If those are journal drives, you could have n+1 SSDs and swap them at
> some interval, though that could introduce more problems.
> If it required the data to be synchronized, one could operate with a
> degraded raid1 to swap disks, which would introduce unneces
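The degraded-raid1 swap Konrad describes would look roughly like this with mdadm (device names are assumptions):

    # build the journal device as a deliberately degraded two-way mirror
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
    # when the SSD is due for replacement, add the fresh disk and let it sync
    mdadm /dev/md0 --add /dev/sdc1
    # then fail and remove the worn one
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1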