I have a server with 2 x 2TB disks. For performance, is it better to combine
them into a single OSD backed by RAID0, or to have two OSDs, each backed by a
single disk? (The journal will be on an SSD in either case.)
My performance need is more about IOPS than overall throughput (maybe that's
a universal thing? :)
Thanks
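For concreteness, a minimal ceph.conf sketch of the two-OSD variant; the
hostname, device paths, and mount points below are assumptions, not from
the thread:

    [osd.0]
        host = server1                        # hostname assumed
        osd data = /var/lib/ceph/osd/ceph-0   # first 2TB disk mounted here
        osd journal = /dev/sdc1               # partition on the shared SSD
    [osd.1]
        host = server1
        osd data = /var/lib/ceph/osd/ceph-1   # second 2TB disk mounted here
        osd journal = /dev/sdc2               # second partition on the same SSD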
Hi,
The RGW bucket index is kept in a single object, so a busy bucket bottlenecks
on a single OSD.
Is sharding the index, or some other change to improve its performance, on
the roadmap?
--
Regards,
Dominik
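As an illustration of why a busy bucket lands on one OSD, the bucket's index
object can be mapped to the OSD that serves it; the pool and object names
below are assumptions based on default naming (they vary by version), not
from this message:

    # ".dir.<bucket marker>" is the index object for a bucket
    ceph osd map .rgw.buckets .dir.default.12345.1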
Hi,
On 07/21/2013 07:20 AM, James Harper wrote:
> I have a server with 2 x 2TB disks. For performance, is it better to
> combine them into a single OSD backed by RAID0, or to have two OSDs,
> each backed by a single disk? (The journal will be on an SSD in either
> case.)
I'd say two disks and not RAID0, since when you are doing parallel I/O
both disks can be doing something completely different.
Hi,
On 07/21/2013 08:14 AM, Sébastien RICCIO wrote:
Hi !
I'm currently trying to get the XenServer tech preview on CentOS 6.4 working
against a test Ceph cluster, and I'm hitting the same issue.
Some info: the cluster is named "ceph", the pool is named "rbd".
ceph.xml:
<pool type='rbd'>
  <name>rbd</name>
  <source>
    <name>rbd</name>
    <host name='...' port='6789'/>
    <auth username='...' type='ceph'>
      <secret uuid='...'/>
    </auth>
  </source>
</pool>
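A definition along these lines would then be loaded with something like:

[root@xen-blade05 ~]# virsh pool-create ceph.xml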
On 07/20/2013 11:42 PM, Wido den Hollander wrote:
On 07/20/2013 05:16 PM, Sage Weil wrote:
On Sat, 20 Jul 2013, Wido den Hollander wrote:
On 07/20/2013 06:56 AM, Jeffrey 'jf' Lim wrote:
On Fri, Jul 19, 2013 at 12:54 PM, Jeffrey 'jf' Lim wrote:
hey folks, I was hoping to be able to use xfs on
Hi,
Thanks a lot for your answer. I was able to create the storage pool with
virsh.
However, it is not listed when I issue a virsh pool-list:
Name                 State      Autostart
-------------------------------------------
If I try to add it again:
[root@xen-blade05 ~]# virsh
Hi again,
[root@xen-blade05 ~]# virsh pool-info rbd
Name: rbd
UUID: ebc61120-527e-6e0a-efdc-4522a183877e
State: running
Persistent: no
Autostart: no
Capacity: 5.28 TiB
Allocation: 16.99 GiB
Available: 5.24 TiB
I managed to get it running. How
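For what it's worth, a pool created with "virsh pool-create" is transient,
which is what the "Persistent: no" above is saying. A sketch of the
persistent route, assuming the definition file is ceph.xml:

    virsh pool-define ceph.xml    # store the definition persistently
    virsh pool-start rbd          # start the pool now
    virsh pool-autostart rbd      # start it whenever libvirtd starts
    virsh pool-list --all         # also lists inactive pools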
Hello.
I am intending to build a Ceph cluster using several Dell C6100 multi-node
chassis servers.
These have only 3 disk bays per node (12 x 3.5" drives across 4 nodes), so I
can't afford to sacrifice a third of my capacity for SSDs. However, fitting
an SSD via PCIe seems a valid option.
Un
On 07/21/13 20:37, Wido den Hollander wrote:
> I'd say two disks and not RAID0, since when you are doing parallel I/O
> both disks can be doing something completely different.
Completely agree; Ceph is already doing the striping :)
On 22/07/2013 08:03, Charles 'Boyo wrote:
> Counting on the kernel's cache, it appears I will be best served
> purchasing write-optimized SSDs?
> Can you share any information on the SSD you are using? Is it PCIe
> connected?
We are on a standard SAS bus so any SSD going to 500MB/s and being
stable o
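For journal use, sustained MB/s matters less than how the drive behaves
under small synchronous writes; a quick destructive sketch (the device name
is an assumption):

    # O_DIRECT + O_DSYNC approximates the OSD journal's write pattern;
    # this writes to the raw device, so only run it on an empty disk
    dd if=/dev/zero of=/dev/sdX bs=4k count=10000 oflag=direct,dsync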
On Mon, Jul 22, 2013 at 08:45:07AM +1100, Mikaël Cluseau wrote:
> On 22/07/2013 08:03, Charles 'Boyo wrote:
> >Counting on the kernel's cache, it appears I will be best served
> >purchasing write-optimized SSDs?
> >Can you share any information on the SSD you are using? Is it PCIe
> >connected?
>