Hi all,
I'm testing Ceph with a 4-server configuration with 300GB 15K
SAS disks used for the OSDs; the journal is included inside the OSD
partition as well. I want to know whether it's possible with Ceph to
obtain 400-700 MB/sec of throughput. I've tested with XFS and btrfs
when doing t
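A common way to sanity-check the aggregate throughput a cluster can sustain is rados bench against a scratch pool; the pool name and PG count below are illustrative assumptions, not values from the original mail:

    # Create a scratch pool and benchmark 60 seconds of writes,
    # keeping the objects so a sequential-read pass can follow.
    ceph osd pool create benchpool 128
    rados bench -p benchpool 60 write --no-cleanup
    rados bench -p benchpool 60 seq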
Hi,
> > The low points are all ~35Mbytes/sec and the high points are all
> > ~60Mbytes/sec. This is very reproducible.
>
> It occurred to me that just stopping the OSDs selectively would allow me to
> see if there was a change when one
> was ejected, but at no time was there a change to the graph
On 01/12/2013 15:22, German Anders wrote:
[...]
ceph@ceph-deploy01:/mnt/ceph-btrfs-test$ for i in 1 2 3 4; do sudo dd
if=/dev/zero of=./a bs=1M count=1000; done
Hello,
You should really write anything but zeros.
I suspect that nothing is really written to disk, especially on btrfs, a
CoW filesystem
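A minimal variant of the quoted loop that avoids the all-zeros pitfall is to pre-generate incompressible data and write that instead; the temp-file path here is an assumption:

    # Generate 1000 MB of random (incompressible) data once...
    sudo dd if=/dev/urandom of=/tmp/random.bin bs=1M count=1000
    # ...then time repeated writes of it, bypassing the page cache
    for i in 1 2 3 4; do
        sudo dd if=/tmp/random.bin of=./a bs=1M oflag=direct conv=fsync
    done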
On Fri, Nov 29, 2013 at 3:19 AM, James Harper
wrote:
> When I do gatherkeys, ceph-deploy tells me:
>
> UnsupportedPlatform: Platform is not supported: debian
>
> Given that I downloaded ceph-deploy from the ceph.com debian repository, I'm
> hoping that Debian is supported and that I have something
On Thu, Nov 28, 2013 at 8:25 AM, Jonas Andersson wrote:
> Hi all,
>
>
>
> I am seeing some weirdness when trying to deploy Ceph Emperor on Fedora 19
> using ceph-deploy. The problem occurs when trying to install ceph-deploy, and
> seems to point to the version of pushy in your repository:
>
>
Since c
On Fri, Nov 29, 2013 at 1:35 AM, Alexis GÜNST HORN
wrote:
> Hello all,
>
> I use ceph-deploy heavily and it works really well.
> I've just one question: is there an option (I have not found one) or a way to let
> ceph-deploy osd create ...
> create an OSD with a weight of 0?
There is no option t
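That said, the effect usually wanted here (new OSDs joining with no data until explicitly reweighted) can be approximated in ceph.conf; the option name below is taken from Ceph releases of that era and should be verified against the version in use:

    [osd]
    ; assumed option: new OSDs register in the CRUSH map with weight 0
    osd crush initial weight = 0

Once such an OSD is up, it can be brought into service gradually with
ceph osd crush reweight osd.<id> <weight>.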
>
> On Fri, Nov 29, 2013 at 3:19 AM, James Harper
> wrote:
> > When I do gatherkeys, ceph-deploy tells me:
> >
> > UnsupportedPlatform: Platform is not supported: debian
> >
> > Given that I downloaded ceph-deploy from the ceph.com debian
> repository, I'm hoping that Debian is supported and that
Hi Gilles,
Thanks a lot for the answer. I've made a new benchmark test with
fio, and I've used the following configuration for the test:
; Four threads, two query, two writers.
[global]
rw=randread
size=256m
directory=/mnt/ceph-btrfs-test
ioengine=libaio
iodepth=4
invalidate=1
direct=1
[bgwriter]
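; (The mail is truncated here. The header comment and global section match
; fio's stock four-thread example job file, so it presumably continued
; roughly as follows -- reconstructed from fio's documentation, not
; recovered from the mail.)
rw=randwrite
iodepth=32

[queryA]
iodepth=1
ioengine=mmap
direct=0
thinktime=3

[queryB]
iodepth=1
ioengine=mmap
direct=0
thinktime=5

[bgupdater]
rw=randrw
iodepth=16
thinktime=40
size=32m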
On Sun, Dec 1, 2013 at 2:33 PM, James Harper
wrote:
>>
>> On Fri, Nov 29, 2013 at 3:19 AM, James Harper
>> wrote:
>> > When I do gatherkeys, ceph-deploy tells me:
>> >
>> > UnsupportedPlatform: Platform is not supported: debian
>> >
>> > Given that I downloaded ceph-deploy from the ceph.com debian
>
> ceph-deploy uses Python to detect information for a given platform;
> can you share what this command gives
> as output?
>
> python -c "import platform; print platform.linux_distribution()"
>
Servers that 'gatherkeys' does work on:
('debian', '7.1', '')
('debian', '7.2', '')
Servers that '
On Sun, Dec 1, 2013 at 6:47 PM, James Harper
wrote:
>>
>> ceph-deploy uses Python to detect information for a given platform;
>> can you share what this command gives
>> as output?
>>
>> python -c "import platform; print platform.linux_distribution()"
>>
>
> Servers that 'gatherkeys' does work on:
>
> On Sun, Dec 1, 2013 at 6:47 PM, James Harper
> wrote:
> >>
> >> ceph-deploy uses Python to detect information for a given platform;
> >> can you share what this command gives
> >> as output?
> >>
> >> python -c "import platform; print platform.linux_distribution()"
> >>
> >
> > Servers that '
> Hi,
>
> > > The low points are all ~35Mbytes/sec and the high points are all
> > > ~60Mbytes/sec. This is very reproducible.
> >
> > It occurred to me that just stopping the OSDs selectively would allow me to
> > see if there was a change when one
> > was ejected, but at no time was there a change to the graph
My OSD servers currently have 4 network ports, configured as:
eth0 - lan/osd public
eth1 - unused
eth2 - osd cluster network #1
eth3 - osd cluster network #2
Each server has two OSDs; one is configured on osd cluster network #1, the
other on osd cluster network #2. This avoids any messing around
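A sketch of how that split can be expressed in ceph.conf; the subnets, addresses, and OSD ids below are illustrative assumptions, not values from the original mail:

    [global]
    public network = 192.168.0.0/24    ; eth0 - lan/osd public

    [osd.0]
    cluster addr = 10.0.1.10           ; eth2 - osd cluster network #1

    [osd.1]
    cluster addr = 10.0.2.10           ; eth3 - osd cluster network #2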