Thanks Jamie,
I have not tried bonnie++. I was trying to keep it to sequential IO for
comparison, since that is all RADOS bench can do. I did do a full IO test
in a Windows VM using SQLIO. I have both read/write sequential/random
results for 4/8/64K blocks from that test. I also have access to a Dell E
The iflag addition should help with at least having more accurate reads via
dd, but in terms of actually testing performance, have you tried sysbench
or bonnie++?
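For reference, an 8K sequential-read run with several threads could be
driven with sysbench's fileio test along these lines (option names are from
the sysbench releases current at the time, and the file size here is only a
placeholder):

sysbench --test=fileio --file-total-size=8G prepare
sysbench --test=fileio --file-total-size=8G --file-test-mode=seqrd \
    --file-block-size=8K --num-threads=8 --max-time=60 run
sysbench --test=fileio --file-total-size=8G cleanup

bonnie++ gives a similar picture with something like 'bonnie++ -d /mnt/test
-s 256g -n 0', where the file size should be at least twice the machine's
RAM and the directory is whatever mount point sits on the disk under test.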
I'd be curious how things change with multiple IO threads, as dd isn't
necessarily a good performance investigation tool (you're rather
Thanks Jamie,
I tried that too, but got similar results. The issue looks to possibly be
with latency, but everything is running on one server, so logically I
would think there would be no latency; according to this, though, there may
be something causing the slow results. See Co-Residency:
http://cep
I thought I'd just throw this in there, as I've been following this
thread: dd also has an 'iflag' directive just like the 'oflag'.
I don't have a deep, offhand recollection of the caching mechanisms at play
here, but assuming you want a solid synchronous / non-cached read, you
should probably specify 'iflag=direct' on the read side as well.
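A minimal sketch of that non-cached read, reusing the ddbenchfile from
earlier in the thread:

dd if=ddbenchfile of=/dev/null bs=8K iflag=direct

iflag=direct asks dd to bypass the page cache on the read side, so the
number should reflect the device rather than RAM.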
Mike,
So I do have to ask: where would the extra latency be coming from if all my
OSDs are on the same machine that my test VM is running on? I have tried
every SSD tweak in the book. The primary concern I see is with read
performance of sequential IOs in the 4-8K range. I would expect
Thanks Mike,
High hopes, right ;)
I guess we are not doing too badly compared to your numbers then. Just wish
the gap was a little closer between native and Ceph per OSD.
C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s30 -o8 -fsequential -b1024 -BH
-LS
c:\TestFile.dat
sqlio v1.5.SG
using system counter
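For the 4-8K range being discussed, the matching random-read run would
presumably look something like the following (same file and thread count;
as best I recall SQLIO's switches, -kR selects reads, -frandom random IO
and -b8 an 8 KB block, but treat the exact flags as an assumption):

sqlio -kR -t8 -s30 -o8 -frandom -b8 -BH -LS c:\TestFile.dat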
Well, in a word, yes. You really expect a network-replicated storage system in
user space to be comparable to direct-attached SSD storage? For what it's
worth, I've got a pile of regular spinning rust; this is what my cluster will
do inside a VM with RBD writeback caching on. As you can see, l
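For anyone reproducing that, RBD writeback caching is normally switched on
client-side in ceph.conf; a minimal sketch (option name as of the Ceph
releases of that era, so double-check against your version):

[client]
    rbd cache = true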
Any other thoughts on this thread, guys? Am I just crazy to want near-native
SSD performance on a small SSD cluster?
On Wed, Sep 18, 2013 at 8:21 AM, Jason Villalta wrote:
> That dd gives me this.
>
> dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
> 8192000000 bytes (8.2 GB) copied, 3
That dd gives me this.
dd if=ddbenchfile of=- bs=8K | dd if=- of=/dev/null bs=8K
8192000000 bytes (8.2 GB) copied, 31.1807 s, 263 MB/s
Which makes sense, because the SSD is running as SATA 2, which should give
3 Gbps, or ~300 MB/s after encoding overhead.
I am still trying to better understand the speed difference between the
On 17 Sep 2013, at 21:47, Jason Villalta wrote:
> dd if=ddbenchfile of=/dev/null bs=8K
> 8192000000 bytes (8.2 GB) copied, 19.7318 s, 415 MB/s
As a general point, this benchmark may not do what you think it does, depending
on the version of dd, as writes to /dev/null can be heavily optimised.
[ceph-users] Ceph performance with 8K blocks.
>> >>
>> >> Thanks for your feedback, it is helpful.
>> >>
>> >> I may have been wrong about the default Windows block size. What
>> would be the best tests to compare native performance of the SSD disks at
>> 4K bloc
llalta"
> *To: *"Bill Campbell"
> *Cc: *"Gregory Farnum" , "ceph-users" <
> ceph-users@lists.ceph.com>
> *Sent: *Tuesday, September 17, 2013 11:31:43 AM
>
> *Subject: *Re: [ceph-users] Ceph performance with 8K blocks.
>
> Thanks fo
__
From: "Jason Villalta"
To: "Bill Campbell"
Cc: "Gregory Farnum" , "ceph-users"
Sent: Tuesday, September 17, 2013 11:31:43 AM
Subject: Re: [ceph-users] Ceph performance with 8K blocks.
Thanks for your feedback, it is helpful.
I may have been wrong about the d
s or
> on the same disk as the OSD? What is the replica size of your pool?
> >>
> >>
> >> From: "Jason Villalta"
> >> To: "Bill Campbell"
> >> Cc: "Gregory Farnum" , "ceph-user
________
>> From: "Jason Villalta"
>> To: "Bill Campbell"
>> Cc: "Gregory Farnum" , "ceph-users"
>>
>> Sent: Tuesday, September 17, 2013 11:31:43 AM
>>
>> Subject: Re: [ceph-users] Ceph performance w
performance is going to seem good. You can add
>>>> the 'oflag=direct' to your dd test to try and get a more accurate reading
>>>> from that.
>>>>
>>>> RADOS performance from what I've seen is largely going to hinge on
>>>> replica siz
As Gregory mentioned, your 'dd' test looks to be reading from the cache (you
are writing 8GB in, and then reading that 8GB out, so the reads are all
cached reads), so the performance is going to seem good. You can add the
'oflag=direct' to your dd test to try and get a more accurate reading from
that.
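A hedged sketch of the write side of that test, sized to match the roughly
8 GB file used elsewhere in the thread (1,000,000 x 8K = 8192000000 bytes):

dd if=/dev/zero of=ddbenchfile bs=8K count=1000000 oflag=direct

oflag=dsync is the stricter variant if you want every 8K write flushed
before the next one is issued, which is closer to what rados bench and the
SQL workloads are doing.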
>> Windows default (NTFS) is a 4k block. Are you changing the allocation
>> unit to 8k as a default for your configuration?
>>
>> ----------
>> From: "Gregory Farnum"
>> To: "Jason Villalta"
>> Cc: cep
> To: "Jason Villalta"
> Cc: ceph-users@lists.ceph.com
> Sent: Tuesday, September 17, 2013 10:40:09 AM
> Subject: Re: [ceph-users] Ceph performance with 8K blocks.
>
>
> Your 8k-block dd test is not nearly the same as your 8k-block rados bench
> or SQL tes
Oh, and you should run some local sync benchmarks against these drives to
figure out what sort of performance they can deliver with two write streams
going on, too. Sometimes the drives don't behave the way one would expect.
-Greg
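A rough way to approximate that with plain dd is two concurrent synchronous
streams (the /mnt/ssd paths are only placeholders for wherever the drive
under test is mounted):

dd if=/dev/zero of=/mnt/ssd/stream1 bs=8K count=100000 oflag=dsync &
dd if=/dev/zero of=/mnt/ssd/stream2 bs=8K count=100000 oflag=dsync &
wait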
On Tuesday, September 17, 2013, Gregory Farnum wrote:
> Your 8k-bl
Subject: Re: [ceph-users] Ceph performance with 8K blocks.
Your 8k-block dd test is not nearly the same as your 8k-block rados bench or
SQL tests. Both rados bench and SQL require the write to be committed to disk
before moving on to the next one; dd is simply writing into the page cache. So
you
Your 8k-block dd test is not nearly the same as your 8k-block rados bench
or SQL tests. Both rados bench and SQL require the write to be committed to
disk before moving on to the next one; dd is simply writing into the page
cache. So you're not going to get 460 or even 273MB/s with sync 8k
writes r
Hello all,
I am new to the list.
I have a single machine set up for testing Ceph. It has dual 6-core
processors (12 cores total) and 128GB of RAM. I also have 3 Intel 520
240GB SSDs, with an OSD on each disk and the OSD data and journal in
separate partitions formatted with ext4.
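For the RADOS-level side of the comparison, an 8K-block run can be driven
with rados bench along these lines (pool name, duration and concurrency are
only placeholders; --no-cleanup keeps the objects around so the seq pass
has something to read):

rados bench -p rbd 60 write -b 8192 -t 16 --no-cleanup
rados bench -p rbd 60 seq -t 16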
My goal here