Thanks all for your great explanation.
Regards
Pragya Jain
On Saturday, 30 August 2014 4:51 PM, Joao Eduardo Luis wrote:
>
>
>On 08/30/2014 08:03 AM, pragya jain wrote:
>> Thanks Greg, Joao and David,
>>
>> The concept of why an odd number of monitors is preferred is clear to me,
>> but still I a
On 01/09/14 17:10, Alexandre DERUMIER wrote:
Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS so is reasonably
believable). So anyway we are not getting anywhere near the max IOPS
from our devices.
Hi,
Just check this:
htt
Yes, Crucial is not suitable for this. If you write sequential data like the
journal for around 1-2 hours, the speed goes down to 80 MB/s.
It also has very low performance in sync/flush mode, which the journal
uses.
Stefan
Excuse my typos; sent from my mobile phone.
On 01.09.2014 at 07:10, ... wrote:
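As an aside on the sync/flush point above: a quick way to see that penalty is an O_DSYNC micro-benchmark. A minimal sketch in Python (the path and iteration count are made up, not from the thread):

    import os, time

    # O_DSYNC makes each write wait for the device flush, like journal IO.
    path = '/mnt/ssd/syncfile'          # hypothetical mount point
    block = b'\0' * 4096                # 4k writes
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC)
    n = 1000
    t0 = time.time()
    for _ in range(n):
        os.write(fd, block)             # blocks until the write is stable
    elapsed = time.time() - t0
    os.close(fd)
    print(f"{n / elapsed:.0f} sync IOPS, "
          f"{n * 4 / 1024 / elapsed:.2f} MB/s")

Consumer SSDs that look fast in cached benchmarks often collapse under this pattern, which is the behaviour Stefan describes.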
>>Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS
>>(running fio on the filesystem I've seen 70K IOPS so is reasonably
>>believable). So anyway we are not getting anywhere near the max IOPS
>>from our devices.
Hi,
Just check this:
http://www.anandtech.com/show/7864/crucia
On 01/09/14 12:36, Mark Kirkwood wrote:
Allegedly this model ssd (128G m550) can do 75K 4k random write IOPS
(running fio on the filesystem I've seen 70K IOPS so is reasonably
believable). So anyway we are not getting anywhere near the max IOPS
from our devices.
We use the Intel S3700 for prod
Yes, as Jason suggests - 27 IOPS at 4k blocks is:
27 x 4 KB / 1024 = ~0.1 MB/s
While the RBD volume is composed of 4MB objects - many of the
(presumably) random IOs of 4k blocks can reside in the same 4MB object,
so it is tricky to estimate how many 4MB objects need to be
rewritten eac
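To make the arithmetic explicit, a worked sketch (illustrative only, not from the thread):

    # Worked version of the throughput arithmetic above.
    iops = 27
    block_kb = 4                        # fio bs=4k
    print(iops * block_kb / 1024)       # ~0.105 MB/s, i.e. roughly 0.1 MB/s

    # An RBD image is striped over 4MB objects, so one object holds
    # 4MB / 4KB = 1024 distinct 4k blocks; random 4k writes can therefore
    # land in far fewer objects than the raw IO count suggests.
    print((4 * 1024) // 4)              # 1024 blocks per object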
Hi all:
Apologies if this question has been asked before.
I noticed that since librbd doesn't have a daemon context, there seems to be
no way to retrieve librbd logs or tune the librbd configuration; but since
librbd is an important part of the virtual machine IO stack, it may be
helpful if we could get its log.
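One hedged workaround: librbd reads client-side options when the cluster handle is created, so logging can be enabled per process through the configuration. A minimal sketch using the python-rados binding (option values and the log path are illustrative):

    import rados

    # Assumes python-rados is installed and a client keyring is available.
    cluster = rados.Rados(
        conffile='/etc/ceph/ceph.conf',
        conf={
            'debug_rbd': '20/20',                # verbose librbd logging
            'log_file': '/tmp/librbd.$pid.log',  # $pid keeps per-VM logs apart
        })
    cluster.connect()
    print(cluster.conf_get('debug_rbd'))         # confirm the override applied
    cluster.shutdown()

The same options can be set in the [client] section of ceph.conf for QEMU-launched guests.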
Somnath,
on the small workload performance, 107K is higher than the theoretical IOPS
of 520; any idea why?
>>Single client is ~14K IOPS, but it scales as the number of clients increases.
10 clients: *~107K* IOPS. ~25 CPU cores are used.
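A quick back-of-the-envelope check of the scaling in those numbers (a sketch, not from the thread):

    single = 14_000       # IOPS with one client
    ten = 107_000         # IOPS with ten clients
    print(f"{ten / (10 * single):.0%} of linear scaling")   # ~76%

So ten clients reach roughly three quarters of perfectly linear scaling from the single-client figure.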
2014-09-01 11:52 GMT+08:00 Jian Zhang :
> Somnath,
> on the sma
Somnath,
on the small workload performance,
2014-08-29 14:37 GMT+08:00 Somnath Roy :
> Thanks Haomai !
>
> Here is some of the data from my setup.
>
>
>
>
> -
Guess you should multiply 27 by bs=4k?
Jason
2014-08-29 15:52 GMT+08:00 lixue...@chinacloud.com.cn <
lixue...@chinacloud.com.cn>:
>
> guys:
> There's a Ceph cluster running, with nodes connected by 10Gb
> cable. We set fio's bs=4k, and the object size of the RBD images is 4MB.
> Client node
As the names suggest, the former removes the object from the store, while
the latter deletes only the bucket index entry.
Check the code for more details.
Jason
2014-08-29 19:09 GMT+08:00 zhu qiang :
> Hi all,
> From the radosgw-admin command:
> # radosgw-admin object rm --object=my_test_file.txt --buc
On 31/08/14 17:55, Mark Kirkwood wrote:
On 29/08/14 22:17, Sebastien Han wrote:
@Mark thanks for trying this :)
Unfortunately using nobarrier and another dedicated SSD for the
journal (plus your ceph setting) didn't bring much; now I can reach
3.5K IOPS.
By any chance, would it be possible for you
If you want your data to be N+2 redundant (able to handle 2 failures, more
or less), then you need to set size=3 and have 3 replicas of your data.
If you want your monitors to be N+2 redundant, then you need 5 monitors.
If you feel that your data is worth size=3, then you should really try to
hav
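To make the monitor arithmetic concrete: monitors form a Paxos quorum, which needs a strict majority alive. A small worked check (illustrative):

    # Ceph monitors form a Paxos quorum: a strict majority must be alive.
    for n in range(1, 8):
        quorum = n // 2 + 1
        print(f"{n} mons: quorum={quorum}, tolerates {n - quorum} down")
    # 3 mons tolerate 1 failure, 5 tolerate 2 (the N+2 case above); an even
    # count adds no tolerance over the odd count below it, which is why odd
    # numbers are preferred.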
Hi Ceph,
In a mixed dumpling / emperor cluster, because osd 2 has been removed but is
still in
"might_have_unfound": [
{ "osd": 2,
"status": "osd is down"},
{ "osd": 6,
"status": "already probed"}],
and because of tha
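For anyone digging into the same state, a hedged sketch (the pg id is hypothetical, and the output shape is assumed to match the dumpling-era `ceph pg <pgid> query` JSON quoted above) that lists the peers a PG would still probe for unfound objects:

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'pg', '2.5', 'query', '--format=json'])
    for state in json.loads(out).get('recovery_state', []):
        for peer in state.get('might_have_unfound', []):
            print(f"osd.{peer['osd']}: {peer['status']}")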