Can the librbd interface provide an abort API for aborting IO? If yes, can
the abort interface detach the write buffer immediately? I would like to
reuse the write buffer quickly after issuing the abort request, rather than
waiting for the IO to be aborted on the OSD side.
thanks.
Jason Dillaman :
> There is no such interface currently on the librados / OSD side to abort
> IO operations. Can you provide some background on your use-case for
> aborting in-flight IOs?
>
> --
>
> Jason Dillaman
>
>
Hi, my code gets a segmentation fault when using librbd to do sync read IO.
From the trace, I can see several read IOs return successfully, but the
last read IO (2015-10-31 08:56:34.804383) never returns and my code gets a
segmentation fault. I used the rbd_read interface and malloc'd a buffer f
This segmentation fault should happen in the rbd_read function: I can see my
code call this function and then hit the segmentation fault, which means
rbd_read had not completed successfully when the segmentation fault happened.
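Not the poster's code, but a minimal sketch of the sync-read pattern being
described, using the librbd C API (pool "hello" and image "img-003" are
borrowed from another message in this digest; error handling on the setup
calls is omitted). A common cause of a crash in this path is passing a
buffer smaller than the requested read length, so the sketch sizes the
buffer from the read length and checks the return value:

#include <stdio.h>
#include <stdlib.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    size_t len = 4096;
    char *buf = malloc(len);          /* buffer must cover the full read length */
    if (!buf)
        return 1;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);   /* default ceph.conf search path */
    rados_connect(cluster);
    rados_ioctx_create(cluster, "hello", &ioctx);
    rbd_open(ioctx, "img-003", &image, NULL);

    ssize_t r = rbd_read(image, 0 /* offset */, len, buf);
    if (r < 0)
        fprintf(stderr, "rbd_read failed: %zd\n", r);
    else
        printf("read %zd bytes\n", r);

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    free(buf);
    return 0;
}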
Hi cephers, I tried to use the following command to create an image, but
unfortunately the command hung for a long time until I interrupted it with
Ctrl-Z.
rbd -p hello create img-003 --size 512
So I checked the cluster status, which showed:
cluster 0379cebd-b546-4954-b5d6-e13d08b7d2f1
health HEALT
Hi, I set up a ceph cluster for storing pictures. I want to introduce a
data mining program on the ceph OSD nodes to dig out objects with certain
properties. I hope some kind of map-reduce framework can use the ceph object
interface directly rather than the POSIX file system interface.
Can somebody help give s
oop once,
> maybe search for that?
> Your other option is to try and make use of object classes directly, but
> that's a bit primitive to build full map-reduce on top of without a lot of
> effort.
> -Greg
>
>
> On Friday, November 13, 2015, min fang wrote:
>
>
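Greg's suggestions aside, if the goal is simply to process objects through
the RADOS object interface directly, a hypothetical "map" step can iterate a
pool with librados. A minimal sketch (pool name "pictures" is a placeholder;
assumes a librados version that provides the rados_nobjects_list_* calls):

#include <stdio.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rados_list_ctx_t list_ctx;
    const char *oid;

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);
    rados_connect(cluster);
    rados_ioctx_create(cluster, "pictures", &ioctx);

    rados_nobjects_list_open(ioctx, &list_ctx);
    while (rados_nobjects_list_next(list_ctx, &oid, NULL, NULL) == 0) {
        /* "map" step: read the object, extract properties, emit a record */
        printf("object: %s\n", oid);
    }
    rados_nobjects_list_close(list_ctx);

    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}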
Is this function used to detach the rx buffer and complete the IO back to
the caller? From the code, I think this function does not interact with the
OSD or MON side, which means we just cancel the IO on the client side. Am I
right?
Thanks.
Hi, I ran a random fio test with rwmixread=70 and found read IOPS is 707 and
write IOPS is 303 (see below). These values are lower than the pure random
write and random read results: the 4K random write IOPS is 529 and the 4K
randread IOPS is 11343. Apart from the rw type, all other parameters are the
same.
I do n
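For reference, a command along these lines reproduces the kind of mixed test
described above; only rwmixread=70 and the 4K block size come from the
message, while the device path, queue depth and runtime are placeholders:

fio --name=randrw-test --filename=/dev/rbd0 --direct=1 --ioengine=libaio \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=64 --runtime=60 --time_based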
Hi, is there a document describing librbd compatibility? For example,
something like this: librbd from Ceph 0.88 can also be used with 0.90,
0.91, ...
I hope librbd stays relatively stable, so I can avoid more code iteration
and testing.
Thanks.
Hi, I created a new ceph cluster and created a pool, but I see "stuck
unclean since forever" errors (as shown below). Can someone help point out
the possible reasons for this? Thanks.
ceph -s
cluster 602176c1-4937-45fc-a246-cc16f1066f65
health HEALTH_WARN
8 pgs degraded
chooseleaf firstn 0 type host
step emit
}
# end crush map
2016-06-22 18:27 GMT+08:00 Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de>:
> Hi,
>
> On 06/22/2016 12:10 PM, min fang wrote:
Hi, as I understand it, at the PG level IOs are executed sequentially, as in
the following cases:
Case 1:
Write A, Write B, Write C to the same data area in a PG --> A committed,
then B committed, then C. The final data will be from write C. It is
impossible that mixed (A, B, C) data ends up in the data area.
Hi, I created a 40-OSD ceph cluster with 8 PM863 960G SSDs as journals. One
SSD is used as the journal for 5 OSD drives. The SSD 512 random write
performance is about 450MB/s, but the whole cluster's sequential write
throughput is only 800MB/s. Any suggestions on improving sequential write
performance? Thanks.
I used 2 copies, not 3, so it should be 1000MB/s in theory. Thanks.
2016-09-29 17:54 GMT+08:00 Nick Fisk :
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of min fang
> Sent: 29 September 2016 10:34
> To: ceph-users
> Subject: [ceph-users] ceph wr
Hi, I have a 2-port 10Gb NIC installed in the ceph client, but I just want
to use one NIC port to do ceph IO. How can I achieve this?
Thanks.
Hi, I have a 2-port 10Gb NIC installed in the ceph client, but I just want
to use one NIC port to do ceph IO. The other port in the NIC will be
reserved for other purposes.
Does ceph currently support choosing which NIC port to use for IO?
Thanks.
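As far as I know there is no per-NIC-port option in the client itself: which
port carries the traffic is decided by which interface can reach the
cluster's public network, i.e. by IP addressing and routing on the client.
On the cluster side the networks are declared in ceph.conf, roughly like
this (subnets are placeholders):

[global]
    # client and MON/OSD front-side traffic (placeholder subnet)
    public network = 192.168.10.0/24
    # OSD replication/recovery traffic (placeholder subnet)
    cluster network = 192.168.20.0/24

Giving the client an address on the public network only on the desired port
(or routing that subnet through it) should steer all ceph IO over that port.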
Hi, I am looking for an OS for my ceph cluster. From
http://docs.ceph.com/docs/master/start/os-recommendations/#infernalis-9-1-0,
two operating systems have been fully tested, CentOS 7 and Ubuntu 14.04.
Which one is better? Thanks.
Hi, as I understand it, a write IO commits data to the journal first, then
gives a safe callback to the ceph client. So it is possible that data is
still in the journal when I send a read IO to the same area. What data will
be returned if the new data is still in the journal?
Thanks.
2015-12-31 10:29 GMT+08:00 Dong Wu :
> there are two callbacks: committed and applied, committed means write
> to all replica's journal, applied means write to all replica's file
> system. so when applied callback return to client, it means data can
> be read.
>
> Regards,
> Zhi Zhang (David)
> Contact: zhang.david2...@gmail.com
> zhangz.da...@outlook.com
>
>
> On Thu, Dec 31, 2015 at 10:33 AM, min fang
> wrote:
> > yes, the question here is that librbd uses the committed callback, as my
> > understand
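For reference, a minimal sketch of how the two notifications discussed above
surface in the librados C API of that era: a completion carries separate
"complete" (ack) and "safe" (commit) callbacks. Pool and object names are
placeholders, and later releases collapse the two callbacks into one:

#include <stdio.h>
#include <rados/librados.h>

static void on_complete(rados_completion_t c, void *arg)
{
    /* "ack"/applied-style notification */
    printf("write acked\n");
}

static void on_safe(rados_completion_t c, void *arg)
{
    /* "commit"/committed-to-stable-storage notification */
    printf("write committed\n");
}

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rados_completion_t comp;
    const char buf[] = "hello";

    rados_create(&cluster, NULL);
    rados_conf_read_file(cluster, NULL);
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &ioctx);   /* placeholder pool */

    rados_aio_create_completion(NULL, on_complete, on_safe, &comp);
    rados_aio_write(ioctx, "test-obj", comp, buf, sizeof(buf), 0);

    rados_aio_wait_for_safe(comp);   /* block until committed on all replicas */
    rados_aio_release(comp);

    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}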
Hi, can the rbd block_name_prefix be changed? Is it constant for an rbd
image? Thanks.
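As far as I know, the block_name_prefix is derived from the image's internal
id when the image is created and stays fixed for the life of the image; it
can be inspected but not changed. For example (pool and image names are
placeholders):

# Show the image header; block_name_prefix appears in the output
rbd -p rbd info img-001 | grep block_name_prefix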
Hi, I did fio testing on my ceph cluster and found that ceph random read
performance is better than sequential read. Does that match your experience?
Thanks.
>> throughput when we say performance is better. However, application
>> random read performance "generally" implies an interest in lower latency
>> - which of course is much more involved from a testing perspective.
>>
>> Cheers
>> Wade
>>
>>
>>
Hi, I set the following parameters in ceph.conf
[client]
rbd cache=true
rbd cache size= 25769803776
rbd readahead disable after byte=0
Then I mapped an rbd image to an rbd device and ran fio 4k read testing with
this command:
./fio -filename=/dev/rbd4 -direct=1 -iodepth 64 -thread -rw=read
-ioengine=aio -bs
Shinobu Kinjo wrote:
>
>> You may want to set "ioengine=rbd", I guess.
>>
>> Cheers,
>>
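Following Shinobu's suggestion, a fio job using the rbd engine would look
roughly like this (requires a fio build with rbd support; pool, image and
client names are placeholders):

[rbd-4k-read]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=img-003
rw=read
bs=4k
iodepth=64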
/sys/class/block/rbd4/queue/read_ahead_kb
>
> Adrien
>
>
>
> On Tue, Mar 1, 2016 at 12:48 PM, min fang wrote:
>
>> I can use the following command to change the parameter, for example as
>> follows, but I am not sure whether it will work.
>>
>> cep
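For a kernel-mapped rbd device, readahead can also be checked and changed
through the sysfs path Adrien mentions; the value below is just an example:

cat /sys/class/block/rbd4/queue/read_ahead_kb
echo 4096 | sudo tee /sys/class/block/rbd4/queue/read_ahead_kb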
Dear all, I used osd dump to extract the OSD map and found up_from and
up_thru in the listing. What is the difference between up_from and up_thru?
osd.0 up in weight 1 up_from 673 up_thru 673 down_at 670
last_clean_interval [637,669)
Thanks.
Hi, as I understand it, a ceph rbd image is divided into multiple objects
based on the LBA address.
My question is:
if two clients write to the same LBA address, such as client A writing ""
to LBA 0x123456 and client B writing "" to the same LBA,
the LBA address and data will only be in an ob
on the same image
> you need a clustering filesystem on top of RBD (e.g. GFS2) or the
> application needs to provide its own coordination to avoid concurrent
> writes to the same image extents.
>
> --
>
> Jason Dillaman
>
>
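One way an application can provide its own coordination, as Jason describes,
is to take an advisory lock on the backing object before writing. This is
only an illustrative sketch (object, lock and cookie names are placeholders),
not something prescribed in the reply:

#include <errno.h>
#include <rados/librados.h>

/* Sketch: serialize writers to one RADOS object with an advisory lock.
 * Real code would retry instead of giving up when the lock is busy. */
int write_with_lock(rados_ioctx_t ioctx, const char *oid,
                    const char *buf, size_t len, uint64_t off)
{
    int r = rados_lock_exclusive(ioctx, oid, "write-lock", "client-A",
                                 "serialize extent writes", NULL, 0);
    if (r == -EBUSY)
        return r;          /* another client holds the lock; retry later */
    if (r < 0)
        return r;

    r = rados_write(ioctx, oid, buf, len, off);   /* the guarded write */

    rados_unlock(ioctx, oid, "write-lock", "client-A");
    return r;
}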
I am confused about ceph/ceph-qa-suite and ceph/teuthology. Which one should
I use? Thanks.
2016-04-11 23:58 GMT+08:00 Gregory Farnum :
> If you've got the time to run teuthology/ceph-qa-suite on it, that would
> be awesome!
>
> But really if you've got it running now, you're probably good. You can
Hi, my ceph cluster has two pools, an SSD cache tier pool and a SATA backend
pool. For this configuration, do I need to use an SSD as the journal device?
I do not know whether the cache tier takes over the journal role. Thanks.
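For context, a cache tier like the one described is typically attached with
commands along these lines (pool names are placeholders). Note that with
filestore each OSD, in either tier, still writes through its own journal, so
tiering itself does not replace the journal:

# Attach an SSD pool as a writeback cache in front of a SATA pool
ceph osd tier add sata-pool ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay sata-pool ssd-cache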
> On 21.04.2016 at 13:27, min fang wrote:
> > Hi, my ceph cluster has two pools, ssd cache tier pool and SATA backend
> > pool. For this configura