ad::entry()+0xd) [0x556839c2280d]
18: (()+0x8184) [0x7f3070842184]
19: (clone()+0x6d) [0x7f306ed9637d]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed
to interpret this.
On Wed, Feb 8, 2017 at 9:13 AM, Shinobu Kinjo wrote:
>
>
> On Wed, Feb 8, 2017 at 3:05 PM, Ahmed Khuraidah
:03 AM, Shinobu Kinjo wrote:
> Are you using the open-source Ceph packages or the SUSE ones?
>
> On Sat, Feb 4, 2017 at 3:54 PM, Ahmed Khuraidah
> wrote:
>
>> I have opened a ticket on http://tracker.ceph.com/
>>
>> http://tracker.ceph.com/issues/18816
>>
>>
# uname -a
Linux cephnode 4.4.38-93-default #1 SMP Wed Dec 14 12:59:43 UTC 2016
(2d3e9d4) x86_64 x86_64 x86_64 GNU/Linux
Thanks
On Fri, Feb 3, 2017 at 1:59 PM, John Spray wrote:
> On Fri, Feb 3, 2017 at 8:07 AM, Ahmed Khuraidah
> wrote:
> > Thank you guys,
> >
AM, Wido den Hollander wrote:
> >
> >> On 2 February 2017 at 15:35, Ahmed Khuraidah <
> abushi...@gmail.com> wrote:
Hi all,
I am still confused about my CephFS sandbox.
When I run a simple FIO random-read test against a single file with a size
of 3G, I get far too many IOPS:
cephnode:~ # fio payloadrandread64k3G
test: (g=0): rw=randread, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio,
iodepth=2
fio-2.13
Starting 1 process
test
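For reference, here is a hypothetical reconstruction of the `payloadrandread64k3G` job file, pieced together from the fio output above (`rw=randread`, `bs=64K`, `ioengine=libaio`, `iodepth=2`, job name `test`, 3G file); the `directory` path is an assumption:

```ini
; hypothetical reconstruction of payloadrandread64k3G, based on the
; fio output in this thread; directory is an assumed CephFS mount point
[test]
rw=randread
bs=64k
ioengine=libaio
iodepth=2
size=3G
directory=/mnt/cephfs   ; assumption: wherever CephFS is mounted
; adding direct=1 here would open the file with O_DIRECT and bypass the
; client page cache, which helps separate cache hits from real disk reads
```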
Udo Lembke wrote:
> Hi,
>
> I don't use MDS, but I think it's the same as with RBD: the data that
> has been read is cached on the OSD nodes.
>
> The 4MB chunks of the 3G file fit completely in the cache; those of the
> larger file do not.
>
>
> Udo
>
>
On 18.01.2017 07:50, Ahmed Khuraidah wrote:
Looking forward to getting some good help here.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
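Udo's caching point earlier in the thread can be sanity-checked with quick arithmetic, assuming the default 4 MB CephFS/RADOS object size:

```shell
# Count the RADOS objects behind each test file (default 4 MB object size).
obj_mb=4
for file_gb in 3 320; do
    objects=$(( file_gb * 1024 / obj_mb ))
    echo "${file_gb}G file -> ${objects} objects of ${obj_mb}MB each"
done
# A 3G file (768 objects, ~3 GB in total) fits easily in an OSD node's
# page cache; 320G (81920 objects) does not, so that run must hit disk.
```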
Hello community,
I need your help to understand a little bit more about the current MDS
architecture.
I have created a one-node CephFS deployment and tried to test it with fio. I
used two file sizes, 3G and 320G. My question is why I get around 1k+
IOPS when performing random reads from the 3G file int
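As a rough plausibility check (this assumes spinning disks behind the OSDs, which the thread does not confirm; a single 7.2k-rpm HDD manages on the order of 100-200 random IOPS):

```shell
# If ~1000 random-read IOPS are observed but one HDD delivers ~150,
# the reads are almost certainly served from cache, not from disk.
observed_iops=1000   # roughly what the thread reports for the 3G file
hdd_iops=150         # assumed typical 7.2k-rpm HDD random-read IOPS
echo "ratio: $(( observed_iops / hdd_iops ))x what one disk can do"
```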