1. Is the layout default, apart from the change to object_size?
It is default. The only changes I make are to object_size and stripe_unit; I
set both to the same value (i.e. stripe_count is 1 in all cases).
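For what it's worth, with stripe_unit equal to object_size and stripe_count of 1, the striping is trivial: each RADOS object holds one contiguous object_size-byte extent of the file. A minimal sketch of that offset-to-object mapping (the function name is illustrative, not a Ceph API):

```python
OBJECT_SIZE = 33554432  # 32 MiB, the object_size/stripe_unit used here

def object_index(file_offset):
    """With stripe_count=1 and stripe_unit=object_size, a file byte
    offset maps straight to object number offset // object_size."""
    return file_offset // OBJECT_SIZE

# A 64 MiB (67108864-byte) block therefore spans exactly two objects:
start, length = 0, 64 * 1024 * 1024
first = object_index(start)
last = object_index(start + length - 1)
print(first, last)  # → 0 1
```

So each 64m fio block touches two full 32 MiB objects, which is why that block size is the first to exceed a single object.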

2. What version are the client and server?
ceph version 0.94.1

3.
Not really... are you using the fuse client?  Enabling "debug objecter =
10" on the client will give you a log that says what writes the client is
doing.
I am using the kernel module. Does this work with the kernel module? How
can I set it up?
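For anyone following along with the fuse client, the logging John mentions can be enabled in ceph.conf on the client node (a sketch; the kernel client does not read these client-side debug options, so this does not apply to kernel mounts):

```ini
[client]
    debug objecter = 10
    log file = /var/log/ceph/client.$pid.log
```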

4.
This is probably a client issue, so I would expect killing the client to
get you out of it.
You are absolutely right. It goes away when I reboot the client node.
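If a full reboot is inconvenient, a forced or lazy unmount of the kernel mount is sometimes enough to drop the stuck client (the mount point below is hypothetical; whether this works depends on how wedged the client is, and it needs a live mount to try):

```shell
# Try a forced unmount of the CephFS kernel mount first
umount -f /mnt/cephfs
# If processes still hold references, detach it lazily so it is
# cleaned up once the last reference is dropped
umount -l /mnt/cephfs
```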

Thanks,
Hadi


On Tue, Jul 21, 2015 at 4:57 PM, John Spray <john.sp...@redhat.com> wrote:

>
>
> On 21/07/15 21:54, Hadi Montakhabi wrote:
>
>  Hello Cephers,
>
>  I am using CephFS, and running some benchmarks using fio.
> After increasing the object_size to 33554432, I run some read and write
> tests with different block sizes. Once I reach a block size of 64m and
> beyond, Ceph does not finish the operation (I tried letting it run for
> more than a day, at least three times).
> However, when I cancel the job and expect to see no io operations, here
> is what I get:
>
>
> Is the layout default, apart from the change to object_size?
>
> What version are the client and server?
>
>
>  [cephuser@node01 ~]$ ceph -s
>     cluster b7beebf6-ea9f-4560-a916-a58e106c6e8e
>      health HEALTH_OK
>      monmap e3: 3 mons at {node02=
> 192.168.17.212:6789/0,node03=192.168.17.213:6789/0,node04=192.168.17.214:6789/0
> }
>             election epoch 8, quorum 0,1,2 node02,node03,node04
>      mdsmap e74: 1/1/1 up {0=node02=up:active}
>      osdmap e324: 14 osds: 14 up, 14 in
>       pgmap v155699: 768 pgs, 3 pools, 15285 MB data, 1772 objects
>             91283 MB used, 7700 GB / 7817 GB avail
>                  768 active+clean
>   client io 2911 MB/s rd, 90 op/s
>
>
>  If I do ceph -w, it shows that it is constantly doing reads, but I
> have no idea where they come from or when they would stop.
> I had to remove my CephFS file system and the associated pools and start
> things from scratch.
>
>  1. Any idea what is happening?
>
>
> Not really... are you using the fuse client?  Enabling "debug objecter =
> 10" on the client will give you a log that says what writes the client is
> doing.
>
>
>
>   2. When this happens, do you know a better way to get out of the
> situation without destroying the filesystem and the pools?
>
>
> This is probably a client issue, so I would expect killing the client to
> get you out of it.
>
> Cheers,
> John
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
