Hi, Gregory!
Thanks for the comment. I compiled a simple program to play with write speed
measurements (from the librados examples). The underlying "write" functions are:
rados_write(io, "hw", read_res, 1048576, i*1048576);
rados_aio_write(io, "foo", comp, read_res, 1048576, i*1048576);
So I consecutively
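
A minimal sketch of such a consecutive-write timing loop (not the original
program; it assumes a pool named "data", writes 100 x 1 MB to the object "hw",
and skips most error handling) could look like:

#include <rados/librados.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char *buf;
    int i, n = 100;                      /* 100 x 1 MB = 100 MB total */
    struct timespec t0, t1;

    /* connect to the cluster using the default admin keyring/conf */
    if (rados_create(&cluster, "admin") < 0) return 1;
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    if (rados_connect(cluster) < 0) return 1;
    if (rados_ioctx_create(cluster, "data", &io) < 0) return 1;

    buf = malloc(1048576);
    memset(buf, 'x', 1048576);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < n; i++) {
        /* consecutive synchronous 1 MB writes at increasing offsets */
        if (rados_write(io, "hw", buf, 1048576, (uint64_t)i * 1048576) < 0) {
            fprintf(stderr, "write %d failed\n", i);
            break;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d MB in %.2f s -> %.1f MB/s\n", i, sec, i / sec);

    free(buf);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

Build with something like: gcc -o rados_write_test rados_write_test.c -lrados
(file name is only an example). For the rados_aio_write() variant, each write
would take a completion obtained from rados_aio_create_completion(), and the
timer would stop only after waiting on the completions
(rados_aio_wait_for_complete() / rados_aio_release()).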
CephFS does have repair tools, but I wouldn't jump the gun; your metadata
pool is probably fine. Are you getting health errors or seeing errors
in your MDS log?
Are you exporting a FUSE or kernel mount with Ganesha (i.e. using the VFS
FSAL), or using the Ceph FSAL? Have you tried any tests dire
On 2017-10-01 16:47, Alexander Kushnirenko wrote:
> [...]
Dear,
Thanks for the help. I was able to install on a single node.
Now I am going to install on multiple nodes, and I just want to clarify one small thing:
do the Ceph key and Ceph repository need to be added on every node, or are they
required only on the admin node where we execute the ceph-deploy command?
On Friday, Septemb
Quoting Kashif Mumtaz (kashif.mum...@yahoo.com):
> [...]