Hi all,
(Apologies for the shotgun mail.)
Following this up for anyone heading to Sydney in a week. I did end up
getting a Ceph BoF on the program:
https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20490
If you have stuff you'd like to talk about, or especially share,
please sh
Hi all,
I am testing EC pool backed RBD image performance and found that it takes a very
long time to format the RBD image with mkfs.
I created a 5TB image, mounted it on the client (Ubuntu 16.04 with a 4.12
kernel), and used mkfs.ext4 and mkfs.xfs to format it.
It takes hours to finish the format.
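Roughly, the setup looks like the following (the pool, image, and device names
here are only placeholders, not the exact ones from my test):

    # create an RBD image whose data objects live in an erasure-coded pool
    # (the EC pool needs allow_ec_overwrites enabled to act as a data pool)
    rbd create rbd/testimg --size 5T --data-pool ecpool
    # map it with the kernel client and format it
    rbd map rbd/testimg
    time mkfs.xfs /dev/rbd0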
Is it possible to add a longer description with the created snapshot
(other than using name)?
Try running "mkfs.xfs -K" which disables discarding to see if that
improves the mkfs speed. The librbd-based implementation encountered a
similar issue before when certain OSs sent very small discard extents
for very large disks.
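Concretely, something along these lines should show whether the discards
are the culprit (the device path is just an example):

    # format without issuing discards, then compare with the default run
    time mkfs.xfs -K /dev/rbd0
    time mkfs.ext4 -E nodiscard /dev/rbd0

If the nodiscard runs finish quickly, the time is going into the discard
requests rather than the metadata writes themselves.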
On Sun, Oct 29, 2017 at 10:16 AM, shadow_lin wrote:
> Hi all,
> I a
Hello,
On 27.10.2017 at 19:00, David Turner wrote:
> What does your crush map look like? Also a `ceph df` output. You're
> optimizing your map for pool #5; if there are other pools with a
> significant amount of data, then you're going to be off on your
> cluster balance.
There are no other pools
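For reference, the balance information being asked about can be gathered
with the usual status commands; a quick sketch (output format varies a
little between releases):

    ceph df             # per-pool usage against raw capacity
    ceph osd df tree    # per-OSD utilisation laid out along the CRUSH tree
    ceph osd crush tree # the CRUSH hierarchy the placement rules walk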