[ceph-users] Unable to start mds when creating cephfs volume with erasure coding data pool

2020-09-13 Thread tri
Hi all, I'm using the Ceph Octopus version and deployed it with cephadm. The Ceph documentation provides two ways to create a new CephFS volume: 1. via "ceph fs volume create ..." - I can use this and it works fine, with the MDS automatically deployed, but there is no provision for using EC with t
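A rough sketch of the manual route (pool names, pg counts, and the filesystem name are assumptions; the EC profile is left at its default): create the pools yourself, keep a replicated default data pool, attach the EC pool as an additional data pool, and let cephadm deploy the MDS via ceph orch.

  # Assumed names and pg counts; EC profile left at the cluster default
  ceph osd pool create cephfs.myfs.meta 32
  ceph osd pool create cephfs.myfs.data 32                  # replicated default data pool
  ceph osd pool create cephfs.myfs.data-ec 64 64 erasure    # EC pool for bulk data
  ceph osd pool set cephfs.myfs.data-ec allow_ec_overwrites true

  ceph fs new myfs cephfs.myfs.meta cephfs.myfs.data
  ceph fs add_data_pool myfs cephfs.myfs.data-ec

  # Ask cephadm to schedule MDS daemons for the new filesystem
  ceph orch apply mds myfs --placement=3

  # On a mounted client, point a directory at the EC pool via a file layout
  mkdir /mnt/myfs/ec-data
  setfattr -n ceph.dir.layout.pool -v cephfs.myfs.data-ec /mnt/myfs/ec-data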

[ceph-users] Re: Disk consume for CephFS

2020-09-14 Thread tri
I suggest trying the rsync --sparse option. Typically, qcow2 files (which tend to be large) are sparse files. Without the sparse option, the files are expanded at their destination. September 14, 2020 6:15 PM, fotof...@gmail.com wrote: > Hello. > > I'm using the Nautilus Ceph version for some huge folder
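Something along these lines, with hypothetical paths, keeps the holes intact:

  # -a archive, -v verbose, -S/--sparse: recreate holes at the destination
  # instead of writing out runs of zeroes (paths are hypothetical)
  rsync -avS /var/lib/libvirt/images/ backup-host:/srv/backup/images/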

[ceph-users] Using cephadm shell/ceph-volume

2020-09-18 Thread tri
> RuntimeError: Unable to create a new OSD id Any idea on how to get past these errors? Thanks. --Tri Hoang
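For context, the sequence that usually hits this error is roughly the following; whether it succeeds depends on the bootstrap-osd keyring being present inside the shell (device name is hypothetical):

  # Enter the cluster's container environment
  cephadm shell

  # Inside the shell: give ceph-volume a key it can use to allocate an OSD id
  ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

  # Hypothetical device; prepare and activate in one step
  ceph-volume lvm create --data /dev/sdX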

[ceph-users] Process for adding a separate block.db to an osd

2020-09-18 Thread tri
lowed this and got both executed fine without any error. Yet when the OSD was started up, it kept on using the integrated block.db instead of the new db. The block.db link to the new db device was deleted. Again, no error, just not using the new db. Any suggestion? Thanks. --Tri Hoang root
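For reference, the procedure in question looks roughly like this (OSD id and NVMe partition are hypothetical; the OSD must be stopped first):

  # Attach a new, empty block.db on the target device
  ceph-bluestore-tool bluefs-bdev-new-db \
      --path /var/lib/ceph/osd/ceph-12 --dev-target /dev/nvme0n1p1

  # Move the existing RocksDB data off the main device onto the new db
  ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-12 \
      --devs-source /var/lib/ceph/osd/ceph-12/block \
      --dev-target /var/lib/ceph/osd/ceph-12/block.db

  # For a ceph-volume (LVM) OSD, the LV tags (ceph.db_device, ceph.db_uuid)
  # also need to point at the new device, otherwise activation rebuilds the
  # old links on the next start.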

[ceph-users] Re: Process for adding a separate block.db to an osd

2020-09-20 Thread tri
rted up, it keeps on using the integrated >> block.db instead of the new db. The block.db link to the new db >> device was deleted. Again, no error, just not using the new db >> >> Any suggestion? Thanks. >> >> --Tri Hoang >> >> root@elmo:/#CEPH_ARGS="

[ceph-users] Is ceph-mon disk write i/o normal at more than 1/2TB a day on an empty cluster?

2020-09-20 Thread tri
rates about 22GB worth of disk write I/O or more than 1/2TB a day. The log was generated using iotop. It seems the 3 monitors in the cluster are doing the same thing. Is this normal or something I should be taking a closer look at? Cheers, --Tri
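One way to reproduce the measurement (assuming a single ceph-mon process on the host) is an accumulated iotop run such as:

  # -a: accumulated totals since iotop started, -o: only show tasks doing I/O
  iotop -ao -p "$(pidof ceph-mon)"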

[ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-22 Thread tri
The key is stored in the ceph cluster config db. It can be retrieved by KEY=`/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} --keyring $OSD_PATH/lockbox.keyring config-key get dm-crypt/osd/$OSD_FSID/luks` September 22, 2020 2:25 AM, "Janne Johansson" wrote: > Den mån 21 sep.
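Spelled out as a standalone snippet, with a hypothetical FSID, OSD path, and block device, this roughly mirrors what ceph-volume does at activation:

  # Hypothetical OSD FSID and path
  OSD_FSID=9f1c2a3b-1234-4cde-8f00-abcdef012345
  OSD_PATH=/var/lib/ceph/osd/ceph-0

  # Fetch the LUKS passphrase from the mon config-key store via the lockbox key
  KEY=$(/usr/bin/ceph --cluster ceph \
        --name client.osd-lockbox.${OSD_FSID} \
        --keyring ${OSD_PATH}/lockbox.keyring \
        config-key get dm-crypt/osd/${OSD_FSID}/luks)

  # Open the encrypted data device (device path and mapper name are hypothetical)
  echo -n "$KEY" | cryptsetup --key-file - \
        luksOpen /dev/ceph-block-0/block-0 ${OSD_FSID}-block-dmcrypt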

[ceph-users] Re: Low level bluestore usage

2020-09-22 Thread tri
You can also expand the OSD. ceph-bluestore-tool has an option for expanding the OSD. I'm not 100% sure if that would solve the RocksDB out-of-space issue. I think it will, though. If not, you can move RocksDB to a separate block device. September 22, 2020 7:31 PM, "George Shuklin" wrote: >
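The expansion itself is a one-liner once the underlying partition or LV has been grown (hypothetical OSD id; run it with the OSD stopped):

  # Let BlueFS/BlueStore grow into the newly available space on the block device
  ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3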

[ceph-users] Re: NVMe's

2020-09-23 Thread tri
I don't think you need a bucket under host for the two LVs. It's unnecessary. September 23, 2020 6:45 AM, "George Shuklin" wrote: > On 23/09/2020 10:54, Marc Roos wrote: > >>> Depends on your expected load not? I already read here numerous of times >> that osd's can not keep up with nvme's, tha

[ceph-users] How OSD encryption affects latency/iops on NVMe, SSD and HDD

2020-09-26 Thread tri
very fast NVMe. Using aes-xts, one can only expect around 1600-2000 MB/s with 256/512 bit keys. Best, Tri Hoang
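For comparison, cryptsetup can report the raw in-memory aes-xts throughput of a given CPU, independent of any disk:

  # Single-threaded cipher throughput as dm-crypt would see it (no disk involved)
  cryptsetup benchmark --cipher aes-xts --key-size 256
  cryptsetup benchmark --cipher aes-xts --key-size 512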

[ceph-users] Re: How OSD encryption affects latency/iops on NVMe, SSD and HDD

2020-09-28 Thread tri
bottleneck when used on > very fast NVMe. Using > aes-xts, one can only expect around 1600-2000 MB/s with 256/512 bit keys. > > Best, > > Tri Hoang

[ceph-users] RBD huge diff between random vs non-random IOPs - all flash

2020-09-30 Thread tri
random I/O being faster. Read and write tests show similar results both inside and outside VMs. Typically, the random I/O performance would be less (or much less) than bulk. Any idea as to what I should be looking at? Thanks. Tri Hoang Inside VM At QD=8, randread is around 2.8X
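For reference, a pair of fio runs along these lines reproduces the comparison (device path, block size, and runtime are assumptions):

  # Sequential vs. random 4k reads at queue depth 8 against an RBD-backed disk
  fio --name=seqread  --filename=/dev/vdb --rw=read     --bs=4k \
      --iodepth=8 --ioengine=libaio --direct=1 --runtime=60 --time_based
  fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
      --iodepth=8 --ioengine=libaio --direct=1 --runtime=60 --time_based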

[ceph-users] Re: ceph-volume quite buggy compared to ceph-disk

2020-10-01 Thread tri
dmcrypt volume (if required), create the proper links, and execute the ceph-osd command. The existing links in /var/lib/ceph/ceph-osd/ would be overridden by the info from the LV tags. You can use lvs -o lv_tags on an LV to see all the tags created for an OSD. Hope it helps. --Tri Hoang October 1, 20
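Two ways to inspect those tags:

  # Human-readable dump of every ceph-volume-managed LV and its ceph.* tags
  ceph-volume lvm list

  # Raw LVM view of the same tags
  lvs -o lv_name,vg_name,lv_tags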