Hi all,
I'm using the Ceph Octopus release, deployed with cephadm. The Ceph
documentation provides two ways of creating a new CephFS volume:
1. via "ceph fs volume create ..." - I can use this and it works fine, with the
MDS deployed automatically, but there is no provision for using EC with this method.
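For reference, the usual way an EC data pool ends up attached to a CephFS file system looks roughly like the sketch below; the pool and fs names are made up, and this is not necessarily what the original poster ended up doing:

ceph osd pool create cephfs_data_ec erasure
ceph osd pool set cephfs_data_ec allow_ec_overwrites true   # required before CephFS can use an EC pool
ceph fs add_data_pool myfs cephfs_data_ec                   # attach it as an additional data pool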
I suggest trying the rsync --sparse option. qcow2 files (which tend to be
large) are typically sparse files; without the sparse option, the files expand
fully at the destination.
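For example (source path and destination host are made up):

# -a preserves attributes; --sparse keeps holes in sparse qcow2 images instead of writing them out as zeros
rsync -a --sparse /var/lib/libvirt/images/vm1.qcow2 backup@dest:/srv/images/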
September 14, 2020 6:15 PM, fotof...@gmail.com wrote:
> Hello.
>
> I'm using the Nautilus Ceph version for some huge folder
> RuntimeError: Unable to create a new OSD id
Any idea on how to get past these errors? Thanks.
--Tri Hoang
I followed this and both commands executed fine without any error. Yet when the
OSD started up, it kept using the integrated block.db instead of the new db.
The block.db link to the new db device was deleted. Again, no error, just no
use of the new db.
Any suggestion? Thanks.
--Tri Hoang
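For context, attaching a separate DB device to an existing BlueStore OSD is normally done with something like the sketch below (the OSD id and device path are invented, and the OSD must be stopped first); this may or may not match the exact procedure followed above:

# create a new block.db on the target device, then migrate existing BlueFS data to it
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-12 --dev-target /dev/vg_db/osd12_db
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-12 \
    --devs-source /var/lib/ceph/osd/ceph-12/block --dev-target /var/lib/ceph/osd/ceph-12/block.db
# note: on activation, ceph-volume rebuilds the block.db symlink from the LV tags,
# so the tags may also need updating or the link will point back at the old layout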
The monitor generates about 22 GB worth of disk write I/O per hour, or more
than 1/2 TB a day. The log was generated using iotop. It seems all 3 monitors
in the cluster are doing the same thing.
Is this normal, or something I should be taking a closer look at?
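For what it's worth, per-process write accounting of that sort can be collected with something like the following (the interval is arbitrary):

# batch mode, accumulated totals, only processes actually doing I/O
iotop -b -a -o -d 5 | grep ceph-mon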
Cheers,
--Tri
The key is stored in the ceph cluster config db. It can be retrieved with:

KEY=`/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} \
    --keyring $OSD_PATH/lockbox.keyring config-key get dm-crypt/osd/$OSD_FSID/luks`
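From there, the key can be fed to cryptsetup to open the encrypted data device; a rough sketch, with an invented device path and mapper name:

# open the OSD's LUKS device using the key retrieved above
echo -n "$KEY" | /sbin/cryptsetup --key-file - luksOpen /dev/ceph-block-vg/osd-block-$OSD_FSID $OSD_FSID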
September 22, 2020 2:25 AM, "Janne Johansson" wrote:
> On Mon, 21 Sep.
You can also expand the OSD: ceph-bluestore-tool has an option for expanding
the OSD. I'm not 100% sure whether that would solve the RocksDB out-of-space
issue, but I think it will. If not, you can move RocksDB to a separate block device.
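The expansion option referred to is presumably along these lines (the OSD id is made up; the OSD should be stopped and the underlying device already grown):

# tell BlueFS/BlueStore to take over the space added to the underlying device
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-12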
September 22, 2020 7:31 PM, "George Shuklin" wrote:
I don't think you need a bucket under the host for the two LVs; it's unnecessary.
September 23, 2020 6:45 AM, "George Shuklin" wrote:
> On 23/09/2020 10:54, Marc Roos wrote:
>
>> Depends on your expected load, not? I already read here numerous times
>> that OSDs cannot keep up with NVMes, that ...
> ... a bottleneck when used on very fast NVMe. Using aes-xts, one can only
> expect around 1600-2000 MB/s with 256/512-bit keys.
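For reference, figures in that range typically come from the per-thread cipher numbers reported by:

# reports raw aes-xts throughput at 256- and 512-bit keys, among other ciphers;
# dm-crypt adds its own queueing overhead on top of this
cryptsetup benchmark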
Best,
Tri Hoang
I am seeing random I/O being faster than expected. Read and write tests show
similar results both inside and outside VMs.
Typically, the random I/O performance would be less (or much less) than bulk.
Any idea as to what I should be looking at? Thanks.
Tri Hoang
Inside the VM:
At QD=8, randread is around 2.8X ...
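For anyone wanting to reproduce that kind of comparison, a rough fio sketch (the target path, size and runtime are arbitrary; keep the block size identical so the two runs are directly comparable):

# sequential vs random read at the queue depth mentioned above (QD=8)
fio --name=seqread  --rw=read     --bs=4k --iodepth=8 --ioengine=libaio --direct=1 --runtime=60 --time_based --size=4G --filename=/mnt/test/fio.dat
fio --name=randread --rw=randread --bs=4k --iodepth=8 --ioengine=libaio --direct=1 --runtime=60 --time_based --size=4G --filename=/mnt/test/fio.dat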
On activation, ceph-volume will open the dm-crypt volume (if required), create
the proper links, and execute the ceph-osd command. The existing links in
/var/lib/ceph/osd/ceph-<id>/ would be overridden by the info from the LV tags.
You can use lvs -o lv_tags on an LV to see all the tags created for an OSD.
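For example (the VG/LV names are invented):

# show the ceph.* tags that ceph-volume stored on the OSD's logical volume
lvs -o lv_name,lv_tags ceph-block-vg/osd-block-lv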
Hope it helps.
--Tri Hoang