> In my case, the replica-3 and k2m2 are stored on the same spinning disks.
That is exactly what I meant by the same pool. The only way a cache would
make sense is if the data being written will be modified or heavily read
for X amount of time and then ignored.
If things are rarely re
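For that access pattern, the cache-tiering age and hit-set settings are the
knobs to look at. A rough sketch, assuming a hypothetical cache pool named
"cache-pool" (the values are only illustrative):

  ceph osd pool set cache-pool hit_set_type bloom          # track hits with bloom filters
  ceph osd pool set cache-pool hit_set_count 12            # keep 12 hit sets...
  ceph osd pool set cache-pool hit_set_period 3600         # ...of one hour each
  ceph osd pool set cache-pool cache_min_flush_age 3600    # don't flush objects younger than 1h
  ceph osd pool set cache-pool cache_min_evict_age 86400   # don't evict objects younger than a day

The idea is to keep objects in the cache for roughly the X hours they are
hot and let the tiering agent flush and evict them afterwards.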
Hi David,
Thanks for the clarification. Reminded me of some details I forgot
to mention.
In my case, the replica-3 and k2m2 pools are stored on the same spinning
disks. (Mainly using EC for "compression", because with the EC k2m2 setting
a PG only takes up the same amount of space as a replica-2 while
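For reference, the raw-space overhead of an EC pool is (k+m)/k, so k2m2 is
4/2 = 2.0x raw usage, the same as replica-2, versus 3.0x for replica-3.
Creating such a profile and pool looks roughly like this (pool name and PG
count are placeholders):

  ceph osd erasure-code-profile set k2m2 k=2 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 64 64 erasure k2m2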
I'm pretty sure that the process is the same as with filestore. The cluster
doesn't really know if an osd is filestore or bluestore... It's just an osd
running a daemon.
If there are any differences, they would be in the release notes for
Luminous as changes from Jewel.
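Roughly, the generic Luminous flow would be something like the following
(osd.5 and /dev/sdb are placeholders; if the cluster was deployed with
ceph-ansible you may prefer its own playbooks for this):

  ceph osd out osd.5                        # drain the failing OSD
  systemctl stop ceph-osd@5
  ceph osd purge 5 --yes-i-really-mean-it   # remove it from CRUSH, the OSD map and auth
  ceph-volume lvm zap /dev/sdb              # wipe the replacement disk
  ceph-volume lvm create --bluestore --data /dev/sdb   # create the new Bluestore OSD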
On Sat, Sep 30, 2017, 6:28
Hi all.
Independently of the fact that I've deployed a Ceph Luminous cluster with Bluestore
using ceph-ansible (https://github.com/ceph/ceph-ansible), what is the right
way to replace a disk when using Bluestore?
I will try to forget everything I know about how to recover things with
filestore and start fresh.
I have nfs-ganesha 2.5.2 (from the Ceph download) running on a Luminous
12.2.1 OSD node, and when I rsync on a VM that has the NFS share mounted, I
get stalls.
I thought it was related to the number of files involved in rsyncing the
CentOS 7 distro, but when I tried to rsync just one file it also stalled. It
Proofread failure. "modified and read during* the first X hours, and then
remains in cold storage for the remainder of its life with rare* reads"
On Sat, Sep 30, 2017, 1:32 PM David Turner wrote:
> I can only think of 1 type of cache tier usage that is faster if you are
> using the cache tier on
I can only think of 1 type of cache tier usage that is faster if you are
using the cache tier on the same root of osds as the EC pool. That is cold
storage where the file is written initially, modified and read door the
first X hours, and then remains in cold storage for the remainder of its
life
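For completeness, attaching a cache tier to an EC pool looks roughly like
this (pool names are placeholders and the target size is just an example):

  ceph osd tier add ecpool cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay ecpool cache-pool
  ceph osd pool set cache-pool target_max_bytes 1099511627776   # ~1 TiB cache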
Hi all,
Now that Luminous supports direct writes to EC pools, I was wondering
whether one can get more performance out of an erasure-coded pool with
overwrites or out of an erasure-coded pool with a cache tier?
I currently have a 3 replica pool in front of a k2m2 erasure coded
pool. Luminous documenta
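For reference, enabling direct writes on the EC pool and pointing RBD data
at it looks roughly like this (pool and image names are placeholders;
allow_ec_overwrites requires Bluestore OSDs):

  ceph osd pool set ecpool allow_ec_overwrites true
  # metadata stays in the replicated pool, image data goes to the EC pool
  rbd create --size 100G --data-pool ecpool rbdpool/myimage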
Is this useful to anyone?
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state OPEN)
[Sat Sep 30 15:51:11 2017] libceph: osd5 192.168.10.113:6809 socket
closed (con state CONNECTING)
[Sat Sep 30 15:51:11 2017] libceph: osd5 down
[Sat Sep 30 15:51:11 2017] lib
Yes, use tips from here :
http://ceph.com/geen-categorie/incremental-snapshots-with-rbd/
Basically:
- create a snapshot
- export the diff between your new snap and the previous one
- on the backup cluster, import the diff
This way, you can keep a few snaps on the production cluster, for quick
reco
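Something along these lines (pool, image, snapshot and host names are
placeholders; the very first copy has to be a full rbd export/import):

  rbd snap create rbd/vm-disk@snap2
  rbd export-diff --from-snap snap1 rbd/vm-disk@snap2 - | \
      ssh backup-host rbd import-diff - rbd/vm-disk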
Dear
We have VM images stored in a Ceph cluster. Now we need to configure a backup
mechanism to another datacenter so that the images are periodically copied to
the other datacenter. How can we achieve this?
Regards,