So can we prevent high network throughput between DC1 and DC2
during a rebuild when an entire OSD server fails? If yes, how?
Set osd_max_backfills, osd_recovery_max_active or the osd_recovery_sleep options. Alternatively, apply network QoS between the datacenters.
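For example (the values below are only illustrative starting points, and injectargs changes do not survive an OSD restart):

  # throttle recovery/backfill on all OSDs at runtime
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'

  # or persist the same values in ceph.conf, [osd] section
  osd max backfills = 1
  osd recovery max active = 1
  osd recovery sleep = 0.1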
k
Hi *,
sorry for bringing up that old topic again, but we just faced a
corresponding situation and have successfully tested two migration
scenarios.
Quoting ceph-users-requ...@lists.ceph.com:
Date: Sat, 24 Feb 2018 06:10:16 +
From: David Turner
To: Nico Schottelius
Cc: Caspar Smit ,
Hi,
Intended setup (only focusing on the OSD servers here; of course monitors and
other servers will be part of the intended cluster; a rough CRUSH rule sketch
for this layout follows below):
* 1 single cluster, spanning across 2 datacenters:
* 6 OSD servers (each containing 5 OSDs/disks) in Datacenter1
* 6 OSD servers (each containing 5 OSDs/disks) in Datacenter2
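As a rough illustration only, a CRUSH rule for such a layout might look like the
following, assuming two 'datacenter' buckets under the default root and a pool
size of 4 (rule name, id and counts are placeholders, not tested against a real map):

  rule replicated_2dc {
      id 1
      type replicated
      min_size 2
      max_size 4
      # pick both datacenters, then 2 hosts (and their OSDs) in each
      step take default
      step choose firstn 2 type datacenter
      step chooseleaf firstn 2 type host
      step emit
  }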
On 8 April 2018 05:44:11 CEST, Marc Roos wrote:
>
>Hi Mehmet,
>
>The data is already lost in these snapshots?
I cannot say, because I did not need the snapshots. But you can try to clone
the VM from the state of the snapshot (I am using Proxmox).
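If you want to try the same at the rbd level rather than through the Proxmox GUI,
a rough sketch (pool, image and snapshot names are placeholders; the snapshot has
to be protected before cloning):

  rbd snap protect rbd/vm-100-disk-1@snap1
  rbd clone rbd/vm-100-disk-1@snap1 rbd/vm-100-disk-1-restore

Then attach the clone to a test VM and check whether the data you need is there.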
> And how did you identify the snapshot?
Hi Mehmet,
The data is already lost in these snapshots? And how did you identify
the snapshot? It looks like I have these only in the rbd pool.
-Original Message-
From: c...@elchaka.de [mailto:c...@elchaka.de]
Sent: Sunday, 8 April 2018 10:44
To: ceph-users@lists.ceph.com
Subject:
Hi Marc,
On 7 April 2018 18:32:40 CEST, Marc Roos wrote:
>
>How do you resolve these issues?
>
In my case I could get rid of this by deleting the existing snapshots.
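A rough sketch of what that looks like at the rbd level (image and snapshot names
are placeholders; protected snapshots have to be unprotected first):

  rbd snap ls rbd/vm-100-disk-1
  rbd snap rm rbd/vm-100-disk-1@snap1
  # or drop all snapshots of the image at once
  rbd snap purge rbd/vm-100-disk-1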
- Mehmet
>
>Apr 7 22:39:21 c03 ceph-osd: 2018-04-07 22:39:21.928484 7f0826524700 -1 osd.13 pg_epoch: 19008 pg[17.13( v 1
Thank you.
I will look into the script.
For fixed-object-size applications (RBD, CephFS), do you think it is a good idea
to pre-split the folders to the point where each folder contains about 1-2k
objects when the cluster is full? I think doing this can avoid the performance
impact of splitting folders.
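As a rough sketch of the settings involved (the option names are the filestore
ones, the values are placeholders and untested; the expected object count only
triggers pre-splitting when the merge threshold is negative):

  # ceph.conf, [osd] section
  filestore merge threshold = -10
  filestore split multiple = 2

  # create the pool with an expected number of objects so the PG
  # directories are split at creation time instead of at runtime
  ceph osd pool create rbd 1024 1024 replicated replicated_rule 500000000

For OSDs that already exist, there is apparently an offline
ceph-objectstore-tool --op apply-layout-settings, but I have not tried that myself.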