I have a 6-OSD system (with 3 MONs and 3 MDSes).
It is running CephFS as part of its workload.
I have upgraded the 3 MON nodes to Ubuntu 16.04 and the bundled
ceph 10.1.0-0ubuntu1
(upgraded from Ubuntu 15.10 with ceph 0.94.6-0ubuntu0.15.10.1).
2 of the MON nodes are happy and up, but the 3rd is giving
If you have a qcow2 image on *local* type storage and move it to a
ceph pool, Proxmox will automatically convert the image to raw.
Performance is entirely down to your particular setup - moving an image
to a ceph pool certainly won't guarantee a performance increase - in
fact the opposite could happen.
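
Not from the original reply, but roughly what that move looks like from the
CLI - a sketch only; the VM ID (101), disk name (virtio0) and storage name
(ceph-rbd) are placeholders, not from this thread:

# Same operation as the GUI "Move disk"; on RBD-backed storage the image
# ends up as a raw block image rather than qcow2.
qm move_disk 101 virtio0 ceph-rbd

# The equivalent manual conversion outside Proxmox would be:
qemu-img convert -f qcow2 -O raw vm-101-disk-1.qcow2 vm-101-disk-1.raw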
Yo
some supplements:
#2: Ceph supports heterogeneous nodes.
#3: I think if you add an OSD by hand, you should set its `osd crush
reweight` to 0 first,
then increase it to suit the disk size, and lower the priority and
thread count of recovery and backfill, just like this (fuller sketch below):
osd_max_backfills 1
osd_recovery_max_active
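
For illustration only - the OSD id (osd.12) and the weights below are made-up
examples, not from this thread - the reweight-then-ramp approach plus the
recovery throttles could look like:

# ceph.conf, [osd] section - keep backfill/recovery gentle:
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1

# or inject the throttles at runtime:
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'

# bring the new OSD in at zero CRUSH weight, then raise it in steps:
ceph osd crush reweight osd.12 0
ceph osd crush reweight osd.12 1.0
ceph osd crush reweight osd.12 3.64   # roughly full weight for a 4 TB disk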
On 9/04/2016 11:01 PM, Mad Th wrote:
After this move, does the qcow2 image get converted to some raw or rbd
file format?
raw (rbd block)
Will moving vm/guest images to the ceph storage pool after converting
qcow2 to raw format first, improve performance?
I doubt it.
We still see some i/o
I was reading that raw format is faster than qcow2. We have a few vm/guest
images in qcow2 which we have moved to the ceph storage pool (using the
Proxmox GUI disk move).
After this move, does the qcow2 image get converted to some raw or rbd file
format?
Will moving vm/guest images to ceph storage po
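
Not part of the original exchange, but one way to check what the move
produced - the pool name (rbd), VM ID (101) and image name are placeholders:

# The VM config should now reference the RBD-backed storage, not a .qcow2 file:
qm config 101

# The image on the pool is a plain RBD block device, i.e. raw:
rbd info rbd/vm-101-disk-1
qemu-img info rbd:rbd/vm-101-disk-1   # if qemu-img was built with rbd support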
Without knowing the Proxmox-specific stuff ..
#1: just create an OSD the regular way
#2: it is safe; however, you may either create a spoof
(osd_crush_chooseleaf_type = 0) or underuse your cluster
(osd_crush_chooseleaf_type = 1) - see the ceph.conf excerpt below
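
For reference - not from the original reply - this is the knob being referred
to; it lives in ceph.conf and, as far as I know, is only consulted when the
default CRUSH rule is first created:

[global]
# 0 = pick replica leaves at the OSD level: replicas can land on the same
#     host (the "spoof" mentioned above)
# 1 = pick replica leaves at the host level (the usual default): safer, but
#     uneven hosts/disks can leave raw capacity underused
osd_crush_chooseleaf_type = 1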
On 09/04/2016 14:39, Mad Th wrote:
We have a 3-node Proxmox/Ceph cluster ... each with 4x 4 TB disks.
1) If we want to add more disks, what are the things that we need to be
careful about?
Will the following steps automatically add it to ceph.conf?
ceph-disk zap /dev/sd[X]
pveceph createosd /dev/sd[X] -journal_dev /dev/sd[Y]
whe
Hi,
*snipsnap*
On 09.04.2016 08:11, Christian Balzer wrote:
3 MDS nodes:
-SuperMicro 1028TP-DTR (one node from scale-out chassis)
--2x E5-2630v4
--128GB RAM
--2x 120GB SSD (RAID 1 for OS)
Not using CephFS, but if the MDS are like all the other Ceph bits
(MONs in particular) they are likely to