The new chapter "Hyper-converged Infrastructure" is added. It contains a
general statement about the topic and the subchapter "Ceph Server in
Proxmox VE Cluster".

The latter is derived from the corresponding Wiki article. Since the Wiki
article contains a lot of additional best-practice information, it is not
obsolete yet. A combined source for both the Wiki and the Administration
Guide will be created later.


Signed-off-by: Friedrich Ramberger <f.ramber...@proxmox.com>
---
 ceph-server.adoc                    | 138 ++++++++++++++++++++++++++++++++++++
 hyper-converged-infrastructure.adoc |  12 ++++
 pmxcfs.adoc                         |   1 +
 pve-admin-guide.adoc                |   2 +
 pve-storage-rbd.adoc                |   1 +
 5 files changed, 154 insertions(+)
 create mode 100644 ceph-server.adoc
 create mode 100644 hyper-converged-infrastructure.adoc

diff --git a/ceph-server.adoc b/ceph-server.adoc
new file mode 100644
index 0000000..607e811
--- /dev/null
+++ b/ceph-server.adoc
@@ -0,0 +1,138 @@
+Ceph Server in Proxmox VE Cluster
+---------------------------------
+
+
+It is possible to install a Ceph server for RADOS Block Devices (RBD) directly on the Proxmox VE cluster nodes, see
+xref:ceph_rados_block_devices[chapter Ceph RADOS Block Devices (RBD)].
+
+
+Precondition
+~~~~~~~~~~~~
+
+There should be at least three (preferably identical) servers for the setup, which together form a Proxmox VE cluster.
+
+
+A 10Gb network, exclusively used for Ceph, is recommended. If there are no 10Gb switches available, a meshed network is
+also an option, see {webwiki-url}Full_Mesh_Network_for_Ceph_Server[wiki].
+
+
+Also check the recommendations from http://docs.ceph.com/docs/jewel/start/hardware-recommendations/[Ceph's website].
+
+
+Installation of Ceph Packages
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+On each node run the installation script as follows:
+
+[source,bash]
+----
+pveceph install -version jewel
+----
+
+
+This sets up an 'apt' package repository in /etc/apt/sources.list.d/ceph.list and installs the required software.
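+
+For example, you can inspect the generated repository file afterwards; the exact repository line depends on the selected Ceph version and your Debian release:
+
+[source,bash]
+----
+# show the APT repository entry created by 'pveceph install'
+cat /etc/apt/sources.list.d/ceph.list
+----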
+
+
+Creating initial Ceph configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After the installation of the packages, you need to create an initial Ceph configuration on just one node, based on the network dedicated to Ceph (10.10.10.0/24 in the following example):
+
+[source,bash]
+----
+pveceph init --network 10.10.10.0/24
+----
+
+This creates an initial config at /etc/pve/ceph.conf. That file is automatically distributed to all Proxmox VE nodes by using xref:proxmox_cluster_file_system[pmxcfs]. The command also creates a symbolic link from /etc/ceph/ceph.conf pointing to that file. So you can simply run Ceph commands without the need to specify a configuration file.
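+
+As a quick sanity check, you can verify the symbolic link on any cluster node:
+
+[source,bash]
+----
+# /etc/ceph/ceph.conf should point to the cluster-wide file on pmxcfs
+ls -l /etc/ceph/ceph.conf
+----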
+
+
+Creating Ceph Monitors
+~~~~~~~~~~~~~~~~~~~~~~
+
+On each node where a monitor is requested (at least three are recommended), create it by using the "Ceph" item in the GUI or run:
+
+
+[source,bash]
+----
+pveceph createmon
+----
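+
+Afterwards you can check that the monitors have formed a quorum with the standard Ceph status command:
+
+[source,bash]
+----
+# shows overall cluster health, including the monitor quorum
+ceph -s
+----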
+
+
+Creating Ceph OSDs
+~~~~~~~~~~~~~~~~~~
+
+
+Create the OSDs either via the GUI or via the CLI as follows:
+
+[source,bash]
+----
+pveceph createosd /dev/sd[X]
+----
+
+If you want to use a dedicated SSD journal disk: 
+
+NOTE: In order to use a dedicated journal disk (SSD), the disk needs to have a GPT partition table. You can create this with 'gdisk /dev/sd(x)'. If there is no GPT, you cannot select the disk as journal. Currently the journal size is fixed to 5 GB.
+
+
+[source,bash]
+----
+pveceph createosd /dev/sd[X] -journal_dev /dev/sd[X]
+----
+
+Example: /dev/sdf as data disk (4TB) and /dev/sdb as the dedicated SSD journal disk:
+
+[source,bash]
+----
+pveceph createosd /dev/sdf -journal_dev /dev/sdb
+----
+
+
+This partitions the disk (data and journal partition), creates filesystems and starts the OSD. Afterwards it is running and fully functional. Please create at least 12 OSDs, distributed among your nodes (4 on each node).
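+
+To see how the OSDs are distributed across your nodes, you can use the standard Ceph command:
+
+[source,bash]
+----
+# list all OSDs grouped by host, with status and weight
+ceph osd tree
+----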
+
+It should be noted that this command refuses to initialize a disk when it detects existing data. So if you want to overwrite a disk, you should remove the existing data first. You can do that using:
+
+[source,bash]
+----
+ceph-disk zap /dev/sd[X]
+----
+
+
+You can create OSDs containing both journal and data partitions, or you can place the journal on a dedicated SSD. Using an SSD journal disk is highly recommended if you expect good performance.
+
+
+
+Ceph Pools
+~~~~~~~~~~
+
+
+The standard installation creates the pool 'rbd' by default. Additional pools can be created via the GUI.
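+
+Additional pools can also be created on the command line with the standard Ceph tooling; the pool name 'my-pool' and the placement group count below are only example values, choose a pg_num that fits your number of OSDs:
+
+[source,bash]
+----
+# create an additional pool with 64 placement groups (example values)
+ceph osd pool create my-pool 64
+----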
+
+
+
+Ceph Client
+~~~~~~~~~~~
+
+
+You can then configure Proxmox VE to use such pools to store VM images; just use the GUI ("Add Storage": RBD, see also xref:ceph_rados_block_devices[chapter Ceph RADOS Block Devices (RBD)]).
+
+
+You also need to copy the keyring to a predefined location.
+
+NOTE: The file name needs to be '<storage_id> + .keyring'; '<storage_id>' is the expression after 'rbd:' in /etc/pve/storage.cfg, which is 'my-ceph-storage' in the following example:
+
+[source,bash]
+----
+mkdir /etc/pve/priv/ceph
+cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph-storage.keyring
+----
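+
+For reference, the corresponding storage definition in /etc/pve/storage.cfg might look similar to the commented lines below; the storage id 'my-ceph-storage', the monitor addresses and the pool name are example values and depend on your setup:
+
+[source,bash]
+----
+# display the storage configuration
+cat /etc/pve/storage.cfg
+# an RBD entry could look roughly like this (illustrative values):
+# rbd: my-ceph-storage
+#       monhost 10.10.10.1 10.10.10.2 10.10.10.3
+#       pool rbd
+#       content images
+#       username admin
+----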
+
+
+
+
+
+
+
+
+
+
+
diff --git a/hyper-converged-infrastructure.adoc b/hyper-converged-infrastructure.adoc
new file mode 100644
index 0000000..2948498
--- /dev/null
+++ b/hyper-converged-infrastructure.adoc
@@ -0,0 +1,12 @@
+[[chapter_hyper_converged_infrastructure]]
+Hyper-converged Infrastructure
+==============================
+
+Proxmox VE has all the https://en.wikipedia.org/wiki/Hyper-converged_infrastructure[Hyper-converged Infrastructure] capabilities needed to deploy and manage a complete open source hyper-converged
+infrastructure.
+It tightly integrates compute, networking, and storage resources into a single deployment unit, and you can manage everything with the centralized web management interface.
+Proxmox VE unifies your compute and storage systems, i.e. you can use the same physical nodes within a cluster for both computing (processing VMs and containers) and for replicated storage.
+
+
+include::ceph-server.adoc[]
+
diff --git a/pmxcfs.adoc b/pmxcfs.adoc
index d3b7a71..8f8c81c 100644
--- a/pmxcfs.adoc
+++ b/pmxcfs.adoc
@@ -18,6 +18,7 @@ DESCRIPTION
 endif::manvolnum[]
 
 ifndef::manvolnum[]
+[[proxmox_cluster_file_system]]
 Proxmox Cluster File System (pmxcfs)
 ====================================
 :pve-toplevel:
diff --git a/pve-admin-guide.adoc b/pve-admin-guide.adoc
index 5cb85bb..b783cd0 100644
--- a/pve-admin-guide.adoc
+++ b/pve-admin-guide.adoc
@@ -25,6 +25,8 @@ include::pve-installation.adoc[]
 
 include::sysadmin.adoc[]
 
+include::hyper-converged-infrastructure.adoc[]
+
 include::pve-gui.adoc[]
 
 include::pvecm.adoc[]
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index c33b70e..c782ee1 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -1,3 +1,4 @@
+[[ceph_rados_block_devices]]
 Ceph RADOS Block Devices (RBD)
 ------------------------------
 ifdef::wiki[]
-- 
2.1.4
