Signed-off-by: Stefan Reiter <s.rei...@proxmox.com>
---
 pvecm.adoc | 81 ++++++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 60 insertions(+), 21 deletions(-)
diff --git a/pvecm.adoc b/pvecm.adoc
index e986a75..5379c3f 100644
--- a/pvecm.adoc
+++ b/pvecm.adoc
@@ -103,25 +103,33 @@ to the other with SSH via the easier to remember node name (see also
 xref:pvecm_corosync_addresses[Link Address Types]). Note that we always
 recommend to reference nodes by their IP addresses in the cluster
 configuration.
-
-[[pvecm_create_cluster]]
 Create the Cluster
 ------------------
 
-Login via `ssh` to the first {pve} node. Use a unique name for your cluster.
-This name cannot be changed later. The cluster name follows the same rules as
-node names.
+Use a unique name for your cluster. This name cannot be changed later. The
+cluster name follows the same rules as node names.
+
+Create via Web GUI
+~~~~~~~~~~~~~~~~~~
+
+Under __Datacenter -> Cluster__, click on *Create Cluster*. Type your cluster
+name and select a network connection from the dropdown to serve as your main
+cluster network (Link 0; by default, what the node's hostname resolves to).
+
+Optionally, you can select the 'Advanced' check box and choose an additional
+network interface for fallback purposes (Link 1, see also
+xref:pvecm_redundancy[Corosync Redundancy]).
+
+Create via Command Line
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Login via `ssh` to the first {pve} node and run the following command:
 
 ----
 hp1# pvecm create CLUSTERNAME
 ----
 
-NOTE: It is possible to create multiple clusters in the same physical or logical
-network. Use unique cluster names if you do so. To avoid human confusion, it is
-also recommended to choose different names even if clusters do not share the
-cluster network.
-
-To check the state of your cluster use:
+To check the state of your new cluster use:
 
 ----
 hp1# pvecm status
@@ -131,9 +139,9 @@ Multiple Clusters In Same Network
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 It is possible to create multiple clusters in the same physical or logical
-network. Each such cluster must have a unique name, this does not only helps
-admins to distinguish on which cluster they currently operate, it is also
-required to avoid possible clashes in the cluster communication stack.
+network. Each such cluster must have a unique name, not only to help admins
+distinguish which cluster they are currently operating on, but also to avoid
+possible clashes in the cluster communication stack.
 
 While the bandwidth requirement of a corosync cluster is relatively low, the
 latency of packages and the package per second (PPS) rate is the limiting
@@ -145,6 +153,39 @@ infrastructure for bigger clusters.
 Adding Nodes to the Cluster
 ---------------------------
 
+CAUTION: A new node cannot hold any VMs, because you would get
+conflicts about identical VM IDs. Also, all existing configuration in
+`/etc/pve` is overwritten when you join a new node to the cluster. As a
+workaround, use `vzdump` to back up each VM, then restore it to a
+different VMID after adding the node to the cluster.
+
+Add Node via GUI
+~~~~~~~~~~~~~~~~
+
+If you want to use "assisted join", where most parameters will be filled in for
+you, first login to the web interface on a node already in the cluster. Under
+__Datacenter -> Cluster__, click on *Join Information* at the top. Click on
+*Copy Information* or manually copy the string from the 'Information' field.
+
+To add the new node, login to the web interface on the node you want to add.
+Under __Datacenter -> Cluster__, click on *Join Cluster*. Fill in the
+'Information' field with the text you copied earlier.
+
+For security reasons, the password is not included, so you have to fill it in
+manually.
+
+NOTE: The Join Information is not strictly required; you can also uncheck the
+'Assisted Join' checkbox and fill in the required fields manually.
+
+After clicking on *Join*, your node will immediately be added to the cluster.
+You might need to reload the web page and re-login with the cluster
+credentials.
+
+Confirm that your node is visible under __Datacenter -> Cluster__.
+
+Add Node via Command Line
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
 Login via `ssh` to the node you want to add.
 
 ----
@@ -154,11 +195,6 @@ Login via `ssh` to the node you want to add.
 For `IP-ADDRESS-CLUSTER` use the IP or hostname of an existing cluster node.
 An IP address is recommended (see xref:pvecm_corosync_addresses[Link Address
 Types]).
 
-CAUTION: A new node cannot hold any VMs, because you would get
-conflicts about identical VM IDs. Also, all existing configuration in
-`/etc/pve` is overwritten when you join a new node to the cluster. To
-workaround, use `vzdump` to backup and restore to a different VMID after
-adding the node to the cluster.
 
 To check the state of the cluster use:
 
@@ -229,6 +265,8 @@ pvecm add IP-ADDRESS-CLUSTER -link0 LOCAL-IP-ADDRESS-LINK0
 
 If you want to use the built-in xref:pvecm_redundancy[redundancy] of the
 kronosnet transport layer, also use the 'link1' parameter.
+In the GUI, you can select the correct interface from the corresponding
+'Link 0' and 'Link 1' fields.
 
 Remove a Cluster Node
 ---------------------
@@ -692,8 +730,9 @@ Corosync Redundancy
 Corosync supports redundant networking via its integrated kronosnet layer by
 default (it is not supported on the legacy udp/udpu transports). It can be
 enabled by specifying more than one link address, either via the '--linkX'
-parameters of `pvecm` (while creating a cluster or adding a new node) or by
-specifying more than one 'ringX_addr' in `corosync.conf`.
+parameters of `pvecm`, in the GUI as **Link 1** (while creating a cluster or
+adding a new node), or by specifying more than one 'ringX_addr' in
+`corosync.conf`.
 
 NOTE: To provide useful failover, every link should be on its own physical
 network connection.
-- 
2.20.1

_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
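
For reference, the command-line flow documented by this patch can be condensed into a single console sketch. The node names (`hp1`, `hp2`), the cluster name, and all addresses below are illustrative placeholders, and the `-link1` options are only needed if you want a redundant second link:

----
# on the first node: create the cluster, optionally with a second link
hp1# pvecm create demo-cluster -link0 10.10.10.1 -link1 10.20.20.1

# on the node to be added: join via the address of an existing member
hp2# pvecm add 10.10.10.1 -link0 10.10.10.2 -link1 10.20.20.2

# on any member: verify quorum and membership
hp1# pvecm status
----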
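
The 'ringX_addr' settings map directly onto the kronosnet links: as a sketch, assuming an illustrative node `hp1` with link addresses 10.10.10.1 (Link 0) and 10.20.20.1 (Link 1), the corresponding nodelist entry in `corosync.conf` looks roughly like this:

----
nodelist {
  node {
    name: hp1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1
    ring1_addr: 10.20.20.1
  }
}
----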