[pve-devel] [PATCH pve-docs] qm: IO Thread: fix typo and reword section

2020-07-21 Thread Dylan Whyte
Fix typo 'isks' => 'disks'
Reword section for better readability.

Signed-off-by: Dylan Whyte 
---
 qm.adoc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/qm.adoc b/qm.adoc
index 45832e9..d30672e 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -253,8 +253,8 @@ The option *IO Thread* can only be used when using a disk with the
 *VirtIO* controller, or with the *SCSI* controller, when the emulated controller
  type is  *VirtIO SCSI single*.
 With this enabled, Qemu creates one I/O thread per storage controller,
-instead of a single thread for all I/O, so it can increase performance when
-multiple isks are used and each disk has its own storage controller.
+rather than a single thread for all I/O. This can increase performance when
+multiple disks are used and each disk has its own storage controller.
 
 
 [[qm_cpu]]
-- 
2.20.1



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs DRAFT] Add section with more infos about ZFS RAID levels

2020-07-21 Thread Aaron Lauterer




On 7/17/20 3:23 PM, Andreas Steinel wrote:

> Very good.
> 
> Maybe we can also include some references to books, e.g. the ZFS books from
> Allan Jude and Michael W. Lucas for further reading?

That is a good idea IMHO. But I think this section is not the right place. I
would put it somewhere in the introduction at the beginning of the ZFS chapter.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH docs DRAFT] Add section with more infos about ZFS RAID levels

2020-07-21 Thread Aaron Lauterer

Thanks, will incorporate these.

On 7/20/20 8:30 PM, Stoiko Ivanov wrote:

> Thanks for picking this up! Looking forward to not searching the web/our
> forum for the good answers to questions that come up quite often.
> 
> a few mostly stylistic (as in more a matter of my taste) comments inline:




___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH docs v1] Add section with more infos about ZFS RAID levels

2020-07-21 Thread Aaron Lauterer
This new section explains the performance and failure properties of
mirror and RAIDZ VDEVs as well as the "unexpected" higher space usage by
ZVOLs on a RAIDZ.

Signed-off-by: Aaron Lauterer 
---

draft->v1:
* incorporate Stoiko's suggestions [0]
* went with lower case vdev as this is the way it is used in the zpool
  manpage
* used N-P(arity) in the RAIDZ sizing part
* added a short explanation of vdevs at the beginning and mention the
  zpool manpage


This is a first version to explain the performance characteristics of the
different RAID levels / vdev types, as well as their failure behavior.

Additionally, it explains why a VM disk (ZVOL) can end up using quite a
bit more space than expected when placed on a pool made of RAIDZ vdevs.

The motivation behind this is that, in the recent past, these things
have come up quite a bit. Thus, it would be nice to have some
documentation that we can link to, and having it in the docs might help
users make an informed decision from the start.

I hope I did not mess up any technical details and that it is
understandable enough.

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-July/044453.html

 local-zfs.adoc | 95 ++
 1 file changed, 95 insertions(+)

diff --git a/local-zfs.adoc b/local-zfs.adoc
index fd03e89..5b36f5e 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -151,6 +151,101 @@ rpool/swap        4.25G  7.69T    64K  -
 
 
 
+[[sysadmin_zfs_raid_considerations]]
+ZFS RAID Level Considerations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are a few factors to take into consideration when choosing the layout of
+a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
+`vdev`. All vdevs in a pool are used equally and the data is striped among them
+(RAID0). Check the `zpool(8)` manpage for more details on vdevs.
+
+[[sysadmin_zfs_raid_performance]]
+Performance
+^^^^^^^^^^^
+
+Each `vdev` type has different performance behaviors. The two
+parameters of interest are the IOPS (Input/Output Operations per Second) and
+the bandwidth with which data can be written or read.
+
+A 'mirror' vdev (RAID1) will approximately behave like a single disk in regard
+to both parameters when writing data. When reading data, it will behave like
+the number of disks in the mirror.
+
+A common situation is to have 4 disks. When setting them up as 2 mirror vdevs
+(RAID10), the pool will have the write characteristics of two single disks in
+regard to IOPS and bandwidth. For read operations, it will resemble 4 single
+disks.
+
+A 'RAIDZ' vdev of any redundancy level will approximately behave like a single
+disk in regard to IOPS, but with a lot of bandwidth. How much bandwidth depends
+on the size of the RAIDZ vdev and the redundancy level.
+
+For running VMs, IOPS is the more important metric in most situations.
+
+
+[[sysadmin_zfs_raid_size_space_usage_redundancy]]
+Size, Space usage and Redundancy
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+While a pool made of 'mirror' vdevs will have the best performance
+characteristics, the usable space will be 50% of the available disk capacity,
+or less if a mirror vdev consists of more than 2 disks, for example in a 3-way
+mirror. At least one healthy disk per mirror is needed for the pool to stay
+functional.
+
+The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
+the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
+without losing data. A special case is a 4-disk pool with RAIDZ2. In this
+situation, it is usually better to use 2 mirror vdevs for the better
+performance, as the usable space will be the same.
+
+Another important factor when using any RAIDZ level is how ZVOL datasets, which
+are used for VM disks, behave. For each data block, the pool needs parity data
+that is at least the size of the minimum block size defined by the `ashift`
+value of the pool. With an ashift of 12, the block size of the pool is 4k. The
+default block size for a ZVOL is 8k. Therefore, in a RAIDZ2, each 8k block
+written will cause two additional 4k parity blocks to be written:
+8k + 4k + 4k = 16k. This is, of course, a simplified approach, and the real
+situation will be slightly different, with metadata, compression, and such not
+being accounted for in this example.
+
+This behavior can be observed when checking the following properties of the
+ZVOL:
+
+ * `volsize`
+ * `refreservation` (if the pool is not thin provisioned)
+ * `used` (if the pool is thin provisioned and without snapshots present)
+
+----
+# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
+----
+
+`volsize` is the size of the disk as it is presented to the VM, while
+`refreservation` shows the reserved space on the pool, which includes the
+expected space needed for the parity data. If the pool is thin provisioned, the
+`refreservation` will be set to 0. Another way to observe the behavior is to
+compare the used disk space within the VM and the `used` property. Be aware
+that snapshots will skew the value of the `used` property.
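
As a rough illustration of the space arithmetic above, here is a small sketch
(illustrative helper names only) that restates the simplified model from the
patch text: usable space of a RAIDZ is roughly N-P disks, and each ZVOL block
needs parity blocks of at least the pool's minimum block size; metadata,
compression and padding are ignored, as in the text.

// Simplified model from the section above (no metadata, compression or
// allocation padding is taken into account).

// Usable space of a RAIDZ vdev with `disks` disks and parity level `parity`
// (RAIDZ1 => 1, RAIDZ2 => 2, RAIDZ3 => 3): roughly N - P disks worth of data.
function raidzUsableDisks(disks, parity) {
    return disks - parity;
}

// Space one ZVOL block occupies on a RAIDZ vdev: the data block itself plus
// `parity` blocks of at least the pool's minimum block size (2^ashift).
function zvolBlockOnRaidz(volblocksize, ashift, parity) {
    const minBlock = 2 ** ashift;               // ashift 12 => 4k
    return volblocksize + parity * minBlock;
}

// 4 disks as RAIDZ2 vs. 2 mirror vdevs: the usable space is the same
// (2 disks worth), which is why the mirror layout is usually preferred.
console.log(raidzUsableDisks(4, 2));            // 2
// An 8k ZVOL block on RAIDZ2 with ashift 12: 8k + 4k + 4k = 16k
console.log(zvolBlockOnRaidz(8 * 1024, 12, 2)); // 16384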

[pve-devel] [PATCH manager] ui: add ceph recovery graph and progressbar

2020-07-21 Thread Dominik Csapak
shows a runningChart (like the one for reads) for the recovery speed,
also shows a progressbar of the health percentage of the cluster,
with an estimate of how long the recovery might take:
(unhealthy objects) / (objects per second)

only show them when a recovery is ongoing or there are unhealthy
objects

Signed-off-by: Dominik Csapak 
---
we could also opt to only show it when there really is a recovery
going on (recovering_objects_per_sec !== undefined), but
in my tests, ceph would sometimes intermittently not return those
values during recovery, making the panel flicker, so i opted
to also show it when there are unhealthy objects
(though i am not opposed to the first option, with some counter
to catch intermittently missing values)
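
For reference, that counter idea could look roughly like the following sketch
(the function, field and constant names are illustrative, not part of this
patch): only hide the panel after a few consecutive status updates without a
recovery rate, so intermittent gaps do not cause flicker.

// Decide whether the recovery panel should stay visible: reset the counter
// whenever ceph reports a recovery rate, and only hide after a few
// consecutive updates without one.
const MAX_MISSING_UPDATES = 3; // updates to tolerate before hiding

function shouldShowRecovery(pgmap, state) {
    if (pgmap.recovering_objects_per_sec !== undefined) {
        state.missing = 0;
        return true;
    }
    state.missing = (state.missing || 0) + 1;
    return state.missing < MAX_MISSING_UPDATES;
}

// in the status update handler, something along the lines of:
// me.down('#recovery').setVisible(shouldShowRecovery(pgmap, me.recoveryState = me.recoveryState || {}));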

 www/manager6/ceph/Status.js | 63 +++--
 1 file changed, 60 insertions(+), 3 deletions(-)

diff --git a/www/manager6/ceph/Status.js b/www/manager6/ceph/Status.js
index e75fe5c0..7d1474eb 100644
--- a/www/manager6/ceph/Status.js
+++ b/www/manager6/ceph/Status.js
@@ -188,9 +188,37 @@ Ext.define('PVE.node.CephStatus', {
items: [
{
flex: 1,
-   xtype: 'proxmoxGauge',
-   itemId: 'space',
-   title: gettext('Usage')
+   xtype: 'container',
+   items: [
+   {
+   xtype: 'proxmoxGauge',
+   itemId: 'space',
+   title: gettext('Usage')
+   },
+   {
+   flex: 1,
+   border: false,
+   },
+   {
+   xtype: 'container',
+   itemId: 'recovery',
+   hidden: true,
+   padding: 25,
+   items: [
+   {
+   itemId: 'recoverychart',
+   xtype: 'pveRunningChart',
+   title: gettext('Recovery'),
+   renderer: PVE.Utils.render_bandwidth,
+   height: 100,
+   },
+   {
+   xtype: 'progressbar',
+   itemId: 'recoveryprogress',
+   },
+   ]
+   },
+   ]
},
{
flex: 2,
@@ -297,6 +325,35 @@ Ext.define('PVE.node.CephStatus', {
me.writes.addDataPoint(writes);
me.readiops.addDataPoint(readiops);
me.writeiops.addDataPoint(writeiops);
+
+   let degraded = pgmap.degraded_objects || 0;
+   let misplaced = pgmap.misplaced_objects || 0;
+   let unfound = pgmap.unfound_objects || 0;
+   let unhealthy = degraded + unfound + misplaced;
+   // update recovery
+   if (pgmap.recovering_objects_per_sec !== undefined || unhealthy > 0) {
+   let total = pgmap.misplaced_total || pgmap.unfound_total || pgmap.degraded_total || 0;
+   if (total === 0) return;
+   let recovered = (total - unhealthy) || 0;
+   let speed = pgmap.recovering_bytes_per_sec || 0;
+   let speedTxt = PVE.Utils.render_bandwidth(speed);
+   let obj_per_sec = speed / (4*1024*1024); // 4MiB per Object
+   let duration = Proxmox.Utils.format_duration_human(unhealthy/obj_per_sec);
+
+   let percentage = recovered/total;
+   let txt = `${(percentage*100).toFixed(2)}%`;
+   if (speed > 0) {
+   txt += ` (${speedTxt} - ${duration} left)`;
+   }
+
+   me.down('#recovery').setVisible(true);
+   me.down('#recoveryprogress').updateValue(percentage);
+   me.down('#recoveryprogress').updateText(txt);
+   me.down('#recoverychart').addDataPoint(speed);
+   } else {
+   me.down('#recovery').setVisible(false);
+   me.down('#recoverychart').addDataPoint(0);
+   }
 },
 
 initComponent: function() {
-- 
2.20.1



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] Proxmox Org on GitHub

2020-07-21 Thread Kamil Trzciński
Hi Proxmox Team,

I got a message from Thomas Lamprecht asking about
the github proxmox org[0] that I'm the owner of.

Thomas indicated that the existence of this organisation
is confusing to Proxmox users, as some users look
for sources or help on this organisation, while all
development happens over the mailing list.

Thomas proposed to use this organisation as a read-only mirror
of `git.proxmox.com` and asked if I would like to hand off
the ownership.

I would love to!

I propose the following process of transferring
ownership:

1. I would like to use this public communication thread for
   the transfer process to make it formal
2. I would ask Martin or Dietmar to provide a list of GitHub accounts
   to which I should transfer ownership (feel free to send it privately)
3. I will add these member(s) to the GitHub org
4. I will ask to confirm that you got access
5. I will `Leave` myself from the Org

Thanks for the awesome work on this project over
all these years.

Kamil

[0]: https://github.com/proxmox
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] Proxmox Org on GitHub

2020-07-21 Thread Martin Maurer
Hello Kamil,

Thanks, here is the list of the requested GitHub accounts.

https://github.com/Fabian-Gruenbichler
https://github.com/Blub
https://github.com/ThomasLamprecht


On 7/21/20 3:11 PM, Kamil Trzciński wrote:
> Hi Proxmox Team,
> 
> I got a message from Thomas Lamprecht asking about
> the github proxmox org[0] that I'm the owner of.
> 
> Thomas indicated that the existence of this organisation
> is confusing to Proxmox users, as some users look
> for sources or help on this organisation, while all
> development happens over the mailing list.
> 
> Thomas proposed to use this organisation as a read-only mirror
> of `git.proxmox.com` and asked if I would like to hand off
> the ownership.
> 
> I would love to!
> 
> I propose the following process of transferring
> ownership:
> 
> 1. I would like to use this public communication thread for
>    the transfer process to make it formal
> 2. I would ask Martin or Dietmar to provide a list of GitHub accounts
>    to which I should transfer ownership (feel free to send it privately)
> 3. I will add these member(s) to the GitHub org
> 4. I will ask to confirm that you got access
> 5. I will `Leave` myself from the Org
> 
> Thanks for the awesome work on this project over
> all these years.
> 
> Kamil
> 
> [0]: https://github.com/proxmox
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> 
> 

-- 
Best Regards,

Martin Maurer

mar...@proxmox.com
https://www.proxmox.com


Proxmox Server Solutions GmbH
Bräuhausgasse 37, 1050 Vienna, Austria
Commercial register no.: FN 258879 f
Registration office: Handelsgericht Wien


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH manager] ui: add ceph recovery graph and progressbar

2020-07-21 Thread Thomas Lamprecht
On 21.07.20 15:06, Dominik Csapak wrote:
> shows a runningChart (like the one for reads) for the recovery speed,
> also shows a progressbar of the health percentage of the cluster,
> with an estimate of how long the recovery might take:
> (unhealthy objects) / (objects per second)
> 
> only show them when a recovery is ongoing or there are unhealthy
> objects
> 
> Signed-off-by: Dominik Csapak 
> ---
> we could also opt to only show it when there really is a recovery
> going on (recovering_objects_per_sec !== undefined), but
> in my tests, ceph would sometimes intermittently not return those
> values during recovery, making the panel flicker, so i opted
> to also show it when there are unhealthy objects
> (though i am not opposed to the first option, with some counter
> to catch intermittently missing values)
> 
>  www/manager6/ceph/Status.js | 63 +++--
>  1 file changed, 60 insertions(+), 3 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel