On 20.02.2016 at 08:25, Alexandre DERUMIER wrote:
>>Some articles, for instance
>>https://www.kernel.org/doc/ols/2009/ols2009-pages-169-184.pdf,
>>explicitly recommend disabling irqbalance when 10GbE is involved.
>>
>>Do you know if this is still true today? After all, the paper is from 2009.
Well, the article is about disabling irqbalance
First, remove trailing whitespace from log messages on state changes.
This needs to touch some regression tests, but with no change in
semantics.
Second, add a missing parenthesis in the "fixup service location"
message. This needs no regression test log.expect changes.
Signed-off-by: Thomas Lamprecht
Sorry for the big resend of this series, but otherwise, I think, we completely
lose sight of what ought to be related.
The first two are the same as before: (1) an output formatting fix and (2) a
follow-up to 9da84a0d51dcc1e1b80e2a92127749de38851e5f; no changes here compared
to the previous version.
The third (fix possible o
This fixes a bug introduced by commit 9da84a0, which set the wrong
hash when a disabled service got a migrate/relocate command.
We set "node => $target"; while our state machine could handle that,
we got some "uninitialized value" warnings when migrating a disabled
service to an inactive LRM. Better
Description of the problem; imagine the following:
We get the CRM command to migrate 'vm:100' from A to B.
Now when the migration fails, we would normally get placed in the
started state on the source node A through the CRM when it processes
our result.
But if the CRM hadn't processed our result bef
We want to give the error state priority over EWRONG_NODE, as a
service may be in the error state because of EWRONG_NODE.
Change the error message a bit and add the possibility to not log
the error message; this will be used in a future patch to spam
the log less.
Signed-off-by: Thomas Lamprecht
---
If a service is in the error state, we get a not particularly useful log
message about every 5 seconds; this adds up rather quickly and is
not very helpful.
This changes the behaviour so that we get an initial log message
and then one only once per minute.
Signed-off-by: Thomas Lamprecht
---
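The once-per-minute idea above can be sketched as follows (in C for illustration only; the actual change is in the HA manager's Perl code, and `should_log` is a hypothetical helper, not a function from this patch):

```c
#include <time.h>

/* Decide whether the error for a service in the error state should be
 * logged now: always log the first time (last == 0), afterwards log
 * at most once per minute. */
int should_log(time_t now, time_t last)
{
    return last == 0 || now - last >= 60;
}
```

The caller would record `now` as the new `last` whenever this returns true, so a persistent error produces one initial message and then one per minute instead of one every 5 seconds.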
This one is prob
If we get an 'EWRONG_NODE' error from the migration, we have no sane
way out. If we then place the service in the started state, we get the
'EWRONG_NODE' error again, and it will even place the service in
the migration state again (when it's not restricted by a group) and
thus result in an infinite starte
Hi Alexandre,
> On 19.02.2016 at 15:06, Alexandre DERUMIER wrote:
>
> Hi,
>
> I think it could be great to add irqbalance as a recommended package for
> pve-kernel.
>
> I have seen a lot of improvement, mainly with network access (Ceph for
> example),
> when a lot of network interrupts occur
Hi,
I think it could be great to add irqbalance as a recommended package for
pve-kernel.
I have seen a lot of improvement, mainly with network access (Ceph for example),
when a lot of network interrupts occur and go to the same CPU.
It has been in the Debian linux-image package for years.
Regards,
Ale
The following 2 patches (one for pve-storage and one for pve-manager)
add the ability to add a thin pool through the GUI.
My patch 'restrict lvm thin on qemu to storage type raw' should be applied
before these two (to prevent running into error messages in the GUI).
this patch adds the ability to add existing LVM thin pools to the
storage configuration via the GUI
Signed-off-by: Dominik Csapak
---
www/manager/Makefile | 1 +
www/manager/dc/StorageView.js | 11 ++
www/manager/storage/LvmThinEdit.js | 269 +
this patch adds an lvmthin scan to the API, so that we can get a list
of thin pools for a specific VG via an API call
Signed-off-by: Dominik Csapak
---
PVE/API2/Storage/Scan.pm | 35 +++
PVE/Storage/LvmThinPlugin.pm | 14 ++
2 files changed, 49 inse
also for current master
cleanup of patch from Dhaussy Alexandre from 02/15/2016
Signed-off-by: Dominik Csapak
---
PVE/QemuServer.pm | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7bf3e4d..18f0c29 100644
--- a/PVE/QemuServer
Drop patches applied upstream
Update use-var-lib-vz-as-default-dir.patch
Note: run "make download" before building to fetch the
upstream release candidate.
---
Update to upstream release candidate for upcoming 2.0.0
release for testing purposes.
Makefile
prevents volumes from being active when they are not actually in use
this is a cleanup of Dhaussy Alexandre's patch from 02/15/2016
Signed-off-by: Dominik Csapak
---
PVE/QemuServer.pm | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
applied
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
---
Changes since v1 like in pve-firewall:
The option is now called 'ndp', defaults to 1 and exists on host & vm level.
www/manager/grid/FirewallOptions.js | 2 ++
www/manager6/grid/FirewallOptions.js | 2 ++
2 files changed, 4 insertions(+)
diff --git a/www/manager/grid/FirewallOptions.js
b/ww
It's enabled by default.
---
Changes since v1:
It's now a host-level as well as VM-level option
The option name changed from 'disable_ndp' with default 0 to just 'ndp' with
default 1.
Added a patch to include router-solicitation in the NeighborDiscovery macro.
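As an illustration only (the exact file path and option syntax here are assumed from the usual PVE firewall config layout, not taken from this patch), disabling NDP for a single VM with the new option would look like:

```
# /etc/pve/firewall/<vmid>.fw   (path assumed)
[OPTIONS]
ndp: 0
```

With the default of `ndp: 1`, neighbor discovery stays allowed on both host and VM level unless explicitly turned off.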
src/PVE/API2/Firewall/Host.pm |
to be more consistent with the host-wide NDP option.
This macro is now mostly useful for disabling NDP on VMs.
---
src/PVE/Firewall.pm | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/PVE/Firewall.pm b/src/PVE/Firewall.pm
index 25f1cc9..c556be4 100644
--- a/src/PVE/Firewall.pm
+++ b/src/PVE/Fi
>
> This should fix migration from qemu 2.5 (machine 2.4) to qemu 2.4
>
> http://lists.nongnu.org/archive/html/qemu-devel/2016-02/msg04310.html
> https://forum.proxmox.com/threads/cant-live-migrate-after-dist-upgrade.26097/
>
> Please test
applied (so that people can start testing it).
That error check is wrong (see man strtoul)
I uploaded a fix here:
https://git.proxmox.com/?p=pve-cluster.git;a=commitdiff;h=8f8fbe90036f7cb52d2786def6ec4a3dde80f620
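For reference, a minimal sketch of the error checking man strtoul describes (the function name and structure here are illustrative, not the actual memdb.c fix):

```c
#include <errno.h>
#include <stdlib.h>

/* Parse an unsigned decimal integer with full strtoul error checking:
 * errno must be reset before the call, ERANGE indicates overflow, and
 * endptr distinguishes "no digits" and "trailing junk" from success. */
int parse_ulong(const char *s, unsigned long *out)
{
    char *end;
    errno = 0;
    unsigned long v = strtoul(s, &end, 10);
    if (errno == ERANGE)
        return -1;              /* value out of range */
    if (end == s || *end != '\0')
        return -1;              /* no digits, or trailing garbage */
    *out = v;
    return 0;
}
```

Checking only the return value (e.g. comparing against 0 or ULONG_MAX) is not sufficient, since those are also valid conversion results.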
> diff --git a/data/src/memdb.c b/data/src/memdb.c
> index af20e05..57c2804 100644
> --- a/data/src/memdb.c
> +++ b/data/src/memdb
> I wonder how the native qemu backup blockjob performs vs the proxmox vma
> backup format?
We use the qemu backup blockjob, just slightly modified...
This should fix migration from qemu 2.5 (machine 2.4) to qemu 2.4
http://lists.nongnu.org/archive/html/qemu-devel/2016-02/msg04310.html
https://forum.proxmox.com/threads/cant-live-migrate-after-dist-upgrade.26097/
Please test
Signed-off-by: Alexandre Derumier
---
...g-unbreak-migration-compati
This is not in the GUI, so you need to set it manually:
rootfs: ...,quota=1
> On February 18, 2016 at 5:56 PM Dietmar Maurer wrote:
>
>
> ideas from this howto:
> > https://www.howtoforge.com/tutorial/how-to-setup-virtual-containers-with-lxc-and-quota/
> > be incorporated in Proxmox to suppo