Many thanks for your answer, Alexandre. If I can run the test before putting
the servers into production, I will do it (depending on MS-SQL Server).
And if in the meantime you have better patches, please let me know.
Many thanks again, and the best of success to you.
Best regards
Cesar
- Original Message -
> I think we could add:
>
> numa:0|1.
I like that ;-)
> which generates the first config: create 1 NUMA node per socket and share the
> RAM across the nodes
>
>
>
> and also, for advanced users who need manual pinning:
>
>
> numa0: cpus=,memory=,hostnode=,policy=(bind|preferred|...)
> nu
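To make the proposed syntax concrete, a hypothetical filled-in config could look
like the lines below; none of this is implemented yet, the option names just
follow the draft syntax above, and the values (cpus, MB of memory, host node)
are purely illustrative:

numa0: cpus=0-3,memory=2048,hostnode=0,policy=bind
numa1: cpus=4-7,memory=2048,hostnode=1,policy=bind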
Well, I think that some changes will still be made to the config format;
the code is really fresh and needs to be discussed, reviewed, ...
So if you are going into production in a few days, maybe it's better to wait.
If you really want to run tests, save the current proxmox debs from
/var/cache/apt/archives/ first;
you can use dpkg to roll back.
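For example (the backup directory is arbitrary, and the wildcards assume the
pve-qemu-kvm and qemu-server debs are still in the apt cache):

mkdir -p /root/deb-backup
cp /var/cache/apt/archives/pve-qemu-kvm_*.deb /var/cache/apt/archives/qemu-server_*.deb /root/deb-backup/
# to roll back later:
dpkg -i /root/deb-backup/*.deb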
According to this web link:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/pdf/Virtualization_Tuning_and_Optimization_Guide/Red_Hat_Enterprise_Linux-7-Virtualization_Tuning_and_Optimization_Guide-en-US.pdf
The title "Configuring Multi-Queue virtio-net" say that the kerne
Hi Dietmar and Alexandre.
First of all, I want to thank and congratulate you both for the effort and
professionalism that you demonstrate in your work.
Now I have a question: as Alexandre gave me two patches that apparently will
solve my NUMA problem in PVE, I need to ask you whether they will be supported b
Ok,
Finally I found the last pieces of the puzzle:
to have autonuma balancing, we just need:
2 sockets - 2 cores - 2 GB RAM
-object memory-backend-ram,size=1024M,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object memory-backend-ram,size=1024M,id=ram-node1
-numa node,nodeid=1,cpus=2-3,m
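For reference, a minimal complete invocation for that 2-socket / 2-core / 2 GB
example would presumably be the following (only the NUMA-related options plus
the matching -smp/-m are shown, disks etc. are left out; the binary is called
kvm on a proxmox host):

qemu-system-x86_64 -m 2048 -smp 4,sockets=2,cores=2 \
 -object memory-backend-ram,size=1024M,id=ram-node0 \
 -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \
 -object memory-backend-ram,size=1024M,id=ram-node1 \
 -numa node,nodeid=1,cpus=2-3,memdev=ram-node1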
>>shared? That looks strange to me.
I mean split across both nodes.
I have checked libvirt a little,
and I'm not sure, but I think that memory-backend-ram is optional for
autonuma.
It's more about CPU pinning/memory pinning on a selected host node.
Here is an example for libvirt:
http://ww
> "When do memory hotplug, if there is numa node, we should add the memory
> size to the corresponding node memory size.
>
> For now, it mainly affects the result of hmp command "info numa"."
>
>
> So, it seems to be done automatically.
> Not sure on which node the pc-dimm is assigned, but maybe
>>and how does this work with memory hotplug?
http://lists.gnu.org/archive/html/qemu-devel/2014-09/msg03187.html
"When do memory hotplug, if there is numa node, we should add
the memory size to the corresponding node memory size.
For now, it mainly affects the result of hmp command "info numa"."
I just sent a new patch, much simpler,
and it should work for migration between NUMA and non-NUMA hosts.
It's also possible to enable NUMA in the guest only.
- Original Message -
From: "Alexandre DERUMIER"
To: "Dietmar Maurer"
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, 2 December 201
This adds numa topology support
numa:
on: only expose numa to the guest
other values: bind memory to host numa nodes
example:
---
sockets:4
cores:2
memory:4096
numa: bind
qemu command line for values != on
--
-object memory-backend-ram,size=1024M,policy=bind,hos
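For the 4-socket / 4096 MB example above, the fully generated NUMA arguments
would presumably look like this (assuming the memory is split evenly, each
guest node is bound to the host node with the same index, and the host really
has four NUMA nodes):

-object memory-backend-ram,size=1024M,policy=bind,host-nodes=0,id=ram-node0
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0
-object memory-backend-ram,size=1024M,policy=bind,host-nodes=1,id=ram-node1
-numa node,nodeid=1,cpus=2-3,memdev=ram-node1
-object memory-backend-ram,size=1024M,policy=bind,host-nodes=2,id=ram-node2
-numa node,nodeid=2,cpus=4-5,memdev=ram-node2
-object memory-backend-ram,size=1024M,policy=bind,host-nodes=3,id=ram-node3
-numa node,nodeid=3,cpus=6-7,memdev=ram-node3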
>>So migration from a NUMA host to a non-NUMA host always fails?
I think it could work with:
source host numa:
-object memory-backend-ram,size=1024M,policy=bind,host-nodes=0,id=ram-node0
-numa node,nodeid=0,cpus=0,memdev=ram-node0
-object memory-backend-ram,size=1024M,policy=bind,host-nodes=1,id=
>>and how does this work with memory hotplug?
I know that pc-dimm is supported with NUMA; not sure if we can update the
topology when adding more memory.
We can hot-add a memory-backend-ram; not sure if we can update it with a new
value.
I'll try to find more information.
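For reference, hot-plugging a dimm into a specific guest node looks roughly
like this from the qemu monitor, assuming the VM was started with hotplug
slots (e.g. -m 4096,slots=4,maxmem=8G); the ids and the target node are just
examples:

object_add memory-backend-ram,id=mem-dimm0,size=1024M
device_add pc-dimm,id=dimm0,memdev=mem-dimm0,node=1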
>>And ballooning?
I
> This adds numa topology support
>
> numa[0-8]: memory=,[policy=]
and how does this work with memory hotplug? And ballooning?
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> >>what happens when you migrate such a VM to a host with different NUMA
> architecture?
>
> If the host has fewer numa nodes than the VM has sockets, the qemu process doesn't start.
> (I have a check for this in my patch)
>
> AFAIK, this is the only restriction.
So migration from a NUMA host to a non-NUMA h
>>I thought that should be:
>>$save_vmstate = 0 if !$snap->{vmstate};
>>$freezefs = 0 if $snap->{vmstate};
Yes, indeed !
- Original Message -
From: "Dietmar Maurer"
To: "Alexandre DERUMIER", "Wolfgang"
Cc: pve-devel@pve.proxmox.com
Sent: Tuesday, 2 December 2014 16:51:35
Subject: RE: [
> >>my opinion is: you freeze the fs and have a consistent backup; it doesn't
> >>matter
> whether with or without RAM?
>
> When we back up the RAM, the VM goes into pause mode, so there is no disk access and
> no need to freeze the fs.
Sounds reasonable, but the current code implements the opposite?
>>1) I have these packages installed in my two servers (configured in HA and
>>both identical in software and hardware)
It should work on top of proxmox 3.3 (I haven't tested it).
To be sure, make a copy of your current debs (/var/cache/apt/archives), in case
you want to roll back later.
>>2) When
Please excuse me if I am posting in the wrong place; if anyone thinks I am
wrong, please let me know so I don't do it again.
I want to download DRBD 8.4 from the git portal (which provides the DRBD 8.4.5
version) to install it in PVE, but when I run:
shell> git clone git://git.drbd.org/drbd-8.4.git
Cloning into 'drbd
Hi Alexandre
OOOooo... Many thanks for doing the patches,
...I will be installing them.
Moreover, I would like to tell you two things:
1) I have these packages installed in my two servers (configured in HA and
both identical in software and hardware)
pveversion -v
proxmox-ve-2.6.32: 3.3-139 (run
>>my opinion is: you freeze the fs and have a consistent backup; it doesn't
>>matter whether with or without RAM?
When we back up the RAM, the VM goes into pause mode, so there is no disk access and no
need to freeze the fs.
(BTW, $freezefs doesn't do anything currently; we need to implement the qga command,
f
Hi Alexandre,
I have a question about the variable $freezefs that you use in
PVE::QemuServer.
In sub snapshot_create you pass $freezefs, but when $snap->{vmstate} is
not set, then $freezefs is 0. Why?
My opinion is: you freeze the fs and have a consistent backup; it doesn't
matter whether with or wit
>>For huge pages: Will need PVE a patch?
For transparent hugepages:
for the 3.10 kernel they are enabled by default.
cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
For the 2.6.32 kernel I'm not sure, but I think that openvz has disabled them by
default; I don't remember exactly when.
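To check whether huge pages are actually being used, something like this should
be enough on the host:

cat /sys/kernel/mm/transparent_hugepage/enabled
grep AnonHugePages /proc/meminfo
# force it on, in case it reports "madvise" or "never":
echo always > /sys/kernel/mm/transparent_hugepage/enabled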
OOO... It will be wonderful!!!
I don't want to wait to have such patches.
The servers (which will be in HA) will be in production in a few days.
Many thanks, and I will wait.
Important question:
For huge pages: will PVE need a patch?
- Original Message -
From: "Alexandre DERUMIER"
>>what happens when you migrate such a VM to a host with different NUMA
>>architecture?
If the host has fewer numa nodes than the VM has sockets, the qemu process doesn't start.
(I have a check for this in my patch)
AFAIK, this is the only restriction.
Some good information about numa here:
http://www.linu
>>Will network multiqueue support work?
>>Does QEMU support network multiqueue?
Yes, and proxmox already supports it (not in the GUI).
Just edit your vm config file:
net0: ..,queues=
(and use the new drivers; if not, I think you'll get a BSOD)
It should improve rx or tx, I don't remember exactly.
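A hypothetical example for a VM with 4 vCPUs (the MAC and bridge are
placeholders); inside a Linux guest the extra queues still have to be enabled
with ethtool:

net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0,queues=4

# inside the guest:
ethtool -L eth0 combined 4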
Hi Alexandre
Will network multiqueue support work?
Does QEMU support network multiqueue?
- Original Message -
From: "Alexandre DERUMIER"
To: "Cesar Peschiera"
Cc:
Sent: Tuesday, December 02, 2014 2:57 AM
Subject: Re: [pve-devel] New virtio-win driver 0.1-94.iso version in fedor
> numa: [policy=]
>
> then generate the numa nodes from the number of sockets, with the same policy, and
> split the memory across the nodes
what happens when you migrate such a VM to a host with a different NUMA
architecture?
Hi Alexandre
Thanks for your reply
I am not sure that will be necessary; please read the PDF document at the
web link below.
It says many things, for example that NUMA and huge pages work in
automatic mode and the optimization happens on the fly, so I guess I don't need
to worry about it any more.
Note that if we want to have something simpler but less flexible,
we could have a
numa: [policy=]
then generate the numa nodes from the number of sockets, with the same policy, and
split the memory across the nodes.
- Original Message -
From: "Alexandre Derumier"
To: pve-devel@pve.proxmox.com
Hi,
can you test this:
http://odisoweb1.odiso.net/pve-qemu-kvm_2.2-2_amd64.deb
http://odisoweb1.odiso.net/qemu-server_3.3-5_amd64.deb
then edit your vm config file:
sockets: 2
cores: 4
memory: 262144
numa0: memory=131072,policy=bind
numa1: memory=131072,policy=bind
(you need one numaX entry per socket)
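Once the VM is up, the topology can be verified from inside the guest, for
example with:

numactl --hardware
# or, if numactl is not installed:
lscpu | grep -i numa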
This adds numa topology support
numa[0-8]: memory=,[policy=]
example:
---
sockets:4
cores:2
memory:4096
numa0: memory=1024,policy=bind
numa1: memory=1024,policy=bind
numa2: memory=1024,policy=bind
numa3: memory=1024,policy=bind
- total numa memory should be equal to vm memory
- we assign 1 n
Signed-off-by: Alexandre Derumier
---
debian/control |4 ++--
debian/rules |2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/debian/control b/debian/control
index 67b9ee0..0a2e776 100644
--- a/debian/control
+++ b/debian/control
@@ -2,12 +2,12 @@ Source: pve-qemu-kv
This adds numa support to qemu.
(I'll send qemu-server patches later)
Hi,
some news.
It seems that the current proxmox qemu build doesn't have numa support enabled,
so the previous command line doesn't work.
I'll send a patch for pve-qemu-kvm and also add numa options to the vm config
file.
- Original Message -
From: "Alexandre DERUMIER"
To: "Cesar Peschiera"
Cc:
>>Please let me make a comparison with VMware...
>>In VMware ESX, when a VM is created or modified, it shows me options for
>>cores and threads separately to configure, while PVE only shows
>>me the cores; this situation can confuse anyone, because the user may
>>think that onl
>>Thanks, where does PVE set that value?
/usr/share/perl5/PVE/Firewall.pm
sub update_nf_conntrack_max {
    my ($hostfw_conf) = @_;
    my $max = 65536; # reasonable default
    my $options = $hostfw_conf->{options} || {};
    if (defined($options->{nf_conntrack_max}) && ($options->{nf_conntr
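So the limit can be raised per node through the host firewall options; a sketch
(the value is arbitrary, and the path is assumed to be /etc/pve/local/host.fw):

[OPTIONS]
nf_conntrack_max: 1048576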
Hi,
>>NETIF="ifname=eth0,mac=02:00:00:**:3b:b9,host_ifname=veth106.0,host_mac=02:00:00:**:3b:b8,bridge=vmbr0;ifname=eth1,mac=02:00:00:4a:**:b2,host_ifname=veth106.1,host_mac=02:00:00:4a:**:b3,bridge=vmbr0"
>>
It seems that you haven't enabled the firewall on the interface.
(bridge should be like bri
Am 02.12.2014 um 09:31 schrieb Dietmar Maurer:
>> The kernel host log is full of:
>>
>> [1620408.606201] net_ratelimit: 462 callbacks suppressed [1620408.606204]
>> nf_conntrack: table full, dropping packet
>>
>> 1.) Where do we use nf_conntrack?
>
> everywhere
>
>> 2.) Should PVE ship with a s
> The kernel host log is full of:
>
> [1620408.606201] net_ratelimit: 462 callbacks suppressed [1620408.606204]
> nf_conntrack: table full, dropping packet
>
> 1.) Where do we use nf_conntrack?
everywhere
> 2.) Should PVE ship with a sysctl file raising the nf conntrack limits?
You can adjust
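For example directly via sysctl; the value below is only illustrative, not a
tuned recommendation:

echo 'net.netfilter.nf_conntrack_max = 1048576' > /etc/sysctl.d/99-conntrack.conf
sysctl -p /etc/sysctl.d/99-conntrack.conf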
Hi,
Am 02.12.2014 um 09:13 schrieb Stefan Priebe - Profihost AG:
> Hi,
>
> since starting to use the pve firewall, today was the first time where VMs
> and the host started heavily dropping packets.
>
> I'm only using IP and MAC filters. Nothing else.
>
> The kernel host log is full of:
>
> [1620408
Hi,
the PVE firewall does not filter traffic to containers via veth using the
blacklist!
example:
NETIF="ifname=eth0,mac=02:00:00:**:3b:b9,host_ifname=veth106.0,host_mac=02:00:00:**:3b:b8,bridge=vmbr0;ifname=eth1,mac=02:00:00:4a:**:b2,host_ifname=veth106.1,host_mac=02:00:00:4a:**:b3,bridge=vmbr0"
Reg
Hi,
since starting to use the pve firewall, today was the first time where VMs
and the host started heavily dropping packets.
I'm only using IP and MAC filters. Nothing else.
The kernel host log is full of:
[1620408.606201] net_ratelimit: 462 callbacks suppressed
[1620408.606204] nf_conntrack: table