> > It is still not clear if we want a single file, or multiple files. I
> > guess this requires careful evaluation. How does such a system behave
> > when we have 3000 VMs? We need to test that before we go further.
>
> Oh, I thought a single file containing 32 net in/out values. I've no idea how…
>
On 17.04.2013 09:08, Dietmar Maurer wrote:
>>> It is still not clear if we want a single file, or multiple files. I
>>> guess this requires careful evaluation. How does such a system behave
>>> when we have 3000 VMs? We need to test that before we go further.
>>
>> Oh, I thought a single file containing 32 net in/out values. I've no idea how…
> Sure, but how to benchmark? Just create a file with 32 values, and create
> 2 files (this would be the average number of network interfaces, maybe it is just 1.5).
>
> And then update both each second and look at the disk I/O? Or how did you
> imagine this? I think it makes no sense to compare 32 values against…
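The benchmark idea discussed above (update one small file per VM each interval and watch the disk I/O) could be sketched roughly as follows. This is a hypothetical illustration, not code from the thread; the file layout and names are assumptions, and the `fsync` per file is meant to approximate the worst case of unbuffered RRD updates:

```python
import os
import tempfile
import time

def write_round(directory, n_files=3000, n_values=32):
    """Write one small counter file per VM and return the elapsed seconds."""
    start = time.monotonic()
    payload = (" ".join(["0"] * n_values) + "\n").encode()
    for i in range(n_files):
        path = os.path.join(directory, "vm%d.dat" % i)
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each update to disk, worst case
    return time.monotonic() - start

# Scaled-down demo; for a real test use n_files=3000 on a rotating disk
with tempfile.TemporaryDirectory() as tmp:
    print("elapsed: %.3fs" % write_round(tmp, n_files=100))
```

Running this on both an IDE/SATA disk and an SSD, with and without `rrdcached`-style buffering, would give a rough answer to the "3000 VMs every 10s" question.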
On 17.04.2013 09:28, Dietmar Maurer wrote:
>> Sure, but how to benchmark? Just create a file with 32 values, and create
>> 2 files (this would be the average number of network interfaces, maybe it is just 1.5).
>>
>> And then update both each second and look at the disk I/O? Or how did you
>> imagine this? I think it makes no sense to compare 32 values against…
> > I am particularly interested in IO load (on normal IDE disks) and memory
> > requirements of rrdcached.
>
> Don't have IDE disks at all. Can only provide SSD or SAS or SATA2 disks.
I know that you have quite fast hardware. But we need to test on 'normal'
hardware. Testing IO on SSDs makes no sense…
On 17.04.2013 09:44, Dietmar Maurer wrote:
>>> I am particularly interested in IO load (on normal IDE disks) and memory
>>> requirements of rrdcached.
>>
>> Don't have IDE disks at all. Can only provide SSD or SAS or SATA2 disks.
>
> I know that you have quite fast hardware. But we need to test on 'normal' hardware.
> But on IDE with 3000 Files it doesn't work every 10s. At least not without
> rrdcached.
> But that sounds in general way too much for such HW.
So maybe it is a bad idea to use rrd for that purpose - any other suggestions?
_______________________________________________
pve-devel mailing list
pve-devel@pve.proxmox.com
On 17.04.2013 13:52, Dietmar Maurer wrote:
>> But on IDE with 3000 Files it doesn't work every 10s. At least not without
>> rrdcached.
>> But that sounds in general way too much for such HW.
>
> So maybe it is a bad idea to use rrd for that purpose - any other suggestions?
How have you calculated your 3000 Files?
> How have you calculated your 3000 Files?
>
> 3000 VMs all with 32 NICs seems unrealistic to me.
I simply want to show you that this can overload all(!) servers inside a
cluster by just writing rrd files. I am not keen to spend any resources on that task.
> Another way could be to just allow to…
> How have you calculated your 3000 Files?
>
> 3000 VMs all with 32 NICs seems unrealistic to me.
BTW, we have several users with >500 VMs, so I guess we can easily reach that
limit.
On 17.04.2013 14:09, Dietmar Maurer wrote:
>> How have you calculated your 3000 Files?
>>
>> 3000 VMs all with 32 NICs seems unrealistic to me.
>
> I simply want to show you that this can overload all(!) servers inside a
> cluster by just writing rrd files. I am not keen to spend any resources…
> Nowhere ;-) how about just returning the counter values for the correct tap
> device through the API?
>
> So it is basically:
> 1.) a wrapper from netX to the correct tap
> 2.) query tap counter input/output values
> 3.) allow querying this through the API
>
> So it is at least possible to implement traffic…
On 17.04.2013 14:11, Dietmar Maurer wrote:
>> How have you calculated your 3000 Files?
>>
>> 3000 VMs all with 32 NICs seems unrealistic to me.
>
> BTW, we have several users with >500 VMs, so I guess we can easily reach that
> limit.
Sure, but those won't have 32 NICs. So if we go the way of one file per NIC,
we might have 3800 files for 3000 VMs.
On 17.04.2013 14:17, Dietmar Maurer wrote:
>> Nowhere ;-) how about just returning the counter values for the correct tap
>> device through the API?
>>
>> So it is basically:
>> 1.) a wrapper from netX to the correct tap
>> 2.) query tap counter input/output values
>> 3.) allow querying this through the API…
I'm sorry for dropping in, but isn't Proxmox already counting traffic in
its internal RRD? Couldn't the "rrddata" API call be used to retrieve the
data in the RRD externally and process it to count the total bandwidth used
in 1 month?
Maybe I've misunderstood what the issue is here...
Best regards,
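The accounting Adrian suggests could be done client-side by integrating the per-sample rates over the billing period. A minimal sketch, under the assumption that the rrddata call returns a list of samples with average `netin`/`netout` rates in bytes per second at a fixed step (field names here are illustrative, not confirmed by the thread):

```python
def total_traffic_bytes(samples, step):
    """Integrate per-sample netin/netout rates (bytes/s) over a period.

    `samples` is a list of dicts such as an rrddata-style API might return,
    e.g. {"time": 1366200000, "netin": 1000.0, "netout": 500.0};
    `step` is the RRD step in seconds. Missing values are skipped.
    """
    total_in = total_out = 0.0
    for s in samples:
        if s.get("netin") is not None:
            total_in += s["netin"] * step
        if s.get("netout") is not None:
            total_out += s["netout"] * step
    return total_in, total_out

# Example: two one-minute samples at 1000 B/s in and 500 B/s out
samples = [
    {"time": 0, "netin": 1000.0, "netout": 500.0},
    {"time": 60, "netin": 1000.0, "netout": 500.0},
]
print(total_traffic_bytes(samples, step=60))  # (120000.0, 60000.0)
```

The caveat raised later in the thread applies: RRD stores averaged rates, so the integrated total is an approximation, not an exact byte count.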
Hi,
On 17.04.2013 14:42, Adrian Costin wrote:
> I'm sorry for dropping in, but isn't Proxmox already counting traffic in
> its internal RRD? Couldn't the "rrddata" API call be used to retrieve
> the data in the RRD externally and process it to count the total
> bandwidth used in 1 month?
>
> Maybe I've misunderstood what the issue is here...
> >> 3000 VMs all with 32 NICs seems unrealistic to me.
> >
> > BTW, we have several users with >500 VMs, so I guess we can easily reach
> > that limit.
>
> Sure, but those won't have 32 NICs. So if we go the way of one file per NIC,
> we might have 3800 files for 3000 VMs (800 VMs with 2 NICs, so they h…
On 17.04.2013 16:39, Dietmar Maurer wrote:
>>> 3000 VMs all with 32 NICs seems unrealistic to me.
>>
>> BTW, we have several users with >500 VMs, so I guess we can easily reach
>> that limit.
>
> Sure, but those won't have 32 NICs. So if we go the way of one file per NIC,
> we might have 3800 files for 3000 VMs…
Is this feature planned?
And if not, would patches be accepted to add this feature?
Use cases:
- Virtual Terminal Services (required for RDS Remote Sound)
- Virtual Desktop Infrastructure
Based on the guest OS the default sound card could be selected, e.g.
ac97 for 32-bit WinXP, hda…
Hi Dietmar,
We have been running this successfully for a few weeks now on the
default PVE kernel with no issues.
I searched the list archives, and the issues I could see looked like
they were related to the MTU on the parent physical interface being too
small.
Linux does not clearly differentiate between L2 (interface) and L3 (IP) MTU…
> Is this feature planned?
no, not really.
> And if not, would patches be accepted to add this feature?
I do not think that is needed.
> Use cases:
>
> - Virtual Terminal Services (required for RDS Remote Sound)
> - Virtual Desktop Infrastructure
All known VDI solutions provide some kind of…
> We have been running this successfully for a few weeks now on the default PVE
> kernel with no issues.
>
> I searched the list archives, and the issues I could see looked like they were
> related to the MTU on the parent physical interface being too small.
>
> Linux does not clearly differentiate…
> Linux does not clearly differentiate between L2 (interface) and L3 (IP) MTU,
> and also has a hidden 4-byte allowance on the interface MTU, making it
> confusing. When running QinQ the parent physical interface needs an MTU of at
> least 1504 bytes (allows a 1508-byte frame), any sub-VLANs and…
> Then let's go this way. It's much simpler than adding RRD.
>
> So the question is: should this be a completely new call?
Yes, I think this should be a new call:
GET /nodes/{node}/netstat
[
  {vmid => 100, dev => net0, in => XXX, out => YYY},
  {vmid => 100, dev => net1, in => XXX, out => YYY},
  ...
]
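Given a response shaped like the array sketched above, a client could collapse the per-NIC entries into per-VM totals. A hypothetical client-side sketch (the endpoint and field names follow the example, but are not a finalized API):

```python
def totals_per_vm(netstat):
    """Collapse per-NIC netstat entries into per-VM in/out byte totals."""
    totals = {}
    for entry in netstat:
        t = totals.setdefault(entry["vmid"], {"in": 0, "out": 0})
        t["in"] += entry["in"]
        t["out"] += entry["out"]
    return totals

# Example response with two NICs on VM 100
response = [
    {"vmid": 100, "dev": "net0", "in": 1500, "out": 700},
    {"vmid": 100, "dev": "net1", "in": 500, "out": 300},
]
print(totals_per_vm(response))  # {100: {'in': 2000, 'out': 1000}}
```

An external billing tool would poll this periodically and compute deltas itself, which is exactly the division of labor proposed here: the API only exposes raw counters.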
RDP still requires that the Terminal Server has a sound card.
It only configures a virtual sound device for the RDS client if one
exists on the server.
On 4/18/2013 3:49 PM, Dietmar Maurer wrote:
> > Is this feature planned?
> no, not really.
> > And if not, would patches be accepted to add this feature?
Hi Dietmar,
It depends on what size frame you are trying to pass. Anything over
1504 bytes is considered "jumbo". An MTU of 9000 would allow you to
pass a 9004-byte frame. If you were doing "jumbo" frames to one of your
VMs and wanted double tagging, you would set your physical interface
with an MTU…
> It depends on what size frame you are trying to pass. Anything over
> 1504 bytes is considered "jumbo". An MTU of 9000 would allow you to pass a
> 9004-byte frame. If you were doing "jumbo" frames to one of your VMs and
> wanted double tagging, you would set your physical interface with an MTU…
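The arithmetic in the thread (an MTU of 1504 on the parent interface for double-tagged standard frames, 9004 for double-tagged jumbo frames) can be written down as a small helper. This encodes the thread's claim about the hidden 4-byte allowance on Linux interface MTUs; it is an illustration of that claim, not an authoritative rule, and the function name is made up:

```python
def parent_mtu(inner_mtu, vlan_tags):
    """Parent-interface MTU needed to carry inner_mtu through vlan_tags
    stacked 802.1Q tags, per the thread's claim that Linux already allows
    4 extra bytes (one tag) on top of the configured interface MTU."""
    extra_tags = max(vlan_tags - 1, 0)
    return inner_mtu + 4 * extra_tags

print(parent_mtu(1500, 2))  # 1504: QinQ with standard 1500-byte frames
print(parent_mtu(9000, 2))  # 9004: QinQ with jumbo frames
```

This matches both numbers quoted above, and also Alexandre's later point that switch hardware (e.g. Cisco at 9216) usually leaves plenty of headroom anyway.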
I'm not sure about spice, I think we need a sound card.
I'll do tests this week.
----- Original Message -----
From: "Dietmar Maurer"
To: "Andrew Thrift", pve-devel@pve.proxmox.com
Sent: Thursday, 18 April 2013 05:49:33
Subject: Re: [pve-devel] Add "sound card" to VM's from WebUI
> Is this feature planned?
Signed-off-by: Stefan Priebe
---
API2/Nodes.pm | 40
1 file changed, 40 insertions(+)
diff --git a/API2/Nodes.pm b/API2/Nodes.pm
index 0dac6af..8aebae0 100644
--- a/API2/Nodes.pm
+++ b/API2/Nodes.pm
@@ -123,6 +123,7 @@ __PACKAGE__->register_method ({
On 18.04.2013 06:32, Dietmar Maurer wrote:
>> Then let's go this way. It's much simpler than adding RRD.
>>
>> So the question is: should this be a completely new call?
> Yes, I think this should be a new call:
> GET /nodes/{node}/netstat
> [
>   {vmid => 100, dev => net0, in => XXX, out => YYY},
>   {vmid => 100, dev => net1, in => XXX, out => YYY},
>   ...
> ]
Thanks Dietmar, I'll test it this week.
BTW, I finally got TLS working with spice, I'll submit patches this week.
Do you think it's possible to use this new HTTP server as a CONNECT proxy?
----- Original Message -----
From: "Dietmar Maurer"
To: pve-devel@pve.proxmox.com
Sent: Wednesday, 17 Apr…
> Patch sent. I used a different output format which has the vmid as a key, so
> it is easier to access the right information as nobody needs to loop through
> the array.
But that format does not work with an ExtJS grid widget (in case someone wants
to display it on the GUI)?
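Both formats carry the same data, so the keyed form could also be flattened into the grid-friendly array on the client. A sketch of that conversion (structure assumed from the examples in this thread, not from actual PVE code):

```python
def keyed_to_array(keyed):
    """Convert {vmid: {dev: {'in': .., 'out': ..}}} into a flat list of
    row dicts, the shape a grid widget such as ExtJS expects."""
    rows = []
    for vmid in sorted(keyed):
        for dev in sorted(keyed[vmid]):
            row = {"vmid": vmid, "dev": dev}
            row.update(keyed[vmid][dev])
            rows.append(row)
    return rows

keyed = {100: {"net0": {"in": 10, "out": 20}}}
print(keyed_to_array(keyed))
# [{'vmid': 100, 'dev': 'net0', 'in': 10, 'out': 20}]
```

The trade-off debated here is real, though: the array needs a loop (or an index) for direct lookup by vmid, while the hash needs this conversion for display, so the choice depends on the primary consumer.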
On 18.04.2013 08:35, Dietmar Maurer wrote:
>> Patch sent. I used a different output format which has the vmid as a key, so
>> it is easier to access the right information as nobody needs to loop through
>> the array.
> But that format does not work with an ExtJS grid widget (in case someone
> wants to display it on the GUI)?
> BTW, I finally got TLS working with spice, I'll submit patches this week.
> Do you think it's possible to use this new HTTP server as a CONNECT proxy?
Honestly, I never implemented the HTTP CONNECT method. But we now have full
control over the web server, so we can do anything we want ;-)
I guess it…
> Oh ok, sorry, didn't know that. I can change it, but I don't see any useful
> way to display these values. They're just counters rising up to 64 bits and
> then resetting over the lifetime of a VM. I think without useful deltas and
> history there is no use. But I can still change it if you want.
A jumbo frame on Cisco is 9216 bytes and the normal size is 1518.
If you set 1500 or 9000 in your VM and you do VLAN tagging, it's not a problem
as we have some extra bytes.
----- Original Message -----
From: "Dietmar Maurer"
To: "Andrew Thrift", pve-devel@pve.proxmox.com
Sent: Thursday, 18 April 2013 07:1…
> + $res->{$vmid}{'net'.$netid}{out} = $netdev->{$dev}->{receive};
> + $res->{$vmid}{'net'.$netid}{in} = $netdev->{$dev}->{transmit};
I also prefer "net$netid" instead of 'net'.$netid
On 18.04.2013 08:41, Dietmar Maurer wrote:
>> + $res->{$vmid}{'net'.$netid}{out} = $netdev->{$dev}->{receive};
>> + $res->{$vmid}{'net'.$netid}{in} = $netdev->{$dev}->{transmit};
> I also prefer "net$netid" instead of 'net'.$netid
Is this a bug?
The code in vmstatus seems to be wrong for netout and netin…
>> BTW, does spice support websockets, or only HTTP CONNECT?
Natively, the spice server supports only HTTP CONNECT.
We need to use websockets (for spice-html5), using websockify (Python or C
implementation):
https://github.com/kanaka/websockify
I have redone tests with spice-html5 last week…
Changes since V1:
- new return format (use an arrayref instead of a hash to be JS compatible)
- swap in / out / transmit / receive
Signed-off-by: Stefan Priebe
---
API2/Nodes.pm | 48
1 file changed, 48 insertions(+)
diff --git a/API2/Nodes.pm b/API2/Nodes.pm…
> Is this a bug?
>
> The code in vmstatus seems to be wrong for netout and netin:
>
> $d->{netout} += $netdev->{$dev}->{receive};
> $d->{netin} += $netdev->{$dev}->{transmit};
>
> netin is receive and netout is transmit... isn't it?
Should be easy to test by downloading a file…
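One possible explanation, which the download test above would confirm either way: the counters come from the host's view of the tap device, and on a tap interface bytes the host *receives* are bytes the VM *transmitted* (and vice versa), so the apparent swap may be an intentional change of perspective rather than a bug. A sketch of reading those counters from the standard `/proc/net/dev` layout (the device name is a hypothetical tap interface):

```python
def parse_proc_net_dev(text, dev):
    """Return (rx_bytes, tx_bytes) for `dev` from /proc/net/dev content."""
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip the two header lines
        name, fields = line.split(":", 1)
        if name.strip() == dev:
            cols = fields.split()
            # column 0 is receive bytes, column 8 is transmit bytes
            return int(cols[0]), int(cols[8])
    raise KeyError(dev)

sample = """Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
 tap100i0: 1111 9 0 0 0 0 0 0 2222 8 0 0 0 0 0 0
"""
rx, tx = parse_proc_net_dev(sample, "tap100i0")
print(rx, tx)  # 1111 2222
```

From the guest's point of view, `tx` here (host transmit on the tap) is the VM's inbound traffic, which is consistent with the assignments being questioned above.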