Hi there,
is there a way to enable the 'Advanced' view by default? Personally, I see no
benefit in hiding config options; it only means additional clicks for me.
If not, perhaps there could be a flag in datacenter.cfg?
Best,
Martin Waschbüsch
> On 05.01.2018 at 21:41, Fabian Grünbichler wrote:
>
> On Fri, Jan 05, 2018 at 06:50:33PM +0100, Waschbüsch IT-Services GmbH wrote:
>>
>> AFAIK Meltdown only affects Intel (& ARM), not AMD - see 'Forcing
>> direct cache loads' here:
> On 05.01.2018 at 11:25, Fabian Grünbichler wrote:
>
> On Thu, Jan 04, 2018 at 09:08:32PM +0100, Stefan Priebe - Profihost AG wrote:
>>
>> Here we go - attached is the relevant patch - extracted from the
>> opensuse src.rpm.
>
> this will most likely not be needed for some time, since a p
>
> yes, sounds interesting. please contact me directly as soon as you can
> provide access for testing,
>
> Martin
>
> On 05.10.2017 09:56, Waschbüsch IT-Services GmbH wrote:
>> Hi all,
>> Since I have read several times, both on this list and on the forum, that AMD-based
Hi all,
Since I have read several times, both on this list and on the forum, that
AMD-based servers are rarely used for development / testing, I'd like to offer the
following:
I just ordered a dual socket EPYC system (Supermicro AS-1123US-TR4 with dual
EPYC 7351) and if any of the core developers wa
Reflect changed output for 'ceph pg dump -f json'.
Signed-off-by: Martin Waschbüsch
---
Without this patch, all osds will show a latency of 0.
Sadly, that is not true. :-)
PVE/API2/Ceph.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/API2/Ceph.pm b/PVE/API2/Ceph.pm
inde
Solved!
> On 14.08.2017 at 22:34, Waschbüsch IT-Services GmbH wrote:
>
> Be that as it may, the problem I have which led me to have a look at it is:
> in the UI *all* my osds show a latency of 0.
> Using the shell, they don't.
I was on the right track after all - i
> On 14.08.2017 at 20:39, Waschbüsch IT-Services GmbH wrote:
>
> Hi all,
>
> In API2/Ceph.pm
>
> OSD latency information is read from the output of the $get_osd_usage sub,
> which runs the monitor command 'pg dump'.
> I don't know if this used to
Hi all,
In API2/Ceph.pm
OSD latency information is read from the output of the $get_osd_usage sub,
which runs the monitor command 'pg dump'.
I don't know if this output used to contain latency information for each OSD, but
it does not in the current (luminous) tree.
I guess the information needs to b
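To illustrate the layout change being worked around here, this is a small
sketch of how the per-OSD latency could be pulled out of the JSON dump. The
'pg_map' wrapper and the 'perf_stat' field names are assumptions based on a
luminous test box, not taken from the PVE code:

```python
import json

def osd_perf_from_pg_dump(dump_json):
    """Extract per-OSD latency from 'ceph pg dump -f json' output.

    Pre-luminous, 'osd_stats' sat at the top level; in luminous the
    whole dump is wrapped in a 'pg_map' object. Field names here are
    what a luminous cluster appears to emit; treat them as an
    assumption if yours differs.
    """
    data = json.loads(dump_json)
    stats = data.get('pg_map', data)  # fall back to the old layout
    result = {}
    for osd in stats.get('osd_stats', []):
        perf = osd.get('perf_stat', {})
        result[osd['osd']] = (perf.get('commit_latency_ms', 0),
                              perf.get('apply_latency_ms', 0))
    return result

# luminous-style sample (abridged)
sample = ('{"pg_map": {"osd_stats": [{"osd": 0, "perf_stat": '
          '{"commit_latency_ms": 2, "apply_latency_ms": 1}}]}}')
print(osd_perf_from_pg_dump(sample))  # {0: (2, 1)}
```

Reading via `data.get('pg_map', data)` keeps the old top-level layout working,
which matches the symptom above: code that only looks at the top level finds
nothing and reports 0.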
Hi all,
In case this is helpful to anyone else:
I just installed PVE 4.4 on a box with a megaraid controller (9261-8i).
For some reason, the device's interrupts were not distributed among CPU cores.
After digging a little, I found that the version of the megaraid driver that
comes with the curr
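The symptom is easy to spot in /proc/interrupts. A small sketch of the check
(the 'megasas' label is what the megaraid_sas driver registers; adjust if your
kernel names it differently):

```python
def irq_spread(interrupts_text, device='megasas'):
    """Tally a device's interrupt counts per CPU from /proc/interrupts.

    The header row lists the CPUs; each following row is
    'IRQ: count-per-cpu... type name'.
    """
    lines = interrupts_text.splitlines()
    cpus = lines[0].split()           # header: CPU0 CPU1 ...
    totals = [0] * len(cpus)
    for line in lines[1:]:
        if device not in line:
            continue
        fields = line.split()
        # fields[0] is 'NN:', then one count per CPU
        for i, count in enumerate(fields[1:1 + len(cpus)]):
            totals[i] += int(count)
    return dict(zip(cpus, totals))

sample = (
    "           CPU0       CPU1\n"
    " 45:     123456          0   PCI-MSI  megasas\n"
)
print(irq_spread(sample))  # everything landing on CPU0
```

An output like the sample's, with all counts on one core, is exactly the
undistributed case described above.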
> On 14.01.2017 at 10:29, Dmitry Petuhov wrote:
>
> Yes, you can. Just install the pve-headers package corresponding to your running
> kernel. Note that you will have to manually install the headers on every kernel update.
> Or you can just wait for next PVE kernel release. It usually contains latest
> RA
Hi there,
Can I use the dkms infrastructure with proxmox kernels?
I ask because there is a newer driver for current Microsemi / Adaptec RAID
adapters:
http://download.adaptec.com/raid/aac/linux/aacraid-linux-src-1.2.1-52011.tgz
(or for dkms)
http://download.adaptec.com/raid/aac/linux/aacraid-d
Hi Dietmar,
> On 06.01.2017 at 12:39, Dietmar Maurer wrote:
>
>> The online help explains the ballooning feature quite nicely, but there is a
>> mismatch:
>> Under the 'Use fixed size memory' option, I can set the memory size and there
>> is a checkbox 'Ballooning'.
>> I find this confusing. If
Hi all,
I just stumbled across the following:
When configuring memory for a VM, you can choose between the options 'Use fixed
size memory' and 'Automatically allocate memory within this range'.
The online help explains the ballooning feature quite nicely, but there is a
mismatch:
Under the 'Use
> On 02.12.2016 at 20:04, Michael Rasmussen wrote:
>
> On Fri, 2 Dec 2016 19:54:20 +0100
> Waschbüsch IT-Services GmbH wrote:
>
>>
>> Any ideas how that could be avoided? Like, at all. :-/
>>
> Could you try, when logged in, to run: dpkg --configure -a
T
Hi all,
I just upgraded a current node running PVE 4.3 to the latest updates available
on the enterprise repo.
Things work ok until apt gets to:
Preparing to unpack .../proxmox-ve_4.3-72_all.deb ...
Unpacking proxmox-ve (4.3-72) over (4.3-71) ...
Preparing to unpack .../openvswitch-switch_2.6.0
> On 05.11.2016 at 15:43, Alexandre Derumier wrote:
>
> This increases iops and decreases latencies by around 30%
Alexandre,
apart from debug_ms = 0/0, what are the currently suggested defaults for these
performance tweaks?
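For context, the kind of settings meant here is the family of debug switches
in ceph.conf. The following [global] fragment is a hypothetical illustration
using standard Ceph debug subsystem names, not the list Alexandre actually
benchmarked:

```ini
[global]
# Disable debug logging in the hot path; values are 'log/memory' levels.
debug_ms = 0/0
debug_osd = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_auth = 0/0
```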
___
pve-devel mailing list
> On 01.08.2016 at 09:44, Alexandre DERUMIER wrote:
>
>>> Answering myself, 'close' does not issue flush/fsync.
>
> close sends a flush
>
> It was introduced by this commit
>
> [Qemu-devel] [PATCH v3] qemu-img: let 'qemu-img convert' flush data
> https://lists.nongnu.org/archive/html/qemu-d
> On 01.08.2016 at 09:26, Dominik Csapak wrote:
>
> On 08/01/2016 08:51 AM, Alexandre Derumier wrote:
>> Signed-off-by: Alexandre Derumier
>> ---
>> PVE/QemuServer.pm | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
>> index 7778
> On 19.07.2016 at 13:00, Emmanuel Kasper wrote:
>
> Hi
>
> This patch series adds capabilities to store Qemu Wizard Defaults, and
> use these capabilities
> to set virtio by default for Linux machines.
Sounds like a really good idea. But why not go a step further and allow users to
create presets
> On 07.07.2016 at 17:26, Andreas Steinel wrote:
>
> Hi,
>
> I currently only have one big 3.4 install (>150 VMs), on which I compared
> the generated MACs and found out that they are completely random. Are there
> plans or probably is there already an implementation to generate only from
> a
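Generating MACs from a fixed prefix instead of fully at random could look like
the sketch below. The '02:00:00' prefix is only a placeholder: it sets the
locally-administered bit, so such addresses cannot collide with vendor-assigned
OUI space.

```python
import random

def random_mac(prefix='02:00:00'):
    """Random MAC with a fixed three-byte prefix.

    '02' in the first octet sets the locally-administered bit,
    keeping the address out of vendor (OUI) space.
    """
    tail = [random.randint(0x00, 0xFF) for _ in range(3)]
    return prefix + ''.join(':%02X' % b for b in tail)

print(random_mac())  # e.g. 02:00:00:3A:7F:C1
```

A fixed prefix also makes it trivial to recognize (and filter) a cluster's
guest NICs on the switch side.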
> On 29.03.2015 at 20:18, Daniel Hunsaker wrote:
>
> There's Gentoo, which seemed pretty solid and stable while I was using it,
> but I haven't looked at their kernels lately to see how they are faring...
But being a rolling-release OS, would that be at all suitable?
Martin
> On 29.03.2015 at 20:07, Dietmar Maurer wrote:
>
>> I guess that is not really the problem, but Docker is intended to run
>> applications not full systems like the way it works for OpenVZ now.
>
> It is even more limited. The idea is to run single binaries inside a docker
> container.
Agreed
Hi all,
Martin has kindly redirected me to the list as the appropriate place to ask /
discuss this:
I noticed that, even though a kvm guest is set to CPU type 'host', the live
migration feature does not check for compatibility with the destination host.
E.g. moving from an Opteron 6366 to an Int
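A compatibility pre-check could be as simple as diffing the flag sets from
/proc/cpuinfo on both nodes. This is a sketch of the idea, not what PVE does:

```python
def cpu_flags(cpuinfo_text):
    """Flag set from a /proc/cpuinfo dump (first 'flags' line)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith('flags'):
            return set(line.split(':', 1)[1].split())
    return set()

def migration_safe(src_cpuinfo, dst_cpuinfo):
    """With CPU type 'host', the guest may rely on any source flag,
    so every source flag must also exist on the destination.
    Returns the set of missing flags; empty means safe."""
    return cpu_flags(src_cpuinfo) - cpu_flags(dst_cpuinfo)

src = "flags\t\t: fpu sse sse2 avx\n"
dst = "flags\t\t: fpu sse sse2\n"
print(migration_safe(src, dst))  # {'avx'} -> not safe to migrate
```

The asymmetry matters: migrating the other way (fewer flags to more) would be
fine, since the guest never saw the extra features.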