Answering myself, I found we can set this with
CPUQuota=XX%
So we can directly map this to the 'cpulimit' parameter?
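For example (untested sketch, reusing the qemu slice and unit naming from the systemd-run examples elsewhere in this thread), a cpulimit of 2 cores could become:
systemd-run --scope --slice=qemu --unit=100 -p CPUQuota=200% /usr/bin/kvm -id 100 ...
CPUQuota is backed by the same cpu.cfs_quota_us/cpu.cfs_period_us mechanism, so unlike CPUShares it acts as a hard cap rather than a relative weight.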
On 05/29/2015 08:33 AM, Dietmar Maurer wrote:
Are there other things we can set this way?
For example can we limit CPU with cfs_period_us and cfs_quota_us?
I guess that would be interesting for hosters.
Are there other things we can set this way?
For example can we limit CPU with cfs_period_us and cfs_quota_us?
I guess that would be interesting for hosters.
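As an illustration of those knobs (hypothetical paths, assuming the cpu controller is mounted at /sys/fs/cgroup/cpu and a per-VM group already exists), limiting a VM to two full cores would be roughly:
echo 100000 > /sys/fs/cgroup/cpu/qemu/100/cpu.cfs_period_us
echo 200000 > /sys/fs/cgroup/cpu/qemu/100/cpu.cfs_quota_us   # quota/period = 2 cores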
On 05/28/2015 03:59 PM, Alexandre Derumier wrote:
I have tested it, it's working fine.
(I don't know how HA will be managed but it should work too)
applied, thanks!
On 05/28/2015 03:59 PM, Alexandre Derumier wrote:
I have tested it, it's working fine.
(I don't know how HA will be managed but it should work too)
> So it's like a temp service management from systemd.
>
> Also, as an advantage, is it also possible to catch a qemu crash with journalctl?
What do you want to do? Restart if crashed?
A little bit off-topic, but there is hope:
This week, I almost saturated a 1 Gbit network link between two brand new
Dell Servers with 3.2 GHz Xeon E5-2667v3 CPUs. I got 105 MB/sec using
standard SSH/SCP. So we finally have single-thread-performance that is fast
enough for encryption on gigabit. P
On May 28, 2015 8:12 PM, "Stanislav German-Evtushenko" wrote:
>
> > Your test code was a really interesting example, and led to
> > really unexpected results (at least for me). But as Eneko already
> > mentioned, nobody would write such code. It is simply not thread safe,
> > and I think qemu does it correctly.
> However, now I think it is going to be even easier to reproduce in a
> VM. My guess is that if you install a VM with a virtual drive on DRBD or MD RAID
> and cache=none, create an ext3 or ext4 partition inside this VM and run
> my code inside, then you will get inconsistency. Maybe you need to run
> it not once
> Your test code was a really interesting example, and led to
> really unexpected results (at least for me). But as Eneko already
> mentioned, nobody would write such code. It is simply not thread safe,
> and I think qemu does it correctly.
I have written that code only because nobody wanted to t
>>Interesting. I need to take a look at the code in 'systemd-run' to see how
>>complex that is. Are there any disadvantages?
I don't see any disadvantages.
from systemd-run(1):
"If a command is run as transient scope unit, it will be started directly by
systemd-run and thus inherit the execution environment of the caller."
> On Thu, May 28, 2015 at 7:19 PM, Dietmar Maurer wrote:
> >> Each kvm process have multiple threads and the number of them is
> >> changing in time.
> >
> > AFAIK all disk IO is done by a single, dedicated thread.
>
> I tried to read qemu-kvm code but it is difficult for me as I have
> never written C code.
On Thu, May 28, 2015 at 7:19 PM, Dietmar Maurer wrote:
>> Each kvm process have multiple threads and the number of them is
>> changing in time.
>
> AFAIK all disk IO is done by a single, dedicated thread.
I tried to read qemu-kvm code but it is difficult for me as I have
never written C code. Wha
> Each kvm process have multiple threads and the number of them is
> changing in time.
AFAIK all disk IO is done by a single, dedicated thread.
On Thu, May 28, 2015 at 6:47 PM, Dietmar Maurer wrote:
>> > But there is currently only one io-thread in qemu, so this
>> > cannot happen with qemu if above is the only problem?
>>
>> But there are other threads, right? Buffer can be changed by another
>> thread where guest OS itself is running.
>
> On Thu, 28 May 2015 17:47:54 +0200 (CEST), Dietmar Maurer wrote
> (Re: [pve-devel] Default cache mode for VM hard drives):
applied. But we should implement/use the new functionality asap to see
if everything works as expected.
On 05/28/2015 04:54 PM, Wolfgang Bumiller wrote:
Okay things are running again.
I noticed the first git --amend didn't make it into the previous patches either
so here's the updated and working patch.
> If you provide a buffer to the kernel, that you change while it is
> working with it, I don't know why you expect a reliable/predictable
> result? Specially (but not only) if you tell it not to make a copy!!
>
> Note that without O_DIRECT you won't get a "correct" result either; the disk
> may end up not containing the data that was in the buffer when write() was called.
On Thu, May 28, 2015 at 6:35 PM, Dietmar Maurer wrote:
>> This is not okay and this is what is actually happening:
>> 0. set_buffer
>> 1. start_writing_with_o_direct_from_buffer
>> 2. change_buffer (we can do this only in another thread)
>> 3. finish_writing_with_o_direct_from_buffer
>> 4. change_buffer
> > But there is currently only one io-thread in qemu, so this
> > cannot happen with qemu if above is the only problem?
>
> But there are other threads, right? Buffer can be changed by another
> thread where guest OS itself is running.
No, AFAIK there is only one thread doing all IO (currently).
> This is not okay and this is what is actually happening:
> 0. set_buffer
> 1. start_writing_with_o_direct_from_buffer
> 2. change_buffer (we can do this only in another thread)
> 3. finish_writing_with_o_direct_from_buffer
> 4. change_buffer
> 5. start_writing_with_o_direct_from_buffer
> 6. change_buffer
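For reference, a minimal standalone sketch of that sequence (not the original drbd_oos_test.c, only an illustration; build with gcc -O2 -pthread, and point it at a file on the mirrored device):

/* One thread keeps rewriting a buffer while another thread writes that
 * same buffer with O_DIRECT. On a mirrored device (DRBD, MD RAID1) the
 * two legs may then end up with different data for the same sector. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE 4096             /* O_DIRECT wants aligned, block-sized I/O */

static char *buf;
static volatile int done;

static void *modifier(void *arg)  /* "change_buffer" (steps 2/4/6) */
{
    unsigned char v = 0;
    while (!done)
        memset(buf, v++, BUF_SIZE);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <testfile>\n", argv[0]);
        return 1;
    }
    if (posix_memalign((void **)&buf, BUF_SIZE, BUF_SIZE) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    int fd = open(argv[1], O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    pthread_t tid;
    pthread_create(&tid, NULL, modifier, NULL);
    /* "start/finish_writing_with_o_direct_from_buffer" (steps 1/3/5): the
     * kernel DMAs straight out of buf, so whatever the modifier thread
     * does meanwhile can reach the disk(s). */
    for (int i = 0; i < 100000; i++) {
        if (pwrite(fd, buf, BUF_SIZE, 0) != BUF_SIZE) {
            perror("pwrite");
            break;
        }
    }
    done = 1;
    pthread_join(tid, NULL);
    fsync(fd);
    close(fd);
    free(buf);
    return 0;
}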
> systemd-run --scope --slice=qemu --unit 100 -p CPUShares=499 /usr/bin/kvm -id 100 ...
>
> Like this, the cgroup is automatically removed when the process stops.
Interesting. I need to take a look at the code in 'systemd-run' to see how
complex that is. Are there any disadvantages?
> Another way could be to launch qemu with systemd-run, I think we can specify
> the cgroup (slice in systemd) directly.
What would be the advantage? For me that just makes things more complex?
Okay things are running again.
I noticed the first git --amend didn't make it into the previous patches either
so here's the updated and working patch.
Wolfgang Bumiller (1):
defer some daemon setup routines
src/PVE/CLIHandler.pm | 4 +-
src/PVE/Daemon.pm | 116 +-
A first step towards untangling some of the intermingled data and
functionality setup tasks for the daemons:
Daemon::new now only validates and untaints arguments, but doesn't
perform any actions such as setuid/setgid until the new Daemon::setup
method which is now executed from Daemon::start righ
This first patch should still work with the existing scripts and allow us to
gradually change the startup of our daemon scripts so that they do not try to
perform tasks that need root before handling the autogenerated build-time
utility commands like 'printmanpod'.
We can then move mkdirs, chown
I have tested it, it's working fine.
(I don't know how HA will be managed but it should work too)
On 28/05/15 15:32, Stanislav German-Evtushenko wrote:
What does it mean that operations with buffer are not ensured to be
thread-safe in qemu?
O_DIRECT doesn't guarantee that buffer reading is finished when write
returns if I read "man -s 2 open" correctly.
The statement does not seem to be correct.
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 15 +++
1 file changed, 15 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 3cd4475..fe40140 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -67,6 +67,17 @@
PVE::JSONSchema::register_standard_o
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 9 +
1 file changed, 9 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 22ff875..3cd4475 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2618,6 +2618,15 @@ sub config_to_command {
my $hotplug_
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 87 ---
1 file changed, 87 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index fe40140..94b9176 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -78,78 +78,6 @
> On May 28, 2015 at 3:17 PM Eneko Lacunza wrote:
> On 28/05/15 15:01, Stanislav German-Evtushenko wrote:
> >> Note that without O_DIRECT you won't get a "correct" result either; the disk
> >> may end up not containing the data that was in the buffer when write() was called.
> >> Softmirror data will be identically uncertain :)
On Thu, May 28, 2015 at 4:17 PM, Eneko Lacunza wrote:
> On 28/05/15 15:01, Stanislav German-Evtushenko wrote:
>>>
>>> Note that without O_DIRECT you won't get a "correct" result either; the disk
>>> may end up not containing the data that was in the buffer when write() was called.
>>> Softmirror data will be identically uncertain :)
Better with a scope:
systemd-run --scope --slice=qemu --unit 100 -p CPUShares=499 /usr/bin/kvm -id 100 ...
Like this, the cgroup is automatically removed when the process stops.
----- Original Mail -----
From: "aderumier"
To: "dietmar"
Cc: "pve-devel"
Sent: Thursday, 28 May 2015 15:21:10
Subject: Re: [pve-devel] qemu-
>>Another way could be to launch qemu with systemd-run, I think we can specify
>>the cgroup (slice in systemd) directly.
For example:
systemd-run --remain-after-exit --slice=qemu --unit=100 -p CPUShares=499 /usr/bin/kvm -id 100
Seems better than hacking qemu?
----- Original Mail -----
From:
On 28/05/15 15:01, Stanislav German-Evtushenko wrote:
Note that without O_DIRECT you won't get a "correct" result either; the disk
may end up not containing the data that was in the buffer when write() was called.
Softmirror data will be identically uncertain :)
You are right. That is why I suppose there is a bug (operations with buffer
are not ensured to be thread-safe in qemu).
Another way could be to launch qemu with systemd-run, I think we can specify
the cgroup (slice in systemd) directly.
----- Original Mail -----
From: "aderumier"
To: "Stefan Priebe"
Cc: "pve-devel"
Sent: Thursday, 28 May 2015 13:36:25
Subject: Re: [pve-devel] qemu-server: cgroups && cpu.shares implement
Eneko,
> Note that without O_DIRECT you won't get a "correct" result either; the disk
> may end up not containing the data that was in the buffer when write() was called.
> Softmirror data will be identically uncertain :)
You are right. That is why I suppose there is a bug (operations with
buffer are not ensured to be thread-safe in qemu).
On 28/05/15 13:49, Dietmar Maurer wrote:
I'm not kernel/IO expert in any way, but I think this test program has a
race condition, so it is not helping us diagnose the problem.
We're writing to buffer x while it is in use by write syscall. This is
plainly wrong on userspace.
For this test, we do
> On May 28, 2015 at 2:07 PM Stanislav German-Evtushenko
> wrote:
> With O_DIRECT we have to trust our user space application because the
> kernel gets the data directly from the application's memory. One might
> think that the kernel could copy the buffer from user space before writing it
> to the block device, h
On 28/05/15 13:44, Stanislav German-Evtushenko wrote:
Eneko,
> I'm not kernel/IO expert in any way, but I think this test program has a race
condition, so it is not helping us diagnose the problem.
> We're writing to buffer x while it is in use by write syscall. This is
plainly wrong on usersp
Dietmar,
>> I'm not kernel/IO expert in any way, but I think this test program has a
>> race condition, so it is not helping us diagnose the problem.
>>
>> We're writing to buffer x while it is in use by write syscall. This is
>> plainly wrong on userspace.
>
> For this test, we do not care about
> > ceph uses O_DIRECT+O_DSYNC to write to the journal of the OSDs.
> Is this done inside the KVM process? If so, then KVM keeps a buffer for this
> O_DIRECT writing. Therefore, if multiple threads can access (and change)
> this buffer at the same time, then a similar issue can happen in theory.
It only happen
Alexandre,
> qemu uses librbd to access ceph directly, so the host doesn't have any
/dev/rbd.. or filesystem mount.
Ah, I understand, this is not a normal block device but userspace lib.
> ceph uses O_DIRECT+O_DSYNC to write to the journal of the OSDs.
Is this done inside the KVM process? If so, then KVM keeps a buffer for this
O_DIRECT writing. Therefore, if multiple threads can access (and change) this
buffer at the same time, then a similar issue can happen in theory.
> I'm not kernel/IO expert in any way, but I think this test program has a
> race condition, so it is not helping us diagnose the problem.
>
> We're writing to buffer x while it is in use by write syscall. This is
> plainly wrong on userspace.
For this test, we do not care about userspace seman
> On May 28, 2015 at 1:31 PM Eneko Lacunza wrote:
> I'm not kernel/IO expert in any way, but I think this test program has a
> race condition, so it is not helping us diagnose the problem.
>
> We're writing to buffer x while it is in use by write syscall. This is
> plainly wrong on userspace.
>> qemu rbd access is only userland, so host don't have any cache or buffer.
>>If RBD device does not use host cache then it is very likely that RBD
>>utilizes O_DIRECT. I am not sure if there are other ways to avoid host cache.
qemu uses librbd to access ceph directly, so the host doesn't have any /dev/rbd.. or filesystem mount.
Eneko,
> I'm not kernel/IO expert in any way, but I think this test program has a race
> condition, so it is not helping us diagnose the problem.
> We're writing to buffer x while it is in use by write syscall. This is
> plainly wrong on userspace.
Yes, and this is exactly what is happening ins
>>Not tested but what about this:
>>
>>fork()
>># in child
>>put current pid into cgroup
>>exec kvm
Yes, I think it should work, if we put the pid of the forked process in cgroups.
Other child threads should go automatically to the parent cgroup.
I have done tests with hotplug virtio-net with
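A minimal sketch of that fork()/cgroup/exec order (the cgroup path and VM id below are placeholders only; in practice qemu-server would create the group and build the real kvm command line):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: register our PID with the (pre-created) cpu cgroup
         * *before* exec, so every thread kvm spawns later is already
         * accounted to that group */
        FILE *f = fopen("/sys/fs/cgroup/cpu/qemu/100/tasks", "w");
        if (f) {
            fprintf(f, "%d\n", (int)getpid());
            fclose(f);
        }
        execl("/usr/bin/kvm", "kvm", "-id", "100", (char *)NULL);
        perror("execl");   /* only reached if exec failed */
        _exit(1);
    }
    waitpid(pid, NULL, 0);
    return 0;
}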
Hi,
I'm not a kernel/IO expert in any way, but I think this test program has a
race condition, so it is not helping us diagnose the problem.
We're writing to buffer x while it is in use by the write syscall. This is
plainly wrong in userspace.
Cheers
Eneko
On 28/05/15 11:27, Wolfgang Bumiller wrote:
Hi Stanislav,
On 28/05/15 13:10, Stanislav German-Evtushenko wrote:
Alexandre,
The important point is whether O_DIRECT is used with Ceph or not.
Don't you know?
> qemu rbd access is only userland, so host don't have any cache or
buffer.
If RBD device does not use host cache then it is very
Alexandre,
The important point is whether O_DIRECT is used with Ceph or not. Don't you
know?
> qemu rbd access is only userland, so host don't have any cache or buffer.
If the RBD device does not use host cache then it is very likely that RBD
utilizes O_DIRECT. I am not sure if there are other ways to avoid the host cache.
>>If you implement it inside qemu-server you have a race-condition, because
>>you do it too late and qemu already started the threads? Or maybe only parts
>>of necessary threads are already created?
Yes, maybe. I didn't see a race when I tested it.
New threads should go automatically to the parent cgroup.
>>BTW: can anybody test drbd_oos_test.c against Ceph? I guess we will have the
>>same result.
I think there is no problem with ceph; the qemu cache option only enables|disables
rbd_cache.
qemu rbd access is userland only, so the host doesn't have any cache or buffer.
When data is written to ceph, it's wri
Am 28.05.2015 um 12:51 schrieb Dietmar Maurer:
>> Here is the patch series for the implementation of cpuunits through cgroups
>
> If you implement it inside qemu-server you have a race-condition, because
> you do it too late and qemu already started the threads? Or maybe only parts
> of necessary threads
> I don't think it is wise to play with security-related software in
> the stack. If OpenBSD and Debian (or for the matter all the other
> distros) haven't applied those patches, I'm sure there is some
> reason, although maybe it being only "uncertainty".
Yes, that is true.
But I think that from a
Hi Eneko,
Writes in QEMU-KVM are not thread-safe. I don't know if this is "by
design" or just a bug, but providing this information here is necessary to
show that we should find a solution or workaround for Proxmox.
The general problem is that using Proxmox VE with default settings can make any
of Sof
> Here is the patch series for the implementation of cpuunits through cgroups
If you implement it inside qemu-server you have a race-condition, because
you do it too late and qemu already started the threads? Or maybe only parts
of necessary threads are already created?
So wouldn't it be easier to do it
On 28/05/15 12:38, dea wrote:
On Thu, 28 May 2015 12:02:21 +0200 (CEST), Dietmar Maurer wrote:
I've found this...
http://www.psc.edu/index.php/hpn-ssh
What do you all think?
This is great, but unfortunately the ssh people rejected those patches
(AFAIK). So the default ssh tools from Debian do not have those features.
On Thu, 28 May 2015 12:02:21 +0200 (CEST), Dietmar Maurer wrote:
> > I've found this...
> >
> > http://www.psc.edu/index.php/hpn-ssh
> >
> > What do you all think?
>
> This is great, but unfortunately the ssh people rejected those patches
> (AFAIK). So the default ssh tools from Debian do not have those features.
Hi,
Here is the patch series for the implementation of cpuunits through cgroups
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 60 +++
1 file changed, 60 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 22ff875..21fa84c 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -67,6 +67,58
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 87 ---
1 file changed, 87 deletions(-)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index a8177d7..bf0792f 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -119,78 +119,6
Signed-off-by: Alexandre Derumier
---
PVE/QemuServer.pm | 4
1 file changed, 4 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 21fa84c..a8177d7 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -3949,6 +3949,8 @@ sub vmconfig_hotplug_pending {
} el
> I've found this...
>
> http://www.psc.edu/index.php/hpn-ssh
>
> What do you all think?
This is great, but unfortunately the ssh people rejected those patches (AFAIK).
So the default ssh tools from Debian do not have those features.
---
www/manager5/dc/Config.js | 4
www/manager5/dc/Log.js| 4
www/manager5/dc/Tasks.js | 4
www/manager5/form/ViewSelector.js | 3 +++
www/manager5/panel/StatusPanel.js | 3 +++
www/manager5/tree/ResourceTree.js | 3 +++
6 files changed, 21 insertions(+)
---
www/manager/data/ResourceStore.js | 1 -
www/manager/form/NodeSelector.js | 1 -
www/manager/form/RealmComboBox.js | 1 -
www/manager/form/ViewSelector.js | 1 -
www/manager/node/TimeEdit.js | 2 +-
www/manager5/data/ResourceStore.js | 1 -
www/manager5/form/RealmComboBox.js | 1 -
---
www/manager5/dc/Config.js | 2 ++
1 file changed, 2 insertions(+)
diff --git a/www/manager5/dc/Config.js b/www/manager5/dc/Config.js
index 6c19805..aa9c0f8 100644
--- a/www/manager5/dc/Config.js
+++ b/www/manager5/dc/Config.js
@@ -100,6 +100,8 @@ Ext.define('PVE.dc.Config', {
This patch series allows loading the bottom StatusPanel, and adds some comment
headers to the classes already added in manager5/
---
PVE/ExtJSIndex5.pm| 3 +++
www/manager5/Workspace.js | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/PVE/ExtJSIndex5.pm b/PVE/ExtJSIndex5.pm
index 34952ac..0f1eb89 100644
--- a/PVE/ExtJSIndex5.pm
+++ b/PVE/ExtJSIndex5.pm
@@ -31,6 +31,7 @@ _EOD
+
Hi all !!!
Proxmox uses ssh to move data between nodes (ok, it is possible to disable
encryption but it is not safe).
I've found this...
http://www.psc.edu/index.php/hpn-ssh
What do you all think?
Luca
I was able to reproduce the problem on my local machine (kernel 3.10).
To be sure everything's correct I added some error checking to the code.
I'm attaching the changed source (and the bdiff source).
Transcript is below.
I also added an fsync() before close() due to this section in close(2)'s NOTES
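For reference, the pattern referred to above is roughly the following (just a sketch, not the attached source):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int flush_and_close(int fd)
{
    int ret = 0;
    if (fsync(fd) < 0) {    /* surface delayed write-back errors here */
        perror("fsync");
        ret = -1;
    }
    if (close(fd) < 0) {    /* close(2) alone doesn't guarantee data is on disk */
        perror("close");
        ret = -1;
    }
    return ret;
}

int main(void)
{
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, "x", 1) != 1)
        perror("write");
    return flush_and_close(fd) != 0;
}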
On 05/27/2015 12:15 PM, Dietmar Maurer wrote:
>> IMHO this way of bypassing os-prober is cleaner than adding a 'Conflict'
>> in our zfs-grub package, since it minimizes the packages conflicts when
>> adding our proxmox repo on top of debian's.
>>
>> [1] https://bugs.debian.org/cgi-bin/bugreport.c
Alexandre,
> That's why we need to use barriers or FUA in recent guest kernels, when
> using O_DIRECT, to be sure that the guest filesystem is OK and data is
> flushed at regular intervals.
The problems are:
- Linux swap - no barrier or something similar
- Windows - I have no idea what Windows does to en
Alexandre,
> do you see the problem with qemu cache=directsync ? (O_DIRECT + O_DSYNC).
Yes, it happens in fewer cases (maybe 10 times fewer) but it still
happens. I have a reproducible case with Windows 7 and directsync.
Stanislav
On Thu, May 28, 2015 at 11:18 AM, Alexandre DERUMIER
wrote
>>Summary: when working in O_DIRECT mode QEMU has to wait until the "write" system
>>call is finished before changing this buffer, OR QEMU has to create a new buffer
>>every time, OR ... other ideas?
AFAIK, only O_DSYNC can guarantee that data is really written to the last
layer (disk platters).
That's
Alexandre,
This is all correct but not related to the inconsistency issue.
Stanislav
On Thu, May 28, 2015 at 10:44 AM, Alexandre DERUMIER
wrote:
> >>That is right and you just can't use O_DIRECT without alignment. You
> would just get an error on "write" system call. If you check
> drbd_oos_test.c
Dietmar,
fsync ensures that data reaches the underlying hardware, but it does not
ensure that the buffer is unchanged until it is fully written.
I will describe my understanding here of why we get this problem with O_DIRECT
and don't have it without.
** Without O_DIRECT **
1. Application tries to wri
>>I recall from ceph list that there were some barrier problems in kernels <
>>2.6.33 . I don't know whether those are fixed in the kernel from RHEL6
>>Proxmox uses...
About barriers:
there are multiple layers (fs, lvm, md, virtio-blk, ...) where they were buggy.
It should be ok with any kernel > 2.6
> not sure it's related, but with O_DIRECT I think that the write needs to
be aligned to a multiple of the 4k block size (or 512 bytes).
That is right and you just can't use O_DIRECT without alignment. You would
just get an error on the "write" system call. If you check drbd_oos_test.c you
find posix_memalign there.
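To illustrate the alignment point (hypothetical file name, 4k block size assumed; the file must live on a filesystem/device that supports O_DIRECT):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* a deliberately misaligned buffer: O_DIRECT writes from it
     * typically fail with EINVAL */
    char *base = malloc(4096 + 1);
    if (!base)
        return 1;
    if (write(fd, base + 1, 4096) < 0)
        printf("unaligned write failed: %s\n", strerror(errno));
    free(base);
    /* an aligned, block-sized buffer from posix_memalign works */
    char *aligned;
    if (posix_memalign((void **)&aligned, 4096, 4096) == 0) {
        memset(aligned, 'x', 4096);
        if (write(fd, aligned, 4096) == 4096)
            printf("aligned write ok\n");
        free(aligned);
    }
    close(fd);
    return 0;
}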
Hi Stanislav,
I really think you should have a look where the problem is. This is not
Proxmox specific at all, so this list doesn't seem the place to diagnose it.
If DRBD guys think it's an upper layer problem, maybe they can point you
to the problematic layer, and then contact the maintainer
> The testcase is bellow.
Oh, I missed that test case. I will try to reproduce that - give me some time.
Hi,
not sure it's related, but with O_DIRECT I think that the write needs to be
aligned to a multiple of the 4k block size (or 512 bytes).
(and I remember some bug with qemu and 512b-logical/4k-physical disks
http://pve.proxmox.com/pipermail/pve-devel/2012-November/004530.html
I'm not an expert so I
Moreover, if you create ext3 on top of md0 and repeat, then the raid array
becomes inconsistent too.
# Additional steps:
mkfs.ext3 /dev/md0
mkdir /tmp/ext3
mount /dev/md0 /tmp/ext3
./a.out /tmp/ext3/testfile1
# and then:
vbindiff /tmp/mdadm{1,2} #press enter multiple times to skip metadata
On T
> I have just done the same test with mdadm and not DRBD. And what I found
> is that this problem was reproducible on the software raid too, just as was
> claimed by Lars Ellenberg. It means that the problem is not only related to
> DRBD but to O_DIRECT mode generally when we don't use host cache and a
m/|/ is always true as it effectively matches 'nothing or nothing
anywhere in a string'.
Looks like it was supposed to be m/\|/.
---
src/PVE/Tools.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/PVE/Tools.pm b/src/PVE/Tools.pm
index 1bc9eec..8e18087 100644
--- a/src/PVE/To
Hi Dietmar,
I did it a couple of times already and every time I got the same answer: "upper
layer problem". Well, as we've come this long way up to this point I would
like to continue.
I have just done the same test with mdadm and not DRBD. And what I found
is that this problem was reproducible on the s