- On 6 May 20, at 17:46, Thomas Lamprecht t.lampre...@proxmox.com wrote:
> On 5/6/20 5:28 PM, Thomas Lamprecht wrote:
>> On 5/6/20 5:21 PM, Daniel Berteaud wrote:
>>> Just opened https://bugzilla.proxmox.com/show_bug.cgi?id=2719
Just opened https://bugzilla.proxmox.com/show_bug.cgi?id=2719
It's rather important, as it might cut remote access (and not everyone has an
IPMI console available)
++
Hi.
I've seen quite a few mentions of Proxmox Backup Server recently (in commit
messages). Is there any documentation about it somewhere? What kind of
features will it provide, etc.?
Thanks.
Daniel
[...] VLAN-aware Linux bridges. So, what's left for OVS?
++
> [...] break current
> behaviour any more), only use scsi-hd, as in recent versions, there is
> almost no difference between the two anyway.
I see that QEMU 4.0.1 was just pushed to the pve-qemu repo. Any idea when
4.1 will follow?
Regards,
Daniel
- On 22 Oct 19, at 17:25, Stefan Reiter s.rei...@proxmox.com wrote:
>
> @Daniel Berteaud: You also mentioned using scsi-hd fixes #2335 (which you
> already have submitted a patch for previously) and #2380. Is this correct?
> Just for reference, so we can keep them in [...]
Why don't you just use VirtIO Block (which doesn't require a SCSI
controller, unlike VirtIO SCSI)?
Cheers,
Daniel
> [...] i.e., add a new Controller w/o downtime, would be
> nice...
>
VirtIO Block shouldn't need a VirtIO SCSI controller (only VirtIO SCSI does,
AFAIK).
What happens if you try to add a disk using VirtIO Block?
Cheers,
Daniel
[...] use it, but the default is
safer for everyone.
Cheers,
Daniel
[...] ZFS over
iSCSI (guest I/O error during live move from ZFS over iSCSI to something else)
++
- On 2 Oct 19, at 18:41, Daniel Berteaud dan...@firewall-services.com wrote:
> - On 30 Sep 19, at 11:52, Thomas Lamprecht t.lampre...@proxmox.com wrote:
>
>>
>> Depends on the outcome of the above, but effectively we would like to
>> not have the choice [...]
[...] if source and dest are not the same type (e.g. if source uses scsi-hd
but dest scsi-block or scsi-generic). And that's without even considering
the issues I have when ZFS is used as a backend.
Cheers,
Daniel
[...] as I
can reproduce it easily, but I won't be able to track it further.
While waiting for this to be solved, would you accept a patch to either force
scsi-hd, or disable scsi-generic/scsi-block? (I'm not sure yet whether it
should be a per-VM tunable or a datacenter-level option.)
Cheers,
Daniel
[...] switching to scsi-hd instead of scsi-generic, then both of those issues
are gone. Drive mirror and resizing work as for any other storage type (most
of which also use scsi-hd).
Cheers,
Daniel
> [...] as it
> seems practical to you in such a case. :)
>
> cheers,
> Thomas
OK, thanks. I'll wait for it to be packaged and available in pvetest, so if
someone finds a regression we can continue on this bug.
Cheers,
Daniel
- On 26 Sep 19, at 18:46, Thomas Lamprecht t.lampre...@proxmox.com wrote:
>>
>> Changes since V1
>> * Avoid nested hash accesses
>> * Re-use variables
>>
>> Daniel Berteaud (3):
>> LIO: Make the target cache work per target and portal
>
When working with several ZFS over iSCSI / LIO storages, we might look
them up less than 15 seconds apart.
Previously, the cache of the previous storage was reused, which broke
disk moves, for example.
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 36
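(A minimal Perl sketch of the idea described above, for illustration only;
the names and data structure are assumptions, not the actual LIO.pm code.
The 15-second TTL comes from the commit message.)

#!/usr/bin/perl
use strict;
use warnings;

# Key the cached config by target + portal instead of using a single
# shared slot, so two ZFS over iSCSI / LIO storages queried within the
# TTL window don't see each other's data.
my $CACHE_TTL = 15;    # seconds
my %cache;             # "target@portal" => { data => ..., ts => ... }

sub get_target_config {
    my ($target, $portal, $fetch) = @_;
    my $key = "$target\@$portal";
    my $entry = $cache{$key};
    if (!$entry || time() - $entry->{ts} > $CACHE_TTL) {
        # Cache miss or stale entry: re-read from this specific portal.
        $entry = { data => $fetch->($target, $portal), ts => time() };
        $cache{$key} = $entry;
    }
    return $entry->{data};
}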
So it won't clash with another backstore in another pool
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 21 +
1 file changed, 21 insertions(+)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
index 2fd3181..122c203 100644
--- a/PVE/St
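(Again only a sketch: the patch prefixes each backstore with its pool name so
identically named volumes from different pools can coexist on one LIO server.
The exact naming scheme below is an assumption for illustration.)

#!/usr/bin/perl
use strict;
use warnings;

# Derive the backstore name from pool + volume, so vm-100-disk-0 on
# pool "tank" cannot clash with vm-100-disk-0 on pool "backup".
sub backstore_name {
    my ($pool, $volname) = @_;
    $pool =~ s!/!-!g;    # flatten nested pool paths for the name
    return "$pool-$volname";
}

print backstore_name('tank', 'vm-100-disk-0'), "\n";  # tank-vm-100-disk-0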
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
index 122c203..15ddabf 100644
--- a/PVE/Storage/LunCmd/LIO.pm
+++ b/PVE/Storage/LunCmd/LIO.pm
@@ -255,7
This series fixes support for several ZFS pools exported by the same
server (through different targets) when using LIO. It fixes bug 2384.
Changes since V1
* Avoid nested hash accesses
* Re-use variables
Daniel Berteaud (3):
LIO: Make the target cache work per target and portal
LIO: Prefix backstores with the pool name
- On 26 Sep 19, at 15:26, Thomas Lamprecht t.lampre...@proxmox.com wrote:
> On 9/25/19 10:28 AM, Daniel Berteaud wrote:
>> When working with several ZFS over iSCSI / LIO storages, we might look
>> them up less than 15 seconds apart.
>> Previously, the [...]
extract_volname can return an undef $volname
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
index 5f9794d..5d7a21d 100644
--- a/PVE/Storage/LunCmd
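(An illustrative sketch of the guard this patch adds: an extractor that can
return undef must not have its result used unchecked. extract_volname() here
is a stand-in with a hypothetical pattern, not the real one from LIO.pm.)

#!/usr/bin/perl
use strict;
use warnings;

# A volume-name extractor may return undef for backstores that don't
# match the expected naming scheme; callers must check defined() first.
sub extract_volname {
    my ($path) = @_;
    return $path =~ m!/((?:vm|base)-\d+-\S+)$! ? $1 : undef;
}

for my $path ('/dev/tank/vm-100-disk-0', '/dev/tank/unrelated') {
    my $volname = extract_volname($path);
    next if !defined($volname);   # the fix: skip undef instead of dying
    print "found volume: $volname\n";
}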
This series fixes support for several ZFS pools exported by the same
server (through different targets) when using LIO. It fixes bug 2384.
Daniel Berteaud (3):
Make the target cache work per target and portal
Ensure $volname is defined before using it
Prefix backstores with the pool name
So it won't clash with another backstore in another pool
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 20
1 file changed, 20 insertions(+)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
index 5d7a21d..80133d4 100644
--- a/PVE/St
When working with several ZFS over iSCSI / LIO storages, we might look
them up less than 15 seconds apart.
Previously, the cache of the previous storage was reused, which broke
disk moves, for example.
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 20
Cheers,
Daniel
Signed-off-by: Daniel Berteaud
---
PVE/Storage/ZFSPlugin.pm | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage/ZFSPlugin.pm b/PVE/Storage/ZFSPlugin.pm
index 1fac811..8c6709c 100644
--- a/PVE/Storage/ZFSPlugin.pm
+++ b/PVE/Storage/ZFSPlugin.pm
@@ -101,6 +101,8
In the default config, emulate_tpu is set to 0, which disables
unmap support. Once enabled, trim can be run from the guest to reclaim
free space.
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage/LunCmd
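(A sketch of what enabling unmap amounts to on the LIO side; run_targetcli()
is a hypothetical stand-in for however the plugin executes targetcli on the
portal host, and is stubbed here. The attribute name emulate_tpu comes from
the commit message.)

#!/usr/bin/perl
use strict;
use warnings;

# When creating a block backstore, also set emulate_tpu=1 so the guest
# can issue UNMAP/TRIM and the ZVOL reclaims the freed space.
sub run_targetcli {
    my (@args) = @_;
    print "would run: targetcli @args\n";   # stub for illustration
}

my $backstore = 'tank-vm-100-disk-0';
run_targetcli("/backstores/block/$backstore", 'set', 'attribute', 'emulate_tpu=1');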
This patch series fixes a few minor problems with ZFS over iSCSI
in general, and in the LIO backend. It also enables unmap support.
Changes since V1:
* Fix linked clone support in ZFSPlugin instead of LIO
Daniel Berteaud (3):
Don't remove and recreate lun when changing a volume
Enable unmap support
It's not needed; LIO sees the new size automatically.
And it was broken anyway. Partially fixes #2335.
Signed-off-by: Daniel Berteaud
---
PVE/Storage/LunCmd/LIO.pm | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/L
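(A minimal sketch of the resulting resize path, with hypothetical names: the
ZVOL is grown on the storage side and the LIO LUN is simply left alone, since
the kernel target sees the new device size by itself.)

#!/usr/bin/perl
use strict;
use warnings;

sub grow_zvol {    # hypothetical stand-in for the ZFS-side resize
    my ($volname, $size) = @_;
    print "zfs set volsize=$size $volname\n";
}

sub volume_resize {
    my ($volname, $size) = @_;
    grow_zvol($volname, $size);
    # No remove/recreate of the LUN here any more: LIO picks up the
    # new size of the backing device automatically.
    return 1;
}

volume_resize('tank/vm-100-disk-0', '20G');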
- On 17 Sep 19, at 12:44, Daniel Berteaud dan...@firewall-services.com wrote:
> Linked clones have an image name like base-100-disk-0/vm-101-disk-0.
> The previous regex didn't catch it, and thus resizing a linked clone
> failed.
> ---
> PVE/Storage/LunCmd/LIO.pm | 2 +-
It's not needed; LIO sees the new size automatically.
And it was broken anyway. Partially fixes #2335.
---
PVE/Storage/LunCmd/LIO.pm | 10 ++
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
index e0fac82..1ddc02d 100644
--- a
In the default config, emulate_tpu is set to 0, which disables
unmap support. Once enabled, trim can be run from the guest to reclaim
free space.
---
PVE/Storage/LunCmd/LIO.pm | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
i
Linked clones have an image name like base-100-disk-0/vm-101-disk-0.
The previous regex didn't catch it, and thus resizing a linked clone
failed.
---
PVE/Storage/LunCmd/LIO.pm | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/PVE/Storage/LunCmd/LIO.pm b/PVE/Storage/LunCmd/LIO.pm
index
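(An illustrative regex, not necessarily the exact one from the patch: it
accepts both plain volumes and linked clones, whose names carry the base
image as a "base-<id>-disk-<n>/" prefix.)

#!/usr/bin/perl
use strict;
use warnings;

# Match plain volumes and linked clones; capture the final volume part.
for my $name ('vm-101-disk-0', 'base-100-disk-0/vm-101-disk-0') {
    if ($name =~ m!^(?:base-\d+-disk-\d+/)?((?:vm|base)-\d+-disk-\d+)$!) {
        print "$name -> volume part: $1\n";
    }
}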
This patch series fixes a few minor problems with the LIO backend for ZFS
over iSCSI, especially with volume resizing.
It also enables unmap support.
Daniel Berteaud (3):
Don't remove and recreate lun when changing a volume
Enable unmap support
Fix volume name identification with linked clones
Hi there.
I'm planning to give FreeNAS and ZFS over iSCSI a try. It looks like
https://github.com/TheGrandWazoo/freenas-proxmox is maintained and
regularly updated.
Any plan to merge this into PVE? I'm a bit reluctant to start relying
on 3rd-party patches which might be abandoned at any time.
Regards
This is all the responsibility of the VM, not the host.
++
[...] (https://stackoverflow.com/questions/20514239/how-to-make-gnu-screen-auto-start-when-login)
Hadn't even thought about this before! Thanks for the tip :-)
++
> [...]
> but an unexpected event could always arise. A real VNC console would
> be unaffected.
Could Proxmox automatically create/re-attach a screen session to emulate a
persistent console? Not sure if that's easy to do, or feasible at
all... (well, not with noVNC I guess, but with xterm maybe)
> For example, "vncviewer my.xenserver.host 5901" will persist even after
> killing the "vncviewer" process.
> This is not the same with NoVNC.
>
> Any idea? If it's currently not possible, do you have plans to add
> support for this? I think that would be much appreciated.
Use [...]