On Linux (source) and NAS4Free/FreeBSD (destination).
Sent from my iPhone
> On 16 Feb 2015, at 08:57, Wolfgang Link wrote:
>
> Do you use this on Linux or on BSD/SunOS?
>
>> On 02/13/2015 01:44 PM, Pablo Ruiz wrote:
>> I am using this same one at production on a f
ith iscsi export should not in the storage.cfg.
> On 12.02.2015 at 21:25, "Pablo Ruiz" wrote:
>
>> Hi,
>>
>> IMHO, I see no reason not to default to the most common case (i.e.
>> auto-importing) if there's a way to override it, and such a way is
>> som
I am using this same one in production on a few machines w/o an issue. Also,
around Google you will find a port over to bash instead of ksh (which in fact
requires changing no more than 10 lines)..
Sometimes when a piece of software has no recent releases, it does not mean it is
not maintained, but that it requir
Hi,
IMHO, I see no reason not to default to the most common case (i.e.
auto-importing) if there's a way to override it, and such a way is
somewhat documented.. ;)
On Thu, Feb 12, 2015 at 8:35 PM, Adrian Costin wrote:
>
> AFAIK having a setting to control whether auto-import of the pool is desirable
Hi,
AFAIK having a setting to control whether auto-import of the pool is desirable
would be a plus, as in some situations the import/export of the pool is
controlled by other means, and an accidental import of the pool may be a
destructive action (i.e. when the pool may come from a shared medium like
isc
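For context, a minimal sketch (pool name purely illustrative) of an externally
controlled import/export, e.g. driven by a cluster resource agent; this is exactly
the situation where an unconditional auto-import from the storage plugin could get
in the way:

    # node taking ownership of the shared pool: import without relying on the
    # cachefile, so no node re-imports the pool automatically on boot
    zpool import -N -o cachefile=none tank

    # ... serve the ZVOLs/LUNs from 'tank' while this node owns it ...

    # before handing the pool over to another node (failover), export it cleanly
    zpool export tank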
Well, in this case ZFS is not running on the host, but on an external
system (a two-node Red Hat cluster running ZFS on Linux, with attached
shared storage disks, to be exact).
On Sat, Aug 2, 2014 at 7:20 PM, Michael Rasmussen wrote:
> On Sat, 2 Aug 2014 16:57:48 +
> Dietmar Maurer wrote:
Implementing a simple provider which would simply call a script was my
initial attempt, and I even sent to the list an initial set of
patches, which then, as requested by list members, I evolved into the more
radical approach you've seen now. :?
I'll try to refactor the code back into an s
n off.. I would be glad to hear from the
community or from the ZFSPlugin maintainer/original developer in order to
merge this feature(s) somehow. ;)
Regards
Pablo
On Wed, Jul 30, 2014 at 3:37 AM, Michael Rasmussen wrote:
> On Wed, 30 Jul 2014 02:51:30 +0200
> Pablo Ruiz wrote:
>
There are also a few patches I sent some time ago with an overhaul of the ZFS LUN
handling code, which are awaiting some comments from the Proxmox community..
I have created my own testing deb packages and I could share the repo with
anyone interested in testing.
Sent from my iPhone
On 29 Jul 2014, a
Oh, in such a case, his issue would (hopefully) be fixed once this patch
reaches pve-testing.. ;)
On Sun, May 4, 2014 at 2:09 AM, Michael Rasmussen wrote:
> On Sun, 4 May 2014 01:56:36 +0200
> Pablo Ruiz wrote:
>
> > Yeah.
> >
> > If Adrian is already using
Yeah.
If Adrian is already using this patch, and the problem persists, I could
take a look at it tomorrow or maybe by Monday. So, just let me know.
Regards
Pablo
On Sun, May 4, 2014 at 12:27 AM, Michael Rasmussen wrote:
> On Sat, 3 May 2014 21:46:21 +0200
> Pablo Ruiz wrote:
>
>
I sent a patch a couple of months ago allowing support for nested pools. I
think it was merged into testing.
On Sat, May 3, 2014 at 9:23 PM, Michael Rasmussen wrote:
> On Sat, 3 May 2014 21:58:43 +0300
> Adrian Costin wrote:
>
> > Small bug when creating a second disk for the same VM on the sa
Set up vmbr0.102 as the host interface instead of bond0.102; see the sketch after the quoted config below.
Sent from my iPhone
> On 24 Apr 2014, at 21:59, Stefan Priebe wrote:
>
> Hi,
>
> I've the following config on my host:
> auto bond0
> iface bond0 inet manual
>    slaves eth0 eth1 eth2
>    bond_mode 802.3ad
>
> auto vmbr0
> ifac
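For the config quoted above, a minimal sketch of the suggested change (the
address is purely illustrative): define the VLAN interface on top of the bridge
rather than on the bond, so the host and the bridged guests share the same path
for VLAN 102:

    auto vmbr0.102
    iface vmbr0.102 inet static
        # host address on VLAN 102; replace with your real addressing
        address 192.168.102.10
        netmask 255.255.255.0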
Signed-off-by: Pablo Ruiz García
---
PVE/Storage/ZFSPlugin.pm |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/PVE/Storage/ZFSPlugin.pm b/PVE/Storage/ZFSPlugin.pm
index ac8eb0a..d310a8c 100644
--- a/PVE/Storage/ZFSPlugin.pm
+++ b/PVE/Storage/ZFSPlugin.pm
@@ -41,6 +41,9
Signed-off-by: Pablo Ruiz García
---
PVE/Storage/ZFSPlugin.pm | 16 +++-
1 files changed, 3 insertions(+), 13 deletions(-)
diff --git a/PVE/Storage/ZFSPlugin.pm b/PVE/Storage/ZFSPlugin.pm
index 80fc0ea..ac8eb0a 100644
--- a/PVE/Storage/ZFSPlugin.pm
+++ b/PVE/Storage/ZFSPlugin.pm
Signed-off-by: Pablo Ruiz García
---
zfs-helpers/Common.pm | 285
zfs-helpers/comstar | 114 ++
zfs-helpers/iet | 488
zfs-helpers/istgt | 593 +
4 files
return the given ZVOL/LUN's number.
The actual commands are modelled after the ones invoked by Comstar's LunCmd
perl module which, even if it may be a bit too coupled to Comstar's/Solaris'
way of doing things, seems to be flexible enough to be useful for other
implementati
Signed-off-by: Pablo Ruiz García
---
zfs-helpers/comstar | 12 ++--
zfs-helpers/iet | 20
zfs-helpers/istgt | 36 +---
3 files changed, 47 insertions(+), 21 deletions(-)
diff --git a/zfs-helpers/comstar b/zfs-helpers
Hello,
This is a new version of my ZFS LUN management code refactor, this time
completely removing the 'embedded' LunCmd drivers, and adding them as
independent perl scripts to the pve-storage repo.
ZFSPlugin functionality has been tested and it works fine with our own
zfs-helper; also, I've tried to
Signed-off-by: Pablo Ruiz García
---
PVE/Storage/LunCmd/Comstar.pm | 102 ---
PVE/Storage/LunCmd/Iet.pm | 478 -
PVE/Storage/LunCmd/Istgt.pm | 580 -
PVE/Storage/LunCmd/Makefile |5 -
PVE/Storage/Makefile
Mar 2014, at 17:51, Michael Rasmussen wrote:
>
> On Thu, 20 Mar 2014 16:32:44 +0100
> Pablo Ruiz wrote:
>
>>
>> Any preferences on this subject?
>>
> If your scripts should be able to execute on non-proxmox servers then
> the answer is obvious.
>
Another question regarding these lun scripts.. I will be adding them, as
requested, into a new dir /scripts in the pve-storage repo.
However, the actual lun helpers (i.e. LunCmd/*) make use of (among others)
the PVE::Tools package's functions, and I am facing two options:
1) Import such packages into each o
ettings (which looks a bit of a kludge) and only expose a selected subset
as PMXVARs.
Regards
Pablo
On Wed, Mar 19, 2014 at 10:42 AM, Pablo Ruiz wrote:
> Hi,
>
> While using command line arguments may seem the obvious approach, in the
> end this is a more fragile mechanism to pass d
Hmm, that's a good point and it's perfectly workable.. ;)
I've reviewed the current code and I will be adding 'pool' and 'target' as
PMXVARs, which seem like the most obvious values needed by the lun helper, and
thus removing all previously exposed PMXCFG variables.
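To make that concrete, a minimal helper sketch, assuming the plugin exports the
selected values as environment variables before invoking the helper; the variable
names and subcommands below are purely illustrative, not the actual protocol:

    #!/bin/bash
    set -eu
    # values the plugin is assumed to expose as PMXVARs (names illustrative)
    pool="${PMXVAR_POOL:?pool not set}"
    target="${PMXVAR_TARGET:?target not set}"

    action="$1"    # e.g. create_lu / delete_lu (illustrative)
    zvol="$2"      # e.g. /dev/zvol/$pool/vm-100-disk-1

    case "$action" in
        create_lu) echo "would export $zvol on iSCSI target $target" ;;
        delete_lu) echo "would remove $zvol from iSCSI target $target" ;;
        *)         echo "unknown action: $action" >&2; exit 1 ;;
    esac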
On Thu, Mar 20, 2014 at 2:09 PM,
It is needed by the ZFS code in order to invoke zfs volume creation with a
specific blocksize. The ZFS plugin handles interaction with zfs, while the
lun-helper handles exposing/sharing such a ZFS volume as an iSCSI LUN. (Which
makes sense, as ZFS volume creation, etc. is always common across all ZFS
impl
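A minimal sketch of that split, with pool and volume names that are only
illustrative:

    # plugin side: create the zvol itself, including the requested blocksize
    zfs create -s -V 32G -o volblocksize=8k tank/vm-100-disk-1

    # helper side: only exposing /dev/zvol/tank/vm-100-disk-1 as an iSCSI LUN
    # is delegated to the target-specific lun-helper script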
Well, I was thinking of passing some discriminator value which can be used by
the script to differentiate between multiple callers using different zpools
(on my own test cluster I have set up a couple of storages/zpools).
However, specifying the pool would be a valid discriminator too, alas it wi
Hmm, that makes a lot of sense. ;)
I will pass the storage's name/id as another PMXVAR, and remove all PMXCFG
variables; this way we will produce a much shorter command string.
On Mar 20, 2014 7:30 AM, "Dietmar Maurer" wrote:
> > > Just curious - how long are the command lines this patch genera
Hi,
While using command line arguments may seem the obvious approach, in the
end this is a more fragile mechanism for passing data to a helper script, as
backward/forward compatibility would be harder to achieve once new 'data' is
to be passed.
Command line arguments need to respect ordering, or define
Hello,
This is a followup to my previous attempt at providing generic support
for LUN management in the ZFS Plugin by using an independent helper script/binary.
This patch series refactors the current ZFS Plugin, removing support for perl-based
LunCmd drivers, and instead provides a generic way of invokin
Signed-off-by: Pablo Ruiz García
---
PVE/Storage/LunCmd/Comstar.pm | 102 ---
PVE/Storage/LunCmd/Iet.pm | 478 -
PVE/Storage/LunCmd/Istgt.pm | 580 -
PVE/Storage/LunCmd/Makefile |5 -
PVE/Storage/Makefile
return the given ZVOL/LUN's number.
The actual commands are modelled after the ones invoked by Comstar's LunCmd
perl module which, even if it may be a bit too coupled to Comstar's/Solaris'
way of doing things, seems to be flexible enough to be useful for other
implementati
ng to test the new code. Volunteers?
2) Where should I include such scripts? Maybe some sort of contrib
repository? Or would placing them at my own GitHub account be enough?
Regards
Pablo
On Mon, Mar 3, 2014 at 2:09 PM, Pablo Ruiz wrote:
> Daniel, That's exactly the idea. ;)
>
Hi Chris,
I am working on a refactor of ZFS Plugin which will decouple specific LUN
implementations from the driver, by providing a single interface for LUN
implementation by means of invoking an external script/program/binary. As
such, I will try to review your patches and incorporate "what is no
Daniel, That's exactly the idea. ;)
I'll be a bit busy this week attending some conferences, etc. But I will
work on a revised patch next week so it can be reviewed by any
interested peers on this same list.
Regards
Pablo
On Sun, Mar 2, 2014 at 9:08 PM, Daniel Hunsaker wrote:
> > I might ha
Yeah!
If I understand it correctly, what you mean is removing all LunCmd
implementations and just invoking a remote script (defined in storage.cfg)
from the ZFS plugin directly. That would be great and would still allow
each implementor of a ZFS backend to freely implement their own LUN
management l
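To illustrate what 'defined in storage.cfg' could look like, a hypothetical
entry; the option names below (in particular 'lunhelper') are made up for this
sketch and are not existing Proxmox options:

    zfs: shared-tank
        pool tank
        portal 192.168.100.10
        target iqn.2014-03.org.example:tank
        lunhelper /usr/local/sbin/my-lun-helper
        content images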
Hi Dietmar,
What I am using is a customized Red Hat cluster setup with ZFS on Linux,
plus some resource agent & helper scripts (available here:
https://github.com/pruiz/zfs-cluster) which allow management of NFS/iSCSI
exports, along with taking care of mounting/unmounting (export/import in
ZFS term
Hi Dietmar,
I am going to dig into dynamic loading of lun plugins and will be sending
a new version of the code ASAP. However, would you consider merging the
second patch ([PATCH 2/2] Improve parsing of zfs volumes (ZVOLs) in order
to avoid) which indeed is not related to the lun plugins?
Regard
This is an interesting idea, however in such a case, my main issue would be
validating the new storage.cfg parameters each new plugin would need/add. As JSON
schema validation needs to be done before storage.cfg's contents are loaded,
we would not have loaded the plugins yet either.
Any ideas?
On Wed
This is just an additional plugin under LunCmd which allows developing lun
plugins without depending on proxmox's release schedule, and which can be
used in cases where the plugin to be developed is going to be too specific
to be useful for others.
The way I see it, ending up with a lot of specific
Our iSCSI/ZFS infrastructure is somewhat specific to our environment,
and I felt like adding a driver just for us was of no use for proxmox
folks, nor for the community at large, so I've opted for an alternative
way by introducing a 'generic' LUN management driver which just invokes
an independent
From: Pablo Ruiz Garcia
This way, the specifics of the lun management can be developed independently
of proxmox, and we can avoid interfering with proxmox's release schedule.
The 'protocol' which proxmox uses to communicate with the helper is simply
based on exposing a set
From: Pablo Ruiz Garcia
The current code only accepts zvols like: POOL/vm-123-disk-1.
However, using POOL/DataSet/vm-123-disk-1 allows setting specific
properties at the POOL/DataSet level (like compression, etc.) which
would be inherited by any zvol created under such a DataSet.
This allows more
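A minimal sketch of why the nested form is useful (pool and dataset names are
illustrative): properties set on the parent dataset are inherited by zvols
created underneath it:

    # set the desired properties once, on the parent dataset
    zfs create -o compression=lz4 tank/vmdata

    # zvols created below it inherit those properties automatically
    zfs create -s -V 32G tank/vmdata/vm-123-disk-1
    zfs get -o name,value,source compression tank/vmdata/vm-123-disk-1
    #   tank/vmdata/vm-123-disk-1  lz4  inherited from tank/vmdata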
Hello,
Our iSCSI/ZFS infrastructure is somewhat specific to our environment,
and I felt like adding a driver just for us was of no use for proxmox
folks, nor for the community at large, so I've opted for an alternative
way by introducing a 'generic' LUN management driver which just invokes
an ind
/optimizations as of today.. ;))
On Wed, Feb 12, 2014 at 9:55 AM, Pablo Ruiz wrote:
> That's what MSTP/PVSTP+ is supposed to avoid. (And in fact, it does so in
> our environment).. however, it requires switches with such capability.
>
>
> On Wed, Feb 12, 2014 at 9:53 AM, An
> It is the same as looping a cable between two ports on a switch that does
> not have edge-safeguard functionality.
>
> Just my 2c.
>
>
>
>
> On Wed, Feb 12, 2014 at 6:28 PM, Pablo Ruiz wrote:
>
>> Hi,
>>
>> In our proxmox cluster, each node has two b
Signed-off-by: Pablo Ruiz Garcia
---
data/PVE/Network.pm | 72 ---
1 files changed, 34 insertions(+), 38 deletions(-)
diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
index 9ad34f1..d5550a3 100644
--- a/data/PVE/Network.pm
+++ b/data/PVE
Signed-off-by: Pablo Ruiz Garcia
---
data/PVE/Network.pm |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
index d5550a3..96cf20b 100644
--- a/data/PVE/Network.pm
+++ b/data/PVE/Network.pm
@@ -142,6 +142,9 @@ sub
Signed-off-by: Pablo Ruiz Garcia
---
data/PVE/Network.pm | 20 +++-
1 files changed, 11 insertions(+), 9 deletions(-)
diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
index 96cf20b..2fbb715 100644
--- a/data/PVE/Network.pm
+++ b/data/PVE/Network.pm
@@ -166,6 +166,14
Hi Dietmar,
Here goes v2 with your requested changes. I just tested it briefly by
rebooting nodes and migrating a couple of VMs from/to the nodes with these
new changes applied.
diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
index 9ad34f1..2fbb715 100644
--- a/data/PVE/Network.pm
+++ b/
Sure ;)
diff --git a/data/PVE/Network.pm b/data/PVE/Network.pm
index 9ad34f1..96cf20b 100644
--- a/data/PVE/Network.pm
+++ b/data/PVE/Network.pm
@@ -122,36 +122,10 @@ sub copy_bridge_config {
}
}
-sub activate_bridge_vlan {
-my ($bridge, $tag_param) = @_;
-
-die "bridge '$bridge' is
Btw, I've included a link to the first commit only. The full branch with the
feature can be found at:
https://github.com/pruiz/pve-common/tree/bridge-multislave
On Wed, Feb 12, 2014 at 6:28 AM, Pablo Ruiz wrote:
> Hi,
>
> In our proxmox cluster, each node has two bond interfaces,
Hi,
In our proxmox cluster, each node has two bond interfaces, and each bond
interface connects to an independent switch. This allows us to enable
MSTP/PVSTP+ and thus load share traffic on different vlans across switches.
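A minimal sketch of that layout in /etc/network/interfaces terms (interface
names and the address are illustrative): a single bridge carrying both bonds,
with loop avoidance left to the switches (MSTP/PVSTP+) rather than to the bridge:

    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_mode 802.3ad

    auto bond1
    iface bond1 inet manual
        slaves eth2 eth3
        bond_mode 802.3ad

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0 bond1
        bridge_stp off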