d what the issue is here...
Best regards,
Adrian Costin
On Wed, Apr 17, 2013 at 3:29 PM, Stefan Priebe - Profihost AG <
s.pri...@profihost.ag> wrote:
> On 17.04.2013 14:17, Dietmar Maurer wrote:
> >> Nowhere ;-) how about just return the counter values for the correct
> t
aced as unusedX and is completely
removed from the config file.
Best regards,
Adrian Costin
no conflict. You can install ZFS on Proxmox by simply adding
the Ubuntu PPA and doing apt-get install ubuntu-zfs.
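Roughly, the steps are the following (the exact PPA name is from memory, so double-check it; on a plain Debian base you may have to add the PPA line to sources.list by hand instead):

# add-apt-repository comes from python-software-properties on wheezy-based installs
apt-get install python-software-properties
add-apt-repository ppa:zfs-native/stable
apt-get update
apt-get install ubuntu-zfs    # pulls in the DKMS modules and the zfs userland tools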
Best regards,
Adrian Costin
> I need to find a bug in the ZFSPlugin.
The plugin should be named something like RemoteZFSPlugin, as it could
otherwise be confused with a local ZFS plugin.
I've already made one and I'm thinking of submitting it after a bit
more testing (if people are interested, that is).
Best regards,
the entire
SSH stack and running the zfs/zpool commands directly.
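To illustrate what I mean (the key path below is only how I remember the current plugin doing it, not copied from the code):

# current approach: every operation goes over SSH to the storage box
ssh -i /etc/pve/priv/zfs/<portal>_id_rsa root@<portal> zpool list -H -o name
# local approach: just call the binaries directly on the node
zpool list -H -o name
zfs list -H -o name,volsize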
Best regards,
Adrian Costin
Also no crashes here:
Xeon 5500 and 5600 series, and Xeon E3-12XX.
Best regards,
Adrian Costin
On Thu, Apr 24, 2014 at 12:28 AM, Adrian Costin wrote:
> Also no crashes here:
>
> Xeon 5500 and 5600 series, and Xeon E3-12XX.
>
> Best regards,
> Adrian Costin
>
>
> On
and running a local qcow2 SCSI drive also works.
Best regards,
Adrian Costin
--
# pveversion -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.9
taller and also on an
already installed VM (using VirtIO or IDE, which is then switched to SCSI).
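For reference, this is roughly how I move an existing disk over to SCSI (the VM ID and volume name are only examples):

qm set 100 --delete virtio0                        # the volume shows up as unused0
qm set 100 --scsi0 local:100/vm-100-disk-1.qcow2   # re-attach it on the SCSI bus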
Best regards,
Adrian Costin
ixes the problem.
I've just tried MegaRAID and it doesn't work either. Basically I've
tried all the SCSI controllers and the same thing happens.
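For completeness, switching between controllers for these tests is just (VM ID is an example):

qm set 100 --scsihw lsi
qm set 100 --scsihw megasas
qm set 100 --scsihw virtio-scsi-pci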
Best regards,
Adrian Costin
nd works just fine, but
I don't want to take the performance penalty of running with IDE.
Best regards,
Adrian Costin
ncrements
the ID, however I need to keep all the images inside a secondary
ZFS dataset.
Best regards,
Adrian Costin
> It does not sound like a Proxmox bug. I think a bug should be opened
> on the qemu bugzilla?
Seems to be related to the fact that qemu tries to access the drive
via iscsi rather than a local file or local device.
I'll open the bug report with qemu.
Best regards,
Ad
> this.
This is the only problem. I've been using this config in production
with no other issues for a while. VMs start, and disk creation / deletion
works fine (as long as there is only one disk).
Best regards,
Adrian Costin
I was using version 3.0-19. I've manually applied the diff from git
and indeed it fixes the problem.
Best regards,
Adrian Costin
,romfile=,mac=82:D3:92:5F:29:5A,netdev=net0,bus=pci.0,addr=0x12,id=net0
Best regards,
Adrian Costin
> Do you have tried to change from cache=writeback to cache=none ?
Yeah, I did.
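What I tested was essentially this (storage and volume names are just examples):

qm set 100 --scsi0 mystorage:vm-100-disk-1,cache=none
# which ends up in the VM config as:
# scsi0: mystorage:vm-100-disk-1,cache=none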
Best regards,
Adrian Costin
open-iscsi  2.0.873-3  amd64  High performance, transport independent iSCSI implementation
Best regards,
Adrian Costin
lowing drivers still remains:
scsihw: lsi
scsihw: lsi53c810
scsihw: pvscsi
Using scsihw: virtio-scsi-pci or megasas, SeaBIOS detects the drive and
can start grub from it.
Maybe we need an updated SeaBIOS as well?
Best regards,
Adrian Costin
over Infiniband,
which degrades performance. I've tested SRP on our network and it gives
at least a 100% improvement over the current solution.
Best regards,
Adrian Costin
> Already in git:
> commit 082e79f35b2f7b75862dc3014fb7de8e65fa76c6
Sorry, I didn't see it. It's not visible here:
https://git.proxmox.com/?p=pve-storage.git;a=summary
Best regards,
Adrian Costin
megaraid. No speed monster though;-)
It's not as fast as VirtIO, but it's definitely better than IDE.
> maybe can you try with qemu 2.0 ?
> (I can built it for you if you want).
I can definitely test with qemu 2.0. Are there packages avai
In the git version:
- If libiscsi1 was renamed to libiscsi2, then it needs a "Replaces:
libiscsi1" (or it won't install correctly)
- pve-qemu-kvm needs to depend on libiscsi2 rather than libiscsi1
Just my findings when trying to compile qemu 2.0 with the latest libiscsi.
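Something like this in debian/control is what I mean (just a sketch of the relevant fields, not the actual packaging):

Package: libiscsi2
Replaces: libiscsi1
Breaks: libiscsi1

Package: pve-qemu-kvm
Depends: libiscsi2, ...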
Best
x as a stand-alone server and it's much more convenient
to access all the data using 'localhost' rather than doing an extra GET to
find out the node name.
Is this a bug or is it the intended behaviour?
I'm running the latest packages from pvetest, but I've
e-root-ca.pem
3. /etc/pve/local/pve-ssl.pem contains the following:
-----BEGIN CERTIFICATE-----
[My Cert]
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
[Intermediate cert]
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
[Private key]
-----END RSA PRIVATE KEY-----
> Please remove the private key here!
I guess it wasn't necessary. I've removed it and everything seems to work.
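In case it helps anyone else, the key is supposed to live in its own file anyway, so I ended up with the layout below (as far as I can tell this is the default on a standard install):

/etc/pve/local/pve-ssl.pem   # server certificate (+ intermediate), no key
/etc/pve/local/pve-ssl.key   # private key only

# quick sanity check that the pem no longer contains a key block:
grep -c 'PRIVATE KEY' /etc/pve/local/pve-ssl.pem    # should print 0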
Best regards,
Adrian Costin
- Moved the zpool_import method of zfs_request() to its own pool_request
function
- activate_storage() is now using "zfs list" to check if the zpool is imported
- the pool import now only imports the configured pool, not all the accessible pools
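In terms of the commands the plugin ends up running, activation now boils down to roughly this ($pool is the pool from the storage config; this is a simplified sketch, not a literal copy of the code):

zfs list -H -o name $pool    # succeeds if the pool is already imported
zpool import $pool           # only run if the check above fails, and only for this pool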
Signed-off-by: Adrian Costin
---
PVE/Storage/Z
e from a shared medium like iscsi,
> and thus should not be mounted by two nodes at the same time).
I agree. Should I add another parameter for this? If yes, should it
default to auto-import, or not?
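To make the question concrete, I was thinking of something like this in storage.cfg (both the storage type and the 'autoimport' option name are only placeholders, nothing that exists yet):

zfs: tank-storage
        pool tank
        content images
        autoimport 0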
Best regards,
Adrian Costin