Hi, PVE developers,
is this a bug?
# cat /etc/pve/qemu-server/101.conf
balloon: 4096
bootdisk: virtio0
cores: 4
cpuunits: 15
hotplug: disk,network,usb,memory,cpu
memory: 8192
name: f21-base
net0: virtio=0A:C3:78:E6:50:8F,bridge=vmbr1
numa: 1
onboot: 1
ostype: l26
smbios1: uuid=f3f1a604
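For context, a qemu-server VM config such as the one above is just a plain-text file of colon-separated key/value pairs under /etc/pve/qemu-server/. A minimal sketch of reading such a file into a hash (read_vm_conf is a made-up helper for illustration, not the parser qemu-server actually uses):

```perl
use strict;
use warnings;

# Hypothetical helper, for illustration only -- qemu-server ships its own,
# much more thorough config parser.
sub read_vm_conf {
    my ($path) = @_;
    my $conf = {};
    open(my $fh, '<', $path) or die "cannot open $path: $!";
    while (my $line = <$fh>) {
        chomp $line;
        next if $line =~ /^\s*(#.*)?$/;      # skip blank and comment lines
        if ($line =~ /^(\S+):\s*(.*)$/) {    # e.g. "memory: 8192"
            $conf->{$1} = $2;
        }
    }
    close($fh);
    return $conf;
}

my $conf = read_vm_conf('/etc/pve/qemu-server/101.conf');
print "hotplug = $conf->{hotplug}\n" if defined $conf->{hotplug};
```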
Hi, PVE developers,
why is the disk count limited to fewer than 6?
Why can't I select all disks for zfs_raid_setup?
……
sub get_zfs_raid_setup {
    my $filesys = $config_options->{filesys};
    my $dev_name_hash = {};
    my $devlist = [];
    my $bootdevlist = [];
    for (my $i = 0; $i < 6; $i++) {
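The loop above only ever walks six selector slots, which is why at most six disks can end up in the ZFS RAID setup. A minimal sketch of the general idea of iterating over every detected disk instead, assuming the disks are available as an array reference and the selections as disksel<i> keys (both assumptions, not the installer's verified code):

```perl
use strict;
use warnings;

# Sketch only -- the names ($hds, "disksel$i") mirror what the installer
# appears to use, but this is not the actual proxinstall code.
my $hds = ['/dev/sda', '/dev/sdb', '/dev/sdc', '/dev/sdd',
           '/dev/sde', '/dev/sdf', '/dev/sdg', '/dev/sdh'];
my $config_options = { map { ("disksel$_" => $hds->[$_]) } 0 .. $#$hds };

my $devlist = [];
# Iterate over every detected disk instead of a hard-coded "< 6".
for my $i (0 .. $#$hds) {
    my $hd = $config_options->{"disksel$i"};
    next if !defined $hd;                    # selector left empty
    push @$devlist, $hd;
}
print scalar(@$devlist), " disks selected for the ZFS RAID setup\n";
```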
OpenVZ would probably work just fine, but KVM would be slow, at best. The
rest should function normally.
On Sun, Feb 1, 2015, 19:09 Lindsay Mathieson wrote:
>
> On 31 January 2015 at 02:59, Martin Maurer wrote:
>
>> We just updated the pvetest repository and uploaded a lot of latest
>> package
On 31 January 2015 at 02:59, Martin Maurer wrote:
> We just updated the pvetest repository and uploaded a lot of latest
> packages required to support ZFS on Linux.
>
> Also note that we have downgraded pve-qemu-kvm from 2.2 to 2.1, because
> live migration was unstable on some hosts. So please d
Signed-off-by: Stefan Priebe
---
 PVE/QemuServer.pm | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 7045c14..04db8a6 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -4598,6 +4598,11 @@ sub pci_dev_group_bind_to_vfio {
foreac
Hi,
while testing newer kernels I stumbled upon a problem with the vfio code in PVE.
At first I thought it was a kernel bug, but it isn't, as Alex Williamson from
Red Hat told me. Newer kernels, and also all stable kernels, have carried a patch
since January that prevents binding non-PCI devices to vfio-pci. I'm sure
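Going by that description, the patch presumably has to make pci_dev_group_bind_to_vfio skip IOMMU-group members that vfio-pci will now refuse to bind, such as PCI bridges. A minimal sketch of that idea, under the assumption that "non-PCI devices" here effectively means bridges and similar group members; this is not Stefan's actual patch:

```perl
use strict;
use warnings;

# Sketch only: decide whether an IOMMU-group member should be handed to
# vfio-pci. Treating "non-bindable" as "PCI bridge" (sysfs class 0x0604xx)
# is an assumption for illustration, not the real fix.
sub device_bindable_to_vfio {
    my ($pciid) = @_;                        # e.g. "0000:01:00.0"
    my $classfile = "/sys/bus/pci/devices/$pciid/class";
    return 0 if !-e $classfile;              # no normal PCI device node
    open(my $fh, '<', $classfile) or return 0;
    my $class = <$fh>;
    close($fh);
    return 0 if !defined $class;
    chomp $class;
    return 0 if $class =~ /^0x0604/;         # skip PCI-to-PCI bridges
    return 1;
}

# Example: pass PCI IDs on the command line and see which would be bound.
for my $member (@ARGV) {
    printf "%s: %s\n", $member,
        device_bindable_to_vfio($member) ? 'bind to vfio-pci' : 'skip';
}
```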