p in rebasing.
>
> --Sanjay
>
> > -Original Message-
> > From: Pawit Pornkitprasan [mailto:p.pa...@gmail.com]
> > Sent: Wednesday, June 12, 2013 7:35 AM
> > To: dev@cloudstack.apache.org
> > Cc: Ryousei Takano; Edison Su; Kelven Yang
> >
> or which
> PCI device is available, so the compute offering tag is not going to make
> any sense.
> On the other hand, if it is a tag like "GPU enabled", then that would make
> more sense?
>
>
> On 8/8/13 8:44 AM, "Pawit Pornkitprasan" wrote:
>
>>
ui/dictionary.jsp 3dfdefe
ui/scripts/configuration.js f9c2498
ui/scripts/docs.js e3be08c
Diff: https://reviews.apache.org/r/12098/diff/
Testing
---
Testing done with Mellanox ConnectX-2 NIC with SR-IOV on Ubuntu Raring.
Thanks,
Pawit Pornkitprasan
Any VM requesting PCI
Passthrough will gracefully fail in the planning step, as no host will be able to
provide the requested PCI devices.
- Pawit Pornkitprasan
On July 2, 2013, 7:12 a.m., Pawit Pornkitprasan wrote:
>
> ---
> This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/12098/
setup/db/db/schema-410to420.sql 325924b
ui/dictionary.jsp 7809cdb
ui/scripts/configuration.js 7f0e1a5
ui/scripts/docs.js 5aa352a
Diff: https://reviews.apache.org/r/12098/diff/
Testing
---
Testing done with Mellanox ConnectX-2 NIC with SR-IOV on Ubuntu Raring.
Thanks,
Pawit
Hi Paul,
I think that is more or less dependent on hardware quirks. In my
case, I only had to add intel_iommu=on to the kernel command line to get PCI
Passthrough working. An additional parameter, pci=nocrs, was needed to
get the SR-IOV mode of the Mellanox ConnectX-2 card working.
Best Regards,
Pawit
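As a sketch of the kernel-parameter change described above (the file path and variable assume a GRUB 2 setup such as the Ubuntu release mentioned; adjust for your distribution and existing options):

```shell
# /etc/default/grub (assumed GRUB 2 layout)
# intel_iommu=on enables the IOMMU, required for PCI Passthrough;
# pci=nocrs was additionally needed for SR-IOV on the Mellanox ConnectX-2.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci=nocrs"

# Apply and reboot, then check the parameters took effect:
#   sudo update-grub && sudo reboot
#   cat /proc/cmdline
```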
on state change event). After the migration is successful,
PciDeviceManager sends AttachPciDevicesCommand to the new agent with a
list of the PCI IDs on the new host and the agent orders libvirt to
attach it based on the command.
Best Regards,
Pawit
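The attach step described above (the agent ordering libvirt to attach the devices listed in the command) can be sketched with libvirt's `<hostdev>` device XML. This is an illustration, not CloudStack's actual agent code; the `hostdev_xml` helper is invented here, and only the XML schema and the libvirt-python calls in the trailing comment are real APIs:

```python
import re

def hostdev_xml(pci_addr: str) -> str:
    """Build the libvirt <hostdev> XML for one passed-through PCI device.

    pci_addr is an lspci-style address such as "0000:04:00.1"
    (domain:bus:slot.function).
    """
    m = re.fullmatch(
        r"([0-9a-f]{4}):([0-9a-f]{2}):([0-9a-f]{2})\.([0-7])", pci_addr.lower()
    )
    if not m:
        raise ValueError(f"not a PCI address: {pci_addr}")
    domain, bus, slot, function = m.groups()
    return (
        "<hostdev mode='subsystem' type='pci' managed='yes'>\n"
        "  <source>\n"
        f"    <address domain='0x{domain}' bus='0x{bus}' "
        f"slot='0x{slot}' function='0x{function}'/>\n"
        "  </source>\n"
        "</hostdev>"
    )

# On the agent, this XML would be attached to the running guest, e.g.:
#   import libvirt
#   dom = libvirt.open("qemu:///system").lookupByName(vm_name)
#   dom.attachDevice(hostdev_xml("0000:04:00.1"))
```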
On Thu, Jun 20, 2013 at 1:12 PM, Pawit Pornkitprasan wrote:
Hi,
Following my previous post about implementing PCI Passthrough on
CloudStack (KVM), I have taken Edison Su's and others' comments into
account and have come up with an improved design.
Because the devices available at each agent may be different, the
available devices for passthrough are now config
On 6/11/13 09:35 PM, "Edison Su" wrote:
> If changing the VM's XML is enough, then how about using libvirt's hook system:
> http://www.libvirt.org/hooks.html
> I think the issue is how to let CloudStack create only one VM per KVM
> host, or a few
> VMs per host (based on the available PCI devices on
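Edison's hook-system suggestion could look roughly like the following `/etc/libvirt/hooks/qemu` script. The hook calling convention (guest name and operation as arguments, domain XML on stdin, non-zero exit aborting the operation) is libvirt's; the device-inventory files and their format are invented purely for this sketch:

```python
#!/usr/bin/env python3
"""Sketch: a libvirt qemu hook that caps guests per host at the number of
free passthrough-capable PCI devices. The inventory files below are
hypothetical, not part of libvirt or CloudStack."""
import sys

def may_start(running_guests: int, pci_devices: int) -> bool:
    # Refuse to prepare another guest once every device is taken.
    return running_guests < pci_devices

if __name__ == "__main__" and len(sys.argv) >= 3:
    guest, op = sys.argv[1], sys.argv[2]
    if op == "prepare":
        # Hypothetical inventory: one line per passthrough-capable device
        # and one line per device currently assigned to a guest.
        devices = open("/etc/libvirt/passthrough-devices").read().splitlines()
        in_use = open("/var/run/passthrough-in-use").read().splitlines()
        if not may_start(len(in_use), len(devices)):
            sys.exit(1)  # non-zero exit makes libvirt abort the VM start
```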
On Tue, Jun 11, 2013 at 8:26 PM, Vijayendra Bhamidipati
wrote:
> -Original Message-
> From: David Nalley [mailto:da...@gnsa.us]
> Sent: Tuesday, June 11, 2013 5:08 AM
> To: dev@cloudstack.apache.org
> Cc: Ryousei Takano
> Subject: Re: PCI-Passthrough with CloudStack
>
> [Vijay>] Any speci
Hi,
I am implementing PCI Passthrough in CloudStack for use with
high-performance networking (10 Gigabit Ethernet/InfiniBand).
The current design is to attach a PCI ID (from lspci) to a compute
offering. (Not a network offering, since from CloudStack's point of view,
the pass through devi
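Picking the PCI ID mentioned above out of `lspci` output could be sketched as follows; the sample lines and the `pci_ids` helper are illustrative (note `lspci -D` prints the full domain:bus:slot.function address):

```python
# Sample `lspci -D` output (illustrative).
SAMPLE = """\
0000:00:1f.2 SATA controller: Intel Corporation 6 Series Chipset
0000:04:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE]
0000:04:00.1 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE]
"""

def pci_ids(lspci_output: str, keyword: str) -> list[str]:
    """Return the PCI addresses of lines whose description matches keyword."""
    ids = []
    for line in lspci_output.splitlines():
        addr, _, desc = line.partition(" ")
        if keyword.lower() in desc.lower():
            ids.append(addr)
    return ids

print(pci_ids(SAMPLE, "Mellanox"))  # both ConnectX functions
```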