2. The inconsistency in 'xl pci-assignable-list' state tracking
3. The GFN mapping failure on guest setup
Any suggestions for the next step?
Thanks,
G.R.
Sorry about the line-wrapping issue below... I have no idea
what happened with the formatting.
On Sun, Jul 3, 2022 at 1:43 AM G.R. wrote:
>
> Hi everybody,
>
> I ran into problems passing through an SN570 NVMe SSD to an HVM guest.
> So far I have no idea if the problem is with this specific SSD or with the SW stack.
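A minimal xl guest-config fragment for this kind of passthrough, for illustration only (the BDF is the one from this thread; everything else is assumed rather than taken from the actual config):

    type = "hvm"
    pci  = [ "05:00.0" ]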
On Mon, Jul 4, 2022 at 5:53 PM Roger Pau Monné wrote:
>
> On Sun, Jul 03, 2022 at 01:43:11AM +0800, G.R. wrote:
> > Hi everybody,
> >
> > I ran into problems passing through an SN570 NVMe SSD to an HVM guest.
> > So far I have no idea if the problem is with this specific SSD or with the SW stack.
On Mon, Jul 4, 2022 at 7:34 PM G.R. wrote:
>
> On Mon, Jul 4, 2022 at 5:53 PM Roger Pau Monné wrote:
> >
> > Would also be helpful if you could get the RMRR regions from that
> > box. Booting with `iommu=verbose` on the Xen command line should print
> > those.
>
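For anyone reproducing this: on a GRUB-based Linux dom0 the option is usually added through the Xen command-line variable, e.g. (the variable name assumes a Debian-style GRUB setup; adjust for other distros):

    # in /etc/default/grub, then run update-grub and reboot
    GRUB_CMDLINE_XEN_DEFAULT="iommu=verbose"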
On Mon, Jul 4, 2022 at 9:09 PM Roger Pau Monné wrote:
> >
> > 05:00.0 Non-Volatile memory controller: Sandisk Corp Device 501a (prog-if 02 [NVM Express])
> > Subsystem: Sandisk Corp Device 501a
> > Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping-
On Mon, Jul 4, 2022 at 10:51 PM G.R. wrote:
>
> On Mon, Jul 4, 2022 at 9:09 PM Roger Pau Monné wrote:
> > >
> > > 05:00.0 Non-Volatile memory controller: Sandisk Corp Device 501a (prog-if 02 [NVM Express])
> > > Subsystem: Sandisk Corp
On Mon, Jul 4, 2022 at 11:15 PM G.R. wrote:
>
> On Mon, Jul 4, 2022 at 10:51 PM G.R. wrote:
> >
> > On Mon, Jul 4, 2022 at 9:09 PM Roger Pau Monné wrote:
> > > Can you paste the lspci -vvv output for any other device you are also
> > > passing through to
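As an aside, lspci output can be narrowed to one device with the -s selector, e.g. for the SN570 discussed here (substitute the BDF of whichever other device is passed through):

    lspci -vvv -s 05:00.0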
Is there a fixed version that I
can pick up if I decide to upgrade?
Which version would it be?
On the other hand, according to the other experiment I did, this may
not be the only issue related to this device.
Still not sure if the device or the SW stack is faulty this time...
Thanks,
G.R.
On Tue, Jul 5, 2022 at 12:21 AM Roger Pau Monné wrote:
>
> On Mon, Jul 04, 2022 at 11:37:13PM +0800, G.R. wrote:
> > On Mon, Jul 4, 2022 at 11:15 PM G.R.
> > wrote:
> > >
> > > On Mon, Jul 4, 2022 at 10:51 PM G.R.
> > > wrote:
> > >
On Tue, Jul 5, 2022 at 5:04 PM Jan Beulich wrote:
>
> On 04.07.2022 18:31, G.R. wrote:
> > On Tue, Jul 5, 2022 at 12:21 AM Roger Pau Monné
> > wrote:
> >>> I retried with the following:
> >>> pci=['05:00.0,permissive=1,msitranslate=1']
>
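For context, the usual assignment flow around such a config line looks roughly like the following; guest.cfg is a hypothetical file name:

    xl pci-assignable-add 05:00.0
    xl pci-assignable-list        # should now show 0000:05:00.0
    xl create guest.cfg           # guest.cfg carries the pci=[...] line above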
On Tue, Jul 5, 2022 at 7:59 PM Jan Beulich wrote:
> Nothing useful in there. Yet independent of that I guess we need to
> separate the issues you're seeing. Otherwise it'll be impossible to
> know what piece of data belongs where.
Yep, I think I'm seeing several different issues here:
1. The FLR r
On Wed, Jul 6, 2022 at 2:33 PM Jan Beulich wrote:
>
> On 06.07.2022 08:25, G.R. wrote:
> > On Tue, Jul 5, 2022 at 7:59 PM Jan Beulich wrote:
> >> Nothing useful in there. Yet independent of that I guess we need to
> >> separate the issues you're seeing. Otherwise it'll be impossible to
> >> know what piece of data belongs where.
On Thu, Jul 7, 2022 at 11:24 PM G.R. wrote:
>
> On Wed, Jul 6, 2022 at 2:33 PM Jan Beulich wrote:
> >
> > > Should I expect a debug build of XEN hypervisor to give better
> > > diagnose messages, without the debug patch that Roger mentioned?
> >
> > Well
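For anyone retracing this: a debug hypervisor is normally obtained by enabling CONFIG_DEBUG in the hypervisor's Kconfig, along these lines (a sketch of the usual Xen build flow, not verified against these exact versions):

    make -C xen menuconfig    # enable CONFIG_DEBUG under "Debugging Options"
    make -C xen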
On Fri, Jul 8, 2022 at 12:38 AM Jan Beulich wrote:
>
> On 07.07.2022 17:24, G.R. wrote:
> > On Wed, Jul 6, 2022 at 2:33 PM Jan Beulich wrote:
> >>
> >> On 06.07.2022 08:25, G.R. wrote:
> >>> On Tue, Jul 5, 2022 at 7:59 PM Jan Beulich wrote:
> >
On Fri, Jul 8, 2022 at 10:28 AM G.R. wrote:
>
> On Fri, Jul 8, 2022 at 12:38 AM Jan Beulich wrote:
> > > But 'xl pci-assignable-remove' leads to an xl segmentation fault...
> > >> [ 655.041442] xl[975]: segfault at 0 ip 7f2cccdaf71f sp
> >
On Fri, Jul 8, 2022 at 10:28 AM G.R. wrote:
>
> On Fri, Jul 8, 2022 at 12:38 AM Jan Beulich wrote:
> >
> > > I built both 4.14.3 debug version and 4.16.1 release version for
> > > testing purposes.
> > > Unfortunately they gave me absolutely zero information
e checking `n`. Otherwise, `n` might be non-zero
> > with `bdfs` NULL which would lead to a segv.
> >
> > Reported-by: "G.R."
> > Fixes: 57bff091f4 ("libxl: add 'name' field to 'libxl_device_pci' in the IDL...")
> > S
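My reading of the fix, as a hedged C sketch; only `bdfs` and `n` are names from the thread, the struct and function names are illustrative:

    #include <stddef.h>

    struct bdf { unsigned int bus, dev, func; };

    /* The reported crash pattern: `n` may be non-zero while `bdfs` is
     * NULL, so the pointer must be validated before `n` is trusted;
     * touching bdfs[0] in that state is the segfault seen with
     * 'xl pci-assignable-remove'. */
    static void walk_assignable(const struct bdf *bdfs, size_t n)
    {
        if (bdfs == NULL)
            return;                /* guard first: n is meaningless here */
        for (size_t i = 0; i < n; i++)
            (void)bdfs[i];         /* per-device handling would go here */
    }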
upgraded to FreeNAS 13 only to rediscover this issue
> > > once again :-(
> > >
> > > Any chance the patch can apply on FreeBSD 13.1-RELEASE-p1 kernel?
> > >
> > > Thanks,
> > > G.R.
> > >
> >
> > Hi,
> >
> > I want
Hi all,
I'm trying to boot a domU through the UEFI path but so far have made
very little progress.
I'm currently on a self-built Xen 4.16.1 hypervisor built without the
--enable-ovmf configure option. I attempted a dirty build to generate
the ovmf.bin firmware. The build succeeds, but cherry-picking the
fir
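For anyone else attempting the UEFI path: as far as I understand it, the supported route is to let the Xen build produce OVMF itself and point the HVM guest at it; a sketch of the usual flow, not verified against 4.16.1 specifically:

    ./configure --enable-ovmf
    make -C tools
    # then in the HVM guest config:
    bios = "ovmf"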
Any suggestions will be welcome while I work on the planned experiments.
Thanks,
G.R.
On Sun, Dec 19, 2021 at 2:35 AM G.R. wrote:
>
> Hi all,
>
> I ran into the following error report in the DOM0 kernel after a recent
> upgrade:
> [ 501.840816] vif vif-1-0 vif1.0: Cross page boundary, txp->offset:
> 2872, size: 1460
> [ 501.840828] vif vif-1-0 vif
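A quick sanity check on those numbers: 2872 + 1460 = 4332, which exceeds the 4096-byte page size, so the fragment really does straddle a page boundary, exactly what the message complains about.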
This is probably irrelevant.
For #3, I'm not sure if the content in the extent matters.
So far I have been testing the same extent, which is formatted as an NTFS disk.
>
> Thanks, Roger.
> >
> > Thanks,
> > G.R.
> > I omitted all operational details with the assumption that you are familiar
> > with TrueNAS and iSCSI setup.
>
> Not really. Ideally I would like a way to reproduce that can be done
> using iperf, nc or similar simple command line tool, without requiring
> to setup iSCSI.
I think it would be t
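If a plain bulk TCP stream is enough to trigger it, an nc pair like the one below could stand in for iSCSI; whether it reproduces the exact mbuf layout is unverified:

    # on the receiving side
    nc -l 9999 > /dev/null
    # on the sending side, push ~1 GiB through the vif
    dd if=/dev/zero bs=1M count=1024 | nc <receiver-ip> 9999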
On Wed, Dec 22, 2021 at 3:13 AM Roger Pau Monné wrote:
> Could you build a debug kernel with the following patch applied and
> give me the trace when it explodes?
Please find the trace and the kernel CL below.
Note: the domU gets stuck in a boot loop with this assertion, as the
situation will com
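For completeness, the FreeBSD debug-kernel flow implied here would be roughly the following; netfront-debug.patch is a hypothetical name for Roger's patch:

    cd /usr/src
    patch < netfront-debug.patch
    make -j4 buildkernel KERNCONF=GENERIC
    make installkernel KERNCONF=GENERIC && shutdown -r now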
I applied the patch and it worked like a charm!
Thank you so much for your quick help!
Wish you a wonderful holiday!
Thanks,
G.R.
> > Thanks. I've raised this on freebsd-net for advice [0]. IMO netfront
> > shouldn't receive an mbuf that crosses a page boundary, but if that's
> > indeed a legit mbuf I will figure out the best way to handle it.
> >
> > I have a clumsy patch (below) that might solve this, if you want to
> > giv
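To make the invariant concrete, here is a hedged FreeBSD-flavoured sketch of the check netfront would need; this is not Roger's actual patch:

    #include <sys/param.h>
    #include <sys/mbuf.h>

    /* Return non-zero when an mbuf's data region spans a 4K page
     * boundary, the case netfront cannot cover with a single grant. */
    static int
    mbuf_crosses_page(const struct mbuf *m)
    {
        vm_offset_t off = (vm_offset_t)mtod(m, char *) & PAGE_MASK;

        return off + m->m_len > PAGE_SIZE;
    }

A crossing mbuf would then have to be split, or copied into a page-aligned buffer, before being granted to netback.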
>
> I think this is hitting a KASSERT, could you paste the text printed as
> part of the panic (not just the backtrace)?
>
> Sorry this is taking a bit of time to solve.
>
> Thanks!
>
Sorry that I didn't make it clear in the first place.
It is the same cross-page-boundary assertion.
Also sorry about the
On Thu, Dec 30, 2021 at 3:07 AM Roger Pau Monné wrote:
>
> On Wed, Dec 29, 2021 at 11:27:50AM +0100, Roger Pau Monné wrote:
> > On Wed, Dec 29, 2021 at 05:13:00PM +0800, G.R. wrote:
> > > >
> > > > I think this is hitting a KASSERT, could you paste the text
On Fri, Dec 31, 2021 at 2:52 AM Roger Pau Monné wrote:
>
> On Thu, Dec 30, 2021 at 11:12:57PM +0800, G.R. wrote:
> > On Thu, Dec 30, 2021 at 3:07 AM Roger Pau Monné
> > wrote:
> > >
> > > On Wed, Dec 29, 2021 at 11:27:50AM +0100, Roger Pau Monné wrote:
>
> > > > But it seems like this patch is not stable enough yet and has its own
> > > > issue -- memory is not properly released?
> > >
> > > I know. I've been working on improving it this morning and I'm
> > > attaching an updated version below.
> > >
> > Good news.
> > With this new patch, the NAS do
On Wed, Jan 5, 2022 at 10:33 PM Roger Pau Monné wrote:
>
> On Wed, Jan 05, 2022 at 12:05:39AM +0800, G.R. wrote:
> > > > > > But it seems like this patch is not stable enough yet and has its own
> > > > > > issue -- memory is not properly released?
>
On Mon, Jan 10, 2022 at 10:54 PM Roger Pau Monné wrote:
>
> On Sat, Jan 08, 2022 at 01:14:26AM +0800, G.R. wrote:
> > On Wed, Jan 5, 2022 at 10:33 PM Roger Pau Monné
> > wrote:
> > >
> > > On Wed, Jan 05, 2022 at 12:05:39AM +0800, G.R. wrote:
> >
On Tue, Jan 9, 2024 at 11:28 PM Roger Pau Monné wrote:
>
> On Tue, Jan 09, 2024 at 12:13:04PM +0100, Niklas Hallqvist wrote:
> > > On 14 Dec 2022, at 07:16, G.R. wrote:
...
> > > Hi Roger,
> > > Just another query about the latest status. It'll be great if you
or the upstream fix? I haven't heard any news since...
The reason I came back to this thread is that I totally forgot about
this issue and upgraded to FreeNAS 13, only to rediscover it
once again :-(
Any chance the patch can apply on FreeBSD 13.1-RELEASE-p1 kernel?
Thanks,
G.R.