On Sat, 2025-08-30 at 19:48 +0200, Borislav Petkov wrote:
>
>
> With newer MESA (version 9.0.2 in Debian), the message
Which Mesa version do you use exactly?
Are you sure that version number is correct?
Mesa 9.0.2 was released on January 22nd, 2013, more than 12 years ago.
Timur
On Tue, 2023-07-25 at 19:00 +0200, Michel Dänzer wrote:
> On 7/25/23 17:05, Marek Olšák wrote:
> > On Tue, Jul 25, 2023 at 4:03 AM Michel Dänzer
> > wrote:
> > > On 7/25/23 04:55, André Almeida wrote:
> > > > Hi everyone,
> > > >
> > > > It's not clear what we should do about non-robust OpenGL ap
Hi Felix,
On Wed, 2023-05-03 at 11:08 -0400, Felix Kuehling wrote:
> That's the worst-case scenario where you're debugging HW or FW issues.
> Those should be pretty rare post-bringup. But are there hangs caused by
> user mode driver or application bugs that are easier to debug and
> probabl
Hi,
On Tue, 2023-05-02 at 13:14 +0200, Christian König wrote:
> >
> > Christian König wrote (on Tue, 2 May 2023, 9:59):
> >
> > > On 02.05.23 at 03:26, André Almeida wrote:
> > > > On 01/05/2023 16:24, Alex Deucher wrote:
> > > >> On Mon, May 1, 2023 at 2:58 PM André Almeid
On Tue, 2023-05-02 at 09:45 -0400, Alex Deucher wrote:
> On Tue, May 2, 2023 at 9:35 AM Timur Kristóf
> wrote:
> >
> > Hi,
> >
> > On Tue, 2023-05-02 at 13:14 +0200, Christian König wrote:
> > > >
> > > > Christian König wrote
Hi Christian,
Christian König wrote (on Tue, 2 May 2023, 9:59):
> On 02.05.23 at 03:26, André Almeida wrote:
> > On 01/05/2023 16:24, Alex Deucher wrote:
> >> On Mon, May 1, 2023 at 2:58 PM André Almeida
> >> wrote:
> >>>
> >>> I know that devcoredump is also used for this kind of
On Sat, 2020-02-29 at 14:46 -0500, Nicolas Dufresne wrote:
> >
> > 1. I think we should completely disable running the CI on MRs which
> > are marked WIP. Speaking from personal experience, I usually make a
> > lot of changes to my MRs before they are merged, so it is a waste of CI
> > res
On Fri, 2020-02-28 at 10:43 +0000, Daniel Stone wrote:
> On Fri, 28 Feb 2020 at 10:06, Erik Faye-Lund
> wrote:
> > On Fri, 2020-02-28 at 11:40 +0200, Lionel Landwerlin wrote:
> > > Yeah, changes on vulkan drivers or backend compilers should be
> > > fairly sandboxed.
> > >
> > > We also hav
> >
> > 1. Why is the GTT->VRAM copy so much slower than the VRAM->GTT copy?
> >
> > 2. Why is the bus limited to 24 Gbit/sec? I would expect the
> > Thunderbolt port to give me at least 32 Gbit/sec for PCIe traffic.
>
> That's unrealistic I'm afraid. As I said on IRC, from the GPU POV
> th
On Fri, 2019-07-05 at 09:36 -0400, Alex Deucher wrote:
> On Thu, Jul 4, 2019 at 6:55 AM Michel Dänzer
> wrote:
> > On 2019-07-03 1:04 p.m., Timur Kristóf wrote:
> > > > > There may be other factors, yes. I can't offer a good
> > > > > explanation
> >
> > I took a look at amdgpu_device_get_pcie_info() and found that it uses
> > pcie_bandwidth_available to determine the capabilities of the PCIe
> > port. However, pcie_bandwidth_available gives you only the current
> > bandwidth as set by the PCIe link status register, not the maximum
>
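The distinction drawn above, between the negotiated link speed (LnkSta) and the maximum the device is capable of (LnkCap), can be illustrated with a small parser for lspci -vv output. This is only a sketch over an assumed sample string, not the kernel's actual pcie_bandwidth_available() logic, which reads the config-space registers directly:

```python
import re

def parse_link(lspci_text):
    """Extract (capable, negotiated) PCIe link speed/width from lspci -vv text.

    Returns two (speed_string, width) tuples. Sketch only: real tools read
    the LNKCAP/LNKSTA config registers rather than scraping lspci output.
    """
    cap = re.search(r"LnkCap:.*?Speed (\S+?),.*?Width x(\d+)", lspci_text)
    sta = re.search(r"LnkSta:.*?Speed (\S+?),.*?Width x(\d+)", lspci_text)
    return (
        (cap.group(1), int(cap.group(2))),  # what the port can do
        (sta.group(1), int(sta.group(2))),  # what was actually negotiated
    )

# Hypothetical sample resembling the situation in this thread:
sample = """
LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1
LnkSta: Speed 2.5GT/s, Width x4
"""
capable, current = parse_link(sample)
# capable is 8 GT/s x4, but the link actually runs at 2.5 GT/s x4.
```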
> > Thanks Marek, I didn't know about that option.
> > Tried it, here is the output: https://pastebin.com/raw/9SAAbbAA
> >
> > I'm not quite sure how to interpret the numbers, they are inconsistent
> > with the results from both pcie_bw and amdgpu.benchmark, for example
> > GTT->VRAM at a
On Friday, 5 July 2019, Marek Olšák wrote:
> On Fri, Jul 5, 2019 at 5:27 AM Timur Kristóf
> wrote:
>
> > On Wed, 2019-07-03 at 14:44 -0400, Marek Olšák wrote:
> > > You can run:
> > > AMD_DEBUG=testdmaperf glxgears
> > >
> > > It tests transf
> > Can you point me to the place where amdgpu decides the PCIe link
> > speed?
> > I'd like to try to tweak it a little bit to see if that helps at
> > all.
>
> I'm not sure offhand, Alex or anyone?
Thus far, I started by looking at how the pp_dpm_pcie sysfs interface
works, and found smu7_hwmg
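For reference, pp_dpm_pcie is a sysfs file (typically /sys/class/drm/card0/device/pp_dpm_pcie) that lists the PCIe DPM levels and marks the active one with a trailing asterisk. A hedged sketch of parsing it, assuming the usual amdgpu output format (which can vary by ASIC):

```python
def active_pcie_level(pp_dpm_pcie_text):
    """Return (index, description) of the active DPM PCIe level.

    Assumes the common amdgpu sysfs format, e.g. "1: 8.0GT/s, x16 *",
    where '*' marks the active level. Sketch only.
    """
    for line in pp_dpm_pcie_text.strip().splitlines():
        if line.rstrip().endswith("*"):
            idx, desc = line.split(":", 1)
            return int(idx), desc.strip().rstrip("*").strip()
    return None  # no level marked active

# Hypothetical sample; on real hardware this text would come from
# /sys/class/drm/card0/device/pp_dpm_pcie:
sample = "0: 2.5GT/s, x8\n1: 8.0GT/s, x16 *\n"
```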
On Wed, 2019-07-03 at 14:44 -0400, Marek Olšák wrote:
> You can run:
> AMD_DEBUG=testdmaperf glxgears
>
> It tests transfer sizes of up to 128 MB, and it tests ~60 slightly
> different methods of transferring data.
>
> Marek
Thanks Marek, I didn't know about that option.
Tried it, here is the ou
> > Okay, so I booted my system with amdgpu.benchmark=3
> > You can find the full dmesg log here: https://pastebin.com/zN9FYGw4
> >
> > The result is between 1-5 Gbit/sec depending on the transfer size
> > (the higher the better), which corresponds to neither the 8 Gbit/sec
> > that the k
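When comparing those figures, note that amdgpu.benchmark reports throughput in MB/s in dmesg (an assumption about the unit; it may differ by kernel version), while the bandwidth limits in this thread are quoted in Gbit/s. A trivial conversion helper, taking MB as 10^6 bytes:

```python
def mbps_to_gbit(mb_per_s):
    """Convert a throughput in MB/s (MB = 10**6 bytes) to Gbit/s."""
    return mb_per_s * 8 / 1000

# e.g. a reported 625 MB/s is 5 Gbit/s, at the top of the 1-5 Gbit/s
# range mentioned above; 1000 MB/s would be 8 Gbit/s.
```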
On Tue, 2019-07-02 at 10:09 +0200, Michel Dänzer wrote:
> On 2019-07-01 6:01 p.m., Timur Kristóf wrote:
> > On Mon, 2019-07-01 at 16:54 +0200, Michel Dänzer wrote:
> > > On 2019-06-28 2:21 p.m., Timur Kristóf wrote:
> > > > I haven't found a good way to measure
> > > > Like I said, the device really is limited to 2.5 GT/s even though
> > > > it should be able to do 8 GT/s.
> > >
> > > There is a Thunderbolt link between the host router (your host
> > > system) and the eGPU box. That link is not limited to 2.5 GT/s so
> > > even if the sl
> >
> > That's unfortunate, I would have expected there to be some sort of
> > PCIe speed test utility.
> >
> > Now that I gave it a try, I can measure ~20 Gbit/sec when I run Gnome
> > Wayland on this system (which forces the eGPU to send the framebuffer
> > back and forth all the ti
On Mon, 2019-07-01 at 16:54 +0200, Michel Dänzer wrote:
> On 2019-06-28 2:21 p.m., Timur Kristóf wrote:
> > I haven't found a good way to measure the maximum PCIe throughput
> > between the CPU and GPU,
>
> amdgpu.benchmark=3
>
> on the kernel command line wil
Hi guys,
I use an AMD RX 570 in a Thunderbolt 3 external GPU box.
dmesg gives me the following message:
pci 0000:3a:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4
link at 0000:04:04.0 (capable of 31.504 Gb/s with 8 GT/s x4 link)
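The two figures in that dmesg message follow from the lane count and the PCIe line-code overhead: Gen1/Gen2 (2.5/5 GT/s) use 8b/10b encoding, Gen3 and up (8 GT/s+) use 128b/130b. A sketch of the arithmetic (not the kernel's actual code, which uses its own per-speed constants and so prints 31.504 rather than the raw 31.508):

```python
def effective_gbps(gt_per_s, lanes):
    """Effective PCIe bandwidth in Gbit/s after line-code overhead.

    8b/10b (20% overhead) for 2.5/5 GT/s, 128b/130b for 8 GT/s and up.
    """
    overhead = 8 / 10 if gt_per_s <= 5 else 128 / 130
    return gt_per_s * lanes * overhead

# 2.5 GT/s x4 -> 8.0 Gbit/s, matching "8.000 Gb/s available"
# 8 GT/s x4   -> ~31.5 Gbit/s, matching "capable of 31.504 Gb/s"
```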
Here is a tree view of the devices as well as th
Hi Mika,
Thanks for your quick reply.
> > 1. Why are there four bridge devices? 04:00.0, 04:01.0 and 04:02.0
> > look superfluous to me and nothing is connected to them. It actually
> > gives me the feeling that the TB3 driver creates 4 devices with
> > 2.5 GT/s each, instead of one devic
On Fri, 2019-06-28 at 17:14 +0300, Mika Westerberg wrote:
> On Fri, Jun 28, 2019 at 03:33:56PM +0200, Timur Kristóf wrote:
> > I have two more questions:
> >
> > 1. What is the best way to test that the virtual link is indeed
> > capable of 40 Gbit/sec? So f
> Well that's the extension PCIe downstream port. The other one is
> 04:01.0.
>
> Typically 04:00.0 and 04:00.2 are used to connect TBT (05:00.0) and
> xHCI (39:00.0) but in your case you don't seem to have USB 3 devices
> connected to that so it is not present. If you plug in USB-C device
> (n
> > Sure, though in this case 3 of those downstream ports are not
> > exposed by the hardware, so it's a bit surprising to see them there.
>
> They lead to other peripherals on the TBT host router such as the TBT
> controller and xHCI. Also there are two downstream ports for extension
> fro
On Thu, 2019-03-14 at 19:40 +0200, Mika Westerberg wrote:
> On Thu, Mar 14, 2019 at 06:26:00PM +0100, Timur Kristóf wrote:
> > I know atomics is a PCIe feature, but in this case the PCIe goes
> > through TB3, so I would assume it has something to do with it.
>
> Does it
On Thu, 2019-03-14 at 20:17 +0200, Mika Westerberg wrote:
> > > > Here is the output of 'lspci -vv':
> > > > https://pastebin.com/Qt5RUFVc
> > >
> > > The root port (1c.4) says this:
> > >
> > > DevCap2: Completion Timeout: Range ABC, TimeoutDis+, LTR+,
> > > OBFF Not Supported, ARIFwd
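The AtomicOp capabilities behind the "PCI rejects atomics" message discussed in this thread live in that same Device Capabilities 2 register. A hedged sketch of decoding the relevant bits, using the bit positions from the PCIe spec (the same masks appear in Linux's pci_regs.h):

```python
# DevCap2 AtomicOp bits per the PCIe Base Specification:
ATOMIC_ROUTE  = 0x0040  # bit 6: AtomicOp routing supported (ports)
ATOMIC_COMP32 = 0x0080  # bit 7: 32-bit AtomicOp completer supported
ATOMIC_COMP64 = 0x0100  # bit 8: 64-bit AtomicOp completer supported
ATOMIC_CAS128 = 0x0200  # bit 9: 128-bit CAS completer supported

def atomic_caps(devcap2):
    """Decode AtomicOp capabilities from a raw DevCap2 register value."""
    return {
        "routing": bool(devcap2 & ATOMIC_ROUTE),
        "comp32": bool(devcap2 & ATOMIC_COMP32),
        "comp64": bool(devcap2 & ATOMIC_COMP64),
        "cas128": bool(devcap2 & ATOMIC_CAS128),
    }
```

Every bridge on the path between the CPU and the GPU needs the routing bit set for atomics to work, which is why a single Thunderbolt bridge lacking it makes amdkfd skip the device.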
On Thu, 2019-03-14 at 12:30 +0200, Mika Westerberg wrote:
> On Wed, Mar 13, 2019 at 07:09:26PM +0100, Timur Kristóf wrote:
> > Hi,
>
> Hi,
>
> > I was sent here by Greg KH from the Linux USB mailing list, I hope
> > this
> > is the right place to ask.
>
Hi,
I was sent here by Greg KH from the Linux USB mailing list, I hope this
is the right place to ask.
PCI-E atomics don't work for me with Thunderbolt 3.
I see the following message from my Thunderbolt 3 eGPU in dmesg:
kfd: skipped device 1002:67df, PCI rejects atomics
Hardware is a Dell XPS 1