[PATCH v4 4/9] slab: determine barn status racily outside of lock

2025-04-25 Thread Vlastimil Babka
The possibility of many barn operations is determined by the current number of full or empty sheaves. Taking the barn->lock just to find out that e.g. there are no empty sheaves results in unnecessary overhead and lock contention. Thus perform these checks outside of the lock with a data_race() ann
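The pattern the patch describes can be sketched in userspace C. This is a minimal illustration only, not the actual slab code: in the kernel the unlocked read is wrapped in `data_race()` to tell KCSAN the race is intentional; plain C has no equivalent, so a volatile read stands in. The names (`struct barn`, `nr_empty`) follow the patch, but the structure is invented for illustration.

```c
#include <pthread.h>
#include <stdbool.h>

struct barn {
	pthread_mutex_t lock;
	int nr_empty;	/* number of empty sheaves */
};

/* Try to take an empty sheaf; return false without touching the lock
 * when the racy check says there is nothing to take. */
static bool barn_get_empty(struct barn *b)
{
	/* racy check outside the lock; kernel code would write
	 * data_race(b->nr_empty) here */
	if (*(volatile int *)&b->nr_empty == 0)
		return false;

	pthread_mutex_lock(&b->lock);
	bool ok = b->nr_empty > 0;	/* re-check under the lock */
	if (ok)
		b->nr_empty--;
	pthread_mutex_unlock(&b->lock);
	return ok;
}
```

The unlocked check can be stale, so the count is always re-checked under the lock; the optimization only saves the lock acquisition in the common empty case.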

Re: [PATCH v8 7/8] vhost: Add check for inherit_owner status

2025-04-06 Thread Cindy Lu
On Tue, Apr 1, 2025 at 9:59 PM Stefano Garzarella wrote: > > On Fri, Mar 28, 2025 at 06:02:51PM +0800, Cindy Lu wrote: > >The VHOST_NEW_WORKER requires the inherit_owner > >setting to be true. So we need to add a check for this. > > > >Signed-off-by: Cindy Lu > >--- > > drivers/vhost/vhost.c | 7

Re: [PATCH v8 7/8] vhost: Add check for inherit_owner status

2025-04-01 Thread Stefano Garzarella
On Fri, Mar 28, 2025 at 06:02:51PM +0800, Cindy Lu wrote: The VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to add a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/vhost.c | 7 +++ 1 file changed, 7 insertions(+) IMHO we should squash this patch also

[PATCH v8 7/8] vhost: Add check for inherit_owner status

2025-03-28 Thread Cindy Lu
The VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to add a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/vhost.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index ff930c2e5b78..fb0c7fb43f78 1006
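The shape of the check being added can be sketched as an early-return guard. The struct layout, function name, and errno here are simplified stand-ins for drivers/vhost/vhost.c, not the actual kernel code; the real patch's error code may differ.

```c
#include <stdbool.h>
#include <errno.h>

struct vhost_dev {
	bool inherit_owner;	/* worker threads inherit owner's mm */
};

/* VHOST_NEW_WORKER is only valid when inherit_owner is set, so reject
 * the request up front instead of creating a worker that cannot work. */
static int vhost_new_worker_check(struct vhost_dev *dev)
{
	if (!dev->inherit_owner)
		return -EFAULT;	/* illustrative errno, not from the patch */
	return 0;
}
```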

[PATCH RFC v3 5/8] slab: determine barn status racily outside of lock

2025-03-17 Thread Vlastimil Babka
The possibility of many barn operations is determined by the current number of full or empty sheaves. Taking the barn->lock just to find out that e.g. there are no empty sheaves results in unnecessary overhead and lock contention. Thus perform these checks outside of the lock with a data_race() ann

Re: [PATCH RFC v2 07/10] slab: determine barn status racily outside of lock

2025-03-12 Thread Vlastimil Babka
On 2/25/25 09:54, Harry Yoo wrote: > On Fri, Feb 14, 2025 at 05:27:43PM +0100, Vlastimil Babka wrote: >> The possibility of many barn operations is determined by the current >> number of full or empty sheaves. Taking the barn->lock just to find out >> that e.g. there are no empty sheaves results in

[PATCH v7 7/8] vhost: Add check for inherit_owner status

2025-03-02 Thread Cindy Lu
The VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to add a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/vhost.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index ff930c2e5b78..fb0c7fb43f78 1006

Re: [PATCH RFC v2 07/10] slab: determine barn status racily outside of lock

2025-02-25 Thread Harry Yoo
On Fri, Feb 14, 2025 at 05:27:43PM +0100, Vlastimil Babka wrote: > The possibility of many barn operations is determined by the current > number of full or empty sheaves. Taking the barn->lock just to find out > that e.g. there are no empty sheaves results in unnecessary overhead and > lock content

Re: [PATCH v6 6/6] vhost: Add check for inherit_owner status

2025-02-23 Thread Jason Wang
On Sun, Feb 23, 2025 at 11:41 PM Cindy Lu wrote: > > The VHOST_NEW_WORKER requires the inherit_owner > setting to be true. So we need to add a check for this. > > Signed-off-by: Cindy Lu > --- Acked-by: Jason Wang Thanks

Re: [PATCH 1/4] arm64: dts: qcom: sdm632-fairphone-fp3: Move status properties last

2025-02-23 Thread Dmitry Baryshkov
On Sat, Feb 22, 2025 at 02:00:47PM +0100, Luca Weiss wrote: > As is common style nowadays, move the status properties to be the last > property of a node. > > Signed-off-by: Luca Weiss > --- > arch/arm64/boot/dts/qcom/sdm632-fairphone-fp3.dts | 15 +-- > 1 file

[PATCH v6 6/6] vhost: Add check for inherit_owner status

2025-02-23 Thread Cindy Lu
The VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to add a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/vhost.c | 7 +++ 1 file changed, 7 insertions(+) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 45d8f5c5bca9..26da561c6685 1006

Re: [PATCH RFC v2 07/10] slab: determine barn status racily outside of lock

2025-02-22 Thread Suren Baghdasaryan
On Fri, Feb 14, 2025 at 8:27 AM Vlastimil Babka wrote: > > The possibility of many barn operations is determined by the current > number of full or empty sheaves. Taking the barn->lock just to find out > that e.g. there are no empty sheaves results in unnecessary overhead and > lock contention. Th

[PATCH 1/4] arm64: dts: qcom: sdm632-fairphone-fp3: Move status properties last

2025-02-22 Thread Luca Weiss
As is common style nowadays, move the status properties to be the last property of a node. Signed-off-by: Luca Weiss --- arch/arm64/boot/dts/qcom/sdm632-fairphone-fp3.dts | 15 +-- 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/arch/arm64/boot/dts/qcom/sdm632
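The style change amounts to reordering properties within each node. An illustrative fragment (node and property names invented, not the actual sdm632-fairphone-fp3.dts hunk):

```dts
/* Before: status sat among the other properties.
 * After: status is the last property of the node. */
&sdhc_2 {
	vmmc-supply = <&reg_sd_vmmc>;
	pinctrl-names = "default";
	cd-gpios = <&tlmm 67 GPIO_ACTIVE_LOW>;

	status = "okay";
};
```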

[PATCH RFC v2 07/10] slab: determine barn status racily outside of lock

2025-02-14 Thread Vlastimil Babka
The possibility of many barn operations is determined by the current number of full or empty sheaves. Taking the barn->lock just to find out that e.g. there are no empty sheaves results in unnecessary overhead and lock contention. Thus perform these checks outside of the lock with a data_race() ann

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-12 Thread Uwe Kleine-König
On Wed, Feb 12, 2025 at 01:42:15PM +0100, Geert Uytterhoeven wrote: > Hi Vlastimil, > > On Tue, 11 Feb 2025 at 17:01, Vlastimil Babka wrote: > > On 2/3/25 12:13, Vlastimil Babka wrote: > > > The subsystem status is currently reported with --role(stats) by > > >

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-12 Thread Geert Uytterhoeven
Hi Vlastimil, On Tue, 11 Feb 2025 at 17:01, Vlastimil Babka wrote: > On 2/3/25 12:13, Vlastimil Babka wrote: > > The subsystem status is currently reported with --role(stats) by > > adjusting the maintainer role for any status different from Maintained. > > This has two down

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Vlastimil Babka
ed scripts before running them, >> >> > we can delete the unwanted lines, but it's more work... >> >> > Thanks! >> >> >> >> I guess technically your scripts could detect first if --no-substatus is >> >> supported by grepping

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Geert Uytterhoeven
can delete the unwanted lines, but it's more work... > >> > Thanks! > >> > >> I guess technically your scripts could detect first if --no-substatus is > >> supported by grepping --help or testing if passing the option results in an > >> error? But yeah it'

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Vlastimil Babka
ut it's more work... >> > Thanks! >> >> I guess technically your scripts could detect first if --no-substatus is >> supported by grepping --help or testing if passing the option results in an >> error? But yeah it's not ideal, looks like I've hit the

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Vlastimil Babka
On 2/3/25 12:13, Vlastimil Babka wrote: > The subsystem status is currently reported with --role(stats) by > adjusting the maintainer role for any status different from Maintained. > This has two downsides: > > - if a subsystem has only reviewers or mailing lists and no maint

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Geert Uytterhoeven
Hi Uwe, On Tue, 11 Feb 2025 at 16:09, Uwe Kleine-König wrote: > On Tue, Feb 11, 2025 at 11:48:13AM +0100, Geert Uytterhoeven wrote: > > On Tue, 11 Feb 2025 at 11:32, Uwe Kleine-König > > wrote: > > > On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: >

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Vlastimil Babka
On 2/11/25 11:59, Vlastimil Babka wrote: > On 2/11/25 11:48, Geert Uytterhoeven wrote: >> Hi Uwe, >> >> On Tue, 11 Feb 2025 at 11:32, Uwe Kleine-König >> wrote: >>> On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: >>> > The subsyste

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Geert Uytterhoeven
Hi Vlastimil, On Tue, 11 Feb 2025 at 15:58, Vlastimil Babka wrote: > On 2/11/25 11:48, Geert Uytterhoeven wrote: > > On Tue, 11 Feb 2025 at 11:32, Uwe Kleine-König > > wrote: > >> On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: > >> > The su

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Uwe Kleine-König
Hello Geert, On Tue, Feb 11, 2025 at 11:48:13AM +0100, Geert Uytterhoeven wrote: > On Tue, 11 Feb 2025 at 11:32, Uwe Kleine-König > wrote: > > On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: > > > The subsystem status is currently reported with --role(stat

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Vlastimil Babka
On 2/11/25 11:48, Geert Uytterhoeven wrote: > Hi Uwe, > > On Tue, 11 Feb 2025 at 11:32, Uwe Kleine-König > wrote: >> On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: >> > The subsystem status is currently reported with --role(stats) by >> > a

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Geert Uytterhoeven
Hi Uwe, On Tue, 11 Feb 2025 at 11:32, Uwe Kleine-König wrote: > On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: > > The subsystem status is currently reported with --role(stats) by > > adjusting the maintainer role for any status different from Maintained. &

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-11 Thread Uwe Kleine-König
Hello, On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: > The subsystem status is currently reported with --role(stats) by > adjusting the maintainer role for any status different from Maintained. > This has two downsides: > > - if a subsystem has only reviewers o

Re: [PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-04 Thread Lorenzo Stoakes
On Mon, Feb 03, 2025 at 12:13:16PM +0100, Vlastimil Babka wrote: > The subsystem status is currently reported with --role(stats) by > adjusting the maintainer role for any status different from Maintained. > This has two downsides: > > - if a subsystem has only reviewers or maili

Re: [PATCH v2 2/2] get_maintainer: stop reporting subsystem status as maintainer role

2025-02-04 Thread Lorenzo Stoakes
On Mon, Feb 03, 2025 at 12:13:17PM +0100, Vlastimil Babka wrote: > After introducing the --substatus option, we can stop adjusting the > reported maintainer role by the subsystem's status. > > For compatibility with the --git-chief-penguins option, keep the "chief > peng

Re: [PATCH v2 0/2] get_maintainer: report subsystem status separately from maintainer role

2025-02-04 Thread Lorenzo Stoakes
On Mon, Feb 03, 2025 at 12:13:15PM +0100, Vlastimil Babka wrote: > The subsystem status (S: field) can inform a patch submitter if the > subsystem is well maintained or e.g. maintainers are missing. In > get_maintainer, it is currently reported with --role(stats) by adjusting > the mai

[PATCH v2 0/2] get_maintainer: report subsystem status separately from maintainer role

2025-02-03 Thread Vlastimil Babka
The subsystem status (S: field) can inform a patch submitter if the subsystem is well maintained or e.g. maintainers are missing. In get_maintainer, it is currently reported with --role(stats) by adjusting the maintainer role for any status different from Maintained. This has two downsides: - if

[PATCH v2 2/2] get_maintainer: stop reporting subsystem status as maintainer role

2025-02-03 Thread Vlastimil Babka
After introducing the --substatus option, we can stop adjusting the reported maintainer role by the subsystem's status. For compatibility with the --git-chief-penguins option, keep the "chief penguin" role. Signed-off-by: Vlastimil Babka --- scripts/get_mai

[PATCH v2 1/2] get_maintainer: add --substatus for reporting subsystem status

2025-02-03 Thread Vlastimil Babka
The subsystem status is currently reported with --role(stats) by adjusting the maintainer role for any status different from Maintained. This has two downsides: - if a subsystem has only reviewers or mailing lists and no maintainers, the status is not reported (i.e. typically, Orphan subsystems

Re: [PATCH v5 6/6] vhost_scsi: Add check for inherit_owner status

2025-01-22 Thread Mike Christie
On 12/30/24 6:43 AM, Cindy Lu wrote: > The vhost_scsi VHOST_NEW_WORKER requires the inherit_owner > setting to be true. So we need to implement a check for this. > > Signed-off-by: Cindy Lu > --- > drivers/vhost/scsi.c | 8 > 1 file changed, 8 insertions(+) > > diff --git a/drivers/vho

[PATCH 0/2] get_maintainer: report subsystem status separately from maintainer role

2025-01-14 Thread Vlastimil Babka
The script currently uses the subsystem's status (S: field) to change how maintainers are reported. One prominent example is when the status is Supported, the maintainers are reported as "(supporter:SUBSYSTEM)". I have been confused myself in the past seeing "supporter"


[PATCH 2/2] get_maintainer: print subsystem status also for reviewers and lists

2025-01-14 Thread Vlastimil Babka
When reporting maintainers, the subsystem information includes its status (S: entry from MAINTAINERS) whenever it's not the most common one (Maintained). However this status information is missing for reviewers and especially for mailing lists, which may be often the only kind of e-mail ad

[PATCH 1/2] get_maintainer: decouple subsystem status from maintainer role

2025-01-14 Thread Vlastimil Babka
The script currently uses the subsystem's status (S: field in MAINTAINERS) to change how maintainers are reported. One prominent example is when the status is Supported, the maintainers are reported as "(supporter:SUBSYSTEM)". This is misleading, as the Supported status is defined

Re: [RFC PATCH] get_maintainer: decouple subsystem status from maintainer role

2025-01-13 Thread Vlastimil Babka
+0100, Vlastimil Babka wrote: >>> The script currently uses the subsystem's status (S: field) to change how >>> maintainers are reported. One prominent example is when the status is >>> Supported, the maintainers are reported as "(supporter:SUBSYSTEM)". >

Re: [RFC PATCH] get_maintainer: decouple subsystem status from maintainer role

2025-01-06 Thread Thorsten Leemhuis
Lo! From the "better reply late than never" department: Thx for picking this up again, much appreciated! On 18.12.24 06:48, Kees Cook wrote: > On Fri, Dec 13, 2024 at 12:29:22PM +0100, Vlastimil Babka wrote: >> The script currently uses the subsystem's status

Re: [RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2025-01-06 Thread Peter Hilber
On Thu, Jan 02, 2025 at 08:21:11AM -0800, Richard Cochran wrote: > On Thu, Jan 02, 2025 at 05:11:01PM +0100, Peter Hilber wrote: > > For sure. But the aim of this proposal is to have an interoperable time > > synchronization solution for VMs through a Virtio device. So the idea is > > to include me

Re: [RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2025-01-02 Thread Richard Cochran
On Thu, Jan 02, 2025 at 05:11:01PM +0100, Peter Hilber wrote: > Would it be more acceptable to just announce leap seconds, but not > whether to smear? Up until now, leap second announcements were handled in user space, and the kernel played no role. > I do not understand. Is the point that guests

Re: [RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2025-01-02 Thread Peter Hilber
On Wed, Dec 25, 2024 at 04:42:14PM -0800, Richard Cochran wrote: > On Mon, Dec 23, 2024 at 07:13:46PM +0100, Peter Hilber wrote: > > The precise synchronization of the VM guest with its immediate > > environment can also be important; a VM guest may base the decision > > about leap second smea

[PATCH v5 6/6] vhost_scsi: Add check for inherit_owner status

2024-12-30 Thread Cindy Lu
The vhost_scsi VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to implement a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/scsi.c | 8 1 file changed, 8 insertions(+) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 718fa4e0b31e..0d

Re: [RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2024-12-25 Thread Richard Cochran
On Mon, Dec 23, 2024 at 07:13:46PM +0100, Peter Hilber wrote: > The precise synchronization of the VM guest with its immediate > environment can also be important; a VM guest may base the decision > about leap second smearing on its environment. I thought that the whole point of using a VM is t

Re: [RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2024-12-23 Thread Peter Hilber
On Fri, Dec 20, 2024 at 07:19:52AM -0800, Richard Cochran wrote: > On Thu, Dec 19, 2024 at 09:42:03PM +0100, Peter Hilber wrote: > > Ioctl PTP_SYS_OFFSET_PRECISE2 provides cross-timestamping of device time > > and system time. This can be used for virtualization where (virtualization) > > host and

Re: [RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2024-12-20 Thread Richard Cochran
On Thu, Dec 19, 2024 at 09:42:03PM +0100, Peter Hilber wrote: > Ioctl PTP_SYS_OFFSET_PRECISE2 provides cross-timestamping of device time > and system time. This can be used for virtualization where (virtualization) > host and guest refer to the same clocksource. It may be preferred to > indicate UT

[RFC PATCH 1/2] ptp: add PTP_SYS_OFFSET_STAT for xtstamping with status

2024-12-19 Thread Peter Hilber
ioctl PTP_SYS_OFFSET_STAT, which can convey, in addition to the cross-timestamp: - leap second related status, - clock accuracy. Reserve space for more information. Drivers indicate through flags which status information is valid. A driver zeroing struct ptp_stat_extra would only provid
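The flags-validity scheme described in the RFC can be sketched as follows: the driver sets a flag bit for each field that carries real information, so a zeroed struct conveys no status at all. The flag names, field names, and layout below are invented for illustration; the RFC's actual uapi struct is called ptp_stat_extra but its contents are not shown in this snippet.

```c
#include <stdint.h>

#define PTP_STAT_LEAP_VALID	(1u << 0)	/* leap-second status valid */
#define PTP_STAT_ACC_VALID	(1u << 1)	/* clock accuracy valid */

struct ptp_stat_extra_sketch {
	uint32_t flags;		/* which fields below are valid */
	int32_t  leap_status;	/* e.g. leap second insert/delete pending */
	uint32_t accuracy_ns;	/* claimed clock accuracy */
	uint32_t rsv[4];	/* reserved space for future status */
};

/* A driver that zeroes the struct reports no valid status information. */
static int leap_status_valid(const struct ptp_stat_extra_sketch *s)
{
	return (s->flags & PTP_STAT_LEAP_VALID) != 0;
}
```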

Re: [PATCH v3 2/7] KVM: x86: Add emulation status for unhandleable vectoring

2024-12-18 Thread Sean Christopherson
On Tue, Dec 17, 2024, Ivan Orlov wrote: > Add emulation status for unhandleable vectoring, i.e. when KVM can't > emulate an instruction during vectoring. Such a situation can occur > if guest sets the IDT descriptor base to point to MMIO region, and > triggers an exception after t

Re: [RFC PATCH] get_maintainer: decouple subsystem status from maintainer role

2024-12-17 Thread Kees Cook
On Fri, Dec 13, 2024 at 12:29:22PM +0100, Vlastimil Babka wrote: > The script currently uses the subsystem's status (S: field) to change how > maintainers are reported. One prominent example is when the status is > Supported, the maintainers are reported as "(supporter:SUBSYS

[PATCH v3 2/7] KVM: x86: Add emulation status for unhandleable vectoring

2024-12-17 Thread Ivan Orlov
Add emulation status for unhandleable vectoring, i.e. when KVM can't emulate an instruction during vectoring. Such a situation can occur if guest sets the IDT descriptor base to point to MMIO region, and triggers an exception after that. Exit to userspace with event delivery error when KVM

[RFC PATCH] get_maintainer: decouple subsystem status from maintainer role

2024-12-13 Thread Vlastimil Babka
The script currently uses the subsystem's status (S: field) to change how maintainers are reported. One prominent example is when the status is Supported, the maintainers are reported as "(supporter:SUBSYSTEM)". This is misleading, as the Supported status is defined as "Someone

[PATCH v4 8/8] vhost_scsi: Add check for inherit_owner status

2024-12-10 Thread Cindy Lu
The vhost_scsi VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to implement a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/scsi.c | 8 1 file changed, 8 insertions(+) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 718fa4e0b31e..0d

Re: [PATCH v3 8/9] vhost_scsi: Add check for inherit_owner status

2024-11-25 Thread Mike Christie
On 11/5/24 1:25 AM, Cindy Lu wrote: > The vhost_scsi VHOST_NEW_WORKER requires the inherit_owner > setting to be true. So we need to implement a check for this. > > Signed-off-by: Cindy Lu > --- > drivers/vhost/scsi.c | 5 + > 1 file changed, 5 insertions(+) > > diff --git a/drivers/vhost/s

[PATCH v2 2/6] KVM: x86: Add emulation status for vectoring during MMIO

2024-11-11 Thread Ivan Orlov
Add emulation status for vectoring error due to MMIO. Such a situation can occur if guest sets the IDT descriptor base to point to MMIO region, and triggers an exception after that. Exit to userspace with event delivery error when MMIO happens during vectoring. Signed-off-by: Ivan Orlov --- V1

[PATCH v3 8/9] vhost_scsi: Add check for inherit_owner status

2024-11-05 Thread Cindy Lu
The vhost_scsi VHOST_NEW_WORKER requires the inherit_owner setting to be true. So we need to implement a check for this. Signed-off-by: Cindy Lu --- drivers/vhost/scsi.c | 5 + 1 file changed, 5 insertions(+) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index 006ffacf1c56..05290

Re: [PATCH v2 1/2] selftest: rtc: Add to check rtc alarm status for alarm related test

2024-10-23 Thread Shuah Khan
On 10/22/24 10:01, Alexandre Belloni wrote: On 20/10/2024 20:22:13-0700, Joseph Jang wrote: In alarm_wkalm_set and alarm_wkalm_set_minute test, they use different ioctl (RTC_ALM_SET/RTC_WKALM_SET) for alarm feature detection. They will skip testing if RTC_ALM_SET/RTC_WKALM_SET ioctl returns an E

Re: [PATCH v2 1/2] selftest: rtc: Add to check rtc alarm status for alarm related test

2024-10-22 Thread Alexandre Belloni
On 20/10/2024 20:22:13-0700, Joseph Jang wrote: > In alarm_wkalm_set and alarm_wkalm_set_minute test, they use different > ioctl (RTC_ALM_SET/RTC_WKALM_SET) for alarm feature detection. They will > skip testing if RTC_ALM_SET/RTC_WKALM_SET ioctl returns an EINVAL error > code. This design may miss

[PATCH v2 1/2] selftest: rtc: Add to check rtc alarm status for alarm related test

2024-10-20 Thread Joseph Jang
In alarm_wkalm_set and alarm_wkalm_set_minute test, they use different ioctl (RTC_ALM_SET/RTC_WKALM_SET) for alarm feature detection. They will skip testing if RTC_ALM_SET/RTC_WKALM_SET ioctl returns an EINVAL error code. This design may miss detecting real problems when the efi.set_wakeup_time() r

Re: [PATCH 1/2] selftest: rtc: Add to check rtc alarm status for alarm related test

2024-10-18 Thread Shuah Khan
On 10/18/24 02:27, Alexandre Belloni wrote: On 18/10/2024 12:26:44+0800, Joseph Jang wrote: On 2024/6/24 9:43 AM, Joseph Jang wrote: On 2024/6/21 3:36 AM, Alexandre Belloni wrote: On 23/05/2024 18:38:06-0700, Joseph Jang wrote: In alarm_wkalm_set and alarm_wkalm_set_minute test, they use

Re: [PATCH 1/2] selftest: rtc: Add to check rtc alarm status for alarm related test

2024-10-18 Thread Alexandre Belloni
On 18/10/2024 12:26:44+0800, Joseph Jang wrote: > > > On 2024/6/24 9:43 AM, Joseph Jang wrote: > > > > > > On 2024/6/21 3:36 AM, Alexandre Belloni wrote: > > > On 23/05/2024 18:38:06-0700, Joseph Jang wrote: > > > > In alarm_wkalm_set and alarm_wkalm_set_minute test, they use different > > > >

Re: [PATCH 1/2] selftest: rtc: Add to check rtc alarm status for alarm related test

2024-10-17 Thread Joseph Jang
On 2024/6/24 9:43 AM, Joseph Jang wrote: On 2024/6/21 3:36 AM, Alexandre Belloni wrote: On 23/05/2024 18:38:06-0700, Joseph Jang wrote: In alarm_wkalm_set and alarm_wkalm_set_minute test, they use different ioctl (RTC_ALM_SET/RTC_WKALM_SET) for alarm feature detection. They will skip testi

[PATCH AUTOSEL 5.4 18/21] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

[PATCH AUTOSEL 5.10 22/26] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

[PATCH AUTOSEL 5.15 27/31] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

[PATCH AUTOSEL 6.1 34/42] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

[PATCH AUTOSEL 6.6 48/58] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

[PATCH AUTOSEL 6.10 55/70] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

[PATCH AUTOSEL 6.11 61/76] virtio_pmem: Check device status before requesting flush

2024-10-04 Thread Sasha Levin
From: Philip Chen [ Upstream commit e25fbcd97cf52c3c9824d44b5c56c19673c3dd50 ] If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early

Re: [PATCH v3] virtio_pmem: Check device status before requesting flush

2024-09-09 Thread Pankaj Gupta
+CC MST > If a pmem device is in a bad status, the driver side could wait for > host ack forever in virtio_pmem_flush(), causing the system to hang. > > So add a status check in the beginning of virtio_pmem_flush() to return > early if the device is not activated. > > Signe

[PATCH v3] virtio_pmem: Check device status before requesting flush

2024-08-26 Thread Philip Chen
If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early if the device is not activated. Signed-off-by: Philip Chen --- v3: - Fix a typo
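The fix's shape is a fail-fast guard: before queueing a flush request, bail out if the virtio device never reached DRIVER_OK, instead of waiting forever for a host ack. The status bit value below matches the virtio spec; the surrounding code is a simplified sketch, not the actual drivers/nvdimm/nd_virtio.c.

```c
#include <stdint.h>
#include <errno.h>

#define VIRTIO_CONFIG_S_DRIVER_OK	4	/* driver is set up (virtio spec) */

struct vdev_sketch {
	uint8_t status;	/* virtio device status register */
};

static int virtio_pmem_flush_sketch(struct vdev_sketch *vdev)
{
	if (!(vdev->status & VIRTIO_CONFIG_S_DRIVER_OK))
		return -EIO;	/* device not activated: fail fast */
	/* ...otherwise queue the flush request and wait for host ack... */
	return 0;
}
```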

Re: [PATCH v2] virtio_pmem: Check device status before requesting flush

2024-08-21 Thread Philip Chen
Hi On Wed, Aug 21, 2024 at 1:37 PM Ira Weiny wrote: > > Philip Chen wrote: > > Hi, > > > > On Tue, Aug 20, 2024 at 1:01 PM Dave Jiang wrote: > > > > > > > > > > > > On 8/20/24 10:22 AM, Philip Chen wrote: > > > > If a pme

Re: [PATCH v2] virtio_pmem: Check device status before requesting flush

2024-08-21 Thread Ira Weiny
Philip Chen wrote: > Hi, > > On Tue, Aug 20, 2024 at 1:01 PM Dave Jiang wrote: > > > > > > > > On 8/20/24 10:22 AM, Philip Chen wrote: > > > If a pmem device is in a bad status, the driver side could wait for > > > host ack forever

Re: [PATCH v2] virtio_pmem: Check device status before requesting flush

2024-08-20 Thread Philip Chen
Hi, On Tue, Aug 20, 2024 at 1:01 PM Dave Jiang wrote: > > > > On 8/20/24 10:22 AM, Philip Chen wrote: > > If a pmem device is in a bad status, the driver side could wait for > > host ack forever in virtio_pmem_flush(), causing the system to hang. > > > > So ad

Re: [PATCH] virtio_pmem: Check device status before requesting flush

2024-08-20 Thread Philip Chen
Hi, On Tue, Aug 20, 2024 at 7:23 AM Ira Weiny wrote: > > Philip Chen wrote: > > On Mon, Aug 19, 2024 at 2:56 PM Ira Weiny wrote: > > > > > > Philip Chen wrote: > > > > If a pmem device is in a bad status, the driver side could wait for > > >

Re: [PATCH v2] virtio_pmem: Check device status before requesting flush

2024-08-20 Thread Dave Jiang
On 8/20/24 10:22 AM, Philip Chen wrote: > If a pmem device is in a bad status, the driver side could wait for > host ack forever in virtio_pmem_flush(), causing the system to hang. > > So add a status check in the beginning of virtio_pmem_flush() to return > early if th

[PATCH v2] virtio_pmem: Check device status before requesting flush

2024-08-20 Thread Philip Chen
If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. So add a status check in the beginning of virtio_pmem_flush() to return early if the device is not activated. Signed-off-by: Philip Chen --- v2: - Remove

Re: [PATCH] virtio_pmem: Check device status before requesting flush

2024-08-20 Thread Ira Weiny
Philip Chen wrote: > On Mon, Aug 19, 2024 at 2:56 PM Ira Weiny wrote: > > > > Philip Chen wrote: > > > If a pmem device is in a bad status, the driver side could wait for > > > host ack forever in virtio_pmem_flush(), causing the system to hang. > > > >

Re: [PATCH] virtio_pmem: Check device status before requesting flush

2024-08-19 Thread Philip Chen
On Mon, Aug 19, 2024 at 2:56 PM Ira Weiny wrote: > > Philip Chen wrote: > > If a pmem device is in a bad status, the driver side could wait for > > host ack forever in virtio_pmem_flush(), causing the system to hang. > > I assume this was supposed to be v2 and you re

Re: [PATCH] virtio_pmem: Check device status before requesting flush

2024-08-19 Thread Ira Weiny
Philip Chen wrote: > If a pmem device is in a bad status, the driver side could wait for > host ack forever in virtio_pmem_flush(), causing the system to hang. I assume this was supposed to be v2 and you resent this as a proper v2 with a change list from v1? Ira > > Signed-off-by:

[PATCH v2] virtio_pmem: Check device status before requesting flush

2024-08-14 Thread Philip Chen
If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. Signed-off-by: Philip Chen --- Change since v1: - Remove change id from the patch description drivers/nvdimm/nd_virtio.c | 9 + 1 file changed, 9

[PATCH] virtio_pmem: Check device status before requesting flush

2024-08-14 Thread Philip Chen
If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. Signed-off-by: Philip Chen --- drivers/nvdimm/nd_virtio.c | 9 + 1 file changed, 9 insertions(+) diff --git a/drivers/nvdimm/nd_virtio.c b/drivers

[PATCH] virtio_pmem: Check device status before requesting flush

2024-08-14 Thread Philip Chen
If a pmem device is in a bad status, the driver side could wait for host ack forever in virtio_pmem_flush(), causing the system to hang. Change-Id: Icc1d0a4405359fb5364751031589d15a455f849b Signed-off-by: Philip Chen --- drivers/nvdimm/nd_virtio.c | 9 + 1 file changed, 9 insertions

Re: Current status and possible improvements in CONFIG_MODULE_FORCE_UNLOAD

2024-06-14 Thread Aditya Garg
Thanks for the reply Lucas. It makes sense now! > On 15 Jun 2024, at 12:18 AM, Lucas De Marchi wrote: > > On Thu, Jun 06, 2024 at 06:49:59AM GMT, Aditya Garg wrote: >> Hi >> >> I am Aditya Garg. I often require using out of tree drivers to support >> various hardwares on Linux. Sometimes the

Re: Current status and possible improvements in CONFIG_MODULE_FORCE_UNLOAD

2024-06-14 Thread Lucas De Marchi
On Thu, Jun 06, 2024 at 06:49:59AM GMT, Aditya Garg wrote: Hi I am Aditya Garg. I often require using out of tree drivers to support various hardwares on Linux. Sometimes the provider doesn't write good drivers, and often they have to be force unloaded. It's a common thing in proprietary driv

Re: Current status and possible improvements in CONFIG_MODULE_FORCE_UNLOAD

2024-06-07 Thread Christoph Hellwig
On Thu, Jun 06, 2024 at 06:49:59AM +, Aditya Garg wrote: > Hi > > I am Aditya Garg. I often require using out of tree drivers to support > various hardwares on Linux. Just stop buying hardwarew that requires this, or improve and upstream the drivers to make your life easier instead of making

Current status and possible improvements in CONFIG_MODULE_FORCE_UNLOAD

2024-06-05 Thread Aditya Garg
Hi I am Aditya Garg. I often require using out of tree drivers to support various hardwares on Linux. Sometimes the provider doesn't write good drivers, and often they have to be force unloaded. It's a common thing in proprietary drivers. I know the author of the driver should take note of the

[PATCH AUTOSEL 6.6 25/47] pds_vdpa: clear config callback when status goes to 0

2023-12-11 Thread Sasha Levin
From: Shannon Nelson [ Upstream commit dd3b8de16e90c5594eddd29aeeb99e97c6f863be ] If the client driver is setting status to 0, something is getting shutdown and possibly removed. Make sure we clear the config_cb so that it doesn't end up crashing when trying to call a bogus callback. S
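The callback-clearing pattern described in this commit can be modeled like this (hypothetical names, not the actual pds_vdpa code): setting status to 0 means shutdown, so the stored config callback is dropped before a late event can call through a stale pointer.

```c
#include <assert.h>
#include <stddef.h>

struct vdpa_dev {
	void (*config_cb)(void *priv);
	void *cb_priv;
	int cb_fired;
};

void count_cb(void *priv)
{
	((struct vdpa_dev *)priv)->cb_fired++;
}

void set_status(struct vdpa_dev *vd, unsigned char status)
{
	if (status == 0) {
		/* shutdown/reset path: drop the callback and its
		 * context so nothing can fire into a dying driver */
		vd->config_cb = NULL;
		vd->cb_priv = NULL;
	}
}

void config_event(struct vdpa_dev *vd)
{
	if (vd->config_cb)	/* NULL check makes late events safe */
		vd->config_cb(vd->cb_priv);
}

/* Run the scenario: one event before shutdown, one after.
 * Returns how many times the callback actually fired. */
int demo(void)
{
	struct vdpa_dev vd = { count_cb, NULL, 0 };

	vd.cb_priv = &vd;
	config_event(&vd);	/* fires */
	set_status(&vd, 0);	/* shutdown clears the callback */
	config_event(&vd);	/* safely ignored */
	return vd.cb_fired;
}
```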

Re: [PATCH v3 3/3] drm/msm/dp: check main link status before start aux read

2021-04-20 Thread Stephen Boyd
This patch have DP aux channel read/write to return NAK immediately > if DP controller connection status is in unplugged state. > > Changes in V3: > -- check core_initialized before handle irq_hpd > Signed-off-by: Kuogee Hsieh > --- > drivers/gpu/drm/msm/dp/dp_aux.c | 5 +

RE: [PATCH 2/2] drivers: hv: Create a consistent pattern for checking Hyper-V hypercall status

2021-04-20 Thread Michael Kelley
From: Joseph Salisbury Sent: Friday, April 16, 2021 5:43 PM > > There is not a consistent pattern for checking Hyper-V hypercall status. > Existing code uses a number of variants. The variants work, but a consistent > pattern would improve the readability of the code, and be mor
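The consolidation discussed in this thread can be sketched as a single helper pair (the mask and constant mirror the Hyper-V TLFS convention that hypercall status lives in the low 16 bits of the 64-bit return value; the helper names follow the pattern the patch introduces, but treat this as an illustration rather than the exact upstream code):

```c
#include <assert.h>
#include <stdint.h>

#define HV_HYPERCALL_RESULT_MASK 0xffffULL
#define HV_STATUS_SUCCESS        0

/* Extract the 16-bit status code from a hypercall return value. */
uint64_t hv_result(uint64_t status)
{
	return status & HV_HYPERCALL_RESULT_MASK;
}

/* One success test for every caller, instead of ad-hoc masking. */
int hv_result_success(uint64_t status)
{
	return hv_result(status) == HV_STATUS_SUCCESS;
}
```

With this in place, callers never open-code the mask, which is the readability win the patch is after.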

Re: [PATCH v19 4/6] misc: eeprom: at24: check suspend status before disable regulator

2021-04-20 Thread Hsin-Yi Wang
On Fri, Apr 16, 2021 at 10:09 PM Bartosz Golaszewski wrote: > > On Wed, Apr 14, 2021 at 7:29 PM Hsin-Yi Wang wrote: > > > > cd5676db0574 ("misc: eeprom: at24: support pm_runtime control") disables > > regulator in runtime suspend. If runtime suspend is called before > > regulator disable, it will

[PATCH] misc: eeprom: at24: check suspend status before disable regulator

2021-04-20 Thread Hsin-Yi Wang
cd5676db0574 ("misc: eeprom: at24: support pm_runtime control") disables regulator in runtime suspend. If runtime suspend is called before regulator disable, it will results in regulator unbalanced disabling. Fixes: cd5676db0574 ("misc: eeprom: at24: support pm_runtime control") Signed-off-by: Hsi
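The unbalanced-disable bug this patch fixes can be modeled with a simple enable refcount (hypothetical names, not the regulator core itself): a disable without a matching outstanding enable is an error, so teardown must skip the disable when runtime suspend already performed it.

```c
#include <assert.h>

int enable_count;

int reg_enable(void)
{
	enable_count++;
	return 0;
}

int reg_disable(void)
{
	if (enable_count == 0)
		return -1;	/* unbalanced: would WARN in the core */
	enable_count--;
	return 0;
}

/* Fixed teardown: only disable when suspend has not already done so. */
int remove_fixed(int already_suspended)
{
	if (already_suspended)
		return 0;	/* regulator is already off */
	return reg_disable();
}
```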

[PATCH] usb: gadget: net2272: remove redundant initialization of status

2021-04-20 Thread Colin King
From: Colin Ian King The variable status is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed and move the declaration of status to the scope where it is used. Addresses-Coverity: ("Unused

Re: [PATCH] dmaengine: idxd: Fix potential null dereference on pointer status

2021-04-20 Thread Vinod Koul
On 15-04-21, 12:06, Colin King wrote: > From: Colin Ian King > > There are calls to idxd_cmd_exec that pass a null status pointer however > a recent commit has added an assignment to *status that can end up > with a null pointer dereference. The function expects a null s
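The guard under discussion reduces to testing the pointer before writing through it (a simplified sketch with hypothetical names, not the idxd code itself):

```c
#include <assert.h>
#include <stddef.h>

/* Callers may legitimately pass status == NULL when they do not
 * care about the completion code, so the command path must check
 * the pointer before storing through it. */
int cmd_exec(int fail, unsigned int *status)
{
	unsigned int result = fail ? 0x13 : 0;

	if (status)		/* guard prevents the NULL dereference */
		*status = result;
	return fail ? -1 : 0;
}
```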

Re: [PATCH v13 09/12] mm: x86: Invoke hypercall when page encryption status is changed

2021-04-20 Thread Paolo Bonzini
On 15/04/21 17:57, Ashish Kalra wrote: From: Brijesh Singh Invoke a hypercall when a memory region is changed from encrypted -> decrypted and vice versa. Hypervisor needs to know the page encryption status during the guest migration. Boris, can you ack this patch? Paolo Cc: Tho

[PATCH v1] Bluetooth: Fix the HCI to MGMT status conversion table

2021-04-19 Thread Yu Liu
0x2B, 0x31 and 0x33 are reserved for future use but were not present in the HCI to MGMT conversion table, this caused the conversion to be incorrect for the HCI status code greater than 0x2A. Reviewed-by: Miao-chen Chou Signed-off-by: Yu Liu --- Changes in v1: - Initial change net/bluetooth
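Why the missing reserved codes broke the conversion: the table is a direct array lookup indexed by HCI status, so omitting slots for 0x2B, 0x31 and 0x33 shifts every later entry. A sketch of the corrected shape (mapping values are illustrative except that reserved codes map to a generic failure):

```c
#include <assert.h>

#define MGMT_STATUS_FAILED 0x03

unsigned char mgmt_status_table[] = {
	[0x00] = 0x00,			/* success */
	[0x2a] = 0x0a,			/* illustrative mapping */
	[0x2b] = MGMT_STATUS_FAILED,	/* reserved: placeholder keeps */
	[0x2c] = 0x0c,			/*   indices > 0x2a aligned    */
	[0x31] = MGMT_STATUS_FAILED,	/* reserved */
	[0x33] = MGMT_STATUS_FAILED,	/* reserved */
};

unsigned char mgmt_status(unsigned char hci_status)
{
	if (hci_status < sizeof(mgmt_status_table))
		return mgmt_status_table[hci_status];
	return MGMT_STATUS_FAILED;	/* out of table: generic failure */
}
```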

[PATCH AUTOSEL 4.4 3/7] xen-netback: Check for hotplug-status existence before watching

2021-04-19 Thread Sasha Levin
registered for a nonexistent node (and will send notifications should the node be subsequently created). As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it has served its purpose"), this leads to a failure when a domU transitions into XenbusStateConnected mor

[PATCH AUTOSEL 4.9 4/8] xen-netback: Check for hotplug-status existence before watching

2021-04-19 Thread Sasha Levin
registered for a nonexistent node (and will send notifications should the node be subsequently created). As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it has served its purpose"), this leads to a failure when a domU transitions into XenbusStateConnected mor

[PATCH AUTOSEL 4.14 06/11] xen-netback: Check for hotplug-status existence before watching

2021-04-19 Thread Sasha Levin
registered for a nonexistent node (and will send notifications should the node be subsequently created). As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it has served its purpose"), this leads to a failure when a domU transitions into XenbusStateConnected mor

[PATCH AUTOSEL 4.19 07/12] xen-netback: Check for hotplug-status existence before watching

2021-04-19 Thread Sasha Levin
registered for a nonexistent node (and will send notifications should the node be subsequently created). As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it has served its purpose"), this leads to a failure when a domU transitions into XenbusStateConnected mor

[PATCH AUTOSEL 5.4 07/14] xen-netback: Check for hotplug-status existence before watching

2021-04-19 Thread Sasha Levin
registered for a nonexistent node (and will send notifications should the node be subsequently created). As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it has served its purpose"), this leads to a failure when a domU transitions into XenbusStateConnected mor
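The existence check these backports add can be modeled as follows (hypothetical names, not the real xenstore API): register the hotplug-status watch only when the node already exists, so a watch is never left waiting on a node that will not reappear.

```c
#include <assert.h>

int node_exists;
int watch_registered;

/* Fixed registration: bail out ENOENT-style for a missing node
 * instead of installing a watch that fires on future creation. */
int register_watch_fixed(void)
{
	if (!node_exists)
		return -2;	/* skip the watch entirely */
	watch_registered = 1;
	return 0;
}

/* Helper: reset state and attempt one registration. */
int try_watch(int exists)
{
	node_exists = exists;
	watch_registered = 0;
	return register_watch_fixed();
}
```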
