[PATCH v2 2/2] kselftest/arm64: Poll less often while waiting for fp-stress children

2024-10-29 Thread Mark Brown
While fp-stress is waiting for children to start it doesn't send any signals to them so there is no need for it to have as short an epoll() timeout as it does when the children are all running. We do still want to have some timeout so that we can log diagnostics about missing children but thi

Re: [PATCH 2/2] kselftest/arm64: Lower poll interval while waiting for fp-stress children

2024-10-29 Thread Mark Rutland
Nit: the title says we lower the poll interval, while we actually raise it. Maybe that'd be clearer as: kselftest/arm64: Raise poll timeout while waiting for fp-stress children ... or: kselftest/arm64: Poll less frequently while waiting for fp-stress children That aside,

[PATCH 2/2] kselftest/arm64: Lower poll interval while waiting for fp-stress children

2024-10-28 Thread Mark Brown
While fp-stress is waiting for children to start it doesn't send any signals to them so there is no need for it to have as short an epoll() timeout as it does when the children are all running. We do still want to have some timeout so that we can log diagnostics about missing children but thi

[RFC PATCH v2 08/11] bfq: disallow idle if CLASS_RT waiting for service

2021-03-12 Thread brookxu
From: Chunguang Xu If CLASS_RT is waiting for service, queues belonging to other classes disallow idling, so that a schedule can be invoked in time. Signed-off-by: Chunguang Xu --- block/bfq-iosched.c | 5 + 1 file changed, 5 insertions(+) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c

[PATCH v3 0/3] drm/msm: fix for "Timeout waiting for GMU OOB set GPU_SET: 0x0"

2021-01-28 Thread Eric Anholt
Updated commit messages over v2, no code changes. Eric Anholt (3): drm/msm: Fix race of GPU init vs timestamp power management. drm/msm: Fix races managing the OOB state for timestamp vs timestamps. drm/msm: Clean up GMU OOB set/clear handling. drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 105 +

Re: [PATCH v2 3/5] drm/panel-simple: Retry if we timeout waiting for HPD

2021-01-27 Thread Doug Anderson
Hi, On Mon, Jan 25, 2021 at 12:28 PM Stephen Boyd wrote: > > > +/* > > + * Some panels simply don't always come up and need to be power cycled to > > + * work properly. We'll allow for a handful of retries. > > + */ > > +#define MAX_PANEL_PREPARE_TRIES5 > > Is this define used an

Re: [PATCH v2 3/5] drm/panel-simple: Retry if we timeout waiting for HPD

2021-01-25 Thread Stephen Boyd
Quoting Douglas Anderson (2021-01-15 14:44:18) > On an Innolux N116BCA panel that I have in front of me, sometimes HPD > simply doesn't assert no matter how long you wait for it. As per the > very wise advice of The IT Crowd ("Have you tried turning it off and > on again?") it appears that power cy

Let lockdep complain when locks are taken while waiting for userspace.

2021-01-18 Thread Christian König
Hi guys, because of the Vulkan graphics API we have a specialized synchronization object to handle both inter-process as well as process-to-hardware synchronization. The problem now is that when drivers call this interface with some lock held it is trivial to create a deadlock when those locks

[PATCH v2 3/5] drm/panel-simple: Retry if we timeout waiting for HPD

2021-01-15 Thread Douglas Anderson
el's problems are attributed to the fact that it's pre-production and/or can be fixed, retries clearly can help in some cases and really don't hurt. Signed-off-by: Douglas Anderson --- Changes in v2: - ("drm/panel-simple: Retry if we timeout waiting for HPD") ne

[PATCH 5.10 441/717] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-28 Thread Greg Kroah-Hartman
From: Guido Günther [ Upstream commit ee46d16d2e40bebc2aa790fd7b6a056466ff895c ] It can take multiple iterations until all components for an attached DSI bridge are up leading to several: [3.796425] mxsfb 3032.lcd-controller: Cannot connect bridge: -517 [3.816952] mxsfb 3032.lcd

Re: [PATCH v1 1/1] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-15 Thread Daniel Vetter
On Tue, Dec 15, 2020 at 09:23:38AM +0100, Guido Günther wrote: > It can take multiple iterations until all components for an attached DSI > bridge are up leading to several: > > [3.796425] mxsfb 3032.lcd-controller: Cannot connect bridge: -517 > [3.816952] mxsfb 3032.lcd-controller

[PATCH v1 1/1] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-15 Thread Guido Günther
It can take multiple iterations until all components for an attached DSI bridge are up leading to several: [3.796425] mxsfb 3032.lcd-controller: Cannot connect bridge: -517 [3.816952] mxsfb 3032.lcd-controller: [drm:mxsfb_probe [mxsfb]] *ERROR* failed to attach bridge: -517 Silen

[PATCH v1 0/1] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-15 Thread Guido Günther
the only DRM_DEV_ERROR() usage, the rest of the driver uses dev_err(). Guido Günther (1): drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge drivers/gpu/drm/mxsfb/mxsfb_drv.c | 10 -- 1 file changed, 4 insertions(+), 6 deletions(-) -- 2.29.2

Re: 💥 PANICKED: Waiting for review: Test report for kernel 5.9.11 (stable-queue)

2020-11-30 Thread Xiumei Mu
- Original Message - > From: "CKI Project" > To: skt-results-mas...@redhat.com > Cc: "Yi Zhang" , "Xiong Zhou" , > "Rachel Sibley" , "Xiumei > Mu" , "Jianwen Ji" , "Hangbin Liu" > , "David

[PATCH v2 13/17] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-20 Thread Saravana Kannan
To check if a device is still waiting for its supplier devices to be added, we used to check if the device is in a global waiting_for_suppliers list. Since the global list will be deleted in subsequent patches, this patch stops using this check. Instead, this patch uses a more device specific

Re: [PATCH v1 14/18] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-20 Thread Saravana Kannan
On Mon, Nov 16, 2020 at 8:34 AM Rafael J. Wysocki wrote: > > On Thu, Nov 5, 2020 at 12:24 AM Saravana Kannan wrote: > > > > To check if a device is still waiting for its supplier devices to be > > added, we used to check if the devices is in a global > > waiting

Re: [PATCH v1 14/18] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-16 Thread Rafael J. Wysocki
On Thu, Nov 5, 2020 at 12:24 AM Saravana Kannan wrote: > > To check if a device is still waiting for its supplier devices to be > added, we used to check if the devices is in a global > waiting_for_suppliers list. Since the global list will be deleted in > subsequent patches, t

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-11 Thread Miklos Szeredi
On Wed, Nov 11, 2020 at 8:42 AM Eric W. Biederman wrote: > > Miklos Szeredi writes: > > Okay, so the problem with making the wait_event() at the end of > > request_wait_answer() killable is that it would allow compromising the > > server's integrity by unlocking the VFS level lock (which protect

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-10 Thread Eric W. Biederman
. >> >> You have a good point about the looping issue. I wonder if there is a >> way to enhance this comparatively simple approach to prevent the more >> complex scenario you mention. > > Let's take a concrete example: > > - task A is "server" for fus

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Miklos Szeredi
ularly serious. It is > very annoying not to be able to kill processes with SIGKILL or the OOM > killer. > > You have a good point about the looping issue. I wonder if there is a > way to enhance this comparatively simple approach to prevent the more > complex scenario you mention.

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Eric W. Biederman
Miklos Szeredi writes: > On Mon, Nov 9, 2020 at 1:48 PM Alexey Gladkov > wrote: >> >> This patch removes one kind of the deadlocks inside the fuse daemon. The >> problem appear when the fuse daemon itself makes a file operation on its >> filesystem and receives a fatal signal. >> >> This deadlo

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Miklos Szeredi
On Mon, Nov 9, 2020 at 1:48 PM Alexey Gladkov wrote: > > This patch removes one kind of the deadlocks inside the fuse daemon. The > problem appear when the fuse daemon itself makes a file operation on its > filesystem and receives a fatal signal. > > This deadlock can be interrupted via fusectl fi

[RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Alexey Gladkov
*pid_ns; @@ -720,6 +723,9 @@ struct fuse_conn { /* Do not show mount options */ unsigned int no_mount_options:1; + /** Do not check fusedev_file (virtiofs) */ + unsigned int check_fusedev_file:1; + /** The number of requests waiting for completion */ atomic

[PATCH v1 14/18] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-04 Thread Saravana Kannan
To check if a device is still waiting for its supplier devices to be added, we used to check if the device is in a global waiting_for_suppliers list. Since the global list will be deleted in subsequent patches, this patch stops using this check. Instead, this patch uses a more device specific

[PATCH 5.9 174/391] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-11-03 Thread Greg Kroah-Hartman
From: Stephen Boyd [ Upstream commit 2bc20f3c8487bd5bc4dd9ad2c06d2ba05fd4e838 ] The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread

[PATCH 5.4 158/408] spi: omap2-mcspi: Improve performance waiting for CHSTAT

2020-10-27 Thread Greg Kroah-Hartman
From: Aswath Govindraju [ Upstream commit 7b1d96813317358312440d0d07abbfbeb0ef8d22 ] This reverts commit 13d515c796 (spi: omap2-mcspi: Switch to readl_poll_timeout()). The amount of time spent polling for the MCSPI_CHSTAT bits to be set on AM335x-icev2 platform is less than 1us (about 0.6us) in

[PATCH 5.8 254/633] spi: omap2-mcspi: Improve performance waiting for CHSTAT

2020-10-27 Thread Greg Kroah-Hartman
From: Aswath Govindraju [ Upstream commit 7b1d96813317358312440d0d07abbfbeb0ef8d22 ] This reverts commit 13d515c796 (spi: omap2-mcspi: Switch to readl_poll_timeout()). The amount of time spent polling for the MCSPI_CHSTAT bits to be set on AM335x-icev2 platform is less than 1us (about 0.6us) in

[PATCH 5.9 300/757] spi: omap2-mcspi: Improve performance waiting for CHSTAT

2020-10-27 Thread Greg Kroah-Hartman
From: Aswath Govindraju [ Upstream commit 7b1d96813317358312440d0d07abbfbeb0ef8d22 ] This reverts commit 13d515c796 (spi: omap2-mcspi: Switch to readl_poll_timeout()). The amount of time spent polling for the MCSPI_CHSTAT bits to be set on AM335x-icev2 platform is less than 1us (about 0.6us) in

[PATCH AUTOSEL 5.9 140/147] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-10-26 Thread Sasha Levin
From: Stephen Boyd [ Upstream commit 2bc20f3c8487bd5bc4dd9ad2c06d2ba05fd4e838 ] The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread

[PATCH AUTOSEL 5.8 128/132] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-10-26 Thread Sasha Levin
From: Stephen Boyd [ Upstream commit 2bc20f3c8487bd5bc4dd9ad2c06d2ba05fd4e838 ] The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread

Re: [Openipmi-developer] [PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-10-07 Thread Corey Minyard
On Thu, Sep 10, 2020 at 11:08:40AM +, Boehme, Markus via Openipmi-developer wrote: > > > - && ipmi_version_minor(id) >= 5)) { > > > - unsigned int set; > > > + if (ipmi_version_major(id) == 1 && ipmi_version_minor(id) < 5) { > > This is incorrect, it wil

[PATCH v1] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-10-01 Thread Alexey Gladkov
*pid_ns; @@ -720,6 +723,9 @@ struct fuse_conn { /* Do not show mount options */ unsigned int no_mount_options:1; + /** Do not check fusedev_file (virtiofs) */ + unsigned int check_fusedev_file:1; + /** The number of requests waiting for completion */ atomic

[PATCH 5.8 16/16] mptcp: free acked data before waiting for more memory

2020-09-11 Thread Greg Kroah-Hartman
From: Florian Westphal [ Upstream commit 1cec170d458b1d18f6f1654ca84c0804a701c5ef ] After subflow lock is dropped, more wmem might have been made available. This fixes a deadlock in mptcp_connect.sh 'mmap' mode: wmem is exhausted. But as the mptcp socket holds on to already-acked data (for retr

Re: [PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-09-10 Thread Boehme, Markus
respond. This leads to an indefinite wait in the > > ipmi_msghandler's __scan_channels function, showing up as hung task > > messages for modprobe. > > > > Add a timeout waiting for the channel scan to complete. If the scan > > fails to complete within that time, tre

Re: [PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-09-07 Thread Corey Minyard
task > messages for modprobe. > > Add a timeout waiting for the channel scan to complete. If the scan > fails to complete within that time, treat that like IPMI 1.0 and only > assume the presence of the primary IPMB channel at channel number 0. This patch is a significant rewrite

Re: [PATCH 2/3] ipmi: Add timeout waiting for device GUID

2020-09-07 Thread Corey Minyard
ages > for modprobe. > > According to IPMI 2.0 specification chapter 20, the implementation of > the Get Device GUID command is optional. Therefore, add a timeout to > waiting for its response and treat the lack of one the same as missing a > device GUID. This patch looks good.

[PATCH 2/3] ipmi: Add timeout waiting for device GUID

2020-09-07 Thread Markus Boehme
ementation of the Get Device GUID command is optional. Therefore, add a timeout to waiting for its response and treat the lack of one the same as missing a device GUID. Signed-off-by: Stefan Nuernberger Signed-off-by: Markus Boehme --- drivers/char/ipmi/ipmi_msghandler.c | 16 --

[PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-09-07 Thread Markus Boehme
We have observed hosts with misbehaving BMCs that receive a Get Channel Info command but don't respond. This leads to an indefinite wait in the ipmi_msghandler's __scan_channels function, showing up as hung task messages for modprobe. Add a timeout waiting for the channel scan to comple

Re: [PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-27 Thread Jan Kara
ss. > > Honza > > > On 08/26/2020 21:23, Jan Kara wrote: > > On Wed 05-08-20 09:35:51, Xianting Tian wrote: > > > When waiting for the completion of io, we need account iowait time. As > > > wait_for_completion() calls s

Re: [PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-27 Thread Jan Kara
r patch is rather pointless. Honza > On 08/26/2020 21:23, Jan Kara wrote: > On Wed 05-08-20 09:35:51, Xianting Tian wrote: > > When waiting for the completion of io, we need account iowait time. As > > wait_for_completion() calls schedule_timeout(), which doesn't account >

Re: [PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-26 Thread Jan Kara
On Wed 05-08-20 09:35:51, Xianting Tian wrote: > When waiting for the completion of io, we need account iowait time. As > wait_for_completion() calls schedule_timeout(), which doesn't account > iowait time. While wait_for_completion_io() calls io_schedule_timeout(), > which wi

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-20 Thread Dmitry Vyukov
er.appspot.com/x/repro.syz?x=1585998690 > > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1228fea190 > > > > > > IMPORTANT: if you fix the issue, please add the following tag to the > > > commit: > > > Reported-by: syzbot+df400f

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-20 Thread Andrii Nakryiko
0 > > > > IMPORTANT: if you fix the issue, please add the following tag to the commit: > > Reported-by: syzbot+df400f2f24a1677cd...@syzkaller.appspotmail.com > > > > unregister_netdevice: waiting for lo to become free. Usage count = 1 > > Based on the repro,

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-19 Thread syzbot
syzbot has bisected this issue to: commit 449325b52b7a6208f65ed67d3484fd7b7184477b Author: Alexei Starovoitov Date: Tue May 22 02:22:29 2018 + umh: introduce fork_usermode_blob() helper bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=11f8618690 start commit: 18445bf

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-19 Thread Dmitry Vyukov
7f2c1c72b6ea391e86e81) > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1585998690 > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1228fea190 > > IMPORTANT: if you fix the issue, please add the following tag to the commit: > Reported-by: syzbot+df400f2f24a1677cd..

unregister_netdevice: waiting for DEV to become free (4)

2020-08-19 Thread syzbot
r: https://syzkaller.appspot.com/x/repro.c?x=1228fea190 IMPORTANT: if you fix the issue, please add the following tag to the commit: Reported-by: syzbot+df400f2f24a1677cd...@syzkaller.appspotmail.com unregister_netdevice: waiting for lo to become free. Usage count = 1 --- This report is generated by

[PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-05 Thread Xianting Tian
When waiting for the completion of io, we need to account iowait time. wait_for_completion() calls schedule_timeout(), which doesn't account iowait time, while wait_for_completion_io() calls io_schedule_timeout(), which does. So using wait_for_completion_io() inste

[PATCH 5.4 39/90] nvme-tcp: fix possible hang waiting for icresp response

2020-08-03 Thread Greg Kroah-Hartman
From: Sagi Grimberg [ Upstream commit adc99fd378398f4c58798a1c57889872967d56a6 ] If the controller died exactly when we are receiving icresp we hang because icresp may never return. Make sure to set a high finite limit. Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver") Signed-off-

[PATCH 5.7 042/120] nvme-tcp: fix possible hang waiting for icresp response

2020-08-03 Thread Greg Kroah-Hartman
From: Sagi Grimberg [ Upstream commit adc99fd378398f4c58798a1c57889872967d56a6 ] If the controller died exactly when we are receiving icresp we hang because icresp may never return. Make sure to set a high finite limit. Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver") Signed-off-

[PATCH 4.4 09/54] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-30 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 4.9 09/61] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-30 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-28 Thread Stanimir Varbanov
Hi Doug, On 7/28/20 5:48 PM, Doug Anderson wrote: > Hi, > > On Sun, Jul 26, 2020 at 2:44 AM Stanimir Varbanov > wrote: >> >> Hi Stephen, >> >> On 7/25/20 12:17 AM, Stephen Boyd wrote: >>> From: Stephen Boyd >>> >>> The busy loop in rpmh_rsc_send_data() is written with the assumption >>> that th

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-28 Thread Doug Anderson
Hi, On Sun, Jul 26, 2020 at 2:44 AM Stanimir Varbanov wrote: > > Hi Stephen, > > On 7/25/20 12:17 AM, Stephen Boyd wrote: > > From: Stephen Boyd > > > > The busy loop in rpmh_rsc_send_data() is written with the assumption > > that the udelay will be preempted by the tcs_tx_done() irq handler whe

[PATCH 4.19 15/86] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 5.7 026/179] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 5.4 026/138] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 4.14 12/64] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-26 Thread Maulik Shah
Hi, Change looks good to me. Reviewed-by: Maulik Shah Thanks, Maulik On 7/25/2020 2:47 AM, Stephen Boyd wrote: From: Stephen Boyd The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are a

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-26 Thread Stanimir Varbanov
Hi Stephen, On 7/25/20 12:17 AM, Stephen Boyd wrote: > From: Stephen Boyd > > The busy loop in rpmh_rsc_send_data() is written with the assumption > that the udelay will be preempted by the tcs_tx_done() irq handler when > the TCS slots are all full. This doesn't hold true when the calling > thr

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
Hi, On Fri, Jul 24, 2020 at 2:17 PM Stephen Boyd wrote: > > From: Stephen Boyd > > The busy loop in rpmh_rsc_send_data() is written with the assumption > that the udelay will be preempted by the tcs_tx_done() irq handler when > the TCS slots are all full. This doesn't hold true when the calling

[PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
From: Stephen Boyd The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread is an irqthread and the tcs_tx_done() irq is also an irqthread

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Doug Anderson (2020-07-24 13:31:39) > Hi, > > On Fri, Jul 24, 2020 at 1:27 PM Stephen Boyd wrote: > > > > Quoting Doug Anderson (2020-07-24 13:11:59) > > > > > > I wasn't suggesting adding a timeout. I was just saying that if > > > claim_tcs_for_req() were to ever return an error code ot

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
Hi, On Fri, Jul 24, 2020 at 1:27 PM Stephen Boyd wrote: > > Quoting Doug Anderson (2020-07-24 13:11:59) > > > > I wasn't suggesting adding a timeout. I was just saying that if > > claim_tcs_for_req() were to ever return an error code other than > > -EBUSY that we'd need a check for it because ot

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Doug Anderson (2020-07-24 13:11:59) > > I wasn't suggesting adding a timeout. I was just saying that if > claim_tcs_for_req() were to ever return an error code other than > -EBUSY that we'd need a check for it because otherwise we'd interpret > the result as a tcs_id. > Ok that sounds l

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Lina Iyer
On Fri, Jul 24 2020 at 14:11 -0600, Stephen Boyd wrote: Quoting Lina Iyer (2020-07-24 13:08:41) On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote: >Quoting Doug Anderson (2020-07-24 12:49:56) >> Hi, >> >> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: >I think Lina was alluding to th

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
Hi, On Fri, Jul 24, 2020 at 1:01 PM Stephen Boyd wrote: > > Quoting Doug Anderson (2020-07-24 12:49:56) > > Hi, > > > > On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: > > > > > > > > - if (ret) > > > > > - goto unlock; > > > > > > > > > > - ret = find_free_tcs(tcs

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Lina Iyer (2020-07-24 13:08:41) > On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote: > >Quoting Doug Anderson (2020-07-24 12:49:56) > >> Hi, > >> > >> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: > >I think Lina was alluding to this earlier in this > >thread. > I was thinking

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Lina Iyer
On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote: Quoting Doug Anderson (2020-07-24 12:49:56) Hi, On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: I think Lina was alluding to this earlier in this thread. I was thinking more of threaded irq handler than a kthread to post the reques

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Doug Anderson (2020-07-24 12:49:56) > Hi, > > On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: > > > > > > - if (ret) > > > > - goto unlock; > > > > > > > > - ret = find_free_tcs(tcs); > > > > - if (ret < 0) > > > > - goto unlock; > > > >

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
Hi, On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: > > > > - if (ret) > > > - goto unlock; > > > > > > - ret = find_free_tcs(tcs); > > > - if (ret < 0) > > > - goto unlock; > > > - tcs_id = ret; > > > + wait_event_lock_irq(drv->tcs_w

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Doug Anderson (2020-07-24 10:42:55) > Hi, > > On Wed, Jul 22, 2020 at 6:01 PM Stephen Boyd wrote: > > diff --git a/drivers/soc/qcom/rpmh-internal.h > > b/drivers/soc/qcom/rpmh-internal.h > > index ef60e790a750..9a325bac58fe 100644 > > --- a/drivers/soc/qcom/rpmh-internal.h > > +++ b/driv

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
Hi, On Wed, Jul 22, 2020 at 6:01 PM Stephen Boyd wrote: > > The busy loop in rpmh_rsc_send_data() is written with the assumption > that the udelay will be preempted by the tcs_tx_done() irq handler when > the TCS slots are all full. This doesn't hold true when the calling > thread is an irqthread

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-23 Thread Stephen Boyd
Quoting Lina Iyer (2020-07-23 10:42:54) > On Wed, Jul 22 2020 at 19:01 -0600, Stephen Boyd wrote: > >The busy loop in rpmh_rsc_send_data() is written with the assumption > >that the udelay will be preempted by the tcs_tx_done() irq handler when > >the TCS slots are all full. This doesn't hold true

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-23 Thread Lina Iyer
On Wed, Jul 22 2020 at 19:01 -0600, Stephen Boyd wrote: The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread is an irqthread and the t

[PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-22 Thread Stephen Boyd
The busy loop in rpmh_rsc_send_data() is written with the assumption that the udelay will be preempted by the tcs_tx_done() irq handler when the TCS slots are all full. This doesn't hold true when the calling thread is an irqthread and the tcs_tx_done() irq is also an irqthread. That's because kern

[PATCH AUTOSEL 5.7 39/40] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-20 Thread Sasha Levin
From: Olga Kornievskaia [ Upstream commit 65caafd0d2145d1dd02072c4ced540624daeab40 ] Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH AUTOSEL 5.4 34/34] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-20 Thread Sasha Levin
From: Olga Kornievskaia [ Upstream commit 65caafd0d2145d1dd02072c4ced540624daeab40 ] Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH AUTOSEL 4.9 9/9] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-20 Thread Sasha Levin
From: Olga Kornievskaia [ Upstream commit 65caafd0d2145d1dd02072c4ced540624daeab40 ] Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH AUTOSEL 4.19 19/19] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-20 Thread Sasha Levin
From: Olga Kornievskaia [ Upstream commit 65caafd0d2145d1dd02072c4ced540624daeab40 ] Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH AUTOSEL 4.14 13/13] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-20 Thread Sasha Levin
From: Olga Kornievskaia [ Upstream commit 65caafd0d2145d1dd02072c4ced540624daeab40 ] Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 5.7 048/112] btrfs: fix RWF_NOWAIT writes blocking on extent locks and waiting for IO

2020-07-07 Thread Greg Kroah-Hartman
well, instead of waiting for it to complete. Finally, don't bother trying to lock the snapshot lock of the root when attempting a RWF_NOWAIT write, as that is only important for buffered writes. Fixes: edf064e7c6fec3 ("btrfs: nowait aio support") Signed-off-by: Filipe Manana Signed-

[PATCH 5.7 260/265] NFSv4 fix CLOSE not waiting for direct IO compeletion

2020-06-29 Thread Sasha Levin
From: Olga Kornievskaia commit d03727b248d0dae6199569a8d7b629a681154633 upstream. Figuring out the root case for the REMOVE/CLOSE race and suggesting the solution was done by Neil Brown. Currently what happens is that direct IO calls hold a reference on the open context which is decremented as

[PATCH 4.4 132/135] NFSv4 fix CLOSE not waiting for direct IO compeletion

2020-06-29 Thread Sasha Levin
From: Olga Kornievskaia commit d03727b248d0dae6199569a8d7b629a681154633 upstream. Figuring out the root case for the REMOVE/CLOSE race and suggesting the solution was done by Neil Brown. Currently what happens is that direct IO calls hold a reference on the open context which is decremented as

[PATCH 4.9 188/191] NFSv4 fix CLOSE not waiting for direct IO compeletion

2020-06-29 Thread Sasha Levin
From: Olga Kornievskaia commit d03727b248d0dae6199569a8d7b629a681154633 upstream. Figuring out the root case for the REMOVE/CLOSE race and suggesting the solution was done by Neil Brown. Currently what happens is that direct IO calls hold a reference on the open context which is decremented as

[PATCH 4.19 127/131] NFSv4 fix CLOSE not waiting for direct IO compeletion

2020-06-29 Thread Sasha Levin
From: Olga Kornievskaia commit d03727b248d0dae6199569a8d7b629a681154633 upstream. Figuring out the root case for the REMOVE/CLOSE race and suggesting the solution was done by Neil Brown. Currently what happens is that direct IO calls hold a reference on the open context which is decremented as

[PATCH 5.4 173/178] NFSv4 fix CLOSE not waiting for direct IO compeletion

2020-06-29 Thread Sasha Levin
From: Olga Kornievskaia commit d03727b248d0dae6199569a8d7b629a681154633 upstream. Figuring out the root case for the REMOVE/CLOSE race and suggesting the solution was done by Neil Brown. Currently what happens is that direct IO calls hold a reference on the open context which is decremented as

[PATCH 4.14 75/78] NFSv4 fix CLOSE not waiting for direct IO compeletion

2020-06-29 Thread Sasha Levin
From: Olga Kornievskaia commit d03727b248d0dae6199569a8d7b629a681154633 upstream. Figuring out the root case for the REMOVE/CLOSE race and suggesting the solution was done by Neil Brown. Currently what happens is that direct IO calls hold a reference on the open context which is decremented as

amdgpu: *ERROR* Waiting for fences timed out!

2020-06-24 Thread Ilkka Prusi
> 8b 4d 00 48 8d 15 e3 2e 1b 03 48 89 c8 48 29 d0 48 c1 c8 04 48 [ 3303.068482][  T186] [drm:amdgpu_dm_commit_planes.constprop.0 [amdgpu]] *ERROR* Waiting for fences timed out! [ 3303.068619][  T184] [drm:amdgpu_dm_commit_planes.constprop.0 [amdgpu]] *ERROR* Waiting for fences timed out! [ 3303.

[PATCH 5.7 434/477] io_uring: reap poll completions while waiting for refs to drop on exit

2020-06-23 Thread Greg Kroah-Hartman
From: Jens Axboe [ Upstream commit 56952e91acc93ed624fe9da840900defb75f1323 ] If we're doing polled IO and end up having requests being submitted async, then completions can come in while we're waiting for refs to drop. We need to reap these manually, as nobody else will be lookin

[PATCH AUTOSEL 5.6 09/50] scsi: qla2xxx: set UNLOADING before waiting for session deletion

2020-05-07 Thread Sasha Levin
From: Martin Wilck [ Upstream commit 856e152a3c08bf7987cbd41900741d83d9cddc8e ] The purpose of the UNLOADING flag is to avoid port login procedures to continue when a controller is in the process of shutting down. It makes sense to set this flag before starting session teardown. Furthermore, u

[PATCH AUTOSEL 5.4 06/35] scsi: qla2xxx: set UNLOADING before waiting for session deletion

2020-05-07 Thread Sasha Levin
From: Martin Wilck [ Upstream commit 856e152a3c08bf7987cbd41900741d83d9cddc8e ] The purpose of the UNLOADING flag is to avoid port login procedures to continue when a controller is in the process of shutting down. It makes sense to set this flag before starting session teardown. Furthermore, u

[PATCH 4.19 15/37] scsi: qla2xxx: set UNLOADING before waiting for session deletion

2020-05-04 Thread Greg Kroah-Hartman
From: Martin Wilck commit 856e152a3c08bf7987cbd41900741d83d9cddc8e upstream. The purpose of the UNLOADING flag is to avoid port login procedures to continue when a controller is in the process of shutting down. It makes sense to set this flag before starting session teardown. Furthermore, use

[PATCH 5.4 34/57] scsi: qla2xxx: set UNLOADING before waiting for session deletion

2020-05-04 Thread Greg Kroah-Hartman
From: Martin Wilck commit 856e152a3c08bf7987cbd41900741d83d9cddc8e upstream. The purpose of the UNLOADING flag is to avoid port login procedures to continue when a controller is in the process of shutting down. It makes sense to set this flag before starting session teardown. Furthermore, use

[PATCH 5.6 41/73] scsi: qla2xxx: set UNLOADING before waiting for session deletion

2020-05-04 Thread Greg Kroah-Hartman
From: Martin Wilck commit 856e152a3c08bf7987cbd41900741d83d9cddc8e upstream. The purpose of the UNLOADING flag is to avoid port login procedures to continue when a controller is in the process of shutting down. It makes sense to set this flag before starting session teardown. Furthermore, use
