Add synchronous O_DIRECT read support to AFS (no AIO yet). It can
theoretically handle reads up to the maximum size describable by loff_t,
given an iterator with sufficient capacity to handle that and given
support on the server.
Signed-off-by: David Howells
---
fs/afs/file.c
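For context, a minimal userspace sketch of the kind of synchronous O_DIRECT
read this enables (the path and the 4 KiB alignment are illustrative
assumptions; the real alignment constraint comes from the filesystem and
server):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 1 << 20;	/* 1 MiB, illustrative */
	void *buf;

	/* O_DIRECT generally wants an aligned buffer, offset and length */
	if (posix_memalign(&buf, 4096, len))
		return 1;

	int fd = open("/afs/example.org/somefile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	ssize_t n = pread(fd, buf, len, 0);	/* synchronous, no AIO */
	if (n < 0)
		perror("pread");
	else
		printf("read %zd bytes\n", n);

	close(fd);
	free(buf);
	return 0;
}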
This RFC enables P2PDMA transfers in userspace between NVMe drives using
existing O_DIRECT operations or the NVMe passthrough IOCTL.
This is accomplished by allowing userspace to allocate chunks of any CMB
by mmaping the NVMe ctrl device (Patches 14 and 15). The resulting memory
will be backed by
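A rough sketch of the userspace flow the RFC describes (the device path,
offset semantics and chunk size are assumptions drawn from the cover
letter, not a settled ABI):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Assumed interface: mmap of the NVMe ctrl chardev hands back a
	 * chunk of that controller's CMB, usable as a P2PDMA buffer. */
	int ctrl = open("/dev/nvme0", O_RDWR);
	if (ctrl < 0) { perror("open"); return 1; }

	size_t len = 2 * 1024 * 1024;
	void *p2p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			 ctrl, 0);
	if (p2p == MAP_FAILED) { perror("mmap"); return 1; }

	/* ... pass p2p as the O_DIRECT buffer for I/O on another drive ... */
	munmap(p2p, len);
	close(ctrl);
	return 0;
}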
From: Jens Axboe
[ Upstream commit 6e014c621e7271649f0d51e54dbe1db4c10486c8 ]
Running with some debug patches to detect illegal blocking triggered the
extend/unaligned condition in ext4. If ext4 needs to extend the file (and
hence go to buffered IO), or if the app is doing unaligned IO, then ext
Hi Linus
On Sun, Apr 26, 2020 at 4:59 AM Li Wang wrote:
From the kernel code it seems you are right. The pipe indeed uses
PAGE_SIZE (ppc64le: 64kB) to split the written data in packetized mode
(marked by O_DIRECT). But in the manual page, O_DIRECT indicates that
PIPE_BUF is the
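The divergence is easy to demonstrate: the uapi PIPE_BUF constant is 4096,
while the kernel splits packets at PAGE_SIZE, so the two only agree where
pages are 4 KiB. A small sketch of packetized-pipe behaviour (sizes and
output are illustrative and vary by architecture):

#define _GNU_SOURCE
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	int fds[2];
	static char out[2 * PIPE_BUF], in[sizeof(out)];

	if (pipe2(fds, O_DIRECT) < 0) { perror("pipe2"); return 1; }

	memset(out, 'x', sizeof(out));
	write(fds[1], out, sizeof(out));	/* split into packets at PAGE_SIZE */

	/* One read returns at most one packet, however big the buffer is. */
	ssize_t n = read(fds[0], in, sizeof(in));
	printf("wrote %zu, first read got %zd (PIPE_BUF=%d)\n",
	       sizeof(out), n, PIPE_BUF);
	return 0;
}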
From: Trond Myklebust
commit 031d73ed768a40684f3ca21992265ffdb6a270bf upstream.
When a series of O_DIRECT reads or writes are truncated, either due to
eof or due to an error, then we should return the number of contiguous
bytes that were received/sent starting at the offset specified by the
On Wed, 2019-10-16 at 01:00 +0000, Su, Yanjun wrote:
> Hi Trond,
> Because my mail system can't receive the nfs mailing list's mails, I
> reply to your patch here.
> I have some questions about the patch.
>
> > No. Basic O_DIRECT does not guarantee atomicity of requests, which
> >
Hi Trond,
Because my mail system can't receive the nfs mailing list's mails, I reply
to your patch here.
I have some questions about the patch.
>No. Basic O_DIRECT does not guarantee atomicity of requests, which is
>why we do not have generic locking at the VFS level when reading and
>writing.
On 2019/10/1 2:06, Trond Myklebust wrote:
Hi Su,
On Mon, 2019-09-30 at 17:11 +0800, Su Yanjun wrote:
> The xfstests generic/465 test fails because O_DIRECT reads/writes use
> async RPC calls; when read/write RPC calls run concurrently we may
> read partial data, which is wrong.
>
> For example, as follows.
The xfstests generic/465 test fails because O_DIRECT reads/writes use
async RPC calls; when read/write RPC calls run concurrently we may read
partial data, which is wrong.
For example, as follows:

         user buffer
        /           \
  |----------|----------|
      rpc0       rpc1

When rpc0 runs it encounters EOF and returns 0, then another
From: Trond Myklebust
commit eb2c50da9e256dbbb3ff27694440e4c1900cfef8 upstream.
If the attempt to resend the I/O results in no bytes being read/written,
we must ensure that we report the error.
Signed-off-by: Trond Myklebust
Fixes: 0a00b77b331a ("nfs: mirroring support for direct io")
Cc: sta.
OGAWA Hirofumi writes:
> Hou Tao writes:
>
>> Now splice() on O_DIRECT-opened fat file will return -EFAULT, that is
>> because the default .splice_write, namely default_file_splice_write(),
>> will construct an ITER_KVEC iov_iter and dio_refill_pages() in dio p
Hou Tao writes:
> Now splice() on O_DIRECT-opened fat file will return -EFAULT, that is
> because the default .splice_write, namely default_file_splice_write(),
> will construct an ITER_KVEC iov_iter and dio_refill_pages() in dio path
> can not handle it.
>
> Fix it by implemen
ping ?
On 2019/2/10 17:47, Hou Tao wrote:
> Now splice() on O_DIRECT-opened fat file will return -EFAULT, that is
> because the default .splice_write, namely default_file_splice_write(),
> will construct an ITER_KVEC iov_iter and dio_refill_pages() in dio path
> can not handle it.
Now splice() on O_DIRECT-opened fat file will return -EFAULT, that is
because the default .splice_write, namely default_file_splice_write(),
will construct an ITER_KVEC iov_iter and dio_refill_pages() in dio path
can not handle it.
Fix it by implementing .splice_write through
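The end of the snippet is cut off; a plausible shape of such a fix is
sketched below (wiring .splice_write to iter_file_splice_write is my
assumption here, since that helper feeds the dio path a user-backed
iterator rather than an ITER_KVEC one built by the default):

#include <linux/fs.h>
#include <linux/splice.h>

static const struct file_operations fat_file_operations = {
	/* ...existing read/write/mmap entries elided... */
	.splice_read	= generic_file_splice_read,
	.splice_write	= iter_file_splice_write, /* avoids default_file_splice_write() */
};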
>> diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
>> index 3f610a5b38e4..e5e7ccaaf9ec 100644
>> --- a/fs/overlayfs/file.c
>> +++ b/fs/overlayfs/file.c
>> @@ -110,6 +110,9 @@ static int ovl_open(struct inode *inode, struct file
>> *file)
>> if (IS_ERR(realfile))
>>
> --- a/fs/overlayfs/file.c
> +++ b/fs/overlayfs/file.c
> @@ -110,6 +110,9 @@ static int ovl_open(struct inode *inode, struct file
> *file)
> if (IS_ERR(realfile))
> return PTR_ERR(realfile);
>
> + /* For O_DIRECT dentry_open() checks f_mapping->a_ops-
@@ -110,6 +110,9 @@ static int ovl_open(struct inode *inode, struct file *file)
if (IS_ERR(realfile))
return PTR_ERR(realfile);
+ /* For O_DIRECT dentry_open() checks f_mapping->a_ops->direct_IO */
+ file->f_mapping = realfile->f_mapping;
+
file->private_data = realfile;
return 0;
--
2.14.3
On 30/03/2018 10:53, jiangyiwen wrote:
> Currently, I found that virtio-9p in VirtFS doesn't support "O_DIRECT + aio"
> mode, in both v9fs and qemu. So when users use "O_DIRECT + aio" mode and
> increase iodepths, they can't get higher IOPS.
>
> I want to know w
Hi everyone,
Currently, I found that virtio-9p in VirtFS doesn't support "O_DIRECT + aio"
mode, in both v9fs and qemu. So when users use "O_DIRECT + aio" mode and
increase iodepths, they can't get higher IOPS.
I want to know why v9fs doesn't implement this mode. And I will
4.9-stable review patch. If anyone has any objections, please let me know.
--
From: Trond Myklebust
commit e231c6879cfd44e4fffd384bb6dd7d313249a523 upstream.
When locking the file in order to do O_DIRECT on it, we must unmap
any mmapped ranges on the pagecache so that we can
4.15-stable review patch. If anyone has any objections, please let me know.
--
From: Trond Myklebust
commit e231c6879cfd44e4fffd384bb6dd7d313249a523 upstream.
When locking the file in order to do O_DIRECT on it, we must unmap
any mmapped ranges on the pagecache so that we can
4.14-stable review patch. If anyone has any objections, please let me know.
--
From: Trond Myklebust
commit e231c6879cfd44e4fffd384bb6dd7d313249a523 upstream.
When locking the file in order to do O_DIRECT on it, we must unmap
any mmapped ranges on the pagecache so that we can
/* O_SYNC also has bit for O_DSYNC so following check picks up either */
+ if (f_flags & O_SYNC)
+ create_options |= CREATE_WRITE_THROUGH;
+
+ if (f_flags & O_DIRECT)
+ create_options |= CREATE_NO_BUFFER;
+
oparms.tcon = tcon;
oparms.cifs_sb =
From: Al Viro
In all versions from 2.5.62 to 3.15, on each iteration through the loop
over the iovec array in do_blockdev_direct_IO() we used to do this:
sdio.head = 0;
sdio.tail = 0;
...
retval = do_direct_IO(dio, &sdio, &map_bh);
if (re
O_DIRECT is declared disabled in data=journal mode but is still served
as buffered IO, when one would expect such operations to be rejected.
This patch clarifies the documentation and removes redundancy.
Thanks to Theodore Ts'o for explanations.
Signed-off-by: Fabian Frederick
---
Document
This patch enables a hybrid polling mode. Instead of polling after IO
submission, we can induce an artificial delay, and then poll after that.
For example, if the IO is presumed to complete in 8 usecs from now, we
can sleep for 4 usecs, wake up, and then do our polling. This still puts
a sleep/wake
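A userspace analogue of the idea, stripped of the blk-mq details (the
names and the half-the-expected-latency heuristic are only illustrations
of the scheme described above):

#include <stdatomic.h>
#include <time.h>

static void hybrid_wait(atomic_int *done, long expected_ns)
{
	/* Phase 1: sleep for roughly half the expected completion time,
	 * giving the CPU away instead of spinning the whole while. */
	struct timespec ts = { .tv_sec = 0, .tv_nsec = expected_ns / 2 };
	nanosleep(&ts, NULL);

	/* Phase 2: busy-poll the remainder for low-latency completion. */
	while (!atomic_load_explicit(done, memory_order_acquire))
		;
}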
On 11/03/2016 08:01 AM, Bart Van Assche wrote:
On 11/01/2016 03:05 PM, Jens Axboe wrote:
+static void blk_mq_poll_hybrid_sleep(struct request_queue *q,
+				     struct request *rq)
+{
+	struct hrtimer_sleeper hs;
+	ktime_t kt;
+
+	if (!q->poll_nsec || test_bit(REQ_ATOM_POLL
On 11/01/2016 03:05 PM, Jens Axboe wrote:
+static void blk_mq_poll_hybrid_sleep(struct request_queue *q,
+				     struct request *rq)
+{
+	struct hrtimer_sleeper hs;
+	ktime_t kt;
+
+	if (!q->poll_nsec || test_bit(REQ_ATOM_POLL_SLEPT, &rq->atomic_flag
On 11/03/2016 06:27 AM, Ming Lei wrote:
On Wed, Nov 2, 2016 at 5:05 AM, Jens Axboe wrote:
This patch enables a hybrid polling mode. Instead of polling after IO
submission, we can induce an artificial delay, and then poll after that.
For example, if the IO is presumed to complete in 8 usecs from
On Wed, Nov 2, 2016 at 5:05 AM, Jens Axboe wrote:
> This patch enables a hybrid polling mode. Instead of polling after IO
> submission, we can induce an artificial delay, and then poll after that.
> For example, if the IO is presumed to complete in 8 usecs from now, we
> can sleep for 4 usecs, wak
On Tue, Nov 01, 2016 at 03:05:24PM -0600, Jens Axboe wrote:
> This patch enables a hybrid polling mode. Instead of polling after IO
> submission, we can induce an artificial delay, and then poll after that.
> For example, if the IO is presumed to complete in 8 usecs from now, we
> can sleep for 4 u
If the filesystem does not support O_DIRECT, then
open(...O_CREAT|O_DIRECT..) fails but creates the file anyway.
Eric Sandeen@RedHat thought a fix would need a lot of vfs restructuring.
(I reported this in 2013 to RedHat (bug#1008073), but just realized he
was talking about "upstream"
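A short reproducer for the reported behaviour (the path is arbitrary; the
failing open is expected to return EINVAL on filesystems without O_DIRECT
support):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "testfile";

	int fd = open(path, O_CREAT | O_WRONLY | O_DIRECT, 0644);
	if (fd < 0)
		perror("open");		/* fails on fs without O_DIRECT... */
	else
		close(fd);

	/* ...yet the file was created anyway. */
	printf("file %s\n",
	       access(path, F_OK) == 0 ? "exists" : "does not exist");
	return 0;
}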
I wrote:
To reproduce, run this as:
./a.out /dev/test-direct.dat
(Originally I wrote to /dev/shm/, but tmpfs now accepts O_DIRECT.)
No, I was running on /tmp, not tmpfs :-)
"./a.out /dev/shm/test-direct.dat" is fine.
--
Hallvard
Hi Stanislav,
[auto build test WARNING on v4.4-rc5]
[also build test WARNING on next-20151215]
url:
https://github.com/0day-ci/linux/commits/Stanislav-Kinsburskiy/fcntl-allow-to-set-O_DIRECT-flag-on-pipe/20151216-000234
config: i386-randconfig-x009-12141102 (attached as .config)
reproduce
Hi Stanislav,
[auto build test ERROR on v4.4-rc5]
[also build test ERROR on next-20151215]
url:
https://github.com/0day-ci/linux/commits/Stanislav-Kinsburskiy/fcntl-allow-to-set-O_DIRECT-flag-on-pipe/20151216-000234
config: x86_64-randconfig-x011-12141150 (attached as .config)
reproduce
With packetized mode for pipes, it's not possible to set O_DIRECT on pipe file
via sys_fcntl, because of sanity checks that pipes cannot satisfy.
Ability to set this flag will be used by CRIU to migrate packetized pipes.
Signed-off-by: Stanislav Kinsburskiy
---
fs/fcntl.c |3 ++-
1
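The check the patch relaxes is easy to see from userspace (a sketch; my
assumption is that pre-patch the fcntl below fails with EINVAL because
pipes have no ->direct_IO, while pipe2() with O_DIRECT already works):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fds[2];
	if (pipe(fds) < 0) { perror("pipe"); return 1; }

	int flags = fcntl(fds[1], F_GETFL);
	if (fcntl(fds[1], F_SETFL, flags | O_DIRECT) < 0)
		perror("F_SETFL O_DIRECT");	/* EINVAL before the patch */
	else
		puts("pipe switched to packetized mode");
	return 0;
}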
With packetized mode for pipes, it's not possible to set O_DIRECT on pipe file
via sys_fcntl, because of sanity checks that pipes cannot satisfy.
Ability to set this flag will be used by CRIU to migrate packetized pipes.
v2:
Fixed typos and mode variable to check.
Signed-off-by: Stanislav Kinsburskiy
--
4.2-stable review patch. If anyone has any objections, please let me know.
--
From: Jeff Moyer
commit e94f5a2285fc94202a9efb2c687481f29b64132c upstream.
commit bbab37ddc20b (block: Add support for DAX reads/writes to
block devices) caused a regression in mkfs.xfs. That utilit
On Tue, Sep 8, 2015 at 9:10 AM, Linda Knippers wrote:
> This patch and the 2/2 patch don't seem to have gone anywhere.
> Willy? or Ross?
>
Yes, these should have gone into 4.2. The nvdimm.git tree will pick
them up after 4.3-rc1 and tag them for -stable.
This patch and the 2/2 patch don't seem to have gone anywhere.
Willy? or Ross?
-- ljk
On 8/14/2015 4:53 PM, Linda Knippers wrote:
> On 8/14/2015 4:15 PM, Jeff Moyer wrote:
>> commit bbab37ddc20b (block: Add support for DAX reads/writes to
>> block devices) caused a regression in mkfs.xfs. That
On Mon, 2015-08-24 at 21:28 -0400, Chris Mason wrote:
> This is what btrfs already does for O_DIRECT plus compressed, or
> other
> cases where people don't want their applications to break on top of
> new
> features that aren't quite compatible with it.
I do not know how
Chris Mason writes:
>> I do think we should at least document what file systems appear to be
>> doing. Here's a man page patch for open (generated with extra context
>> for easier reading). Let me know what you think.
>
> We shouldn't be ignoring it, but instead call it similar to O_DSYNC plus
which
> >> > can/should be fixed).
> >>
> >> Even if it wasn't a test suite it should still fail. Either the fs
> >> supports O_DIRECT or it doesn't. Right now, the only way an application
> >> can figure this out is to try an open and see
>> > I think the whole argument rested on what it means when "some user space
>> > fails"; apparently that "user space" is just a test suite (which
>> > can/should be fixed).
>>
>> Even if it wasn't a test suite it should still f
ils when direct I/O is not supported.
> > >
> > > I think the whole argument rested on what it means when "some user space
> > > fails"; apparently that "user space" is just a test suite (which
> > > can/should be fixed).
> >
> >
on what it means when "some user space
> > fails"; apparently that "user space" is just a test suite (which
> > can/should be fixed).
>
> Even if it wasn't a test suite it should still fail. Either the fs
> supports O_DIRECT or it doesn't. Right
pace" is just a test suite (which
> can/should be fixed).
Even if it wasn't a test suite it should still fail. Either the fs
supports O_DIRECT or it doesn't. Right now, the only way an application
can figure this out is to try an open and see if it fails. Don't break
th
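That probe is simple enough to state concretely; this is essentially all
an application can do today (a sketch, not a library API):

#include <fcntl.h>
#include <unistd.h>

/* Returns 1 if the filesystem holding 'path' accepts O_DIRECT opens. */
static int fs_supports_o_direct(const char *path)
{
	int fd = open(path, O_RDONLY | O_DIRECT);
	if (fd < 0)
		return 0;	/* typically EINVAL when unsupported */
	close(fd);
	return 1;
}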
should be fixed).
> We can
> choose to fake direct I/O or fix user-space. The latter seems to be the
> preferred course of action, and you are correctly pointing at the man
> page.
>
> However, if
>
> 1. we are the only FS erroring out on O_DIRECT
> 2. other file-systems not
On 24.08.2015 at 11:34, Artem Bityutskiy wrote:
> On Mon, 2015-08-24 at 01:03 -0700, Christoph Hellwig wrote:
>> On Mon, Aug 24, 2015 at 11:02:42AM +0300, Artem Bityutskiy wrote:
>>> Back when we were writing UBIFS, we did not need direct IO, so we
>>> did
>>> not implement it. But yes, probably s
On Mon, 2015-08-24 at 01:03 -0700, Christoph Hellwig wrote:
> On Mon, Aug 24, 2015 at 11:02:42AM +0300, Artem Bityutskiy wrote:
> > Back when we were writing UBIFS, we did not need direct IO, so we
> > did
> > not implement it. But yes, probably someone who cares could just
> > try
> > implementing
iscussion I want to introduce ubifs into xfstests.
FYI, xfstests is a test suite, _not_ an application. Adding O_DIRECT
just for xfstests is utterly dumb and suggesting that is even dumber.
xfstests should check for supported features, and skip tests that use
it.
On 08/24/2015 04:03 PM, Christoph Hellwig wrote:
On Mon, Aug 24, 2015 at 11:02:42AM +0300, Artem Bityutskiy wrote:
Back when we were writing UBIFS, we did not need direct IO, so we did
not implement it. But yes, probably someone who cares could just try
implementing this feature.
So I think th
On Mon, Aug 24, 2015 at 11:02:42AM +0300, Artem Bityutskiy wrote:
> Back when we were writing UBIFS, we did not need direct IO, so we did
> not implement it. But yes, probably someone who cares could just try
> implementing this feature.
So I think the answer here is to implement a real version in
On Mon, 2015-08-24 at 00:53 -0700, Christoph Hellwig wrote:
> On Mon, Aug 24, 2015 at 10:13:25AM +0300, Artem Bityutskiy wrote:
> > 1. we are the only FS erroring out on O_DIRECT
> > 2. other file-systems not supporting direct IO just fake it
>
> There are lots of file s
On Mon, Aug 24, 2015 at 10:13:25AM +0300, Artem Bityutskiy wrote:
> 1. we are the only FS erroring out on O_DIRECT
> 2. other file-systems not supporting direct IO just fake it
There are lots of file systems not supporting O_DIRECT, but ubifs might
be the most common one. Given that O_
follow it. This requires a little research. What would be the most
popular Linux FS which does not support direct I/O? Can we check what
it does?
All popular filesystems seem to support direct IO.
That's the problem: applications do not expect O_DIRECT to fail.
My intention was to do it lik
uld be the most
popular Linux FS which does not support direct I/O? Can we check
what
it does?
All popular filesystems seem to support direct IO.
That's the problem: applications do not expect O_DIRECT to fail.
My intention was to do it like exofs:
Fair enough, thanks!
Signed-off-by: A
> > > popular Linux FS which does not support direct I/O? Can we check
> > > what
> > > it does?
> >
> > All popular filesystems seem to support direct IO.
> > That's the problem: applications do not expect O_DIRECT to fail.
> >
0 +0800, Dongsheng Yang wrote:
> > > > On 08/20/2015 04:35 AM, Richard Weinberger wrote:
> > > > > Currently UBIFS does not support direct IO, but some
> > > > > applications
> > > > > blindly use the O_DIRECT flag.
> > > > > Instead o
wrote:
> >>> Currently UBIFS does not support direct IO, but some applications
> >>> blindly use the O_DIRECT flag.
> >>> Instead of failing upon open() we can do better and fall back
> >>> to buffered IO.
> >>
> >> Hmm, to be
check
> > what
> > it does?
>
> All popular filesystems seem to support direct IO.
> That's the problem: applications do not expect O_DIRECT to fail.
>
> My intention was to do it like exofs:
Fair enough, thanks!
Signed-off-by: Artem Bityutskiy
Artem,
On 20.08.2015 at 13:31, Artem Bityutskiy wrote:
> On Thu, 2015-08-20 at 11:00 +0800, Dongsheng Yang wrote:
>> On 08/20/2015 04:35 AM, Richard Weinberger wrote:
>>> Currently UBIFS does not support direct IO, but some applications
>>> blindly use the O_DIRECT f
On Thu, 2015-08-20 at 11:00 +0800, Dongsheng Yang wrote:
> On 08/20/2015 04:35 AM, Richard Weinberger wrote:
> > Currently UBIFS does not support direct IO, but some applications
> > blindly use the O_DIRECT flag.
> > Instead of failing upon open() we can do better and fall ba
On Wed, 2015-08-19 at 22:35 +0200, Richard Weinberger wrote:
> Currently UBIFS does not support direct IO, but some applications
> blindly use the O_DIRECT flag.
> Instead of failing upon open() we can do better and fall back
> to buffered IO.
>
> Cc: Dongsheng Yang
> Cc
S does not support direct IO, but some applications
blindly use the O_DIRECT flag.
Instead of failing upon open() we can do better and fall back
to buffered IO.
Hmm, to be honest, I am not sure we have to do it as Dave
suggested. I think that's just a work-around for current fstests.
IMHO, per
Yang, (Sorry if I've used your last name lately)
On 20.08.2015 at 05:00, Dongsheng Yang wrote:
> On 08/20/2015 04:35 AM, Richard Weinberger wrote:
>> Currently UBIFS does not support direct IO, but some applications
>> blindly use the O_DIRECT flag.
>> Instead of fai
On 08/20/2015 04:35 AM, Richard Weinberger wrote:
Currently UBIFS does not support direct IO, but some applications
blindly use the O_DIRECT flag.
Instead of failing upon open() we can do better and fall back
to buffered IO.
Hmm, to be honest, I am not sure we have to do it as Dave
suggested
Currently UBIFS does not support direct IO, but some applications
blindly use the O_DIRECT flag.
Instead of failing upon open() we can do better and fall back
to buffered IO.
Cc: Dongsheng Yang
Cc: dedeki...@gmail.com
Suggested-by: Dave Chinner
Signed-off-by: Richard Weinberger
---
fs/ubifs
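The shape of the fallback, as I read the description (whether the real
patch hooks ->open() exactly like this is an assumption; the point is
degrading to buffered IO rather than erroring):

static int ubifs_open(struct inode *inode, struct file *file)
{
	/* Fall back to buffered IO: silently drop O_DIRECT instead of
	 * failing the open with -EINVAL. */
	file->f_flags &= ~O_DIRECT;
	return generic_file_open(inode, file);
}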
On 8/14/2015 4:15 PM, Jeff Moyer wrote:
> commit bbab37ddc20b (block: Add support for DAX reads/writes to
> block devices) caused a regression in mkfs.xfs. That utility
> sets the block size of the device to the logical block size
> using the BLKBSZSET ioctl, and then issues a single sector read
>
commit bbab37ddc20b (block: Add support for DAX reads/writes to
block devices) caused a regression in mkfs.xfs. That utility
sets the block size of the device to the logical block size
using the BLKBSZSET ioctl, and then issues a single sector read
from the last sector of the device. This results
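The mkfs.xfs access pattern, reduced to a sketch (the device node is a
placeholder; BLKBSZSET and the trailing single-sector read are as
described above):

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/dev/sdX", O_RDONLY | O_DIRECT);	/* placeholder */
	if (fd < 0) { perror("open"); return 1; }

	int bsz = 512;
	unsigned long long bytes;
	ioctl(fd, BLKBSZSET, &bsz);	/* set soft block size, as mkfs.xfs does */
	ioctl(fd, BLKGETSIZE64, &bytes);

	static char buf[512] __attribute__((aligned(512)));
	/* the single-sector read from the last sector that regressed */
	if (pread(fd, buf, sizeof(buf), bytes - sizeof(buf)) != sizeof(buf))
		perror("pread");
	close(fd);
	return 0;
}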
toggles the O_DIRECT flag
during IO and it can deadlock because we grab inode->i_mutex in
nfs_file_direct_write(). So return 0 in that case. Then the generic
layer will fall back to buffered IO.
Signed-off-by: Peng Tao
Signed-off-by: Trond Myklebust
Signed-off-by: Kamal Mostafa
---
fs/nfs/direc
Hi Al,
I wonder if you would consider these two patches.
They extend the functionality of mlockall(MCL_FUTURE) to apply
to memory allocations when performing O_DIRECT io.
i.e. The first read or write to an O_DIRECT file descriptor will,
if MCL_FUTURE is in effect, cache any allocated memory
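From the caller's side the intent looks like this (a sketch of the usage
the series targets, not new API; today mlockall() covers only userspace
mappings, so kernel-side DIO allocations are the gap being filled):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* With MCL_FUTURE in effect, the series would also pin the
	 * allocations the kernel makes for the first O_DIRECT transfer. */
	mlockall(MCL_CURRENT | MCL_FUTURE);

	int fd = open("data.bin", O_RDONLY | O_DIRECT);	/* illustrative */
	static char buf[4096] __attribute__((aligned(4096)));
	if (fd >= 0) {
		pread(fd, buf, sizeof(buf), 0);	/* first DIO on this fd */
		close(fd);
	}
	return 0;
}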
O_DIRECT flag
during IO and it can deadlock because we grab inode->i_mutex in
nfs_file_direct_write(). So return 0 in that case. Then the generic
layer will fall back to buffered IO.
Signed-off-by: Peng Tao
Signed-off-by: Trond Myklebust
Signed-off-by: Jiri Slaby
---
fs/nfs/direct.c | 6 ++
toggles the O_DIRECT flag
during IO and it can deadlock because we grab inode->i_mutex in
nfs_file_direct_write(). So return 0 in that case. Then the generic
layer will fall back to buffered IO.
Signed-off-by: Peng Tao
Signed-off-by: Trond Myklebust
Signed-off-by: Luis Henriques
---
fs/nfs/direc
O_DIRECT flag
during IO and it can deadlock because we grab inode->i_mutex in
nfs_file_direct_write(). So return 0 in that case. Then the generic
layer will fall back to buffered IO.
Signed-off-by: Peng Tao
Signed-off-by: Trond Myklebust
Signed-off-by: Greg Kroah-Hartman
---
fs/nfs/direc
page cache of the outer
>> filesystem (one that keeps image file of the loop device).
>
> Yes, I agree avoidance of double cache is very good, at least
> page consumption can be decreased, avoid one copy and make the backed
> file more like a 'block' device.
>
ore like a 'block' device.
>
> So I don't think it's correct to compare the performance of aio based
> loop-mq with loop-mq v3. Aio based approach is OK as long as it doesn't
> introduce significant overhead as compared with submitting bio-s
> straightforward
with garbage, one
>> > with actual data. Kernel wins.
>> >
>> > So, how to implement Linus's advice?
>>
>> Use O_DIRECT. There are lots of problems with the mmap() model, in
>> particular with how mmu table changes scale to large numbers of CPU
>
partition data. So, kernel and
> > DMA actually compete on the RAM area to fill it - one with garbage, one
> > with actual data. Kernel wins.
> >
> > So, how to implement Linus's advice?
>
> Use O_DIRECT. There are lots of problems with the mmap() model, in
> part
On 12/31/2014 04:52 PM, Ming Lei wrote:
On Thu, Jan 1, 2015 at 6:35 AM, Sedat Dilek wrote:
On Wed, Dec 31, 2014 at 10:52 PM, Dave Kleikamp
wrote:
On 12/31/2014 02:38 PM, Sedat Dilek wrote:
What has happened to that aio_loop patchset?
Is it in Linux-next?
( /me started to play with "block: lo
t - one with garbage, one
> with actual data. Kernel wins.
>
> So, how to implement Linus's advice?
Use O_DIRECT. There are lots of problems with the mmap() model, in
particular with how mmu table changes scale to large numbers of CPU
threads (ie they don't).
You would need to modi
On Thu, Jan 1, 2015 at 6:35 AM, Sedat Dilek wrote:
> On Wed, Dec 31, 2014 at 10:52 PM, Dave Kleikamp
> wrote:
>> On 12/31/2014 02:38 PM, Sedat Dilek wrote:
>>>
>>> What has happened to that aio_loop patchset?
>>> Is it in Linux-next?
>>> ( /me started to play with "block: loop: convert to blk-mq