ase():
> tries to return an error if it can't free preallocated blocks.
>
> xfs_release():
> similar to the previous case.
Not quite right. XFS only returns an error if a data
writeback failure, filesystem corruption, or a shutdown is detected
during whatev
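[A minimal userspace sketch of the consequence being discussed (editor's
illustration, not from the thread; the file name is arbitrary): because
writeback failures are reported when the data is flushed, a caller that
checks only write() can miss them, so fsync() and close() return values
both need checking.]

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return 1;
	if (write(fd, "data\n", 5) != 5)	/* submission error only */
		perror("write");
	if (fsync(fd) < 0)		/* writeback failure surfaces here... */
		perror("fsync");
	if (close(fd) < 0)		/* ...or, on some filesystems, here */
		perror("close");
	return 0;
}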
On Fri, Nov 30, 2018 at 01:00:52PM -0500, Ric Wheeler wrote:
> On 11/30/18 7:55 AM, Dave Chinner wrote:
> >On Thu, Nov 29, 2018 at 06:53:14PM -0500, Ric Wheeler wrote:
> >>Other file systems also need to
> >>accommodate/probe behind the fictitious visible storage device
ork
> properly for these modified functions.
>
> Miscellanea:
>
> o Remove extra trailing ; and blank line from xfs_agf_verify
>
> Signed-off-by: Joe Perches
> ---
XFS bits look fine.
Acked-by: Dave Chinner
--
Dave Chinner
da...@fromorbit.com
On Wed, Dec 21, 2016 at 09:46:37PM -0800, Linus Torvalds wrote:
> On Wed, Dec 21, 2016 at 9:13 PM, Dave Chinner wrote:
> >
> > There may be deeper issues. I just started running scalability tests
> > (e.g. 16-way fsmark create tests) and about a minute in I got a
> > di
> report, so I'm not really sure what's going on here anyway.
http://www.gossamer-threads.com/lists/linux/kernel/2587485
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Dec 22, 2016 at 04:13:22PM +1100, Dave Chinner wrote:
> On Wed, Dec 21, 2016 at 04:13:03PM -0800, Chris Leech wrote:
> > On Wed, Dec 21, 2016 at 03:19:15PM -0800, Linus Torvalds wrote:
> > > Hi,
> > >
> > > On Wed, Dec 21, 2016 at 2:16 PM, Dave Chinner
On Wed, Dec 21, 2016 at 04:13:03PM -0800, Chris Leech wrote:
> On Wed, Dec 21, 2016 at 03:19:15PM -0800, Linus Torvalds wrote:
> > Hi,
> >
> > On Wed, Dec 21, 2016 at 2:16 PM, Dave Chinner wrote:
> > > On Fri, Dec 16, 2016 at 10:59:06AM -0800, Chris L
iscsi guys
seem to have bounced it and no-one is looking at it.
I'm disappearing for several months at the end of tomorrow, so I
thought I better make sure you know about it. I've also added
linux-scsi, linux-block to the cc list
Cheers,
Dave.
> On Thu, Dec 15, 2016 at 09:29
On Tue, Jul 19, 2016 at 02:22:47PM -0700, Calvin Owens wrote:
> On 07/18/2016 07:05 PM, Calvin Owens wrote:
> >On 07/17/2016 11:02 PM, Dave Chinner wrote:
> >>On Sun, Jul 17, 2016 at 10:00:03AM +1000, Dave Chinner wrote:
> >>>On Fri, Jul 15, 2016 at 05:18:
On Sun, Jul 17, 2016 at 10:00:03AM +1000, Dave Chinner wrote:
> On Fri, Jul 15, 2016 at 05:18:02PM -0700, Calvin Owens wrote:
> > Hello all,
> >
> > I've found a nasty source of slab corruption. Based on seeing similar
> > symptoms on boxes at Facebook
> fd = open(argv[1], O_RDWR|O_CREAT, 0644);
> if (fd == -1) {
>         perror("Can't open");
>         return 1;
> }
>
> if (!fork()) {
>         count = atol(argv[2]);
>
>         while (1) {
>                 for (i = 0; i < count
ace the IO may not have
even been sent to the device (e.g. it could be queued by the IO
scheduler in the block layer). i.e. you're not timing IO, you're
timing CPU overhead of IO submission.
For an apples-to-apples comparison, you need to use fsync() to
physically force the written data to disk.
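[A minimal sketch of that comparison (editor's illustration, not from the
thread; file name and IO size are arbitrary): time write() alone, then
write() plus fsync(). The difference is the physical IO the
submission-only measurement misses.]

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static double secs(struct timespec a, struct timespec b)
{
	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
	char buf[65536];
	struct timespec t0, t1, t2;
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return 1;
	memset(buf, 0, sizeof(buf));

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t1);	/* CPU overhead of submission */
	if (fsync(fd) < 0)			/* wait for the physical IO */
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t2);

	printf("write: %.6fs  write+fsync: %.6fs\n",
	       secs(t0, t1), secs(t0, t2));
	close(fd);
	return 0;
}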
> "speak now or forever hold your peace" review deadline?
I say just ask Linus to pull it immediately after the next merge
window closes
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
e which patches your note is referring to here.
The XFS change here looks fine.
Acked-by: Dave Chinner
-Dave.
--
Dave Chinner
da...@fromorbit.com
specific plugging problem you've identified (i.e. do_direct_IO() is
flushing far too frequently) rather than making a sweeping
generalisation that the IO stack plugging infrastructure
needs fundamental change?
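[For reference, a sketch of the plugging API in question (editor's
illustration; the helper name is invented, and it assumes a recent kernel
where submit_bio() takes a single bio argument): submissions between
blk_start_plug() and blk_finish_plug() sit on a per-task list where they
can be merged before dispatch.]

#include <linux/bio.h>
#include <linux/blkdev.h>

/* Sketch only: batch bio submission under one plug so the block
 * layer can merge adjacent requests before they are dispatched. */
static void submit_batch(struct bio **bios, int nr)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);
	for (i = 0; i < nr; i++)
		submit_bio(bios[i]);	/* queued on the plug, not dispatched */
	blk_finish_plug(&plug);		/* flush the whole batch */
}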
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Mar 16, 2015 at 08:12:16PM -0500, Alireza Haghdoost wrote:
> On Mon, Mar 16, 2015 at 3:32 PM, Dave Chinner wrote:
> > On Mon, Mar 16, 2015 at 11:28:53AM -0400, James Bottomley wrote:
> >> Probably need to cc dm-devel here. However, I think we're all agreed
On Mon, Mar 16, 2015 at 11:28:53AM -0400, James Bottomley wrote:
> [cc to linux-scsi added since this seems relevant]
> On Mon, 2015-03-16 at 17:00 +1100, Dave Chinner wrote:
> > Hi Folks,
> >
> > As I told many people at Vault last week, I wrote a document
> > outl
> > as of yet ignored the zone management pieces. I have thought
> > (briefly) of the possible need for a new allocator: the group
> > allocator. As there can only be a few (relatively) zones available at
> > any one time, We might need a mechanism to tell which are available
> > and which are not. The stack will have to collectively work together
> > to find a way to request and use zones in an orderly fashion.
>
> Here I think the sense of LSF/MM was that only allowing a fixed number
> of zones to be open would get a bit unmanageable (unless the drive
> silently manages it for us). The idea of different sized zones is also
> a complicating factor.
Not for XFS - my proposal handles variable sized zones without any
additional complexity. Indeed, it will handle zone sizes from 16MB
to 1TB without any modification - mkfs handles it all when it
queries the zones and sets up the zone allocation inodes...
And we limit the number of "open zones" by the number of zone groups
we allow concurrent allocation to.
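[A concrete sketch of the zone query step mkfs would perform (editor's
illustration; it uses the BLKREPORTZONE ioctl, which landed in mainline
after this discussion), enumerating the first few zones and their sizes:]

#include <fcntl.h>
#include <linux/blkzoned.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct blk_zone_report *rep;
	unsigned int i, nr = 16;
	int fd;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;
	rep = calloc(1, sizeof(*rep) + nr * sizeof(struct blk_zone));
	rep->sector = 0;	/* start reporting from the first zone */
	rep->nr_zones = nr;
	if (ioctl(fd, BLKREPORTZONE, rep) < 0) {
		perror("BLKREPORTZONE");
		return 1;
	}
	/* len varies per zone on drives with non-uniform zone sizes */
	for (i = 0; i < rep->nr_zones; i++)
		printf("zone %u: start %llu, len %llu (512B sectors)\n", i,
		       (unsigned long long)rep->zones[i].start,
		       (unsigned long long)rep->zones[i].len);
	close(fd);
	free(rep);
	return 0;
}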
> The other open question is that if we go for
> fully drive managed, what sort of alignment, size, trim + anything else
> should we do to make the drive's job easier. I'm guessing we won't
> really have a practical answer to any of these until we see how the
> market responds.
I'm not aiming this proposal at drive managed, or even host-managed
drives: this proposal is for full host-aware (i.e. error on
out-of-order write) drive support. If you have drive managed SMR,
then there's pretty much nothing to change in existing filesystems.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
"mke2fs -t ext3
> /dev/vdc" where /dev/vdc is a 5 gig virtio partition.
Short reads are more likely a bug in all the iovec iterator stuff
that got merged in from the vfs tree. ISTR a 32-bit-only bug in that
stuff go past, to do with not being able to partition a 32GB block
device
time representation, and the kernel
to be independent of the physical filesystem time encoding
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
variables, possibly with sparse support to help us out. Big Job.
Yes, that's what Christoph's patchset did.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Thu, Jan 23, 2014 at 04:44:38PM +, Mel Gorman wrote:
> On Thu, Jan 23, 2014 at 07:47:53AM -0800, James Bottomley wrote:
> > On Thu, 2014-01-23 at 19:27 +1100, Dave Chinner wrote:
> > > On Wed, Jan 22, 2014 at 10:13:59AM -0800, James Bottomley wrote:
> > > > On
On Thu, Jan 23, 2014 at 07:55:50AM -0500, Theodore Ts'o wrote:
> On Thu, Jan 23, 2014 at 07:35:58PM +1100, Dave Chinner wrote:
> > >
> > > I expect it would be relatively simple to get large blocksizes working
> > > on powerpc with 64k PAGE_SIZE. So before div
page, one buffer
head, one filesystem block.
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
> > > > I will have to see if I can get a storage vendor to make a public
> > > > statement, but there are vendors hoping to see this land in Linux in
> > > > the next few years.
> > >
> > > What about the second and third questions -- is someone wor
n't impact performance
> dramatically. The real question is can the FS make use of this layout
> information *without* changing the page cache granularity? Only if you
> answer me "no" to this do I think we need to worry about changing page
> cache granularity.
We already do
already have such infrastructure in XFS to support directory
blocks larger than filesystem block size
FWIW, as to the original "large sector size" support question, XFS
already supports sector sizes up to 32k. The limitation is
actually a limitation of the journal format, so going larger than
that would take some work...
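[A small sketch of the device-side probing involved (editor's
illustration, using the standard BLKSSZGET/BLKPBSZGET ioctls): mkfs tools
query the logical and physical sector sizes and pick a filesystem sector
size at least as large as the logical one.]

#include <fcntl.h>
#include <linux/fs.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fd, lss = 0;
	unsigned int pss = 0;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
		return 1;
	ioctl(fd, BLKSSZGET, &lss);	/* logical sector size */
	ioctl(fd, BLKPBSZGET, &pss);	/* physical sector size */
	printf("logical %d bytes, physical %u bytes\n", lss, pss);
	close(fd);
	return 0;
}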
Cheers,
Dave.
--
Dave Chinner
da...@fromorbit.com
On Mon, Jan 20, 2014 at 05:58:55AM -0800, Christoph Hellwig wrote:
> On Thu, Jan 16, 2014 at 09:07:21AM +1100, Dave Chinner wrote:
> > Yes, I think it can be done relatively simply. We'd have to change
> > the code in xfs_file_aio_write_checks() to check whether EOF zeroing
>
by removing a single if() check in
xfs_iomap_write_direct(). We already use unwritten extents for DIO
within EOF to avoid races that could expose uninitialised blocks, so
we just need to make that unconditional behaviour. Hence racing IO
on concurrent appending i_size updates will only ever see a hole
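[A userspace view of the same mechanism (editor's sketch; the behavior
shown is how XFS and ext4 handle preallocation): fallocate() with
FALLOC_FL_KEEP_SIZE allocates extents as unwritten past EOF, so a reader
can never see stale block contents, only a hole.]

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
	char buf[16];
	ssize_t n;

	if (fd < 0)
		return 1;
	/* Allocate 1MB past EOF as unwritten extents; i_size stays 0. */
	if (fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, 1 << 20) < 0)
		perror("fallocate");
	/* A read in that range sees EOF/zeros, never stale disk data. */
	n = pread(fd, buf, sizeof(buf), 4096);
	printf("pread returned %zd (i_size is still 0)\n", n);
	close(fd);
	return 0;
}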