The test flow is to run fsstress after triggering a quota rescan.
The rule is simple: we just remove all files and directories,
sync the filesystem and see if each qgroup's ref and excl equal nodesize.
Signed-off-by: Wang Shilong
---
v2->v3: addressed comments from Josef:
- remove unnecessary redirection
On 05/09/2014 10:13 AM, Josef Bacik wrote:
The inode cache is saved in the FS tree itself for every individual FS tree,
which affects the sizes reported by qgroup show, so we need to explicitly turn it
off to get consistent values. Thanks,
Right, so we need to turn off inode_cache explicitly rather
Wang Shilong wrote:
On 05/09/2014 02:33 AM, Josef Bacik wrote:
> On 05/07/2014 11:38 PM, W
(cc David)
-liubo
On Fri, May 09, 2014 at 10:01:02AM +0800, Liu Bo wrote:
> For inline data extent, we need to make its length aligned, otherwise,
> we can get a phantom extent map which confuses readpages() to return -EIO.
>
> This can be detected by xfstests/btrfs/035.
>
> Reported-by: David
For inline data extent, we need to make its length aligned, otherwise,
we can get a phantom extent map which confuses readpages() to return -EIO.
This can be detected by xfstests/btrfs/035.
Reported-by: David Disseldorp
Signed-off-by: Liu Bo
---
fs/btrfs/ioctl.c | 6 +-
1 file changed, 5 i
When running low on available disk space and having several processes
doing buffered file IO, I got the following trace in dmesg:
[ 4202.720152] INFO: task kworker/u8:1:5450 blocked for more than 120 seconds.
[ 4202.720401] Not tainted 3.13.0-fdm-btrfs-next-26+ #1
[ 4202.720596] "echo 0 > /p
On 05/09/2014 02:33 AM, Josef Bacik wrote:
On 05/07/2014 11:38 PM, Wang Shilong wrote:
On 05/08/2014 04:58 AM, Josef Bacik wrote:
On 03/09/2014 11:44 PM, Wang Shilong wrote:
Test flow is to run fsstress after triggering quota rescan.
the rule is simple, we just remove all files and directorie
> +#ifdef CONFIG_BTRFS_FS_REF_VERIFY
> +int btrfs_build_ref_tree(struct btrfs_fs_info *fs_info);
> +void btrfs_free_ref_cache(struct btrfs_fs_info *fs_info);
> +int btrfs_ref_tree_mod(struct btrfs_root *root, u64 bytenr, u64 num_bytes,
> +		       u64 parent, u64 ref_root, u64 owner, u64
The compression layer seems to have been built to return -1 and have
callers make up errors that make sense. This isn't great because there
are different classes of errors that originate down in the compression
layer. Allocation failure and corrupt compressed data, to name two.
So let's return re
uncompress_inline() is silently dropping an error from
btrfs_decompress() after testing it and zeroing the page that was
supposed to hold decompressed data. This can silently turn compressed
inline data in to zeros if decompression fails due to corrupt compressed
data or memory allocation failure.
The btrfs compression wrappers translated errors from workspace
allocation to either -ENOMEM or -1. The compression type workspace
allocators are already returning an ERR_PTR(-ENOMEM). Just return that
and get rid of the magical -1.
This helps a future patch return errors from the compression wra
To Chris and others, if you want anything from that filesystem,
please let me know today, I'll destroy it tonight (12h from now, my time)
and rebuild it.
If 3.14.0 has known bugs that cause corruption, please let me know
and I'll create the new filesystem with 3.15-rc4 even if I don't love
running
On Thu, May 08, 2014 at 04:58:34PM +0100, Hugo Mills wrote:
>The first axis is selection of a suitable device from a list of
> candidates. I've renamed things from my last email to try to make
> things clearer, but example algorithms here could be:
>
> - first: The old algorithm, which simp
We were having corruption issues that were tied back to problems with the extent
tree. In order to track them down I built this tool to try and find the
culprit, which was pretty successful. If you compile with this tool on it will
live verify every ref update that the fs makes and make sure it i
On 5/8/14, 1:38 PM, Josef Bacik wrote:
> I don't have flink support in my xfsprogs, but it doesn't fail with "command
> not
> found" or whatever, it fails because I don't have the -T option, whereas Eric
> gets an error about $TEST_DIR being a directory because his xfs_io tries to
> open
> the di
I don't have flink support in my xfsprogs, but it doesn't fail with "command not
found" or whatever, it fails because I don't have the -T option, whereas Eric
gets an error about $TEST_DIR being a directory because his xfs_io tries to open
the directory first before it parses the options. So fix t
On Wed, May 7, 2014 at 6:34 PM, Marc MERLIN wrote:
> Can btrfs restore be used to navigate the filesystem and look for files and
> patterns
> without dumping the entire filesystem, which I don't have room for?
On recent versions of btrfs-progs, you can run btrfs restore with both
the verbose and
On 05/08/2014 05:50 AM, Russell Coker wrote:
I've got a server/workstation (KDE desktop and file server) running kernel
3.14.1 from the Debian package 3.14-trunk-amd64.
It was running well until I decided to do a full balance of the BTRFS RAID-1
array of 3TB SATA disks (which hadn't been balance
On Mon, May 05, 2014 at 10:17:38PM +0100, Hugo Mills wrote:
>A passing remark I made on this list a day or two ago set me to
> thinking. You may all want to hide behind your desks or in a similar
> safe place away from the danger zone (say, Vladivostok) at this
> point...
>
>If we switch t
Russell Coker posted on Thu, 08 May 2014 19:50:23 +1000 as excerpted:
> I've got a server/workstation (KDE desktop and file server) running
> kernel 3.14.1 from the Debian package 3.14-trunk-amd64.
>
> It was running well until I decided to do a full balance of the BTRFS
> RAID-1 array of 3TB SAT
Good day,
I ran into some trouble with inode-cache rebuilding on the root fs after
the filesystem was mounted without inode_cache, which stalls the boot of my
box by several minutes.
I boot with a command line like:
root=/dev/sda4 rootfstype=btrfs
rootflags=inode_cache,space_cache,autodefrag rw ...
However whe
On Wed, May 7, 2014 at 11:48 PM, Liu Bo wrote:
>
> On Wed, May 07, 2014 at 09:35:06AM -0300, Kenny MacDermid wrote:
> > On Tue, May 6, 2014 at 11:22 PM, Liu Bo wrote:
> > >
> > > What does sysrq+w say when the hang happens?
> >
> > The whole system isn't hung, I may have explained that wrong. The
This seemed to happen after a power failure. I rebooted and the FS was
mounted, but read-only, and there were some errors (journalctl was not able
to start; I did not capture all the errors). I rebooted again and then
it wouldn't mount at all. Is there anything else I can do?
uname -a
Linux sysresccd 3
> I canceled the balance after
> about 5 days when it had been claiming to be about 65% done for a day
> while doing a lot of disk IO.
I can see similar behaviour with 3.14.2 - after 4 days, it's only 25% done:
root 8382 2.1 0.0 17840 628 pts/1 D+ May04 124:12 \_ btrfs
balan
On 8/5/2014 4:26 AM, Wang Shilong wrote:
This patch adds an option '--check-data-csum' to verify data csums.
fsck won't check data csums unless users specify this option explicitly.
Can this option be added to btrfs restore as well? I think it would be a
good thing if users can tell restore to on
On Thu, May 08, 2014 at 11:05:22AM +0200, David Disseldorp wrote:
> Hi liubo,
>
> On Thu, 8 May 2014 12:11:24 +0800, Liu Bo wrote:
>
> > Something different here, I didn't get EIO on 3.15.0-rc4.
>
> Strange, I'm able to consistently reproduce this on a
> vanilla v3.15-rc4-202-g30321c7 kernel.
>
I've got a server/workstation (KDE desktop and file server) running kernel
3.14.1 from the Debian package 3.14-trunk-amd64.
It was running well until I decided to do a full balance of the BTRFS RAID-1
array of 3TB SATA disks (which hadn't been balanced before due to previous
kernels performing
Hi liubo,
On Thu, 8 May 2014 12:11:24 +0800, Liu Bo wrote:
> Something different here, I didn't get EIO on 3.15.0-rc4.
Strange, I'm able to consistently reproduce this on a
vanilla v3.15-rc4-202-g30321c7 kernel.
Does that mean the updated test passes successfully for you?
Cheers, David