On Wed, 30 Dec 2015 18:18:26 +0100
David Sterba wrote:
> That's just the comment copied; the changelog does not explain why
> it's OK to do just the run_xor there. It does not seem trivial to me.
> Please describe what end result is expected after the code change.
In the RAID 6 case after a
On Wed, 30 Dec 2015 17:17:22 +0100
David Sterba wrote:
> Let me note that a good reputation is also built from patch reviews
> (hint hint).
Unfortunately, not too many patches are coming in for BTRFS presently.
Mailing list activity is down to 25-35 mails per day, mostly feature
requests and bug reports.
I
On Wed, 30 Dec 2015 16:58:05 +0100
David Sterba wrote:
> On Wed, Dec 30, 2015 at 06:15:23AM -0500, Sanidhya Solanki wrote:
> > - Implement a way to do an in-place Stripe Length change.
>
> How are you going to implement that? I've suggested the balance filter
> style of conversion, which is not
David Sterba wrote on 2015/12/30 17:17 +0100:
On Wed, Dec 30, 2015 at 10:10:44PM +0800, Qu Wenruo wrote:
Now I am on the same side as David.
Which means a runtime interface to change them (along with a mkfs option).
If we provide some configurable features, then they should be tunable at
both run time and mkfs time.
Hello,
Ever since 4.4.0-rc1 or so, BTRFS and XFS haven't played well with
hibernation. It may be something deeper down, as both filesystems seem
to have issues with not being able to commit/freeze, as can be seen below:
[81167.893207] PM: Syncing filesystems ... done.
[81168.194298] Freezing user space processes
kernel and btrfs-progs versions
and output from:
'btrfs fi show '
'btrfs fi usage '
'btrfs-show-super '
'df -h'
Then umount the volume, and mount with option enospc_debug, and try to
reproduce the problem, then include everything from dmesg from the
time the volume was mounted.
--
Chris Murphy
Hi,
I have a 6TB partition here; it filled up while still just under 2TB
was on it. btrfs fi df showed that Data is 1.92TiB:
Data, single: total=1.92TiB, used=1.92TiB
System, DUP: total=8.00MiB, used=224.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=5.00GiB, used=3.32GiB
Met
On Wed, 2015-12-30 at 21:02 +, Duncan wrote:
> For something like that, it'd pretty much /have/ to be done as COW, at
> least at the chunk level, tho the address from the outside may stay the
> same. That's what balance already does, after all.
Ah... of course... it would be basically COW
On Wed, 2015-12-30 at 20:57 +, Duncan wrote:
> Meanwhile, it's a pretty clever solution, I think. =:^)
Well, the problem with such workaround solutions is... end users get
used to them, rely on them, and suddenly they don't work anymore (which
the user probably wouldn't notice, though).
> I was thi
On Tue, Dec 29, 2015 at 08:10:07PM -0500, Nicholas Krause wrote:
> This fixes error handling in the function btrfs_dev_replace_kthread
> by checking whether the call to the function btrfs_dev_replace_continue_on_mount
> has failed, and if so, returning the error code to this function's caller in
> order to s
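The shape of the fix being reviewed is the standard kernel error-propagation
pattern. A minimal sketch, reusing the function names from the changelog;
this is an illustration only, not the actual patch:

	/*
	 * Sketch only: the real prototypes live in the btrfs tree, and the
	 * rest of the kthread body is elided.
	 */
	static int btrfs_dev_replace_kthread(void *data)
	{
		struct btrfs_fs_info *fs_info = data;
		int ret;

		ret = btrfs_dev_replace_continue_on_mount(fs_info);
		if (ret)
			return ret;	/* hand the errno back instead of dropping it */

		/* ... rest of the kthread work ... */
		return 0;
	}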
On Tue, Dec 29, 2015 at 08:20:47PM -0500, Nicholas Krause wrote:
> This fixes the incorrect return statement used when a failure occurs:
> it returned 0 rather than the variable ret, which may hold an error
> code that signals a failure in the function
> btrfs_mark_extent_written to its callers
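The pattern at issue, as a self-contained toy sketch (do_step() is a made-up
stand-in for the failing step, not the real btrfs code path):

	#include <errno.h>

	static int do_step(void) { return -EIO; }	/* hypothetical failing step */

	static int mark_extent_written_example(void)
	{
		int ret;

		ret = do_step();
		if (ret < 0)
			goto out;
		ret = 0;
	out:
		return ret;	/* before the fix this was "return 0;", losing the error */
	}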
Christoph Anton Mitterer posted on Wed, 30 Dec 2015 21:00:11 +0100 as
excerpted:
> On Tue, 2015-12-29 at 19:06 +0100, David Sterba wrote:
>> > Both of course open many questions (how to deal with crashes,
>> > etc.)...
>> > maybe having a look at how mdadm handles similar problems could be
>> > worthwhile.
Christoph Anton Mitterer posted on Wed, 30 Dec 2015 20:28:00 +0100 as
excerpted:
> On Wed, 2015-12-30 at 18:26 +, Duncan wrote:
>> That should work. Cat the files to /dev/null and check dmesg. For
>> single mode it should check the only copy. For raid1/10 or dup,
>> running two checks, ensu
On Tue, 2015-12-29 at 19:06 +0100, David Sterba wrote:
> > Both of course open many questions (how to deal with crashes,
> > etc.)...
> > maybe having a look at how mdadm handles similar problems could be
> > worthwhile.
>
> The crash consistency should remain; other than that, we'd have to
> enhance th
On Wed, 2015-12-30 at 22:10 +0800, Qu Wenruo wrote:
> Or, just don't touch it until there is really enough user demand.
I definitely think that there is demand... as I've written previously,
when I did some benchmarking tests (though on MD and HW RAID), depending
on the RAID chunk size, one got
On Wed, 2015-12-30 at 18:26 +, Duncan wrote:
> That should work. Cat the files to /dev/null and check dmesg. For
> single mode it should check the only copy. For raid1/10 or dup,
> running two checks, ensuring one is even-PID while the other is
> odd-PID, should work to check both cop
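A toy userspace illustration of the even/odd-PID trick Duncan describes,
assuming (as he does) that btrfs raid1 picks the mirror from the reader's
PID parity; all names below are invented for the sketch:

	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/types.h>
	#include <sys/wait.h>

	static void read_file_and_exit(const char *path)
	{
		char buf[65536];
		FILE *f = fopen(path, "r");

		if (!f)
			exit(1);
		while (fread(buf, 1, sizeof(buf), f) > 0)
			;	/* discard, like cat > /dev/null */
		fclose(f);
		exit(0);
	}

	int main(int argc, char **argv)
	{
		int seen_even = 0, seen_odd = 0;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <file>\n", argv[0]);
			return 1;
		}
		/* fork readers until both an even and an odd PID have read the file */
		while (!seen_even || !seen_odd) {
			pid_t pid = fork();

			if (pid < 0)
				return 1;
			if (pid == 0)
				read_file_and_exit(argv[1]);
			if (pid & 1)
				seen_odd = 1;
			else
				seen_even = 1;
			waitpid(pid, NULL, 0);
		}
		return 0;	/* now check dmesg for checksum errors */
	}

Note this only forces both PID parities; whether that actually exercises
both mirrors depends on the kernel's read balancing staying PID-based.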
On Wed, 2015-12-30 at 18:39 +0100, David Sterba wrote:
> The closest would be to read the files and look for any reported
> errors.
Doesn't that fail for any multi-device setup, where btrfs reads the
blocks from only one device and, if that copy verifies, doesn't check
the other?
Cheers,
Chris
Waxhead wrote:
Chris Murphy wrote:
Well all the generations on all devices are now the same, and so are
the chunk trees. I haven't looked at them in detail to see if there
are any discrepancies among them.
If you don't care much for this file system, then you could try btrfs
check --repair, usi
Chris Murphy wrote:
Well all the generations on all devices are now the same, and so are
the chunk trees. I haven't looked at them in detail to see if there
are any discrepancies among them.
If you don't care much for this file system, then you could try btrfs
check --repair, using btrfs-progs 4
David Sterba posted on Wed, 30 Dec 2015 18:39:49 +0100 as excerpted:
> On Wed, Dec 30, 2015 at 01:00:34AM +0100, Sree Harsha Totakura wrote:
>> Is it possible to confine scrubbing to a subvolume instead of the whole
>> file system?
>
> No. Scrub reads the blocks from devices (without knowing which
On Wed, Dec 30, 2015 at 01:00:34AM +0100, Sree Harsha Totakura wrote:
> Is it possible to confine scrubbing to a subvolume instead of the whole
> file system?
No. Scrub reads the blocks from devices (without knowing which files own
them) and compares them to the stored checksums.
> [...] Therefor
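A toy model of David's point: the scrub loop works purely on blocks and
stored checksums, with no notion of which file or subvolume owns a block,
which is why it cannot be confined to a subvolume. Everything below is
invented for illustration and is nothing like the real scrub code:

	#include <stdio.h>
	#include <stdint.h>

	#define NBLOCKS 4

	static uint32_t toy_csum(uint32_t data) { return data * 2654435761u; }

	int main(void)
	{
		uint32_t blocks[NBLOCKS] = { 1, 2, 3, 4 };	/* raw device blocks */
		uint32_t stored[NBLOCKS];			/* the checksum "tree" */

		for (int i = 0; i < NBLOCKS; i++)
			stored[i] = toy_csum(blocks[i]);

		blocks[2] ^= 0xff;	/* simulate on-disk corruption */

		/* the scrub pass: no file or subvolume appears anywhere here */
		for (int i = 0; i < NBLOCKS; i++)
			if (toy_csum(blocks[i]) != stored[i])
				printf("block %d: checksum mismatch\n", i);
		return 0;
	}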
On Wed, Dec 30, 2015 at 01:28:36AM -0500, Sanidhya Solanki wrote:
> The patch adds the xor function after the P stripe
> has failed, without bad data or the Q stripe.
That's just the comment copied; the changelog does not explain why it's
OK to do just the run_xor there. It does not seem trivial to me.
On Wed, Dec 30, 2015 at 10:10:44PM +0800, Qu Wenruo wrote:
> Now I am on the same side as David.
> Which means a runtime interface to change them (along with a mkfs option).
>
> If we provide some configurable features, then they should be tunable
> at both run time and mkfs time.
> Or, just don't touch it until there is really enough user demand.
On Wed, Dec 30, 2015 at 04:02:04PM +, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> While running a stress test I ran into the following trace/transaction
> abort:
>
> [471626.672243] [ cut here ]
> [471626.673322] WARNING: CPU: 9 PID: 19107 at fs/btrfs/extent-
From: Filipe Manana
While running a stress test I ran into the following trace/transaction
abort:
[471626.672243] [ cut here ]
[471626.673322] WARNING: CPU: 9 PID: 19107 at fs/btrfs/extent-tree.c:3740
btrfs_write_dirty_block_groups+0x17c/0x214 [btrfs]()
[471626.675492] B
On Wed, Dec 30, 2015 at 06:15:23AM -0500, Sanidhya Solanki wrote:
> Only one problem. I do not run BTRFS on my systems nor do I have a
> RAID setup, due to possessing a limited number of free drives. So, while
> I may be able to code for it, I will not be able to test it. I will need
> the communit
Hi,
Is it possible to confine scrubbing to a subvolume instead of the whole
file system?
The problem I am facing is that I have a 5 TB btrfs FS. On it I have
created subvolumes for weekly backups, personal photos, music, and
documents. Obviously, I am more concerned about my photos and document
On Wed, 30 Dec 2015 22:10:44 +0800
Qu Wenruo wrote:
> Understood now.
Good.
> I totally understand that implement ... to polish your
> skill.
That has got to be the most hilarious way I have ever seen someone
delegate a task. But it was effective.
Only one problem. I do not run BTRFS on m
On 12/30/2015 05:54 PM, Sanidhya Solanki wrote:
On Wed, 30 Dec 2015 19:59:16 +0800
Qu Wenruo wrote:
Not really sure about the difference between 2 and 3.
I should have made it clear before: I was asking about the exact use case
in mind when listing the choices. Option 2 would be for SysAdmins run
On Wed, 30 Dec 2015 19:59:16 +0800
Qu Wenruo wrote:
> Not really sure about the difference between 2 and 3.
I should have made it clear before: I was asking about the exact use case
in mind when listing the choices. Option 2 would be for SysAdmins running
production software and configuring it as they
On 12/29/2015 06:11 PM, Chandan Rajendra wrote:
On Tuesday 08 Dec 2015 16:40:42 Qu Wenruo wrote:
Enhance chunk validation:
1) Num_stripes
   We already have such a check, but it's only for the super block sys
   chunk array. Now check all on-disk chunks.
2) Chunk logical
   It should be aligned
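As a hedged sketch of what such per-item checks look like (a hypothetical
stand-alone validator, not Qu's actual patch):

	#include <errno.h>
	#include <stdint.h>

	/* Returns 0 if the chunk item passes the two checks described above. */
	static int check_chunk_item(uint64_t logical, uint16_t num_stripes,
				    uint32_t sectorsize)
	{
		if (num_stripes == 0)		/* 1) num_stripes must be non-zero */
			return -EIO;
		if (logical % sectorsize != 0)	/* 2) chunk logical must be aligned */
			return -EIO;
		return 0;
	}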
On 12/30/2015 02:39 PM, Sanidhya Solanki wrote:
On Tue, 29 Dec 2015 18:06:11 +0100
David Sterba wrote:
So you want to make the stripe size configurable?...
As I see it, there are 3 ways to do it:
- Make it a compile-time option that only configures it for a single
system with any devices tha
On Tue, 29 Dec 2015 18:06:11 +0100
David Sterba wrote:
> So you want to make the stripe size configurable?...
As I see it, there are 3 ways to do it:
- Make it a compile-time option that only configures it for a single
system with any devices that are added to the RAID.
- Make it a runtime option t
The patch adds the xor function for the case where only the P stripe
has failed, with no bad data stripes and no failed Q stripe.
Signed-off-by: Sanidhya Solanki
---
fs/btrfs/raid56.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 1a33d3e..d33734a 100644
--- a/fs/
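For context on why plain XOR is enough in that case: in RAID6 the P stripe
is simply the XOR of the data stripes (P = D0 ^ D1 ^ ... ^ Dn-1), while Q is
a separate Reed-Solomon syndrome. If only P is lost and the data stripes are
intact, P can be recomputed in one XOR pass and Q never enters into it. A
toy demonstration of that identity (not the raid56.c code):

	#include <assert.h>
	#include <stdint.h>
	#include <string.h>

	#define STRIPE_LEN 8
	#define NDATA 3

	/* P = XOR of all data stripes; no Galois-field math is needed. */
	static void xor_parity(uint8_t data[NDATA][STRIPE_LEN], uint8_t *p)
	{
		memset(p, 0, STRIPE_LEN);
		for (int d = 0; d < NDATA; d++)
			for (int i = 0; i < STRIPE_LEN; i++)
				p[i] ^= data[d][i];
	}

	int main(void)
	{
		uint8_t data[NDATA][STRIPE_LEN] = { "stripe0", "stripe1", "stripe2" };
		uint8_t p_orig[STRIPE_LEN], p_rebuilt[STRIPE_LEN];

		xor_parity(data, p_orig);	/* parity as first written */
		/* ... pretend the device holding P died here ... */
		xor_parity(data, p_rebuilt);	/* recovery: XOR the data again */
		assert(memcmp(p_orig, p_rebuilt, STRIPE_LEN) == 0);
		return 0;
	}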
Qu Wenruo wrote on 2015/12/30 16:38 +0800:
Hi Marcel
Thanks a lot for the feedback.
Marcel Ritter wrote on 2015/12/30 09:15 +0100:
Hi Qu Wenruo,
I just wanted to give some feedback on yesterday's dedup patches:
I just applied them to a 4.4-rc7 kernel and did some (very basic)
testing:
Test
Hi Marcel
Thanks a lot for the feedback.
Marcel Ritter wrote on 2015/12/30 09:15 +0100:
Hi Qu Wenruo,
I just wanted to give some feedback on yesterday's dedup patches:
I just applied them to a 4.4-rc7 kernel and did some (very basic)
testing:
Test1: in-memory
Didn't crash on my 350 GB test f