On 2017-11-02 13:50, Lakshmipathi.G wrote:
>>
>> I'll try to reproduce it with FS_CHECK_INTEGRITY enabled.
>>
>
> Okay, thanks for the details. Here's the kernel config file which hits
> this issue:
> https://github.com/Lakshmipathi/btrfsqa/blob/master/setup/config/kernel.config#L3906
This patch maintains consistency of the mode used in device get and put.
It is just a cleanup; no problem was noticed, so there are no functional
changes.
Signed-off-by: Anand Jain
---
v2: commit update
v3: commit update
fs/btrfs/volumes.c | 16
1 file changed, 8 insertio
We feed back IO progress when it falls below 2/3 of the limit obtained
from btrfs_async_submit_limit(), which creates a wait for the write
process and makes progress during the async submission.
In general the device/transport queue depth is 256, and
btrfs_async_submit_limit() returns 256 times per dev
On 10/31/2017 10:21 PM, Nikolay Borisov wrote:
On 31.10.2017 04:11, Anand Jain wrote:
On 10/30/2017 10:39 PM, David Sterba wrote:
On Fri, Oct 20, 2017 at 10:07:15PM +0800, Anand Jain wrote:
We aren't setting FMODE_WRITE when initializing the btrfs_device
structure and when calling blkdev
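A minimal sketch of the consistency idea (illustrative kernel-style code,
not the actual patch; blkdev_get_by_path() and blkdev_put() are the APIs
involved):

/* Illustrative: use one fmode_t for both get and put. */
static int open_device(const char *device_path, void *holder)
{
        fmode_t mode = FMODE_READ | FMODE_WRITE | FMODE_EXCL;
        struct block_device *bdev;

        bdev = blkdev_get_by_path(device_path, mode, holder);
        if (IS_ERR(bdev))
                return PTR_ERR(bdev);
        /* ... use the device ... */
        blkdev_put(bdev, mode);  /* put with the same mode used for get */
        return 0;
}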
On 10/31/2017 10:18 PM, Nikolay Borisov wrote:
On 31.10.2017 14:59, Anand Jain wrote:
btrfs_async_submit_limit() returns the queue depth, 256; however, at the
call sites we scale it to 2/3 of that. So instead let the function return
the final computed value.
Signed-off-by: Anand Jain
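A sketch of the described change (assumed shape, not the verbatim patch):

/* Sketch: return the final computed value, so callers no longer
 * scale the raw queue depth by 2/3 themselves. */
u64 btrfs_async_submit_limit(struct btrfs_fs_info *info)
{
        u64 limit = 256;        /* typical device/transport queue depth */

        return limit * 2 / 3;   /* the 2/3 threshold, computed here */
}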
>
> I'll try to reproduce it with FS_CHECK_INTEGRITY enabled.
>
Okay, thanks for the details. Here's the kernel config file which hits
this issue:
https://github.com/Lakshmipathi/btrfsqa/blob/master/setup/config/kernel.config#L3906
thanks!
Cheers,
Lakshmipathi.G
http://www.giis.co.in http://www.webminal.org
On 2017-11-02 13:19, Lakshmipathi.G wrote:
> Hi.
>
> I'm constantly hitting this bug while running btrfs-progs:fsck-test:012.
> Screencast: https://asciinema.org/a/wQxZjCeVvX2kVqKjVBGyR0klq
>
> Logged more details: https://bugzilla.kernel.org/show_bug.cgi?id=197587
> Anyone else got this issue or something wrong with my test environment?
On 11/02/2017 12:13 PM, Wang Shilong wrote:
On 2017-11-02 at 11:44 AM, Su Yue wrote:
Sorry, the patchset does not work as expected.
Please ignore it.
On 11/02/2017 11:23 AM, Su Yue wrote:
The patchset adds an option '--compress-force' to work with
'btrfs fi defrag -c'. Then no-compression files will be set with the
compression property specified by '-c' (zlib default).
Hi.
I'm constantly hitting this bug while running btrfs-progs:fsck-test:012.
Screencast: https://asciinema.org/a/wQxZjCeVvX2kVqKjVBGyR0klq
Logged more details: https://bugzilla.kernel.org/show_bug.cgi?id=197587
Anyone else got this issue or something wrong with my test environment?
Cheers,
On 2017-11-02 at 11:44 AM, Su Yue wrote:
Sorry, the patchset does not work as expected.
Please ignore it.
On 11/02/2017 11:23 AM, Su Yue wrote:
> The patchset adds an option '--compress-force' to work with
> 'btrfs fi defrag -c'. Then no-compression files will be set with the
> compression property specified by '-c' (zlib default).
Sorry, the patchset does not work as expected.
Please ignore it.
On 11/02/2017 11:23 AM, Su Yue wrote:
The patchset adds an option '--compress-force' to work with
'btrfs fi defrag -c'. Then no-compression files will be set with the
compression property specified by '-c' (zlib default).
patch[1-4] divide the property handler into setter, getter, and printer.
Previously, the function prototype prop_handler_t() in props.h was just
for setting and getting @value. However, it prints @value directly
instead of returning it, so it was difficult to get @value by @name.
For code reuse, divide prop_handler_t into three handlers:
1) prop_setter_t: set @value
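The split might look like this (prop_setter_t is named above; the getter
and printer names below are my guesses following the same pattern):

/* Guessed prototypes, following the prop_setter_t naming above: */
typedef int  (*prop_setter_t)(int fd, const char *name, const char *value);
typedef int  (*prop_getter_t)(int fd, const char *name, char **value);
typedef void (*prop_printer_t)(const char *name, const char *value);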
Introduce set_prop_label(), get_prop_label(), print_prop_label()
to set/get/print label.
Signed-off-by: Su Yue
---
props.c | 64 +++-
1 file changed, 55 insertions(+), 9 deletions(-)
diff --git a/props.c b/props.c
index da69f6e9314c..2
Introduce set_prop_compression(), get_prop_compression()
and print_prop_compression() to set/get/print the compression property.
Signed-off-by: Su Yue
---
props.c | 68 -
1 file changed, 55 insertions(+), 13 deletions(-)
diff --git a/p
Now, files which don't have the compression property won't be defragmented
with compression.
So add an option '--compress-force' to extend -c to drop the nocompress
flag on files.
If the option is enabled and a file doesn't have the compression property:
First add a compression property which is specified by optio
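For reference, the compression property on a file is exposed to user space
as the btrfs.compression xattr, so the per-file step could look like this
sketch:

#include <string.h>
#include <sys/xattr.h>

/* Sketch: setting the compression property on one file amounts to
 * setting the btrfs.compression xattr (algo is e.g. "zlib"). */
static int set_compression(const char *path, const char *algo)
{
        return setxattr(path, "btrfs.compression", algo, strlen(algo), 0);
}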
Introduce set_prop_read_only(), get_prop_read_only() and
print_prop_read_only() to set, get and print whether the subvolume
is read-only or not.
Signed-off-by: Su Yue
---
props.c | 72 -
1 file changed, 62 insertions(+), 10 deletions(-)
diff
The patchset adds an option '--compress-force' to work with
'btrfs fi defrag -c'. Then no-compression files will be set with the
compression property specified by '-c' (zlib default).
patch[1-4] divide the property handler into setter, getter, and printer.
Then patch[5] can enhance defragment more easily.
Su Yue
Has this been discussed here? Has anything changed since it was written?
Parity-based redundancy (RAID5/6/triple parity and beyond) on BTRFS
and MDADM (Dec 2014) – Ronny Egners Blog
http://blog.ronnyegner-consulting.de/2014/12/10/parity-based-redundancy-raid56triple-parity-and-beyond-on-btrfs-and-
Here we have defined two flags:
- Faulty
- In_sync
Currently only In_sync is in use; it only matters when the device serves
as part of a raid profile. The In_sync flag is decided when mounting
a btrfs and opening a device; by default every device is set with
In_sync, but it would not be set if its gener
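A sketch of how such per-device flags could be carried (the bit numbers
and the dev_state field name are illustrative assumptions):

/* Illustrative flag bits on struct btrfs_device: */
#define BTRFS_DEV_STATE_FAULTY   0
#define BTRFS_DEV_STATE_IN_SYNC  1

/* Default at device open: assume the device is in sync... */
set_bit(BTRFS_DEV_STATE_IN_SYNC, &device->dev_state);
/* ...but clear it if its generation turns out to be behind. */
clear_bit(BTRFS_DEV_STATE_IN_SYNC, &device->dev_state);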
The current method can sometimes report the In_sync status wrongly: if
the out-of-sync device happens to be the first one we check, then since
it is the only one so far, it'll be set In_sync; later, if a newer device
is found, we change it to !In_sync, but that is not reported in the
kernel log.
This cha
With raid6 profile, btrfs can end up with data corruption by the
following steps.
Say we have 5 disks that are set up with the raid6 profile,
1) mount this btrfs
2) one disk gets pulled out
3) write something to btrfs and sync
4) another disk gets pulled out
5) write something to btrfs and sync
6)
This is an attempt to fix a raid6 bug: if two disks get hotplugged at
run time, the reconstruction process is not able to rebuild correct
data, even though raid6 can theoretically tolerate two disk failures.
Patch 1 is a preparation patch, which introduces the necessary flags
and flag handli
If a device is not in_sync, avoid reading data from it as data on it
might be stale.
Although checksums can detect stale data, so we won't return stale data
to users, this helps us read the good copy directly.
Signed-off-by: Liu Bo
---
fs/btrfs/volumes.c | 23 ---
1 file chan
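The idea sketched in C (the loop, field and flag names here are
illustrative, not the actual patch):

/* Illustrative only: pick a mirror, skipping devices that are not
 * in_sync, since their data may be stale. */
static int pick_mirror(struct map_lookup *map, int num_stripes)
{
        int i;

        for (i = 0; i < num_stripes; i++) {
                struct btrfs_device *dev = map->stripes[i].dev;

                if (!test_bit(BTRFS_DEV_STATE_IN_SYNC, &dev->dev_state))
                        continue;       /* possibly stale copy */
                return i;               /* read the good copy directly */
        }
        return 0;                       /* fall back to the first stripe */
}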
On Wed, Nov 1, 2017 at 8:21 AM, Austin S. Hemmelgarn
wrote:
>> The cache is in a separate location from the profiles, as I'm sure you
>> know. The reason I suggested a separate BTRFS subvolume for
>> $HOME/.cache is that this will prevent the cache files for all
>> applications (for that user) f
This is to reproduce a raid6 reconstruction bug after two drives go
offline and come back online via hotplug.
Signed-off-by: James Alandt
Signed-off-by: Liu Bo
---
tests/btrfs/152 | 121 ++
tests/btrfs/group | 1 +
2 files changed, 122 insertio
> Another one is to find the most fragmented files first, or all
> files of at least 1M with at least say 100 fragments, as in:
> find "$HOME" -xdev -type f -size +1M -print0 | xargs -0 filefrag \
> | perl -n -e 'print "$1\0" if (m/(.*): ([0-9]+) extents/ && $2 > 100)' \
> | xargs -0 btrfs fi defrag
[ ... ]
> The poor performance has existed from the beginning of using
> BTRFS + KDE + Firefox (almost 2 years ago), at a point when
> very few snapshots had yet been created. A comparison system
> running similar hardware as well as KDE + Firefox (and LVM +
> EXT4) did not have the performance pr
These rules have been hidden in several if-else branches and are not
straightforward to follow; for example, the dio submit hook's nocsum case
had a bug, i.e. doing async submit instead of sync submit, which has
been fixed recently.
This documents the rules for reference.
Signed-off-by: Liu Bo
---
fs/
On Wed, Nov 1, 2017 at 1:48 PM, Peter Grandi wrote:
>> When defragmenting individual files on a BTRFS filesystem with
>> COW, I assume reflinks between that file and all snapshots are
>> broken. So if there are 30 snapshots on that volume, that one
>> file will suddenly take up 30 times more space
On Wed, Nov 1, 2017 at 9:31 AM, Duncan <1i5t5.dun...@cox.net> wrote:
> Dave posted on Tue, 31 Oct 2017 17:47:54 -0400 as excerpted:
>
>> 6. Make sure Firefox is running in multi-process mode. (Duncan's
>> instructions, while greatly appreciated and very useful, left me
>> slightly confused about pu
I'm guessing this is related.
I noticed my TV wasn't recording to my drive, and when I tried to touch
a file on the drive, my console became unresponsive.
Trying to reboot took like 5 minutes to even stop the processes, and in
the end it couldn't unmount the drive and I had to cut the power to
finally g
On Wed, Nov 1, 2017 at 4:34 AM, Marat Khalili wrote:
>> We do experience severe performance problems now, especially with
>> Firefox. Part of my experiment is to reduce the number of snapshots on
>> the live volumes, hence this question.
>
> Just for statistics, how many snapshots do you have and
tree: https://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next.git
bpf-override-return
head: 4c8ff03808e83b25a9771d1171741d550072ba36
commit: 4608d4f4e271703f1609a78c9c0ac2f5d1fb59a2 [1/2] bpf: add a
bpf_override_function helper
config: i386-randconfig-x0-11020159 (attached as .config
Ben Hooper posted on Wed, 01 Nov 2017 16:18:25 +0000 as excerpted:
> Hello,
>
> I am trying to upgrade capacity on my btrfs filesystem by replacing
> smaller disks with larger ones. I added 2x8TB drives to the existing
> RAID10 but am not seeing the expected increase in space and am
> experiencin
In the last few days I made small progress on this. Though the
Python/bash script is working, it's not yet perfect.
I gladly welcome any feedback and/or scripts :-)
https://github.com/Lakshmipathi/btrfsqa
Cheers,
Lakshmipathi.G
http://www.giis.co.in http://www.webminal.org
On 2017-11-01 13:52, Andrei Borzenkov wrote:
On 01.11.2017 15:01, Austin S. Hemmelgarn wrote:
...
The default subvolume is what gets mounted if you don't specify a
subvolume to mount. On a newly created filesystem, it's subvolume ID 5,
which is the top-level of the filesystem itself. Debian does
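In C terms (a sketch; the device path and subvolume name are made up),
explicitly naming a subvolume at mount time overrides the default:

#include <sys/mount.h>

/* Sketch: no subvol option mounts the default subvolume; an explicit
 * subvol= option overrides it (paths here are made up). */
mount("/dev/sda1", "/mnt", "btrfs", 0, NULL);
mount("/dev/sda1", "/mnt", "btrfs", 0, "subvol=/snapshots/root");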
On Wed, Nov 01, 2017 at 11:36:05AM +0200, Nikolay Borisov wrote:
> Right before we go into this loop, locked_end is set to alloc_end - 1 and
> is used in nearby functions; no need to have exceptions. This just makes
> the code consistent, no functional changes.
>
> Signed-off-by: Nikolay Bo
On Wed, Nov 01, 2017 at 12:42:18PM +0200, Nikolay Borisov wrote:
>
>
> On 1.11.2017 11:46, Roman Mamedov wrote:
> > On Wed, 1 Nov 2017 11:32:18 +0200
> > Nikolay Borisov wrote:
> >
> >> Fallocating a file in btrfs goes through several stages. The one before
> >> actually
> >> inserting the f
On 01.11.2017 15:01, Austin S. Hemmelgarn wrote:
...
> The default subvolume is what gets mounted if you don't specify a
> subvolume to mount. On a newly created filesystem, it's subvolume ID 5,
> which is the top-level of the filesystem itself. Debian does not
> specify a subvolume in /etc/fstab d
> When defragmenting individual files on a BTRFS filesystem with
> COW, I assume reflinks between that file and all snapshots are
> broken. So if there are 30 snapshots on that volume, that one
> file will suddenly take up 30 times more space... [ ... ]
Defragmentation works by effectively making
On 2017-11-01 10:05, ST wrote:
3. In my current ext4-based setup I have two servers, where one syncs
files of a certain dir to the other using lsyncd (which launches rsync on
inotify events). As far as I understand, it is more efficient to use
btrfs send/receive (over ssh) than rsync (over ssh
Hello,
I am trying to upgrade capacity on my btrfs filesystem by replacing smaller
disks with larger ones. I added 2x8TB drives to the existing RAID10 but am not
seeing the expected increase in space and am experiencing enospc errors during
balance. This array has been extended several times bu
On 11/01/2017 03:05 PM, ST wrote as excerpted:
>> However, it's important to know that if your users have shell access,
>> they can bypass qgroups. Normal users can create subvolumes, and new
>> subvolumes aren't added to an existing qgroup by default (and unless I'm
>> mistaken, aren't constra
ODroid XU4
Arch Linux
Kernel 4.13 (custom)
4TB USB 3.0 mechanical WD Drive/hub (had bad-block issues in the past
that were "corrected")
Occurred when using rsync to copy files to an encfs mount over nfs
(only 22MB made it).
Note, I keep the activity on my odroid very low or things start to bug
out
On Tue, Oct 31, 2017 at 07:21:59PM +0000, Nick Terrell wrote:
> On 10/31/17, 9:48 AM, "David Sterba" wrote:
> > The current default for the compression file flag is 'zlib', the zstd
> > patch silently changed that to zstd. Though the choice of zlib might not
> > be the best one, we should keep the
> >>> 3. In my current ext4-based setup I have two servers, where one syncs
> >>> files of a certain dir to the other using lsyncd (which launches rsync on
> >>> inotify events). As far as I understand, it is more efficient to use
> >>> btrfs send/receive (over ssh) than rsync (over ssh) to sync
Dave posted on Tue, 31 Oct 2017 17:47:54 -0400 as excerpted:
> 6. Make sure Firefox is running in multi-process mode. (Duncan's
> instructions, while greatly appreciated and very useful, left me
> slightly confused about pulseaudio's compatibility with multi-process
> mode.)
Just to clarify:
The
For the following types, we have items with variable length:
(With BTRFS_ prefix and _KEY suffix snipped)
DIR_ITEM
DIR_INDEX
XATTR_ITEM
INODE_REF
INODE_EXTREF
ROOT_REF
ROOT_BACKREF
They all use @name_len to indicate their name length, and XATTR_ITEM has
an extra @data_len to indicate its data length.
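The kind of bounds check a central tree-checker can perform might look
like this sketch (identifiers are illustrative, not the exact kernel
code):

/* Sketch: verify that name_len (plus data_len for XATTR_ITEM) fits
 * inside the item before anyone dereferences the name or data. */
static int check_name_len(u32 item_size, u32 name_len, u32 data_len)
{
        if ((u64)name_len + data_len >
            item_size - sizeof(struct btrfs_dir_item))
                return -EUCLEAN;        /* corrupted leaf */
        return 0;
}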
On 2017-10-31 20:37, Dave wrote:
On Tue, Oct 31, 2017 at 7:06 PM, Peter Grandi
wrote:
Also nothing forces you to defragment a whole filesystem, you
can just defragment individual files or directories by using
'find' with it.
Thanks for that info. When defragmenting individual files on a BTR
Since the tree-checker has verified the leaf when reading from disk, we
don't need the existing checkers.
This cleanup reverts the following commits:
fbc326159a01 ("btrfs: Verify dir_item in iterate_object_props")
64c7b01446f4 ("btrfs: Check name_len before in btrfs_del_root_ref")
488d7c456653 ("btrfs: Che
ST posted on Tue, 31 Oct 2017 22:06:24 +0200 as excerpted:
> Also another questions in this regard - I tried to "set-default" and
> then reboot and it worked nice - I landed indeed in the snapshot, not
> top-level volume. However /etc/fstab didn't change and actually showed
> that top-level volume
On 11/01/2017 01:07 PM, Qu Wenruo wrote:
>
>
> On 2017-11-01 19:31, ST wrote:
>> On Wed, 2017-11-01 at 19:17 +0800, Qu Wenruo wrote:
>>>
>>> On 2017-11-01 19:04, ST wrote:
>>>> Hello,
>>>>
>>>> I read in different places that one should keep the number of snapshots
>>>> low - around 15-20. My quest
On 2017-11-01 19:31, ST wrote:
> On Wed, 2017-11-01 at 19:17 +0800, Qu Wenruo wrote:
>>
>> On 2017-11-01 19:04, ST wrote:
>>> Hello,
>>>
>>> I read in different places that one should keep the number of snapshots
>>> low - around 15-20. My question - is this a limitation on the total
>>> number of snapsh
On 2017-10-31 16:06, ST wrote:
Thank you very much for such an informative response!
On Tue, 2017-10-31 at 13:45 -0400, Austin S. Hemmelgarn wrote:
On 2017-10-31 12:23, ST wrote:
Hello,
I've recently learned about btrfs and am considering utilizing it for my needs.
I have several questions in this r
On Wed, 2017-11-01 at 19:17 +0800, Qu Wenruo wrote:
>
> On 2017-11-01 19:04, ST wrote:
> > Hello,
> >
> > I read in different places that one should keep the number of snapshots
> > low - around 15-20. My question - is this a limitation on the total
> > number of snapshots on the system or only on a related
On 2017-11-01 19:04, ST wrote:
> Hello,
>
> I read in different places that one should keep the number of snapshots
> low - around 15-20. My question - is this a limitation on the total
> number of snapshots on the system or only on a related (parent<->child)
> chain of snapshots?
Independent subvolumes do
Hello,
I read in different places that one should keep the number of snapshots
low - around 15-20. My question - is this a limitation on the total
number of snapshots on the system or only on a related (parent<->child)
chain of snapshots?
What I want to do is the following: create (and then rotate) the last 7
dai
On Wed, Nov 1, 2017 at 10:34 AM, Nikolay Borisov wrote:
>
>
> On 25.10.2017 17:59, fdman...@kernel.org wrote:
>> From: Filipe Manana
>>
>> This implements support for the zero range operation of fallocate. For now
>> at least it's as simple as possible while reusing most of the existing
>> fallocate
On 1.11.2017 11:46, Roman Mamedov wrote:
> On Wed, 1 Nov 2017 11:32:18 +0200
> Nikolay Borisov wrote:
>
>> Fallocating a file in btrfs goes through several stages. The one before
>> actually
>> inserting the fallocated extents is to create a qgroup reservation, covering
>> the desired range.
On 2017-11-01 18:22, Lars Noschinski wrote:
> Hi everyone,
>
> I have a machine which had an almost full 4TB btrfs partition on a
> SATA harddisk. I added another 4TB harddisk, put a btrfs partition on
> it and moved 1TB of data to the new partition.
>
> After that, I moved another part of the
On 25.10.2017 17:59, fdman...@kernel.org wrote:
> From: Filipe Manana
>
> This implements support for the zero range operation of fallocate. For now
> at least it's as simple as possible while reusing most of the existing
> fallocate and hole punching infrastructure.
>
> Signed-off-by: Filipe Mana
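For context, the user-visible operation being implemented is fallocate(2)
with the FALLOC_FL_ZERO_RANGE flag; a minimal caller:

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>

/* Zero the range [off, off+len) in place; the blocks stay
 * allocated, unlike punching a hole. */
static int zero_range(int fd, off_t off, off_t len)
{
        return fallocate(fd, FALLOC_FL_ZERO_RANGE, off, len);
}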
Hi everyone,
I have a machine which had an almost full 4TB btrfs partition on a
SATA harddisk. I added another 4TB harddisk, put a btrfs partition on
it and moved 1TB of data to the new partition.
After that, I moved another part of the data (probably around 500GB)
to the new partition and delete
On Wed, 1 Nov 2017 11:32:18 +0200
Nikolay Borisov wrote:
> Fallocating a file in btrfs goes through several stages. The one before
> actually
> inserting the fallocated extents is to create a qgroup reservation, covering
> the desired range. To this end there is a loop in btrfs_fallocate which
Right before we go into this loop, locked_end is set to alloc_end - 1 and
is used in nearby functions; no need to have exceptions. This just makes
the code consistent, no functional changes.
Signed-off-by: Nikolay Borisov
---
fs/btrfs/file.c | 4 ++--
1 file changed, 2 insertions(+), 2 delet
Fallocating a file in btrfs goes through several stages. The one before actually
inserting the fallocated extents is to create a qgroup reservation, covering
the desired range. To this end there is a loop in btrfs_fallocate which checks
to see if there are holes in the fallocated range or !PREALLOC
On 01/11/17 09:51, Dave wrote:
As already said by Roman Mamedov, rsync is a viable alternative to
send/receive with much less hassle. According to some reports it can even
be faster.
Thanks for confirming. I must have missed those reports. I had never
considered this idea until now -- but I like
On Tue, Oct 31, 2017 at 05:47:54PM -0400, Dave wrote:
> I'm following up on all the suggestions regarding Firefox performance
> on BTRFS.
>
>
>
> 5. Firefox profile sync has not worked well for us in the past, so we
> don't use it.
> 6. Our machines generally have plenty of RAM so we could put th
On 2017-10-28 01:37, David Sterba wrote:
> On Fri, Oct 27, 2017 at 03:29:28PM +0800, Qu Wenruo wrote:
>> This patchset adds quota support, which means the resulting fs will have
>> quota enabled by default, and its accounting is already consistent; no
>> manual rescan or quota enable is needed.
>