Chris Murphy posted on Sun, 20 Apr 2014 14:26:37 -0600 as excerpted:
> On Apr 20, 2014, at 2:18 PM, Chris Murphy wrote:
>> What is unknown?
>
>
> /dev/sd[bcd] are 2GB, 3GB, and 4GB respectively.
>
> [root@localhost ~]# mkfs.btrfs -d raid0 -m raid1 /dev/sd[bcd]
[...]
> [root@localhost ~]# mou
I experimented with RAID5, but now I want to get rid of it:
$ sudo btrfs balance start -dconvert=raid1,soft -v /
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x300): converting, target=16, soft is on
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail
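A common way out of this kind of balance ENOSPC (a sketch, not from this
message; the usage threshold is illustrative) is to compact partially
filled chunks first, then retry the soft conversion:
$ sudo btrfs balance start -dusage=50 /
$ sudo btrfs balance start -dconvert=raid1,soft /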
On Apr 20, 2014, at 11:48 PM, Marc MERLIN wrote:
> On Sun, Apr 20, 2014 at 11:39:22PM -0600, Chris Murphy wrote:
>>
>> On Apr 20, 2014, at 1:46 PM, Marc MERLIN wrote:
>>
>>> Can you help me design this right?
>>>
>>> Long story short, I'm wondering if I can use btrfs send to copy sub
>>> subvolumes (by snapshotting a parent subvolume, and hopefully getting
>>> all the children underneath). My reading so far, says no.
On Sun, Apr 20, 2014 at 11:39:22PM -0600, Chris Murphy wrote:
>
> On Apr 20, 2014, at 1:46 PM, Marc MERLIN wrote:
>
> > Can you help me design this right?
> >
> > Long story short, I'm wondering if I can use btrfs send to copy sub
> > subvolumes (by snapshotting a parent subvolume, and hopefully getting
> > all the children underneath). My reading so far, says no.
Marc MERLIN posted on Sun, 20 Apr 2014 12:59:01 -0700 as excerpted:
> I was looking at using qgroups for my backup server, which will be
> filled with millions of files in subvolumes with snapshots.
>
> I read a warning that quota groups had performance issues, at least in
> the past.
Yes. Addi
On Apr 20, 2014, at 1:46 PM, Marc MERLIN wrote:
> Can you help me design this right?
>
> Long story short, I'm wondering if I can use btrfs send to copy sub
> subvolumes (by snapshotting a parent subvolume, and hopefully getting
> all the children underneath). My reading so far, says no.
That'
On Apr 20, 2014, at 10:56 PM, Adam Brenner wrote:
> On 04/20/2014 01:54 PM, Chris Murphy wrote:
>>
>> This is expected. And although I haven't tested it, I think you'd get
>> the same results with multiple threads writing at the same time: the
>> allocation would aggregate the threads to one chunk at a time until
>> full, which means writing to one device at a time.
Marc MERLIN posted on Sun, 20 Apr 2014 12:46:27 -0700 as excerpted:
> Long story short, I'm wondering if I can use btrfs send to copy sub
> subvolumes (by snapshotting a parent subvolume, and hopefully getting
> all the children underneath). My reading so far, says no.
I don't do much with subvol
On 04/20/2014 01:54 PM, Chris Murphy wrote:
> This is expected. And although I haven't tested it, I think you'd get
> the same results with multiple threads writing at the same time: the
> allocation would aggregate the threads to one chunk at a time until
> full, which means writing to one device at a time.
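One way to observe this (a sketch; the mountpoint is hypothetical) is to
watch the per-device byte counters while copying a large file in:
$ btrfs filesystem show        # per-device 'used' grows one device at a time
$ btrfs filesystem df /mnt     # allocation per chunk type and profile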
On Sat, Apr 19, 2014 at 2:45 PM, Marcel Partap wrote:
> This is the BTRFS development list, right? Someone here should know how
> to achieve this I hope?
> #Regards
>
>> On 01/03/14 02:21, Marcel Partap wrote:
>>> Dear BTRFS devs,
>>> I have a 1TB btrfs volume mounted read-only for two years because I
>>> deleted a bunch of files and didn't want to give up on
This is the BTRFS development list, right? Someone here should know how
to achieve this I hope?
#Regards
> On 01/03/14 02:21, Marcel Partap wrote:
>> Dear BTRFS devs,
>> I have a 1TB btrfs volume mounted read-only for two years because I
>> deleted a bunch of files and didn't want to give up on
On Apr 20, 2014, at 2:54 PM, Chris Murphy wrote:
>
> Ergo, there is no such thing as single-device raid0, so at the point
> where all but one drive is full, writes fail.
Correction. Data writes fail. Metadata writes apparently still succeed, as
zero-length files are created. I now have several
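A sketch of that failure mode (paths hypothetical): creating a file only
touches metadata, which still has raid1 space on the remaining devices,
while writing data needs a new raid0 chunk and fails:
$ touch /mnt/test/empty                            # succeeds, metadata only
$ dd if=/dev/zero of=/mnt/test/data bs=1M count=8  # ENOSPC once raid0 is exhausted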
On Apr 20, 2014, at 11:27 AM, Adam Brenner wrote:
>
>mkfs.btrfs -d single /dev/sda3 /dev/sdb /dev/sdc -f
>
> Once setup, I transferred roughly 3.1TB of data and noticed the write speed
> was limited to 200MB/s. This is the same write speed that I would see across
> a single device. I used
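For what it's worth, -d single fills one chunk, and therefore one device,
at a time, so a single writer sees one disk's throughput; striping the data
profile is what spreads one stream across all three (an illustrative
alternative, not something from this message):
mkfs.btrfs -d raid0 -m raid1 /dev/sda3 /dev/sdb /dev/sdc -f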
So that applications can find out the highest send stream version
supported/implemented by the running kernel:
$ cat /sys/fs/btrfs/send/stream_version
2
Signed-off-by: Filipe David Borba Manana
---
V2: Renamed /sys/fs/btrfs/send_stream_version to
/sys/fs/btrfs/send/stream_version
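Given the rename between V1 and V2 of this patch, an application probing
for the interface might try both locations (sketch):
$ cat /sys/fs/btrfs/send/stream_version 2>/dev/null ||
    cat /sys/fs/btrfs/send_stream_version 2>/dev/null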
On Apr 20, 2014, at 2:18 PM, Chris Murphy wrote:
> What is unknown?
/dev/sd[bcd] are 2GB, 3GB, and 4GB respectively.
[root@localhost ~]# mkfs.btrfs -d raid0 -m raid1 /dev/sd[bcd]
WARNING! - Btrfs v3.14 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
Performing full d
kernel 3.15.0-0.rc1.git0.1.fc21.x86_64
btrfs-progs v3.14
One 80GB virtual disk, formatted btrfs by installer and Fedora Rawhide
installed to it. Post-install I see:
[root@localhost ~]# btrfs fi show
Label: 'fedora' uuid: d372e5d1-386f-460c-b036-611469e0155e
Total devices 1 FS bytes used
I was looking at using qgroups for my backup server, which will be
filled with millions of files in subvolumes with snapshots.
I read a warning that quota groups had performance issues, at least in
the past.
Is it still true?
If there is a performance issue, is it as simple as just turning off
q
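For what it's worth, quotas are toggled per filesystem, so backing out is
a single command if the overhead shows up (sketch; mountpoint hypothetical):
$ sudo btrfs quota enable /mnt/backup
$ sudo btrfs quota disable /mnt/backup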
Can you help me design this right?
Long story short, I'm wondering if I can use btrfs send to copy sub
subvolumes (by snapshotting a parent subvolume, and hopefully getting
all the children underneath). My reading so far, says no.
So, ideally I would have:
/mnt/btrfs_pool/backup: backup is a sub
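Consistent with that reading, snapshots stop at nested subvolume
boundaries, so each child has to be snapshotted and sent on its own (a
sketch; the host1 child is hypothetical):
$ btrfs subvolume snapshot -r /mnt/btrfs_pool/backup /mnt/btrfs_pool/backup.ro
$ btrfs send /mnt/btrfs_pool/backup.ro | btrfs receive /mnt/dest
$ btrfs subvolume snapshot -r /mnt/btrfs_pool/backup/host1 /mnt/btrfs_pool/host1.ro
$ btrfs send /mnt/btrfs_pool/host1.ro | btrfs receive /mnt/dest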
Howdy,
I recently setup a new BTRFS filesystem based on BTRFS version 3.12 on
Linux kernel 3.13-1 running Debian Jessie.
The BTRFS volume spans 3x 4TB disks, two of which are using the entire
raw block device, and one of them is using a partition (OS disks). The
setup is like so:
root@gra-
Fix double free of memory if btrfs_open_devices fails:
*** Error in `btrfs': double free or corruption (fasttop): 0x0066e020 ***
The crash happened because, when opening a device inside btrfs_open_devices
failed, it freed all memory by calling btrfs_close_devices, but inside
disk-io.c we call btrf
This test verifies that after an incremental btrfs send the
replicated file has the exact same hole and data structure as in the
origin filesystem. This was not the case before send stream version 2 -
holes were sent as write operations of zero-valued bytes instead of
punching holes with
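One way to compare the two layouts (a sketch; file names hypothetical) is
the extent map, where holes show up as gaps between extents:
$ filefrag -v /mnt/origin/foo
$ filefrag -v /mnt/replica/foo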
The fallocate send stream command, added in stream version 2, is used to
pre-allocate space for files and punch file holes. This change implements
the callback for that new command, using the fallocate function from the
standard C library to carry out the specified action (allocate file space
or punch a file hole).
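Both actions have direct equivalents in util-linux fallocate(1), which
makes for an easy illustration (file name hypothetical):
$ fallocate -l 8M foo             # pre-allocate 8 MiB
$ fallocate -p -o 0 -l 4096 foo   # punch a 4 KiB hole at offset 0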
This is a followup to the kernel patch titled:
Btrfs: send, implement total data size command to allow for progress
estimation
This makes the btrfs send and receive commands aware of the new send flag,
named BTRFS_SEND_C_TOTAL_DATA_SIZE, which tells us the amount of file data
that is new bet
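Without a total-size command, progress has to be estimated outside the
stream, e.g. by piping through pv (illustrative):
$ btrfs send /mnt/snap | pv | btrfs receive /mnt/dest
With the total carried in the stream itself, the receiver can report a
percentage instead of a raw byte count.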
This increases the send stream version from version 1 to version 2, adding
new commands:
1) total data size - used to tell the receiver how much file data the stream
will add or update;
2) fallocate - used to pre-allocate space for files and to punch holes in files;
3) inode set flags;
4) se
So that applications can find out the highest send stream version
supported/implemented by the running kernel:
$ cat /sys/fs/btrfs/send_stream_version
2
Signed-off-by: Filipe David Borba Manana
---
fs/btrfs/send.h | 1 +
fs/btrfs/sysfs.c | 36 +++
If we failed during initialization of sysfs, we weren't unregistering the
top-level btrfs sysfs entry or the debugfs entries.
Not unregistering the top level sysfs entry makes future attempts to reload
the btrfs module impossible and the following is reported in dmesg:
[ 2246.451296] WARNING: CPU:
The send stream version 2 adds the fallocate command, which can be used to
allocate extents for a file or punch holes in a file. Previously we were
ignoring file prealloc extents or treating them as extents filled with
zero-valued bytes and sending a regular write command to the stream.
After this change, t
Instead of sending a write command with a data buffer filled with
zero-valued bytes, use the fallocate command, introduced in send stream
version 2, to tell the receiver to punch a file hole using the fallocate
system call.
Signed-off-by: Filipe David Borba Manana
---
V2: A v2 stream is now only
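The space-usage difference on the receive side is easy to demonstrate
(sketch; assumes a filesystem with hole-punch support):
$ dd if=/dev/zero of=foo bs=4096 count=256   # 1 MiB of real zero-filled blocks
$ du -k foo
$ fallocate -p -o 0 -l 1M foo                # replace them with a hole
$ du -k foo                                  # allocated size drops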
This new send flag makes send first calculate the amount of new file data
(in bytes) the send root has relative to the parent root, or, for the case
of a non-incremental send, the total amount of file data the stream will
create (including holes and prealloc extents). In other words, it computes