On 2016-04-27 19:19, Chris Murphy wrote:
On Wed, Apr 27, 2016 at 5:22 AM, Austin S. Hemmelgarn wrote:
> On 2016-04-26 20:58, Chris Murphy wrote:
>> On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez wrote:
>>> With GlusterFS as a distributed volume, the files are already spread
>>> among the servers causing file I/O to be
On 04/27/16 18:40, Juan Alberto Cirez wrote:
Holger,
If this is so, then it leaves me even more confused. I was under the
impression that the driving imperative for the creation of btrfs was
to address the shortcomings of current filesystems (within the context
of distributed data). That the idea was to create a low-level
filesystem that would be the
On 04/27/16 17:58, Juan Alberto Cirez wrote:
WOW!
Correct me if I'm wrong, but the sum total of the above seems to
suggest (at first glance) that BTRFS adds several layers of complexity,
but for little real benefit (at least in the use case of btrfs at the
brick layer with a distributed filesystem on top)...
"...I've always thought it'd be neat
On 2016-04-26 20:58, Chris Murphy wrote:
> On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez wrote:
>> With GlusterFS as a distributed volume, the files are already spread
>> among the servers causing file I/O to be spread fairly evenly among
>> them as well, thus probably providing the benefit one mig
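
For reference, a distributed Gluster volume of the kind described above
can be sketched as follows; the hostnames and brick paths are
placeholders, not taken from the thread:

    # Each file lands whole on exactly one brick, so per-file I/O is
    # spread across the servers rather than striped within a file:
    gluster volume create distvol transport tcp \
        server1:/bricks/brick1 server2:/bricks/brick1 \
        server3:/bricks/brick1 server4:/bricks/brick1
    gluster volume start distvol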
Chris Murphy posted on Tue, 26 Apr 2016 18:58:06 -0600 as excerpted:
> On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez wrote:
>> RAID10 configuration, on the other hand, requires a minimum of four
>> HDD, but it stripes data across mirrored pairs. As long as one disk in
>> each mirrored pa
On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez wrote:
> Well,
> RAID1 offers no parity, striping, or spanning of disk space across
> multiple disks.

Btrfs raid1 does span, although it's typically called the "volume", or
a "pool" similar to ZFS terminology. E.g. 10 2TiB disks will get you a
single pool with 20TiB raw and roughly 10TiB usable space, since btrfs
raid1 keeps exactly two copies of everything regardless of disk count.
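
A minimal sketch of such a raid1 pool; the device names and mount point
are placeholders:

    # One filesystem spanning four devices; every block is stored twice:
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mount /dev/sdb /mnt/pool
    # Reports raw vs. usable space; raid1 usable is about half of raw:
    btrfs filesystem usage /mnt/pool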
On 2016-04-26 08:14, Juan Alberto Cirez wrote:
Thank you again, Austin.
My ideal case would be high availability coupled with reliable data
replication and integrity against accidental loss. I am willing to
cede ground on write speed, but reads have to be as optimized as
possible.
So far BTRFS RAID10 on the 32TB test server is quite good
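
Integrity against silent corruption can be checked with a scrub, which
reads everything back against its checksums; the mount point below is a
placeholder:

    # Verify all data and metadata, repairing from the good mirror
    # where possible; -B keeps the scrub in the foreground:
    btrfs scrub start -B /mnt/pool
    # Per-device error counters help spot a failing disk early:
    btrfs device stats /mnt/pool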
On 2016-04-26 07:44, Juan Alberto Cirez wrote:
Well,
RAID1 offers no parity, striping, or spanning of disk space across
multiple disks.
RAID10 configuration, on the other hand, requires a minimum of four
HDDs, but it stripes data across mirrored pairs. As long as one disk in
each mirrored pair is functional, data can be retrieved.
With Gluster
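
A minimal btrfs raid10 setup matching that description (device names
are placeholders). One caveat worth noting: unlike classic RAID10 with
fixed mirrored pairs, btrfs raid10 allocates mirror pairs per chunk, so
it only guarantees surviving a single device failure:

    mkfs.btrfs -m raid10 -d raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mount /dev/sdb /mnt/pool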
On 2016-04-26 06:50, Juan Alberto Cirez wrote:
Thank you guys so very kindly for all your help and taking the time to
answer my question. I have been reading the wiki and online use cases
and otherwise delving deeper into the btrfs architecture.
I am managing a 520TB storage pool spread across 16 server pods and
have tried several methods of d
On 2016-04-25 08:43, Duncan wrote:
Austin S. Hemmelgarn posted on Mon, 25 Apr 2016 07:18:10 -0400 as excerpted:
> On 2016-04-23 01:38, Duncan wrote:
>> And again with snapshotting operations. Making a snapshot is normally
>> nearly instantaneous, but there's a scaling issue if you have too many
>> per filesystem (try to keep it
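
For illustration, creating and pruning read-only snapshots; the paths
are placeholders:

    # Creation is nearly instantaneous (copy-on-write), but keeping
    # thousands per filesystem slows balance and other maintenance:
    btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/.snap/data-2016-04-25
    # Prune old snapshots to keep the count bounded:
    btrfs subvolume delete /mnt/pool/.snap/data-2016-03-01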
On 2016-04-23 01:38, Duncan wrote:
Juan Alberto Cirez posted on Fri, 22 Apr 2016 14:36:44 -0600 as excerpted:
> Good morning,
> I am new to this list and to btrfs in general. I have a quick question:
> Can I add a new device to the pool while the btrfs filesystem balance
> command is running on the drive pool?

Adding a device whil
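
For reference, the operations under discussion look like this; the
mount point and device name are placeholders:

    # See whether a balance is currently running:
    btrfs balance status /mnt/pool
    # Add a new device to the mounted pool:
    btrfs device add /dev/sdf /mnt/pool
    # Rebalance so existing data spreads onto the new device:
    btrfs balance start /mnt/pool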
Good morning,
I am new to this list and to btrfs in general. I have a quick
question: Can I add a new device to the pool while the btrfs
filesystem balance command is running on the drive pool?
Thanks