Hello David,
Friday, June 2, 2006, 4:03:45 AM, you wrote:
DJO> - Original Message -
DJO> From: Robert Milkowski <[EMAIL PROTECTED]>
DJO> Date: Thursday, June 1, 2006 1:17 pm
DJO> Subject: Re[2]: [zfs-discuss] question about ZFS performance for webserving/java
>> Hello David,
>>
>> The
On Thu, Jun 01, 2006 at 06:40:15PM -0500, Tao Chen wrote:
> >ABR> What about small random writes? Won't those also require reading
> >ABR> from all disks in RAID-Z to read the blocks for update, whereas in
> >ABR> mirroring only one disk need be accessed? Or am I missing something?
> >
> >If I under
Please add to the list the differences between locally and remotely attached
vdevs: FC, SCSI/SATA, or iSCSI. This is the part that is troubling me
most, as there are wildly different performance characteristics when
you use NFS with any of these backends with the various configs of
ZFS. Another thing is w
- Original Message -
From: Robert Milkowski <[EMAIL PROTECTED]>
Date: Thursday, June 1, 2006 1:17 pm
Subject: Re[2]: [zfs-discuss] question about ZFS performance for webserving/java
> Hello David,
>
> The system itself won't take too much space.
> You can create one large slice from the r
Hello Robert,
On 6/1/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Anton,
Thursday, June 1, 2006, 5:27:24 PM, you wrote:
ABR> What about small random writes? Won't those also require reading
ABR> from all disks in RAID-Z to read the blocks for update, whereas in
ABR> mirroring only one d
Maybe the best thing here is to have us (i.e. the people on this list)
come up with a set of standard and expected use cases, and have the ZFS
team tell us what the relative performance/tradeoffs are. I mean,
rather than us just asking a bunch of specific cases, a good whitepaper
Best Practices /
Hello David,
Friday, June 2, 2006, 12:52:05 AM, you wrote:
DJO> - Original Message -
DJO> From: Matthew Ahrens <[EMAIL PROTECTED]>
DJO> Date: Thursday, June 1, 2006 12:30 pm
DJO> Subject: Re: [zfs-discuss] question about ZFS performance for webserving/java
>>
>>
>> There is no need for
- Original Message -
From: Matthew Ahrens <[EMAIL PROTECTED]>
Date: Thursday, June 1, 2006 12:30 pm
Subject: Re: [zfs-discuss] question about ZFS performance for webserving/java
> Why would you use NFS? These zones are on the same machine as the
> storage, right? You can simply export
On Thu, Jun 01, 2006 at 11:35:41AM -1000, David J. Orman wrote:
> 3 - App server would be running in one zone, with a (NFS) mounted ZFS
> filesystem as storage.
>
> 4 - DB server (PgSQL) would be running in another zone, with a (NFS)
> mounted ZFS filesystem as storage.
Why would you use NFS? Th
Hello Adam,
Friday, June 2, 2006, 12:10:47 AM, you wrote:
AL> On Thu, Jun 01, 2006 at 02:46:32PM +0200, Robert Milkowski wrote:
>> btw: what differences will there be between raidz1 and raidz2? I guess
>> two parity blocks will be stored, so one loses approximately the space of two
>> disks in one raidz2
Hello David,
Thursday, June 1, 2006, 11:35:41 PM, you wrote:
DJO> Just as a hypothetical (not looking for exact science here
DJO> folks..), how would ZFS fare (in your educated opinion) in this situation:
DJO> 1 - Machine with 8 10k rpm SATA drives. High performance machine
DJO> of sorts (ie dual
On Thu, Jun 01, 2006 at 02:46:32PM +0200, Robert Milkowski wrote:
> btw: what differences will there be between raidz1 and raidz2? I guess
> two parity blocks will be stored, so one loses approximately the space of two
> disks in one raidz2 group. Any other things?
The difference between raidz1 and raidz2
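The space part of Robert's guess checks out with quick arithmetic. A minimal sketch in Python (the 6-disk group size is just an illustrative assumption, not from the thread):

    def raidz_usable_fraction(disks, parity):
        """Approximate usable fraction of a raidz group:
        `parity` disks' worth of space goes to parity."""
        assert disks > parity
        return (disks - parity) / disks

    # Example: a 6-disk group
    for parity, name in [(1, "raidz1"), (2, "raidz2")]:
        print(name, f"{raidz_usable_fraction(6, parity):.0%} usable")
    # raidz1 83% usable, raidz2 67% usable - i.e. roughly two disks' space lost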
Just as a hypothetical (not looking for exact science here folks..), how would
ZFS fare (in your educated opinion) in this situation:
1 - Machine with 8 10k rpm SATA drives. High performance machine of sorts (ie
dual proc, etc..let's weed out cpu/memory/bus bandwidth as much as possible
from the
Hello Anton,
Thursday, June 1, 2006, 5:27:24 PM, you wrote:
ABR> What about small random writes? Won't those also require reading
ABR> from all disks in RAID-Z to read the blocks for update, whereas in
ABR> mirroring only one disk need be accessed? Or am I missing something?
If I understand it cor
Jeff Bonwick wrote:
http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to
thanks, that is very useful information. it pretty much rules out raid-z
for this workload with any reasonable configuration I can dream up
with only 12 disks available. it looks like mirroring is g
I have some questions about modifying a filesystem block.
When we want to modify an existing block, ZFS makes a new one and destroys the old. OK -
that is the copy-on-write
mechanism. But if we want to modify only a part of the block, how does it work?
What does ZFS do with the rest of the block? And is the size of it re
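As I understand it (a toy illustration of the copy-on-write path, not actual ZFS internals): the rest of the block is not modified in place; the whole block is rewritten to a freshly allocated location with the changed bytes folded in, and the old copy is freed once no snapshot references it.

    def cow_partial_update(storage, addr, offset, new_bytes, next_free_addr):
        """Toy copy-on-write: change part of a block by rewriting the
        whole block at a new address and returning that address."""
        old = storage[addr]
        block = old[:offset] + new_bytes + old[offset + len(new_bytes):]
        storage[next_free_addr] = block  # full block written elsewhere
        return next_free_addr            # caller updates its pointer; the old
                                         # block is freed when nothing references it

    storage = {0: b"hello world "}
    new_addr = cow_partial_update(storage, 0, 6, b"there", 1)
    print(storage[new_addr])  # b'hello there ' - old block at addr 0 still intact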
What about small random writes? Won't those also require reading from all disks
in RAID-Z to read the blocks for update, whereas in mirroring only one disk need
be accessed? Or am I missing something?
(It seems like RAID-Z is similar to RAID-3 in its performance characteristics,
since both spread
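To put rough numbers on the concern Anton raises, here is a back-of-the-envelope model (my own sketch, not from the thread; the 150 IOPS per disk is an assumed placeholder for a 10k rpm drive): because RAID-Z spreads each filesystem block across the group, a random block read occupies every data disk, so the group delivers roughly one disk's worth of random-read IOPS, while a set of mirrors can service an independent read on every spindle.

    PER_DISK_IOPS = 150  # assumed figure for one 10k rpm spindle

    def raidz_random_read_iops(disks, per_disk_iops=PER_DISK_IOPS):
        # Every block read touches all data disks at once, so the
        # whole group behaves like a single spindle for random reads.
        return per_disk_iops

    def mirrored_random_read_iops(disks, per_disk_iops=PER_DISK_IOPS):
        # Each spindle in a set of mirrors can service an independent read.
        return disks * per_disk_iops

    print(raidz_random_read_iops(8))     # ~150 IOPS for the whole group
    print(mirrored_random_read_iops(8))  # ~1200 IOPS across 4 mirror pairs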
>We'll be much better able to help you reach your performance goals
>if you can state them as performance goals.
In particular, knowing the latency requirements is important.
Uncompressed HD video runs at 1.5 Gbps; two streams would require 3 Gbps, or
375 MB/sec. The requirement for real-time m
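Spelling out that arithmetic (trivial, but worth checking; the 60 MB/s sustained rate per spindle is an assumed round number, not from the thread):

    GBIT = 1e9  # bits

    stream_bps = 1.5 * GBIT           # one uncompressed HD stream
    total_bps = 2 * stream_bps        # two streams
    total_mb_per_s = total_bps / 8 / 1e6
    print(total_mb_per_s)             # 375.0 MB/sec, as stated above

    # Spindles needed at an assumed 60 MB/s sustained per disk:
    print(total_mb_per_s / 60)        # ~6.25, so at least 7 disks for bandwidth alone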
Hello Roch,
Thursday, June 1, 2006, 3:00:46 PM, you wrote:
RBPE> Robert Milkowski writes:
>>
>>
>>
>> btw: just a quick thought - why not write one block to only 2 disks
>> (+ parity on one disk) instead of spreading one fs block across N-1
>> disks? That way zfs could read many fs blo
On Thu, 2006-06-01 at 04:36, Jeff Bonwick wrote:
> It would be far
> better, when allocating a B-byte intent log block in an N-disk
> RAID-Z group, to allocate B*N bytes but only write to one disk
> (or two if you want to be paranoid). This simple change should
> make synchronous I/O on N-way RAI
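A sketch of the accounting behind Jeff's suggestion (an illustrative model only, not ZFS code): spreading a B-byte log block across the group costs a device I/O on every disk, while allocating B*N bytes and writing only one or two disks trades a few seconds of wasted space for far fewer I/Os.

    def spread_across_group(B, N):
        """Model of the current scheme: B bytes spread over the
        N-1 data disks, plus parity, so every disk does an I/O."""
        return {"bytes_allocated": B * N // (N - 1), "disks_written": N}

    def fat_single_disk(B, N, copies=1):
        """Jeff's proposal: allocate B*N bytes so the log block owns
        a full stripe width, but write only `copies` disks."""
        return {"bytes_allocated": B * N, "disks_written": copies}

    # A 4 KB intent-log write on a 10-disk RAID-Z group:
    print(spread_across_group(4096, 10))  # 10 device I/Os per log write
    print(fat_single_disk(4096, 10, 2))   # 40 KB held for ~5 s, only 2 I/Os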
Robert Milkowski writes:
>
>
>
> btw: just a quick thought - why not write one block to only 2 disks
> (+ parity on one disk) instead of spreading one fs block across N-1
> disks? That way zfs could read many fs blocks at the same time in the case
> of larger raid-z pools?
That's what y
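Robert's idea quoted above amounts to trading stripe width for read concurrency. A rough model (my sketch; it ignores parity placement and load balancing):

    def concurrent_block_reads(disks, stripe_width):
        """If each fs block occupies `stripe_width` disks rather than
        spanning the group, roughly disks // stripe_width independent
        block reads can proceed in parallel."""
        return max(1, disks // stripe_width)

    N = 12
    print(concurrent_block_reads(N, N - 1))  # today: ~1 random read at a time
    print(concurrent_block_reads(N, 3))      # 2 data + 1 parity: ~4 in parallel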
Hello Jeff,
Thursday, June 1, 2006, 10:36:18 AM, you wrote:
>> That helps a lot - thank you.
>> I wish I had known it before... The information Roch put on his blog should be
>> explained in both the man pages and the ZFS Admin Guide - as this is something
>> one would not expect.
>>
>> It actually means raid-z
Hello grant,
Thursday, June 1, 2006, 4:01:26 AM, you wrote:
gb> On Wed, May 31, 2006 at 03:28:12PM +0200, Roch Bourbonnais - Performance Engineering wrote:
>> Hi Grant, this may provide some guidance for your setup;
>>
>> it's somewhat theoretical (take it for what it's worth) but
>> it spells
Jeff Bonwick wrote:
...
Since we know that intent log blocks don't live for more than a
single transaction group (which is about five seconds), there's
no reason to allocate them space-efficiently. It would be far
better, when allocating a B-byte intent log block in an N-disk
RAID-Z group, to
> That helps a lot - thank you.
> I wish I had known it before... The information Roch put on his blog should be
> explained in both the man pages and the ZFS Admin Guide - as this is something
> one would not expect.
>
> It actually means raid-z is useless in many environments compared to
> traditional raid-5.
Wel
Jeff Bonwick wrote:
In many cases ZFS will perform better already; in some cases it will
perform worse; but in almost all cases it will perform *differently*.
and as a result many of the solutions to getting better performance for
ZFS will be completely different to the types of things done in
> > http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to
>
> thanks, that is very useful information. it pretty much rules out raid-z
> for this workload with any reasonable configuration I can dream up
> with only 12 disks available. it looks like mirroring is going to
> provide hig
> Uhm... that's the point where you are IMO slightly wrong. The exact
> requirement is that inodes and data need to be separated.
I find that difficult to believe.
What you need is performance. Based on your experiences with
completely different, static-metadata architectures, you've
concluded (