I had a similar problem on a quad-core AMD box with 8 gig of RAM...
The performance was nice for a few minutes, but then the system would
crawl to a halt.
The problem was that the Areca SATA drivers can't do DMA unless dom0
memory is kept at 3 gig or lower.
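If it's the same 32-bit DMA limitation, capping dom0 memory at boot is
one workaround; a minimal sketch, assuming a GRUB-booted Solaris xVM
dom0 (the exact menu.lst entry and the 3 gig value are guesses for your
setup):

# /boot/grub/menu.lst: append dom0_mem to the hypervisor line so
# dom0 never gets (or balloons to) more than 3 gig:
kernel$ /boot/$ISADIR/xen.gz dom0_mem=3072M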
On 04/11/2007, at 3:49 PM, Martin
Jeff, this sounds like the notorious array cache flushing issue. See
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
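If the array has nonvolatile cache, the workaround that page describes
is to stop ZFS from issuing cache flushes altogether; a sketch of that
tunable (only safe when every device behind the pool has battery-backed
cache):

# /etc/system -- stop ZFS sending SYNCHRONIZE CACHE to the devices;
# requires a reboot to take effect.
set zfs:zfs_nocacheflush = 1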
-- richard
Jeff Meidinger wrote:
> Hello,
>
> I received the following question from a company I am working with:
>
> We are having issues with
> ---8<--- run last in client_end_script ---8<---
>
#!/bin/sh
# Hand the "data" pool over to the freshly installed system under /a.
# Each command is echoed first so the install log shows what ran.

# Nothing to do if the pool isn't present.
zpool list | grep -w data > /dev/null || exit 0

echo /sbin/zpool export data
/sbin/zpool export data

# Make the live device tree visible inside /a so zpool can find the disks.
echo /sbin/mount -F lofs /devices /a/devices
/sbin/mount -F lofs /devices /a/devices

# Re-import from the installed root so its zpool.cache picks up the pool.
echo chroot /a /sbin/zpool import data
chroot /a /sbin/zpool import data
On Sun, 4 Nov 2007, Rob Windsor wrote:
> Eric Haycraft wrote:
>> The drives (6 in total) are external (eSATA) ones, so they have their own
>> enclosure that I can't open without voiding the warranty... I destroyed one
>> enclosure trying out ways to get it to work and learned that there was no
While doing some testing of ZFS on systems that house the storage backend
for a custom IMAP data store, I have seen 90-100% sys utilization during
periods of moderately heavy file creation. I'm not sure if this is
inherent in the design of ZFS or if it can be tuned out. But the sys
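For what it's worth, the first step is to see where that sys time
actually goes; a rough sketch with stock Solaris tools (the 30-second
window is arbitrary):

# Watch the sys column while the file-creation workload runs:
vmstat 5

# Profile the kernel for 30 seconds and list the 20 hottest functions:
lockstat -kIW -D 20 sleep 30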
That explains the problems; however, I am able to get them to run by jumpering
them down to SATA 1, which brings me back to my original question. Is there a
way to force SATA 1 without cracking the drive case and voiding the warranty?
I only have so many expansion slots, so an 8-port Supermicro is
Peter Tribble wrote:
> I'm not worried about the compression effect. Where I see problems is
> backing up millions or tens of millions of files in a single
> dataset. Backing up
> each file is essentially a random read (and this isn't helped by raidz,
> which gives you a single disk's worth of random
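The raidz point is worth putting a number on; a back-of-the-envelope
sketch with assumed figures (the ~100 random reads/s per spindle is a
guess, not a measurement):

#!/bin/sh
# In raidz every block is striped across the whole vdev, so each random
# read engages all of the disks and the vdev delivers roughly one
# disk's worth of IOPS, no matter how wide it is.
spindles=7
iops=100   # assumed per-disk random reads/s
echo "one ${spindles}-disk raidz vdev: ~${iops} random reads/s"
# The same spindles split into mirrors (or several small vdevs) can
# service independent reads concurrently:
echo "mirrored/striped layout: up to ~$((spindles * iops)) random reads/s"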
[EMAIL PROTECTED] wrote:
> > *me thinks it would be cool to finally have a generic filesystem
> > community*
>
> _Do_ we finally get one? Can't wait :-)
I would like to have a generic filesystem community ...
or declare the ufs community to be the generic one in addition.
Jörg
On Mon, 2007-11-05 at 02:16 -0800, Thomas Lecomte wrote:
> Hello there -
>
> I'm still waiting for an answer from Phillip Lougher [the SquashFS developer].
> I had already contacted him some months ago, without any answer though.
>
> I'll still write a proposal, and probably start the work soon t
Original Message
Subject: [zfs-discuss] MySQL benchmark
Date: Tue, 30 Oct 2007 00:32:43
From: Robert Milkowski <[EMAIL PROTECTED]>
Reply-To: Robert Milkowski <[EMAIL PROTECTED]>
Organization: CI TASK http://www.task.gda.pl
To: