Joe,
I don't think adding a slog helped in this case. In fact I
believe it made performance worse. Previously the ZIL would be
spread out over all devices but now all synchronous traffic
is directed at one device (and everything is synchronous in NFS).
Mind you, 15 MB/s seems a bit on the slow side
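For reference, a dedicated log device is inspected and added with something like the following (pool and device names are placeholders); as far as I know, on builds of that era a log vdev could not simply be removed again once added:

# zpool status tank          (a slog shows up under a separate "logs" section)
# zpool add tank log c3t0d0  (from then on, all ZIL writes go to c3t0d0 only)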
I've been observing two threads on zfs-discuss with the following
Subject lines:
Yager on ZFS
ZFS + DB + "fragments"
and have reached the rather obvious conclusion that the author "can
you guess?" is a professional spinmeister, who gave up a promising
career in political speech writing, to ha
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of
files, including some 100+ MB in size, can cause other
clients over NFS to pause
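One way to see the effect described above (pool name is a placeholder) is to watch per-vdev activity while a large file is being written over NFS:

# zpool iostat -v tank 5

This prints per-vdev read/write operations every 5 seconds; read ops collapsing towards zero for the duration of the write is the stall described above.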
Manoj,
# zpool destroy -f mstor0
Regards,
Marco Lopes.
Manoj Nayak wrote:
> How can I destroy the following pool?
>
> pool: mstor0
> id: 5853485601755236913
> state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
I have the following layout:
A V490 with 8 x 1.8 GHz CPUs and 16 GB of memory. Six 6140s with 2 FC
controllers, using the A1 and B1 controller ports at 4 Gbps.
Each controller has 2 GB of NVRAM.
On the 6140s I set up a RAID-0 LUN per SAS disk with a 16K segment size.
On the 490 I created a zpool with 8 4+1 raidz1s.
I am getting zpool IO
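For reference, a pool of that shape is built by listing all eight 4+1 groups on a single zpool create line (the device names below are placeholders for the 40 LUNs):

# zpool create tank \
    raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    raidz1 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
    ...                       (six more 4+1 raidz1 groups)
# zpool status tank           (each raidz1 group appears as its own top-level vdev)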
On Nov 15, 2007 9:42 AM, Nabeel Saad <[EMAIL PROTECTED]> wrote:
> I am sure I will not use ZFS to its fullest potential at all... right now I'm
> trying to recover the dead disk, so if it works to mount a single disk/boot
> disk, that's all I need; I don't need it to be very functional. As I
> s
Hey folks,
I have no knowledge at all about how streams work in Solaris, so this might
have a simple answer, or be completely impossible. Unfortunately I'm a Windows
admin, so I haven't a clue which :)
We're looking at rolling out a couple of ZFS servers on our network, and
instead of tapes we'r
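If "streams" here means ZFS send streams, a minimal sketch of dumping a filesystem to a file instead of tape looks like this (dataset and path names are placeholders):

# zfs snapshot tank/data@nightly
# zfs send tank/data@nightly > /backup/tank-data-nightly.zfs
# zfs receive tank/restored < /backup/tank-data-nightly.zfs

Bear in mind that a saved stream can really only be verified by receiving it somewhere.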
msl wrote:
> Hello all...
> I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are
> using read/write block sizes of 8192. That was the best performance I
> got, and it's working pretty well (NFSv3). I want to use all the ZFS
> advantages, and I know I can have a p
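If you do want to line the datasets up with the clients' 8 KB transfer size, the relevant knobs are per-dataset properties (the names below are placeholders). Note that recordsize only affects files written after the change, and as a reply elsewhere in the thread points out, it may not be measurable over NFS anyway:

# zfs set recordsize=8K tank/export
# zfs set sharenfs=rw tank/export
# zfs get recordsize,sharenfs tank/export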
> can you guess? metrocast.net> writes:
> >
> > You really ought to read a post before responding to it: the CERN study
> > did encounter bad RAM (and my post mentioned that) - but ZFS usually can't
> > do a damn thing about bad RAM, because errors tend to arise either
> > before ZFS ever
On Thu, 15 Nov 2007, Brian Lionberger wrote:
> The question is, should I create one zpool or two to hold /export/home
> and /export/backup?
> Currently I have one pool for /export/home and one pool for /export/backup.
>
> Should it be one pool for both??? Would this be better, and why?
One thing to
A little extra info:
ZFS brings in a ZFS spare device the next time the pool is accessed, not
a raidbox hot spare. Resilvering starts automatically and increases disk
access times by about 30%. The first hour of estimated time left (for
5-6 TB pools) is wildly inaccurate, but it starts to set
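For reference, a ZFS-managed hot spare and the resilver progress reporting look roughly like this (pool and device names are placeholders):

# zpool add tank spare c5t0d0   (attach a hot spare that ZFS itself manages)
# zpool status tank             (during a resilver the scrub line reads something
                                 like "resilver in progress, 12.34% done, 4h56m to go")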
I'll be setting up a small server and need two SATA-II ports for an x86
box. The cheaper the better.
Thanks!!
-brian
--
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most of the shit out there is
> Brain damage seems a bit of an alarmist label. While you're certainly right
> that for a given block we do need to access all disks in the given stripe,
> it seems like a rather quaint argument: aren't most environments that
> matter trying to avoid waiting for the disk at all? Intelligent prefet
Razvan Corneliu VILT wrote:
> Hi,
>
> In my infinite search for a reliable work-around for the lack of bandwidth in
> the United States*, I've reached the conclusion that I need a file-system
> replication solution for the data stored on my ZFS partition.
> I've noticed that I'm not the only one
I have a zpool issue that I need to discuss.
My application is going to run on a 3120 with 4 disks. Two (mirrored)
disks will represent /export/home and the other two (mirrored) will be
/export/backup.
The question is, should I create one zpool or two to hold /export/home
and /export/backup?
Cur
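For comparison, the two layouts look like this (device names are placeholders). One pool, two filesystems:

# zpool create datapool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
# zfs create datapool/home
# zfs set mountpoint=/export/home datapool/home
# zfs create datapool/backup
# zfs set mountpoint=/export/backup datapool/backup

Two separate pools:

# zpool create homepool mirror c1t0d0 c1t1d0
# zpool create backuppool mirror c1t2d0 c1t3d0

With a single pool both filesystems share all four spindles and all the free space, but a backup that lives in the same pool as the data it protects disappears with that pool; two pools keep the failure domains separate.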
Splitting this thread and changing the subject to reflect that...
On 11/14/07, can you guess? <[EMAIL PROTECTED]> wrote:
> Another prominent debate in this thread revolves around the question of
> just how significant ZFS's unusual strengths are for *consumer* use.
> WAFL clearly plays no part in
I was doing some disaster recovery testing with ZFS, where I did a mass backup
of a family of ZFS filesystems using snapshots, destroyed them, and then did a
mass restore from the backups. The ZFS filesystems I was testing with had only
one parent in the ZFS namespace; and the backup and restor
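A minimal per-filesystem sketch of that kind of cycle, with hypothetical dataset and file names (each child has to be sent and received individually in this form, parents before children):

# zfs snapshot -r tank/proj@dr               (snapshot the parent and all children)
# zfs send tank/proj@dr > /dumps/proj.zfs
# zfs send tank/proj/src@dr > /dumps/proj-src.zfs
# zfs destroy -r tank/proj                   (the "destroy" half of the test)
# zfs receive tank/proj < /dumps/proj.zfs
# zfs receive tank/proj/src < /dumps/proj-src.zfs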
...
> I personally believe that since most people will have hardware LUN's
> (with underlying RAID) and cache, it will be difficult to notice
> anything. Given that those hardware LUN's might be busy with their own
> wizardry ;) You will also have to minimize the effect of the database
> c
If you're running over NFS, the ZFS block size most likely won't have a
measurable impact on your performance. Unless you've got multiple gigabit
ethernet interfaces, the network will generally be the bottleneck rather than
your disks, and NFS does enough caching at both client & server end to
Hi,
In my infinite search for a reliable work-around for the lack of bandwidth in
the United States*, I've reached the conclusion that I need a file-system
replication solution for the data stored on my ZFS partition.
I've noticed that I'm not the only one asking for this, but I still have no
c
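Assuming both ends run ZFS, the usual building block for this is an incremental send/receive loop over ssh (host, pool and snapshot names are placeholders):

# zfs snapshot tank/data@rep1
# zfs send tank/data@rep1 | ssh remotehost zfs receive -F backup/data

Later, ship only the changes made since rep1:

# zfs snapshot tank/data@rep2
# zfs send -i rep1 tank/data@rep2 | ssh remotehost zfs receive -F backup/data

The -F on the receiving side rolls the target back to the last common snapshot, so nothing should be writing to the copy locally.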
On Fri, Nov 16, 2007 at 11:31:00AM +0100, Paul Boven wrote:
> Thanks for your reply. The SCSI-card in the X4200 is a Sun Single
> Channel U320 card that came with the system, but the PCB artwork does
> sport a nice 'LSI LOGIC' imprint.
That is probably the same card I'm using; it's actually a "Sun
We are having the same problem.
First with 125025-05 and then also with 125205-07.
Solaris 10 Update 4 - now with all patches.
We opened a case and got
T-PATCH 127871-02.
We installed the Marvell driver binary 3 days ago.
T127871-02/SUNWckr/reloc/kernel/misc/sata
T127871-02/SUNWmv88sx/reloc/ke
How can I destroy the following pool?
pool: mstor0
id: 5853485601755236913
state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
see: http://www.sun.com/msg/ZFS-8000-5E
config:
mstor0 UNAVAIL
On 16-Nov-07, at 4:36 AM, Anton B. Rang wrote:
> This is clearly off-topic :-) but perhaps worth correcting --
>
>> Long-time Mac users must be getting used to having their entire world
>> disrupted and having to re-buy all their software. This is at
>> least the
>> second complete flag-day (no
Yeah, this is annoying. I'm seeing this on a Thumper running Update 3 too...
Has this issue been fixed in Update 4 and/or current releases of OpenSolaris?
Hi Dan,
Dan Pritts wrote:
> On Tue, Nov 13, 2007 at 12:25:24PM +0100, Paul Boven wrote:
>> We're building a storage system that should have about 2TB of storage
>> and good sequential write speed. The server side is a Sun X4200 running
>> Solaris 10u4 (plus yesterday's recommended patch cluster),
Hi all,
we have just bought a Sun X2200 M2 (4 GB RAM / 2 Opteron 2214 / 2 x 250 GB
SATA-II disks, Solaris 10 Update 4)
and a Sun StorageTek 2540 FC array (8 x 146 GB SAS disks, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this
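Assuming the 2540 is set up to present its disks to the host as two LUNs, the ZFS mirror itself is just (device names are placeholders):

# zpool create tank mirror c4t0d0 c4t1d0
# zpool status tank

Mirroring two LUNs that sit behind the same single RAID controller protects against losing a LUN or disk group, not against losing the array or the controller itself.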
On Thu, Nov 08, 2007 at 07:28:47PM -0800, can you guess? wrote:
> > How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
>
> Nope.
>
> A decent RAID-5 hardware implementation has no 'write hole' to worry about,
> and one can make a software implementation similarly robust