I would look at what size I/Os you are doing in each case.
I have been playing with a T5240 and got 400Mb/s read and 200Mb/s write speeds
with iozone throughput tests on a 6 disk mirror pool, so the box and ZFS can
certainly push data around - but that was using 128k blocks.
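For reference, a throughput run of roughly that shape can be reproduced with something like the following. The thread count, file size, and paths are illustrative, not the exact parameters used above; pick a total file size well above the machine's RAM so you measure the disks rather than the ARC.

```shell
# Hypothetical iozone throughput run; -r 128k matches the record size
# mentioned above, -i 0 and -i 1 select the sequential write and read
# tests, and -t runs them with multiple threads in parallel.
iozone -t 2 -s 2g -r 128k -i 0 -i 1 -F /pool/f1 /pool/f2
```

One file per thread is named with -F; the per-thread and aggregate throughput figures appear in the report at the end of the run.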
You mention the disk
Thanks Lori - from what I could gather in the bug report about this, the issue
is simply that the profile causes a SEGV in the installer, correct? Is it
possible this might be addressed in a patch in the near future that could be
applied to the installer miniroots? Is it worth raising a case/escal
With much excitement I have been reading about the new features coming to
Solaris 10 in 10/08 and am eager to start playing with ZFS root. However, one
thing which struck me as strange and somewhat annoying is that it appears from
the FAQs and documentation that it's not possible to do a ZFS root insta
Howdy,
I have at several times had issues with consumer-grade PC hardware and ZFS not
getting along. The problem is not the disks but the fact that I don't have ECC
and end-to-end checking on the datapath. What is happening is that random
memory errors and bit flips are written out to disk and when
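For what it's worth, a scrub will at least surface corruption that happened after the checksum was computed (on the wire, in the controller, on the platter). If the bit flip occurred in RAM before ZFS checksummed the block, the damaged data is stored as "correct" and no scrub can catch it, which is exactly the gap non-ECC hardware leaves open. A minimal check, with "tank" as a placeholder pool name:

```shell
# 'tank' is a placeholder pool name.
zpool scrub tank        # re-read every block and verify its checksum
zpool status -v tank    # CKSUM counters and the error list show any damage
```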
thanks for the replies - I imagined it would have been discussed but must have
been searching the wrong terms :)
Any idea on the timeline or future of "zfs split" ?
Cheers,
Adrian
This message posted from opensolaris.org
___
zfs-discuss mailing list
Not sure how technically feasible it is, but something I thought of while
shuffling some files around my home server. My (poor) understanding of ZFS
internals is that the entire pool is effectively a tree structure, with nodes
being either data or metadata. Given that, couldn't ZFS just change a d
When playing with ZFS to try to come up with some standards for using it in
our environment, I also disliked having the pool directory mounted when my
intention was not to use it, but to subdivide the space within it.
Simple fix:
zpool create data
zfs create data/share
zfs create data/oracle
zf
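The last command is truncated, so this is a guess at the intent: one common way to finish the pattern is to leave the top-level dataset unmounted so only the children carry data. Note that the mountpoint property is inherited, so the children then need explicit mountpoints of their own (paths and device names below are illustrative).

```shell
zpool create data mirror c1t0d0 c1t1d0      # device names illustrative
zfs set mountpoint=none data                # keep the pool root unmounted
zfs create data/share
zfs create data/oracle
zfs set mountpoint=/export/share data/share    # children need explicit
zfs set mountpoint=/export/oracle data/oracle  # mountpoints once the root
                                               # is set to none
```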
There are also a number of procmail processes with the following thread stacks:
> 0t1611::pid2proc|::walk thread|::findstack -v
stack pointer for thread d8ae1a00: d3bcbbac
[ d3bcbbac 0xfe826b37() ]
d3bcbbc4 swtch+0x13e()
d3bcbbe8 cv_wait_sig+0x119(da58fb4c, d46a8680)
d3bcbc00 wait_for_lock+0x
Oh - and by instant hang I mean I can reproduce it simply by rebooting,
enabling the dovecot service and then connecting to dovecot with thunderbird.
Hi,
I just upgraded my home box to run Solaris 10 6/06 and converted my previous
filesystems over to ZFS, including /var/mail. Previously on S10 FCS I was
running dovecot mail server from blastwave.org without issue. On upgrading to
Update 2 I have found that the mail server hangs frequently.