Not to be a conspiracy nut, but anyone anywhere could have registered
that gmail account and supplied that answer. It would be a lot more
believable from Mr Kay's Oracle or Sun account.
On 4/20/2010 9:40 AM, Ken Gunderson wrote:
On Tue, 2010-04-20 at 13:57 +0100, Dominic Kay wrote:
Oracle
If he adds the spare and then manually forces a replace, it will take
no more time than any other way. I do this quite frequently, and without
needing the scrub, which does take quite a lot of time.
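
A minimal sketch of that procedure, assuming a pool named tank and
hypothetical device names c1t2d0 (failed disk) and c1t3d0 (new disk):

  zpool add tank spare c1t3d0        # register the new disk as a hot spare
  zpool replace tank c1t2d0 c1t3d0   # force the replace; a resilver starts
  zpool status tank                  # check resilver progress

The resilver copies only allocated blocks, which is why no scrub is
needed beforehand.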
cindy.swearin...@sun.com wrote:
Hi Andreas,
Good job for using a mirrored configuration. :-)
I believe there are a couple of ways that work. The commands I've
always used are to attach the new disk as a spare (if not already) and
then replace the failed disk with the spare. I don't know if there are
advantages or disadvantages, but I have also never had a problem doing
it this way.
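
A hedged follow-up to the sketch above, using the same hypothetical
pool and device names: once the resilver finishes, detaching the failed
disk promotes the spare to a permanent member of the mirror.

  zpool detach tank c1t2d0   # remove the failed disk; the spare takes its place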
A
This may have been mentioned elsewhere and, if so, I apologize for
repeating.
Is it possible your difficulty here is with the Marvell driver and not,
strictly speaking, ZFS? The Solaris Marvell driver has had many, MANY
bug fixes and continues to this day to be supported by IDR patches and
o
Thanks for the suggestion!
We've fiddled with this in the past. Our app uses 32k blocks instead of
8k, and it is data warehousing, so the I/O pattern is mostly long
sequential reads. Changing the blocksize has very little
effect on us. I'll have to look at fsync; hadn't considered t
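
For context, the per-dataset knob being discussed is recordsize; a
minimal sketch, assuming a hypothetical dataset tank/gpdata:

  zfs set recordsize=32k tank/gpdata   # match the application's 32k blocks
  zfs get recordsize tank/gpdata       # verify the current value

Note that recordsize only affects files written after the change;
existing files keep their original block size.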
I work with Greenplum, which is essentially a number of Postgres database
instances clustered together. Being Postgres, the data is held in a lot
of individual files, each of which can be fairly big (hundreds of MB or
several GB) or very small (50MB or less). We've noticed a performance
difference