Hi,
I have hit the synchronous NFS writing wall, just like many people do.
There has also been a lot of discussion here about solutions.
I want to post everything from my recent exploration and fighting, to discuss and share:
1) Using normal SATA SSDs (Intel/OCZ) as the ZIL device. For Intel, just EOLed
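For reference, attaching a dedicated log (slog) device to an existing pool
is a one-liner; the pool and device names below are examples, not from the
original post:

    # Add an SSD as a separate ZIL (slog) device; "tank" and "c2t1d0"
    # are placeholder names -- substitute your own pool and SSD.
    zpool add tank log c2t1d0
    # Confirm the log device appears under its own "logs" section:
    zpool status tank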
Never mind, just found more info on this... should have held back from
asking.
On Mon, May 24, 2010 at 1:26 AM, Thomas Burgess wrote:
> Did this come out?
>
> http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
>
> I was googling, trying to find info about the next release, and ran
Did this come out?
http://cr.opensolaris.org/~gman/opensolaris-whats-new-2010-05/
I was googling, trying to find info about the next release, and ran across
this.
Does this mean it's actually about to come out before the end of the month
or is this something else?
On Sun, May 23, 2010 at 5:00 PM, Andreas Iannou wrote:
> Is it safe or possible to do a zpool replace for multiple drives at once? I
> think I have one of the troublesome WD Green drives, as replacing it has
> taken 39 hrs and only resilvered 58 GB. I have another two I'd like to
> replace, but I'm wo
On May 23, 2010, at 6:05 PM, Chris Dunbar - Earthside, LLC wrote:
> Hello,
>
> I think I know the answer to this, but not being an iSCSI expert I am hoping
> to be pleasantly surprised by your answers. I currently use ZFS plus NFS to
> host a shared VMFS store for my VMware ESX cluster. It's ea
Yes, it requires a clustered filesystem to share a single LUN out to
multiple hosts. VMFS3, however bad an implementation, is in fact a
clustered filesystem. I highly doubt NFS is your problem, though. I'd take
NFS over iSCSI and VMFS any day.
On May 23, 2010 8:06 PM, "Chris Dunbar - Earthside
Hello,
I think I know the answer to this, but not being an iSCSI expert I am hoping to
be pleasantly surprised by your answers. I currently use ZFS plus NFS to host a
shared VMFS store for my VMware ESX cluster. It's easy to set up and high
availability works great since all the ESX hosts see t
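A minimal sketch of the NFS-backed datastore approach the reply above
favors, assuming a pool named tank and an ESX management subnet of
10.0.0.0/24 (both made up):

    # Create a dataset and export it over NFS to the ESX hosts.
    zfs create tank/esx_store
    zfs set sharenfs=rw=@10.0.0.0/24,root=@10.0.0.0/24 tank/esx_store
    # Every ESX host mounts the same export as a shared datastore,
    # so no clustered filesystem (VMFS) is needed on top.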
> Also, I don't have a feel for how replacing more than one drive in a RAIDZ[23]
> affects resilver performance.
That's okay, Erik, you've provided the information I need. This is a RAIDZ
zpool, so I think I'll wait and see how it goes before moving on.
I cried seeing 673 hrs remaining at 7%.
On 5/23/2010 5:00 PM, Andreas Iannou wrote:
Is it safe or possible to do a zpool replace for multiple drives at
once? I think I have one of the troublesome WD Green drives, as
replacing it has taken 39 hrs and only resilvered 58 GB. I have another
two I'd like to replace, but I'm wondering whether
Is it safe or possible to do a zpool replace for multiple drives at once? I
think I have one of the troublesome WD Green drives, as replacing it has taken
39 hrs and only resilvered 58 GB. I have another two I'd like to replace, but
I'm wondering whether I should do that now, as the other is being r
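For anyone following along, a single replacement looks like this; the
device names are placeholders:

    # Swap a suspect drive for a new one and watch the resilver.
    zpool replace tank c1t3d0 c1t6d0
    # "zpool status" shows resilver progress and the time estimate
    # (the 673 hrs / 7% figure quoted above comes from this output).
    zpool status -v tank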
This worked, perfecto! I deleted the zpool.cache, rebooted (losing all my
zpools), and just reimported them; tank came back to me!
Thanks,
Andre
> Date: Fri, 21 May 2010 19:06:41 -0700
> Subject: Re: [zfs-discuss] Tank zpool has tanked out :(
> From: bh...@freaks.com
> To: andreas_wants_the_w...@h
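The recovery described above, spelled out; the cache path is the stock
OpenSolaris location, and the pool name is from the thread:

    # Remove the stale pool cache and reboot.
    rm /etc/zfs/zpool.cache
    reboot
    # After the reboot no pools are configured; scan and re-import.
    zpool import          # lists pools found on the attached disks
    zpool import tank     # brings "tank" back online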
On 5/23/2010 11:49 AM, Richard Elling wrote:
FWIW, the A5100 went end-of-life (EOL) in 2001 and end-of-service-life
(EOSL) in 2006. Personally, I hate them with a passion and would like to
extend an offer to use my tractor to bury the beast :-).
I'm sure I can get some others to help. Can I sm
On May 23, 2010, at 6:01 AM, Demian Phillips wrote:
> On Sat, May 22, 2010 at 11:33 AM, Bob Friesenhahn wrote:
>> On Fri, 21 May 2010, Demian Phillips wrote:
>>
>>> For years I have been running a zpool using a Fibre Channel array with
>>> no problems. I would scrub every so often and dump huge
On Sat, May 22, 2010 at 11:33 AM, Bob Friesenhahn wrote:
> On Fri, 21 May 2010, Demian Phillips wrote:
>
>> For years I have been running a zpool using a Fibre Channel array with
>> no problems. I would scrub every so often and dump huge amounts of
>> data (tens or hundreds of GB) around and it ne
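The workload being described boils down to periodic scrubs plus bulk
copies; a scrub and its status check, with a placeholder pool name:

    # Kick off a scrub and check on it; errors (if any) show up as
    # per-device read/write/checksum counts in the status output.
    zpool scrub tank
    zpool status tank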
> prtconf -v is your friend. Example:
Genius! I've got all my serial numbers now :)
Cheers,
Andre
> Date: Sat, 22 May 2010 17:44:21 +1000
> From: j...@opensolaris.org
> To: andreas_wants_the_w...@hotmail.com
> CC: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] HDD Serial numb
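A hedged sketch of pulling the serial numbers out of that output; the
exact property name varies by driver, so grep loosely:

    # Dump verbose device properties and filter for serial numbers.
    prtconf -v | grep -i serial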
On Sat, May 22, 2010 at 9:08 PM, Thomas Burgess wrote:
> OK, so forcing just basically makes it drop whatever "changes" were made.
> That's what I was wondering... this is what I expected.
If you're doing 'zfs send -i @first tank/foo/b...@second | zfs recv -F
newtank/foo/bar' then 'zfs recv -F' is th
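A sketch of the incremental send/receive under discussion, with made-up
dataset and snapshot names:

    # Send the changes between two snapshots to another pool.
    zfs send -i @first tank/foo/bar@second | zfs recv -F newtank/foo/bar
    # -F forces the target back to its latest matching snapshot first,
    # discarding any local changes made on the receiving side since.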