Sriram,
"sharenfs" is an inherited property.
It looks like in your case you set "sharenfs=on" on datapool/vmwarenfs
after the underlying filesystems were created.
If you had set "sharenfs=on" before creating the underlying filesystems,
then the property would have been inherited by the child filesystems.
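For what it's worth, a minimal sketch of the ordering that gives you inheritance
(the child name "vm1" is just an example):

    zfs create datapool/vmwarenfs
    zfs set sharenfs=on datapool/vmwarenfs    # set on the parent first
    zfs create datapool/vmwarenfs/vm1         # child picks up sharenfs=on
    zfs get -r sharenfs datapool/vmwarenfs    # SOURCE column shows "inherited from datapool/vmwarenfs"

If a child already carries a local value, "zfs inherit sharenfs <child>" should
clear it so the parent's setting takes effect.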
Sriram,
On Sun, Feb 08, 2009 at 12:04:22AM +0530, Sriram Narayanan wrote:
> From the presentation "ZFS - The last word in filesystems", Page 22
> "In a multi-disk pool, ZFS survives any non-consecutive disk failures"
>
> Questions:
> If I have a 3 disk RAIDZ with disks A, B and C, then:
>
> I've always felt squeamish when I had to move boxes with spinning
> disks,
> or when I had to watch someone else do it. Thanks for justifying my
> paranoia... and good luck with the replacement drives.
This reminds me of a story. Many years ago a friend of mine had to move
some servers from
On Feb 8, 2009, at 16:12, Vincent Fox wrote:
> Do you think having log on a 15K RPM drive with the main pool
> composed of 10K RPM drives will show worthwhile improvements? Or am
> I chasing a few percentage points?
Another important question is whether it would be sufficient to
purchase o
On Sun, Feb 8, 2009 at 22:12, Vincent Fox wrote:
> Thanks I think I get it now.
>
> Do you think having log on a 15K RPM drive with the main pool composed of 10K
> RPM drives will show worthwhile improvements? Or am I chasing a few
> percentage points?
>
> I don't have money for new hardware &
Thanks I think I get it now.
Do you think having log on a 15K RPM drive with the main pool composed of 10K
RPM drives will show worthwhile improvements? Or am I chasing a few percentage
points?
I don't have money for new hardware & SSD. Just recycling some old components
here and there a
On Sun, 8 Feb 2009, Andrew Gabriel wrote:
>
> Just thinking out loud here, but given such a disk (i.e. one which is
> bigger than required), I might be inclined to slice it up, creating a
> slice for the log at the outer edge of the disk. The outer edge of the
> disk has the highest data rate, and
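If you go that route, it should boil down to something like this (pool and
device names are made up, with slice s0 laid out in format(1M) on the low
cylinders, which normally map to the outer tracks):

    zpool add tank log c1t2d0s0    # dedicated outer-edge slice as the intent log
    zpool status tank              # the slice shows up under a separate "logs" section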
You were just lucky before and unlucky now.
I had a PC back in the Pentium-133 days go CRASH because I moved it too
roughly while the drive was spinning.
I've moved many PCs in my life with the drive spinning, no problems, but I don't
COUNT on it and I avoid it if humanly possible. Don't people do it a
Neil Perrin wrote:
> On 02/08/09 11:50, Vincent Fox wrote:
>
>> So I have read in the ZFS Wiki:
>>
>> # The minimum size of a log device is the same as the minimum size of each
>> device in a pool, which is 64 Mbytes. The amount of in-play data that might
>> be stored on a log device is relatively small.
On 02/08/09 11:50, Vincent Fox wrote:
> So I have read in the ZFS Wiki:
>
> # The minimum size of a log device is the same as the minimum size of each
> device in a pool, which is 64 Mbytes. The amount of in-play data that might be
> stored on a log device is relatively small. Log blocks are freed when the log
> transaction (system call) is committed.
So I have read in the ZFS Wiki:
# The minimum size of a log device is the same as the minimum size of each
device in a pool, which is 64 Mbytes. The amount of in-play data that might be
stored on a log device is relatively small. Log blocks are freed when the log
transaction (system call) is committed.
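In practice that means even a small dedicated device is plenty; something along
these lines (pool and device names hypothetical):

    zpool add tank log c2t0d0      # device only needs to be >= 64 MB
    zpool iostat -v tank 5         # watch how little of the log is ever allocated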
Yesterday I gained my first experience with installing Solaris 10U6 on
a SPARC workstation with ZFS boot. I had one non-Sun disk (c0t1d0)
which was previously used in a Solaris 10U5 install and already had a
ZFS partition. Another non-Sun disk (c0t0d0) was brand new. It was
my intention t
On Sat, Feb 7, 2009 at 1:55 AM, Will Murnane wrote:
> On Thu, Jan 29, 2009 at 23:00, Will Murnane wrote:
> > *sigh* The 9010b is ordered. Ground shipping, unfortunately, but
> > eventually I'll post my impressions of it.
> Well, the drive arrived today. It's as nice-looking as it appears in
On Feb 8, 2009, at 09:30, Volker A. Brandt wrote:
>> yes, you've guessed it, the drive errors originated when the box was
>> moved. A zpool scrub generated thousands of errors on the damaged
>> drive. Now it's offline. Al is sad. :(
>
> [...]
>
>> Just a heads up - it might just help someone else on the list who has
>> developed bad habits over the years.
> yes, you've guessed it, the drive errors originated when the box was
> moved. A zpool scrub generated thousands of errors on the damaged
> drive. Now it's offline. Al is sad. :(
[...]
> Just a heads up - it might just help someone else on the list who has
> developed bad habits over the years.
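For anyone wanting to check a box after a move, the verification is just
(pool name hypothetical):

    zpool scrub tank
    zpool status -v tank    # scrub progress plus per-device read/write/checksum error counts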
Thank you. I will spread the word.
Too bad. I will follow this thread. I, and others, hope you find a solution. We
would like to hear about this setup.
On Sun, Feb 8, 2009 at 1:29 AM, Frank Cusack wrote:
>
> what mirror? there is no mirror. you have a raidz. you can have 1
> disk failure.
Thanks for the correction. I was thinking RAIDZ, but typed "mirror". I
have only RAIDZs on my servers.
>
>> - if disks a and c fail, then I will be able
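For reference, a minimal sketch of the pool being discussed (device names
hypothetical): a 3-disk raidz carries single parity, so it survives any one
disk failing, but not two.

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
    zpool status tank    # with one disk FAULTED the pool runs DEGRADED; lose a second and the pool is gone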
On Sun, Feb 8, 2009 at 1:56 AM, Peter Tribble wrote:
> No. That quote is part of the discussion of ditto blocks.
>
> See the following:
>
> http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
>
Thank you, Peter.
-- Sriram
Hi,
I'm aware that when talking about DMP on Solaris the preferred way is to use
MPxIO; still, I wonder whether any of you have experience with ZFS on top of
Veritas DMP.
Does it work? Is it supported? Any real-life experience/tests on this subject?
Regards,
sendai