On 04/06/11 01:05 PM, Richard Elling wrote:
On Apr 6, 2011, at 12:01 PM, Linder, Doug wrote:
Torrey Minton wrote:
I'm sure someone has a really good reason to keep /var separated, but those cases
are fewer and farther between than I saw 10 years ago.
I agree that the causes and repercussions a
Richard Elling wrote:
> Malachi de Ælfweald wrote:
>
>> I have to say, looking at that confuses me a little. How can the two
>> disks be mirrored when the partition tables don't match?
>>
>
> Welcome to ZFS! In traditional disk mirrors,
> disk A block 0 == disk B block 0
> disk A block 1 == disk B block 1, and so on. ZFS mirrors at the vdev
> level rather than block-for-block, so the partition tables of the two
> disks need not match.
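A minimal sketch of this, assuming hypothetical devices c0t0d0s0 and
c0t1d0s0: ZFS pairs whole top-level vdevs rather than raw blocks, so a
mirror can be built from differently partitioned disks:

    zpool attach rpool c0t0d0s0 c0t1d0s0   # attach a second device to form a mirror
    zpool status rpool                     # both devices sit under one "mirror" vdev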
Luca Morettoni reported a similar behavior (i.e. a perfectly running
system that drops into grub on reboot) on indiana-discuss. I wonder if
the issue is that installgrub is updating the MBR on only one disk. If the
second disk does not have an updated grub menu, that would explain what
you are seeing.
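If that is what is happening, re-running installgrub against the second
disk should bring its boot loader current; a typical invocation, assuming
the second disk is c0t1d0s0:

    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0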
W. Wayne Liauh wrote:
> I installed OS 2008.05 onto a USB HD (WD Passport), and was able to boot from
> it (knock on wood!).
>
> However, when plugged into a different machine, I am then unable to boot from
> it.
>
> Is there any permission issue that I must address on this ZFS HD before I can
> boot from it on a different machine?
Mike Gerdts wrote:
> On Wed, Jul 2, 2008 at 10:08 AM, David Magda <[EMAIL PROTECTED]> wrote:
>
>> Quite often swap and dump are the same device, at least in the
>> installs that I've worked with, and I think the default for Solaris
>> is that if dump is not explicitly specified it defaults to swap.
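To see how dump and swap are laid out on a given system, the standard
tools are swap(1M) and dumpadm(1M); the dedicated dump slice in the last
line below is only a hypothetical example:

    swap -l                        # list configured swap devices
    dumpadm                        # "(swap)" in the output means dump shares the swap slice
    dumpadm -d /dev/dsk/c0t1d0s1   # optionally move dump to a dedicated slice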
On Sep 20, 2007, at 12:55 AM, Tore Johansson wrote:
> Hi,
>
> I am running Solaris 10 on UFS and the rest on ZFS. Now the Solaris
> disk has crashed.
> How can I recover the other ZFS disks?
> Can I reinstall solaris and recreate the zfs systems without data
> loss?
Zpool import is your friend.
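A minimal sketch of the recovery, assuming the surviving pool is named
"tank": reinstall Solaris on a new disk, then let ZFS rediscover the pool
from its on-disk labels:

    zpool import          # scan attached devices for importable pools
    zpool import tank     # import a pool found by the scan
    zpool import -f tank  # force the import if the pool was never exported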
I am trying to NFS-share a ZFS file system using zfs share.
I get an error saying:
cannot share 'sump/install_image': legacy share
I have not set a legacy mountpoint on this dataset. Here's what zfs get shows:
NAME                PROPERTY    VALUE                SOURCE
sump/install_image  mountpoint  /sump/install_image  default
Thanks
-Sanjay
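That error usually means the dataset's sharenfs property is off, so zfs
share defers to legacy share(1M)/dfstab handling. Assuming the property
has simply never been set, this should clear it:

    zfs get sharenfs sump/install_image     # expect "off"
    zfs set sharenfs=on sump/install_image  # let ZFS manage the NFS share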
SVM did RAID 0+1, i.e. it mirrored entire striped sub-mirrors. However,
SVM mirroring did not incur the problem that Richard alludes to: a
single disk failure in a sub-mirror did not take down the entire
sub-mirror, because the reads and writes were smart and acted as though
it were a RAID 1+0.
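As a sketch of the 0+1 layout being described, with hypothetical
metadevice and slice names:

    metainit d11 1 2 c0t0d0s0 c0t1d0s0   # first sub-mirror: a two-disk stripe
    metainit d12 1 2 c0t2d0s0 c0t3d0s0   # second striped sub-mirror
    metainit d10 -m d11                  # create the mirror from one sub-mirror
    metattach d10 d12                    # attach the second sub-mirror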
Darren Dunham wrote:
I know that VxVM stores the "autoimport" information on the disk
itself. It sounds like ZFS doesn't and it's only in the cache (is this
correct?)
I'm not sure what 'autoimport' is, but ZFS always stores enough
information on the disks to open the pool, provided all the devices are
available.
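You can see that on-disk configuration directly: every ZFS device
carries redundant vdev labels embedding the pool config, and zdb(1M)
will print them (the device name here is hypothetical):

    zdb -l /dev/dsk/c0t0d0s0   # dump the vdev labels, including the pool configuration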
I don't believe ZFS toggles the write cache on disks on the fly. Rather,
write caching is enabled on disks which support this functionality.
Then, at appropriate points in the code, an ioctl is called to flush the
cache, thereby providing the appropriate data guarantees.
However, this by no means
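For reference, the per-disk write cache state can be inspected from
format's expert mode on Solaris; the exact menu varies by disk type, but
a typical path looks like:

    format -e                 # expert mode; then select the disk
    format> cache
    cache> write_cache
    write_cache> display      # reports whether the write cache is enabled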
Since it's not exactly clear what you did with SVM I am assuming the
following:
You had a file system on top of the mirror and there was some I/O
occurring to the mirror. The *only* time SVM puts a device into
maintenance is when we receive an EIO from the underlying device. So,
in case
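When a component does land in maintenance, the usual recovery, assuming
the underlying device is healthy again and using hypothetical names:

    metastat d10                  # look for "Needs maintenance" on the component
    metareplace -e d10 c0t0d0s0   # re-enable and resync the component in place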