On Wed, 7 Feb 2007, Jerry Jelinek wrote:
> Just to be clear, both live-upgrade and the mini-root upgrade
> do not yet know about zfs so if you place your zones on zfs,
> you won't be able to do either style of upgrade until that is fixed.
Understood; that won't be an issue for me.
Many thanks fo
Richard Elling wrote:
In the disk, at the disk block level, there is fairly substantial ECC.
Yet, we still see data loss. There are many mechanisms at work here. One
that we have studied to some detail is superparamagnetic decay -- the
medium wishes to decay to a lower-energy state, losing info
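This is exactly the kind of silent decay that ZFS's end-to-end checksums are meant to catch. As a rough illustration (the pool name is just an example), a scrub re-reads every allocated block and verifies it against its checksum:

  # walk every allocated block in the pool and verify its checksum
  zpool scrub tank

  # report any read/write/checksum errors; -v lists affected files, if any
  zpool status -v tank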
Hello Kevin,
Wednesday, February 7, 2007, 9:39:35 PM, you wrote:
KB> Hello, I am currently testing the beauty of ZFS. I have installed
KB> OpenSolaris on a spare server to test NFS exports. After creating
KB> tank1 with zpool and a sub-filesystem, tank1/nfsshare, with zfs, I
KB> have set the option sharenfs=on on tank1/nfsshare.
Hello Kory,
Wednesday, February 7, 2007, 9:03:38 PM, you wrote:
KW> What are the necessary steps to troubleshoot a degraded disk, and also
KW> what are the steps for replacing a disk in a ZFS mirrored pool?
KW> I have an identical disk, but it has a UFS filesystem on it (not
KW> used
On Wed, 7 Feb 2007, Jerry Jelinek wrote:
> This is incorrect. All S10 updates have supported upgrading systems
> with zones. I believe what you are thinking of is that live-upgrade
> does not support upgrading systems with zones. This is being
> fixed in the next S10 update. It is already fixed in nevada.
Rich Teer wrote:
Excellent; disk space won't be an issue for me, nor will the
non-live-upgradability, so I'll be putting my zone roots on
ZFS.
Rich,
Just to be clear, both live-upgrade and the mini-root upgrade
do not yet know about zfs so if you place your zones on zfs,
you won't be able to do either style of upgrade until that is fixed.
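For anyone who decides to put zone roots on ZFS anyway, accepting the upgrade caveat above, a minimal sketch follows; the pool, dataset, and zone names are made up:

  # create a dataset to hold the zone root; the zonepath must be mode 700
  zfs create tank/zones
  zfs create tank/zones/myzone
  chmod 700 /tank/zones/myzone

  # configure the zone with its zonepath on that dataset, then install and boot it
  zonecfg -z myzone "create; set zonepath=/tank/zones/myzone; commit"
  zoneadm -z myzone install
  zoneadm -z myzone boot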
Jerry Jelinek wrote:
John Clingan wrote:
This is incorrect. All S10 updates have supported upgrading systems
with zones. I believe what you are thinking of is that live-upgrade
does not support upgrading systems with zones. This is being
fixed in the next S10 update. It is already fixed in nevada.
> Thanks for the input Darren, but I'm still confused about DNODE
> atomicity ... it's difficult to imagine that a change that is made
> anyplace in the zpool would require copy operations all the way back
> up to the uberblock (e.g. if some single file in one of many file
> systems in a zpool was
Thanks for the input Darren, but I'm still confused about DNODE atomicity ...
it's difficult to imagine that a change that is made anyplace in the zpool
would require copy operations all the way back up to the uberblock (e.g. if
some single file in one of many file systems in a zpool was suddenl
roland wrote:
We've considered looking at porting the AOE _server_ module to Solaris,
especially since the Solaris loopback driver (/dev/lofi) is _much_ more
stable than the loopback module in Linux (the Linux loopback module is a
stellar piece of crap).
ok, it's quite old and probably no
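For anyone who hasn't used it, lofi on Solaris looks roughly like this (the backing file name and size are arbitrary):

  # back a block device with an ordinary file
  mkfile 512m /export/aoe-backing.img
  lofiadm -a /export/aoe-backing.img     # prints the new device, e.g. /dev/lofi/1

  # the lofi device can then be used like any other disk
  newfs /dev/rlofi/1
  mount /dev/lofi/1 /mnt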
sigh, ZFS bits *still* not identified in /etc/magic.
bug 6446509 open since july 16...
thumper# file /ocean/backup/[EMAIL PROTECTED]
/ocean/backup/[EMAIL PROTECTED]:data
[should have said "ZFS snapshot stream"]
--
ozan s. yigit | [EMAIL PROTECTED] | o: 416-348-1540
if you want to have your
Hello, I am currently testing the beauty of ZFS. I have installed OpenSolaris on a
spare server to test NFS exports. After creating tank1 with zpool and a
sub-filesystem, tank1/nfsshare, with zfs, I have set the option sharenfs=on on
tank1/nfsshare.
With Mac OS X as a client I can mount the filesystem in
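For reference, a rough sequence for the server side and a Mac client; the server hostname and mount point below are placeholders:

  # on the OpenSolaris server: enable NFS sharing on the dataset and verify it
  zfs set sharenfs=on tank1/nfsshare
  zfs get sharenfs tank1/nfsshare
  svcs nfs/server     # make sure the NFS server service is online
  share               # the dataset should appear in the share list

  # on the Mac OS X client: mount the exported filesystem
  mkdir /tmp/nfsshare
  mount -t nfs server:/tank1/nfsshare /tmp/nfsshare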
John Clingan wrote:
This is incorrect. All S10 updates have supported upgrading systems
with zones. I believe what you are thinking of is that live-upgrade
does not support upgrading systems with zones. This is being
fixed in the next S10 update. It is already fixed in nevada.
Which Nevada
What are the necessary steps to troubleshoot a degraded disk, and also what are
the steps for replacing a disk in a ZFS mirrored pool?
I have an identical disk, but it has a UFS filesystem on it (not used for any
purpose); can I format the disk and then make this a replacement in the ZFS
mirrored pool?
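You don't need to newfs or format anything first; zpool replace relabels the disk, so the old UFS filesystem on it doesn't matter. In outline, something like this (device names are placeholders):

  # see which pool and device are degraded and why
  zpool status -xv

  # check the fault management log for the underlying errors
  fmdump -eV | tail

  # swap in the spare disk; ZFS relabels and resilvers it automatically
  zpool replace tank c1t1d0 c2t1d0

  # watch the resilver finish, then clear the old error counters
  zpool status tank
  zpool clear tank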
Rich,
Rich Teer wrote:
Hi all,
Last time I checked, having one's zone roots (zonepaths) on
ZFS file systems was not a recommended practice, despite the
fact that this works. IIRC, the problem was that the upgrade
code didn't grok zfs and would therefore get terribly confused
should the zone roots reside on ZFS.
Hi all,
Last time I checked, having one's zone roots (zonepaths) on
ZFS file systems was not a recommended practice, despite the
fact that this works. IIRC, the problem was that the upgrade
code didn't grok zfs and would therefore get terribly confused
should the zone roots reside on ZFS.
Howeve
Sorry, that's dd from /dev/zero to /dev/null
I think there's an issue with my SATA card
On 2/7/07, Bart Smaalders <[EMAIL PROTECTED]> wrote:
Tom Buskey wrote:
>> Tom Buskey wrote:
>>> As a followup, the system I'm trying to use this on
>> is a dual PII 400 with 512MB. Real low budget.
>
>> Hm
Tom Buskey wrote:
Tom Buskey wrote:
As a followup, the system I'm trying to use this on
is a dual PII 400 with 512MB. Real low budget.
Hmm... that's lower than I would have expected.
Something is likely wrong. These machines do have very limited memory.
How fast can you DD from the raw device?
> Tom Buskey wrote:
> > As a followup, the system I'm trying to use this on
> is a dual PII 400 with 512MB. Real low budget.
>
> Hmm... that's lower than I would have expected.
> Something is likely wrong. These machines do have very limited memory.
> How fast can you DD from the raw device?
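Something along these lines (the device name is a placeholder; use the raw rdsk device so the filesystem cache isn't involved, and a large block size):

  # raw sequential read from the disk, about 1 GB, timed
  ptime dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=1000

  # for comparison, a pure memory test with no disk involved
  ptime dd if=/dev/zero of=/dev/null bs=1024k count=1000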
> ZFS documentation lists snapshot limits on any single file system in a
> pool at 2**48 snaps, and that seems to logically imply that a snap on
> a file system does not require an update to the pool's
> currently active uberblock.
All committed changes (including snapshot creation) require a new uberblock.
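One way to see this for yourself (pool and filesystem names are made up; zdb is a diagnostic tool, so the output details vary by build):

  # note the transaction group in the currently active uberblock
  zdb -u tank

  # commit a change -- creating a snapshot is enough
  zfs snapshot tank/fs@demo

  # the active uberblock now points at a newer txg
  zdb -u tank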
Robert Milkowski writes:
> Hello Jonathan,
>
> Tuesday, February 6, 2007, 5:00:07 PM, you wrote:
>
> JE> On Feb 6, 2007, at 06:55, Robert Milkowski wrote:
>
> >> Hello zfs-discuss,
> >>
> >> It looks like when zfs issues write cache flush commands, the se3510
> >> actually honors it. I
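The workarounds usually mentioned for this are either configuring the array itself to ignore SYNCHRONIZE CACHE (its write cache is battery-backed), or, on builds that include the zfs_nocacheflush tunable, stopping ZFS from issuing the flushes at all. A sketch of the latter, only safe when the array's write cache really is non-volatile:

  * /etc/system fragment -- disables ZFS cache-flush requests pool-wide
  set zfs:zfs_nocacheflush = 1
  * a reboot is required for /etc/system changes to take effect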