On Jun 12, 2008, at 12:46 PM, Chris Siebenmann wrote:
>
> Or to put it another way: disk space is a permanent commitment,
> servers are not.
In the olden times (e.g. 1980s) on various CDC and Univac timesharing
services, I recall there being two kinds of storage ... "dayfiles"
and permanent files.
...There was a post just this afternoon stating the opensolaris update
track would be back to following sxce with b91, so I haven't a clue
what you're talking about.
As for the features/support they're looking for, if they wanted
enterprise infallible storage, a thumper was the wrong choice day 1.
On Thu, Jun 12, 2008 at 10:12 PM, Tim <[EMAIL PROTECTED]> wrote:
> I guess I find the "difference" between b90 and opensolaris trivial
> given we're supposed to be getting constant updates following the sxce
> builds.
But the supported version of OpenSolaris will not be on the same
schedule as sxce.
I guess I find the "difference" between b90 and opensolaris trivial
given we're supposed to be getting constant updates following the sxce
builds.
On 6/12/08, Mike Gerdts <[EMAIL PROTECTED]> wrote:
> On Thu, Jun 12, 2008 at 9:22 PM, Tim <[EMAIL PROTECTED]> wrote:
>> They aren't even close to each other.
On Thu, Jun 12, 2008 at 9:22 PM, Tim <[EMAIL PROTECTED]> wrote:
> They aren't even close to each other. Things like in-kernel cifs will
> never be put back.
>
> My question is, what is holding you back from just deploying on sxce?
> Sun now offers support for it.
To the best of my knowledge, Sun
They aren't even close to each other. Things like in-kernel cifs will
never be put back.
My question is, what is holding you back from just deploying on sxce?
Sun now offers support for it.
On 6/12/08, Paul B. Henson <[EMAIL PROTECTED]> wrote:
>
> How close is Solaris Express build 90 to what will be released as the
> official Solaris 10 update 6?
On Thu, 2008-06-12 at 17:52 -0700, Paul B. Henson wrote:
> How close is Solaris Express build 90 to what will be released as the
> official Solaris 10 update 6?
>
> We just bought five x4500 servers, but I don't really want to deploy in
> production with U5. There are a number of features in U6 I'd like to have.
How close is Solaris Express build 90 to what will be released as the
official Solaris 10 update 6?
We just bought five x4500 servers, but I don't really want to deploy in
production with U5. There are a number of features in U6 I'd like to have
(zfs allow for better integration with our local id
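(For anyone who hasn't used the delegation feature mentioned above: a minimal
zfs allow sketch, assuming a made-up pool layout and user name, looks
something like this.)

  # create a per-user dataset and delegate the common operations to its owner
  zfs create rpool/home/jsmith
  zfs allow jsmith create,destroy,mount,snapshot rpool/home/jsmith
  # show what has been delegated on that dataset
  zfs allow rpool/home/jsmith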
Rich Teer wrote:
> Hi all,
>
> Booting from a two-way mirrored metadevice created using SVM
> can be a bit risky, especially when one of the drives fails
> (not being able to form a quorum, the kernel will panic).
> Is booting from a mirrored vdev created using ZFS similarly
> iffy? That is, if on
| Every time I've come across a usage scenario where the submitter asks
| for per-user quotas, it's usually a university type scenario where
| universities are notorious for providing lots of CPU horsepower (many,
| many servers) attached to a simply dismal amount of back-end storage.
Speaking as
On Thu, Jun 12, 2008 at 07:31:49PM +0200, Richard Elling wrote:
> Kurt Schreiner wrote:
> > On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
> >
> >> Vincent Fox wrote:
> >>
> >>> So I decided to test out failure modes of ZFS root mirrors.
> >>>
> >>> Installed on a V240 with nv90.
Hi folks,
I have set up a new BE on zfs root, but it does not want to activate. The
server is build 90, x86 (64 bit).
I already have 2 other BE's on UFS/SVM.
When I try to activate the zfs BE it seems OK, but on reboot no zfs BE option
is shown in grub.
I have 2 disks: disk 1 has the 2 SVM metadevices
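(For reference, the usual Live Upgrade sequence for a ZFS boot environment is
roughly the following; the BE and pool names are just placeholders, and the
details may differ on build 90.)

  # create a new BE in the ZFS root pool and mark it as the one to boot next
  lucreate -n zfs-be -p rpool
  luactivate zfs-be
  # check which BE is marked active on reboot, then use init rather than reboot
  lustatus
  init 6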
Kurt Schreiner wrote:
> On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
>
>> Vincent Fox wrote:
>>
>>> So I decided to test out failure modes of ZFS root mirrors.
>>>
>>> Installed on a V240 with nv90. Worked great.
>>>
> >>> Pulled out disk1, then replaced it and attached again, resilvered, all good.
Vincent,
I think you are running into some existing bugs, particularly this one:
http://bugs.opensolaris.org/view_bug.do?bug_id=6668666
Please review the list of known issues here:
http://opensolaris.org/os/community/zfs/boot/
Also check out the issues described on page 77 in this section:
Bo
> On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
>
> pull disk1
> replace
> *resilver*
> pull disk0
> ...
> So the 2 disks should be in sync (due to
> resilvering)? Or is there
> another step needed to get the disks in sync?
That is an accurate summary. I thought
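(Side note: the safe way to confirm the resilver really finished before
pulling the other half of the mirror is just to watch zpool status; the pool
name below is only an example.)

  # wait until the scrub line reports that the resilver has completed
  zpool status rpool
  # an explicit scrub afterwards is a cheap extra sanity check
  zpool scrub rpool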
On Thu, Jun 12, 2008 at 07:28:23AM -0400, Brian Hechinger wrote:
> I think something else that might help is if ZFS were to boot, see that
> the volume it booted from is older than the other one, print a message
> to that effect and either halt the machine or issue a reboot pointing
> at the other
On Thu, Jun 12, 2008 at 07:29:08AM -0700, Rich Teer wrote:
> Hi all,
>
> Booting from a two-way mirrored metadevice created using SVM
> can be a bit risky, especially when one of the drives fails
> (not being able to form a quorum, the kernel will panic).
SVM doesn't panic in that situation. At b
Hi all,
Booting from a two-way mirrored metadevice created using SVM
can be a bit risky, especially when one of the drives fails
(not being able to form a quorum, the kernel will panic).
Is booting from a mirrored vdev created using ZFS similarly
iffy? That is, if one disk in the vdev dies, will
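(For comparison with the SVM procedure: attaching a second disk to a ZFS root
pool and making it bootable looks roughly like this on SPARC; the device names
are placeholders, and on x86 installgrub is used instead of installboot.)

  # attach a second disk to the existing root disk to form a two-way mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # after the resilver, put a boot block on the new disk so either half can boot
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0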
Hi,
After managing to upgrade to snv_90 after a few failed attempts, I was left with
a ton of zfs datasets (see previous post), most of which I've managed to
destroy; however, there's something that stumps me:
NAME                    USED  AVAIL  REFER  MOUNTPOINT
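(In case it helps: the usual way to clean up leftovers like this is along
these lines; the dataset name is made up, and -r removes everything beneath
it as well.)

  # see what is actually there, then remove an unwanted dataset tree
  zfs list -r rpool
  zfs destroy -r rpool/ROOT/old-be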
On Wed, Jun 11, 2008 at 10:43:26PM -0700, Richard Elling wrote:
>
> AFAIK, SVM will not handle this problem well. ZFS and Solaris
> Cluster can detect this because the configuration metadata knows
> the time difference (ZFS can detect this by the latest txg).
Having been through this myself with
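(If anyone wants to see the txg being compared, the active uberblock can be
dumped with zdb; the pool name is only an example, and zdb output is not a
stable interface.)

  # print the active uberblock, which includes the pool's current txg and timestamp
  zdb -u rpool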
On Thu, Jun 12, 2008 at 06:38:56AM +0200, Richard Elling wrote:
> Vincent Fox wrote:
> > So I decided to test out failure modes of ZFS root mirrors.
> >
> > Installed on a V240 with nv90. Worked great.
> >
> > Pulled out disk1, then replaced it and attached again, resilvered, all good.
> >
> > Now