Sasidhar Kasturi wrote:
> Thank you.
> Is it that the /usr/bin binaries are more advanced than the
> /usr/xpg4/bin ones, or are they extensions of the /usr/xpg4/bin binaries?
They *should* be the same level of "advancement", but each has a
different set of promises and expectations it needs to live up to..
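For readers who haven't hit the split before: /usr/xpg4/bin carries the
XPG4/POSIX-conforming versions of the utilities, while /usr/bin keeps the
traditional SunOS/SVR4 behaviour that existing scripts depend on. A minimal
sketch, following the PATH ordering that standards(5) recommends (nothing
below is from the original mail):

  # Put the standards-conforming utilities first for this script only:
  PATH=/usr/xpg4/bin:/usr/bin:/usr/sbin
  export PATH
  which grep        # should now report /usr/xpg4/bin/grep

Scripts that want the old behaviour simply leave /usr/bin first.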
Nicolas Williams wrote:
> I'm curious as to why you think this
The characteristics of /, /usr and /var are quite different,
from a usage and backup requirements perspective:
/ is read-mostly, but contains critical config data.
/usr is read-only, and
/var (/var/mail, /var/mysql, ...) can be high volume.
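As a hedged illustration of why that split matters (the dataset names such
as rpool/ROOT/sol are invented, not from the thread): separate datasets let
each area carry properties and a snapshot schedule matched to how it behaves:

  zfs set readonly=on rpool/ROOT/sol/usr       # /usr essentially never changes
  zfs set compression=on rpool/ROOT/sol/var    # write-heavy, compresses well
  zfs snapshot rpool/ROOT/sol@weekly           # / is read-mostly; weekly is enough
  zfs snapshot rpool/ROOT/sol/var@hourly       # /var wants frequent snapshots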
Lori Alt wrote:
> I'm not surprised that having /usr in a separate pool failed.
> The design of zfs boot largely assumes that root, /usr, and
> /var are all on the same pool, and it is unlikely that we would
> do the work to support any other configuration any time soon.
This seems, uhm, undesirable.
Many/most of these are available at
http://www.opensolaris.org/os/community/arc/caselog/YYYY/CCC
replacing YYYY/CCC with the case year and number below, as in
http://www.opensolaris.org/os/community/arc/caselog/2007/171
for the 2nd one below. I'm not sure why the first one (2007/142) isn't
there - I'
>> It seems to me that the URL above refers to the publishing
>> materials of *historical* cases. Do you think the case in hand
>> should be considered historical?
In this context, "historical" means any case that was not originally
"open", and so cannot be presumed to be clear of any proprietary information.
Mark J Musante wrote:
Note that if you use the recursive snapshot and destroy, only one line is
My "problem" (and it really is /not/ an important one) was that
I had a cron job that every minute did
min=`date "+%d"`
snap="$pool/[EMAIL PROTECTED]"
zfs destroy "$snap"
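For context, a hedged sketch of what such a rolling-snapshot job usually
looks like (the dataset name tank/home and the minute-based snapshot names
are assumptions, not the poster's actual script):

  #!/bin/sh
  # Keep a rolling hour's worth of per-minute snapshots of one dataset.
  fs=tank/home
  min=`date "+%M"`                     # current minute, 00-59
  snap="$fs@$min"
  zfs destroy "$snap" 2>/dev/null      # drop the snapshot from an hour ago, if any
  zfs snapshot "$snap"                 # and take a fresh one in its place

Run with zfs destroy -r / zfs snapshot -r instead and the same two commands
cover every child dataset, which is the recursive behaviour Mark refers to above.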
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/prune the
log as then it becomes unreliable - ooops i made a mistake, i better
clear the log and file the bug against zfs
I understand - auditing means never getting to blame someone else :-)
There are th
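For anyone who hasn't met it, the log in question is the pool's permanent
command history, readable with zpool history. A quick illustration (the pool
name and output lines are made up):

  # zpool history tank
  History for 'tank':
  2007-06-12.09:15:32 zpool create tank raidz c0t1d0 c0t2d0 c0t3d0
  2007-06-12.09:16:05 zfs create tank/home
  2007-06-12.09:20:41 zfs snapshot -r tank@backup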
My script sends snapshots that haven't already been sent, so that I
could do the initial time-intensive copies while the system was still
in use and only have to do a faster "resync" while down in
single-user mode.
It isn't pretty (it /is/ a perl script) but it worked :-)
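The same idea boiled down to its two zfs send/receive steps (a hedged
sketch, not the perl script itself; dataset names are invented):

  #!/bin/sh
  src=tank/home
  dst=backup/home

  # Initial, slow full copy while the machine is still in multi-user:
  zfs snapshot $src@base
  zfs send $src@base | zfs receive -F $dst

  # Later, down in single-user mode, only the changes since @base move:
  zfs snapshot $src@final
  zfs send -i $src@base $src@final | zfs receive $dst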
Why not simply have an SMF sequence that does the following?
Early in boot, after / and /usr are mounted:
  create /etc/nologin (contents="coming up, not ready yet")
  enable login
Later in boot, when user filesystems are all mounted:
  delete /etc/nologin
Wouldn't this give the
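A hedged sketch of those two steps as SMF start methods (the service names,
dependencies and manifests are assumed, not part of the original suggestion);
login(1) refuses non-root logins while /etc/nologin exists, which is what
makes the trick work:

  #!/bin/sh
  # Shared method script for two hypothetical services:
  #   early - depends only on / and /usr being mounted
  #   late  - depends on all user filesystems being mounted
  case "$1" in
  early)
          # Block logins until the rest of the filesystems arrive.
          echo "coming up, not ready yet" > /etc/nologin
          ;;
  late)
          # Everything is mounted; let users back in.
          rm -f /etc/nologin
          ;;
  esac
  exit 0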
Thru a sequence of good intentions, I find myself with a raidz'd
pool that has a failed drive that I can't replace.
We had a generous department donate a fully configured V440 for
use as our departmental server. Of course, I installed SX/b56
on it, created a pool with 3x 148Gb drives and made a