In addition to the standard "containing the carnage" arguments used to
justify splitting /var/tmp, /var/mail, /var/adm (process accounting
etc.), is there an interesting use case where one would split out /var
for "compression reasons" (as in, turn on compression for /var so that
process accounting,
I'd expect it's the old standard.
If /var/tmp fills up, and it's part of /, then bad things happen.
There are often other places in /var that are writable by users other than
root, and there's always the possibility that something barfs heavily into syslog.
Since the advent of reasonably sized disks, I kno
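(On the compression idea above: a minimal sketch, assuming a root pool named
rpool and a dedicated dataset for /var; the names are illustrative, not
whatever the installer actually creates:

  # give /var its own dataset with compression enabled
  zfs create -o compression=on -o mountpoint=/var rpool/var

  # or, if /var is already its own dataset, just flip the property
  zfs set compression=on rpool/var

Only data written after the property is set gets compressed; existing blocks
are left as they are.)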
Rich Teer wrote:
> On Wed, 4 Jun 2008, Bob Friesenhahn wrote
>> Did you actually choose to keep / and /var combined? Is there any
>>
>
> That's what I'd do...
>
>> reason to do that with a ZFS root since both are sharing the same pool
>> and so there is no longer any disk space advantage
On Wed, 4 Jun 2008, Bob Friesenhahn wrote:
> Did you actually choose to keep / and /var combined? Is there any
That's what I'd do...
> reason to do that with a ZFS root since both are sharing the same pool
> and so there is no longer any disk space advantage? If / and /var are
> not combine
On Wed, 4 Jun 2008, Henrik Johansson wrote:
>
> Anyone know what the deal with /export/home is? I thought /home was
> the default home directory in Solaris?
It seems that they expect you to use the automounter to mount it.
That allows the same automount map to be used for all systems. It has
be
Can someone in the know please provide a recipe to upgrade an nv81 system
(for example) to ZFS root, if possible?
That is, just list the commands step by step for the uninitiated, i.e. for
me.
Uwe
On Thu, Jun 05, 2008 at 12:14:46PM +1000, Nathan Kroenert wrote:
> format -e is your window to cache settings.
Ah ha!
> As for the auto-enabling, I'm not sure, as IIRC, we do different things
> based on disk technology.
>
> eg: IDE + SATA - Always enabled
> SCSI - Disabled by default, unles
format -e is your window to cache settings.
As for the auto-enabling, I'm not sure, as IIRC, we do different things
based on disk technology.
eg: IDE + SATA - Always enabled
SCSI - Disabled by default, unless you give ZFS the whole disk.
I think.
On a couple of my systems, this seems to r
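(A rough illustration of the format -e route; the menu names below are from
memory, so treat them as approximate:

  # format -e
  ... pick the disk from the selection list ...
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable

The SCSI-with-slices case is where this matters most, per the note above.)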
On Wed, Jun 04, 2008 at 09:17:05PM -0400, Ellis, Mike wrote:
> The FAQ document (
> http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
> jumpstart profile example:
Speaking of the FAQ and mentioning the need to use slices, how does that
affect the ability of Solaris/ZFS to automatica
A Darren Dunham <[EMAIL PROTECTED]> writes:
> On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L. Hamilton wrote:
>> How about SPARC - can it do zfs install+root yet, or if not, when?
>> Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
>> have a mirrored pool where zfs owns the
The FAQ document (
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/ ) has a
jumpstart profile example:
install_type initial_install
pool newpool auto auto auto mirror c0t0d0 c0t1d0
bootenv installbe bename sxce_xx
The B90 jumpstart "check" program (SPARC) flags th
On Wed, Jun 04, 2008 at 06:28:58PM -0400, Luke Scharf wrote:
>2. The number s2 is arbitrary. If it were s0, then there would at
> least be the beginning of the list. If it were s3, it would be at
> the end of a 2-bit list, which could be explained historically.
> If it were
On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
> 6)Remove and attached the usb sticks:
>
> zpool status
> pool: myPool
> state: UNAVAIL
> status: One or more devices could not be used because the label is
> missing
> or invalid. There are insufficient replicas for the pool to continue
> f
Tim,
Start at the zfs boot page, here:
http://www.opensolaris.org/os/community/zfs/boot/
Review the information and follow the links to the docs.
Cindy
- Original Message -
From: Tim <[EMAIL PROTECTED]>
Date: Wednesday, June 4, 2008 4:29 pm
Subject: Re: [zfs-discuss] Get your SXCE on Z
Luke Scharf wrote:
>1. Calling s2 a slice makes it a subset of the device, not the whole
> device. But it's the whole device.
>2. The number s2 is arbitrary. If it were s0, then there would at
> least be the beginning of the list. If it were s3, it would be at
> the en
Luke Scharf wrote:
>3. None of the grey-haired Solaris gurus that I've talked to have
> ever been able to explain why.
>
I do realize that older architectures needed some way to record the disk
geometry. But why do it that way?
-Luke
Volker A. Brandt wrote:
> Hmmm... the current scheme seems to be "subject verb ".
> E.g.
>
>disk list
That would work fine for me!
It would also be easy enough to put on a "rosetta stone" type reference
card.
>> [0] Including lspci and lsusb with Solaris would be a great idea --
>>
>
>
On Wed, Jun 4, 2008 at 5:01 PM, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> andrew wrote:
> > With the release of the Nevada build 90 binaries, it is now possible to
> install SXCE directly onto a ZFS root filesystem, and also put ZFS swap onto
> a ZFS filesystem without worrying about having it de
On Jun 5, 2008, at 12:05 AM, Rich Teer wrote:
> On Wed, 4 Jun 2008, Henrik Johansson wrote:
>
>> Anyone know what the deal with /export/home is? I thought /home was
>> the default home directory in Solaris?
>
> Nope, /export/home has always been the *physical* location for
> users' home directorie
On Wed, 4 Jun 2008, Henrik Johansson wrote:
> Anyone know what the deal with /export/home is? I thought /home was
> the default home directory in Solaris?
Nope, /export/home has always been the *physical* location for
users' home directories. They're usually automounted under /home,
though.
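(For reference, the stock arrangement looks roughly like this; the wildcard
map entry is illustrative and assumes home directories live under
/export/home on the host named in the map:

  # /etc/auto_master
  /home    auto_home    -nobrowse

  # /etc/auto_home
  # 'myserver' below is a placeholder host
  *        myserver:/export/home/&

With that in place, a reference to /home/fred transparently mounts
/export/home/fred, and the same map can be pushed to every client.)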
-
andrew wrote:
> With the release of the Nevada build 90 binaries, it is now possible to
> install SXCE directly onto a ZFS root filesystem, and also put ZFS swap onto
> a ZFS filesystem without worrying about having it deadlock. ZFS now also
> supports crash dumps!
>
> To install SXCE to a ZFS r
Works great! I even tested creating boot environments with live
upgrade: fast, easy and elegant!
Anyone know what the deal with /export/home is? I thought /home was
the default home directory in Solaris?
( I put up some screenshots of the installation process for those
interested: http://s
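(For anyone curious what the Live Upgrade step looks like on a ZFS root, a
sketch with an invented BE name:

  # clone the running boot environment (a ZFS snapshot/clone underneath)
  lucreate -n snv_91

  # upgrade or patch the new BE, then switch to it
  luactivate snv_91
  init 6
)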
With the release of the Nevada build 90 binaries, it is now possible to install
SXCE directly onto a ZFS root filesystem, and also put ZFS swap onto a ZFS
filesystem without worrying about having it deadlock. ZFS now also supports
crash dumps!
To install SXCE to a ZFS root, simply use the text-
> I'd like to suggest a name: lsdisk and lspart to list the disks and also
> the disks/partitions that are available. (Or maybe lsdisk should just
> list the disks and partitions in an indented list? Listing the
> partitions is important. Listing the controllers might not hurt
> anything, either
MC wrote:
>> Putting into the zpool command would feel odd to me, but I agree that
>> there may be a useful utility here.
>>
>
> There MAY be a useful utility here? I know this isn't your fight Dave, but
> this tipped me and I have to say something :)
>
> Can we agree that the format command
Very true, didn't mean to derail the topic, sorry. Just trying to
learn as I go!
Thanks again!
On 4-Jun-08, at 3:05 PM, MC wrote:
>> You may all have 'shared human errors', but I don't
>> have that issue
>> whatsoever :) I find it quite interesting the issues
>> that you guys
>> bring up with
> You may all have 'shared human errors', but I don't
> have that issue
> whatsoever :) I find it quite interesting the issues
> that you guys
> bring up with these drives. All manufactured goods
> suffer the same
> pitfalls of production. Would you say that WD and
> Seagate are the
> fro
> Putting into the zpool command would feel odd to me, but I agree that
> there may be a useful utility here.
There MAY be a useful utility here? I know this isn't your fight Dave, but
this tipped me and I have to say something :)
Can we agree that the format command lists the disks it can use
On Wed, 4 Jun 2008, Jeff Bonwick wrote:
> I agree with that. format(1M) and cfgadm(1M) are, ah, not the most
> user-friendly tools. It would be really nice to have 'zpool disks'
> go out and taste all the drives to see which ones are available.
>
> We already have most of the code to do it. 'zp
This link worked for me:
http://www.opensolaris.org/jive/thread.jspa?messageID=190763
Thanks for the info, and that did have some good tips!
I think I might try hacking the label on the log device first. I'll post if I
get something working.
If that works, it might be best to make a bit for bi
Jeff Bonwick wrote:
> I agree with that. format(1M) and cfgadm(1M) are, ah, not the most
> user-friendly tools. It would be really nice to have 'zpool disks'
> go out and taste all the drives to see which ones are available.
>
> We already have most of the code to do it. 'zpool import' already
I agree with that. format(1M) and cfgadm(1M) are, ah, not the most
user-friendly tools. It would be really nice to have 'zpool disks'
go out and taste all the drives to see which ones are available.
We already have most of the code to do it. 'zpool import' already
contains the taste-all-disks-a
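(Until something like 'zpool disks' exists, the closest approximations are
probably these two, offered only as a sketch:

  # print the disk selection list and exit without entering the menu
  format < /dev/null

  # taste unconfigured disks for pools that could be imported
  zpool import

Neither tells you whether a disk is merely unused, which is the gap the
proposed subcommand would fill.)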
You may all have 'shared human errors', but I don't have that issue
whatsoever :) I find it quite interesting the issues that you guys
bring up with these drives. All manufactured goods suffer the same
pitfalls of production. Would you say that WD and Seagate are the
front runners or just
I'm not sure why the URL didn't show correctly in the previous reply ... I'll
try again:
http://www.opensolaris.org/jive/thread.jspa?messageID=190763
If this URL still doesn't show properly, you can track the thread down by
searching the forum with a search for "log" and a user name of
Bob Friesenhahn wrote:
> On Tue, 3 Jun 2008, Dave Miner wrote:
>
>> Putting into the zpool command would feel odd to me, but I agree that
>> there may be a useful utility here.
>>
>
> There is value to putting this functionality in zpool for the same
> reason that it was useful to put 'ios
I experienced a similar problem some time ago and I received some potentially
useful information from Eric Schrock ... the details are in a thread that can
be found at:
http://www.opensolaris.org/jive/thread.jspa?messageID=190763
... good luck ... Bill
On Jun 4, 2008, at 10:40 AM, Brad Diggs wrote:
>
> At this point, the only way in which I can free up sufficient
> space to remove either file is to first remove the snapshot.
Can't you just truncate a large file or two?
Sadly I lack the time to try your example right now, but I'd have
guess
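(The truncation idea in shell terms, with a made-up path; it only helps for
blocks that are not also pinned by a snapshot, which is exactly the rub here:

  # truncate in place instead of unlinking
  cp /dev/null /tank/fs/bigfile
)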
After having to reset my i-ram card, I can no longer import my raidz pool on
2008.05.
Also trying to import the pool using the zpool.cache causes a kernel panic on
2008.05 and B89 (I'm waiting to try B90 when released).
So I have 2 options:
* Wait for a release that can import after log failure
Update:
The BCME drivers & solution from this thread worked, I think:
http://opensolaris.org/jive/thread.jspa?messageID=195224
At least, it says the broadcom is "up" -- I just don't have it in a location
where I can get an IP and outside access to test it, yet.
I had some intermittent video issue
On Jun 4, 2008, at 10:47 AM, Bill Sommerfeld wrote:
>
> On Wed, 2008-06-04 at 11:52 -0400, Bill McGonigle wrote:
>> but we got one server in
>> where 4 of the 8 drives failed in the first two months, at which
>> point we called Seagate and they were happy to swap out all 8 drives
>> for us. I s
I dropped logicsupply a note asking if they had any specifics on the backplane
used in my case. My guess is that it just isn't compatible with opensolaris --
I have disks in the bays, but they show up empty in the OS. If I connect one
of the drives directly to the mobo it shows up fine.
Chanc
On Wed, 2008-06-04 at 11:52 -0400, Bill McGonigle wrote:
> but we got one server in
> where 4 of the 8 drives failed in the first two months, at which
> point we called Seagate and they were happy to swap out all 8 drives
> for us. I suspect a bad lot, and even found some other complaints
Hello,
A customer recently brought to my attention that ZFS can get
into a situation where the filesystem is full but no files
can be removed. The workaround is to remove a snapshot and
then you should have enough free space to remove a file.
Here is a sample series of commands to reproduce th
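(The sample commands are cut off above; a hypothetical reconstruction of the
general shape, with invented names, would be something like:

  mkfile 100m /var/tmp/vdev0
  zpool create demo /var/tmp/vdev0
  mkfile 40m /demo/file1
  zfs snapshot demo@snap        # pins file1's blocks
  mkfile 60m /demo/file2        # expected to error out when the pool fills
  rm /demo/file1                # can fail with ENOSPC: freeing it is blocked
                                # by the snapshot, and the copy-on-write
                                # delete itself needs a little free space
  zfs destroy demo@snap         # the workaround described above
  rm /demo/file1                # now succeeds

Whether rm actually fails depends on the build and on how full the pool ends
up, so treat this as a sketch rather than a guaranteed reproducer.)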
Tobias Hintze wrote:
> hi list,
>
> initial situation:
>
> SunOS alusol 5.11 snv_86 i86pc i386 i86pc
> SunOS Release 5.11 Version snv_86 64-bit
>
> 3 USB HDDs on 1 USB hub:
>
> zpool status:
>
> state: ONLINE
> NAME STATE READ WRITE CKSUM
> usbpool     ONLINE
On Jun 4, 2008, at 11:59, Darryl wrote:
> Good thing I asked about the Seagates... I always thought they
> were highly regarded. I know that everyone has their own opinions,
> but this is still quite informative. I've always been partial to
> Seagate and WD, for no other reason than, tha
Good thing I asked about the Seagates... I always thought they were highly
regarded. I know that everyone has their own opinions, but this is still quite
informative. I've always been partial to Seagate and WD, for no other reason
than that's all I've had :)
On Tue, 3 Jun 2008, Dave Miner wrote:
>
> Putting into the zpool command would feel odd to me, but I agree that
> there may be a useful utility here.
There is value to putting this functionality in zpool for the same
reason that it was useful to put 'iostat' and other "duplicate"
functionality i
On Jun 4, 2008, at 11:40, Luke Scharf wrote:
> All drives suck. Use RAID. :-)
So true. I've been using Seagate Barracudas mostly because of their
warranty, and they've been generally good, but we got one server in
where 4 of the 8 drives failed in the first two months, at which
point we
Darryl wrote:
> Is there a consensus that seagate barracuda drives are worthwhile, stable,
> etc...?
>
All drives suck. Use RAID. :-)
-Luke
On Wed, Jun 04, 2008 at 02:12:10PM +0200, Tobias Hintze wrote:
[...]
>
> 2) how do i get rid of the metadata errors?
>
scrubbing fixed the metadata errors.
th
--
Tobias Hintze // threllis GmbH
http://threllis.de/impressum/
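(For anyone who lands in the same spot, the scrub itself is just, using the
pool name from this thread:

  zpool scrub usbpool
  # wait for it to complete, then check that the errors are gone
  zpool status -v usbpool
)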
Is there a consensus that seagate barracuda drives are worthwhile, stable,
etc...?
P.S. the ST31000640SS drives, together with the LSI SAS 3800x
controller (in a 64-bit 66MHz slot) gave me, using dd with
a block size of either 1024k or 16384k (1MB or 16MB) and a count
of 1024, a sustained read rate that worked out to a shade over 119MB/s,
even better than the nominal "sustained t
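(For reference, the kind of measurement described above would look something
like this; the device path is a placeholder:

  # raw sequential read, 1 MB blocks, 1 GB total
  ptime dd if=/dev/rdsk/c0t0d0s2 of=/dev/null bs=1024k count=1024

Divide the bytes transferred by the elapsed time ptime reports to get the
MB/s figure.)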
> On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L.
> Hamilton wrote:
> > How about SPARC - can it do zfs install+root yet,
> or if not, when?
> > Just got a couple of nice 1TB SAS drives, and I
> think I'd prefer to
> > have a mirrored pool where zfs owns the entire
> drives, if possible.
> > (
> Hi, I know there is no single-command way to shrink a
> zpool (say evacuate the data from a disk and then
> remove the disk from a pool), but is there a logical
> way? I.e. mirror the pool to a smaller pool and then
> split the mirror? In this case I'm not talking about
> disk size (moving from
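(There is no in-place shrink today, so the practical route is to copy
everything to a second, smaller pool and retire the old one. A sketch using
zfs send/receive rather than mirror-splitting, with invented pool names:

  zfs snapshot -r bigpool@move
  zfs send -R bigpool@move | zfs receive -F -d smallpool
  # verify the data on smallpool, then
  zpool destroy bigpool
)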
hi list,
initial situation:
SunOS alusol 5.11 snv_86 i86pc i386 i86pc
SunOS Release 5.11 Version snv_86 64-bit
3 USB HDDs on 1 USB hub:
zpool status:
state: ONLINE
NAME STATE READ WRITE CKSUM
usbpool     ONLINE       0     0     0
mirror ON