[zfs-discuss] boot from zfs, mirror config issue
2008-03-26
Thread
Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun Microsystems
Hello, I ran into the following issue when configuring a system with ZFS for root. The restriction is that the root pool can only be either a single disk or a mirror. Trying to set bootfs on a zpool that does not satisfy that criterion fails, which is good. However, when adding a mirr
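The restriction described above can be sketched with the commands involved (pool, dataset, and device names here are illustrative, not from the original post):

```shell
# bootfs is only accepted on a single-disk or mirrored root pool
zpool create rpool mirror c0t0d0s0 c0t1d0s0   # hypothetical devices
zpool set bootfs=rpool/ROOT/s10 rpool         # accepted: pool is a mirror

# on a raidz (or multi-vdev striped) pool the same property is rejected
zpool create badpool raidz c0t2d0 c0t3d0 c0t4d0
zpool set bootfs=badpool/ROOT/s10 badpool     # fails: unsupported pool layout
```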
[zfs-discuss] zpool automount issue
2008-03-26
Thread
Peter Brouwer, Principal Storage Architect, Office of the Chief Technologist, Sun Microsystems
Hello, I ran into an issue with the automount feature of zpool. Normal default behavior is for the pool and filesystems in it to be automatically mounted, unless you set zfs/zpool set mountpoint=[legacy|/] When I used 'export' as pool name I could not get it to automount. Wonder if export is
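A minimal reproduction of the question, assuming a spare disk c1t1d0 (the device name is made up):

```shell
# create a pool named 'export'; its default mountpoint would be /export,
# which typically already exists on Solaris (home directories, shares)
zpool create export c1t1d0

# check whether the pool's root filesystem mounted, and where
zfs get mountpoint,mounted export

# giving it an explicit, non-colliding mountpoint sidesteps the collision
zfs set mountpoint=/pools/export export
```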
Re: [zfs-discuss] Best practices for ZFS plaiding
If you are using 6 Thumpers via iSCSI to provide storage to your zpool and don't use either mirroring or RAIDZ/RAIDZ2 across the Thumpers, if one Thumper goes down then your storage pool is unavailable. I think you want some form of RAID at both levels. This message posted from opensolaris.
Re: [zfs-discuss] Best practices for ZFS plaiding
On Wed, 26 Mar 2008, Tim wrote: > No raid at all. The system should just stripe across all of the LUNs > automagically, and since you're already doing your raid on the thumpers, > they're *protected*. You can keep growing the zpool indefinitely, I'm not > aware of any maximum disk limitation.
Re: [zfs-discuss] Best practices for ZFS plaiding
Larry Lui wrote: > Hello, > I have a situation here at the office I would like some advice on. > > I have 6 Sun Fire x4550(Thumper) that I want to aggregate the storage > and create a unified namespace for my client machines. My plan was to > export the zpools from each thumper as an iscsi targe
Re: [zfs-discuss] Best practices for ZFS plaiding
The issue with not having RAID on the front-end Solaris box is what happens when one of the backend Thumpers dies. I would imagine that the entire zpool would become useless if one of the Thumpers died, since the data would be striped across all of them. Tim wrote: > What you want to do should
Re: [zfs-discuss] Best practices for ZFS plaiding
My definition of a "unified namespace" is to provide the end user with 1 logical mount point which would be comprised of an aggregate of all the thumpers. A very simple example, 6 thumpers (17TB each). I want the end user to see one mount point that is 102TB large. I agree with you that there
Re: [zfs-discuss] Best practices for ZFS plaiding
On Wed, Mar 26, 2008 at 11:04 AM, Larry Lui <[EMAIL PROTECTED]> wrote: > This issue with not having RAID on the front end solaris box is what > happens when 1 of the backend thumpers dies. I would imagine that the > entire zpool would become useless if 1 of the thumpers should die since > the dat
Re: [zfs-discuss] Best practices for ZFS plaiding
Larry Lui wrote: > My definition of a "unified namespace" is to provide the end user with 1 > logical mount point which would be comprised of an aggregate of all the > thumpers. A very simple example, 6 thumpers (17TB each). I want the > end user to see one mount point that is 102TB large. > >
[zfs-discuss] Status of ZFS boot for sparc?
Hey all, I haven't seen any notices about ZFS boot / ZFS root filesystem support for sparc based systems. Please tell me, dear god please tell me, that this hasn't been set aside. I'm really hoping to see it in the next release. Brandon Wilson [EMAIL PROTECTED]
Re: [zfs-discuss] Status of ZFS boot for sparc?
zfs boot support for sparc (included in the overall delivery of zfs boot, which includes install support, support for swap and dump zvols, and various other improvements) is still planned for Update 6. We are working very hard to get it into build 88. Lori Brandon Wilson wrote: > Hey all, > > I
Re: [zfs-discuss] Status of ZFS boot for sparc?
Awesome, thanks Lori! Brandon Wilson [EMAIL PROTECTED]
Re: [zfs-discuss] Status of ZFS boot for sparc?
On Wed, 26 Mar 2008, Lori Alt wrote: > zfs boot support for sparc (included in the overall delivery > of zfs boot, which includes install support, support for > swap and dump zvols, and various other improvements) > is still planned for Update 6. Does zfs boot have any particular firmware depende
Re: [zfs-discuss] Status of ZFS boot for sparc?
zfs boot has no firmware dependencies. It should work on any sparc platform that supports ufs boot of the same release. Lori Bob Friesenhahn wrote: > On Wed, 26 Mar 2008, Lori Alt wrote: > > >> zfs boot support for sparc (included in the overall delivery >> of zfs boot, which includes install
Re: [zfs-discuss] Status of ZFS boot for sparc?
> We are working very hard to get it into build 88. *sigh* Last I heard it was going to be build 86. I saw build 85 come out and thought "GREAT only a couple more weeks!" Oh well.. Will we ever be able to boot from a RAIDZ pool, or is that fantasy?
Re: [zfs-discuss] Best practices for ZFS plaiding
Best option is to stripe pairs of mirrors. So in your case create a pool which stripes over 3 mirrors; this will look like:

pool
  mirror: thumper1 thumper2
  mirror: thumper3 thumper4
  mirror: thumper5 thumper6

So this will stripe over those 3 mi
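Assuming each Thumper exports one iSCSI LUN to the head node, the layout above might be created like this (the device names are placeholders for the six iSCSI LUNs):

```shell
# stripe across three two-way mirrors; losing any one Thumper
# only degrades a mirror instead of taking out the whole pool
zpool create tank \
  mirror c2t1d0 c2t2d0 \
  mirror c2t3d0 c2t4d0 \
  mirror c2t5d0 c2t6d0

zpool status tank   # shows the three mirror vdevs
```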
Re: [zfs-discuss] Status of ZFS boot for sparc?
On Mar 26, 2008, at 18:45, Vincent Fox wrote: > *sigh* > > Last I heard it was going to be build 86. I saw build 85 come out > and thought "GREAT only a couple more weeks!" > > Oh well.. After a little while no one remembers if a product was late or on time, but everyone remembers if it
Re: [zfs-discuss] Status of ZFS boot for sparc?
David Magda wrote: > On Mar 26, 2008, at 18:45, Vincent Fox wrote: > > >> *sigh* >> >> Last I heard it was going to be build 86. I saw build 85 come out >> and thought "GREAT only a couple more weeks!" >> >> Oh well.. >> > > After a little while no one remembers if a product was late
[zfs-discuss] Periodic flush
My application processes thousands of files sequentially, reading input files, and outputting new files. I am using Solaris 10U4. While running the application in a verbose mode, I see that it runs very fast but pauses about every 7 seconds for a second or two. This is while reading 50MB/seco
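The regular pauses are consistent with ZFS's periodic transaction-group (txg) sync, which flushes buffered writes to disk every few seconds. One way to check whether the stalls line up with write bursts (the pool name 'tank' is assumed):

```shell
# per-second I/O statistics; a burst of writes every few seconds,
# coinciding with the application pauses, points at txg syncs
zpool iostat -v tank 1
```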
[zfs-discuss] Mount order of ZFS filesystems vs. other filesystems?
I seem to be having a problem mounting the filesystems on my machine, and I suspect it's due to the order of processing of /etc/vfstab vs. ZFS mount properties. I have a UFS /export, then I have a ZFS that mounts on /export/OSImages. In that ZFS I have a couple of directories with many .ISO file
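A common workaround for ordering problems between /etc/vfstab and ZFS automounts is to make the ZFS filesystem a legacy mount, so that vfstab controls when it is mounted (the dataset name pool/OSImages is assumed):

```shell
# stop ZFS from automounting the dataset
zfs set mountpoint=legacy pool/OSImages

# then mount it via /etc/vfstab, after the UFS /export entry, e.g.:
#   pool/OSImages  -  /export/OSImages  zfs  -  yes  -
```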
Re: [zfs-discuss] ZFS performance lower than expected
> The disks in the SAN servers were indeed striped together with Linux LVM > and exported as a single volume to ZFS. That is really going to hurt. In general, you're much better off giving ZFS access to all the individual LUNs. The intermediate LVM layer kills the concurrency that's native to ZF
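The suggestion amounts to skipping the LVM stripe entirely and listing the raw LUNs at pool creation (device names are illustrative):

```shell
# each SAN LUN becomes its own top-level vdev, so ZFS can issue
# concurrent I/O to all of them instead of to a single LVM volume
zpool create tank c3t0d0 c3t1d0 c3t2d0 c3t3d0
```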
Re: [zfs-discuss] Periodic flush
Bob Friesenhahn wrote: > My application processes thousands of files sequentially, reading > input files, and outputting new files. I am using Solaris 10U4. > While running the application in a verbose mode, I see that it runs > very fast but pauses about every 7 seconds for a second or two.