On 23/01/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Can you pick another name for this please because that name has already
been suggested for zfs(1) where the argument is a directory in an
existing ZFS file system and the result is that the directory becomes a
new ZFS file system while reta
On 25/01/07, Adam Leventhal <[EMAIL PROTECTED]> wrote:
On Wed, Jan 24, 2007 at 08:52:47PM +, Dick Davies wrote:
> that's an excellent feature addition, look forward to it.
> Will it be accompanied by a 'zfs join'?
Out of curiosity, what will you (or anyone else) u
On 25/01/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
The other point is, how many other volume management systems allow you to remove
disks? I bet if the answer is not zero, it's not large. ;)
Even Linux LVM can do this (with pvmove) - slow, but you can do it online.
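For reference, a minimal sketch of that LVM sequence (device and VG names are made up):
  pvmove /dev/sdb1          # migrate extents off the outgoing disk, online
  vgreduce myvg /dev/sdb1   # then drop it from the volume group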
Have a look at:
http://blogs.sun.com/ahl/entry/a_little_zfs_hack
On 27/01/07, roland <[EMAIL PROTECTED]> wrote:
is it planned to add some other compression algorithm to zfs ?
lzjb is quite good and performs especially well, but I'd like to have
better compression (bzip2?) - no matter
OSX *loves* NFS - it's a lot faster than Samba - but
you need a bit of extra work.
You need a user on the other end with the right uid and gid
(assuming you're using NFSv3 - you probably are).
Have a look at :
http://number9.hellooperator.net/articles/2007/01/12/zfs-for-linux-and-osx-and-window
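The gist, as a rough sketch (uid/gid and names are made up):
  # on the Mac, note your numeric uid/gid
  id
  # on the Solaris end, create a matching account and share the filesystem
  groupadd -g 501 macusers
  useradd -u 501 -g 501 dick
  zfs set sharenfs=on tank/home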
On 12/03/07, Darren Dunham <[EMAIL PROTECTED]> wrote:
> On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> > On March 11, 2007 6:05:13 PM + Tim Foster <[EMAIL PROTECTED]> wrote:
> > >* ability to add disks to mirror the root filesystem at any time,
> > > should they become avai
I don't think Solaris dom0 does Pacifica (AMD-V) yet.
That would rule out Windows for now.
You can run CentOS zones on SXCR.
That just leaves FreeBSD (which hasn't got fantastic Xen support either,
despite Kip Macy's excellent work).
Unless you've got an app that needs that, zones sound like a much sa
Just saw a message on xen-discuss that HVM is in the next version (b60-ish).
On 15/03/07, Dick Davies <[EMAIL PROTECTED]> wrote:
I don't think Solaris dom0 does Pacifica (AMD-V) yet.
That would rule out Windows for now.
You can run CentOS zones on SXCR.
That just leaves FreeBSD (which
On 13/04/07, Toby Thain <[EMAIL PROTECTED]> wrote:
Those who promulgate the tag for whatever motive - often agencies of
Microsoft - have all foundered on the simple fact that the GPL
applies ONLY to MY code as licensor (*and modifications thereto*); it
has absolutely nothing to say about what yo
On 13/04/07, Lori Alt <[EMAIL PROTECTED]> wrote:
sparc support is in the works. We're waiting on some other development
work going on right now in the area of sparc booting in general
(not specific to zfs booting, although the zfs boot loader
is part of that project). I can't give you a date r
On 17/04/07, Erik Trimble <[EMAIL PROTECTED]> wrote:
And, frankly, I can think of several very good reasons why Sun would NOT
want to release a ZFS under the GPL
Not to mention the knock-on effects on those already using ZFS (Apple, BSD)
who would be adversely affected by a GPL license.
On 17/04/07, Rayson Ho <[EMAIL PROTECTED]> wrote:
On 4/17/07, Rich Teer <[EMAIL PROTECTED]> wrote:
> Same here. I think anyone who dismisses ZFS as being inappropriate for
> desktop use ("who needs access to Petabytes of space in their desktop
> machine?!") doesn't get it.
Well, for many of tho
Hi Malachi
Tim's SMF bits work well (and also support remote backups (via send/recv)).
I use something like the process laid out at the bottom of:
http://blogs.sun.com/mmusante/entry/rolling_snapshots_made_easy
because it's dirt-simple and easily understandable.
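The core of it is just rename-and-replace, something like (dataset name made up):
  zfs destroy tank/home@hourly.3
  zfs rename tank/home@hourly.2 tank/home@hourly.3
  zfs rename tank/home@hourly.1 tank/home@hourly.2
  zfs rename tank/home@hourly.0 tank/home@hourly.1
  zfs snapshot tank/home@hourly.0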
On 10/05/07, Malachi de Ælfw
Take off every ZIL!
http://number9.hellooperator.net/articles/2007/02/12/zil-communication
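If I remember right, that post covers the old zil_disable tunable - strictly for
scratch data, since it drops synchronous write guarantees:
  # /etc/system
  set zfs:zil_disable = 1
followed by a reboot.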
On 22/05/07, Albert Chin
<[EMAIL PROTECTED]> wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
> On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
> > But still, how
On 24/05/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
I don't know about FreeBSD PORTS, but NetBSD's ports system works very
well on solaris. The only thing I didn't like about it is it considers
gcc a dependency to certain things, so even though I have Studio 11
installed, it would insist on
On 08/06/07, BVK <[EMAIL PROTECTED]> wrote:
On 6/8/07, Toby Thain <[EMAIL PROTECTED]> wrote:
>
> When should we expect Solaris kernel under OS X? 10.6? 10.7? :-)
>
I think it's quite possible. I believe very soon they will ditch their
Mach-based (?) BSD and switch to Solaris.
I think that's ex
I used a zpool on a usb key today to get some core files off a non-networked
Thumper running S10U4 beta.
Plugging the stick into my SXCE b61 x86 machine worked fine; I just had to
'zpool import sticky' and it worked ok.
But when we attach the drive to a blade 100 (running s10u3), it sees the
poo
Thanks to everyone for the sanity check - I think
it's a platform issue, but not an endian one.
The stick was originally DOS-formatted, and the zpool was built on the first
fdisk partition. So Sparcs aren't seeing it, but the x86/x64 boxes are.
I've found it's fairly easy to trim down a 'core' install: install to a
temporary UFS root, do the ufs -> zfs thing, and then re-use the old UFS
slice as swap.
Obviously you need a separate /boot slice in this setup.
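The swap step at the end is just (device name made up):
  swap -a /dev/dsk/c0d0s0
plus a matching vfstab line so it survives reboot:
  /dev/dsk/c0d0s0  -  -  swap  -  no  -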
On 03/07/07, Douglas Atique <[EMAIL PROTECTED]> wrote:
> I'm afraid the S
On 18/08/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> Blake wrote:
> > Now I'm curious.
> >
> > I was recursively removing snapshots that had been generated recursively
> > with the '-r' option. I'm running snv65 - is this a recent feature?
>
> No; it was integrated in snv_43, and is in s10u3.
I've got 12GB or so of db+web in a zone on a ZFS filesystem on a mirrored zpool.
Noticed during some performance testing today that it's I/O bound but using
hardly any CPU, so I thought turning on compression would be a quick win.
I know I'll have to copy files for existing data to be compressed, s
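(The switch itself is one command - it just only applies to blocks written
after you set it, hence the copying. Dataset name made up:)
  zfs set compression=on tank/zones/web
  zfs get compressratio tank/zones/web   # ratio climbs as files are rewritten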
On 11/09/2007, Mike DeMarco <[EMAIL PROTECTED]> wrote:
> > I've got 12Gb or so of db+web in a zone on a ZFS
> > filesystem on a mirrored zpool.
> > Noticed during some performance testing today that
> > it's I/O bound but
> > using hardly
> > any CPU, so I thought turning on compression would be
> >
Bah, wrong list.
A timeline would be really nice for when this is likely to be sorted out -
higher priority than ZFS root, IMO.
-- Forwarded message --
From: Dick Davies <[EMAIL PROTECTED]>
Date: 22 Sep 2007 23:21
Subject: Re: [zfs-discuss] "zoneadm clone" doe
On 26/09/2007, Christopher <[EMAIL PROTECTED]> wrote:
> I'm about to build a fileserver and I think I'm gonna use OpenSolaris and ZFS.
>
> I've got a 40GB PATA disk which will be the OS disk,
Would be nice to remove that as a SPOF.
I know ZFS likes whole disks, but I wonder how much would perform
On 04/10/2007, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
> Client A
> - import pool make couple-o-changes
>
> Client B
> - import pool -f (heh)
> Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
> Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice] assertion
> failed: dmu_r
On 30/09/2007, William Papolis <[EMAIL PROTECTED]> wrote:
> Henk,
>
> By upgrading do you mean, rebooting and installing Open Solaris from DVD or
> Network?
>
> Like, no Patch Manager install some quick patches and updates and a quick
> reboot, right?
You can live upgrade and then do a quick reb
On 30/09/2007, William Papolis <[EMAIL PROTECTED]> wrote:
> OK,
>
> I guess using this ...
>
> set md:mirrored_root_flag=1
>
> for Solaris Volume Manager (SVM) is not supported and could cause problems.
>
> I guess it's back to my first idea ...
>
> With 2 disks, setup three SDR's (State Datab
I had some trouble installing a zone on ZFS with S10u4 (bug in the postgres
packages) that went away when I used a ZVOL-backed UFS filesystem for the
zonepath.
I thought I'd push on with the experiment (in the hope Live Upgrade
would be able to upgrade such a zone).
It's a bit unwieldy, but every
On 08/10/2007, Thomas Liesner <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] # ./filebench
> filebench> load fileserver
> filebench> run 60
> IO Summary: 8088 ops 8017.4 ops/s, (997/982 r/w) 155.6mb/s,508us
> cpu/op, 0.2ms
> 12746: 65.266: Shutting down processes
> filebench>
>
>
Hi Thomas
the point I was making was that you'll see low performance figures
with 100 concurrent threads. If you set nthreads to something closer
to your expected load, you'll get a more accurate figure.
Also, there's a new filebench out now, see
http://blogs.sun.com/erickustarz/entry/filebench
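In other words something like this, with nthreads set to taste:
  filebench> load fileserver
  filebench> set $nthreads=4
  filebench> run 60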
No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots).
I have a workaround I'm about to blog, the gist of which is
make the 'template' zone on zfs
boot, configure, etc.
zonecfg -z template detach
zfs snapshot tank/zones/[EMAIL PROTECTED]
zfs clone tank/zones/[EMAIL PROTECTED]
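then (sketch - zone name and paths made up, details in the blog post below):
  zonecfg -z newzone 'create -a /tank/zones/newzone'
  zoneadm -z newzone attach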
On 11/10/2007, Dick Davies <[EMAIL PROTECTED]> wrote:
> No, they aren't (i.e. zoneadm clone on S10u4 doesn't use zfs snapshots).
>
> I have a workaround I'm about to blog
Here it is - hopefully it'll be of some use:
http://number9.hellooperator.net/articles/2007/10/1
On 16/10/2007, Renato Ferreira de Castro - Sun Microsystems - Gland Switzerland wrote:
> What he tried to do:
> ---
> - re-mount and umount manually, then try to destroy.
> # mount -F zfs zpool_dokeos1/dokeos1/home /mnt
> # umount /mnt
> # zfs destroy dokeos1_pool/dokeos1/home
> cannot
On 16/10/2007, Michael Goff <[EMAIL PROTECTED]> wrote:
> Hi,
>
> When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script
> which does:
>
> zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
> zfs create tank/data
> zfs set mountpoint=/data tank/data
> zpool export -f tank
Try without
On 29/10/2007, Tek Bahadur Limbu <[EMAIL PROTECTED]> wrote:
> I created a ZFS file system like the following with /mypool/cache being
> the partition for the Squid cache:
>
> 18:51:27 [EMAIL PROTECTED]:~$ zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> mypool 478
On 04/11/2007, Ima <[EMAIL PROTECTED]> wrote:
> I'm setting up a home file server, which will mainly just consist of a ZFS
> pool and access with SAMBA. I'm not sure if I should use SXDE for this, or
> Sol 10u4. Does SXDE offer any ZFS improvements over 10u4 for this purpose?
I'd be inclined
Does anybody know if the upcoming CIFS integration in b77 will
provide a mechanism for users to see snapshots (like .zfs/snapshot/
does for NFS)?
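(Over NFS, clients can already reach them via the hidden .zfs directory at the
root of each filesystem, e.g. - path made up:)
  ls /export/home/.zfs/snapshot/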
Just a +1 - I use an fdisk partition for my zpool and it works
fine (plan was to dual-boot with freebsd and this makes the vdevs slightly
easier to address from both OSes).
zpool doesn't care what the partition ID is, just give it
zpool create gene c0d0pN
On Dec 5, 2007 9:54 PM, Brian Lionberger <[EMAIL PROTECTED]> wrote:
> I create two zfs's on one pool of four disks with two mirrors, such as...
> zpool create tank mirror disk1 disk2 mirror disk3 disk4
>
> zfs create tank/fs1
> zfs create tank/fs2
>
> Are fs1 and fs2 striped across all four di
On Dec 6, 2007 1:13 AM, Bakul Shah <[EMAIL PROTECTED]> wrote:
> Note that I don't wish to argue for/against zfs/billtodd but
> the comment above about "no *real* opensource software
> alternative zfs automating checksumming and simple
> snapshotting" caught my eye.
>
> There is an open source alte
Have you ever used a Mac? HFS has had these features for years.
On Mon, Mar 17, 2008 at 6:33 PM, Bryan Wagoner <[EMAIL PROTECTED]> wrote:
> Actually, having a database on top of an FS is really useful. It's a Content
> Addressable Storage system. One of the problems home users have is that
>
Is 'zpool attach' enough for a root pool?
I mean, does it install GRUB bootblocks on the disk?
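My understanding (on x86 at least) is that the bootblock step is manual after
the attach, along the lines of (devices made up):
  zpool attach rpool c0t0d0s0 c0t1d0s0
  # once the resilver finishes:
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0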
On Wed, Jul 2, 2008 at 1:10 PM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Tommaso,
>
> Wednesday, July 2, 2008, 1:04:06 PM, you wrote:
> the root filesystem of my thumper is a ZFS with a si
Hi, I was just wondering if ZFS root is likely to be an install
option any time soon, either in SXCR or the June Solaris update?
The mechanism at Tabriz's blog seem to work well, but it'd be nice
to get it out of the box.
On 15/05/06, Tabriz Leman <[EMAIL PROTECTED]> wrote:
For those who haven't already gone through the painful manual process of
setting up a ZFS Root, Tim Foster has put together a script. It is
available on his blog (http://blogs.sun.com/roller/page/timf/20060425).
I haven't tried it out, but am
On 11/06/06, Gregory Shaw <[EMAIL PROTECTED]> wrote:
Pardon me if this scenario has been discussed already, but I haven't
seen anything as yet.
I'd like to request a 'zpool evacuate <pool> <device>' command.
'zpool evacuate' would migrate the data from a disk device to other
disks in the pool.
Here's the
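As proposed, usage might look something like this (purely hypothetical - no
such command exists):
  zpool evacuate tank c1t2d0   # migrate this vdev's data to the rest of the pool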
I was wondering if anyone could recommend hardware
for a ZFS-based NAS for home use.
The 'zfs on 32-bit' thread has scared me off a mini-itx fanless
setup, so I'm looking at sparc or opteron. Ideally it would:
a) run quiet (blade 100/150 is ok, x4100 ain't :) )
b) take advantage of cheap disks
Just wondered if there'd been any progress in this area?
Correct me if I'm wrong, but as it stands, there's no way
to remove a device you accidentally 'zpool add'ed without
destroying the pool.
On 12/06/06, Gregory Shaw <[EMAIL PROTECTED]> wrote:
Yes, if zpool remove works like you describe, it
On 01/07/06, mkontakt <[EMAIL PROTECTED]> wrote:
I need help. I have added a borrowed disk to a pool (zpool add
pool_name c1d0s2) and I thought that (disks are of different size) it
would be the fastest way to copy data from one disk to the other one
(unfortunately I had not read the help for zpo
Notice there's a product announcement on Tuesday:
http://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/06-30-2006/0004390495&EDATE=
and Jonathan mentioned Thumper was due for release at the end of june:
http://blogs.sun.com/roller/page/jonathan?entry=phase_2
With ZFS offici
Well, glue a beard on me and call me Nostradamus:
http://blogs.sun.com/roller/page/jonathan?entry=the_rise_of_the_general
On 03/07/06, Dick Davies <[EMAIL PROTECTED]> wrote:
With ZFS officially supported now, I'd say The Stars Are Right
On 13/07/06, Yacov Ben-Moshe <[EMAIL PROTECTED]> wrote:
How can I remove a device or a partition from a pool.
NOTE: The devices are not mirrored or raidz
Then you can't - there isn't a 'zfs remove' command yet.
On 15/07/06, Torrey McMahon <[EMAIL PROTECTED]> wrote:
eric kustarz wrote:
> martin wrote:
> To monitor activity, use 'zpool iostat 1' to monitor just zfs
> datasets, or iostat(1M) to include non-zfs devices.
Perhaps Martin was asking for something a little more robust. Something
like SNMP tr
On 15/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:
Brian Hechinger wrote:
> On Fri, Jul 28, 2006 at 02:26:24PM -0600, Lori Alt wrote:
>
>>>What about Express?
>>
>>Probably not any time soon. If it makes U4,
>>I think that would make it available in Express late
>>this year.
>
>
> Is there a speci
On 16/08/06, Joerg Schilling <[EMAIL PROTECTED]> wrote:
"Dick Davies" <[EMAIL PROTECTED]> wrote:
> As an aside, is there a general method to generate bootable
> opensolaris DVDs? The only way I know of getting opensolaris on
> is installing sxcr and then BF
On 17/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> That's excellent news Lori, thanks to everyone who's working
> on this. Are you planning to use a single pool,
> or an 'os pool/application pool' split?
Thus I think of the most import
On 18/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:
No, zfs boot will be supported on both x86 and sparc. Sparc's
OBP, and various x86 BIOS's both have restrictions on the devices
that can be accessed at boot time, so we need to limit the
devices in a root pool on both architectures.
Gotcha. I w
On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote:
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> Yes, ZFS uses this command very frequently. However, it only does this
> if the whole disk is under the control of ZFS, I believe; so a
> workaround could be to use slices rather th
This is fantastic work!
How long have you been at it?
You seem a lot further on than the ZFS-Fuse project.
On 22/08/06, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
There is a lot to do, but I'm making good progress,
On 30/08/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
'zfs send' is *incredibly* faster than rsync.
That's interesting. We had considered it as a replacement for a
certain task (publishing a master docroot to multiple webservers)
but a quick test with ~500MB of data showed the zfs send/recv
t
On 30/08/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Yes. The architectural benefits of 'zfs send' over rsync only apply to
sending incremental changes. When sending a full backup, both schemes
have to traverse all the metadata and send all the data, so they *should*
be about the same speed.
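i.e. the architectural win shows up with incrementals (names made up):
  zfs snapshot tank/docroot@tue
  # only blocks changed since @mon cross the wire:
  zfs send -i tank/docroot@mon tank/docroot@tue | ssh web1 zfs recv tank/docroot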
Just did my first dataset delegation, so be gentle :)
Was initially terrified to see that changes to the mountpoint in the non-global
zone were visible in the global zone.
Then I realised it wasn't actually mounted (except in the delegated zone).
But I couldn't see any obvious indication that th
On 06/09/06, Kenneth Mikelinich <[EMAIL PROTECTED]> wrote:
Are you suggesting that I not get too granular with datasets and use a
higher level one versus several?
I think what he's saying is you should only have to
delegate one dataset (telecom/oracle/production, for example),
and all the 'chi
I too was confused with the zfs list readout.
On Wed, 2006-09-06 at 07:37, Dick Davies wrote:
> Just did my first dataset delegation, so be gentle :)
>
> Was initially terrified to see that changes to the mountpoint in the
non-global
> zone were visible in the global zone.
>
A colleague just asked if zfs delegation worked with zvols too.
Thought I'd give it a go and got myself in a mess
(tank/linkfixer is the delegated dataset):
[EMAIL PROTECTED] / # zfs create -V 500M tank/linkfixer/foo
cannot create device links for 'tank/linkfixer/foo': permission denied
cannot cr
On 06/09/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Wed, Sep 06, 2006 at 03:53:52PM +0100, Dick Davies wrote:
> That's a bit nicer, thanks.
> Still not that clear which zone they belong to though - would
> it be an idea to add a 'zone' property be a stri
On 12/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Here is a proposal for a new 'copies' property which would allow
different levels of replication for different filesystems.
Your comments are appreciated!
Flexibility is always nice, but this seems to greatly complicate things,
both techni
On 12/09/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> The only real use I'd see would be for redundant copies
> on a single disk, but then why wouldn't I just add a disk?
Some systems have physical space for only a single drive - think most
laptops!
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
One of the great things about zfs, is that it protects not just against
mechanical failure, but against silent data corruption. Having this available
to laptop owners seems to me to be important to making zfs even more attractive.
I'm not arguing
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
...you split one disk in two. You then have effectively two partitions which
you can then create a new mirrored zpool with. Then everything is mirrored.
Correct?
Everything in the filesystems in the pool, yes.
With ditto blocks, you can selecti
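(That selective protection is the point of the proposal: e.g.
  zfs set copies=2 tank/documents   # every block stored twice, even on one disk
would protect just that filesystem.)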
On 12/09/06, Celso <[EMAIL PROTECTED]> wrote:
I think it has already been said that in many peoples experience, when a disk
fails, it completely fails. Especially on laptops. Of course ditto blocks
wouldn't help you in this situation either!
Exactly.
I still think that silent data corrupti
On 13/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> For the sake of argument, let's assume:
>
> 1. disk is expensive
> 2. someone is keeping valuable files on a non-redundant zpool
> 3. they can't scrape enough vdevs to make a redundant
On 13/09/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> But they raise a lot of administrative issues
Sure, especially if you choose to change the copies property on an
existing filesystem. However, if you only set it at filesystem creation
time (which is the re
Since we were just talking about resilience on laptops,
I wondered if there had been any progress in sorting
out some of the glitches that were involved in:
http://www.opensolaris.org/jive/thread.jspa?messageID=25144
?
On 14/09/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
Hi folks,
I'm in the annoying position of having to replace my rootdisk
(since it's a [EMAIL PROTECTED]@$! maxtor and dying). I'm currently running
with zfsroot after following Tabriz' and TimF's procedure to
enable that. However, I'd li
That looks a bit serious - did you say both disks are on
the same SATA controller?
On 19/09/06, Ian Collins <[EMAIL PROTECTED]> wrote:
# zpool status -v
pool: tank
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be af
On 22/09/06, Alf <[EMAIL PROTECTED]> wrote:
1) It's not possible anymore within a pool to create a file system with a
specific size. If I have 2 file systems I can't decide to give for
example 10g to one and 20g to the other one unless I set a reservation
for them. Also I tried to manually create
Would 'zfs snapshot -r poolname' achieve what you want?
On 29/09/06, Patrick <[EMAIL PROTECTED]> wrote:
Hi,
Is it possible to create a snapshot, for ZFS send purposes, of an entire pool ?
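e.g. (pool name made up):
  zfs snapshot -r tank@backup   # one snapshot per dataset, all named @backup
  zfs send tank/home@backup | ssh host zfs recv backup/home
(each filesystem still needs its own send)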
Need a bit of help salvaging a perfectly working ZFS
mirror that I've managed to render unbootable.
I've had a ZFS root (x86, mirrored zpool, SXCR b46) working fine for months.
I very foolishly decided to mirror /grub using SVM
(so I could boot easily if a disk died). Shrank swap partitions
to m
On 05/10/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> I very foolishly decided to mirror /grub using SVM
> (so I could boot easily if a disk died). Shrank swap partitions
> to make somewhere to keep the SVM database (2 copies on each
> disk).
D
On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
Hi There,
You might want to check the HCL at http://www.sun.com/bigadmin/hcl to find out
which hardware is supported by Solaris 10.
Greetings,
Peter
I tried that myself - there really isn't very much on there.
I can't believe Solaris r
On 11/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Dick Davies wrote:
> On 11/10/06, Peter van Gemert <[EMAIL PROTECTED]> wrote:
>> You might want to check the HCL at http://www.sun.com/bigadmin/hcl to
>> find out which hardware is supported by Solaris 1
On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
up. Everything else (mountpoints, filesystems, etc) is stored in the
pool itself.
Does anyone know of any plans or strategies to remove this dependency?
--
Rasputin
On 12/10/06, Michael Schuster <[EMAIL PROTECTED]> wrote:
James C. McPherson wrote:
> Dick Davies wrote:
>> On 12/10/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
>>
>>> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
>>> u
On 12/10/06, Ceri Davies <[EMAIL PROTECTED]> wrote:
On Wed, Oct 11, 2006 at 11:49:48PM -0700, Matthew Ahrens wrote:
> FYI, /etc/zfs/zpool.cache just tells us what pools to open when you boot
> up. Everything else (mountpoints, filesystems, etc) is stored in the
> pool itself.
What happens if
On 12/10/06, Michael Schuster <[EMAIL PROTECTED]> wrote:
Ceri Davies wrote:
> On Thu, Oct 12, 2006 at 02:06:15PM +0100, Dick Davies wrote:
>> I'd expect:
>>
>> zpool import -f
>>
>> (see the manpage)
>> to probe /dev/dsk/ and rebuild the zpoo
On 14/10/06, Darren Dunham <[EMAIL PROTECTED]> wrote:
> So the warnings I've heard no longer apply?
> If so, that's great. Thanks for all replies.
Umm, which warnings? The "don't import a pool on two hosts at once"
definitely still applies.
Sure :)
I meant the reason I'd heard
( at http://s
I started sharing out zfs filesystems via NFS last week using
sharenfs=on. That seems to work fine until I reboot. Turned
out the NFS server wasn't enabled - I had to enable
nfs/server, nfs/lockmgr and nfs/status manually. This is a stock
SXCR b49 (ZFS root) install - don't think I'd changed anyth
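For anyone hitting the same thing, a one-off fix is:
  svcadm enable -r svc:/network/nfs/server   # -r pulls in lockmgr/status etc.
  svcs nfs/server nfs/lockmgr nfs/status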
On 24/10/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Tue, Oct 24, 2006 at 08:01:21PM +0100, Dick Davies wrote:
> Shouldn't a ZFS share be permanently enabling NFS?
# svcprop -p application/auto_enable nfs/server
true
This property indicates that reg
On 27/10/06, Christopher Scott <[EMAIL PROTECTED]> wrote:
You can manually set up a ZFS root environment but it requires a UFS
partition to boot off of.
See: http://blogs.sun.com/tabriz/entry/are_you_ready_to_rumble
There's a slightly improved procedure at
http://solaristhings.blogspot.com/20
On 28/10/06, Mike Gerdts <[EMAIL PROTECTED]> wrote:
On 10/28/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> http://solaristhings.blogspot.com/2006/06/zfs-root-on-solaris-part-2.html
The original question was about using ZFS root on a T1000. /grub
looks suspiciously incompatible
On 01/11/06, Adam Leventhal <[EMAIL PROTECTED]> wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC. Comments and suggestions are welcome.
Adam
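As the proposal has it, sharing a zvol becomes a single property (names made up):
  zfs create -V 10g tank/target0
  zfs set shareiscsi=on tank/target0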
Am I right in thinking we're effect
On 01/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
And we'll be able to use sparse zvols
for this too (can't think why we couldn't, but it'd be dead handy)?
Thinking about this, we won't be able to (without some changes) -
I think a target is zero-filled befo
On 01/11/06, Cyril Plisko <[EMAIL PROTECTED]> wrote:
On 11/1/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> On 01/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
> > And we'll be able to use sparse zvols
> > for this too (can't think why we couldn't,
On 01/11/06, Rick McNeal <[EMAIL PROTECTED]> wrote:
I too must be missing something. I can't imagine why it would take 5
minutes to online a target. A ZVOL should automatically be brought
online since now initialization is required.
s/now/no/ ?
Thanks for the explanation. The '5 minute online
On 14/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>Actually, we have considered this. On both SPARC and x86, there will be
>a way to specify the root file system (i.e., the bootable dataset) to be
>booted,
>at either the GRUB prompt (for x86) or the OBP prompt (for SPARC).
>If no root f
On 15/11/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>I suppose it depends how 'catastrophic' the failure is, but if it's
>very low level,
>booting another root probably won't help, and if it's too high level, how will
>you detect it (i.e. you've booted the kernel, but it is buggy).
If it
On 16/11/06, Peter Eriksson <[EMAIL PROTECTED]> wrote:
Is there some way to "dump" all information from a ZFS filesystem? I suppose I
*could* backup the raw disk devices that is used by the zpool but that'll eat up a lot of
tape space...
If you want to have another copy somewhere, use zfs se
Is there a difference between setting mountpoint=legacy and mountpoint=none?
mountpoint cannot be inherited
zoneadm: zone ganesh failed to verify
vera / # zfs set mountpoint=none tank/delegated/ganesh
vera / # zoneadm -z ganesh boot
vera / #
On 28/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:
Is there a difference between setting mountpoint=legacy and mountpoin
On 28/11/06, Terence Patrick Donoghue <[EMAIL PROTECTED]> wrote:
Is there a difference - Yep,
'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
options
whereas
'none' tells ZFS not to mount the ZFS filesystem at all. Then you would
need to manually mount the ZFS using 'zfs se
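Side by side, assuming a dataset tank/fs:
  zfs set mountpoint=legacy tank/fs   # ZFS leaves mounting to vfstab/mount(1M)
  mount -F zfs tank/fs /mnt
  zfs set mountpoint=none tank/fs     # never mounted at all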