Hardware Platform: Sun Fire T2000
SunOS webz2.unige.ch 5.10 Generic_120011-14 sun4v sparc SUNW,Sun-Fire-T200
OBP 4.26.1 2007/04/02 16:26
SUNWzfskr VERSION: 11.10.0,REV=2006.05.18.02.15
SUNWzfsr VERSION: 11.10.0,REV=2006.05.18.02.15
SUNWzfsu VERSION: 11.10.0,REV=2006.05.18.02.15
/net/kromo.sw
Hi,
When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script which
does:
zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
zfs create tank/data
zfs set mountpoint=/data tank/data
zpool export -f tank
When jumpstart finishes and the node reboots, the pool is not imported
automatically.
On 16/10/2007, Renato Ferreira de Castro - Sun Microsystems - Gland Switzerland wrote:
> What he tried to do:
> ---
> - re-mount and umount manually, then try to destroy.
> # mount -F zfs zpool_dokeos1/dokeos1/home /mnt
> # umount /mnt
> # zfs destroy dokeos1_pool/dokeos1/home
> cannot
On 16/10/2007, Michael Goff <[EMAIL PROTECTED]> wrote:
> Hi,
>
> When jumpstarting s10x_u4_fcs onto a machine, I have a postinstall script
> which does:
>
> zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
> zfs create tank/data
> zfs set mountpoint=/data tank/data
> zpool export -f tank
Try without the export.
Hi Mike,
After rebooting a UNIX machine (HP-UX/Linux/Solaris), it will mount (or
import) only the file systems which were mounted (or imported) before the reboot.
In your case the ZFS file system tank/data is exported (or unmounted) before the
reboot. That's the reason why the zpool is not imported.
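If the pool is only needed on the installed system, it can also be brought
back by hand after the first boot; a minimal illustration, using the pool name
from Michael's script:

# zpool import tank
# zfs list -r tank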
Michael,
If you don't call "zpool export -f tank" it should work.
However, it would be necessary to understand why you are using the above
command after creation of the zpool.
Can you avoid exporting after the creation ?
Regards,
Sanjeev
Michael Goff wrote:
> Hi,
>
> When jumpstarting s10x_u4_fcs
Hello Sanjeev,
Tuesday, October 16, 2007, 10:14:01 AM, you wrote:
SB> Michael,
SB> If you don't call "zpool export -f tank" it should work.
SB> However, it would be necessary to understand why you are using the above
SB> command after creation of the zpool.
SB> Can you avoid exporting after the creation ?
Great, thanks Robert. That's what I was looking for. I was thinking that
I would have to transfer the state somehow from the temporary jumpstart
environment to /a so that it would be persistent. I'll test it out tomorrow.
Sanjeev, when I did not have the zpool export, it still did not import
automatically.
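A minimal sketch of that state transfer, assuming the postinstall script runs
in the jumpstart miniroot with the target image mounted at /a (the pool is left
imported, and the cache file Solaris reads at boot is copied into the image):

zpool create tank c1d0s7 c2d0s7 c3d0s7 c4d0s7
zfs create tank/data
zfs set mountpoint=/data tank/data
# copy the pool state into the installed image instead of exporting
cp /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache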
Thanks Robert ! I missed that part.
-- Sanjeev.
Michael Goff wrote:
> Great, thanks Robert. That's what I was looking for. I was thinking
> that I would have to transfer the state somehow from the temporary
> jumpstart environment to /a so that it would be persistent. I'll test
> it out tomorrow.
>
> Would the bootloader have issues here? On x86 I would
> imagine that you
> would have to reload grub, would a similar thing need
> to be done on SPARC?
>
Yeah, that's also what I'm thinking; apparently a ZFS mirror doesn't take care
of the boot sector. So as of now, estimating size of a zfs root
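For the record, on SPARC the ZFS boot block can be written to the second half
of a root mirror with installboot (installgrub is the x86 counterpart); a
sketch, with the disk name assumed:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0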
Hi.
I have created some zfs-partitions. First I create the
home/user-partitions. Beneath that I create additional partitions.
Then I do a chown -R for that user. These partitions are shared
using sharenfs=on. The owner and group id is 1009.
These partitions are visible as the user assig
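A sketch of the layout described above, with the pool name, user name and
paths assumed (uid/gid 1009 taken from the post; sharenfs is inherited by the
child datasets):

# zfs create tank/home
# zfs create tank/home/user1
# zfs create tank/home/user1/mail
# zfs set sharenfs=on tank/home/user1
# chown -R 1009:1009 /tank/home/user1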
and what about compression?
:D
For anyone who is interested, the solution to this issue was to set the zfs
mountpoint of the dataset being shared to legacy. This enables the proper
sharing of the dataset to a client and the ability to open a terminal window in
the zone sharing out the dataset. Does anyone know why this fixed the problem?
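A sketch of that workaround, with the dataset and zone names assumed: the
dataset is switched to a legacy mountpoint and handed to the zone as an fs
resource:

# zfs set mountpoint=legacy tank/shared
# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/shared
zonecfg:myzone:fs> set special=tank/shared
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit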
you mean c9n ? ;)
does anyone actually *use* compression ? i'd like to see a poll on how many
people are using (or would use) compression on production systems that are
larger than your little department catch-all dumping ground server. i mean,
unless you had some NDMP interface directly to ZFS
We use compression on almost all of our zpools. We see very little if
any I/O slowdown because of this, and you get free disk space. In fact,
I believe read I/O gets a boost from this, since decompression is cheap
compared to normal disk I/O.
Jon
Dave Johnson wrote:
you mean c9n ? ;)
does anyone actually *use* compression ?
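For anyone who wants to measure rather than guess: compression is a
per-dataset property, only affects newly written blocks, and the achieved
ratio can be read back afterwards (dataset name assumed):

# zfs set compression=on tank/data
# zfs get compression,compressratio tank/data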
On Oct 16, 2007, at 4:36 PM, Jonathan Loran wrote:
>
> We use compression on almost all of our zpools. We see very little
> if any I/O slowdown because of this, and you get free disk space.
> In fact, I believe read I/O gets a boost from this, since
> decompression is cheap compared to normal disk I/O.
Claus,
Is the mount using NFSv4? If so, there is likely a misguided
mapping of the user/groups between the client and server.
While not including BSD info, there is a little bit on
NFSv4 user/group mappings at this blog:
http://blogs.sun.com/nfsv4
Spencer
On Oct 16, 2007, at 2:11 PM, Claus Gu
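The quick check on Solaris is that both ends use the same NFSv4 mapping
domain; the file and service below are the stock Solaris ones:

# grep NFSMAPID_DOMAIN /etc/default/nfs   (must match on client and server)
# svcadm restart svc:/network/nfs/mapid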
On Fri, 12 Oct 2007, Matthew Ahrens wrote:
> You can use delegated administration ("zfs allow someone send pool/fs").
> This is in snv_69. RBAC is much more coarse-grained, but you could use
> it too.
Out of curiosity, what kind of things are going to be added via patches to
S10u4 vs things that
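An example of the delegation syntax quoted above, with the user and dataset
names assumed; the bare form prints the delegations back:

# zfs allow -u backupadmin snapshot,send tank/fs
# zfs allow tank/fs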
On Fri, 12 Oct 2007, Paul B. Henson wrote:
> I've read a number of threads and blog posts discussing zfs send/receive
> and its applicability is such an implementation, but I'm curious if
> anyone has actually done something like that in practice, and if so how
> well it worked.
So I didn't hear
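For context, the send/receive scheme the thread keeps referring to is a full
send followed by incrementals; host and dataset names assumed:

# zfs snapshot tank/fs@mon
# zfs send tank/fs@mon | ssh backuphost zfs receive -F backup/fs
# zfs snapshot tank/fs@tue
# zfs send -i @mon tank/fs@tue | ssh backuphost zfs receive backup/fs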
Paul B. Henson wrote:
> On Fri, 12 Oct 2007, Paul B. Henson wrote:
>
>> I've read a number of threads and blog posts discussing zfs send/receive
>> and its applicability is such an implementation, but I'm curious if
>> anyone has actually done something like that in practice, and if so how
>> well it worked.
Richard Elling wrote:
> Paul B. Henson wrote:
>> On Fri, 12 Oct 2007, Paul B. Henson wrote:
>>
>>> I've read a number of threads and blog posts discussing zfs send/receive
>>> and its applicability is such an implementation, but I'm curious if
>>> anyone has actually done something like that in practice, and if so how well it worked.