Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Casper . Dik

>Many thanks for answering my question.  Hopefully my noisy X4200
>will be installed in the data centre tomorrow (Thursday); I had
>a setback today while fighting with the Remote Console feature
>of ILOM 1.1.1 (i.e., it doesn't work).  :-(

Just ssh into it and use the serial console from within SSH.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: NFS share problem with mac os x client

2007-02-08 Thread Sergey
The setup below works fine for me.

macmini:~ jimb$ mount | grep jimb
ride:/xraid2/home/jimb on /private/var/automount/home/jimb (nosuid, automounted)

macmini:~ jimb$ nidump fstab / | grep jimb
ride:/xraid2/home/jimb /home/jimb nfs rw,nosuid,tcp 0 0

NFS server: Solaris 10 11/06 x86_64 + patches, NFSv3.
The client runs the latest Mac OS X release + patches.
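
For reference, the server-side export behind a setup like this can be done
entirely with ZFS properties. A minimal sketch, with illustrative paths and
share options:

server # zfs create xraid2/home/jimb
server # zfs set sharenfs=rw,nosuid xraid2/home/jimb
server # share

The final 'share' (with no arguments) just lists the active exports, so you
can confirm the filesystem is actually shared.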
 
 
This message posted from opensolaris.org


[zfs-discuss] Re: zfs crashing

2007-02-08 Thread Gino Ruopolo
Same problem here after some patching :(((
42 GB free in a 4.2 TB zpool.

We can't upgrade to U3 without planning for it.
Is there any way to solve the problem? Should we remove the latest patches?

Our uptime with ZFS is getting very low ... 

thanks
Gino
 
 


Re: [zfs-discuss] NFS share problem with mac os x client

2007-02-08 Thread Dick Davies

OS X *loves* NFS - it's a lot faster than Samba - but
it needs a bit of extra work.

You need a user on the other end with the right uid and gid
(assuming you're using NFSv3 - you probably are).


Have a look at :
http://number9.hellooperator.net/articles/2007/01/12/zfs-for-linux-and-osx-and-windows-and-bsd

(especially the 'create a user' bit).
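
A minimal sketch of that 'create a user' step on the Solaris side (the uid,
gid, and names are illustrative -- they must match what 'id' reports on the
Mac client):

server # groupadd -g 501 jimb
server # useradd -u 501 -g 501 -d /tank1/nfsshare/jimb -s /bin/false jimb
server # chown -R jimb:jimb /tank1/nfsshare/jimb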

On 07/02/07, Kevin Bortis <[EMAIL PROTECTED]> wrote:

Hello, I'm testing the beauty of ZFS right now. I have installed OpenSolaris on a 
spare server to test NFS exports. After creating tank1 with zpool and a 
sub-filesystem tank1/nfsshare with zfs, I set the option sharenfs=on on 
tank1/nfsshare.

With Mac OS X as the client I can mount the filesystem in Finder.app with 
nfs://server/tank1/nfsshare, but if I copy a file an error occurs. Finder says "The 
operation cannot be completed because you do not have sufficient privileges for some of 
the items."

Until now I have always shared filesystems with Samba, so I have almost no 
experience with NFS. Any ideas?

Kevin






--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


[zfs-discuss] Panic with "really out of space."

2007-02-08 Thread Roshan Perera
Hi All,

I have a problem with the global zone crashing, along with the rest of the zones, 
when a ZFS partition goes to 100% full, with the following funny message ("really 
out of space") :-) It didn't give the "lying" or "joking" out of space message 
though :-)

Feb  8 10:39:35 ss44bsdvgza01 ^Mpanic[cpu1]/thread=2a100a43cc0:
Feb  8 10:39:35 ss44bsdvgza01 unix: [ID 858701 kern.notice] really out of space

Feb  8 10:41:03 ss44bsdvgza01 savecore: [ID 570001 auth.error] reboot after 
panic:
 really out of space
Feb  8 10:41:03 ss44bsdvgza01 savecore: [ID 748169 auth.error] saving system 
crash
 dump in /var/crash/ss44bsdvzgza01/*.3


Checked on SunSolve and found that Update 3 might solve the problem. Is that 
the case? Is there a patch coming out soon for this? I don't think 124204-04 
fixes this problem. Correct me if I am wrong.

Thanks


Roshan




[zfs-discuss] ZFS multi-threading

2007-02-08 Thread Reuven Kaswin
With the CPU overhead imposed by ZFS's per-block checksums, the CPU was heavily 
loaded during a large sequential write test that I ran. Turning off checksums 
greatly reduced the CPU load. Obviously, this trades reliability for CPU 
cycles.
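
For anyone reproducing the comparison, the checksum setting is per dataset
(dataset name illustrative):

# zfs set checksum=off tank/seqtest   (the reduced-CPU, reduced-reliability case)
# zfs set checksum=on tank/seqtest    (back to the default)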

Would the logic behind ZFS take full advantage of a heavily multicored system, 
such as the Sun Niagara platform? Would it use all 32 concurrent threads for 
generating its checksums? Has anyone compared ZFS on a Sun Tx000 to ZFS on a 
2-4 thread x64 machine?
 
 


Re: [zfs-discuss] ZFS multi-threading

2007-02-08 Thread Casper . Dik

>With the CPU overhead imposed in checksum of blocks by ZFS, on a large
>sequential write test, the CPU was heavily loaded in a test that I ran.
>By turning off the checksum, the CPU load was greatly reduced.
>Obviously, this caused a tradeoff in reliability for CPU cycles.

What hardware platform and what was the I/O throughput at the peak
and what was the difference in throughput and CPU utilization between
both cases?

Casper


Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Jerry Jelinek

Ivan Buetler wrote:
Is this true for OpenSolaris? My experience: 

I was trying to upgrade from "SunOS 5.11 snv_28" to "SunOS 5.11 snv_54", where 
my NGZ (non-global zone) roots were set to a ZFS mount point like below:


NAME USED  AVAIL  REFER  MOUNTPOINT
zpool   93.8G  40.1G26K  /zpool
zpool/zones 3.50G  40.1G  1.68G  /zpool/zones

Upgrading to snv_54 did not work for me (CD/DVD/Live Upgrade). The install 
procedure was cancelled when it reached the NGZ ZFS setup part, and I was 
forced to do a full re-install of the whole OS. At that point I decided 
to have an OS-independent application setup: I decided to keep all my 
non-Solaris apps within the following structure:


NAME USED  AVAIL  REFER  MOUNTPOINT
zpool   93.8G  40.1G26K  /zpool
zpool/applic2.40G  40.1G  2.40G  /zpool/applic
zpool/bin108M  40.1G   108M  /zpool/bin
zpool/data   644M  40.1G   644M  /zpool/data
zpool/logs  1.03G  40.1G  1.03G  /zpool/logs

This means Apache, Tomcat, BIND DNS, Postfix, MySQL, Berkeley DB, ... were 
installed using a prefix (e.g. ./configure --prefix=/zpool/applic/named).


This gives me some independence from the core OS located 
in /sbin, /usr/bin, ...


After I moved all my apps into my own prefix path (ZFS mount point), I did 
another full reinstall of the OS, where I found out that I should have backed 
up some files from the core OS beforehand. In particular, I should have backed 
up the following files from the GZ and all NGZs: 


a) /etc/hosts, /etc/passwd, /etc/shadow, /etc/nsswitch.conf, /etc/resolv.conf
b) /etc/hostname.XX, 
c) /etc/init.d/startup-scripts (my own releases)


After I did another full setup (not an upgrade), I created the zones using the 
famous zonemgr script and brought back all applications just by mounting 
the /zpool/applic path into the NGZ path. 

This way I was pretty fast at upgrading the whole system to a new Nevada 
build, even though a real upgrade would be the preferred solution for me. 

I do not know whether, with snv_54, another upgrade from snv_54 to snv_55 is 
supported by OpenSolaris. That is why this thread is of interest to me. 


Ivan,

I am not sure if I completely understand your configuration, but you
can upgrade a system with zones that have delegated zfs datasets or
where you just used lofs mounts to mount the zfs filesystems into the
zone.  This would apply when all you have is data or non-Solaris pkgs
installed in the zfs filesystems.  Since the upgrade code does not
have to discover and mount the zfs filesystems to perform the upgrade
of the OS, this type of configuration works fine.  We would have to
see your actual zonecfg info to be sure that you haven't set things
up in a way that would prevent the upgrade though.

Jerry


[zfs-discuss] Re: NFS share problem with mac os x client

2007-02-08 Thread Wes Williams
> In short - make sure your UID on Mac is enough to
> access the files on
> nfs (as it would be if you would try to access those
> files locally).
> Or perhaps you tried from user with uid=0 in which
> case it's mapped to
> nobody user by default.
> 
> -- 
> Best regards,
> Robert

Exactly as Robert suggests.

I've had the same "problem", but it turned out that I simply needed the OS X 
group and user IDs to match those set up on the OpenSolaris ZFS server.  Once 
you correct this, the Finder works great with a ZFS NFS share.
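
A quick way to spot the mismatch is to compare the numeric IDs on both ends
(usernames and values illustrative):

macmini:~ jimb$ id
uid=501(jimb) gid=501(jimb)
server # id jimb
uid=501(jimb) gid=501(jimb)

If the numbers differ, either change them on one side or create a matching
account on the server.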
 
 


Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Jerry Jelinek

Ivan Buetler wrote:

Jerry, Thank you for your response. See my zonecfg of the named NGZ here:

[EMAIL PROTECTED] ~ # zonecfg -z named export
create -b
set zonepath=/zpool/zones/named
set autoboot=true
add inherit-pkg-dir
set dir=/lib
end
add inherit-pkg-dir
set dir=/platform
end
add inherit-pkg-dir
set dir=/sbin
end
add inherit-pkg-dir
set dir=/usr
end
add fs
set dir=/zpool/applic/bind-9.3.2-P1
set special=/zpool/applic/bind-9.3.2-P1/
set type=lofs
add options ro
add options nodevices
end
add fs
set dir=/zpool/data/named
set special=/zpool/data/named
set type=lofs
add options rw
add options nodevices
end
add net
set address=1.2.3.4/27
set physical=qfe3
end
add net
set address=21.2.3.5/27
set physical=qfe3
end
add net
set address=10.10.10.10/24
set physical=qfe4
end
add attr
set name=comment
set type=string
set value="Zone named"
end


It looks like you won't be able to upgrade.  Assuming /zpool
is the mount of your zfs zpool, your zonepath is on a
zfs dataset, so this is the exact issue that upgrade cannot
handle yet.  If you were to place your zones on a UFS filesystem,
the other fs entries you have to mount zfs datasets
within the zone would be fine.
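
A sketch of that supportable variant (zonepath on UFS, ZFS data brought in
via lofs; paths illustrative), in the same zonecfg style as above:

create -b
set zonepath=/export/zones/named
add fs
set dir=/zpool/data/named
set special=/zpool/data/named
set type=lofs
add options rw
add options nodevices
end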

Jerry


[zfs-discuss] Re: ZFS on PC Based Hardware for NAS?

2007-02-08 Thread Wes Williams
> I believe there is a write limit (commonly 10
> writes) on CF and
> similar storage devices, but I don't know for sure.
> Apart from that
> I think it's a good idea.
> 
> 
> James C. McPherson

As a consequence, /tmp, /var, and swap could eventually be moved to the ZFS 
hard drives to greatly reduce I/O to the CF card.
 
 


Re: [zfs-discuss] Re: ZFS on PC Based Hardware for NAS?

2007-02-08 Thread Wade . Stuart






[EMAIL PROTECTED] wrote on 02/08/2007 10:23:19 AM:

> > I believe there is a write limit (commonly 10
> > writes) on CF and
> > similar storage devices, but I don't know for sure.
> > Apart from that
> > I think it's a good idea.
> >
> >
> > James C. McPherson
>
> As a consequence, the /tmp, /var, and swap could eventually be moved
> to the ZFS hard drives to greatly reduce I/O to the CF card.
>

Or ZFS's slab logic could have a randomized block selector, where every
write to the CF device gets written to a random free block instead of using the
disk-based weighting that is done now.



Re: [zfs-discuss] Re: ZFS on PC Based Hardware for NAS?

2007-02-08 Thread Richard Elling

[EMAIL PROTECTED] wrote:

I believe there is a write limit (commonly 10
writes) on CF and
similar storage devices, but I don't know for sure.
Apart from that
I think it's a good idea.

James C. McPherson

As a consequence, the /tmp, /var, and swap could eventually be moved
to the ZFS hard drives to greatly reduce I/O to the CF card.


Or zfs's slab logic could have a randomized block selector where every
write to the cf device gets written to a random free block instead of the
disk based weighing that is done now.


Most flash devices have wear leveling built in, so I'm not sure that adding
this feature to ZFS would accomplish much.  IMHO, the more important ZFS
feature, wrt flash, is COW.

BTW, CF cards can do a large number of random iops, many more than any disk.
 -- richard


Re: [zfs-discuss] Re: Advice on a cheap home NAS machine using ZFS

2007-02-08 Thread Luke Scharf

Dave Sneddon wrote:

Can anyone shed any light on whether the actual software side of this can
be achieved? Can I share my entire ZFS pool as a "folder" or "network drive"
so WinXP can read it? Will this be fast enough to read/write to at DV speeds
(25mbit/s)? Once the pool is set up and I have it shared within XP (assuming 
it
can be done) can I then easily copy files to/from it?


I've used Samba to share a Unix filesystem with Windows clients many 
times, and would recommend it for this project:

   http://www.samba.org

It runs as a regular program on the Unix side, and does a very 
good job of translating Unix filesystem semantics into Windows 
filesystem semantics.


There are a few things that translate oddly -- Unix symlinks appear to 
be the real file on the Windows side, so one could really confuse a 
Windows user who isn't aware that they exist.  Fortunately, most people 
who use symlinks know what they are, so I never had a problem with 
symlinks when I was running a 400-user NFS/Samba server (in my previous 
job).  Also, traditional Unix permissions look funny if you try to 
adjust the permissions from a Windows workstation.  And, lastly, 
usernames and passwords are hashed differently on Unix than they are on 
Windows, so you have to run smbpasswd on the Unix server before a 
particular user can access their files on the Unix server via Samba -- 
unless you configure Samba to play along with an existing Windows Domain 
or AD.
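
The smbpasswd step is just (username illustrative):

server # smbpasswd -a alice

which prompts for the SMB password and adds the hash to Samba's own
password database.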


All in all, Samba provides a nice bridge -- and I've found it to be 
worlds better than the Windows-based NFS clients, and more secure as well.



I don't mind if I have to use something like FTP but ultimately it appearing 
as
a drive in XP is my final goal. If this can't be done then I don't believe I 
will
even attempt to install/create this server.


Windows XP with the newer Office installations has a "web folders" 
facility that kind-of-almost mounts an FTP server.  It doesn't show up 
as a drive letter, but it does appear in My Computer.


It's not uncommon to run Samba, SSH/SFTP, and FTP servers on the same 
host -- though there is quite a lot to be said for avoiding 
protocols such as plain old FTP, where the login is sent across the 
network in plaintext.



I hope this helps,
-Luke




Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Rich Teer
On Thu, 8 Feb 2007, [EMAIL PROTECTED] wrote:

> >Many thanks for answering my question.  Hopefully my noisy X4200
> >will be installed in the data centre tomorrow (Thursday); I had
> >a setback today while fighting with the Remote Console feature
> >of ILOM 1.1.1 (i.e., it doesn't work).  :-(
> 
> Just ssh into it and use the serial console from within SSH.

That's how I usually use the console on the X4200.  However, that
arrangement doesn't work when one wants to (re)install Solaris --
unless there's a way of telling the installer to use the serial
console while booting from DVD, rather than using the GUI?

-- 
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich


[zfs-discuss] Peculiar behaviour of snapshot after zfs receive

2007-02-08 Thread Trevor Watson
I am seeing what I think is very peculiar behaviour of ZFS after sending a 
full stream to a remote host - the upshot being that I can't send an 
incremental stream afterwards.


What I did was this:

host1 is Solaris 10 Update 2 SPARC
host2 is Solaris 10 Update 2 x86


host1 # zfs snapshot work/[EMAIL PROTECTED]
host1 # zfs send work/[EMAIL PROTECTED] | ssh host2 zfs recv export/home
host1 # ssh host2
host2 # zfs list
export/home  1.02G  47.8G  1.02G  /export/home
export/[EMAIL PROTECTED]  70.5K  -  1.02G  -
host2 #

Note that the snapshot on the remote system is showing changes to the 
underlying filesystem, even though it is not accessed by any application on host2.


Now, I try to send an incremental stream:

host1 # zfs snapshot work/[EMAIL PROTECTED]
host1 # zfs send -i work/[EMAIL PROTECTED] work/[EMAIL PROTECTED] | ssh host2 zfs recv 
export/home

cannot receive: destination has been modified since most recent snapshot --
use 'zfs rollback' to discard changes

Am I using send/recv incorrectly or is there something else going on here that 
I am missing?


Thanks,
Trev





[zfs-discuss] update of zfs boot support

2007-02-08 Thread Lori Alt

We've gotten a lot of questions lately about when we'll have
an updated version of support for booting from zfs.   We
are aiming at a new version of this going in to build 60.  New
instructions for setting up this configuration will be
made available at the same time.   If build 60 turns out
not to be possible, I'll notify this mailing list.  But for now,
build 60 is our target.  This update will still be for x86
platforms only. 


Lori


Re: [zfs-discuss] Peculiar behaviour of snapshot after zfs receive

2007-02-08 Thread Robert Milkowski
Hello Trevor,

Thursday, February 8, 2007, 6:23:21 PM, you wrote:

TW> I am seeing what I think is very peculiar behaviour of ZFS after sending a
TW> full stream to a remote host - the upshot being that I can't send an 
TW> incremental stream afterwards.

TW> What I did was this:

TW> host1 is Solaris 10 Update 2 SPARC
TW> host2 is Solaris 10 Update 2 x86


TW> host1 # zfs snapshot work/[EMAIL PROTECTED]
TW> host1 # zfs send work/[EMAIL PROTECTED] | ssh host2 zfs recv export/home
TW> host1 # ssh host2
TW> host2 # zfs list
TW> export/home  1.02G  47.8G  1.02G  /export/home
TW> export/[EMAIL PROTECTED]  70.5K  -  1.02G  -
TW> host2 #

TW> Note that the snapshot on the remote system is showing changes to the 
TW> underlying filesystem, even though it is not accessed by any application on 
host2.

TW> Now, I try to send an incremental stream:

TW> host1 # zfs snapshot work/[EMAIL PROTECTED]
TW> host1 # zfs send -i work/[EMAIL PROTECTED] work/[EMAIL PROTECTED] | ssh 
host2 zfs recv
TW> export/home
TW> cannot receive: destination has been modified since most recent snapshot --
TW> use 'zfs rollback' to discard changes

TW> Am I using send/recv incorrectly or is there something else going on here 
that
TW> I am missing?


It's a known bug.

Unmount and roll back the filesystem on host2. You should then see 0 used
space on the snapshot, and the incremental send should work.
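
In terms of the commands from the original post, the workaround looks like
this on host2 (snapshot name illustrative, since the original names were
mangled by the archive):

host2 # zfs unmount export/home
host2 # zfs rollback export/home@snap1
host2 # zfs list

after which the incremental zfs send | zfs recv from host1 should succeed.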

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Peculiar behavior of snapshot after zfs receive

2007-02-08 Thread Wade . Stuart




>
> TW> Am I using send/recv incorrectly or is there something else
> going on here that
> TW> I am missing?
>
>
> It's a known bug.
>
> umount and rollback file system on host 2. You should see 0 used space
> on a snapshot and then it should work.

Bug ID?  Is it related to atime changes?

>
> --
> Best regards,
>  Robertmailto:[EMAIL PROTECTED]
>http://milek.blogspot.com



Re[2]: [zfs-discuss] ZFS Degraded Disks

2007-02-08 Thread Robert Milkowski
Hello Kory,

Thursday, February 8, 2007, 12:33:13 AM, you wrote:

KW> I run the ZFS command and get this below.  How do you  fix a  degraded
KW> disk?

KW> zpool replace moodle c1t3d0
KW> invalid vdev specification
KW> use '-f' to override the following errors:
KW> /dev/dsk/c1t3d0s0 is part of active ZFS pool moodle. Please see zpool(1M).
KW> /dev/dsk/c1t3d0s2 is part of active ZFS pool moodle. Please see zpool(1M).


Please post the output of 'zpool status' first.



-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Casper . Dik

>That's how I usually use the console on the X4200.  However, that
>arrangement doesn't work when one wants to (re)install Solaris.
>Unless there's a way of telling the installer to use the serial
>console while booting from DVD, rather than using the GUI?


I thought there was a grub "use ttya" and a "use ttyb" line on the DVD?
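
From memory, the serial-console entries on the install media look roughly
like this (treat it as a sketch, not the exact menu.lst):

title Solaris Serial Console ttya
  kernel /boot/multiboot kernel/unix -B console=ttya
  module /boot/x86.miniroot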

Casper


Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Rich Teer
On Thu, 8 Feb 2007, [EMAIL PROTECTED] wrote:

> I thought there were a grub "use ttya" and "use ttyb" line on the DVD?

Yes, but one needs to be able to see that menu in order to select
the correct item first.  A chicken-and-egg situation!

Not that it matters so much for this case now, as I've hooked up
a spare monitor and keyboard to it.  But connecting a monitor and
keyboard directly to a server just feels ... wrong.  But then I'm
an old-school SPARC guy, so I guess that's not too surprising!  :-)

-- 
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member

President,
Rite Online Inc.

Voice: +1 (250) 979-1638
URL: http://www.rite-group.com/rich


Re: [zones-discuss] Re: [zfs-discuss] Downsides to zone roots on ZFS?

2007-02-08 Thread Casper . Dik

>Yes but one needs to be able to see that menu in order to select
>the correct item first.  A chicken-and-egg situation!

But the console redirection setting in the BIOS should address
that, right?

Casper


Re[2]: [zfs-discuss] Peculiar behavior of snapshot after zfs receive

2007-02-08 Thread Robert Milkowski
Hello Wade,

Thursday, February 8, 2007, 8:00:40 PM, you wrote:




>>
>> TW> Am I using send/recv incorrectly or is there something else
>> going on here that
>> TW> I am missing?
>>
>>
>> It's a known bug.
>>
>> umount and rollback file system on host 2. You should see 0 used space
>> on a snapshot and then it should work.

WSfc> Bug ID?  Is it related to atime changes?

It has to do with the delete queue being processed when the fs is mounted.


The bug id is: 6343779
http://bugs.opensolaris.org/view_bug.do?bug_id=6343779

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Peculiar behaviour of snapshot after zfs receive

2007-02-08 Thread eric kustarz


On Feb 8, 2007, at 10:53 AM, Robert Milkowski wrote:


Hello Trevor,

Thursday, February 8, 2007, 6:23:21 PM, you wrote:

TW> I am seeing what I think is very peculiar behaviour of ZFS  
after sending a
TW> full stream to a remote host - the upshot being that I can't  
send an

TW> incremental stream afterwards.

TW> What I did was this:

TW> host1 is Solaris 10 Update 2 SPARC
TW> host2 is Solaris 10 Update 2 x86


TW> host1 # zfs snapshot work/[EMAIL PROTECTED]
TW> host1 # zfs send work/[EMAIL PROTECTED] | ssh host2 zfs recv export/home
TW> host1 # ssh host2
TW> host2 # zfs list
TW> export/home  1.02G  47.8G  1.02G  /export/home
TW> export/[EMAIL PROTECTED]  70.5K  -  1.02G  -
TW> host2 #

TW> Note that the snapshot on the remote system is showing changes  
to the
TW> underlying filesystem, even though it is not accessed by any  
application on host2.


TW> Now, I try to send an incremental stream:

TW> host1 # zfs snapshot work/[EMAIL PROTECTED]
TW> host1 # zfs send -i work/[EMAIL PROTECTED] work/[EMAIL PROTECTED] | ssh host2  
zfs recv

TW> export/home
TW> cannot receive: destination has been modified since most recent  
snapshot --

TW> use 'zfs rollback' to discard changes

TW> Am I using send/recv incorrectly or is there something else  
going on here that

TW> I am missing?


It's a known bug.

umount and rollback file system on host 2. You should see 0 used space
on a snapshot and then it should work.




And with snv_48 (s10u4 when it becomes available), you can use 'zfs  
recv -F' to force the rollback.
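
In other words, the incremental step from the original post collapses to a
single pipeline (snapshot names illustrative, since the originals were
mangled by the archive):

host1 # zfs send -i work/fs@snap1 work/fs@snap2 | ssh host2 zfs recv -F export/home

with no manual unmount/rollback needed on host2.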


eric



Re: Re[2]: [zfs-discuss] Peculiar behavior of snapshot after zfs receive

2007-02-08 Thread Wade . Stuart




> Hello Wade,
>
> Thursday, February 8, 2007, 8:00:40 PM, you wrote:
>
>
>
>
> >>
> >> TW> Am I using send/recv incorrectly or is there something else
> >> going on here that
> >> TW> I am missing?
> >>
> >>
> >> It's a known bug.
> >>
> >> umount and rollback file system on host 2. You should see 0 used space
> >> on a snapshot and then it should work.
>
> WSfc> Bug ID?  Is it related to atime changes?
>
> It has to do with delete queue being processed when fs is mounted.
>
>
> The bug id is: 6343779
> http://bugs.opensolaris.org/view_bug.do?bug_id=6343779
>

Robert,

  Thanks!  This is good to know.  I was having issues with one of my
boxes and zfs send/receive that may very well have been this bug.

-Wade




Re: [zfs-discuss] Re: The ZFS MOS and how DNODES are stored

2007-02-08 Thread Matthew Ahrens

Bill Moloney wrote:

Thanks for the input, Darren, but I'm still confused about DNODE
atomicity ... it's difficult to imagine that a change made
anyplace in the zpool would require copy operations all the way back
up to the uberblock 


This is in fact what happens.  However, these changes are all batched up 
(into a transaction group, or "txg"), so the overhead is minimal.


> the DNODE

implementation appears to include its own checksum field
(self-checksumming), 


That is not the case.  Only the uberblock and intent log blocks are 
self-checksumming.


> if this is not the case, than 'any'

modification in the zpool would require copying up to the uberblock


That's correct, any modifications require modifying the uberblock (with 
the exception of intent log writes).


FYI, dnodes are not involved with the snapshot mechanism.  Snapshotting 
happens at the dsl dataset layer, while dnodes are implemented above 
that in the dmu layer.  Check out dsl_dataset.[ch].


--matt


Re: [zfs-discuss] ZFS multi-threading

2007-02-08 Thread johansen-osdev
> Would the logic behind ZFS take full advantage of a heavily multicored
> system, such as on the Sun Niagara platform? Would it utilize of the
> 32 concurrent threads for generating its checksums? Has anyone
> compared ZFS on a Sun Tx000, to that of a 2-4 thread x64 machine?

Pete and I are working on resolving ZFS scalability issues with Niagara and
StarCat right now.  I'm not sure if any official numbers about ZFS
performance on Niagara have been published.

As far as concurrent threads generating checksums goes, the system
doesn't work quite the way you have postulated.  The checksum is
generated in the ZIO_STAGE_CHECKSUM_GENERATE pipeline state for writes,
and verified in the ZIO_STAGE_CHECKSUM_VERIFY pipeline stage for reads.
Whichever thread happens to advance the pipeline to the checksum generate
stage is the thread that will actually perform the work.  ZFS does not
break the work of the checksum into chunks and have multiple CPUs
perform the computation.  However, it is possible to have concurrent
writes simultaneously in the checksum_generate stage.

More details about this can be found in zfs/zio.c and zfs/sys/zio_impl.h

-j



[zfs-discuss] Overview (rollup) of recent activity on zfs-discuss

2007-02-08 Thread Eric Boutilier

For background on what this is, see:

http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200

=
zfs-discuss 01/16 - 01/31
=

Size of all threads during period:

Thread size Topic
--- -
 76   Thumper Origins Q
 68   ZFS or UFS - what to do?
 52   How much do we really want zpool remove?
 42   Heavy writes freezing system
 28   can I use zfs on just a partition?
 56   External drive enclosures + Sun Server for massstorage
 16   hot spares - in standby?
 16   Cheap ZFS homeserver.
 15   panic with zfs
 12   multihosted ZFS
 11   zpool split
 10   zfs rewrite?
 10   What SATA controllers are people using for ZFS?
 10   UFS on zvol: volblocksize and maxcontig
 10   Adding my own compression to zfs
  9   Synchronous Mount?
  8   External drive enclosures + Sun
  7   zpool dumps core with did device
  7   ZFS inode equivalent
  7   ZFS direct IO
  7   External drive enclosures + Sun Server for mass
  6   need advice: ZFS config ideas for X4500 Thumper?
  6   file not persistent after node bounce when there is a bad disk?
  6   Project Proposal: Availability Suite
  6   External drive enclosures + Sun Server for
  6   Backup/Restore idea?
  5   zfs / nfs issue (not performance :-) with courier-imap
  5   restore pool from detached disk from mirror
  5   high density SAS
  5   ditto==RAID1, parity==RAID5?
  5   ZFS and HDLM 5.8 ... does that coexist well ?
  5   Solaris-Supported cards with battery backup
  5   I only see 5.33TB of my 7.25TB zfs-pool. Why?
  5   Can you turn on zfs compression when the fs is already populated?
  4   dumpadm and using dumpfile on zfs?
  4   ZFS patches for Solaris 10U2 ?
  4   ZFS panics system during boot, after 11/06 upgrade
  4   ZFS on PC Based Hardware for NAS?
  4   ZFS brings System to panic/freeze
  4   Some questions I had while testing ZFS.
  4   On-failure policies for pools
  4   Mounting a ZFS clone
  4   Implementation Question
  4   Folders vs. ZFS
  4   Export ZFS over NFS ?
  3   zfs crashing
  3   periodic disk i/o upon pool upgrade
  3   patch making tar multi-thread
  3   iSCSI on a single interface?
  3   data wanted: disk kstats
  3   bug id 6381203
  3   Need Help on device structure
  3   How to reconfigure ZFS?
  3   Actual (cache) memory use of ZFS?
  3   A little different look at filesystems ... Just looking for ideas
  2   zpool overlay
  2   unable to boot zone
  2   question: zfs code size statistics
  2   bug id 6343667
  2   ZFS volume is hosing BIOS POST on Ultra20 (BIOS 2.1.7)
  2   ZFS block squashing (CAS)
  2   ZFS and HDLM 5.8 ... does that coexist well ? [MD21]
  2   X2100 not hotswap
  2   Why replacing a drive generates writes to other disks?
  2   SAS support on Solaris
  2   Problems adding drive
  2   Extremely poor ZFS perf and other observations
  2   Enhance 1U eSATA storage device and Solaris 10?
  1   yet another blog: ZFS space, performance, MTTDL
  1   unsubscribe
  1   question about self healing
  1   ftruncate is failing on ZFS
  1   Zpooling problems
  1   ZFSroot hanging at boot time with os nv54
  1   ZFS ARC blog
  1   VxVM volumes in a zpool.
  1   Understanding ::memstat in terms of the ARC
  1   Remote Replication
  1   Raid Edition drive with RAIDZ
  1   Possibility to change GUID zfs pool at import
  1   On the SATA framework
  1   Multiple Read one Writer Filesystem
  1   MTTDL blogfest continues
  1   HELP please zfs can't open drives!!!
  1   Eliminating double path with ZFS's volume manager
  1   Distributed FS
  1   Converting home directory from ufs to zfs
  1   Bathing ape hoody Bathing ape bape hoodie lil wayne BBC
  1   Almost lost my data
  1   A little different look at filesystems ... Justlooking for ideas


Posting activity by person for period:

# of posts  By
--   --
 38   rmilkowski at task.gda.pl (robert milkowski)
 38   fcusack at fcusack.com (frank cusack)
 33   jasonjwwilliams at gmail.com (jason j. w. williams)
 26   rheilke at dragonhearth.com (rainer heilke)
 25   r

[zfs-discuss] FROSUG February Meeting Announcement (2/22/2007)

2007-02-08 Thread Jim Walker
This month's FROSUG (Front Range OpenSolaris User Group) meeting is on
Thursday, February 22, 2007.  Our presentation is "ZFS as a Root File
System" by Lori Alt. In addition, Jon Bowman will be giving an OpenSolaris
Update, and we will also be doing an InstallFest. So, if you want help
installing an OpenSolaris distribution, back up your laptop and bring it
to the meeting!

About the presentation(s):
One of the next steps in the evolution of ZFS is to enable
its use as a root file system.  This presentation will focus
on how booting from ZFS will work, how installation
will be affected by ZFS's feature set, and the many advantages
that will result from being able to use ZFS as a root file system.

The presentation(s)s will be posted here prior to the meeting:
http://www.opensolaris.org/os/community/os_user_groups/frosug/

About our presenter(s):
Lori Alt is a Staff Engineer at Sun Microsystems, where
she has worked since 1991.  Lori worked on Solaris install
and upgrade and then on UFS, where she led the multi-terabyte
UFS project.  She has Bachelor's and Master's degrees in
computer science from Washington University in St. Louis, MO.

-

Meeting Details:

When: Thursday, February 22, 2007
Times: 6:00pm - 6:30pm Doors open and Pizza
   6:30pm - 6:45pm OpenSolaris Update (Jon Bowman)
   6:45pm - 8:30pm ZFS as a Root File System (Lori Alt)
Where: Sun Broomfield Campus
   Building 1 - Conference Center
   500 Eldorado Blvd.
   Broomfield, CO 80021

Note:  The location of this meeting may change. We will send out an
   additional email prior to the meeting if this happens.

Pizza and soft drinks will be served at the beginning of the meeting.
Please RSVP to frosug-rsvp(AT)opensolaris(DOT)org in order to help us
plan for food and setup access to the Sun campus.

We hope to see you there!
Thanks,
FROSUG

+++

Future Meeting Plans:
March 29, 2007: Doug McCallum presents "sharemgr"
 
 