Re: [zfs-discuss] ZFS snapshot GUI

2007-11-22 Thread Ross
Very good points Rang, I'm going to add to them with a few of my own.

It should be possible to restore individual files rather than rolling back the 
snapshot and I guess that's what was meant here.  I think the terminology in 
the original post may not be too clear.

However, my impression from reading this is that it's an application that runs 
directly on the machine.  If so, we're missing an opportunity here.  Solaris 
isn't really an end-user OS, it's more of a server OS.  If you are going to 
implement a nice GUI for restoring files from a snapshot, you really want that 
to work over a network as well as on the local machine.

Ironically, if you're a Windows user you already have that ability over the 
network with Solaris.  Run ZFS and Samba, and Windows users can use Microsoft's 
Shadow Copy Client to right-click any file and easily restore it from a 
snapshot:  http://helpdesk.its.uiowa.edu/windows/instructions/shadowcopy.htm

What's really needed is a way to do that on Solaris and Linux machines over the 
network.  Integration with Apple's Time Machine would be great too (especially 
as it sounds like they may be making it compatible with ZFS), but unless 
somebody high up in Sun speaks to Apple I don't see that happening.

So you need two UIs:

 - On the server side a simple UI is needed for creating and scheduling 
snapshots of the filesystem.  Tim Foster's service would be a good starting 
point for that: http://blogs.sun.com/timf/entry/zfs_automatic_for_the_people

 - On the client side a simple UI is needed that allows users to easily see 
previous versions of files and folders, and either restore them in place or 
copy old versions to a new location.

And the client side of this would want to be capable of running either locally 
or over the network.

I think you could probably bodge this by virtue of the fact that you can browse 
the files in a snapshot.  Performance would probably be slow, however, and I've 
no doubt that far better performance could be achieved with hooks into ZFS 
(which, incidentally, would benefit Apple if they want to move Time Machine to 
ZFS).

That kind of thing is way outside my experience, but it would be good 
if somebody at Sun could think about it.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS snapshot GUI

2007-11-22 Thread Russell Aspinwall
Hi,

In respect of snapshots:

a) should the snapshot process itself be modified to allow restoring 
of individual files via zfs rollback
b) should there be a "zfs rollfile" to selectively restore files from a 
snapshot
c) should there be a "zfs purge" which would allow file(s) to be removed 
from ZFS, including all snapshots
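For (b), note that the file data is already reachable today without a new 
subcommand: every dataset exposes read-only copies under .zfs/snapshot, so a 
hypothetical "zfs rollfile" could be little more than a copy from there. A 
minimal sketch (the mountpoint, snapshot name and file path below are all 
made-up examples):

```shell
# Print the read-only path of FILE as it existed in snapshot SNAP of the
# filesystem mounted at FSROOT. All three arguments are illustrative.
snap_path() {
  fsroot=$1; snap=$2; file=$3
  echo "$fsroot/.zfs/snapshot/$snap/$file"
}

# A hand-rolled "rollfile" is then just a copy back into place, e.g.:
#   cp "$(snap_path /tank/home backup-2007-11-22 docs/report.txt)" \
#      /tank/home/docs/report.txt
snap_path /tank/home backup-2007-11-22 docs/report.txt
```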

Ross wrote:
> Very good points Rang, I'm going to add to them with a few of my own.
>
> It should be possible to restore individual files rather than rolling back 
> the snapshot and I guess that's what was meant here.  I think the terminology 
> in the original post may not be too clear.
>
> However, my impression reading this is that this is an application that runs 
> directly on the machine.  If so, we're missing an opportunity here.  Solaris 
> isn't really an end user OS, it's more of a server OS.  If you are going to 
> implement a nice GUI for restoring files from a snapshot, you really want 
> that to work over a network as well as on the local machine.
>
> Ironically, if you're a windows user you already have that ability over the 
> network with Solaris.  Run ZFS and Samba and windows users can use 
> Microsoft's Shadow Copy Client to right-click any file and easily restore it 
> from a snapshot:  
> http://helpdesk.its.uiowa.edu/windows/instructions/shadowcopy.htm
>
> What's really needed is a way to do that on Solaris and Linux machines over 
> the network.  Integration with Apple's time machine would be great too 
> (especially as it sounds like they may be making it compatible with ZFS), but 
> unless somebody high up in Sun speaks to Apple I don't see that happening.
>
> So you need two UI's:
>
>  - On the server side a simple UI is needed for creating and scheduling 
> snapshots of the filesystem.  Tim Foster's service would be a good starting 
> point for that: http://blogs.sun.com/timf/entry/zfs_automatic_for_the_people
>
>  - On the client side a simple UI is needed that allows users to easily see 
> previous versions of files and folders, and either restore them in place or 
> copy old versions to a new location.
>
> And the client side of this would want to be capable of running either 
> locally or over the network.
>
> I think you could probably bodge this by virtue of the fact that you can 
> browse the files in a snapshot.  Performance would probably be slow however 
> and I've no doubt that far better performance could be achieved with hooks 
> into ZFS (which incidentally would benefit apple if they want to move time 
> machine to ZFS).
>
> That kind of thing is way outside my experience however, but it would be good 
> if somebody at Sun could think about it.


-- 
Regards

Russell





Re: [zfs-discuss] ZFS snapshot GUI

2007-11-22 Thread Tim Foster

hi there,

On Thu, 2007-11-22 at 00:53 -0800, Ross wrote:
> It should be possible to restore individual files rather than rolling
> back the snapshot and I guess that's what was meant here.  I think the
> terminology in the original post may not be too clear.

Yep, I agree.

> However, my impression reading this is that this is an application
> that runs directly on the machine.  If so, we're missing an
> opportunity here.  Solaris isn't really an end user OS, it's more of a
> server OS.  If you are going to implement a nice GUI for restoring
> files from a snapshot, you really want that to work over a network as
> well as on the local machine.

Definitely - you can do that now over NFS. Here space/timf is a ZFS
dataset on my desktop machine, haiiro. On "spoon", a client machine, I
browse to haiiro's NFS shares:

[EMAIL PROTECTED] cd /net/haiiro/space/timf/.zfs
[EMAIL PROTECTED] cd snapshot
[EMAIL PROTECTED] ls -1   
total 56
   3 backup-2007-09-25-16-21-05/
   3 backup-2007-09-25-16-49-42/
   3 backup-2007-09-25-16-53-37/
   3 backup-2007-09-25-17-35-07/
   .
   . 
   etc.

Those are all snapshots taken on the filesystems on haiiro.

You can also mkdir inside a remote directory to create snapshots,
assuming you've been delegated that permission:

[EMAIL PROTECTED] mkdir new
[EMAIL PROTECTED]  ssh [EMAIL PROTECTED] /usr/sbin/zfs list space/[EMAIL PROTECTED]
NAME USED  AVAIL  REFER  MOUNTPOINT
space/[EMAIL PROTECTED]  0  -  5,18G  -
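That delegation can be granted on the server with "zfs allow"; creating
snapshots over NFS via mkdir needs at least the snapshot and mount
permissions. A dry-run sketch (it only prints the commands, so it can be
read without a live pool; the user and dataset names match my example
above and are otherwise arbitrary):

```shell
#!/bin/sh
# Dry-run: print each command instead of executing it.
# Swap the body for: run() { "$@"; } to actually apply the delegation.
run() { echo "$@"; }

# Let user timf snapshot space/timf, including via "mkdir .zfs/snapshot/<name>".
run zfs allow timf snapshot,mount space/timf

# Display the delegated permissions on the dataset.
run zfs allow space/timf
```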


>  - On the client side a simple UI is needed that allows users to
> easily see previous versions of files and folders, and either restore
> them in place or copy old versions to a new location.

And that's what this is all about - trying to find a cleaner way than
http://blogs.sun.com/timf/entry/zfs_on_your_desktop

to tie the client and server sides together.

cheers,
tim
-- 
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf



Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Jason P. Warr
If you want a board that is a steal look at this one:

http://www.ascendtech.us/itemdesc.asp?ic=MBTAS2882G3NR

Tyan S2882, Dual Socket 940 Opteron, 8 DDR slots, 2  PCI-X 133 busses with 2 
slots each, Dual Core support.

$80.

Pair it with a couple of Opteron 270s from eBay for $195:

http://cgi.ebay.com/MATCH-PAIR-AMD-Opteron-270-64Bit-DualCore-940pin-2Ghz_W0QQitemZ290182379420QQihZ019QQcategoryZ80142QQssPageNameZWDVWQQrdZ1QQcmdZViewItem

Granted, you need an E-ATX case and an EPS12V power supply, but those are not 
that expensive.  For less than $600 you can have a hell of a base server-grade 
system with 4 cores and 2-4G of RAM.


- Original Message -
From: "Rob Logan" <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
Sent: Wednesday, November 21, 2007 11:17:19 PM (GMT-0600) America/Chicago
Subject: [zfs-discuss] Home Motherboard

grew tired of the recycled 32bit cpus in
http://www.opensolaris.org/jive/thread.jspa?messageID=127555

and bought this to put the two marvell88sx cards in:
$255 http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm
  http://www.supermicro.com/manuals/motherboard/3210/MNL-0970.pdf
$195 1333FSB 2.6GHz Xeon 3075 (basically an E6750)
  Any Core 2 Quad/Duo in LGA775 will work, including 45nm dies:
  http://rob.com/sun/x7sbe/45nm-pricing.jpg
$270 Four 1G PC2-6400 DDRII 800MHz 240-pin ECC Unbuffered SDRAM
$ 55 LOM (IPMI and Serial over LAN)
  http://www.supermicro.com/manuals/other/AOC-SIMSOLC-HTC.pdf

# /usr/X11/bin/scanpci
pci bus 0x0000 cardnum 0x00 function 0x00: vendor 0x8086 device 0x29f0
  Intel Corporation Server DRAM Controller

pci bus 0x0000 cardnum 0x01 function 0x00: vendor 0x8086 device 0x29f1
  Intel Corporation Server Host-Primary PCI Express Bridge

pci bus 0x0000 cardnum 0x1a function 0x00: vendor 0x8086 device 0x2937
  Intel Corporation USB UHCI Controller #4

pci bus 0x0000 cardnum 0x1a function 0x01: vendor 0x8086 device 0x2938
  Intel Corporation USB UHCI Controller #5

pci bus 0x0000 cardnum 0x1a function 0x02: vendor 0x8086 device 0x2939
  Intel Corporation USB UHCI Controller #6

pci bus 0x0000 cardnum 0x1a function 0x07: vendor 0x8086 device 0x293c
  Intel Corporation USB2 EHCI Controller #2

pci bus 0x0000 cardnum 0x1c function 0x00: vendor 0x8086 device 0x2940
  Intel Corporation PCI Express Port 1

pci bus 0x0000 cardnum 0x1c function 0x04: vendor 0x8086 device 0x2948
  Intel Corporation PCI Express Port 5

pci bus 0x0000 cardnum 0x1c function 0x05: vendor 0x8086 device 0x294a
  Intel Corporation PCI Express Port 6

pci bus 0x0000 cardnum 0x1d function 0x00: vendor 0x8086 device 0x2934
  Intel Corporation USB UHCI Controller #1

pci bus 0x0000 cardnum 0x1d function 0x01: vendor 0x8086 device 0x2935
  Intel Corporation USB UHCI Controller #2

pci bus 0x0000 cardnum 0x1d function 0x02: vendor 0x8086 device 0x2936
  Intel Corporation USB UHCI Controller #3

pci bus 0x0000 cardnum 0x1d function 0x07: vendor 0x8086 device 0x293a
  Intel Corporation USB2 EHCI Controller #1

pci bus 0x0000 cardnum 0x1e function 0x00: vendor 0x8086 device 0x244e
  Intel Corporation 82801 PCI Bridge

pci bus 0x0000 cardnum 0x1f function 0x00: vendor 0x8086 device 0x2916
  Intel Corporation  Device unknown

pci bus 0x0000 cardnum 0x1f function 0x02: vendor 0x8086 device 0x2922
  Intel Corporation 6 port SATA AHCI Controller

pci bus 0x0000 cardnum 0x1f function 0x03: vendor 0x8086 device 0x2930
  Intel Corporation SMBus Controller

pci bus 0x0000 cardnum 0x1f function 0x06: vendor 0x8086 device 0x2932
  Intel Corporation Thermal Subsystem

pci bus 0x0001 cardnum 0x00 function 0x00: vendor 0x8086 device 0x0329
  Intel Corporation 6700PXH PCI Express-to-PCI Bridge A

pci bus 0x0001 cardnum 0x00 function 0x01: vendor 0x8086 device 0x0326
  Intel Corporation 6700/6702PXH I/OxAPIC Interrupt Controller A

pci bus 0x0001 cardnum 0x00 function 0x02: vendor 0x8086 device 0x032a
  Intel Corporation 6700PXH PCI Express-to-PCI Bridge B

pci bus 0x0001 cardnum 0x00 function 0x03: vendor 0x8086 device 0x0327
  Intel Corporation 6700PXH I/OxAPIC Interrupt Controller B

pci bus 0x0003 cardnum 0x02 function 0x00: vendor 0x11ab device 0x6081
  Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X  
Controller

pci bus 0x000d cardnum 0x00 function 0x00: vendor 0x8086 device 0x108c
  Intel Corporation 82573E Gigabit Ethernet Controller (Copper)

pci bus 0x000f cardnum 0x00 function 0x00: vendor 0x8086 device 0x109a
  Intel Corporation 82573L Gigabit Ethernet Controller

pci bus 0x0011 cardnum 0x04 function 0x00: vendor 0x1002 device 0x515e
  ATI Technologies Inc ES1000

# cfgadm -a
Ap_Id  Type Receptacle   Occupant
Condition
pcie5  etherne/hp   connectedconfigured   ok
pcie6  etherne/hp   connectedconfigured   ok
sata0/0::dsk/c0t0d0disk connectedconfigured   ok
sata0/1::dsk/c0t1d0disk connectedconfigured   ok
sata0/2::ds

Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Vincent Fox
The new Intel D201GLY2 looks quite good.

Fanless 64-bit CPU, low-power consumption from what I have read.  Awaiting 
first substantive review from SilentPCReview.com before ordering one.
 
 


[zfs-discuss] cifs server?

2007-11-22 Thread Tim Cook
So now that cifs has finally been released in b77, does anyone happen to have 
any documentation on setup?  I know the initial share is relatively simple... 
but what is the process after that for actually getting users authenticated?  I 
see in the idmap service there are some configurations for authenticating 
against an AD server, but is there any way to get it to authenticate with a 
local database?

In my test environment I only have a couple of users, and no AD server, so it 
seems a bit silly to set one up simply to get authentication to the cifs 
shares.  Docs/links, anyone?
 
 


Re: [zfs-discuss] cifs server?

2007-11-22 Thread Tim Cook
so apparently you need to use smbadm, but when I go to create the group:

smbadm create wheel
failed to create the group (NOT_SUPPORTED)
 
 


Re: [zfs-discuss] cifs server?

2007-11-22 Thread Nicolas Williams
On Thu, Nov 22, 2007 at 10:27:18AM -0800, Tim Cook wrote:
> So now that cifs has finally been released in b77, anyone happen to

It hasn't been released.  It was integrated into build 77.

> have any documentation on setup.  I know the initial share is

The documentation will be available in the first SX release to
officially have the SMB server.

> relatively simple... but what is the process after that for actually
> getting users authenticated?  I see in the idmap service there's some
> configurations for authenticating against an AD server, but is there
> anyway to get it to authenticate with a local database?  

There have been a lot of bugfixes, so I highly recommend waiting at least
until build 79 closes.  What you would do is enable the idmap and
smb/server services, then run "smbadm join" and go from there.  But like
I said, I recommend waiting for build 79.
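In sketch form, dry-run (the script only prints the commands, since the
details may still change before the official docs appear; the workgroup
name is an example):

```shell
#!/bin/sh
# Dry-run: echo each command rather than executing it. Drop the echo
# (run() { "$@"; }) to actually enable the services on a b77+ system.
run() { echo "$@"; }

run svcadm enable -r idmap
run svcadm enable -r smb/server
# Workgroup mode; an AD domain join would instead look like
# "smbadm join -u <admin> <domain>".
run smbadm join -w WORKGROUP
```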

> In my test environment I only have a couple users, and no AD server,
> so it seems  a bit silly to set one up simply to get authentication to
> the cifs shares.  Docs/links, anyone?

There is a workgroup mode (this is controlled by a property of the
smb/server service), but I'm not sure how well it works as I've
not used it.  Perhaps the CIFS server project team can follow up here
(but keep in mind it's Thanksgiving week...).

Nico
-- 


Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Rob Logan
here is a simple layout for 6 disks toward "speed":

/dev/dsk/c0t0d0s1 -  - swap-  no  -
/dev/dsk/c0t1d0s1 -  - swap-  no  -
root/snv_77   -  / zfs -  no  -
z/snv_77/usr  -  /usr  zfs -  yes -
z/snv_77/var  -  /var  zfs -  yes -
z/snv_77/opt  -  /opt  zfs -  yes -


root@test[2:25pm]/root/boot/grub 27 % zpool iostat -v
  capacity operationsbandwidth
pool   used  avail   read  write   read  write
  -  -  -  -  -  -
root  4.39G  15.1G179  1  3.02M  16.0K
   mirror  4.39G  15.1G179  1  3.02M  16.0K
 c0t1d0s0  -  - 62  1  3.35M  22.3K
 c0t0d0s0  -  - 61  1  3.35M  22.3K
  -  -  -  -  -  -
z  319G   421G  1.15K 17  81.9M  83.8K
   mirror   113G   163G418  8  28.8M  40.8K
 c0t0d0s7  -  -272  2  29.4M  48.2K
 c0t1d0s7  -  -272  2  29.5M  48.2K
   mirror   103G   129G376  4  26.5M  21.4K
 c0t2d0-  -250  3  27.1M  28.9K
 c0t3d0-  -250  2  27.1M  28.9K
   mirror   104G   128G380  4  26.6M  21.6K
 c0t4d0-  -253  2  27.1M  29.0K
 c0t5d0-  -252  2  27.1M  29.0K
  -  -  -  -  -  -



Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread mike
I actually have a related motherboard, chassis, dual power supplies
and 12x400 GB drives already up on eBay too. If I recall correctly,
Areca cards are supported in OpenSolaris...

http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&item=300172982498


On 11/22/07, Jason P. Warr <[EMAIL PROTECTED]> wrote:
> If you want a board that is a steal look at this one:
>
> http://www.ascendtech.us/itemdesc.asp?ic=MBTAS2882G3NR
>
> Tyan S2882, Dual Socket 940 Opteron, 8 DDR slots, 2  PCI-X 133 busses with 2 
> slots each, Dual Core support.
>
> $80.
>
> Pair is with a couple of Opteron 270's from ebay for $195:
>
> http://cgi.ebay.com/MATCH-PAIR-AMD-Opteron-270-64Bit-DualCore-940pin-2Ghz_W0QQitemZ290182379420QQihZ019QQcategoryZ80142QQssPageNameZWDVWQQrdZ1QQcmdZViewItem
>
> Granted, you need an E-ATX case but those are not that expensive and an 
> EPS12V power supply.  For less than $600 you can have a hell of a base server 
> grade system with 4 cores and 2-4G of ram.
>
>
> - Original Message -
> From: "Rob Logan" <[EMAIL PROTECTED]>
> To: zfs-discuss@opensolaris.org
> Sent: Wednesday, November 21, 2007 11:17:19 PM (GMT-0600) America/Chicago
> Subject: [zfs-discuss] Home Motherboard
>
> grew tired of the recycled 32bit cpus in
> http://www.opensolaris.org/jive/thread.jspa?messageID=127555
>
> and bought this to put the two marvell88sx cards in:
> $255 http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm
>  http://www.supermicro.com/manuals/motherboard/3210/MNL-0970.pdf
> $195 1333FSB 2.6GHz Xeon 3075 (basicly a E6750)
>  Any Core 2 Quad/Duo in LGA775 will work, including 45nm dies:
>  http://rob.com/sun/x7sbe/45nm-pricing.jpg
> $270 Four 1G PC2-6400 DDRII 800MHz 240-pin ECC Unbuffered SDRAM
> $ 55 LOM (IPMI and Serial over LAN)
>  http://www.supermicro.com/manuals/other/AOC-SIMSOLC-HTC.pdf
>
> # /usr/X11/bin/scanpci
> pci bus 0x cardnum 0x00 function 0x00: vendor 0x8086 device 0x29f0
>  Intel Corporation Server DRAM Controller
>
> pci bus 0x cardnum 0x01 function 0x00: vendor 0x8086 device 0x29f1
>  Intel Corporation Server Host-Primary PCI Express Bridge
>
> pci bus 0x cardnum 0x1a function 0x00: vendor 0x8086 device 0x2937
>  Intel Corporation USB UHCI Controller #4
>
> pci bus 0x cardnum 0x1a function 0x01: vendor 0x8086 device 0x2938
>  Intel Corporation USB UHCI Controller #5
>
> pci bus 0x cardnum 0x1a function 0x02: vendor 0x8086 device 0x2939
>  Intel Corporation USB UHCI Controller #6
>
> pci bus 0x cardnum 0x1a function 0x07: vendor 0x8086 device 0x293c
>  Intel Corporation USB2 EHCI Controller #2
>
> pci bus 0x cardnum 0x1c function 0x00: vendor 0x8086 device 0x2940
>  Intel Corporation PCI Express Port 1
>
> pci bus 0x cardnum 0x1c function 0x04: vendor 0x8086 device 0x2948
>  Intel Corporation PCI Express Port 5
>
> pci bus 0x cardnum 0x1c function 0x05: vendor 0x8086 device 0x294a
>  Intel Corporation PCI Express Port 6
>
> pci bus 0x cardnum 0x1d function 0x00: vendor 0x8086 device 0x2934
>  Intel Corporation USB UHCI Controller #1
>
> pci bus 0x cardnum 0x1d function 0x01: vendor 0x8086 device 0x2935
>  Intel Corporation USB UHCI Controller #2
>
> pci bus 0x cardnum 0x1d function 0x02: vendor 0x8086 device 0x2936
>  Intel Corporation USB UHCI Controller #3
>
> pci bus 0x cardnum 0x1d function 0x07: vendor 0x8086 device 0x293a
>  Intel Corporation USB2 EHCI Controller #1
>
> pci bus 0x cardnum 0x1e function 0x00: vendor 0x8086 device 0x244e
>  Intel Corporation 82801 PCI Bridge
>
> pci bus 0x cardnum 0x1f function 0x00: vendor 0x8086 device 0x2916
>  Intel Corporation  Device unknown
>
> pci bus 0x cardnum 0x1f function 0x02: vendor 0x8086 device 0x2922
>  Intel Corporation 6 port SATA AHCI Controller
>
> pci bus 0x cardnum 0x1f function 0x03: vendor 0x8086 device 0x2930
>  Intel Corporation SMBus Controller
>
> pci bus 0x cardnum 0x1f function 0x06: vendor 0x8086 device 0x2932
>  Intel Corporation Thermal Subsystem
>
> pci bus 0x0001 cardnum 0x00 function 0x00: vendor 0x8086 device 0x0329
>  Intel Corporation 6700PXH PCI Express-to-PCI Bridge A
>
> pci bus 0x0001 cardnum 0x00 function 0x01: vendor 0x8086 device 0x0326
>  Intel Corporation 6700/6702PXH I/OxAPIC Interrupt Controller A
>
> pci bus 0x0001 cardnum 0x00 function 0x02: vendor 0x8086 device 0x032a
>  Intel Corporation 6700PXH PCI Express-to-PCI Bridge B
>
> pci bus 0x0001 cardnum 0x00 function 0x03: vendor 0x8086 device 0x0327
>  Intel Corporation 6700PXH I/OxAPIC Interrupt Controller B
>
> pci bus 0x0003 cardnum 0x02 function 0x00: vendor 0x11ab device 0x6081
>  Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X
> Controller
>
> pci bus 0x000d cardnum 0x00 function 0x00: vendor 0x8086 device 0x108c
>  Intel Corporation 82573E Gigabit Ethernet Controller (Copper)
>
> pci bus 0x000f cardnum 0x00 function 0x00: vendor 0x8086 device 0x109a
>  Intel Corporation 82573L Gigabit Ethernet Controller
>
> pci bus 0x0011 cardnum 0x04 function 0x

[zfs-discuss] ZFS & Lustre

2007-11-22 Thread Rayson Ho
First reported by a Sun blogger (http://blogs.sun.com/simons/), most
of the presentations from the HPC Consortium meeting in Reno are now online:
https://events-at-sun.com/hpcreno/presentations.html

Some people on this list may be interested in this one: "Lustre – CFS
Update, Peter Braam, CEO, Cluster File Systems"

In Lustre 1.8, ZFS will be available as an option for the disk filesystem
- Lustre servers will be in user space
- will have user space ZFS code – DMU

I don't know enough about Lustre to comment on how this is done -- any
volunteers?? :-)

Rayson


Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Rob Logan

 > with 4 cores and 2-4G of ram.

not sure 2G is enough... at least with 64bit there are no kernel space  
issues.

6 % echo '::memstat' | mdb -k
Page SummaryPagesMB  %Tot
     
Kernel 692075  2703   66%
Anon33265   1293%
Exec and libs8690331%
Page cache   1143 40%
Free (cachelist) 3454130%
Free (freelist)307400  1200   29%

Total 1046027  4086
Physical  1046026  4086

this tree on the 64bit v20z box:

Page SummaryPagesMB  %Tot
     
Kernel 668799  2612   85%
Anon38477   1505%
Exec and libs4881191%
Page cache   5363201%
Free (cachelist) 7566291%
Free (freelist) 59052   2308%

Total  784138  3063
Physical   784137  3063

and the same tree on a 32bit box:

Page SummaryPagesMB  %Tot
     
Kernel 261359  1020   33%
Anon52314   2047%
Exec and libs   12245472%
Page cache   9885381%
Free (cachelist) 6816261%
Free (freelist) 7408819027247  240518168576  593093167776006144%

Total  784266  3063
Physical   784265  3063


from http://www.opensolaris.org/jive/message.jspa?messageID=173580

8 % ./zfs-mem-used
checking pool map size [B]: root
358424
checking pool map size [B]: z
4162512

9 % cat zfs-mem-used
#!/bin/sh

echo '::spa' | mdb -k | grep ACTIVE \
  | while read pool_ptr state pool_name
do
  echo "checking pool map size [B]: $pool_name"

  echo "${pool_ptr}::walk metaslab|::print -d struct metaslab ms_smo.smo_objsize" \
| mdb -k \
| nawk '{sub("^0t","",$3);sum+=$3}END{print sum}'
done



[zfs-discuss] Questions from a windows admin - Samba, shares & quotas

2007-11-22 Thread Ross
Hey folks,

This may sound a little crazy, but I'm a long time windows admin planning on 
rolling out a Solaris server to act as our main filestore, and I could do with 
a bit of advice.

The main reason for switching is so we can use snapshots.  With Samba and 
Microsoft's Shadow Copy Client we can backup everybody's files and give users 
the power to restore files themselves.  That plus the other benefits of ZFS 
mean we're seriously looking into this.  However, there are one or two side 
effects...

I'm more than a little concerned about how we go about creating user profiles 
and managing quotas.  On a windows server this is easy.  As you create a user 
account the appropriate folders are created for you.  So long as you have the 
right permissions on the parent folder, all those folders inherit their 
permissions automatically, and since quotas work per user, that's automatic too.

The question is, can I do anything to automate all this if I move to Solaris?

We've got to use ZFS to get the benefits, but that doesn't have user quotas so 
I'll have to script the creation of a filesystem for each user.  First of all, 
if I have multiple filesystems, can I still share those out under one path?  
ie:  each user has a home folder of \\server\share\username, can I still do 
that when every 'username' is a separate filesystem?

Then, if that's possible, is there any way I can make the creation of these 
filesystems automatic?   What I'm thinking is that I'll need a parent 
filesystem for these to inherit quota settings from, and that will be the 
'share' location above.  Now, when windows creates user accounts, it will 
automatically create subfolders in that parent filesystem to act as home 
directories.  Would I be able to write a script to watch that filesystem for 
new subfolders and have it automatically delete the subfolder and create a 
filesystem in its place?

And can I set permissions on a filesystem?  Could the script do that too?

If it works, while it may be a bit messy, there will actually be some 
advantages over the windows quota system.  In windows, the quota is set by 
ownership of the file.  If users move files between each other for any reason, 
or an admin has to take ownership to change permissions that can get messed up, 
and it's nigh on impossible to find out where all your space has gone if that 
happens.
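Roughly what I'm imagining the script would do, sketched dry-run (pool
name, mountpoint and quota are placeholders; the leading echo keeps it
from touching anything):

```shell
#!/bin/sh
# Sketch: replace a Windows-created home folder with a real ZFS
# filesystem carrying its own quota. All names here are placeholders.
# run() echoes instead of executing; swap it for run() { "$@"; } to
# do it for real on the server.
run() { echo "$@"; }

create_home() {
  run rm -rf "/tank/home/$1"         # drop the plain folder Windows made
  run zfs create "tank/home/$1"      # a filesystem takes its place
  run zfs set quota=1G "tank/home/$1"
  run chown "$1" "/tank/home/$1"     # permissions can be scripted the same way
}

create_home alice
```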
 
 


Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread Nathan Kroenert
I was interested in that one till I read:

One 240-pin DDR2 SDRAM Dual Inline Memory Module (DIMM) sockets
Support for DDR2 667 MHz, DDR2 533 MHz and DDR2 400 MHz DIMMs (DDR 667 
MHz validated to run at 533 MHz only)
Support for up to 1 GB of system memory

Boo!!!

:)

Nathan.

Vincent Fox wrote:
> The new Intel D201GLY2 looks quite good.
> 
> Fanless 64-bit CPU, low-power consumption from what I have read.  Awaiting 
> first substantive review from SilentPCReview.com before ordering one.


Re: [zfs-discuss] Home Motherboard

2007-11-22 Thread James C. McPherson
mike wrote:
> I actually have a related motherboard, chassis, dual power-supplies
> and 12x400 gig drives already up on ebay too. If I recall Areca cards
> are supported in OpenSolaris...

At the moment you can download the Areca "arcmsr" driver
from areca.com.tw, but I'm in the process of integrating
it into OpenSolaris:

http://bugs.opensolaris.org/view_bug.do?bug_id=6614012
6614012 add Areca SAS/SATA RAID adapter driver


James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog


Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-22 Thread can you guess?
> OK, I'll bite; it's not like I'm getting an answer to
> my other question.

Did I miss one somewhere?

> 
> Bill, please explain why deciding what to do about
> sequential scan 
> performance in ZFS is urgent?

It's not so much that it's 'urgent' (anyone affected by it simply won't use 
ZFS) as that it's a no-brainer.

> 
> ie why it's urgent rather than important (I agree that if it's bad
> then it's going to be important eventually).

It's bad, and it's important now for anyone who cares whether ZFS is viable for 
such workloads.

> 
> ie why it's too urgent to work out, first, how to
> measure whether 
> we're succeeding.

You don't have to measure the *rate* at which the depth of the water in the 
boat is rising in order to know that you've got a problem that needs 
addressing.  You don't have to measure *just how bad* sequential performance in 
a badly-fragmented file is to know that you've got a problem that needs 
addressing (see both Anton's and Roch's comments if you don't find mine 
convincing).

*After* you've tried to fix things, *then* it makes sense to measure just how 
close you got to ideal streaming-sequential disk bandwidth in order to see 
whether you need to work some more.  Right now, the only reason to measure 
precisely how awful sequential scanning performance can get after severely 
fragmenting a file by updating it randomly in small chunks is to be able to 
hand out "Attaboy!"s for how much changing it improved things - even though 
this by itself *still* won't say anything about whether the result attained 
offered reasonable performance in comparison with what's attainable (which is 
what should *really* be the basis for handing out any "Attaboy!"s).

Rather than make a politically-incorrect comment about the Special Olympics 
here, I'll just ask whether common sense is no longer considered an essential 
attribute in an engineer:  given the nature of the discussions about this and 
about RAID-Z, I've really got to wonder.

- bill
 
 


Re: [zfs-discuss] Questions from a windows admin - Samba, shares & quotas

2007-11-22 Thread Ross
Well, it looks like I've solved the question of whether you can auto-create the 
folders.  There's a nice little samba script that you can add to the share to 
do it for you:

From http://www.edplese.com/samba-with-zfs.html

Samba's root preexec share parameter can really come in handy when setting up 
user home directories. Here we tell it to automatically create a ZFS filesystem 
for every new user, set the owner, and set the quota to 1 GB. This can easily 
be expanded to other filesystem properties as well.

Create a file /usr/bin/createhome.sh:

#!/usr/bin/tcsh
if ( ! -e /tank/home/$1 ) then
  zfs create tank/home/$1
  chown $1 /tank/home/$1
  zfs set quota=1G tank/home/$1
endif

Modify smb.conf and modify [homes] to resemble:

[homes] 
  comment = User Home Directories 
  browseable = no 
  writable = yes
  root preexec = /usr/bin/createhome.sh '%U'
 
 