Re: [zfs-discuss] OT: anyone aware how to obtain 1.8.0 for X2100M2?

2010-12-19 Thread Scott Lawson

[zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
Hi All, I have an interesting question that may or may not be answerable from some internal ZFS semantics. I have a Sun Messaging Server which has 5 ZFS based email stores. The Sun Messaging server uses hard links to link identical messages together. Messages are stored in standard SMTP MIME
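One way to put a number on the hard-link savings is to total file sizes once per directory entry and once per inode; this is a hedged sketch, not something from the thread, assuming GNU find and an illustrative store mount point /store1:

    # Sum sizes counting every link, then counting each inode once;
    # the difference is the space the hard links are saving.
    find /store1 -type f -printf '%n %i %s\n' | awk '
        { linked += $3; if (!seen[$2]++) unique += $3 }
        END { printf "all links: %d bytes, unique: %d, saved: %d\n",
              linked, unique, linked - unique }'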

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
On 13/06/11 10:28 AM, Nico Williams wrote: On Sun, Jun 12, 2011 at 4:14 PM, Scott Lawson wrote: I have an interesting question that may or may not be answerable from some internal ZFS semantics. This is really standard Unix filesystem semantics. I understand this, just wanting

Re: [zfs-discuss] ZFS Hard link space savings

2011-06-12 Thread Scott Lawson
boxes we have left. M$ also heavily discounts Exchange CALs to Edu and Oracle is not very friendly the way Sun was with their JES licensing. So it is bye bye Sun Messaging Server for us. 2011-06-13 1:14, Scott Lawson wrote: Hi All, I have an interesting question that may or may not be an

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
to a v120 with 8 x 320 GB SCSI disks in a RAIDZ2 for all our home data and home business (which is a printing outfit that creates a lot of very big files on our Macs).

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
it at home too with an old D1000 attached to a v120 with 8 x 320 GB SCSI disks in a RAIDZ2 for all our home data and home business (which is a printing outfit that creates a lot of very big files on our Macs).

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
Toby Thain wrote: On 17-Feb-09, at 3:01 PM, Scott Lawson wrote: Hi All, ... I have seen other people discussing power availability on other threads recently. If you want it, you can have it. You just need the business case for it. I don't buy the comments on UPS unreliability. H

Re: [zfs-discuss] ZFS on SAN?

2009-02-17 Thread Scott Lawson
David Magda wrote: On Feb 17, 2009, at 21:35, Scott Lawson wrote: Everything we have has dual power supplies, fed from dual power rails, fed from separate switchboards, through separate very large UPSs, backed by generators, fed by two substations and then cloned to another

Re: [zfs-discuss] ZFS on SAN?

2009-02-18 Thread Scott Lawson
Hi Andras, No problems writing directly. Answers inline below. (If there are any typos it's because it's late and I have had a very long day ;)) andras spitzer wrote: Scott, Sorry for writing you directly, but most likely you have missed my questions regarding your SW design, whenever you have t

Re: [zfs-discuss] qmail on zfs

2009-02-18 Thread Scott Lawson
Robert Milkowski wrote: Hello Asif, Wednesday, February 18, 2009, 1:28:09 AM, you wrote: AI> On Tue, Feb 17, 2009 at 5:52 PM, Robert Milkowski wrote: Hello Asif, Tuesday, February 17, 2009, 7:43:41 PM, you wrote: AI> Hi All AI> Does anyone have any experience on running qmail on solar

Re: [zfs-discuss] ZFS on SAN? Availability edition.

2009-02-18 Thread Scott Lawson

Re: [zfs-discuss] ZFS on SAN? Availability edition.

2009-02-18 Thread Scott Lawson
Miles Nordin wrote: "sl" == Scott Lawson writes: sl> Electricity *is* the lifeblood of available storage. I never meant to suggest computing machinery could run without electricity. My suggestion is, if your focus is _reliability_ rather than availability

Re: [zfs-discuss] Comstar production-ready?

2009-03-03 Thread Scott Lawson
ver a commercially supported solution for them. Thanks, S.

Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson
to them and they might not understand having to change their shell paths to get the userland that they want ;) On Wed, Mar 4, 2009 at 2:47 AM, Scott Lawson wrote: Stephen Nelson-Smith wrote: Hi, I recommended a ZFS-based archive solution to a client needing to have a network-b

Re: [zfs-discuss] Comstar production-ready?

2009-03-04 Thread Scott Lawson

Re: [zfs-discuss] ZFS on a SAN

2009-03-12 Thread Scott Lawson

Re: [zfs-discuss] Can this be done?

2009-03-28 Thread Scott Lawson

Re: [zfs-discuss] Can this be done?

2009-03-31 Thread Scott Lawson
Michael Shadle wrote: On Mon, Mar 30, 2009 at 4:13 PM, Michael Shadle wrote: Sounds like a reasonable idea, no? Follow up question: can I add a single disk to the existing raidz2 later on (if somehow I found more space in my chassis) so instead of a 7 disk raidz2 (5+2) it becomes a 6+2
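The usual answer at the time was no: a raidz vdev's width is fixed when it is created, and zpool add grafts on a new top-level vdev rather than widening the existing one. A hedged sketch of the pitfall, with an illustrative pool name and device:

    # This does NOT turn a 5+2 raidz2 into 6+2; it stripes a brand-new
    # top-level vdev next to it, and a bare disk added this way carries
    # no redundancy (zpool warns about the mismatched replication level):
    zpool add tank c1t8d0

    # Growing the raidz2 itself means replacing every member with a
    # larger disk, or rebuilding the pool from backup.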

Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson
Michael Shadle wrote: On Wed, Apr 1, 2009 at 3:19 AM, Michael Shadle wrote: I'm going to try to move one of my disks off my rpool tomorrow (since it's a mirror) to a different controller. According to what I've heard before, ZFS should automagically recognize this new location and have no
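For what it's worth, ZFS tracks pool members by the GUID in their on-disk labels rather than by device path, which is why the move is expected to just work. A hedged sketch of forcing a rescan on a data pool if it doesn't (names are illustrative; an active root pool can't be exported and is instead re-scanned at boot):

    # Re-enumerate device paths after recabling a data pool:
    zpool export tank
    zpool import tank

    # Confirm the disk came back under its new controller path:
    zpool status tank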

Re: [zfs-discuss] Can this be done?

2009-04-07 Thread Scott Lawson

Re: [zfs-discuss] zfs as a cache server

2009-04-09 Thread Scott Lawson

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you will then gain the full benefits of ZFS. Block self healing etc etc
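As a hedged illustration of the JBOD suggestion (device names invented; the split is one common choice, not the thread's verdict), sixteen exposed disks could become two raidz2 vdevs rather than one wide one:

    # Two 6+2 raidz2 vdevs striped together: less raw space than a single
    # 14+2 vdev, but faster resilvers and twice the random IOPS.
    zpool create tank \
        raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        raidz2 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0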

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Michael Shadle wrote: On Mon, Apr 27, 2009 at 5:32 PM, Scott Lawson wrote: One thing you haven't mentioned is the drive type and size that you are planning to use as this greatly influences what people here would recommend. RAIDZ2 is built for big, slow SATA disks as reconstruction

Re: [zfs-discuss] Raidz vdev size... again.

2009-04-27 Thread Scott Lawson
Richard Elling wrote: Some history below... Scott Lawson wrote: Michael Shadle wrote: On Mon, Apr 27, 2009 at 4:51 PM, Scott Lawson wrote: If possible though you would be best to let the 3ware controller expose the 16 disks as a JBOD to ZFS and create a RAIDZ2 within Solaris as you

Re: [zfs-discuss] ZFS + EMC Cx310 Array (JBOD ? Or Single MetaLUN ?)

2009-04-30 Thread Scott Lawson
Wilkinson, Alex wrote: On Thu, Apr 30, 2009 at 11:11:55AM -0500, Bob Friesenhahn wrote: >On Thu, 30 Apr 2009, Wilkinson, Alex wrote: >> >> I currently have a single 17TB MetaLUN that I am about to present to an >> OpenSolaris initiator and it will obviously be ZFS. However
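The usual advice in this situation (a hedged sketch rather than the thread's conclusion, with invented device names) is to present several smaller LUNs so ZFS has redundancy of its own and can repair, not merely detect, checksum errors:

    # Mirroring pairs of array LUNs lets ZFS self-heal bad blocks; a
    # single 17TB LUN leaves it able to report corruption but not fix it.
    zpool create tank mirror c4t0d0 c4t1d0 mirror c4t2d0 c4t3d0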

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Roger Solano wrote: Hello, Does it make any sense to use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks? For example: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs.
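Mechanically, attaching the SAS drives as L2ARC is a one-liner once the pool exists; a hedged sketch with illustrative names:

    # Add two 146 GB 15K SAS drives as cache devices; ARC misses can then
    # be served from fast SAS instead of the 7200 RPM SATA tray.
    zpool add tank cache c3t0d0 c3t1d0

    # Cache devices appear under their own 'cache' heading:
    zpool status tank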

Re: [zfs-discuss] SAS 15K drives as L2ARC

2009-05-06 Thread Scott Lawson
Bob Friesenhahn wrote: On Thu, 7 May 2009, Scott Lawson wrote: A STK2540 storage array with this configuration: * Tray 1: Twelve (12) 146 GB @ 15K SAS HDDs. * Tray 2: Twelve (12) 1 TB @ 7200 SATA HDDs. Just thought I would point out that these are hardware backed RAID arrays. You

Re: [zfs-discuss] ZFS Path = ???

2009-05-19 Thread Scott Lawson

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-24 Thread Scott Lawson
replacing a disk. HTH, Thomas

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
Haudy Kazemi wrote: Hello, I've looked around Google and the zfs-discuss archives but have not been able to find a good answer to this question (and the related questions that follow it): How well does ZFS handle unexpected power failures? (e.g. environmental power failures, power supply

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
Monish Shah wrote: A related question: If you are on a UPS, is it OK to disable ZIL? I think the answer to this is no. UPSs do fail. If you have two redundant units, the answer *might* be maybe. But prudence says *no*. I have seen numerous UPS failures over the years, cascading UPS failures
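For context, "disabling the ZIL" on Solaris 10 of this era meant a global tunable, which is part of why the advice above is so cautious; a hedged sketch:

    # /etc/system -- disables the intent log for EVERY pool on the host
    # and needs a reboot; applications silently lose synchronous write
    # guarantees on power loss or panic:
    set zfs:zil_disable = 1

Later ZFS releases replaced this with a per-dataset 'zfs set sync=disabled', which is just as dangerous for the data it covers.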

Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Scott Lawson
David Magda wrote: On Jun 30, 2009, at 14:08, Bob Friesenhahn wrote: I have seen UPSs help quite a lot for short glitches lasting seconds, or a minute. Otherwise the outage is usually longer than the UPSs can stay up since the problem required human attention. A standby generator is neede

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-12 Thread Scott Lawson
Bob, Output of my run for you. System is an M3000 with 16 GB RAM and 1 zpool called test1, which is contained on a RAID 1 volume on a 6140 with 7.50.13.10 firmware on the RAID controllers. The RAID 1 is made up of two 146GB 15K FC disks. This machine is brand new with a clean install of S10 05/09. I

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
) 'cpio -C 131072 -o > /dev/null' 48000256 blocks real 3m25.13s user 0m2.67s sys 0m28.40s Doing second 'cpio -C 131072 -o > /dev/null' 48000256 blocks real 8m53.05s user 0m2.69s sys 0m32.83s Feel free to clean up with 'zfs destroy test1/zfscachet
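The figures above come from reading the same file set through cpio twice, once cold off disk and once when it should already be in the ARC; a second pass slower than the first is the anomaly being chased. A minimal sketch of that pattern (the actual test script differs; the dataset name is taken from the snippet):

    # First pass: cold read from disk. Second pass: ideally from the ARC.
    cd /test1/zfscachetest
    time find . -type f | cpio -C 131072 -o > /dev/null
    time find . -type f | cpio -C 131072 -o > /dev/null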

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
Bob Friesenhahn wrote: On Wed, 15 Jul 2009, Scott Lawson wrote:
  NAME    STATE  READ WRITE CKSUM
  test1   ONLINE    0     0     0
   mirror ONLINE    0     0

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Scott Lawson
second 'cpio -C 131072 -o > /dev/null' 48000256 blocks real 1m59.11s user 0m9.93s sys 1m49.15s Feel free to clean up with 'zfs destroy nbupool/zfscachetest'. Scott Lawson wrote: Bob, Output of my run for you. System is an M3000 with 16 GB RAM and 1 zpool called test

Re: [zfs-discuss] Migrating a zfs pool to another server

2009-07-21 Thread Scott Lawson

[zfs-discuss] L2ARC support in Solaris 10 (Update 8?)

2009-07-22 Thread Scott Lawson
support are invited from the list. Thanks, Scott.

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-28 Thread Scott Lawson
% ONLINE - nbupool 40.8T 34.4T 6.37T 84% ONLINE - [r...@solnbu1 /]#>
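The gap between the two figures is most likely the usual raidz accounting difference (an assumption from the numbers, not a confirmed reading of the thread): zpool list reports raw capacity including parity, while zfs list deducts it.

    # Pool-level view counts parity as capacity:
    zpool list nbupool     # SIZE 40.8T, matching the output above

    # Dataset-level view shows writable space after raidz parity,
    # hence the smaller 'avail' figure:
    zfs list nbupool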

Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-29 Thread Scott Lawson
work core routers and needless to say achieves very high throughput. I have seen it pushing the full capacity of the SAS link to the J4500 quite commonly. This is probably the choke point for this system. /Scott

Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40

2009-08-01 Thread Scott Lawson
Dave Stubbs wrote: I don't mean to be offensive Russel, but if you do ever return to ZFS, please promise me that you will never, ever, EVER run it virtualized on top of NTFS (a.k.a. worst file system ever) in a production environment. Microsoft Windows is a horribly unreliable operating system

Re: [zfs-discuss] grow zpool by replacing disks

2009-08-03 Thread Scott Lawson
Tobias Exner wrote: Hi list, some months ago I spoke with a ZFS expert at a Sun Storage event. He said it's possible to grow a zpool by replacing every single disk with a larger one. After replacing and resilvering all disks of the pool, ZFS will provide the new size automatically. Now
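In practice the replace-every-disk approach looks like the sketch below (hedged; pool and device names are illustrative). On older releases the extra space shows up after the final resilver, sometimes needing an export/import; later releases gate it behind the autoexpand property:

    # Repeat for each disk in the pool, one at a time:
    zpool replace tank c1t0d0 c2t0d0
    zpool status tank              # wait for the resilver to complete

    # Once every member is the larger size:
    zpool set autoexpand=on tank   # on releases that have the property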

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Scott Lawson

Re: [zfs-discuss] zfs fragmentation

2009-08-12 Thread Scott Lawson
n than you have concurrent streams. This avoids having one save set that finishes long after all the others because of poorly balanced save sets. Couldn't agree more, Mike. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] problem with zfs

2009-08-26 Thread Scott Lawson
erved. Use is subject to license terms. Assembled 27 October 2008

Re: [zfs-discuss] problem with zfs

2009-08-26 Thread Scott Lawson
serge goyette wrote: actually i did apply the latest recommended patches Recommended patches and upgrade clusters are different, by the way: 10_Recommended != Upgrade Cluster. An upgrade cluster will upgrade the system to effectively the Solaris release that the upgrade cluster is minu

Re: [zfs-discuss] How to find poor performing disks

2009-08-26 Thread Scott Lawson
Also you may wish to look at the output of 'iostat -xnce 1' as well. You can post those to the list if you have a specific problem. You want to be looking for error counts increasing and specifically 'asvc_t' for the service times on the disks. A higher number for asvc_t may help to isolate poo
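A hedged example of putting that advice to work (the flags are exactly those from the note above):

    # Extended device stats with names, CPU, and error counters, every second:
    iostat -xnce 1

    # Watch asvc_t (active service time, in ms): one disk consistently far
    # above its peers in the same vdev is a likely culprit, as are rising
    # values in the error columns.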

Re: [zfs-discuss] ZFS & HW RAID

2009-09-18 Thread Scott Lawson

Re: [zfs-discuss] ZFS & HW RAID

2009-09-18 Thread Scott Lawson
Bob Friesenhahn wrote: On Fri, 18 Sep 2009, David Magda wrote: If you care to keep your pool up and alive as much as possible, then mirroring across SAN devices is recommended. One suggestion I heard was to get a LUN that's twice the size, and set "copies=2". This way you have some redund
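Setting copies=2 is an ordinary dataset property; a hedged sketch with illustrative names (it only protects data written after it is set, and it is no substitute for mirroring across arrays):

    # Keep two copies of every block on the double-sized LUN:
    zfs set copies=2 tank/data

    # Verify:
    zfs get copies tank/data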