Re: [zfs-discuss] Using multiple logs on single SSD devices

2010-08-03 Thread Jonathan Loran
On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jonathan Loran > Because you're at pool v15, it does not matter if the log device fails while you
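A quick sanity check before relying on any particular slog-failure behavior is to confirm the pool version and the state of the log vdev; a minimal sketch, with "tank" as a placeholder pool name:

    zpool get version tank     # on-disk version of this pool
    zpool upgrade -v           # what each version adds (log-device handling changed over time)
    zpool status -v tank       # the "logs" section shows the slog and its state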

[zfs-discuss] Using multiple logs on single SSD devices

2010-08-02 Thread Jonathan Loran
Will the GUID for each pool get found by the system from the partitioned log drives? Please give me your sage advice. Really appreciate it. Jon
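For reference, the setup under discussion would look roughly like the sketch below; the pool names and slices are hypothetical, and the SSD would be sliced with format(1M) beforehand. On import, ZFS reads the labels (and pool GUIDs) off the slices like any other vdev:

    zpool add pool1 log c2t0d0s0     # slice 0 of the shared SSD -> pool1's slog
    zpool add pool2 log c2t0d0s1     # slice 1 -> pool2's slog
    zpool import                     # scans devices, lists importable pools found by GUID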

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
he zfs layer, and also do backups. Unfortunately for me, penny pinching has precluded both for us until now. Jon On Jun 1, 2009, at 4:19 PM, A Darren Dunham wrote: On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote: Kinda scary then. Better make sure we delete all the bad fil

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
On Jun 1, 2009, at 2:41 PM, Paul Choi wrote: "zpool clear" just clears the list of errors (and # of checksum errors) from its stats. It does not modify the filesystem in any manner. You run "zpool clear" to make the zpool forget that it ever had any issues. -Paul
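In practice the sequence usually looks like this (pool name is a placeholder): status to see what is damaged, clear to reset the counters, and a scrub to check whether the errors were transient:

    zpool status -v tank    # lists files affected by permanent errors
    zpool clear tank        # forgets the error counts; fixes nothing
    zpool scrub tank        # re-reads everything; persistent damage reappears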

[zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Jonathan Loran
files intact? I'm going to perform a full backup of this guy (not so easy on my budget), and I would rather only get the good files. Thanks, Jon

Re: [zfs-discuss] Inexpensive ZFS home server

2008-11-12 Thread Jonathan Loran
the system board for this machine would make use of ECC memory either, which is not good from a ZFS perspective. How many SATA plugs are there on the MB in this guy? Jon

Re: [zfs-discuss] [storage-discuss] ZFS Success Stories

2008-10-20 Thread Jonathan Loran
tools, resilience of the platform, etc.).. > .. Of course though, I guess a lot of people who may have never had a problem wouldn't even be signed up on this list! :-) > Thanks!

Re: [zfs-discuss] ZFS poor performance on Areca 1231ML

2008-09-26 Thread Jonathan Loran
two vdevs out of two raidz to see if you get twice the throughput, more or less. I'll bet the answer is yes. Jon
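A hypothetical layout for that test, with made-up device names: one pool built from two raidz top-level vdevs, so writes stripe across both, then watch per-vdev throughput while the benchmark runs:

    zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
    zpool iostat -v tank 5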

Re: [zfs-discuss] pulling disks was: ZFS hangs/freezes after disk failure,

2008-08-28 Thread Jonathan Loran
value of a failure in one year: Fe = 46% failures/month * 12 months = 5.52 failures Jon

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jonathan Loran
Jorgen Lundman wrote: > # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 "vendor 0x11ab device 0x6081" > pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081 > Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller > > But it claims resolved for our version:

Re: [zfs-discuss] The best motherboard for a home ZFS fileserver

2008-07-31 Thread Jonathan Loran
Miles Nordin wrote: >> "s" == Steve <[EMAIL PROTECTED]> writes: >> > > s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354 > > no ECC: > > http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets > This MB will take these: http://www.inte

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-30 Thread Jonathan Loran
e best position to monitor the device. > > > > The primary goal of ZFS is to be able to correctly read data which was > > successfully committed to disk. There are programming interfaces > > (e.g. fsync(), msync()) which may be used to en

Re: [zfs-discuss] Supermicro AOC-SAT2-MV8 hang when drive removed

2008-07-29 Thread Jonathan Loran
it be possible to have a number of possible places to store this > log? What I'm thinking is that if the system drive is unavailable, > ZFS could try each pool in turn and attempt to store the log there. > > In fact e-mail alerts or external error logging would be a great > addition to ZFS. Surely it makes sense that filesy

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
based upon block reference count. If a block has few references, it should expire first, and vice versa: blocks with many references should be the last out. With all the savings on disks, think how much RAM you could buy ;) Jon

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
> Check out the following blog: http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that will dump those checksums? Jon
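For what it's worth, the kind of estimate being asked about can be roughed out from zdb output, assuming the zdb on that build prints block pointers with cksum= fields at high verbosity (the exact flags and output format vary by release, so treat this purely as a sketch):

    zdb -ddddd tank | /usr/sfw/bin/ggrep -o 'cksum=[0-9a-f:]*' \
        | sort | uniq -c | sort -rn | head     # repeated checksums ~= dedupable blocks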

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Jonathan Loran
e willing to run it and provide feedback. :) > -Tim > Me too. Our data profile is just like Tim's: terabytes of satellite data. I'm going to guess that the d11p ratio won't be fantastic for us. I sure would like

Re: [zfs-discuss] ZFS deduplication

2008-07-07 Thread Jonathan Loran
hardware and software, but they are all steep on the ROI curve. I would be very excited to see block level ZFS deduplication roll out. Especially since we already have the infrastructure in place using Solaris/ZFS. Cheers, Jon

Re: [zfs-discuss] Cannot delete errored file

2008-06-13 Thread Jonathan Loran
ions. > Ben, Haven't read this whole thread, and this has been brought up before, but make sure your power supply is running clean. I can't tell you how many times I've seen very strange and intermittent system errors occur from a

Re: [zfs-discuss] Inconsistencies with scrub and zdb

2008-05-05 Thread Jonathan Loran
Jonathan Loran wrote: > Since no one has responded to my thread, I have a question: Is zdb > suitable to run on a live pool? Or should it only be run on an exported > or destroyed pool? In fact, I see that it has been asked before on this > forum, but is there a users

Re: [zfs-discuss] Inconsistencies with scrub and zdb

2008-05-05 Thread Jonathan Loran
-- Jonathan Loran, IT Manager, Space Sciences Laboratory, UC Berkeley, (510) 643-5146

[zfs-discuss] Inconsistencies with scrub and zdb

2008-05-04 Thread Jonathan Loran
Hi List, First of all: S10u4 120011-14 So I have the weird situation. Earlier this week, I finally mirrored up two iSCSI based pools. I had been wanting to do this for some time, because the availability of the data in these pools is important. One pool mirrored just fine, but the other po
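For context, adding a mirror to an existing device is just an attach followed by a resilver; device names below are placeholders:

    zpool attach tank c2t1d0 c3t1d0    # existing-device  new-mirror-half
    zpool status tank                  # shows the resilver in progress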

Re: [zfs-discuss] share zfs hierarchy over nfs

2008-04-29 Thread Jonathan Loran
s, which use an indirect map, we just use the Solaris map, thus: auto_home: * zfs-server:/home/& Sorry to be so off (ZFS) topic. Jon
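Spelled out, the pieces are an NFS-shared filesystem per user on the server plus that one wildcard entry in the clients' indirect map; the pool/dataset names here are hypothetical:

    # on the ZFS server
    zfs create -o sharenfs=on home/alice
    # on the clients, /etc/auto_home (indirect map for /home)
    *   zfs-server:/home/&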

Re: [zfs-discuss] ZFS - Implementation Successes and Failures

2008-04-29 Thread Jonathan Loran
Dominic Kay wrote: > Hi > > Firstly apologies for the spam if you got this email via multiple aliases. > > I'm trying to document a number of common scenarios where ZFS is used > as part of the solution such as email server, $homeserver, RDBMS and > so forth but taken from real implementations

Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Jonathan Loran
Bob Friesenhahn wrote: > On Tue, 22 Apr 2008, Jonathan Loran wrote: >>> >> But that's the point. You can't correct silent errors on write once >> media because you can't write the repair. > > Yes, you can correct the error (at time of read) due to

Re: [zfs-discuss] ZFS for write-only media?

2008-04-22 Thread Jonathan Loran
Bob Friesenhahn wrote: >> The "problem" here is that by putting the data away from your machine, you lose the chance to "scrub" it on a regular basis, i.e. there is always the risk of silent corruption. > Running a scrub is pointless since the media is not writeable. :-)

Re: [zfs-discuss] 24-port SATA controller options?

2008-04-15 Thread Jonathan Loran
Luke Scharf wrote: > Maurice Volaski wrote: > >>> Perhaps providing the computations rather than the conclusions would >>> be more persuasive on a technical list ;> >>> >>> >> 2 16-disk SATA arrays in RAID 5 >> 2 16-disk SATA arrays in RAID 6 >> 1 9-disk SATA array in RAID 5. >> >

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-10 Thread Jonathan Loran
Chris Siebenmann wrote: > | What your saying is independent of the iqn id? > > Yes. SCSI objects (including iSCSI ones) respond to specific SCSI > INQUIRY commands with various 'VPD' pages that contain information about > the drive/object, including serial number info. > > Some Googling turns up

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-09 Thread Jonathan Loran
Just to report back to the list... Sorry for the lengthy post. So I've tested the iSCSI based zfs mirror on Sol 10u4, and it does more or less work as expected. If I unplug one side of the mirror - unplug or power down one of the iSCSI targets - I/O to the zpool stops for a while, perhaps a

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Jonathan Loran
Vincent Fox wrote: > Followup, my initiator did eventually panic. > > I will have to do some setup to get a ZVOL from another system to mirror > with, and see what happens when one of them goes away. Will post in a day or > two on that. > > On Sol 10 U4, I could have told you that. A few

Re: [zfs-discuss] [storage-discuss] OpenSolaris ZFS NAS Setup

2008-04-05 Thread Jonathan Loran
kristof wrote: > If you have a mirrored iSCSI zpool, it will NOT panic when 1 of the submirrors is unavailable. > zpool status will hang for some time, but after I think 300 seconds it will put the device on unavailable. > The panic was the default in the past, and it only occurs if all
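On builds that have the failmode pool property (it arrived later than some of the systems discussed in this thread), the wait/continue/panic choice is explicit per pool; a sketch with a placeholder pool name:

    zpool get failmode tank
    zpool set failmode=continue tank   # return EIO rather than blocking or panicking
    zpool status -x                    # quick health check while a path is down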

Re: [zfs-discuss] OpenSolaris ZFS NAS Setup

2008-04-04 Thread Jonathan Loran
> This guy seems to have had lots of fun with iSCSI :) > http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html > > This is scaring the heck out of me. I have a project to create a zpool mirror out of two iSCSI targets, and if the failure of one of them will panic my system, that wil

Re: [zfs-discuss] Backing up ZFS configurations

2008-03-25 Thread Jonathan Loran
Bob Friesenhahn wrote: > On Tue, 25 Mar 2008, Robert Milkowski wrote: >> As I wrote before - it's not only about RAID config - what if you have >> hundreds of file systems, with some share{nfs|iscsi|cifs) enabled with >> specific parameters, then specific file system options, etc. > > Some zfs-re
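One low-tech way to capture that configuration next to the data backups is simply to dump the properties and pool history; pool name and output paths are placeholders:

    zpool get all tank      > /backup/tank.pool.props
    zfs get -rH all tank    > /backup/tank.dataset.props   # every dataset, script-friendly
    zpool history tank      > /backup/tank.history         # the commands that built the pool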

Re: [zfs-discuss] zfs backups to tape

2008-03-14 Thread Jonathan Loran
Robert Milkowski wrote: Hello Jonathan, Friday, March 14, 2008, 9:48:47 PM, you wrote: > Carson Gaspar wrote: Bob Friesenhahn wrote: On Fri, 14 Mar 2008, Bill Shannon wrote: What's the best way to backup a zfs filesystem to tape, where the size of the filesystem is la
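For the simple single-tape case the mechanics are just a snapshot and a send piped to the tape device (names are placeholders); spanning a stream across multiple tapes still needs a tape-aware tool in the middle:

    zfs snapshot tank/fs@backup
    zfs send tank/fs@backup | dd of=/dev/rmt/0n bs=1024k
    # restore:
    dd if=/dev/rmt/0n bs=1024k | zfs receive tank/fs-restored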

Re: [zfs-discuss] zfs backups to tape

2008-03-14 Thread Jonathan Loran
's choice of NFS v4 ACLs. This is the only way to ensure CIFS compatibility, and it is the way the industry will be moving. Jon

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-04 Thread Jonathan Loran
Patrick Bachmann wrote: Jonathan, On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote: I'm not sure I follow how this would work. The keyword here is thin provisioning. The sparse zvol only uses as much space as the actual data needs. So, if you use a sparse
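A minimal sketch of that thin-provisioning trick, with made-up pool, zvol, and size: create a sparse zvol the same nominal size as the disk being replaced and attach it as the mirror half; it only consumes what the data actually occupies:

    zfs create -s -V 2T bigpool/mirrorvol              # -s = sparse, no reservation
    zpool attach tank c2t0d0 /dev/zvol/dsk/bigpool/mirrorvol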

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-04 Thread Jonathan Loran
Patrick Bachmann wrote: > Jonathan, > > On Mon, Mar 03, 2008 at 11:14:14AM -0800, Jonathan Loran wrote: > >> What I'm left with now is to do more expensive modifications to the new >> mirror to increase its size, or using zfs send | receive or rsync to >>

Re: [zfs-discuss] Mirroring to a smaller disk

2008-03-03 Thread Jonathan Loran
Shawn Ferry wrote: On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote: Now I know this is counterculture, but it's biting me in the back side right now, and ruining my life. I have a storage array (iSCSI SAN) that is performing badly, and requires some upgrades/reconfiguration. I h

[zfs-discuss] Mirroring to a smaller disk

2008-03-03 Thread Jonathan Loran
with Solaris instead on the SAN box? It's just commodity x86 server hardware. My life is ruined by too many choices, and not enough time to evaluate everything. Jon

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Roch Bourbonnais wrote: > Le 28 févr. 08 à 21:00, Jonathan Loran a écrit : >> Roch Bourbonnais wrote: >>> Le 28 févr. 08 à 20:14, Jonathan Loran a écrit : >>>> Quick question:

Re: [zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Roch Bourbonnais wrote: > Le 28 févr. 08 à 20:14, Jonathan Loran a écrit : >> Quick question: >> If I create a ZFS mirrored pool, will the read performance get a boost? >> In other words, will the data/parity be read round robin between the

[zfs-discuss] Does a mirror increase read performance

2008-02-28 Thread Jonathan Loran
Quick question: If I create a ZFS mirrored pool, will the read performance get a boost? In other words, will the data/parity be read round robin between the disks, or do both mirrored sets of data and parity get read off of both disks? The latter case would have a CPU expense, so I would thi
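An easy way to answer this empirically is to watch per-device statistics during a read-heavy workload; on a two-way mirror the reads should be spread across both sides while writes hit both ("tank" is a placeholder):

    zpool iostat -v tank 5    # per-disk read/write ops and bandwidth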

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-25 Thread Jonathan Loran
David Magda wrote: > On Feb 24, 2008, at 01:49, Jonathan Loran wrote: >> In some circles, CDP is big business. It would be a great ZFS offering. > ZFS doesn't have it built-in, but AVS may be an option in some cases: > http://opensolaris.org/os/project/avs

Re: [zfs-discuss] Can ZFS be event-driven or not?

2008-02-23 Thread Jonathan Loran
Uwe Dippel wrote: > "google found that solaris does have file change notification: http://blogs.sun.com/praks/entry/file_events_notification" > Didn't see that one, thanks. > "Would that do the job?" > It is not supposed to do a job, thanks :), it is for a presentation at a

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
[EMAIL PROTECTED] wrote: On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote: Thanks for any help anyone can offer. I have faced similar problem (although not exactly the same) and was going to monitor disk queue with dtrace but couldn't find any docs/urls abo

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
up for the VFS layer. > I'd also check syscall latencies - it might be too obvious, but it can be worth checking (eg, if you discover those long latencies are only on the open syscall)... > Brendan

Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread Jonathan Loran
Marion Hakanson wrote: [EMAIL PROTECTED] said: It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP. Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart more on random I/O. The s

Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Jonathan Loran
Marion Hakanson wrote: [EMAIL PROTECTED] said: ... I know, I know, I should have gone with a JBOD setup, but it's too late for that in this iteration of this server. When we set this up, I had the gear already, and it's not in my budget to get new stuff right now. What kind of arra

[zfs-discuss] Which DTrace provider to use

2008-02-12 Thread Jonathan Loran
Hi List, I'm wondering if one of you expert DTrace gurus can help me. I want to write a DTrace script to print out a histogram of how long IO requests sit in the service queue. I can output the results with the quantize method. I'm not sure which provider I should be using for this. Doe
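One common starting point (a sketch, not a tuned script) is the io provider: stamp each I/O at io:::start and quantize the elapsed time at io:::done, which captures queueing plus service time per device:

    dtrace -n '
        io:::start { start[arg0] = timestamp; }
        io:::done /start[arg0]/ {
            @lat[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
            start[arg0] = 0;
        }'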

Re: [zfs-discuss] OpenSolaris, ZFS and Hardware RAID,

2008-02-10 Thread Jonathan Loran
Anton B. Rang wrote: Careful here. If your workload is unpredictable, RAID 6 (and RAID 5) for that matter will break down under highly randomized write loads. Oh? What precisely do you mean by "break down"? RAID 5's write performance is well-understood and it's used successfully in

Re: [zfs-discuss] OpenSolaris, ZFS and Hardware RAID, a recipe for success?

2008-02-09 Thread Jonathan Loran
Richard Elling wrote: Nick wrote: Using the RAID cards capability for RAID6 sounds attractive? Assuming the card works well with Solaris, this sounds like a reasonable solution. Careful here. If your workload is unpredictable, RAID 6 (and RAID 5) for that matter wil

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-02-02 Thread Jonathan Loran
The irony is that the requirement for this very stability is why we haven't seen the features in the ZFS code we need in Solaris 10. Thanks, Jon Mike Gerdts wrote: On Jan 30, 2008 2:27 PM, Jonathan Loran <[EMAIL PROTECTED]> wrote: Before ranting any more, I'll do the test of disablin

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-31 Thread Jonathan Loran
o using fast SSD for the ZIL when it comes to Solaris 10 U? as a preferred method. Jon

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-30 Thread Jonathan Loran

Re: [zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-30 Thread Jonathan Loran
Neil Perrin wrote: > Roch - PAE wrote: >> Jonathan Loran writes: >>> Is it true that Solaris 10 u4 does not have any of the nice ZIL controls that exist in the various recent Open Solaris flavors? I would like to move my ZIL t

[zfs-discuss] ZIL controls in Solaris 10 U4?

2008-01-29 Thread Jonathan Loran
ZIL off to see how my NFS on ZFS performance is affected before spending the $'s. Anyone know when we will see this in Solaris 10? Thanks, Jon
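For the record, the blunt way to run that test on S10U4 was the old zil_disable tunable; it is only safe as a benchmark (synchronous semantics are lost for every pool on the box), and filesystems must be remounted for it to take effect. Sketch only:

    # append to /etc/system and reboot (benchmarking only):
    #   set zfs:zil_disable = 1
    # or flip it live with mdb, then remount the filesystems under test:
    echo zil_disable/W0t1 | mdb -kw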

Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Jonathan Loran
worse yet, run windoz in a VM. Hardly practical. Why is it we always have to be second class citizens! Power to the (*x) people! Jon

Re: [zfs-discuss] hardware for zfs home storage

2008-01-14 Thread Jonathan Loran

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2008-01-03 Thread Jonathan Loran
Joerg Schilling wrote: Carsten Bormann <[EMAIL PROTECTED]> wrote: On Dec 29 2007, at 08:33, Jonathan Loran wrote: We snapshot the file as it exists at the time of the mv in the old file system until all referring file handles are closed, then destroy the single file snap.

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-30 Thread Jonathan Loran
deal with the semantics. It's not just a path change as in a directory mv. Jon

Re: [zfs-discuss] rename(2) (mv(1)) between ZFS filesystems in the same zpool

2007-12-28 Thread Jonathan Loran
duced. Moving large file stores between zfs file systems would be so handy! From my own sloppiness, I've suffered dearly from the lack of it. Jon

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-18 Thread Jonathan Loran
Jonathan Loran wrote: Gary Mills wrote: On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote: This is the same configuration we use on 4 separate servers (T2000, two X4100, and a V215). We do use a different iSCSI solution, but we have the same multi path config setup with

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-18 Thread Jonathan Loran
Gary Mills wrote: On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote: This is the same configuration we use on 4 separate servers (T2000, two X4100, and a V215). We do use a different iSCSI solution, but we have the same multi path config setup with scsi_vhci. Dual GigE

Re: [zfs-discuss] Is round-robin I/O correct for ZFS?

2007-12-14 Thread Jonathan Loran
of the iSCSI Ethernet interfaces. It certainly appears > to be doing round-robin. The I/O are going to the same disk devices, > of course, but by two different paths. Is this a correct configuration > for ZFS? I assume it's safe, but I thought I should check.
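To see what scsi_vhci is actually doing with the two paths, the stock multipath tools are enough; the logical-unit name below is a made-up example:

    mpathadm list lu                       # logical units and their path counts
    mpathadm show lu /dev/rdsk/c4t0d0s2    # per-path state and the load-balance policy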

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
Richard Elling wrote: > Jonathan Loran wrote: ... > Do not assume that a compressed file system will send compressed. > IIRC, it > does not. Let's say, if it were possible to detect the remote compression support, couldn't we send it compressed? With higher compression
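Until the stream itself can be sent compressed, the usual workaround is to compress in the pipe; host and dataset names here are hypothetical:

    zfs send tank/data@snap | gzip -c | ssh backuphost 'gunzip -c | zfs receive backup/data'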

Re: [zfs-discuss] HAMMER

2007-10-17 Thread Jonathan Loran
http://milek.blogspot.com

Re: [zfs-discuss] HAMMER

2007-10-16 Thread Jonathan Loran
-- Jonathan Loran, IT Manager, Space Sciences Laboratory, UC Berkeley

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-05 Thread Jonathan Loran
Nicolas Williams wrote: On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote: I can envision a highly optimized, pipelined system, where writes and reads pass through checksum, compression, encryption ASICs, that also locate data properly on disk. ... I've argued b

Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-04 Thread Jonathan Loran
writes enough to make a difference? Possibly not. Anton

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-24 Thread Jonathan Loran
Paul B. Henson wrote: On Sat, 22 Sep 2007, Jonathan Loran wrote: My gut tells me that you won't have much trouble mounting 50K file systems with ZFS. But who knows until you try. My questions for you is can you lab this out? Yeah, after this research phase has been comp

Re: [zfs-discuss] enterprise scale redundant Solaris 10/ZFS server providing NFSv4/CIFS

2007-09-22 Thread Jonathan Loran
problem of worrying about where a user's files are when they want to access them :(.

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-14 Thread Jonathan Loran
C-SAT2-MV8.cfm) for about $100 each >> Good luck, > Getting there - can anybody clue me into how much CPU/Mem ZFS needs? I have an old 1.2 GHz box with 1 GB of mem laying around - would it be sufficient? > Thanks! > Kent

Re: [zfs-discuss] hardware sizing for a zfs-based system?

2007-09-13 Thread Jonathan Loran
Thanks! Kent

[zfs-discuss] Move data from the zpool (root) to a zfs file system

2007-04-13 Thread Jonathan Loran
be very much appreciated. Thanks, Jon
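Since the data sits in the pool's top-level dataset, the usual answer is to create a child filesystem and copy into it (there is no in-place way to push the root dataset's contents down); names are placeholders, and cpio or tar works where rsync isn't installed:

    zfs create tank/data
    rsync -a /tank/projects/ /tank/data/projects/    # verify, then remove the originals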