Re: [zfs-discuss] ZFS where to go!

2010-07-26 Thread James
I might be mistaken, but it looks like 3ware does have a driver, several in fact: http://www.3ware.com/support/downloadpageprod.asp?pcode=9&path=Escalade9500SSeries&prodname=3ware%209500S%20Series Any comment on this? I'm thinking about picking up a server with this card, and it would be cool

Re: [zfs-discuss] Random system hang build 134

2010-07-26 Thread James
I've seen similar issues. However, it appears most of my problems stem from ZFS. I'd do something ZFS doesn't like and then I'd have to power cycle the server to get it back. I actually wrote a large post about my experiences with b134 and ZFS: http://opensolaris.org/jive/thread.jspa?message

Re: [zfs-discuss] corrupt pool?

2010-07-26 Thread James
I have been working on the same problem now for almost 48 straight hours. I have managed to recover some of my data using: zpool import -f pool. The command never completes, but you can do a zpool list and a zpool status and you will see the pool. Then you do zfs list and the file systems sho
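A minimal sketch of that recovery sequence, for anyone following along (the pool name "tank" is a placeholder; -f forces import of a pool that was not cleanly exported):

zpool import -f tank &     # may never return on a damaged pool, so background it
zpool list                 # the pool should appear here once the import starts
zpool status tank          # check vdev state and any reported errors
zfs list -r tank           # the filesystems should show up from here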

[zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-26 Thread James
all. James * For NTFS 4kB clusters on VMWare / NFS, I believe 4kB zfs recordsize will provide best performance (avoid partial writes). Thoughts welcome on that too. ** Assumes 10k SAS can do max 900 sequential writes each striped across 12 mirrors and rounded down (900 based on TomsHardware

Re: [zfs-discuss] Lower latency ZIL Option?: SSD behind Controller BB Write Cache

2011-01-27 Thread James
Chris & Eff, Thanks for your expertise on this and other posts. Greatly appreciated. I've just been re-reading some of the great SSD-as-ZIL discussions. Chris, Cost: Our case is a bit non-representative as we have spare P410/512's that came with ESXi hosts (USB boot) so I've budgeted them at

[zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-01-31 Thread James
G'day All. I’m trying to select the appropriate disk spindle speed for a proposal and would welcome any experience and opinions (e.g. has anyone actively chosen 10k/15k drives for a new ZFS build and, if so, why?). This is for ZFS over NFS for VMWare storage, i.e. primarily random 4kB read/

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread James
Edward, Thanks for the reply. Good point on platter density. I'd considered the benefit of lower fragmentation but not the possible increase in sequential iops due to density. I assume that while a 2TB 7200rpm drive may have better sequential IOPS than a 500GB, it will not be double and therefor

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread James
Thanks Richard & Edward for the additional contributions. I had assumed that "maximum sequential transfer rates" on datasheets (btw, those are the same for differing-capacity Seagates) were based on large block sizes and that a ZFS 4kB recordsize* would mean much lower IOPS. e.g. Seagate Constell

Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-10-18 Thread James
sed on Nevada build 99) and still no luck. Please advise, James

[zfs-discuss] Problem importing pool

2009-08-06 Thread James
Hello, I am having a problem importing a pool in 2009.06 that was created on zfs-fuse (ubuntu 8.10). Basically, I was having issues with a controller, and took a disk offline. After restarting with a new controller, I was unable to import the pool (in ubuntu). Someone had suggested that I try

[zfs-discuss] Best option for my home file server?

2007-08-18 Thread James
Hi there, I think I have managed to confuse myself so I am asking outright hoping for a straight answer. First, my situation. I have several disks of varying sizes I would like to run as redundant storage in a file server at home. Performance is not my number one priority, largest capacity pos

Re: [zfs-discuss] Fishworks 2010Q1 and dedup bug?

2010-03-05 Thread James Dickens
ed) exceeds memory, your performance degrades exponentially, probably before that. James Dickens http://uadmin.blogspot.com > I.e., I am not using any snapshots and have also turned off automatic > snapshots because I was bitten by system hangs while destroying datasets > with living s

Re: [zfs-discuss] why L2ARC device is used to store files ?

2010-03-05 Thread James Dickens
please post the output of zpool status -v. Thanks James Dickens On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi wrote: > Greeting All > > I have created a pool that consists of a hard disk and an ssd as a cache > > zpool create hdd c11t0d0p3 > zpool add hdd cache c8
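For reference, the full shape of that setup as a sketch (the cache device name is cut off above, so c8t0d0p2 below is hypothetical):

zpool create hdd c11t0d0p3        # data vdev, as in the thread
zpool add hdd cache c8t0d0p2      # add an SSD partition as L2ARC (hypothetical device)
zpool status -v hdd               # the device should be listed under a "cache" heading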

Re: [zfs-discuss] why L2ARC device is used to store files ?

2010-03-06 Thread James Dickens
ly what to keep and what to throw away. James Dickens http://uadmin.blogspot.com On Sat, Mar 6, 2010 at 2:15 AM, Abdullah Al-Dahlawi wrote: > hi James > > > here is the output you've requested > > abdul...@hp_hdx_16:~/Downloads# zpool status -v > pool: hdd >

Re: [zfs-discuss] Adaptec AAC driver

2010-03-29 Thread James Lee
I'm evaluating a system with an Adaptec 52445 RAID HBA, and >> the driver supplied by OpenSolaris doesn't support JBOD drives. FYI, there is a bug report open for this issue: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=686

Re: [zfs-discuss] Diagnosing Permanent Errors

2010-04-06 Thread James C. McPherson
On 6/04/10 11:47 PM, Willard Korfhage wrote: Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't show any serial numbers for the disks attached to the Areca raid card. You'll need to reboot and go into the card BIOS to get that information. James C. McPherson

Re: [zfs-discuss] Areca ARC-1680 on OpenSolaris 2009.06?

2010-04-10 Thread James C. McPherson
ted since September 15th, 2008 (build 99). What do you mean by overpromised and underdelivered? James C. McPherson -- Senior Software Engineer, Solaris Oracle http://www.jmcp.homeunix.com/blog

[zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-12 Thread J James
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a failed drive. zpool status shows that the pool is in a DEGRADED state. I want syslog to log these types of ZFS errors. I have syslog running and logging all sorts of errors to a log server. But this failed disk in the ZFS pool
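ZFS failures are reported through FMA rather than logged directly, so a sketch of where to look (commands are illustrative, not from the original post):

fmdump -ev       # list error events the fault manager has recorded
fmadm faulty     # show diagnosed faults, e.g. the DEGRADED mirror
fmadm config     # verify the syslog-msgs agent is loaded; it is what
                 # forwards diagnosed faults to syslogd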

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-13 Thread J James
Thanks for the clue. Still not successful, but some hope is there.

Re: [zfs-discuss] How to Catch ZFS error with syslog ?

2010-04-13 Thread J James
some of the error messages are generated only once. Joji James

Re: [zfs-discuss] Which build is the most stable, mainly for NAS (zfs)?

2010-04-15 Thread James C. McPherson
ctual* problem with OpenSolaris binary distro as a base for a NAS system? James C. McPherson -- Senior Software Engineer, Solaris Oracle http://www.jmcp.homeunix.com/blog

[zfs-discuss] Processes hang in /dev/zvol/dsk/poolname

2010-06-28 Thread James Dickens
ol/dsk/rpool/puddle_slog ONLINE 0 0 0

zfs list -rt volume puddle
NAME               USED  AVAIL  REFER  MOUNTPOINT
puddle/l2arc      8.25G   538G  7.20G  -
puddle/log_test   1.25G   537G  1.25G  -
puddle/temp_cache 4.13G   537G  4.00G  -

James Dickens http://uadmin.blogspot.com

Re: [zfs-discuss] Using a zvol from your rpool as zil for another zpool

2010-07-02 Thread James Dickens
On Fri, Jul 2, 2010 at 1:18 AM, Ray Van Dolson wrote: > We have a server with a couple X-25E's and a bunch of larger SATA > disks. > > To save space, we want to install Solaris 10 (our install is only about > 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL > attached to a z

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-19 Thread James Litchfield
20 hours it just took you to fix that machine could have been 2 hours if it had a service contract. Doesn't take too long for that kind of math to blow out any savings whiteboxes may have had. Worst case, someone goes and buys Dell. :-) -- James Litchfield | Senior Consultant Ph

Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-28 Thread James Dickens
+1 On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski wrote: > > fyi > > -- > Robert Milkowski > http://milek.blogspot.com > > > Original Message Subject: zpool import despite missing > log [PSARC/2010/292 Self Review] Date: Mon, 26 Jul 2010 08:38:22 -0600 From: > Tim Haley

Re: [zfs-discuss] snapshot question

2010-07-29 Thread James Dickens
want. You need to clone a filesystem per guest because ZFS can only roll back full filesystems, not individual files. Your VM solution may have finer-grained controls for its own snapshots, but those don't use ZFS's abilities. James Dickens uadmin.blogspot.com

Re: [zfs-discuss] Solaris 10 samba in AD mode broken when user in > 32 AD groups

2009-10-13 Thread James Lever
forced to stick with Samba for our CIFS needs. cheers, James

Re: [zfs-discuss] Stupid to have 2 disk raidz?

2009-10-19 Thread James Dickens
existing data to a new device using functions from the device removal modifications. I could be wrong, but it may not be as far off as people fear. Device removal was mentioned in the "Next Word for ZFS" video. James Dickens http://uadmin.blogspot.com jamesd...@gmail.com > > > -- > Erik Trimble

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread James Lever
evelopment branches - should be available to the public in about 3-3.5 weeks' time. Plenty of instructions on how to do this on the net and in this list. For Solaris, you need to wait for the next update release. cheers, James

Re: [zfs-discuss] zfs inotify?

2009-11-11 Thread James Andrewartha
for gam_server / gamin.
$ nm /usr/lib/gam_server | grep port_create
[458] | 134589544| 0|FUNC |GLOB |0|UNDEF |port_create
The patch for port_create has never gone upstream however, while gvfs uses glib's gio, which has backends for inotify, solari

Re: [zfs-discuss] ZFS ZIL/log on SSD weirdness

2009-11-17 Thread James Lever
Last I checked, iSCSI volumes go direct to the primary storage and not via the slog device. Can anybody confirm whether that is the case, whether there is a mechanism/tuneable to force it via the slog, and whether there is any benefit/point in this for most cases?

[zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-02 Thread James Risner
I have a 9 drive system (four mirrors of two disks and one hot spare) with a 10th SSD drive for ZIL. The ZIL is corrupt. I've been unable to recover using FreeBSD 8, OpenSolaris x86, and using logfix (http://github.com/pjjw/logfix). In FreeBSD 8.0RC3 and below (uses v13 ZFS): 1) Boot Single Use

Re: [zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-03 Thread James Risner
It was created on AMD64 FreeBSD with 8.0RC2 (which was version 13 of ZFS, IIRC). At some point I knocked it out (export) somehow; I don't remember doing so intentionally. So I can't do commands like zpool replace since there are no pools. It says it was last used by the FreeBSD box, but the Fre

Re: [zfs-discuss] freeNAS moves to Linux from FreeBSD

2009-12-09 Thread James Andrewartha
the claims are meaningless. http://mail.opensolaris.org/pipermail/opensolaris-help/2009-November/015824.html -- James Andrewartha

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-09 Thread James Lever
ing on your rpool? (at install time, or if you need to migrate your rpool to new media) cheers, James

[zfs-discuss] Weird ZFS errors

2009-12-13 Thread James Nelson
A majority of the time when the server is rebooted I get this on a zpool:

pool: ipapool
state: FAULTED
status: An intent log record could not be read.
        Waiting for administrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',

Re: [zfs-discuss] slog / log recovery is here!

2009-12-19 Thread James Risner
devzero: when you have an exported pool with no log disk and you want to mount the pool. Here are the changes to make it compile on dev-129:
--- logfix.c.2009-04-26  2009-12-18 11:39:40.917435361 -0800
+++ logfix.c             2009-12-18 12:19:27.507337246 -0800
@@ -20,6 +20,7 @@
#include #include +#i

Re: [zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-19 Thread James Risner
Written by jktorn:
> Have you tried build 128 which includes pool recovery support?
>
> This is because FreeBSD hostname (and hostid?) is recorded in the
> labels along with active pool state.
>
> It does not work that way at the moment, though readonly import is
> quite useful option that can be tried.

Re: [zfs-discuss] raidz data loss stories?

2009-12-21 Thread James Risner
If you are asking whether anyone has experienced two drive failures simultaneously, the answer is yes. It has happened to me (at home) and to one client, at least that I can remember. In both cases, I was able to dd off one of the failed disks (with just bad sectors, or fewer bad sectors) and recons

Re: [zfs-discuss] raidz data loss stories?

2009-12-22 Thread James Risner
ttabbal: If I understand correctly, raidz{1} is 1-drive protection and space is (drives - 1) available. Raidz2 is 2-drive protection and space is (drives - 2), etc. Same for raidz3 being 3-drive protection. From everything I've seen, you should stay around 6-9 drives for raidz, so don't do

Re: [zfs-discuss] Why would some disks in a raidz use partitions and others not?

2009-12-22 Thread James Risner
Can you post a "zpool import -f" for us to see? One thing I ran into recently is that if the drive arrangement was changed (like drives swapped) it can't recover. I moved an 8 drive array recently, and didn't worry about the order of the drives. It could not be mounted without reordering the

Re: [zfs-discuss] Why would some disks in a raidz use partitions and others not?

2009-12-22 Thread James Risner
galenz: "I am on different hardware, thus I cannot restore the drive configuration exactly." Actually, you can learn most of it, if not all of it you need. Do "zpool import -f" with no pool name and it should dump the issue with the pool (what is making it fail.) If that doesn't contain privi

Re: [zfs-discuss] How to destroy your system in funny way with ZFS

2009-12-27 Thread James Lever
re not involved in the boot process though) HTH, James

Re: [zfs-discuss] Troubleshooting dedup performance

2009-12-28 Thread James Dickens
James Dickens On Thu, Dec 24, 2009 at 11:22 PM, Michael Herf wrote: > FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be > running visibly faster (somewhere around 3-5x faster). > > echo zfs_prefetch_

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2009-12-29 Thread James Dickens
not sure of your experience level, but did you try running devfsadm and then checking in format for your new disks? James Dickens uadmin.blogspot.com On Sun, Dec 27, 2009 at 3:59 AM, Muhammed Syyid wrote: > Hi > I just picked up one of these cards and had a few questions > After i
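A sketch of that check (assuming the controller's driver attached; devfsadm rebuilds the /dev links so newly connected disks become visible):

devfsadm -c disk     # (re)create /dev entries for disk devices
echo | format        # non-interactive way to print the disk list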

Re: [zfs-discuss] ZFS upgrade.

2010-01-07 Thread James Lever
t. You probably need a newer version of Solaris, but I cannot tell you if any newer versions support later zfs versions. This forum is for OpenSolaris support. You should contact your Solaris support provider for further help on this matter. cheers,

[zfs-discuss] ZFS Dedup Performance

2010-01-08 Thread James Lee
drives and prstat shows that my processor and memory aren't a bottleneck. What could cause such a marked decrease in throughput? Is anyone else experiencing similar effects? Thanks, James

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread James Carlson
ust be specified by a full path. Could it be that "discouraged" and "experimental" mean "not tested as thoroughly as you might like, and certainly not a good idea in any sort of production environment?" It sounds like a bug, sure, but the fix might be to remove the o

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread James Carlson
Mike Gerdts wrote: > This unsupported feature is supported with the use of Sun Ops Center > 2.5 when a zone is put on a "NAS Storage Library". Ah, ok. I didn't know that. -- James Carlson 42.703N 71.076W

Re: [zfs-discuss] ZFS Dedup Performance

2010-01-08 Thread James Dickens
On Fri, Jan 8, 2010 at 1:44 PM, Ian Collins wrote: > James Lee wrote: > >> I haven't seen much discussion on how deduplication affects performance. >> I've enabled dedup on my 4-disk raidz array and have seen a significant >> drop in write throughput, from

Re: [zfs-discuss] ZFS Dedup Performance

2010-01-08 Thread James Lee
On 01/08/2010 02:42 PM, Lutz Schumann wrote: > See the reads on the pool with the low I/O ? I suspect reading the > DDT causes the writes to slow down. > > See this bug > http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566. > It seems to give some backgrounds. > > Can you test sett

Re: [zfs-discuss] possible to remove a mirror pair from a zpool?

2010-01-10 Thread James Dickens
No, sorry Dennis, this functionality doesn't exist yet. It is being worked on, but will take a while; there are lots of corner cases to handle. James Dickens uadmin.blogspot.com On Sun, Jan 10, 2010 at 3:23 AM, Dennis Clarke wrote: > > Suppose the requirements for storage shrink ( it can hap

Re: [zfs-discuss] Plan for upgrading a ZFS based SAN

2010-02-15 Thread James Dickens
Yes, send and receive will do the job. See the zfs manpage for details. James Dickens http://uadmin.blogspot.com On Mon, Feb 15, 2010 at 11:56 AM, Tiernan OToole wrote: > Good morning all. > > I am in the process of building my V1 SAN for media storage in house, and i > am already thin
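A sketch of the send/receive migration (pool, snapshot, and host names are placeholders; -R replicates descendant filesystems, snapshots, and properties):

zfs snapshot -r mediapool@migrate
zfs send -R mediapool@migrate | ssh newhost zfs receive -Fdu newpool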

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread James Dickens
hot spares to the system should one fail. If you are truly paranoid, a 3-way mirror can be used; then you can lose 2 disks without a loss of data. Spread disks across multiple controllers, and get disks from different companies and different lots to lessen the likelihood of getting hit by a bad batch takin

Re: [zfs-discuss] zvol and zpool

2010-02-25 Thread James Dickens
> /dev/dsk/c0d1s0  9.8G  10M  9.7G  1%  /test
The act of deleting files in UFS simply does a few accounting changes to the filesystem and thus has no effect on the blocks in the ZFS volume, and in some cases could actually make the zvol space grow. The only possible way to have ZFS

[zfs-discuss] Importing old vdev

2011-10-07 Thread James Lee
g, our SAN guys don't really understand ZFS or else I would have made the pool redundant in the first place. Thanks, James
[1] starlight ~ # zdb -l /dev/dsk/c4t0d0s0
LABEL 0
version=22
n

Re: [zfs-discuss] tuning zfs_arc_min

2011-10-10 Thread James Litchfield
The value of zfs_arc_min specified in /etc/system must be over 64MB (0x4000000). Otherwise the setting is ignored. The value is in bytes, not pages. Jim --- On 10/ 6/11 05:19 AM, Frank Van Damme wrote: Hello, quick and stupid question: I'm breaking my head over how to tune zfs_arc_min on a runn
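For example, a 1 GB floor (a sketch; the value must exceed 64MB and only takes effect at the next boot):

* /etc/system entry - ARC minimum, in bytes
set zfs:zfs_arc_min = 0x40000000

After rebooting, the live value can be checked with mdb:

echo zfs_arc_min/E | mdb -k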

Re: [zfs-discuss] Importing old vdev

2011-10-10 Thread James Lee
On 10/07/2011 11:02 AM, James Lee wrote: > Hello, > > I had a pool made from a single LUN, which I'll call c4t0d0 for the > purposes of this email. We replaced it with another LUN, c4t1d0, to > grow the pool size. Now c4t1d0 is hosed and I'd like to see about > rec

[zfs-discuss] Subscribe

2012-05-01 Thread james cypcar
Subscribe -- ORACLE James Cypcar | Solaris and Network Domain, Global Systems Support Oracle Global Customer Services Log, update, and monitor your Service Request online using https://support.oracle.com

Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files

2012-07-03 Thread James Litchfield
inline On 07/02/12 15:00, Nico Williams wrote: On Mon, Jul 2, 2012 at 3:32 PM, Bob Friesenhahn wrote: On Mon, 2 Jul 2012, Iwan Aucamp wrote: I'm interested in some more detail on how the ZFS intent log behaves for updates done via a memory mapped file - i.e. will the ZIL log updates done to an m

Re: [zfs-discuss] Interaction between ZFS intent log and mmap'd files

2012-07-03 Thread James Litchfield
Agreed - msync/munmap is the only guarantee. On 07/ 3/12 08:47 AM, Nico Williams wrote: On Tue, Jul 3, 2012 at 9:48 AM, James Litchfield wrote: On 07/02/12 15:00, Nico Williams wrote: You can't count on any writes to mmap(2)ed files hitting disk until you msync(2) with MS_SYNC. The s

Re: [zfs-discuss] Interesting question about L2ARC

2012-09-11 Thread James H
s where it goes 2-5k reads and I'm seeing 20-80% l2arc hits. These have been running for about a week and, given my understanding of how L2ARC fills, I'd suggest maybe leaving it to warm up longer (e.g. 1-2 weeks?) caveat: I'm a complete newbie to zfs so I could be

[zfs-discuss] Online zpool expansion feature in Solaris 10 9/10

2010-10-12 Thread James Patterson
I’m testing the new online zpool expansion feature of Solaris 10 9/10. My zpool was created using the entire disk (ie. no slice number was used). When I resize my LUN on our SAN (an HP-EVA4400) the EFI label does not change. On the zpool, I have autoexpand=on, and I’ve tried using zpool online

Re: [zfs-discuss] Online zpool expansion feature in Solaris 10 9/10

2010-11-07 Thread James Patterson
it to the zpool to create a mirror, then detach the old smaller device. Then run zpool online -e to actually expand the zpool. James.
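A sketch of that whole sequence (pool and device names are placeholders):

zpool attach tank c1t0d0 c2t0d0    # mirror onto the larger device
                                   # ... wait for the resilver to finish ...
zpool detach tank c1t0d0           # drop the old, smaller device
zpool online -e tank c2t0d0        # expand the vdev to the new device's size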

[zfs-discuss] Pool error in a hex file name

2011-06-13 Thread James Sutherland
Anyone know what this means? After a scrub I apparently have an error in a file name that I don't understand:

zpool status -v pumbaa1
  pool: pumbaa1
 state: ONLINE
status: One or more devices has experienced an error resulting in
        data corruption. Applications may be

Re: [zfs-discuss] Pool error in a hex file name

2011-06-13 Thread James Sutherland
A reboot and then another scrub fixed this. The reboot alone made no difference, but after the reboot I started another scrub and now the pool shows clean. So the sequence was like this:
1. zpool reported ioerrors after a scrub with an error on a file in a snapshot
2. destroyed the snapshot with the err
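When zpool status -v shows a hex object number instead of a path, the file generally lives in a dataset or snapshot that can no longer resolve it. If the dataset still exists, zdb can dump the object (a sketch; the dataset name and object number are hypothetical, and zdb wants the object number in decimal):

zdb -dddd pumbaa1/somefs 11338     # dump metadata for object 11338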

[zfs-discuss] copying a ZFS

2008-07-20 Thread James Mauro
Is there an optimal method of making a complete copy of a ZFS, aside from the conventional methods (tar, cpio)? We have an existing ZFS that was not created with the optimal recordsize. We wish to create a new ZFS with the optimal recordsize (8k), and copy all the data from the existing ZFS to th
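For what it's worth, zfs send/receive makes a complete copy, but as far as I know a received stream preserves the source's block sizes, so changing recordsize still needs a file-level copy into a filesystem created with the new value (a sketch; names are placeholders):

zfs create -o recordsize=8k tank/new    # destination with the desired recordsize
rsync -a /tank/old/ /tank/new/          # file-level copy; new files get 8k records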

[zfs-discuss] zfs status -v tries too hard?

2008-08-06 Thread James Litchfield
After some errors were logged as to a problem with a ZFS file system, I ran zpool status followed by zpool status -v...

# zpool status
  pool: ehome
 state: ONLINE
status: One or more devices has experienced an error resulting in
        data corruption. Applications may be affected.
action: Restore

Re: [zfs-discuss] [install-discuss] Will OpenSolaris and Nevada co-exist in peace on the same root zpool

2008-09-05 Thread James Carlson
re: http://blogs.sun.com/edp/entry/moving_from_nevada_and_live
-- James Carlson, Solaris Networking <[EMAIL PROTECTED]>
Sun Microsystems / 35 Network Drive        71.232W
MS UBUR02-212 / Burlington MA 01803-2757   42.496N
Vox +1 781 442 2084   Fax +1 781 442 1677

Re: [zfs-discuss] ZFS over multiple iSCSI targets

2008-09-10 Thread James Andrewartha
ess > by calculating the best theoretical correct speed (which should be > really slow, one write per disc spin) > > this has been on my TODO list for ages.. :( Does the perl script at http://brad.livejournal.com/2116715.html do what you want? -- James Andrewartha

Re: [zfs-discuss] web interface not showing up

2008-09-24 Thread James Andrewartha
08-June/048457.html http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048550.html -- James Andrewartha

[zfs-discuss] "zfs set sharenfs" takes a long time to return.

2008-10-09 Thread James Neal
I have an X4500 fileserver (NFS, Samba) running OpenSolaris 2008.05 pkg upgraded to snv_91 with ~3200 filesystems (and ~27429 datasets, including snapshots). I've been encountering some pretty big slow-downs on this system when running certain zfs commands. The one causing me the most pain at

Re: [zfs-discuss] Verify files' checksums

2008-10-26 Thread James Litchfield
A nit on the nit... cat does not use mmap for files <= 32K in size. For those files it's a simple read() into a buffer and write() it out. Jim --- Chris Gerhard wrote: > A slight nit. > > Using cat(1) to read the file to /dev/null will not actually cause the data > to be read thanks to the mag

Re: [zfs-discuss] Hotplug issues on USB removable media.

2008-10-28 Thread James Litchfield
I believe the answer is in the last email in that thread. hald doesn't offer the notifications and it's not clear that ZFS can handle them. As is noted, there are complications with ZFS due to the possibility of multiple disks comprising a volume, etc. It would be a lot of work to make it work corr

Re: [zfs-discuss] [ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM

2008-11-15 Thread James Black
I've tried using S10 U6 to reinstall the boot file (instead of U5) over jumpstart as its a ldom, noticed a another error. Boot device: /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED] File and args: -s Requesting Internet Address for 0:14:4f:f9:84:f3 boot: cannot open kernel/sparcv9/unix

Re: [zfs-discuss] [ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM

2008-11-15 Thread James Black
ix] Help needed. Any ideas? Thanks, James

Re: [zfs-discuss] [ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM

2008-11-15 Thread James Black
another update - instead of net booting to recovery I tried adding the ISO to the primary ldom and adding it to the ldom to run installboot again from a S10 U6 DVD ISO. I have returned to my first error message: {0} ok boot /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED] -s  SPARC Enterpr

Re: [zfs-discuss] [ldoms-discuss] Solaris 10 patch 137137-09 broke LDOM

2008-11-16 Thread James Black
When installing the 137137-09 patch it ran out of / space, just like http://www.opensolaris.org/jive/thread.jspa?threadID=82413&tstart=0 However, I tried the 6 steps to recover and that didn't work. I just rebuilt the ldom and attached the LDOM image files from the old system and did a zpool import to re

Re: [zfs-discuss] [storage-discuss] ZFS iscsi snapshot - VSS compatible?

2009-01-08 Thread James Dean
o idea where to even begin researching VSS unfortunately... James (Sent from my mobile) -Original Message- From: Tim Sent: Wednesday, 07 Jan 2009 23:18 To: Jason J. W. Williams Cc: zfs-discuss@opensolaris.org; storage-disc...@opensolaris.org Subject: Re: [storage-discuss] [zfs-dis

[zfs-discuss] zpool import fails to find pool

2009-01-23 Thread James Nord
Hi all, I moved from Sol 10 Update 4 to Update 6. Before doing this I exported both of my zpools, and replaced the discs containing the ufs root with two new discs (these discs did not have any zpool/zfs info and are raid mirrored in hardware). Once I had installed Update 6 I did a zpool impor

Re: [zfs-discuss] zpool import fails to find pool

2009-01-24 Thread James Nord
Looking at format, it is missing 12 discs! Which is, probably not surprisingly, the number of discs in the external storage controller. The other present discs have moved to c2 from c0. The driver is the same for both sets of discs (it is the HP CQPAry3 driver) and the external storage is on the same con

[zfs-discuss] fmd dying in zfs shutdown?

2009-02-16 Thread James Litchfield
known issue? I've seen this 5 times over the past few days. I think these were, for the most part, BFUs on top of B107. x86.
# pstack fmd.733
core 'fmd.733' of 733: /usr/lib/fm/fmd/fmd
- lwp# 1 / thread# 1
fe8c3347 libzfs_fini (0, fed9e000, 8047d08, fed749

[zfs-discuss] rename(2), atomicity, crashes and fsync()

2009-03-17 Thread James Andrewartha
ion-and-the-zero-length-file-problem/ http://lwn.net/Articles/323169/ http://mjg59.livejournal.com/108257.html http://lwn.net/Articles/323464/ http://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/ http://lwn.net/Articles/323752/ * http://lwn.net/Articles/322823/ * * are currently subscriber-only,

Re: [zfs-discuss] rename(2), atomicity, crashes and fsync()

2009-03-18 Thread James Litchfield
POSIX has a Synchronized I/O Data (and File) Integrity Completion definition (line 115434 of the Issue 7 (POSIX.1-2008) specification). What it says is that writes for a byte range in a file must complete before any pending reads for that byte range are satisfied. It does not say that if you

Re: [zfs-discuss] [storage-discuss] Supermicro AOC-SASLP-MV8

2009-04-21 Thread James Andrewartha
rvell SAS driver for Solaris at all, so I'd say it's not supported. http://www.hardforum.com/showthread.php?t=1397855 has a fair few people testing it out, but mostly under Windows. -- James Andrewartha

Re: [zfs-discuss] LUN expansion

2009-06-11 Thread James Hess
> What you could do is to write a program which calls > efi_use_whole_disk(3EXT) to re-write the label for you. Once you have a > new label you will be able to export/import the pool Awesome... worked for me, anyway. .C file attached. Although I did a "zpool export" before opening the device

Re: [zfs-discuss] zfs on 32 bit?

2009-06-14 Thread James Litchfield
There are 32-bit and 64-bit versions of the file system module available on x86. Given the quality of the development team, I'd be *very* surprised if such issues as suggested in your message exist. Jurgen's comment highlights the major issue - the lack of space to cache data when in 32-bit mode.

Re: [zfs-discuss] how to do backup

2009-06-20 Thread James Lever
ur backups? cheers, James

Re: [zfs-discuss] cutting up a SSD for read/log use...

2009-06-21 Thread James Lever
when using those drives as ZILs. Are you planning on using these drives as primary data storage and ZIL for the same volumes or as primary storage for (say) your rpool and ZIL for a data pool on spinning metal? cheers, James

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread James Lever
On 25/06/2009, at 5:16 AM, Miles Nordin wrote: and mpt is the 1068 driver, proprietary, works on x86 and SPARC. then there is also itmpt, the third-party-downloadable closed-source driver from LSI Logic, dunno much about it but someone here used it. I'm confused. Why do you say the mpt dr

[zfs-discuss] surprisingly poor performance

2009-07-02 Thread James Lever
to the write operations. I'm not sure where to go from here, these results are appalling (about 3x the time of the old system with 8x 10kRPM spindles) even with two Enterprise SSDs as separate log devices. cheers, James

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
t a good, valid test to measure the IOPS of these SSDs? cheers, James

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
NFSv3 so far for these tests, as it is widely regarded as faster, even though less functional. cheers, James

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
tialy. So I'd suggest running the test from a lot of clients simultaneously I'm sure that it will be a more performant system in general, however, it is this explicit set of tests that I need to maintain or improve performance on. cheers, James

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
developers here had explicitly performed tests to check these similar assumptions and found no evidence that the Linux/XFS sync implementation is lacking, even though there were previous issues with it in one kernel revision. cheers, James

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
ther tests and compare linux/XFS and perhaps remove LVM (though, I don't see why you would remove LVM from the equation). cheers, James

Re: [zfs-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
insightful observations? cheers, James

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-03 Thread James Lever
is much faster for deletes. cheers, James
bash-3.2# cd /nfs/xfs_on_LVM
bash-3.2# ( date ; time tar xf zeroes-10k.tar ; date ; time rm -rf zeroes/ ; date ) 2>&1
Sat Jul 4 15:31:13 EST 2009
real    0m18.145s
user    0m0.055s
sys     0m0.500s
Sat Jul 4 15:31:31 EST 2009
real    0m4.585

Re: [zfs-discuss] surprisingly poor performance

2009-07-05 Thread James Lever
u have any methods to "correctly" measure the performance of an SSD for the purpose of a slog and any information on others (other than anecdotal evidence)? cheers, James

Re: [zfs-discuss] [storage-discuss] surprisingly poor performance

2009-07-05 Thread James Lever
id controller w/ BBWC enabled and the cache disabled on the HDDs? (i.e. correctly configured for data safety) Should a correctly performing raid card be ignoring barrier write requests because it is already on stable storage? cheers, James
