[zfs-discuss] Where does set the value to zio->io_offset?

2009-01-23 Thread Jin
Assume a disk write is started; vdev_disk_io_start() will be called from zio_execute(). static int vdev_disk_io_start(zio_t *zio) { .. bp->b_lblkno = lbtodb(zio->io_offset); .. } After scanning the ZFS source, I find that zio->io_offset is only assigned a value in
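A quick way to watch the value as it reaches the vdev layer, rather than chasing the assignment through the source, is a DTrace probe on the same function. A minimal sketch, assuming a Solaris kernel with CTF data available so fbt can dereference the zio_t argument (run as root):

    # print the offset and size of every I/O entering vdev_disk_io_start
    dtrace -qn 'fbt::vdev_disk_io_start:entry
    {
        printf("offset=%u size=%u\n", args[0]->io_offset, args[0]->io_size);
    }'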

[zfs-discuss] zfs read performance degrades over a short time

2009-01-23 Thread Ray Galvin
I appear to be seeing the performance of a local ZFS file system degrading over a short period of time. My system configuration: 32-bit Athlon 1800+ CPU, 1 GByte of RAM, Solaris 10 U6 (SunOS filer 5.10 Generic_137138-09 i86pc i386 i86pc), 2x250 GByte Western Digital WD2500JB

Re: [zfs-discuss] moving zfs root rpool between systems

2009-01-23 Thread Tim
On Tue, Jan 6, 2009 at 4:23 PM, John Arden wrote: > I have two 280R systems. System A has Solaris 10u6, and its (2) drives > are configured as a ZFS rpool, and are mirrored. I would like to pull > these drives, and move them to my other 280, system B, which is > currently hard drive-less. > > A

Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Mario Goebbels
> Does anyone know specifically if b105 has ZFS encryption? IIRC it has been pushed back to b109. -mg

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-23 Thread Blake
+1 On Thu, Jan 22, 2009 at 11:12 PM, Paul Schlie wrote: > It also wouldn't be a bad idea for ZFS to verify that drives designated as > hot spares in fact have sufficient capacity to be compatible replacements > for particular configurations, prior to actually being critically required > (as if dr

Re: [zfs-discuss] Changing from ZFS back to HFS+

2009-01-23 Thread Blake
This is primarily a list for OpenSolaris ZFS - OS X is a little different ;) However, I think you need to do a 'sudo zpool destroy [poolname]' from Terminal.app. Be warned, you can't go back once you have done this! On Sun, Jan 18, 2009 at 4:42 PM, Jason Todd Slack-Moehrle wrote: > Hi All, > >
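A minimal sketch of that, with a hypothetical pool name; check which pool is which first, because destroy takes effect immediately and cannot be undone:

    zpool list                 # confirm the pool name before doing anything destructive
    sudo zpool destroy mypool  # 'mypool' is a placeholder; this frees the devices for Disk Utility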

Re: [zfs-discuss] Raidz1 faulted with single bad disk. Requesting assistance.

2009-01-23 Thread Blake
I've seen reports of a recent Seagate firmware update bricking drives again. What's the output of 'zpool import' from the LiveCD? It sounds like more than 1 drive is dropping off. On Thu, Jan 22, 2009 at 10:52 PM, Brad Hill wrote: >> I would get a new 1.5 TB and make sure it has the new >> fi

Re: [zfs-discuss] zpool status -x strangeness

2009-01-23 Thread Blake
A little gotcha that I found in my 10u6 update process was that 'zpool upgrade [poolname]' is not the same as 'zfs upgrade [poolname]/[filesystem(s)]' What does 'zfs upgrade' say? I'm not saying this is the source of your problem, but it's a detail that seemed to affect stability for me. On Thu
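For anyone else hitting this, the two upgrades are tracked separately and have to be run separately. A minimal sketch, with 'tank' as a placeholder pool name:

    zpool upgrade           # list pools whose on-disk pool version is behind
    zpool upgrade tank      # upgrade the pool format itself
    zfs upgrade             # list filesystems whose filesystem (ZPL) version is behind
    zfs upgrade -r tank     # upgrade the filesystems, recursively, as a separate step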

Re: [zfs-discuss] zpool import fails to find pool

2009-01-23 Thread Andrew Gabriel
James Nord wrote: > Hi all, > > I moved from Sol 10 Update 4 to Update 6. > > Before doing this I exported both of my zpools, and replaced the discs > containing the UFS root with two new discs (these discs did not have any > zpool/zfs info and are RAID mirrored in hardware) > > Once I had inst

Re: [zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Tim Haley
Jerry K wrote: > It was rumored that Nevada build 105 would have ZFS encrypted file > systems integrated into the main source. > > In reviewing the change logs (URLs below) I did not see any mention > that this had come to pass. It's going to be another week > before I have a chance to

Re: [zfs-discuss] ZFS upgrade mangled my share

2009-01-23 Thread Tim Haley
Colin Johnson wrote: > I was having CIFS problems on my Mac so I upgraded to build 105. > After getting all my shares populated with data I ran zpool scrub on > the raidz array and it told me the version was out of date so I > upgraded. > > One of my shares is now inaccessible and I cannot even

[zfs-discuss] zpool import fails to find pool

2009-01-23 Thread James Nord
Hi all, I moved from Sol 10 Update 4 to Update 6. Before doing this I exported both of my zpools, and replaced the discs containing the UFS root with two new discs (these discs did not have any zpool/zfs info and are RAID mirrored in hardware). Once I had installed Update 6 I did a zpool impor
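When the controller layout changes underneath a pool, import sometimes has to be pointed at the right device directory or forced; a hedged sketch with a placeholder pool name:

    zpool import                     # scan /dev/dsk for pools that can be imported
    zpool import -d /dev/dsk mypool  # search an explicit device directory; 'mypool' is a placeholder
    zpool import -f mypool           # force it if the pool claims to be in use by another system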

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-23 Thread Paul Schlie
It also wouldn't be a bad idea for ZFS to verify that drives designated as hot spares in fact have sufficient capacity to be compatible replacements for particular configurations, prior to actually being critically required (as if drives otherwise appearing to have equivalent capacity may not, it w
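One low-tech way to do that check by hand today, before designating a disk as a spare, is to compare the raw sector counts; a hedged sketch with placeholder device and pool names:

    prtvtoc /dev/rdsk/c1t2d0s2 | grep sectors   # size of an existing pool disk
    prtvtoc /dev/rdsk/c2t5d0s2 | grep sectors   # size of the candidate spare
    zpool add tank spare c2t5d0                 # only add it once the sizes line up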

[zfs-discuss] Changing from ZFS back to HFS+

2009-01-23 Thread Jason Todd Slack-Moehrle
Hi All, Since switching to ZFS I get a lot of "beach balls". I think for productivity's sake I should switch back to HFS+. My home directory was on this ZFS partition. I backed up my data to another drive and tried using Disk Utility to select my ZFS partition, unmount it and format just that part

Re: [zfs-discuss] mirror rpool

2009-01-23 Thread mijenix
Richard Elling wrote: > mijenix wrote: >> yes, that's the way zpool likes it >> >> I think I have to understand how (Open)Solaris creates disks or how the >> partitioning works under OSol. Do you know of any guide or howto? >> > > We've tried to make sure the ZFS Admin Guide covers these things,
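For reference, the usual sequence is roughly the following (a sketch only; disk names are placeholders, the second disc needs an SMI label, and installgrub applies only on x86):

    prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2          # copy the slice table to the new disc
    zpool attach rpool c0t0d0s0 c0t1d0s0                                  # attach the matching slice as a mirror
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0    # make the new half bootable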

[zfs-discuss] ZFS encryption?? - [Fwd: [osol-announce] SXCE Build 105 available

2009-01-23 Thread Jerry K
It was rumored that Nevada build 105 would have ZFS encrypted file systems integrated into the main source. In reviewing the change logs (URLs below) I did not see any mention that this had come to pass. It's going to be another week before I have a chance to play with b105. Does anyon

[zfs-discuss] ZFS upgrade mangled my share

2009-01-23 Thread Colin Johnson
I was having CIFS problems on my Mac so I upgraded to build 105. After getting all my shares populated with data I ran zpool scrub on the raidz array and it told me the version was out of date so I upgraded. One of my shares is now inaccessible and I cannot even delete it :( > > r...@bitchko:/

[zfs-discuss] moving zfs root rpool between systems

2009-01-23 Thread John Arden
I have two 280R systems. System A has Solaris 10u6, and its (2) drives are configured as a ZFS rpool, and are mirrored. I would like to pull these drives, and move them to my other 280, system B, which is currently hard drive-less. Although unsupported by Sun, I have done this before without
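For what it's worth, the usual snag with this move is that the pool remembers it was last in use on system A, so the import has to be forced. A heavily hedged sketch: boot system B from installation media or the failsafe archive, then

    zpool import -f rpool    # override the "pool was last accessed by another system" warning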

[zfs-discuss] OpenSolaris Storage Summit Feb 23, 2009 San Francisco

2009-01-23 Thread Peter Buckingham
Hi All, sorry for all the duplicates. Feel free to pass on to other interested parties. The OpenSolaris Storage Community is holding a Storage Summit on February 23 at the Grand Hyatt San Francisco, prior to the FAST conference. The registration wiki is here: https://wikis.sun.com/display/OpenS

[zfs-discuss] help please - Zpool import : I/O error

2009-01-23 Thread mathieu.email
Hi, I have a big problem with my ZFS drive. After a kernel panic, I cannot import the pool anymore:

=> zpool status
no pools available
=> zpool list
no pools available

Re: [zfs-discuss] zfs smb public share, files created not public

2009-01-23 Thread Mark Shellenbaum
Roger wrote: > Hi! > I'm running OpenSolaris b101 and I've made a zfs pool called tank and a filesystem > inside it, tank/public, which I've shared with SMB. > > zfs set sharesmb=on tank/public > > I'm using the Solaris SMB server and not Samba. > > The problem is this: when I connect and create a file it's readable
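One thing worth experimenting with (a hedged sketch, not a confirmed fix) is the ACL inheritance on the shared dataset, so that files created over SMB pick up an everyone-readable entry:

    zfs set aclinherit=passthrough tank/public         # let inherited ACL entries carry through to new files
    chmod A+everyone@:read_set:fd:allow /tank/public   # add an inheritable read ACE for everyone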

Re: [zfs-discuss] ZFS Import Problem

2009-01-23 Thread Michael McKnight
Yes, but I can't export a pool that has never been imported. These drives are no longer connected to their original system, and at this point, when I connect them to their original system, the results are the same. Thanks, Michael --- On Tue, 12/30/08, Weldon S Godfrey 3 wrote: > > Did you

Re: [zfs-discuss] ZFS Import Problem

2009-01-23 Thread Michael McKnight
--- On Tue, 12/30/08, Andrew Gabriel wrote: >If you were doing a rolling upgrade, I suspect the old disks are all >horribly out of sync with each other? > >If that is the problem, then if the filesystem(s) have a snapshot that >existed when all the old disks were still online, I wonder if it migh

Re: [zfs-discuss] Bug report: disk replacement confusion

2009-01-23 Thread Scott L. Burson
Yes, everything seems to be fine, but that was still scary, and the fix was not completely obvious. At the very least, I would suggest adding text such as the following to the page at http://www.sun.com/msg/ZFS-8000-FD : When physically replacing the failed device, it is best to use the same c
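Something along these lines might be a starting point for that text; a hedged sketch assuming the new drive goes into the same bay and keeps the same cXtYdZ name (the cfgadm attachment point is a placeholder):

    zpool offline tank c1t3d0    # take the failed disk out of service if the pool is still up
    # physically swap the drive in the same bay
    cfgadm -c configure sata0/3  # only needed if the new disk isn't seen automatically
    zpool replace tank c1t3d0    # same device name, so no old/new pair is required
    zpool status -x              # watch the resilver complete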

Re: [zfs-discuss] Is scrubbing "safe" in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread David Dyer-Bennet
On Fri, January 23, 2009 12:01, Glenn Lagasse wrote: > * David Dyer-Bennet (d...@dd-b.net) wrote: >> But what I'm wondering is, are there known bugs in 101b that make >> scrubbing inadvisable with that code? I'd love to *find out* what >> horrors >> may be lurking. > > There's nothing in the rel

Re: [zfs-discuss] Is scrubbing "safe" in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread Glenn Lagasse
* David Dyer-Bennet (d...@dd-b.net) wrote: > > On Fri, January 23, 2009 09:52, casper@sun.com wrote: > > >>Which leaves me wondering, how safe is running a scrub? Scrub is one of > >>the things that made ZFS so attractive to me, and my automatic reaction > >>when I first hook up the data dis

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Greg Mason
If I'm not mistaken (and somebody please correct me if I'm wrong), the Sun 7000 series storage appliances (the Fishworks boxes) use enterprise SSDs with DRAM caching. One such product is made by STEC. My understanding is that the Sun appliances use one SSD for the ZIL, and one as a read cache.
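On a plain pool the same split is expressed directly at zpool level; a minimal sketch with placeholder device names (the log device wants a write-optimised SSD, the cache device a cheaper read-optimised one):

    zpool add tank log c3t0d0     # dedicated ZIL (slog) device
    zpool add tank cache c3t1d0   # L2ARC read-cache device
    zpool status tank             # both then show up in the pool configuration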

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Adam Leventhal
This is correct, and you can read about it here: http://blogs.sun.com/ahl/entry/fishworks_launch Adam On Fri, Jan 23, 2009 at 05:03:57PM +, Ross Smith wrote: > That's my understanding too. One (STEC?) drive as a write cache, > basically a write optimised SSD. And cheaper, larger, read op

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Ross Smith
That's my understanding too. One (STEC?) drive as a write cache, basically a write-optimised SSD, and cheaper, larger, read-optimised SSDs for the read cache. I thought it was an odd strategy until I read into SSDs a little more and realised you really do have to think about your usage cases w

Re: [zfs-discuss] Is scrubbing "safe" in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread David Dyer-Bennet
On Fri, January 23, 2009 09:52, casper@sun.com wrote: >>Which leaves me wondering, how safe is running a scrub? Scrub is one of >>the things that made ZFS so attractive to me, and my automatic reaction >>when I first hook up the data disks during a recovery is "run a scrub!". > > > If your m

Re: [zfs-discuss] SSD drives in Sun Fire X4540 or X4500 for dedicated ZIL device

2009-01-23 Thread Bob Friesenhahn
On Thu, 22 Jan 2009, Ross wrote: > However, now I've written that, Sun uses SATA (SAS?) SSDs in their > high-end Fishworks storage, so I guess it definitely works for some > use cases. But the "fishworks" (Fishworks is a development team, not a product) write cache device is not based on FLASH

Re: [zfs-discuss] Is scrubbing "safe" in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread Casper . Dik
>I thought I'd noticed that my crashes tended to occur when I was running a >scrub, and saw at least one open bug that was scrub-related that could >cause such a crash. However, I eventually tracked my problem down (as it >got worse) to a bad piece of memory (been nearly a week since I replaced >

[zfs-discuss] Is scrubbing "safe" in 101b? (OpenSolaris 2008.11)

2009-01-23 Thread David Dyer-Bennet
I thought I'd noticed that my crashes tended to occur when I was running a scrub, and saw at least one open bug that was scrub-related that could cause such a crash. However, I eventually tracked my problem down (as it got worse) to a bad piece of memory (been nearly a week since I replaced the me
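For reference, the scrub itself is started and watched from the command line; a minimal sketch with a placeholder pool name:

    zpool scrub tank        # start (or restart) a scrub
    zpool status -v tank    # shows scrub progress and any checksum errors found
    zpool scrub -s tank     # stop a scrub that is in progress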

Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-23 Thread River Tarnell
Brent Jones: > My results are much improved, on the order of 5-100 times faster > (either over mbuffer or SSH). This is good news - although not quite soon enough for my current 5TB zfs send ;-) Have you tested whether this also improves the performance
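For anyone wanting to try the same thing, the usual shape of the pipeline is mbuffer on both ends so neither side stalls waiting for the other; a hedged sketch with placeholder hosts, dataset names, and buffer sizes:

    # receiving side: listen on a TCP port and feed zfs recv through a large buffer
    mbuffer -I 9090 -s 128k -m 1G | zfs recv -d tank

    # sending side: stream the snapshot into a matching buffer and across the wire
    zfs send tank/fs@snap | mbuffer -O receiver:9090 -s 128k -m 1G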