Assume we start a single disk write; vdev_disk_io_start() will be called
from zio_execute():
static int vdev_disk_io_start(zio_t *zio)
{
        ...
        /* translate the zio's byte offset into a disk block number */
        bp->b_lblkno = lbtodb(zio->io_offset);
        ...
}
After scanning the ZFS source, I find that zio->io_offset is only assigned
in
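If anyone wants to watch those offsets go by at run time, a DTrace one-liner
along these lines should do it (just a sketch; it assumes the fbt provider can
resolve the zfs module's zio_t type on your build):
dtrace -n 'fbt:zfs:vdev_disk_io_start:entry { trace(args[0]->io_offset); }'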
I appear to be seeing the performance of a local ZFS file system degrading
over a short period of time.
My system configuration:
32 bit Athlon 1800+ CPU
1 Gbyte of RAM
Solaris 10 U6
SunOS filer 5.10 Generic_137138-09 i86pc i386 i86pc
2x250 GByte Western Digital WD2500JB
On Tue, Jan 6, 2009 at 4:23 PM, John Arden wrote:
> I have two 280R systems. System A has Solaris 10u6, and its (2) drives
> are configured as a ZFS rpool, and are mirrored. I would like to pull
> these drives, and move them to my other 280, system B, which is
> currently hard drive-less.
>
> A
> Does anyone know specifically if b105 has ZFS encryption?
IIRC it has been pushed back to b109.
-mg
+1
On Thu, Jan 22, 2009 at 11:12 PM, Paul Schlie wrote:
> It also wouldn't be a bad idea for ZFS to also verify drives designated as
> hot spares in fact have sufficient capacity to be compatible replacements
> for particular configurations, prior to actually being critically required
> (as if dr
This is primarily a list for OpenSolaris ZFS - OS X is a little different ;)
However, I think you need to run 'sudo zpool destroy [poolname]' from
Terminal.app.
Be warned: you can't go back once you have done this!
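Roughly, from Terminal.app (a sketch only, with a made-up pool name;
double-check which pool you are pointing at first):
zpool list                  # confirm the name of the ZFS pool
sudo zpool destroy mypool   # permanently destroys the pool and everything in it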
On Sun, Jan 18, 2009 at 4:42 PM, Jason Todd Slack-Moehrle
wrote:
> Hi All,
>
>
I've seen reports of a recent Seagate firmware update bricking drives again.
What's the output of 'zpool import' from the LiveCD? It sounds like
more than 1 drive is dropping off.
On Thu, Jan 22, 2009 at 10:52 PM, Brad Hill wrote:
>> I would get a new 1.5 TB and make sure it has the new
>> fi
A little gotcha that I found in my 10u6 update process was that 'zpool
upgrade [poolname]' is not the same as 'zfs upgrade
[poolname]/[filesystem(s)]'
What does 'zfs upgrade' say? I'm not saying this is the source of
your problem, but it's a detail that seemed to affect stability for
me.
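In other words (a sketch, with a placeholder pool name):
zpool upgrade              # list pools whose on-disk pool version is behind
zpool upgrade mypool       # upgrade the pool format itself
zfs upgrade                # list filesystems whose ZFS filesystem version is behind
zfs upgrade -r mypool      # upgrade the filesystems in that pool as well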
On Thu
James Nord wrote:
> Hi all,
>
> I moved from Sol 10 Update4 to update 6.
>
> Before doing this I exported both of my zpools, and replaced the discs
> containing the UFS root with two new discs (these discs did not have any
> zpool/zfs info and are mirrored in hardware RAID)
>
> Once I had inst
Jerry K wrote:
> It was rumored that Nevada build 105 would have ZFS encrypted file
> systems integrated into the main source.
>
> In reviewing the Change logs (URL's below) I did not see anything
> mentioned that this had come to pass. It's going to be another week
> before I have a chance to
Colin Johnson wrote:
> I was having CIFS problems on my Mac, so I upgraded to build 105.
> After getting all my shares populated with data I ran zpool scrub on
> the raidz array and it told me the version was out of date so I
> upgraded.
>
> One of my shares is now inaccessible and I cannot even
Hi all,
I moved from Sol 10 Update4 to update 6.
Before doing this I exported both of my zpools, and replaced the discs
containing the UFS root with two new discs (these discs did not have any
zpool/zfs info and are mirrored in hardware RAID)
Once I had installed update6 I did a zpool impor
It also wouldn't be a bad idea for ZFS to also verify drives designated as
hot spares in fact have sufficient capacity to be compatible replacements
for particular configurations, prior to actually being critically required
(as if drives otherwise appearing to have equivalent capacity may not, it
w
Hi All,
Since switching to ZFS I get a lot of "beach balls". I think for
productivity's sake I should switch back to HFS+. My home directory was on
this ZFS partition.
I backed up my data to another drive and tried using Disk Utility to select
my ZFS partition, unmount it and format just that part
Richard Elling wrote:
> mijenix wrote:
>> yes, that's the way zpool likes it
>>
>> I think I have to understand how (Open)Solaris creates disks, or how the
>> partitioning works under OSol. Do you know of any guide or howto?
>>
>
> We've tried to make sure the ZFS Admin Guide covers these things,
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.
In reviewing the Change logs (URL's below) I did not see anything
mentioned that this had come to pass. It's going to be another week
before I have a chance to play with b105.
Does anyon
I was having CIFS problems on my Mac, so I upgraded to build 105.
After getting all my shares populated with data I ran zpool scrub on
the raidz array and it told me the version was out of date so I
upgraded.
One of my shares is now inaccessible and I cannot even delete it :(
>
> r...@bitchko:/
I have two 280R systems. System A has Solaris 10u6, and its (2) drives
are configured as a ZFS rpool, and are mirrored. I would like to pull
these drives, and move them to my other 280, system B, which is
currently hard drive-less.
Although unsupported by Sun, I have done this before without
Hi All,
sorry for all the duplicates. Feel free to pass on to other interested
parties.
The OpenSolaris Storage Community is holding a Storage Summit on
February 23 at the Grand Hyatt San Francisco, prior to the FAST
conference.
The registration wiki is here:
https://wikis.sun.com/display/OpenS
Hi,
I have a big problem with my ZFS drive. After a kernel panic, I cannot
import the pool anymore:
--
=> zpool status
no pools available
=> zpool list
no pools available
--
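For what it's worth, 'zpool status' and 'zpool list' only show pools that are
already imported; the usual next step (sketched here with a placeholder pool
name) is to scan for the pool and force the import:
zpool import            # scan attached devices for importable pools
zpool import -f tank    # force the import if the pool was not cleanly exported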
Roger wrote:
> Hi!
> I'm running OpenSolaris b101 and I've made a ZFS pool called tank and a
> filesystem inside of it, tank/public; I've shared it with SMB.
>
> zfs set sharesmb=on tank/public
>
> I'm using Solaris SMB and not Samba.
>
> The problem is this: when I connect and create a file, it's readable
Yes, but I can't export a pool that has never been imported. These drives are
no longer connected to their original system, and at this point, when I connect
them to their original system, the results are the same.
Thanks,
Michael
--- On Tue, 12/30/08, Weldon S Godfrey 3 wrote:
>
> Did you
--- On Tue, 12/30/08, Andrew Gabriel wrote:
>If you were doing a rolling upgrade, I suspect the old disks are all
>horribly out of sync with each other?
>
>If that is the problem, then if the filesystem(s) have a snapshot that
>existed when all the old disks were still online, I wonder if it migh
Yes, everything seems to be fine, but that was still scary, and the fix was not
completely obvious. At the very least, I would suggest adding text such as the
following to the page at http://www.sun.com/msg/ZFS-8000-FD :
When physically replacing the failed device, it is best to use the same
c
On Fri, January 23, 2009 12:01, Glenn Lagasse wrote:
> * David Dyer-Bennet (d...@dd-b.net) wrote:
>> But what I'm wondering is, are there known bugs in 101b that make
>> scrubbing inadvisable with that code? I'd love to *find out* what
>> horrors
>> may be lurking.
>
> There's nothing in the rel
* David Dyer-Bennet (d...@dd-b.net) wrote:
>
> On Fri, January 23, 2009 09:52, casper@sun.com wrote:
>
> >>Which leaves me wondering, how safe is running a scrub? Scrub is one of
> >>the things that made ZFS so attractive to me, and my automatic reaction
> >>when I first hook up the data dis
If I'm not mistaken (and somebody please correct me if I'm wrong), the
Sun 7000 series storage appliances (the Fishworks boxes) use enterprise
SSDs, with DRAM caching. One such product is made by STEC.
My understanding is that the Sun appliances use one SSD for the ZIL, and
one as a read cache.
This is correct, and you can read about it here:
http://blogs.sun.com/ahl/entry/fishworks_launch
Adam
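In plain zpool terms that split is a separate log device plus a cache device;
roughly (a sketch with made-up device names):
zpool add tank log c1t2d0     # write-optimised SSD as a separate ZIL (slog)
zpool add tank cache c1t3d0   # read-optimised SSD as an L2ARC read cache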
On Fri, Jan 23, 2009 at 05:03:57PM +, Ross Smith wrote:
> That's my understanding too. One (STEC?) drive as a write cache,
> basically a write optimised SSD. And cheaper, larger, read op
That's my understanding too. One (STEC?) drive as a write cache,
basically a write-optimised SSD. And cheaper, larger, read-optimised
SSDs for the read cache.
I thought it was an odd strategy until I read into SSDs a little more
and realised you really do have to think about your usage cases w
On Fri, January 23, 2009 09:52, casper@sun.com wrote:
>>Which leaves me wondering, how safe is running a scrub? Scrub is one of
>>the things that made ZFS so attractive to me, and my automatic reaction
>>when I first hook up the data disks during a recovery is "run a scrub!".
>
>
> If your m
On Thu, 22 Jan 2009, Ross wrote:
> However, now I've written that, Sun use SATA (SAS?) SSDs in their
> high-end fishworks storage, so I guess it definitely works for some
> use cases.
But the "fishworks" (Fishworks is a development team, not a product)
write cache device is not based on FLASH
>I thought I'd noticed that my crashes tended to occur when I was running a
>scrub, and saw at least one open bug that was scrub-related that could
>cause such a crash. However, I eventually tracked my problem down (as it
>got worse) to a bad piece of memory (been nearly a week since I replaced
>
I thought I'd noticed that my crashes tended to occur when I was running a
scrub, and saw at least one open bug that was scrub-related that could
cause such a crash. However, I eventually tracked my problem down (as it
got worse) to a bad piece of memory (been nearly a week since I replaced
the me
Brent Jones:
> My results are much improved, on the order of 5-100 times faster
> (either over Mbuffer or SSH).
This is good news - although not quite soon enough for my current 5TB zfs send
;-)
Have you tested whether this also improves the performance
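For anyone following along, the mbuffer pipeline being compared against plain
SSH looks roughly like this (a sketch; the host name, port, buffer sizes and
dataset names are all made up):
# on the receiving host
mbuffer -I 9090 -s 128k -m 1G | zfs receive tank/backup
# on the sending host
zfs send tank/data@snap | mbuffer -O recvhost:9090 -s 128k -m 1G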