[...]
>
> Well, yes; actually you aren't looking for the snapshots in the correct way.
[...]
>> No difference, and there is no rpool/dump
>> rpool/export
>> rpool/export/home
>> rpool/export/home/reader
>>
>> under either snapshot... not to mention all
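A minimal sketch of "the correct way", assuming a recursive snapshot exists
and the default mountpoints are in use (the snapshot name "today" is only a
placeholder):
zfs list -r -t snapshot rpool
ls /export/home/reader/.zfs/snapshot/today
The first command lists one snapshot per descendant filesystem; the second
looks in a child filesystem's own .zfs/snapshot directory rather than in the
parent's, which is why the children don't show up under the parent snapshot.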
is this a direct write to a zfs filesystem or is it some kind of zvol export?
anyway, sounds similar to this:
http://opensolaris.org/jive/thread.jspa?threadID=105702&tstart=0
On Tue, Jun 23, 2009 at 7:14 PM, Bob Friesenhahn wrote:
> It has been quite some time (about a year) since I did testing
Hi,
What does zio_assess do? Is it a stage of the pipeline? I see quite a few of these
stacks within a 5-second window.
I tried to search src.opensolaris but did not find any reference. Thanks for any
help.
zfs`zio_assess+0x58
zfs`zio_execute+0x74
genunix`taskq_thread+
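A hedged sketch of how stacks like these could be gathered over a 5-second
window, assuming the fbt provider can attach to zio_assess on this build:
dtrace -n 'fbt:zfs:zio_assess:entry { @[stack()] = count(); } tick-5s { exit(0); }'
This counts the kernel stacks that reach zio_assess and prints the
aggregation when the tick-5s probe fires.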
It has been quite some time (about a year) since I did testing of
batch processing with my software (GraphicsMagick). In the intervening time,
ZFS added write-throttling. I am using Solaris 10 with kernel
141415-03.
Quite a while back I complained that ZFS was periodically stalling the
writing proc
> "vab" == Volker A Brandt writes:
>> I thought the LSI 1068 do not work with SPARC (mfi driver, x86
>> only). I thought the 1078 are supposed to work with SPARC
>> (mega_sas).
vab> uname -a
vab> SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000
vab>
On Mon, 22 Jun 2009 15:28:08 -0700, Carson Gaspar wrote:
> James C. McPherson wrote:
>
> > Use raidctl(1m). For fwflash(1m), this is on the "future project"
> > list purely because we've got much higher priority projects on the
> > boil - if we couldn't use raidctl(1m) this would be higher up the
"scrub: resilver completed after 5h50m with 0 errors on Tue Jun 23 05:04:18
2009"
Zero errors even though other parts of the message definitely show errors?
This is described here: http://docs.sun.com/app/docs/doc/819-5461/gbcve?a=view
Device errors do not guarantee pool errors when redundancy
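For what it's worth, two commands that help when reading such output (the
pool name is whatever yours is called):
zpool status -v <pool>
zpool clear <pool>
The first shows whether any permanent data errors were actually recorded at
the pool level; the second resets the per-device error counters once the
underlying fault has been dealt with.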
On Jun 23, 2009, at 11:50 AM, Richard Elling wrote:
(2) is there some reasonable way to read in multiples of these
blocks in a single IOP? Theoretically, if the blocks are in
chronological creation order, they should be (relatively)
sequential on the drive(s). Thus, ZFS should be able
Chookiex wrote:
Hi all.
Because the compression property can decrease file size, the file I/O
will be decreased as well.
So, would compression increase ZFS I/O throughput?
For example:
I turn on gzip-9 on a server with 2 * 4-core Xeons and 8 GB RAM.
It could compress my files with c
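Whether throughput actually improves depends on whether the CPUs can
compress the data faster than the disks would have absorbed it, so it is
worth measuring on your own data. A small sketch, with a placeholder
dataset name:
zfs set compression=gzip-9 tank/data
zfs get compression,compressratio tank/data
compressratio reports the compression ratio actually achieved on the
dataset, which is the number to use when estimating the I/O saved.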
On 23-Jun-09, at 1:58 PM, Erik Trimble wrote:
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly
_how_ does ZFS do resilvering? Both in the case of mirrors, and
of RAIDZ[2] ?
I've seen some mention that it goes in chronological order
Erik Ableson wrote:
The problem I had was with the single RAID 0 volumes (I miswrote RAID 1
in the original message).
This is not a straight to disk connection and you'll have problems if
you ever need to move disks around or move them to another controller.
Would you mind explaining exactly
Miles Nordin wrote:
"ave" == Andre van Eyssen writes:
"et" == Erik Trimble writes:
"ea" == Erik Ableson writes:
"edm" == "Eric D. Mudama" writes:
ave> The LSI SAS controllers with SATA ports work nicely with
ave> SPARC.
I think what you mean is ``some LSI SAS controllers
> I thought the LSI 1068 do not work with SPARC (mfi driver, x86 only).
> I thought the 1078 are supposed to work with SPARC (mega_sas).
Hmmm
uname -a
SunOS shelob 5.10 Generic_137111-02 sun4v sparc SUNW,Sun-Fire-T1000
man mpt
Devices mpt(
The problem I had was with the single RAID 0 volumes (I miswrote RAID 1
in the original message).
This is not a straight to disk connection and you'll have problems if
you ever need to move disks around or move them to another controller.
I agree that the MD1000 with ZFS is a rocking, inexpens
> "dc" == Daniel Carosone writes:
dc> I'm concerned that, despite clear recommendations and advice
dc> against it, there seem to be a number of solutions appearing
dc> (like automated backup to cloud, via the auto-snapshot hooks)
dc> that use the stream format for long term st
> "ave" == Andre van Eyssen writes:
> "et" == Erik Trimble writes:
> "ea" == Erik Ableson writes:
> "edm" == "Eric D. Mudama" writes:
ave> The LSI SAS controllers with SATA ports work nicely with
ave> SPARC.
I think what you mean is ``some LSI SAS controllers work nicely
Erik Trimble wrote:
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly
_how_ does ZFS do resilvering? Both in the case of mirrors, and of
RAIDZ[2] ?
I've seen some mention that it goes in chronological order (which to
me, means that the
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors, and of RAIDZ[2] ?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must be read
Mike
---
Michael Sullivan
michael.p.sulli...@me.com
http://www.kamiogi.net/
Japan Mobile: +81-80-3202-2599
US Phone: +1-561-283-2034
On 24 Jun 2009, at 01:01 , Harry Putnam wrote:
Darren J Moffat writes:
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where
Harry Putnam wrote:
Darren J Moffat writes:
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you c
Darren J Moffat writes:
> Harry Putnam wrote:
>> I thought I recalled reading somewhere that in the situation where you
>> have several zfs filesystems under one top level directory like this:
>> rpool
>> rpool/ROOT/osol-112
>> rpool/export
>> rpool/export/home
>> rpool/export/home/reader
>>
>> y
On 23 Jun 2009, at 23:59 , Darren J Moffat wrote:
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where
you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
Erik Ableson wrote:
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo JBOD
We're definitely working on problems contributing to such 'picket
fencing'.
But beware of equating symptoms with root-caused issues. We already know
that picket fencing has multiple causes, and
we're tracking the ones we know about: there is something related to
taskq CPU scheduling and
something
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly _how_
does ZFS do resilvering? Both in the case of mirrors, and of RAIDZ[2] ?
I've seen some mention that it goes in chronological order (which to
me, means that the metadata must be read first) of file creatio
Harry Putnam wrote:
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below
I thought I recalled reading somewhere that in the situation where you
have several zfs filesystems under one top level directory like this:
rpool
rpool/ROOT/osol-112
rpool/export
rpool/export/home
rpool/export/home/reader
you could do a snapshot encompassing everything below zpool instead of
havi
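The feature being recalled is presumably the -r flag to zfs snapshot; a
minimal sketch, with a made-up snapshot name:
zfs snapshot -r rpool@backup
zfs list -r -t snapshot rpool
The first command atomically snapshots rpool and every descendant
filesystem; the second lists the resulting rpool@backup,
rpool/export@backup, rpool/export/home@backup, and so on.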
Thanks Darren,
I might request that it gets added.
That is, if anyone else thinks it might be a useful feature?
Regards,
Mike.
On Mon, 22 Jun 2009, Ross wrote:
All seemed well, I replaced the faulty drive, imported the pool again, and
kicked off the repair with:
# zpool replace zfspool c1t1d0
What build are you running? Between builds 105 and 113 inclusive there's
a bug in the resilver code which causes it to miss
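A quick, hedged aside on checking which build you are on:
cat /etc/release
uname -v
On OpenSolaris both report the Nevada build string (2009.06 is snv_111b,
for example), which tells you whether you fall inside the 105-113 range.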
On Tue, Jun 23, 2009 at 1:13 PM, Ross wrote:
> Look at how the resilver finished:
>
> c1t3d0 ONLINE 3 0 0 128K resilvered
> c1t4d0 ONLINE 0 0 11 473K resilvered
> c1t5d0 ONLINE 0 0 23 986K resilvered
Comparing from your
Mike Forey wrote:
Hi,
I'd like to be able to select zfs filesystems based on the value of properties.
Something like this:
zfs select mounted=yes
What is the output of the above?
Would you want to specify multiple properties?
What about properties that aren't index values (e.g. sharen
No snapshots are running. I have only 21 filesystems mounted. The blocksize is the
default one. A slow disk? I don't think so, because I get read and write rates of about
350 MB/s. The BIOS is the latest, and I also tried splitting the pool across two
controllers; none of this helps.
very tidy, thanks! :)
Mike Forey wrote:
zfs select mounted=yes
If not, is there a clean way of achieving the same result?
How about this:
zfs list -o name,mounted | awk '$2 == "yes" {print $1}'
Allan
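A similar sketch that generalizes to arbitrary properties, using zfs get in
scripting mode (the gzip-9 value is only an example):
zfs get -H -o name,value compression | awk '$2 == "gzip-9" {print $1}'
The -H flag drops the headers and makes the output tab-separated, so nothing
extra has to be filtered out before the awk match.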
Hi,
I'd like to be able to select zfs filesystems based on the value of properties.
Something like this:
zfs select mounted=yes
Is anyone aware if this feature might be available in the future?
If not, is there a clean way of achieving the same result?
Thanks, Mike.
Volker A. Brandt wrote:
2) disks that were attached once leave a stale /dev/dsk entry behind
that takes a full 7 seconds to stat() with the kernel running at 100%.
>>> Such entries should go away with an invocation of "devfsadm -vC".
>>> If they don't, it's a bug IMHO.
>> yes, they go away. B
> >> 2) disks that were attached once leave a stale /dev/dsk entry behind
> >> that takes a full 7 seconds to stat() with the kernel running at 100%.
> >
> > Such entries should go away with an invocation of "devfsadm -vC".
> > If they don't, it's a bug IMHO.
>
> yes, they go away. But the problem is wh
Volker A. Brandt wrote:
>> 2) disks that were attached once leave a stale /dev/dsk entry behind
>> that takes a full 7 seconds to stat() with the kernel running at 100%.
>
> Such entries should go away with an invocation of "devfsadm -vC".
> If they don't, it's a bug IMHO.
>
>
> Regards -- Volker
y
> 2) disks that were attached once leave a stale /dev/dsk entry behind
> that takes a full 7 seconds to stat() with the kernel running at 100%.
Such entries should go away with an invocation of "devfsadm -vC".
If they don't, it's a bug IMHO.
Regards -- Volker
On Tue, 23 Jun 2009, Thomas Maier-Komor wrote:
1) Once the disks spin down due to idleness it can become impossible to
reactivate them without doing a full reboot (i.e. hot plugging won't help)
That's a good point - I don't think a second goes by without at least a
little I/O on those disks,
Andre van Eyssen wrote:
> On Mon, 22 Jun 2009, Jacob Ritorto wrote:
>
>> Is there a card for OpenSolaris 2009.06 SPARC that will do SATA
>> correctly yet? Need it for a super cheapie, low expectations,
>> SunBlade 100 filer, so I think it has to be notched for 5v PCI slot,
>> iirc. I'm OK with
Just a side note on the PERC labelled cards: they don't have a JBOD
mode so you _have_ to use hardware RAID. This may or may not be an
issue in your configuration but it does mean that moving disks between
controllers is no longer possible. The only way to do a pseudo JBOD is
to create brok