Same problem here after some patching :(((
42GB free in a 4.2TB zpool
We can't upgrade to U3 without planning it.
Is there any way to solve the problem? Remove the latest patches?
Our uptime with ZFS is getting very low ...
thanks
Gino
___
Hi All,
this morning one of our edge FC switches died. Thanks to multipathing, all the
nodes using that switch kept running except the ONLY ONE still using ZFS (we
went back to UFS on all our production servers)!
On that one we had one path fail and then a CPU panic :(
Here are the logs:
Feb 23
Hi Jason,
on Saturday we ran some tests and found that disabling an FC port under heavy load
(MPxIO enabled) often leads to a panic (using a RAID-Z!).
No problems with UFS ...
later,
Gino
___
Hi Jason,
we did the tests using S10U2, two FC cards, MPxIO.
5 LUNs in a RAID-Z group.
Each LUN was visible to both FC cards.
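The pool was built more or less like this (the pool name and device names below are just placeholders, not our real MPxIO device paths):

  zpool create zpool1 raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0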
Gino
> Hi Gino,
>
> Was there more than one LUN in the RAID-Z using the
> port you disabled?
>
___
Feb 28 05:47:31 server141 genunix: [ID 403854 kern.notice] assertion failed: ss
== NULL, file: ../../common/fs/zfs/space_map.c, line: 81
Feb 28 05:47:31 server141 unix: [ID 10 kern.notice]
Feb 28 05:47:31 server141 genunix: [ID 802836 kern.notice] fe8000d559f0 fb9acff3 ()
Feb 28 0
Hi All,
yesterday we did some tests with ZFS using a new server and a new JBOD that is going
into production this week.
Here is what we found:
1) Solaris seems unable to recognize as a "disk" any FC disk already labeled by a
storage processor; cfgadm reports them as "unknown".
We had to boot Linux and
> > Conclusion:
> > After a day of tests we are starting to think that ZFS
> > doesn't work well with MPxIO.
> >
>
> What kind of array is this? If it is not a Sun array
> then how are you
> configuring mpxio to recognize the array?
We are facing the same problems with a JBOD (EMC DAE2), a Storag
> What makes you think that these arrays work with
> mpxio? Every array does
> not automatically work.
They are working rock solid with mpxio and UFS!
gino
___
> - when using highly available SAN storage, export the
> disks as LUNs and use zfs to do your redundancy;
> using array redundancy (say 5 mirrors that you will
> zpool together as a stripe) will cause the machine
> to crap out and die if any of those mirrored
> devices, say, gets too much io and
Hi all.
One of our servers had a panic and now can't mount the zpool anymore!
Here is what I get at boot:
Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed:
ss->ss_start <= start (0x67b800 <= 0x679
Hi All,
Last week we had a panic caused by ZFS and then we had a corrupted zpool!
Today we are doing some tests with the same data, but on a different
server/storage array. While copying the data ... panic!
And again we had a corrupted zpool!!
Mar 28 12:38:19 SERVER144 genunix: [ID 403854 kern
I forgot to mention we are using S10U2.
Gino
___
On HDS arrays we set sd_max_throttle to 8.
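For reference, that is just one line in /etc/system followed by a reboot (the ssd variant applies when the FC disks sit on the ssd driver):

  set sd:sd_max_throttle=8
  * for FC disks on the ssd driver use:
  set ssd:ssd_max_throttle=8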
gino
___
> Gino Ruopolo wrote:
> > On HDS arrays we set sd_max_throttle to 8.
>
> HDS provides an algorithm for estimating
> sd[d]_max_throttle in their
> planning docs. It will vary based on a number of
> different parameters.
> AFAIK, EMC just sets it to 20.
> -- rich
> Gino Ruopolo wrote:
> > Hi All,
> >
> > Last week we had a panic caused by ZFS and then we
> > had a corrupted zpool! Today we are doing some tests
> > with the same data, but on a different server/storage
> > array. While copying the data ... panic!
Unfortunately we don't have experience with NexSAN.
HDS are quite conservative and with a value of 8 we run quite stably (with UFS).
We also found that value appropriate for the old HP EMA arrays (old units but very,
very reliable! Digital products were rock solid).
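(If I remember the HDS planning guide correctly, the estimate is roughly the per-port command queue depth divided by the number of LUNs behind that port, for example:

  256 outstanding commands per port / 32 LUNs = sd_max_throttle 8

Treat the 256 and the 32 as numbers from memory, not as our real configuration; check the planning doc for your array model.)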
gino
___
Hi Matt,
trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in
/etc/system gives us:
Apr 3 20:35:56 SERVER141 ^Mpanic[cpu3]/thread=fffec3860f20:
Apr 3 20:35:56 SERVER141 genunix: [ID 603766 kern.notice] assertion failed:
ss->ss_start <= start (0x67b800 <= 0x67
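For completeness, the /etc/system entries we are trying the import with look like this; pairing aok=1 (which turns a failed assertion into a warning instead of a panic) with zfs_recover is our own assumption, so use it with care:

  set zfs:zfs_recover=1
  set aok=1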
Other test, same setup.
Solaris 10:
zpool/a: filesystem containing over 10 million subdirs, each containing 10 files of about 1k
zpool/b: empty filesystem
rsync -avx /zpool/a/* /zpool/b
time: 14 hours (iostat showing %b = 100 for each LUN in the zpool)
FreeBSD:
/vol1/a dir
Hi Chris,
both servers have the same setup: OS on a local hw raid mirror, the other filesystems on a SAN.
We found really bad performance, but also that under that heavy I/O the zfs pool was
more or less frozen.
I mean, a zone living on the same zpool was completely unusable because of the I/O
load.
We use FSS, but CPU load was really low under the tests.
>
> Hi Gino,
>
> Can you post the 'zpool status' for each pool and
> 'zfs get all' for each fs? Any interesting data in the dmesg output?
sure.
1) nothing on dmesg (are you thinking about shared IRQ?)
2) Only using one pool for tests:
# zpool status
pool: zpool1
state: ONLINE
scrub:
> We use FSS, but CPU load was really load under the
> tests.
errata: We use FSS, but CPU load was really LOW under the tests.
___
> Looks like you have compression turned on?
We ran tests with compression on and off and found almost no difference.
CPU load was under 3% ...
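(We toggled it per dataset between runs with something like the following; the dataset name is only an example:)

  zfs set compression=on zpool1/a
  zfs set compression=off zpool1/a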
___
Update ...
iostat output during "zpool scrub"
                    extended device statistics
device       r/s    w/s  Mr/s  Mw/s  wait  actv  svc_t  %w  %b
sd34         2.0  395.2   0.1   0.6   0.0  34.8   87.7   0 100
sd35        21.0  312.2   1.2   2.9   0.0  26.0   78.0   0  79
> Update ...
>
> iostat output during "zpool scrub"
>
>                     extended device statistics
> device       r/s    w/s  Mr/s  Mw/s  wait  actv  svc_t  %w  %b
> sd34         2.0  395.2   0.1   0.6   0.0  34.8   87.7   0 100
> sd35        21.0  312.2   1.2   2.9   0.0  26.0   78.0   0  79
> sd36 ...
other example:
rsyncing from/to the same zpool:
device       r/s    w/s  Mr/s  Mw/s  wait  actv  svc_t  %w  %b
c6          25.0  276.5   1.3   3.8   1.9  16.5   61.1   0 135
sd44         6.0  158.3   0.3   0.4   1.9  15.5  106.2  33 100
sd45         6.0   37.1   0.3   1.1   0.0   0.3
Thank you Bill for your clear description.
Now I have to find a way to justify to my head office that, after spending
100k+ on hardware and migrating to "the most advanced OS", we are running
about 8 times slower :)
Anyway, I have a problem much more serious than the rsync speed. I hope
yo
>
> More generally, I could suggest that we use an odd
> number of vdevs
> for raidz and an even number for mirrors and raidz2.
> Thoughts?
uhm ... we found serious performance problems even with a RAID-Z of 3 LUNs ...
Gino
___
> > Even if we're using FSS, Solaris seems unable to
> > give a small amount of I/O resource to ZONEX's
> > activity ...
> >
> > I know that FSS doesn't deal with I/O but I think
> > Solaris should be smarter ..
>
> What about using ipqos (man ipqos)?
I'm not referring to "network I/O" but to "storage I/O".
Hi All,
we have some ZFS pools in production with hundreds of filesystems and thousands of
snapshots on them.
Right now we do backups with zfs send/receive and some scripting, but I'm searching
for a way to mirror each zpool to another one for backup purposes (so
including all snapshots!). Is that possible?
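To give an idea, our current scripting does roughly this per filesystem, one snapshot at a time (simplified sketch; pool, filesystem and host names are placeholders):

  # initial full stream of the oldest snapshot
  zfs send zpool1/fs@snap1 | ssh backuphost zfs receive backup/fs
  # then an incremental stream between each pair of consecutive snapshots
  zfs send -i zpool1/fs@snap1 zpool1/fs@snap2 | ssh backuphost zfs receive backup/fs
  zfs send -i zpool1/fs@snap2 zpool1/fs@snap3 | ssh backuphost zfs receive backup/fs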
>
> Not right now (without a bunch of shell-scripting).
> I'm working on being able to "send" a whole tree of
> filesystems & their snapshots.
> Would that do what you want?
Exactly! When do you think that really useful feature will be available?
thanks,
gino
___