I don't think the hardware has any problems; it only started having errors when
I upgraded OpenSolaris. It's working fine again now after a reboot. Actually, I
reread one of your earlier messages, and I didn't realize at first when you said
"non-Sun JBOD" that this didn't apply to me (in
Well, OK, the msi=0 thing didn't help after all. A few minutes after my last
message a few errors showed up in iostat, and a few minutes later the machine
was locked up hard... Maybe I will try just doing a scrub instead of my rsync
process and see how that does.
Chad
On Tue, Dec 01,
This is basically just a "me too": I'm using different hardware but seeing
essentially the same problems. The relevant hardware I have is:
---
SuperMicro MBD-H8Di3+-F-O motherboard with LSI 1068E onboard
SuperMicro SC846E2-R900B 4U chassis with two LSI SASx36 expander chips on the backplane
24 Western D
Neil Perrin wrote:
Under the hood in ZFS, writes are committed using either shadow paging or
logging, as I understand it. So what I mean to ask is whether a write(2),
pushed to the ZPL and on down the stack, can be split into multiple
transactions? Or, instead, is it guaranteed t
You have fallen into the same trap I fell into: df(1M) is not dedup-aware;
dedup occurs at the pool level, not the filesystem level. If you look at
your df output, you can see your disk seems to be growing in size, which is
counterintuitive.
Once you start using ZFS, and in particular dedup, but also
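A quick way to see the pool-level numbers that df(1M) misses is to ask the pool
directly; a minimal sketch, assuming the pool is named tank (the pool name is an
assumption here):
# zpool list tank
# zpool get dedupratio tank
zpool list reports allocated and free space for the whole pool, and the
dedupratio property shows how much deduplication is actually saving; the
per-filesystem df figures won't reflect either.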
Chad Cantwell wrote:
Hi,
For quite a while I was using OpenSolaris 2009.06 with the
opensolaris-provided mpt driver to operate a ZFS raidz2 pool of about 20T,
and this worked perfectly fine (no issues or device errors logged for
several months, no hanging). A few days ago I decided to reinsta
Mark Johnson wrote:
Chad Cantwell wrote:
Hi,
For quite a while I was using OpenSolaris 2009.06 with the
opensolaris-provided mpt driver to operate a ZFS raidz2 pool of about 20T,
and this worked perfectly fine (no issues or device errors logged for
several months, no hanging). A few days a
We actually tried this, although using the Solaris 10 version of the mpt
driver. Surprisingly, it didn't work :-)
Yours
Markus Kovero
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mark Johnson
Sent: 1 December 2009 15:57
To:
>
> What's the earliest build someone has seen this problem? I.e., if we
> binary chop, has anyone seen it in b118?
>
We have used every "stable" build from b118 up, as b118 was the first reliable
one that could be used in a CIFS-heavy environment. The problem occurs on all
of them.
- Adam
If someone from Sun will confirm that it should work to use the mpt driver from
2009.06, I'd be willing to set up a BE and try it. I still have the snapshot
from my 2009.06 install, so I should be able to mount that and grab the files
easily enough.
Just an update: my scrub completed without any timeout errors in the log. This
is with XVM and MSI disabled globally.
Perhaps. As I noted, though, it also occurs on the onboard NVidia SATA
controller when MSI is enabled. I had already put a line in /etc/system to
disable MSI for that controller, per a forum thread, and it worked great. I'm
now running with all MSI disabled via XVM, as the mpt controller is giving m
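For anyone following along, these MSI workarounds are /etc/system tunables, so
they take effect only after a reboot. A minimal sketch of the kind of entries
being discussed follows; the specific variable names are assumptions based on
this thread's era and should be verified against your build before use:
* Disable MSI for the mpt driver only (variable name is an assumption; verify for your build)
set mpt:mpt_enable_msi = 0
* Under xVM, disable MSI support globally (variable name is an assumption; verify for your build)
set xpv_psm:xen_support_msi = -1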
All,
We're going to start testing ZFS and I have a question about Top-Level
Devices (TLDs). In Sun's class, they specifically said not to use more than
9 TLDs due to performance concerns. Our storage admins make LUNs roughly
15G in size, so how would we make a large pool (1TB) if we're limited
Hi -
Using OpenSolaris 2008.11
Hope this is enough information.
Stuart
-Original Message-
From: cindy.swearin...@sun.com [mailto:cindy.swearin...@sun.com]
Sent: November 30, 2009 11:32 AM
To: Stuart Reid
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Adding drives to syste
Thanks, Pablo. I think I confused matters: I meant to respond to the issue
in bug #6733267, and somehow landed on that one...
-Original Message-
From: Pablo Méndez Hernández [mailto:pabl...@gmail.com]
Sent: Monday, November 30, 2009 12:35 PM
To: Moshe Vainer
Cc: zfs-discuss@opensolari
George, thank you very much! This is great news.
-Original Message-
From: george.wil...@sun.com [mailto:george.wil...@sun.com]
Sent: Monday, November 30, 2009 9:04 PM
To: Moshe Vainer
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] CR# 6574286, remove slog device
Moshe Vainer w
Hi Chris,
If you have 40 or so disks, then you would create 5-6 RAIDZ virtual
devices of 7-8 disks each, or possibly include two disks for the root
pool, two disks as spares, and then 36 disks for a non-root pool (for
example, 6 RAIDZ vdevs of 6 disks).
This configuration guide hasn't been updated for RAIDZ-3
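As a rough sketch of the second layout Cindy describes (device names below are
placeholders, not anyone's actual hardware; the root pool would normally be set
up by the installer on the two remaining disks):
# zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
    raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
    raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 \
    raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 \
    raidz c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 \
    spare c7t0d0 c7t1d0
That gives 36 data disks in six 6-disk raidz vdevs plus two hot spares, keeping
each grouping within the 3-9 device range discussed later in this thread.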
Cindy --
Thanks for the link!
I see in one of the examples that there are 14 TLDs (all mirrored). Does
that mean there are no performance issues with having more than 9 TLDs? In
the Sun class I attended, the instructor said to not use more than 9 TLDs,
which seems like it could be very limiting
On Dec 1, 2009, at 8:53 AM, Christopher White wrote:
Cindy --
Thanks for the link!
I see in one of the examples that there are 14 TLDs (all mirrored).
Does that mean there are no performance issues with having more than
9 TLDs? In the Sun class I attended, the instructor said to not use
Chris,
The TLD terminology is confusing, so let's think about it this way:
performance is best when only 3-9 physical devices are included in
each mirror or RAIDZ grouping, as shown in the configuration guide.
Cindy
On 12/01/09 09:53, Christopher White wrote:
Cindy --
Thanks for the link!
I see
Travis Tabbal wrote:
If someone from Sun will confirm that it should work to use the mpt
driver from 2009.06, I'd be willing to set up a BE and try it. I
still have the snapshot from my 2009.06 install, so I should be able
to mount that and grab the files easily enough.
I tried; it doesn't work
On Tue, 1 Dec 2009, Cindy Swearingen wrote:
The TLD terminology is confusing, so let's think about it this way:
performance is best when only 3-9 physical devices are included in
each mirror or RAIDZ grouping, as shown in the configuration guide.
It seems that these "TLDs" are what the rest of us
Bob - thanks, that makes sense.
The classroom book refers to "top-level virtual devices," which were referred
to as TLDs (Top-Level Devices) throughout the class. As you noted, those are
either a base LUN, a mirror, a raidz, or a raidz2.
So there's no limit to the number of TLDs/vdevs we can have, the
I was able to reproduce this problem on the latest Nevada build:
# zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
# zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0
would update 'tank' to the following configuration:
        tank
          raidz1
            c1t2d0
            c1t3d0
            c1t4d
On Tue, 1 Dec 2009, Christopher White wrote:
So there's no limit to the number of TLDs/vdevs we can have; the only
recommendation is that we have no more than 3-9 LUNs per RAIDZ vdev?
Yes. No one here has complained about problems due to too many vdevs.
They have complained due to too many
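To tie this back to the original question about 15G LUNs: since the vdev count
is not the constraint, a ~1TB pool is easy to reach. A rough worked example
(the layout is illustrative, not a recommendation):
  1 TB target / 15 GB per LUN            ~= 67 LUNs of raw capacity
  10 raidz TLDs of 8 LUNs each = 80 LUNs; usable ~= 10 x 7 x 15 GB ~= 1050 GB
Ten TLDs exceeds the "9 TLD" figure from the class, but per the replies above
the TLD count is not the limiting factor; only the width of each raidz group is.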
First I tried just upgrading to b127; that had a few issues besides the mpt
driver. After that I did a clean install of b127, but no, I don't have my
osol2009.06 root still there. I wasn't sure how to install another copy and
leave it there (I suspect it is possible, since I saw when doing upgr
To update everyone: I did a complete zfs scrub and it generated no errors in
iostat; I have 4.8T of data on the filesystem, so it was a fairly lengthy test.
The machine also has exhibited no evidence of instability. If I were to start
copying a lot of data to the filesystem again, though
It seems that device names aren't always updated when importing pools if
devices have moved. I am not sure if this is only a cosmetic issue or if it
could actually be a real problem: could it lead to the device not being found
at a later import?
/ragge
(This is on snv_127.)
I ran the followin
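The commands ragge ran are cut off above; as a sketch of the kind of check
being described (the pool name is an assumption), one can re-import the pool
and compare the device names zpool reports against the new physical paths:
# zpool export tank
# zpool import -d /dev/dsk tank
# zpool status tank
If the device names printed by zpool status still reflect the old locations
after the disks have physically moved, that matches the behavior described here.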
Hi Chris,
It sounds like there is some confusion about the recommendation for raidz?
vdevs. The recommendation is that each raidz? TLD contain a "single-digit"
number of disks, so up to 9. The total number of these single-digit TLDs is
not practically limited.
Craig
Christopher White wrote:
> Cindy -
This may be a dup of 6881631.
Regards,
markm
On 1 Dec 2009, at 15:14, Cindy Swearingen
wrote:
I was able to reproduce this problem on the latest Nevada build:
# zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
# zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0
would update 'tank' to the follow