So there is no current way to specify the creation of a 3 disk raid-z
array with a known missing disk?
On 12/5/06, David Bustos wrote:
> Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
> > I currently have a 400GB disk that is full of data on a linux system.
> >
> These are the same as the acard devices we've discussed here
> previously; earlier hyperdrive models were their own design. Very
> interesting, and my personal favourite, but I don't know of anyone
> actually reporting results yet with them as ZIL.
Here's one report:
http://www.mail-archive.co
Are you looking for something like:
kstat -c disk sd:::
Someone can correct me if I'm wrong, but I think the documentation for
the above should be at:
http://src.opensolaris.org/source/xref/zfs-crypto/gate/usr/src/uts/common/avs/ns/sdbc/cache_kstats_readme.txt
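As a minimal sketch of how that can be narrowed down (the nread/nwritten statistic names assume the standard io-class kstats, and sd may need to be swapped for your disk driver):

  # Parseable output of the read/write byte counters for all sd instances, every 5s
  kstat -p -c disk sd:::nread sd:::nwritten 5

iostat -xn 5 reports much the same counters in a friendlier per-device layout.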
I'm not sure about the file i/o vs
Are these machines 32-bit by chance? I ran into similar, seemingly
inexplicable hangs, which Marc correctly diagnosed and which have not
reappeared since:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-August/049994.html
Thomas
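For anyone else hitting this on 32-bit hardware: the mitigation that usually gets suggested (assuming the diagnosis is kernel virtual address space pressure from the ARC, which is my reading of that thread rather than a confirmed fact) is to cap the ARC in /etc/system and reboot. The 512MB value below is only an example:

  # Example only: limit the ZFS ARC to 512MB on a 32-bit system
  echo "set zfs:zfs_arc_max = 0x20000000" >> /etc/system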
For what it's worth, I see this as well on 32-bit Xeons, 1GB RAM, and
dual AOC-SAT2-MV8 controllers (large amounts of I/O sometimes result in
a lockup requiring a reboot, though my setup is Nexenta b85). There is
nothing in the logs, and load average does not increase significantly.
It could be the regular Marvell driver issue.
If I have two raidz sets, 5x400G and a later-added 5x1T, should I expect
that streaming writes would go primarily to only one of the raidz sets?
Or is this some side effect of my non-ideal hardware setup? I thought
that adding additional capacity to a pool would then automatically
balance writes to both sets.
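One way to watch how the writes are actually being spread, as a quick sketch ('tank' is a placeholder pool name):

  # Bandwidth and operations broken out per raidz vdev, refreshed every 5 seconds
  zpool iostat -v tank 5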
Thanks, Roch! Much appreciated knowing what the problem is and that a
fix is in a forthcoming release.
Thomas
On 6/25/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
Sorry about that; looks like you've hit this:
6546683 marvell88sx driver misses wakeup for mv_empty_cv
http://bugs.
We have seen this behavior, but it appears to be entirely related to the
hardware: the "Intel IPMI" firmware swallows up the NFS traffic on port 623
directly in the network hardware, so it never reaches the host.
http://blogs.sun.com/shepler/entry/port_623_or_the_mount
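A quick check from the Linux client side, as a sketch (assuming NFS over TCP to the standard server port 2049): look at which local port the mount picked, since a source port of 623 is exactly what the IPMI firmware intercepts:

  # On the Linux client: show the NFS connection and its local (source) port
  netstat -tn | grep ':2049'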
Unfortunately, this nfs hangs acr
So it is expected behavior on my Nexenta alpha 7 server for Sun's nfsd
to stop responding after 2 hours of running a bittorrent client over
nfs4 from a linux client, causing zfs snapshots to hang and requiring
a hard reboot to get the world back in order?
Thomas
There is no NFS over ZFS issue (
Perhaps someone on this mailing list can shed some light on some odd
zfs behavior I encountered this weekend. I have an array of 5
400GB drives in a raidz, running on Nexenta. One of these drives
showed a SMART error (HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE
FAILURE [asc=5d, ascq=10]).
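For the archives, the sequence I would expect to use here, as a sketch (pool name 'tank' and device c2t3d0 are placeholders for the real pool and failing drive):

  zpool status -v tank        # confirm which device is reporting errors
  zpool offline tank c2t3d0   # take the failing drive out of the raidz
  # ...physically swap the drive in the same slot...
  zpool replace tank c2t3d0   # resilver onto the replacement
  zpool status tank           # watch the resilver progress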
> for what purpose ?
Darren's correct, it's a simple case of ease of use. Not
show-stopping by any means but would be nice to have.
Thomas
Since I have been unable to find the answer online, I thought I would
ask here. Is there a knob to turn on a zfs filesystem to put the .zfs
snapshot directory into all of the child directories of the
filesystem, like the .snapshot directories of NetApp systems, instead
of just the root of the filesystem?
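As far as I know there is no per-directory equivalent of NetApp's .snapshot; snapshots are only exposed under .zfs at the root of each filesystem. The closest knob is the snapdir property, which merely controls whether that root .zfs directory shows up in listings. A minimal sketch, assuming a dataset named tank/home:

  zfs set snapdir=visible tank/home
  ls /tank/home/.zfs/snapshot   # snapshots still appear only at the filesystem root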
In the same vein...
I currently have a 400GB disk that is full of data on a linux system.
If I buy 2 more disks and put them into a raid-z'ed zfs under solaris,
is there a generally accepted way to build a degraded array with the
2 disks, copy the data to the new filesystem, and then move the
original disk into the array?
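The workaround that usually gets suggested for this, as a sketch (the device names c1d0/c2d0/c3d0 and the 400g size are placeholders), is to stand a sparse file in for the missing disk and run the raidz degraded until the original disk is free:

  mkfile -n 400g /var/tmp/fakedisk                          # sparse file, uses no real space
  zpool create -f tank raidz c1d0 c2d0 /var/tmp/fakedisk    # -f: the raidz mixes a file with disks
  zpool offline tank /var/tmp/fakedisk                      # run degraded while copying the data
  # ...copy the 400GB of data into the pool...
  zpool replace tank /var/tmp/fakedisk c3d0                 # swap the now-empty original disk in

Note that handing the original disk to zpool replace destroys the old copy, and the pool has no redundancy until zpool status shows the resilver has completed.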