Hi Noel.
zpool iostat -v
Output from a working pool and from the problem pool would help to show
the type of pool and its capacity.
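For reference, the sort of thing I mean (pool name and figures invented,
layout from memory):

# zpool iostat -v tank
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         134G   144G     25     10  2.05M   318K
  mirror     134G   144G     25     10  2.05M   318K
    c1t0d0      -      -     12      5  1.02M   159K
    c1t1d0      -      -     13      5  1.03M   159K
----------  -----  -----  -----  -----  -----  -----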
I assume the problem is not the source of the data.
Reading a large number of small files typically requires lots
and lots of threads (say 100 per source disk).
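A rough sketch of what I mean, using GNU xargs (older Solaris xargs
lacks -P; the path is hypothetical):

# find /tank/smallfiles -type f -print0 | xargs -0 -n 64 -P 100 cat > /dev/null

With ~100 readers in flight per source disk the pool can keep the device
queues full instead of waiting on one small read at a time.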
Is da
Damon Atkins wrote:
The zpool.cache file makes clustering complex. {Assume the man page is
still correct}
The man page is correct. zpool.cache helps make clustering feasible
because it differentiates those file systems which are of interest from
those which are not. This is particularly impor
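For example (pool and device names hypothetical), a pool that a cluster
framework controls can be kept out of the default cache:

# zpool create -o cachefile=none tank c1t0d0

or pointed at a cache file the cluster software manages:

# zpool set cachefile=/var/cluster/zpool.cache tank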
Since I moved to ZFS, sorry, I tend to have more problems after power
failures. We have around one outage per week, on average, and the
machine(s) don't boot up as one might expect (from ZFS).
Just today: reboot, and rebooting in circles; with no chance on my side
to see the 30-40 lines of hex-stu
Hello,
I have a question about using the `copies` option in zfs.
If I were to make a non-redundant zpool of say 3 hard drives, but set
the `copies` option to something like 2 or 3, would that protect me in
the event of a hard drive failure? Or would raidz be the only way to
really protect
The zpool.cache file makes clustering complex. {Assume the man page is
still correct}
From the zpool man page:
cachefile=path | "none"
Controls the location of where the pool configuration is cached.
Discovering all pools on system startup requires a cached copy of the
configuration data tha
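At import time you can also hand zpool an alternate cache, e.g. (path
hypothetical):

# zpool import -c /var/cluster/zpool.cache tank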
Uwe Dippel wrote:
Since I moved to ZFS, sorry, I tend to have more problems after power
failures. We have around one outage per week, on average, and the
machine(s) don't boot up as one might expect (from ZFS).
Just today: reboot, and rebooting in circles; with no chance on my
side to see the 30-
Robert Parkhurst wrote:
I have a question about using the `copies` option in zfs.
If I were to make a non-redundant zpool of say 3 hard drives, but set
the `copies` option to something like 2 or 3, would that protect me in
the event of a hard drive failure?
No, it won't, because if a drive co
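For what it's worth, copies is a per-dataset property, e.g. (dataset
name hypothetical):

# zfs set copies=2 tank/important

That guards against isolated bad blocks, but in a non-redundant pool the
loss of a whole disk still takes the pool down.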
Uwe Dippel wrote:
At earlier boot failures after a power outage, the behaviour was
different, but the boot archive was recognized as inconsistent a
handful of times. This bugs me. Otherwise, the machines run through
without trouble, and with ZFS, the chances for a damaged boot archive
should
C. wrote:
I've worked hard to resolve this problem.. google opensolaris rescue
will show I've hit it a few times... Anyway, short version is it's
not zfs at all, but stupid handling of bootarchive. If you've
installed something like a 3rd party driver (OSS/Virtualbox) you'll
likely hit thi
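(For anyone hitting this: the usual fix, as far as I know, is to rebuild
the archive by hand after installing such drivers, before rebooting:

# bootadm update-archive

so the archive matches what's actually on disk.)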
Jim,
There are no space constraints or quotas...
Thanks,
-Nobel
Jim Mauro wrote:
Cross-posting to the public ZFS discussion alias.
There's nothing here that requires confidentiality, and
the public alias is a much broader audience with a larger
number of experienced ZFS users...
As to the is
Uwe Dippel wrote:
C. wrote:
I've worked hard to resolve this problem.. google opensolaris rescue
will show I've hit it a few times... Anyway, short version is it's
not zfs at all, but stupid handling of bootarchive. If you've
installed something like a 3rd party driver (OSS/Virtualbox) you'l
Hello Mattias,
Monday, March 23, 2009, 9:08:53 PM, you wrote:
MP> It would be nice to be able to move disks around when a system is
MP> powered off and not have to worry about a "cache" when I boot.
You don't have to unless you are talking about shared disks and
importing a pool on another syste
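The supported sequence for shared disks is, with a hypothetical pool
name:

host1# zpool export tank
host2# zpool import tank

If the first host died without exporting, the second needs zpool import
-f; the cache file is what lets each host auto-import only its own pools
at boot.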
Hello Richard,
Friday, March 20, 2009, 12:23:40 AM, you wrote:
RE> It depends on your BIOS. AFAIK, there is no way for the BIOS to
RE> tell the installer which disks are valid boot disks. For OBP (SPARC)
RE> systems, you can have the installer know which disks are available
RE> for booting.
I
I'm happy to see that someone else brought up this topic. I had a nasty
long power failure last night that drained the APC/UPS batteries dry.[1]
:-(
I changed the subject line somewhat because I feel that the issue is one
of honesty as opposed to reliability.
I *feel* that ZFS is reliable out pa
On Tue, 24 Mar 2009, Dennis Clarke wrote:
However, I have repeatedly run into problems when I need to boot after a
power failure. I see vdevs being marked as FAULTED regardless of whether
there are actually any hard errors reported by the on-disk SMART firmware. I am able
to remove these FAULTed devices
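(For reference: once you have convinced yourself the devices are
actually healthy, the usual way to reset the fault state and error
counters is

# zpool clear tank

with your pool name in place of tank.)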
Where is the boot-interest mailing list??
A review of the mailing lists here:
http://mail.opensolaris.org/mailman/listinfo/
does not show a boot-interest mailing list, or anything similar. Is it
on a different site?
Thanks
Richard Elling wrote:
Uwe Dippel wrote:
C. wrote:
I've worked hard t
> On Tue, 24 Mar 2009, Dennis Clarke wrote:
>>
>> However, I have repeatedly run into problems when I need to boot after a
>> power failure. I see vdevs being marked as FAULTED regardless of whether
>> there are actually any hard errors reported by the on-disk SMART
>> firmware. I am able to remove
Jerry K wrote:
Where is the boot-interest mailing list??
A review of the mailing lists here:
http://mail.opensolaris.org/mailman/listinfo/
does not show a boot-interest mailing list, or anything similar. Is
it on a different site?
My apologies, boot-interest is/was a Sun internal list. Try o
On Tue, 24 Mar 2009, Dennis Clarke wrote:
You would think so eh?
But a transient problem that only occurs after a power failure?
Transient problems are most common after a power failure or during
initialization.
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.o
> On Tue, 24 Mar 2009, Dennis Clarke wrote:
>>
>> You would think so eh?
>> But a transient problem that only occurs after a power failure?
>
> Transient problems are most common after a power failure or during
> initialization.
Well, the issue here is that power was on for ten minutes before I tr
On Tue, 24 Mar 2009, Dennis Clarke wrote:
Regardless, the point is that the ZPool shows no faults at boot time and
then shows phantom faults *after* I go to init 3.
That does seem odd.
Yes, it does. I assume that you have already taken the obvious first
steps and ensured that your kernel an
Dennis Clarke wrote:
On Tue, 24 Mar 2009, Dennis Clarke wrote:
However, I have repeatedly run into problems when I need to boot after a
power failure. I see vdevs being marked as FAULTED regardless of whether
there are actually any hard errors reported by the on-disk SMART
firmware. I am able to re
> MP> It would be nice to be able to move disks around when a system is
> MP> powered off and not have to worry about a "cache" when I boot.
>
> You don't have to unless you are talking about shared disks and
> importing a pool on another system while the original is powered off
> and the pool was n
+1
On Mon, Mar 23, 2009 at 8:43 PM, Damon Atkins wrote:
> PS: it would be nice to have a zpool diskinfo that reports whether the
> device belongs to a zpool, imported or not, and all the details about any
> zpool it can find on the disk, e.g. file-systems (zdb is only for ZFS
> "engineers", says the man pa
Hey, Dennis -
I can't help but wonder if the failure is a result of zfs itself finding
some problems post restart...
Is there anything in your FMA logs?
fmstat
for a summary and
fmdump
for a summary of the related errors
e.g.:
drteeth:/tmp # fmdump
TIME UUID
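If the fault log comes up empty, the error log often still holds the raw
telemetry:

# fmdump -e

lists the ereports, and fmdump -eV prints them in full.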
> Hey, Dennis -
>
> I can't help but wonder if the failure is a result of zfs itself finding
> some problems post restart...
Yes, yes, this is what I am feeling too, but I need to find the data,
and then I can sleep at night. I am certain that ZFS does not just toss
out faults on a whim bec