Bill Sommerfeld wrote:
> On Thu, 2008-08-28 at 13:05 -0700, Eric Schrock wrote:
>
>> A better option would be to not use this to perform FMA diagnosis, but
>> instead work into the mirror child selection code. This has already
>> been alluded to before, but it would be cool to keep track of latency
G'Day Ben,
ARC visibility is important; did you see Neel's arcstat?:
http://www.solarisinternals.com/wiki/index.php/Arcstat
Try -x for various sizes, and -v for definitions.
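A quick usage sketch, assuming the script is saved as arcstat.pl and run as
root; only the two flags mentioned above are exercised here:
# ./arcstat.pl -x
# ./arcstat.pl -v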
On Thu, Aug 21, 2008 at 10:23:24AM -0700, Ben Rockwood wrote:
> It's a starting point anyway. The key is to try
On Thu, 2008-08-28 at 13:05 -0700, Eric Schrock wrote:
> A better option would be to not use this to perform FMA diagnosis, but
> instead work into the mirror child selection code. This has already
> been alluded to before, but it would be cool to keep track of latency
> over time, and use this to
On Thu, 28 Aug 2008, Miles Nordin wrote:
> None of the decisions I described it making based on performance
> statistics are ``haywire''---I said it should funnel reads to the
> faster side of the mirror, and do this really quickly and
> unconservatively. What's your issue with that?
From what
Hi
I'm not sure that the ZFS pool meets this requirement. I have
# lufslist SXCE_94
Filesystem              fstype    device size  Mounted on   Mount Options
----------------------- --------  -----------  -----------  -------------
/dev/dsk/c1t2d0s1
On Thu, Aug 28, 2008 at 08:34:24PM +0100, Ross Smith wrote:
>
> Personally, if a SATA disk wasn't responding to any requests after 2
> seconds I really don't care if an error has been detected; as far as
> I'm concerned, that disk is faulty.
Unless you have power management enabled, or there's a b
Many mid-range/high-end RAID controllers work by having a small timeout on
individual disk I/O operations. If the disk doesn't respond quickly, they'll
issue an I/O to the redundant disk(s) to get the data back to the host in a
reasonable time. Often they'll change parameters on the disk to limi
> "bf" == Bob Friesenhahn <[EMAIL PROTECTED]> writes:
bf> If the system or device is simply overwhelmed with work, then
bf> you would not want the system to go haywire and make the
bf> problems much worse.
None of the decisions I described it making based on performance
statistics
On Thu, Aug 21, 2008 at 8:47 PM, Ben Rockwood <[EMAIL PROTECTED]> wrote:
> New version is available (v0.2) :
>
> * Fixes divide by zero,
> * includes tuning from /etc/system in output
> * if prefetch is disabled I explicitly say so.
> * Accounts for jacked anon count. Still need improvement he
> "es" == Eric Schrock <[EMAIL PROTECTED]> writes:
es> I don't think you understand how this works. Imagine two
es> I/Os, just with different sd timeouts and retry logic - that's
es> B_FAILFAST. It's quite simple, and independent of any
es> hardware implementation.
AIUI the
Hi guys,
Bob, my thought was to have this timeout as something that can be optionally
set by the administrator on a per-pool basis. I'll admit I was mainly thinking
about reads and hadn't considered the write scenario, but even having thought
about that it's still a feature I'd like. After a
On Thu, 28 Aug 2008, Miles Nordin wrote:
>
> you're right in terms of fixed timeouts, but there's no reason it
> can't compare the performance of redundant data sources, and if one
> vdev performs an order of magnitude slower than another set of vdevs
> with sufficient redundancy, stop issuing read
On Thu, Aug 28, 2008 at 02:17:08PM -0400, Miles Nordin wrote:
>
> you're right in terms of fixed timeouts, but there's no reason it
> can't compare the performance of redundant data sources, and if one
> vdev performs an order of magnitude slower than another set of vdevs
> with sufficient redundancy
> "jl" == Jonathan Loran <[EMAIL PROTECTED]> writes:
jl> Fe = 46% failures/month * 12 months = 5.52 failures
the original statistic wasn't of this kind. It was ``likelihood a
single drive will experience one or more failures within 12 months''.
so, you could say, ``If I have a thousan
On Thu, Aug 28, 2008 at 12:38 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:
> On Thu, 28 Aug 2008, Toby Thain wrote:
>
> > What goes unremarked here is how the original system has coped
> > reliably for decades of (one guesses) geometrically growing load.
>
> Fantastic engineering from a company
> "es" == Eric Schrock <[EMAIL PROTECTED]> writes:
es> Finally, imposing additional timeouts in ZFS is a bad idea.
es> [...] As such, it doesn't have the necessary context to know
es> what constitutes a reasonable timeout.
you're right in terms of fixed timeouts, but there's no reason
Miles Nordin wrote:
> What is a ``failure rate for a time interval''?
>
>
Failure rate => Failures/unit time
Failure rate for a time interval => (Failures/unit time) * time
For example, if we have a failure rate:
Fr = 46% failures/month
Then the expectation value of a failure in one year
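Spelling out the truncated arithmetic with the figure quoted earlier in the
thread (jl> Fe = 46% failures/month * 12 months = 5.52 failures), the expected
number of failures per drive over one year under that assumed rate is
$F_e = 0.46\,\mathrm{month}^{-1} \times 12\,\mathrm{months} = 5.52$ failures.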
On Aug 28, 2008, at 11:38 AM, Bob Friesenhahn wrote:
> The old FORTRAN code
> either had to be ported or new code written from scratch.
Assuming it WAS written in FORTRAN, there is no reason to believe it
wouldn't just compile with a modern Fortran compiler. I've often run
codes originally w
On Thu, 28 Aug 2008, Toby Thain wrote:
>
> "two 20-year-old redundant mainframe configurations ... that
> apparently are hanging on for dear life until reinforcements arrive
> in the form of a new, state-of-the-art system this winter."
>
> And we all know that 'new, state-of-the-art systems' are si
On Thu, Aug 28, 2008 at 09:25:14AM -0700, Trevor Watson wrote:
> Looking at the GRUB menu, it appears as though the flags "-B $ZFS-BOOTFS" are
> needed to be passed to the kernel. Is this something I can add to: kernel$
> /boot/$ISADIR/xen.gz or is there some other mechanism required for bootin
Kenny wrote:
>
> How did you determine from the format output the GB vs MB amount??
>
> Where do you compute 931 GB vs 932 MB from this??
>
> 2. c6t600A0B800049F93C030A48B3EA2Cd0 /scsi_vhci/[EMAIL PROTECTED]
>
> 3. c6t600A0B800049F93C030D48B3EAB6d0
> /scsi_vhci/[EMAIL PROTECTED]
>
It's in t
Ross, thanks for the feedback. A couple points here -
A lot of work went into improving the error handling around build 77 of
Nevada. There are still problems today, but a number of the
complaints we've seen are on s10 software or older nevada builds that
didn't have these fixes. Anything from
Ok so I knew it had to be operator headspace...
I found my error and have fixed it in CAM. Thanks to all for helping my
education!!
However I do have a question. And pardon if it's a 101 type...
How did you determine from the format output the GB vs MB amount??
Where do you compute 931 GB vs 932 MB from this??
On Thu, 28 Aug 2008, Kenny wrote:
> 2. c6t600A0B800049F93C030A48B3EA2Cd0
> /scsi_vhci/[EMAIL PROTECTED]
Good.
> 3. c6t600A0B800049F93C030D48B3EAB6d0
> /scsi_vhci/[EMAIL PROTECTED]
Oops! Oops! Oops!
It seems that some of your drives have the full 931.01G
On Thu, 28 Aug 2008, Kenny wrote:
> Bob, Thanks for the reply. Yes I did read your white paper and am using
> it!! Thanks again!!
>
> I used zpool iostat -v and it didn't give the information as advertised...
> see below
> see below
The lack of size information seems quite odd.
Bob
Take a look at my xVM/GRUB config:
http://malsserver.blogspot.com/2008/08/installing-xvm.html
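Roughly, such an entry takes the shape below (a sketch only; the findroot
signature here is a placeholder, not Mark's actual config):
title Solaris xVM
findroot (pool_rpool,0,a)
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive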
On Thu, Aug 28, 2008 at 9:25 AM, Trevor Watson <[EMAIL PROTECTED]>wrote:
> I just ran live-upgrade of my system from nv94/UFS to nv96/ZFS on x86.
>
> nv96/ZFS boots okay. However, I can't boot the Solari
> "rm" == Robert Milkowski <[EMAIL PROTECTED]> writes:
rm> Please look for slides 23-27 at
rm> http://unixdays.pl/i/unixdays-prezentacje/2007/milek.pdf
yeah, ok, ONCE AGAIN, I never said that checksums are worthless.
relling: some drives don't return errors on unrecoverable read even
> "re" == Richard Elling <[EMAIL PROTECTED]> writes:
re> There is no error in my math. I presented a failure rate for
re> a time interval,
What is a ``failure rate for a time interval''?
AIUI, the failure rate for a time interval is 0.46% / yr, no matter how
many drives you have.
On Thu, 28 Aug 2008, Ross wrote:
>
> I believe ZFS should apply the same tough standards to pool
> availability as it does to data integrity. A bad checksum makes ZFS
> read the data from elsewhere, why shouldn't a timeout do the same
> thing?
A problem is that for some devices, a five minute
I just ran live-upgrade of my system from nv94/UFS to nv96/ZFS on x86.
nv96/ZFS boots okay. However, I can't boot the Solaris xVM partition as the
GRUB entry does not contain the necessary magic to tell grub to use ZFS instead
of UFS.
Looking at the GRUB menu, it appears as though the flags "-
On Aug 27, 2008, at 4:38 PM, Tim wrote:
On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins <[EMAIL PROTECTED]> wrote:
Does anyone have any tuning tips for a Subversion repository on
ZFS? The
repository will mainly be storing binary (MS Office documents).
It looks like a vanilla, uncompressed f
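If it helps to have something concrete to start from: a dedicated dataset for
the repository is a common first step, e.g.
# zfs create -o compression=on -o atime=off tank/svnrepos
where the pool/dataset name and the property choices are only an illustration,
not a recommendation made in this thread.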
exactly :)
On 8/28/08, Kyle McDonald <[EMAIL PROTECTED]> wrote:
> Daniel Rock wrote:
>>
>> Kenny schrieb:
>> >2. c6t600A0B800049F93C030A48B3EA2Cd0
>>
>> > /scsi_vhci/[EMAIL PROTECTED]
>> >3. c6t600A0B800049F93C030D48B3EAB6d0
>>
>> > /scsi_vhci/[EMAIL
[EMAIL PROTECTED] wrote on 08/28/2008 09:00:23 AM:
>
> On 28-Aug-08, at 10:54 AM, Toby Thain wrote:
>
> >
> > On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
> >
> >> It is rare to see this sort of "CNN Moment" attributed to file
> >> corruption.
> >> http://www.eweek.com/c/a/IT-Infrastructure/Co
Robert Milkowski wrote:
> Hello Miles,
>
> Wednesday, August 27, 2008, 10:51:49 PM, you wrote:
>
> MN> It's not really enough for me, but what's more the case doesn't match
> MN> what we were looking for: a device which ``never returns error codes,
> MN> always returns silently bad data.'' I asked
On 28-Aug-08, at 10:54 AM, Toby Thain wrote:
>
> On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
>
>> It is rare to see this sort of "CNN Moment" attributed to file
>> corruption.
>> http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
Hi,
I think LU 94->96 would be fine. If there are no zones on your system,
simply do:
# cd /Solaris_11/Tools/Installers
# liveupgrade20 --nodisplay
# lucreate -c BE94 -n BE96 -p newpool (the new pool must be on an SMI-labeled disk)
# luupgrade -u -n BE96 -s
# luactivate BE96
# init 6
Dur
Hello Miles,
Wednesday, August 27, 2008, 10:51:49 PM, you wrote:
MN> It's not really enough for me, but what's more the case doesn't match
MN> what we were looking for: a device which ``never returns error codes,
MN> always returns silently bad data.'' I asked for this because you said
MN> ``How
On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
> It is rare to see this sort of "CNN Moment" attributed to file
> corruption.
> http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
>
"two 20-year-old redundant mainframe c
On Thu, 28 Aug 2008, Paul Floyd wrote:
> Does anyone have a pointer to a howto for doing a liveupgrade such that
> I can convert the SXCE 94 UFS BE to ZFS (and liveupgrade to SXCE 96
> while I'm at it) if this is possible? Searching with google shows a lot
> of blogs that describe the early pro
Miles Nordin wrote:
> re> Indeed. Intuitively, the AFR and population is more easily
> re> grokked by the masses.
>
> It's nothing to do with masses. There's an error in your math. It's
> not right under any circumstance.
>
There is no error in my math. I presented a failure rate fo
Daniel Rock wrote:
>
> Kenny schrieb:
> >2. c6t600A0B800049F93C030A48B3EA2Cd0
>
> > /scsi_vhci/[EMAIL PROTECTED]
> >3. c6t600A0B800049F93C030D48B3EAB6d0
>
> > /scsi_vhci/[EMAIL PROTECTED]
>
> Disk 2: 931GB
> Disk 3: 931MB
>
> Do you see the difference
Hi
On my opensolaris machine I currently have SXCEs 95 and 94 in two BEs. The same
fdisk partition contains /export/home and swap. In a separate fdisk partition
on another disk I have a ZFS pool.
Does anyone have a pointer to a howto for doing a liveupgrade such that I can
convert the SXCE 94
On Thu, Aug 28, 2008 at 06:11:06AM -0700, Richard Elling wrote:
> It is rare to see this sort of "CNN Moment" attributed to file corruption.
> http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
`file corruption' takes the blame a
Kenny schrieb:
>2. c6t600A0B800049F93C030A48B3EA2Cd0
> /scsi_vhci/[EMAIL PROTECTED]
>3. c6t600A0B800049F93C030D48B3EAB6d0
> /scsi_vhci/[EMAIL PROTECTED]
Disk 2: 931GB
Disk 3: 931MB
Do you see the difference?
Daniel
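For the archives: in the full format listing, the capacity is printed in the
bracketed drive-type string on the first line of each entry, which is where
the sizes above come from. Illustrative only, with a made-up inquiry string:
2. c6t600A0B800049F93C030A48B3EA2Cd0 <SUN-LCSM100_F-0670-931.01GB>
3. c6t600A0B800049F93C030D48B3EAB6d0 <SUN-LCSM100_F-0670-932MB>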
Hello all,
I tried to test how a zpool recovers after removing one drive, with strange
results.
Setup: SunFire V240, 4 GB RAM, Solaris 10u5, fully patched (last week); one
3510 with 12x 140 GB FC drives, 12 LUNs (every drive is one LUN). I don't
want to use the RAID hardware, letting ZFS doi
It is rare to see this sort of "CNN Moment" attributed to file corruption.
http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
-- richard
Victor Latushkin wrote:
> On 28.08.08 15:06, Chris Gerhard wrote:
>> I have a USB disk with a pool on it called removable. On one laptop
>> zpool import removable works just fine but on another with the same
>> disk attached it tells me there is more than one matching pool:
>> : sigma TS 6 $; pfexec zpool im
Tim,
Per your request...
df -h
bash-3.00# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d10         98G   4.2G    92G     5%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
p
Bob, Thanks for the reply. Yes I did read your white paper and am using it!!
Thanks again!!
I used zpool iostat -v and it didn't give the information as advertised... see
below
bash-3.00# zpool iostat -v
               capacity     operations
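For reference, when the capacity columns do show up, the zpool iostat -v
header normally looks like this (spacing illustrative):
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----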
On 28.08.08 15:06, Chris Gerhard wrote:
> I have a USB disk with a pool on it called removable. On one laptop
> zpool import removable works just fine but on another with the same
> disk attached it tells me there is more than one matching pool:
>
> : sigma TS 6 $; pfexec zpool import removable
>
Hi Todd,
sorry for the delay in responding, been head down rewriting
a utility for the last few days.
Todd H. Poole wrote:
> Howdy James,
>
> While responding to halstead's post (see below), I had to restart several
> times to complete some testing. I'm not sure if that's important to these
> co
On Thu, Aug 28, 2008 at 3:47 AM, Klaus Bergius <[EMAIL PROTECTED]>wrote:
> I'll second the original questions, but would like to know specifically
> when we will see (or how to install) the ZFS admin gui for OpenSolaris
> 2008.05.
> I installed 2008.05, then updated the system, so it is now snv_95
Hey folks,
Tim Foster just linked this bug to the zfs auto backup mailing list, and I
wondered if anybody knew if the work being done on ZFS boot includes making use
of ZFS reservations to ensure the boot filesystems always have enough free
space?
http://defect.opensolaris.org/bz/show_bug.cgi?
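As a concrete illustration of the mechanism being asked about (not something
the ZFS boot work is confirmed to do), a reservation can already be set per
dataset by hand:
# zfs set reservation=1G rpool/ROOT
where the dataset name and size are placeholders.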
I have a USB disk with a pool on it called removable. On one laptop zpool
import removable works just fine but on another with the same disk attached it
tells me there is more than one matching pool:
: sigma TS 6 $; pfexec zpool import removable
cannot import 'removable': more than one matching pool
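When a pool name is ambiguous like this, running zpool import with no
arguments lists each candidate with a numeric id, and the pool can then be
imported by that id instead of by name, e.g.
# zpool import
# zpool import 1234567890123456789
(the id shown is made up). zpool import -d <dir> can also be used to restrict
which devices get scanned.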
There is no good ZFS gui. Nothing that is actively maintained, anyway.
I'll second the original questions, but would like to know specifically when we
will see (or how to install) the ZFS admin gui for OpenSolaris 2008.05.
I installed 2008.05, then updated the system, so it is now snv_95.
There are no smc* commands, and there is no service 'webconsole' to be seen in svc
Not the common case for ZFS, but a useful performance improvement
when it does happen. This is a result of some follow-on work to the
byteswapping optimisation Dan has done for the crypto algorithms
in OpenSolaris.
Original Message
Subject: Re: Review for 6729208 Opti
Since somebody else has just posted about their entire system locking up when
pulling a drive, I thought I'd raise this for discussion.
I think Ralf made a very good point in the other thread. ZFS can guarantee
data integrity; what it can't do is guarantee data availability. The problem
is, t
Toby Thain wrote:
> On 27-Aug-08, at 5:47 PM, Ian Collins wrote:
>
>> Tim writes:
>>
>>> On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins <[EMAIL PROTECTED]>
>>> wrote:
>>>
Does anyone have any tuning tips for a Subversion repository on
ZFS? The
repository will mainly be storing binary (MS Office documents).