We weren't able to do anything at all, and finally rebooted the system. When
we did, everything came back normally, even with the target that was
reporting errors before. We're using an LSI PCI-E controller that's on the
supported device list, an LSI 3801-E. Right now, I'm trying to figure out if
can you guess? wrote:
> CERN was using relatively cheap disks and found that they were more
> than adequate (at least for any normal consumer use) without that
> additional level of protection: the incidence of errors, even
> including the firmware errors which presumably would not have occurred
> > Au contraire: I estimate its worth quite accurately from the undetected
> > error rates reported in the CERN "Data Integrity" paper published last
> > April (first hit if you Google 'cern "data integrity"').
> >
> > > While I have yet to see any checksum error reported by ZFS on
> > >
Michael Stalnaker wrote:
>
> Finally trying to do a zpool status yields:
>
> [EMAIL PROTECTED]:/# zpool status -v
> pool: LogData
> state: ONLINE
> status: One or more devices has experienced an unrecoverable error. An
> attempt was made to correct the error. Applications are unaffected.
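For reference, the commands usually involved in chasing this kind of error on
Solaris 10 look roughly like this (pool name taken from the status output above;
treat it as a sketch, not a fix):

zpool status -v LogData   # per-device error counters and any affected files
fmdump -eV                # FMA error log with the underlying I/O error reports
zpool clear LogData       # reset the error counters once the hardware problem is fixed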
The comment in the header file where this error is defined says:
/* volume is too large for 32-bit system */
So it does look like it's a 32-bit CPU issue. Odd, since file systems don't
normally have any sort of dependence on the CPU type.
Anton
> On Wed, Nov 07, 2007 at 01:47:04PM -0800, can you
> guess? wrote:
> > I do consider the RAID-Z design to be somewhat brain-damaged [...]
>
> How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Nope.
A decent RAID-5 hardware implementation has no 'write hole' to worry about
> On 11/7/07, can you guess? <[EMAIL PROTECTED]>
> wrote:
> > > Monday, November 5, 2007, 4:42:14 AM, you wrote:
> > >
> > > cyg> Having gotten a bit tired of the level of ZFS hype floating
...
> But I do believe that some of the "hype" is justified
Just to make it clear, so do I: it's
I'm not aware of any plans to do this.
If you're on S10U4 or NV, you can use kstat to fetch arcstats on ZFS memory
usage: kstat -n arcstats. Prior to the addition of arcstats, you needed to use
mdb to determine how much memory ZFS was using...
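For example, a quick sketch of the kstat approach (the zfs:0:arcstats names are
the ones S10U4/NV expose):

kstat -n arcstats               # dump the full set of ARC statistics
kstat -p zfs:0:arcstats:size    # just the current ARC size in bytes, parseable output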
> Which just (as far as I can tell) includes the zfs b
We recently installed a 24 disk SATA array with an LSI controller attached
to a box running Solaris X86 10 Release 4. The drives were set up in one
big pool with raidz, and it worked great for about a month. On the 4th, we
had the system kernel panic and crash, and it's now behaving very badly.
He
>
> Also... doesn't ZFS do some form of read ahead .. 64KB anyways?
>
I believe you are referring to the vdev cache here. Check out:
http://blogs.sun.com/erickustarz/entry/vdev_cache_improvements_to_help
eric
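There is a similar kstat for the vdev-level read-ahead cache, in case you want
to see how often those 64K reads actually pay off (a quick sketch, assuming the
vdev_cache_stats kstat is present in your build):

kstat -n vdev_cache_stats       # delegations, hits and misses for the vdev cache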
Louwtjie Burger wrote:
> On 11/8/07, Richard Elling <[EMAIL PROTECTED]> wrote:
>
>> Louwtjie Burger wrote:
>>
>>> Hi
>>>
>>> What is the impact of not aligning the DB blocksize (16K) with ZFS,
>>> especially when it comes to random reads on single HW RAID LUN.
>>>
>>>
>> Potentially,
On 11/8/07, Richard Elling <[EMAIL PROTECTED]> wrote:
> Louwtjie Burger wrote:
> > Hi
> >
> > What is the impact of not aligning the DB blocksize (16K) with ZFS,
> > especially when it comes to random reads on single HW RAID LUN.
> >
>
> Potentially, depending on the write part of the workload, the
Hey all -
Just a quick one...
Is there any plan to update the mdb ::memstat dcmd to present ZFS
buffers as part of the summary?
At present, we get something like:
> ::memstat
Page Summary                Pages                MB  %Tot
Ker
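For completeness, this is roughly how that summary is produced on a live system
today; as far as I know the ZFS buffers currently just get counted under the
Kernel line rather than broken out:

echo ::memstat | mdb -k         # run the memstat dcmd against the running kernel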
Is compression impacted when setting block size?
--zoly
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Richard Elling
Sent: Thursday, November 08, 2007 1:56 PM
To: Louwtjie Burger
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS + DB + defa
Louwtjie Burger wrote:
> Hi
>
> What is the impact of not aligning the DB blocksize (16K) with ZFS,
> especially when it comes to random reads on single HW RAID LUN.
>
Potentially, depending on the write part of the workload, the system may read
128 kBytes to get a 16 kByte block. This is not
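The usual mitigation is to make the ZFS recordsize match the database block
size before the datafiles are created (the dataset name below is made up;
recordsize only affects files written after the property is set):

zfs set recordsize=16K tank/oradata   # match the 16K DB block size
zfs get recordsize tank/oradata       # verify the property took effect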
Hi Lukas,
The system that we use for ZFS is Solaris 10 Update 3 on SPARC.
I assume all the scripts you gave have to be run on the nfs/zfs server
and not any client.
Thanks,
--Walter
On Nov 8, 2007 2:34 AM, Łukasz K <[EMAIL PROTECTED]> wrote:
> On 8-11-2007 at 7:58, Walter Faleiro wrote
Dave Bevans wrote:
> Does anyone have any thoughts on this?
>
> Hi,
>
> I have a customer with the following questions...
>
>
>
> *Describe the problem:*
> A ZFS Question - I have one ZFS pool which is made from 2 storage
> arrays (vdevs). I have to delete the zfs filesystems with the name
Hi All:
Actually, I am running into the same case (needing to remove certain files from
a snapshot); let me give you a good example of why we need to do it.
In our case, we use zfs snapshots to store online backups of home file systems
(that is, readily accessible to employees if they need to recover files).
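To make that concrete, the backup and the self-service restore look roughly
like this (snapshot name, dataset and paths are made up; assumes pool/home is
mounted at /home):

zfs snapshot pool/home@nightly-2007-11-08          # nightly online backup
ls /home/.zfs/snapshot/nightly-2007-11-08/jdoe/    # browse the read-only copy
cp /home/.zfs/snapshot/nightly-2007-11-08/jdoe/report.doc /home/jdoe/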
We too are seeing this problem on some of our Thumpers - the ones with U4
and/or all the latest patches installed. We have one box that we stopped
patching before the kernel patch that introduced this problem, and it works
fine...
Works:
[0] andromeda:/<2>ncri86pc/sbin# uname -a
SunOS andromeda 5.10
Well, I've tried the latest OpenSolaris snv_76 release, and it displays the
same symptoms.
(so b66-0624-xen, 75a and 76 all have the same problem)
But, the good news is that it behaves well if there is only 2 GB of memory in
the system.
So, in summary
The command time dd if=/dev/zero of=myfile.
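For reference, the test command is of this general form (block size and count
here are illustrative, not the exact values used in the original test):

time dd if=/dev/zero of=myfile bs=128k count=65536   # timed sequential write of an 8 GB file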
Hello can,
>>
>> Journaling vs ZFS - well, I've been managing some rather large
>> environments and having fsck (even with journaling) from time to time
cyg> 'From time to time' suggests at least several occurrences: just
cyg> how many were there? What led you to think that doing an fsck
Does anyone have any thoughts on this?
Hi,
I have a customer with the following questions...
*Describe the problem:*
A ZFS Question - I have one ZFS pool which is made from 2 storage
arrays (vdevs). I have to delete the zfs filesystems with the names of
/orbits/araid/* and remove one of
is it possible to delete files in the snapshot? (.zfs directory?)
On 8-11-2007 at 7:58, Walter Faleiro wrote:
Hi Lukasz,
The output of the first script gives
bash-3.00# ./test.sh
dtrace: script './test.sh' matched 4 probes
CPU     ID                    FUNCTION:NAME
  0  42681                        :tick-10s
  0  42681                        :tic
How is the performance on ZFS directly, without NFS?
I have experienced big problems running NFS on large volumes (independent of
the underlying fs).
That is interesting; again, we're having the same problem with our X4500s.
I am trying to work out what is causing the problem with NFS: restarting the
service causes it to try to stop, but it never comes back up.
Rebooting the whole box fails as well; it just hangs until a hard reset.
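For what it's worth, the restart and diagnosis steps on Solaris 10 are roughly
as follows (this uses the standard SMF FMRI for the NFS server):

svcadm restart svc:/network/nfs/server:default   # ask SMF to restart the NFS server
svcs -xv svc:/network/nfs/server:default         # explain why the service is not online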
> Au contraire: I estimate its worth quite accurately from the undetected
> error rates reported in the CERN "Data Integrity" paper published last April
> (first hit if you Google 'cern "data integrity"').
>
> > While I have yet to see any checksum error reported
> > by ZFS on
> > Symmetrix arra
On 11/8/07, Mark Ashley <[EMAIL PROTECTED]> wrote:
> Economics for one.
Yep, for sure ... it was a rhetorical question ;)
> > Why would I consider a new solution that is safe, fast enough, stable
> > .. easier to manage and lots cheaper?
Rephrase, "Why would I NOT consider ...?" :)
Economics for one.
We run a number of testing environments which mimic the production one.
But we don't want to spend $750,000 on EMC storage each time when
something costing $200,000 will do the job we need.
At the moment we have over 100TB on four SE6140s and we're very happy
with the solution.
On Wed, Nov 07, 2007 at 01:47:04PM -0800, can you guess? wrote:
> I do consider the RAID-Z design to be somewhat brain-damaged [...]
How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Adam
--
Adam Leventhal, FishWorks                        http://blogs.sun.com/ahl