On 24.1.2007 at 15:49, Michael Schuster wrote:
>> I am going to create the same conditions here but with snv_55b and
>> then yank
>> a disk from my zpool. If I get a similar response then I will *hope*
>> for a
>> crash dump.
>>
>> You must be kidding about the "open a case" however. This is
Anantha N. Srirama writes:
> Agreed, I guess I didn't articulate my point/thought very well. The
> best config is to present JBoDs and let ZFS provide the data
> protection. This has been a very stimulating conversation thread; it
> is shedding new light into how to best use ZFS.
>
>
I
Hi All,
In my test setup, I have one zpool of 1000 MB and it has only 30 MB of
free space (970 MB is used for some other purpose). On this zpool I created one
file (using the open() call) and attempted to write 2 MB of data to it (with the
write() call), but it failed. It wrote only 1.3 MB
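A rough way to reproduce that behaviour from the shell, without a C program;
the pool name, backing file and sizes below are made up for illustration:

  # mkfile 1000m /var/tmp/poolfile
  # zpool create testpool /var/tmp/poolfile
  # mkfile 940m /testpool/filler
  # dd if=/dev/zero of=/testpool/newfile bs=1024k count=2

The dd should stop short with "No space left on device", which is the same
thing write() sees: it returns the number of bytes actually written, and the
next call fails with ENOSPC once the pool is full.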
On Sun, Jan 28, 2007 at 01:53:04PM +0100, [EMAIL PROTECTED] wrote:
>
> >is this tunable somehow/somewhere? can I enable writecache if only using a
> >dedicated partition ?
>
> It does put the additional data at somewhat of a risk; not really
> for swap but perhaps not nice for UFS.
How about
Ihsan,
If you are running Solaris 10 then you are probably hitting:
6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which
calls biowait() and deadlock/hangs host
This was fixed in opensolaris (build 48) but a patch is not yet
available for Solaris 10.
Thanks,
George
Ihsan Do
> > Our Netapp does double-parity RAID. In fact, the filesystem design is
> > remarkably similar to that of ZFS. Wouldn't that also detect the
> > error? I suppose it depends if the `wrong sector without notice'
> > error is repeated each time. Or is it random?
>
> On most (all?) other systems
Have a look at:
http://blogs.sun.com/ahl/entry/a_little_zfs_hack
On 27/01/07, roland <[EMAIL PROTECTED]> wrote:
is it planned to add some other compression algorithm to zfs ?
lzjb is quite good and performs especially well, but I'd like to have
better compression (bzip2?) - no matter
Robert Milkowski wrote:
Hello Richard,
Friday, January 26, 2007, 11:36:07 PM, you wrote:
RE> We've been talking a lot recently about failure rates and types of
RE> failures. As you may know, I do look at field data and generally don't
RE> ask the group for more data. But this time, for variou
See the following bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6280662
Cindy
roland wrote:
is it planned to add some other compression algorithm to zfs ?
lzjb is quite good and performs especially well, but I'd like to have better compression (bzip2?) - no matter how worse perfo
On Jan 26, 2007, at 09:16, Jeffery Malloch wrote:
Hi Folks,
I am currently in the midst of setting up a completely new file
server using a pretty well loaded Sun T2000 (8x1GHz, 16GB RAM)
connected to an Engenio 6994 product (I work for LSI Logic so
Engenio is a no brainer). I have config
Hi Guys,
SO...
>From what I can tell from this thread ZFS is VERY fussy about managing
>writes, reads and failures. It wants to be bit perfect. So if you use the
>hardware that comes with a given solution (in my case an Engenio 6994) to
>manage failures you risk a) bad writes that don't get p
Hi Jeff,
Maybe I mis-read this thread, but I don't think anyone was saying that
using ZFS on top of an intelligent array risks more corruption. Given
my experience, I wouldn't run ZFS without some level of redundancy,
since it will panic your kernel in a RAID-0 scenario where it detects
a LUN is
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Best Regards,
Jason
On 1/26/07, Jim Dunham <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Could the replication engine eventually be integrated more tigh
On Jan 29, 2007, at 14:17, Jeffery Malloch wrote:
Hi Guys,
SO...
From what I can tell from this thread ZFS is VERY fussy about
managing writes, reads and failures. It wants to be bit perfect.
So if you use the hardware that comes with a given solution (in my
case an Engenio 6994) to ma
Hi,
I'm looking for assistance troubleshooting an x86 laptop that I upgraded
from Solaris 10 6/06 to 11/06 using standard upgrade.
The upgrade went smoothly, but all attempts to boot it since then have
failed. Every time, it panics, leaving a partial stack trace on the
screen for a few seco
> There are ZFS file systems. There are no zones.
>
> Any help would be greatly appreciated, this is my
> everyday computer.
Take a look at page 167 of the admin guide:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
You need to delete /etc/zfs/zpool.cache. And, use
zpool import to
Jim Walker wrote:
There are ZFS file systems. There are no zones.
Any help would be greatly appreciated, this is my
everyday computer.
Take a look at page 167 of the admin guide:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
You need to delete /etc/zfs/zpool.cache. And, use
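For the archives, a rough sketch of that recovery; the root slice and pool
name below are placeholders, adjust them for the actual system:

  (boot from the install or failsafe media, then mount the root slice)
  # mount /dev/dsk/c0d0s0 /a
  # rm /a/etc/zfs/zpool.cache
  # reboot
  (after the reboot, rediscover and re-import the pools)
  # zpool import
  # zpool import mypool

The bare "zpool import" lists the pools found on attached devices; the second
form imports one of them by name. See the admin guide page referenced above
for the authoritative steps.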
On Mon, Jan 29, 2007 at 11:17:05AM -0800, Jeffery Malloch wrote:
> From what I can tell from this thread ZFS is VERY fussy about
> managing writes, reads and failures. It wants to be bit perfect. So
> if you use the hardware that comes with a given solution (in my case
> an Engenio 6994) to manage
On January 29, 2007 11:17:05 AM -0800 Jeffery Malloch
<[EMAIL PROTECTED]> wrote:
Hi Guys,
SO...
From what I can tell from this thread ZFS is VERY fussy about managing
writes, reads and failures. It wants to be bit perfect.
It's funny to call that "fussy". All filesystems WANT to be bit perf
More diagnostic information:
Before the afore-listed stack dump, the console displays many lines of
text similar to the following, that scroll by very quickly. I was only
able to capture them with the help of a digital camera.
WARNING: kstat_create('unix', 0, 'zio_buf_#'): namespace_colli
Hi All,
I'd like to set up dumping to a file. This file is on a mirrored pool
using zfs. It seems that the dump setup doesn't work with zfs. This
worked for both a standard UFS slice and an SVM mirror using UFS.
Is there something that I'm doing wrong, or is this not yet supported on
ZFS?
N
Dumping to a file in a zfs file system is not supported yet.
The zfs file system does not support the VOP_DUMP and
VOP_DUMPCTL operations. This is bug 5008936 (ZFS and/or
zvol should support dumps).
Lori
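Until that bug is fixed the dump device has to live outside ZFS. A minimal
sketch of the usual interim setup, with a placeholder slice name:

  # dumpadm -d /dev/dsk/c0t0d0s1
  # dumpadm

The first command dedicates a raw slice (an SVM metadevice works too) as the
dump device; running dumpadm with no arguments afterwards shows the resulting
configuration.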
Peter Buckingham wrote:
Hi All,
I'd like to set up dumping to a file. This file is on a mi
Hi Peter,
This operation isn't supported yet. See this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=5008936
Both the zfs man page and the ZFS Admin Guide identify
swap and dump limitations, here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6gl?q=dump&a=view
Cindy
Peter Buckingham
Lori Alt wrote:
Dumping to a file in a zfs file system is not supported yet.
The zfs file system does not support the VOP_DUMP and
VOP_DUMPCTL operations. This is bug 5008936 (ZFS and/or
zvol should support dumps).
OK, that's sort of what I expected, thanks for the info.
peter
I attempted to increase my zraid from 2 disks to 3, but it looks like I added
the drive outside of the raid:
# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
amber   1.36T  879G   516G   63%  ONLINE  -
home    65.5G  1.30M
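zpool list only shows per-pool totals; zpool status shows the vdev tree and
makes it obvious whether the new disk ended up inside the raidz or as a
separate top-level stripe:

  # zpool status amber

If the disk shows up at the same level as the raidz vdev rather than under it,
it was added with "zpool add" as a new top-level vdev. As far as I know a
raidz cannot be grown in place and top-level vdevs cannot be removed, so the
fix is to back up the data, destroy the pool and recreate it with all three
disks in the raidz.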
> Have a look at:
>
> http://blogs.sun.com/ahl/entry/a_little_zfs_hack
thanks for the link, Dick!
this sounds fantastic!
is the source for that available somewhere yet?
>Adam Leventhal's Weblog
>inside the sausage factory
btw - just wondering - is this some English phrase or some running
[EMAIL PROTECTED] wrote on 01/29/2007 03:45:58 PM:
> I attempted to increase my zraid from 2 disks to 3, but it looks
> like I added the drive outside of the raid:
>
> # zpool list
>
> NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
> amber   1.36T  8
roland wrote:
Adam Leventhal's Weblog
inside the sausage factory
btw - just wondering - is this some English phrase or some running gag? I
have seen it once before on another blog, so I'm wondering
greetings from the beer and sausage nation ;)
It's a response to a common Eng
Hi,
This is not exactly ZFS specific, but this still seems like a
fruitful place to ask.
It occurred to me today that hot spares could sit in standby (spun
down) until needed (I know ATA can do this, I'm supposing SCSI does
too, but I haven't looked at a spec recently). Does anybody do th
You could easily do this in Solaris today by just using power.conf(4).
Just have it spin down any drives that have been idle for a day or more.
The periodic testing part would be an interesting project to kick off.
--Bill
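A hedged example of what the power.conf(4) entries might look like; the
device paths and the 24-hour threshold are placeholders:

  # excerpt from /etc/power.conf
  device-thresholds   /pci@1f,0/scsi@2/sd@3,0   24h
  device-thresholds   /pci@1f,0/scsi@2/sd@4,0   24h
  autopm              enable

Run pmconfig afterwards so the new thresholds take effect.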
On Mon, Jan 29, 2007 at 08:21:16PM -0200, Toby Thain wrote:
> Hi,
>
> T
The lzjb compression implementation (IMO) is the fastest one on SPARC Solaris
systems. I've seen it beat lzo in speed while not necessarily in
compressibility. I've measured both implementations inside Solaris SPARC
kernels, and would love to hear from others about their experiences. As some
o
hey, thanks for the overwhelming private lesson in English colloquialisms :D
now back to the technical :)
> # zfs create pool/gzip
> # zfs set compression=gzip pool/gzip
> # cp -r /pool/lzjb/* /pool/gzip
> # zfs list
> NAME       USED   AVAIL  REFER  MOUNTPOINT
> pool/gzip  64.9M  33.2G  64.9M
On Mon, 2007-01-29 at 14:15 -0800, Matt Ingenthron wrote:
> > > inside the sausage factory
> > >
> >
> > btw - just wondering - is this some English phrase or some running gag? I
> > have seen it once before on another blog, so I'm wondering
> >
> > greetings from the beer and sausage
Albert Chin said:
> Well, ZFS with HW RAID makes sense in some cases. However, it seems that if
> you are unwilling to lose 50% disk space to RAID 10 or two mirrored HW RAID
> arrays, you either use RAID 0 on the array with ZFS RAIDZ/RAIDZ2 on top of
> that or a JBOD with ZFS RAIDZ/RAIDZ2 on top of
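For concreteness, a rough sketch of the JBOD variant with made-up device
names; the array-RAID-0 variant looks the same except that each cXtYdZ is a
single-disk RAID-0 LUN exported by the array rather than a bare disk:

  # zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
  # zpool status tank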
Toby Thain wrote:
Hi,
This is not exactly ZFS specific, but this still seems like a fruitful
place to ask.
It occurred to me today that hot spares could sit in standby (spun down)
until needed (I know ATA can do this, I'm supposing SCSI does too, but I
haven't looked at a spec recently). Do
On Mon, 29 Jan 2007, Toby Thain wrote:
> Hi,
>
> This is not exactly ZFS specific, but this still seems like a
> fruitful place to ask.
>
> It occurred to me today that hot spares could sit in standby (spun
> down) until needed (I know ATA can do this, I'm supposing SCSI does
> too, but I haven't
On 29-Jan-07, at 9:04 PM, Al Hopper wrote:
On Mon, 29 Jan 2007, Toby Thain wrote:
Hi,
This is not exactly ZFS specific, but this still seems like a
fruitful place to ask.
It occurred to me today that hot spares could sit in standby (spun
down) until needed (I know ATA can do this, I'm suppo
Hi Guys,
I seem to remember the MAID (Massive Array of Idle Disks) guys ran into
a problem I think they called stiction (static friction), where idle drives
would fail to spin up after being idle for a long time:
http://www.eweek.com/article2/0,1895,1941205,00.asp
Would that apply here?
Best Regards,
Jason
On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
Hi Guys,
I seem to remember the Massive Array of Independent Disk guys ran into
a problem I think they called static friction, where idle drives would
fail on spin up after being idle for a long time:
You'd think that probably wouldn't h
On Jan 29, 2007, at 20:27, Toby Thain wrote:
On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
I seem to remember the Massive Array of Independent Disk guys ran
into
a problem I think they called static friction, where idle drives
would
fail on spin up after being idle for a long time
On 1/30/07, David Magda <[EMAIL PROTECTED]> wrote:
What about a rotating spare?
When setting up a pool a lot of people would (say) balance things
around buses and controllers to minimize single points of failure,
and a rotating spare could disrupt this organization, but would it be
useful at al
Random thoughts:
If we were to use some intelligence in the design, we could perhaps have
a monitor that profiles the workload on the system (a pool, for example)
over a [week|month|whatever] and selects a point in time, based on
history, when it would expect the disks to be quiet, and can 'pr
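The simplest version of that monitor, skipping the workload profiling, could
be a cron entry that spins up each spare at an assumed quiet hour and reads a
few megabytes to confirm it still responds. The device name and the Sunday
03:00 slot below are placeholders:

  0 3 * * 0  dd if=/dev/rdsk/c2t5d0s0 of=/dev/null bs=1024k count=16

Anything smarter, such as picking the time from observed I/O history, would
sit on top of the same basic spin-up-and-read test.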
Jason,
Thank you for the detailed explanation. It is very helpful to
understand the issue. Is anyone successfully using SNDR with ZFS yet?
Of the opportunities I've been involved with, the answer is yes, but so
far I've not seen SNDR with ZFS in a production environment, but that
does not mean
On Mon, Jan 29, 2007 at 02:39:13PM -0800, roland wrote:
> > # zfs get compressratio
> > NAME       PROPERTY       VALUE  SOURCE
> > pool/gzip  compressratio  3.27x  -
> > pool/lzjb  compressratio  1.89x  -
>
> this looks MUCH better than I would have ever expected for smaller files.
>
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you can use both to their full potential?
Best Regards,
Jason
On 1/2
Hi Toby,
You're right. The healthcheck would definitely find any issues. I
misinterpreted your comment to that effect as a question and didn't
quite latch on. A zpool MAID-mode with that healthcheck might also be
interesting on something like a Thumper for pure-archival, D2D backup
work. Would dr
On 29/01/2007, at 12:50 AM, [EMAIL PROTECTED] wrote:
On 28-Jan-07, at 7:59 AM, [EMAIL PROTECTED] wrote:
On 27-Jan-07, at 10:15 PM, Anantha N. Srirama wrote:
... ZFS will not stop alpha particle induced memory corruption
after data has been received by the server and verified to be correct.