Re: [zfs-discuss] Yager on ZFS

2007-11-15 Thread Marc Bevand
can you guess? <...@metrocast.net> writes:
>
> You really ought to read a post before responding to it: the CERN study
> did encounter bad RAM (and my post mentioned that) - but ZFS usually can't
> do a damn thing about bad RAM, because errors tend to arise either
> before ZFS ever gets the data or a

[zfs-discuss] Macs & compatibility (was Re: Yager on ZFS)

2007-11-15 Thread Anton B. Rang
This is clearly off-topic :-) but perhaps worth correcting --
> Long-time Mac users must be getting used to having their entire world
> disrupted and having to re-buy all their software. This is at least the
> second complete flag-day (no forward or backwards compatibility) change
> they've been th

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread Richard Elling
Anton B. Rang wrote:
>> There are many different ways to place the data on the media and we would
>> typically strive for a diverse stochastic spread.
>>
>
> Err ... why?
>
> A random distribution makes reasonable sense if you assume that future read
> requests are independent, or that th

Re: [zfs-discuss] zpool question

2007-11-15 Thread Mike Dotson
On Thu, 2007-11-15 at 21:18 -0700, Brian Lionberger wrote:
> I have a zpool issue that I need to discuss.
>
> My application is going to run on a 3120 with 4 disks. Two (mirrored)
> disks will represent /export/home and the other two (mirrored) will be
> /export/backup.
>
> The question is, shou
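A minimal sketch of the two layouts at issue, using hypothetical device names c0t0d0 through c0t3d0 (the actual devices aren't named in the truncated post):

    # Option 1: two independent pools, one mirrored pair each
    zpool create home mirror c0t0d0 c0t1d0
    zpool create backup mirror c0t2d0 c0t3d0

    # Option 2: one pool striped across both mirrored pairs, with the two
    # directories as separate filesystems sharing the pool's space
    zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
    zfs create -o mountpoint=/export/home tank/home
    zfs create -o mountpoint=/export/backup tank/backup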

[zfs-discuss] Fwd: ZFS for consumers WAS: Yager on ZFS

2007-11-15 Thread Paul Kraus
Sent from the correct address...
-- Forwarded message --
From: Paul Kraus <[EMAIL PROTECTED]>
Date: Nov 15, 2007 12:57 PM
Subject: Re: [zfs-discuss] ZFS for consumers WAS: Yager on ZFS
To: zfs-discuss@opensolaris.org

On 11/15/07, can you guess? <[EMAIL PROTECTED]> wrote:
> ...
>

[zfs-discuss] read/write NFS block size and ZFS

2007-11-15 Thread msl
Hello all... I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That was the best performance I got, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I can have a performance loss, so I
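A hedged sketch of the knobs involved, assuming a hypothetical filesystem name tank/export (the post doesn't give one): the ZFS recordsize can be matched to the clients' 8 KB transfers, and the Linux clients keep their existing mount options.

    # On the Solaris server: match the recordsize to the 8 KB NFS I/O
    zfs set recordsize=8k tank/export
    zfs set sharenfs=on tank/export

    # On a Linux client: keep the 8192-byte read/write transfer sizes
    mount -o rsize=8192,wsize=8192,nfsvers=3 server:/tank/export /mnt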

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread can you guess?
...
> For modern disks, media bandwidths are now getting to be > 100 MBytes/s.
> If you need 500 MBytes/s of sequential read, you'll never get it from
> one disk.

And no one here even came remotely close to suggesting that you should try to.

> You can get it from multiple disks, so the ques

Re: [zfs-discuss] Yager on ZFS

2007-11-15 Thread can you guess?
Adam Leventhal wrote:
> On Thu, Nov 08, 2007 at 07:28:47PM -0800, can you guess? wrote:
>>> How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
>> Nope.
>>
>> A decent RAID-5 hardware implementation has no 'write hole' to worry about,
>> and one can make a software implemen

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread can you guess?
Richard Elling wrote:
...
>>> there are really two very different configurations used to address
>>> different performance requirements: cheap and fast. It seems that when
>>> most people first consider this problem, they do so from the cheap
>>> perspective: single disk view. A

[zfs-discuss] zfs mount -a intermittent

2007-11-15 Thread Andre Lue
I have a slimmed-down install of on_b61, and sometimes when the box is rebooted it fails to automatically remount the pool. In most cases, if I log in and run "zfs mount -a" it will mount; in some cases I have to reboot again. Can someone provide some insight as to what may be going on here? truss captu
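For reference, one way such a failure is typically captured with truss (the exact invocation is not in the truncated post; this is an assumption):

    # Trace the mount attempt and any children, logging syscalls to a file
    truss -f -o /tmp/zfs-mount.truss zfs mount -a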

[zfs-discuss] ZFS implementations

2007-11-15 Thread Stephen Stogner
Hello, does anyone have some real-world examples of using a large ZFS cluster, i.e. somewhere with 40+ vdevs in the range of a few hundred or so terabytes? Thank you.

Re: [zfs-discuss] ZFS for consumers WAS: Yager on ZFS

2007-11-15 Thread can you guess?
...
> At home the biggest reason I went with ZFS for my data is ease of
> management. I split my data up based on what it is ... media (photos,
> movies, etc.), vendor stuff (software, datasheets, etc.), home
> directories, and other misc. data. This gives me a good way to control
> backups

Re: [zfs-discuss] [fuse-discuss] cannot mount 'mypool': Input/output error

2007-11-15 Thread Mark Phalan
On Thu, 2007-11-15 at 07:22 -0800, Nabeel Saad wrote:
> Hello,
>
> I have a question about using ZFS with FUSE. A little bit of background on
> what we've been doing first... We recently had an issue with a Solaris
> server where the permissions of the main system files in /etc and such were

Re: [zfs-discuss] zfs on a raid box

2007-11-15 Thread Dan Pritts
On Tue, Nov 13, 2007 at 12:25:24PM +0100, Paul Boven wrote:
> Hi everyone,
>
> We're building a storage system that should have about 2TB of storage
> and good sequential write speed. The server side is a Sun X4200 running
> Solaris 10u4 (plus yesterday's recommended patch cluster), the array we

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-15 Thread Mike Dotson
On Thu, 2007-11-15 at 05:25 -0800, Boris Derzhavets wrote:
> Thank you very much, Mike, for your feedback.
> Just one more question.
> I noticed five devices under /dev/rdsk:
> c1t0d0p0
> c1t0d0p1
> c1t0d0p2
> c1t0d0p3
> c1t0d0p4
> created by the system immediately after installation completed.
> I

Re: [zfs-discuss] Securing a risky situation with zfs

2007-11-15 Thread Dan Pritts
On Tue, Nov 13, 2007 at 01:20:14AM -0800, Gabriele Bulfon wrote:
> The basic idea was to have a ZFS mirror of each iSCSI disk on
> SCSI-attached disks, so that in case of another panic of the SAN,
> everything should still work on the SCSI-attached disks.
> My questions are:
> - is this a good id
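A minimal sketch of the layout being proposed, with hypothetical device names standing in for the iSCSI LUN and the local SCSI disk (none are given in the post):

    # c2t0d0 is assumed to be the iSCSI LUN, c1t1d0 the local SCSI disk;
    # if the SAN panics, the local half keeps the pool available.
    zpool create safe mirror c2t0d0 c1t1d0

    # Verify both halves stay healthy
    zpool status safe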

Re: [zfs-discuss] Is ZFS stable in OpenSolaris?

2007-11-15 Thread Darren J Moffat
hex.cookie wrote:
> In a production environment, which platform should we use? Solaris 10 U4 or
> OpenSolaris 70+? How should we estimate a stable edition for production? Or
> is OpenSolaris stable in some build?

It all depends on what you mean by stable. Do you intend to pay Sun for a service co

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-15 Thread Richard Elling
Peter Eriksson wrote:
> Speaking of error recovery due to bad blocks - does anyone know if the SATA
> disks that are delivered with the Thumper have "enterprise" or "desktop"
> firmware/settings by default? If I'm not mistaken, one of the differences is
> that the "enterprise" variant more quickly gi

Re: [zfs-discuss] cannot mount 'mypool': Input/output error

2007-11-15 Thread Nabeel Saad
I appreciate the different responses I have gotten. As some of you may have realized, I am not a guru in Linux/Solaris... I have been trying to figure out what file system my Solaris box was using... I got a comment from Paul that, from the fdisk command, he could see that most likely the

Re: [zfs-discuss] Is ZFS stable in OpenSolaris?

2007-11-15 Thread Mark Phalan
On Thu, 2007-11-15 at 17:20 +0000, Darren J Moffat wrote:
> hex.cookie wrote:
> > In a production environment, which platform should we use? Solaris 10 U4
> > or OpenSolaris 70+? How should we estimate a stable edition for
> > production? Or is OpenSolaris stable in some build?
>
> All depends o

[zfs-discuss] cannot mount 'mypool': Input/output error

2007-11-15 Thread Nabeel Saad
Hello, I have a question about using ZFS with FUSE. A little bit of background on what we've been doing first... We recently had an issue with a Solaris server where the permissions of the main system files in /etc and such were changed. On server restart, Solaris threw an error and it was
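For context, a hedged sketch of how mounting the pool under zfs-fuse on a Linux box might look, assuming the Solaris disk shows up as a device under /dev (the pool name 'mypool' comes from the subject line; everything else is assumed):

    # Start the zfs-fuse daemon, which must be running before any zpool command
    zfs-fuse &

    # Scan attached devices for importable pools, then import the one found
    zpool import -d /dev
    zpool import -d /dev mypool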

Re: [zfs-discuss] ZFS snapshot send/receive via intermediate device

2007-11-15 Thread Darren J Moffat
Simple answer: yes. Slightly longer answer: zfs send just writes to stdout; where you put that is up to your needs - it can be a file in some filesystem, a raw disk, a tape, or a pipe to another program (such as ssh or compress or encrypt). zfs recv reads from stdin, so just do the reverse of what
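A minimal sketch of the patterns described, with hypothetical pool and snapshot names (tank/fs@snap and so on):

    # To a file on an intermediate device, and back again later
    zfs send tank/fs@snap > /media/usb/fs.zfs
    zfs recv tank/restored < /media/usb/fs.zfs

    # Through a compressor, or over ssh straight into a remote pool
    zfs send tank/fs@snap | gzip > /backup/fs.zfs.gz
    zfs send tank/fs@snap | ssh otherhost zfs recv backup/fs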

Re: [zfs-discuss] Yager on ZFS

2007-11-15 Thread can you guess?
...
> Well, ZFS allows you to put its ZIL on a separate device which could
> be NVRAM.

And that's a GOOD thing (especially because it's optional rather than requiring that special hardware be present). But if I understand the ZIL correctly, it's not as effective as using NVRAM as a more general ki
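For reference, a minimal sketch of the separate-intent-log configuration being referred to (device names are hypothetical):

    # Create a pool whose intent log lives on a separate, e.g. solid-state, device
    zpool create tank mirror c1t0d0 c1t1d0 log c2t0d0

    # Or add a log device to an existing pool
    zpool add tank log c2t0d0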

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread can you guess?
> can you guess? wrote:
> >> For very read intensive and position sensitive applications, I guess
> >> this sort of capability might make a difference?
> >
> > No question about it. And sequential table scans in databases
> > are among the most significant examples, because (unlike thi

Re: [zfs-discuss] Yager on ZFS

2007-11-15 Thread Andy Lubel
On 11/15/07 9:05 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Hello can,
>
> Thursday, November 15, 2007, 2:54:21 AM, you wrote:
>
> cyg> The major difference between ZFS and WAFL in this regard is that
> cyg> ZFS batch-writes-back its data to disk without first aggregating
> cyg> it in N

Re: [zfs-discuss] ZFS for consumers WAS: Yager on ZFS

2007-11-15 Thread Paul Bartholdi
On 11/15/07, Paul Kraus <[EMAIL PROTECTED]> wrote:
> Splitting this thread and changing the subject to reflect that...
>
> On 11/14/07, can you guess? <[EMAIL PROTECTED]> wrote:
>
> > Another prominent debate in this thread revolves around the question of
> > just how significant ZFS's unusual st

Re: [zfs-discuss] Yager on ZFS

2007-11-15 Thread Robert Milkowski
Hello can,

Thursday, November 15, 2007, 2:54:21 AM, you wrote:

cyg> The major difference between ZFS and WAFL in this regard is that
cyg> ZFS batch-writes-back its data to disk without first aggregating
cyg> it in NVRAM (a subsidiary difference is that ZFS maintains a
cyg> small-update log which

Re: [zfs-discuss] internal error: Bad file number

2007-11-15 Thread Mark J Musante
On Thu, 15 Nov 2007, Manoj Nayak wrote:
> I am getting the following error message when I run any zfs command. I have
> attached the script I use to create the ramdisk image for Thumper.
>
> # zfs volinit
> internal error: Bad file number
> Abort - core dumped

This sounds as if you may have somehow lost th

Re: [zfs-discuss] X4500 device disconnect problem persists

2007-11-15 Thread Peter Eriksson
Speaking of error recovery due to bad blocks - does anyone know if the SATA disks that are delivered with the Thumper have "enterprise" or "desktop" firmware/settings by default? If I'm not mistaken, one of the differences is that the "enterprise" variant more quickly gives up with bad blocks and re

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-15 Thread Boris Derzhavets
Thank you very much, Mike, for your feedback. Just one more question. I noticed five devices under /dev/rdsk:
c1t0d0p0
c1t0d0p1
c1t0d0p2
c1t0d0p3
c1t0d0p4
created by the system immediately after installation completed. I believe it's an x86 limitation (no more than 4 primary partitions). If I've got you
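For reference, the usual reading of those five nodes on Solaris x86 (this gloss is not part of the quoted exchange): p0 addresses the whole disk, while p1 through p4 address the four primary fdisk partitions.

    # Solaris x86 device naming, illustrated:
    #   c1t0d0p0      - the entire disk
    #   c1t0d0p1..p4  - the four primary fdisk partitions
    fdisk /dev/rdsk/c1t0d0p0    # partitioning tools operate on the whole-disk node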

Re: [zfs-discuss] How to create ZFS pool ?

2007-11-15 Thread Boris Derzhavets
Tim, I ran format before creating the third partition with fdisk. I rebooted SNV76 and ran format again. It keeps showing two disks, which actually are the two 160 GB SATA drives originally installed in the box. When I select "0", the first drive is properly shown with one NTFS and one SNV76 partition. It is one

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread Louwtjie Burger
> We are all anxiously awaiting data...
> -- richard

Would it be worthwhile to build a test case (sketched below)?
- Build a PostgreSQL database and import 1,000,000 (or more) rows of data.
- Run single and multiple large table-scan queries ... and watch the system; then,
- Update a column of each row in th
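A minimal sketch of that test case as shell commands driving psql (the table and column names are invented for illustration):

    # Create and load a test table with a million rows
    psql -c "CREATE TABLE t (id int, val int, pad text)"
    psql -c "INSERT INTO t SELECT i, i, repeat('x', 100) FROM generate_series(1, 1000000) i"

    # A large sequential table scan, timed
    time psql -c "SELECT count(*), sum(val) FROM t"

    # Update a column of every row, then rescan to see how the new layout reads
    psql -c "UPDATE t SET val = val + 1"
    time psql -c "SELECT count(*), sum(val) FROM t"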

Re: [zfs-discuss] ZFS + DB + "fragments"

2007-11-15 Thread Anton B. Rang
> When you have a striped storage device under a file system, then the
> database or file system's view of contiguous data is not contiguous on
> the media.

Right. That's a good reason to use fairly large stripes. (The primary limiting factor for stripe size is efficient parallel access; using