Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Joe Little
On Thu, Jun 5, 2008 at 9:26 PM, Tim <[EMAIL PROTECTED]> wrote: > > > On Thu, Jun 5, 2008 at 11:12 PM, Joe Little <[EMAIL PROTECTED]> wrote: >> >> On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote: >> > >> > >> > On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh <[EMAIL PROTECTED]> >> > wrot

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mike Mackovitch
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote: > > On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote: > > > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: > >> > clients do not. Without per-filesystem mounts, 'df' on the client will not report correct da

[zfs-discuss] partitioning a disk with online zfs

2008-06-06 Thread Justin Vassallo
Hello, I have two disks, each with a partition mounted as swap and some space left unallocated. I would like to format the disk to create a partition from that unallocated space. This should be safe given I've done it several times on disks with UFS, but I'm not so sure with ZFS. Is there any
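A minimal sketch of the usual procedure, assuming the disk is c1t1d0 and the free space becomes slice 4 (both names hypothetical). Adding a slice in unallocated space does not touch the existing slices, so the swap partition and any ZFS slices on the disk are left alone:

  # check the current label for unallocated cylinders
  prtvtoc /dev/rdsk/c1t1d0s2

  # carve a new slice out of the free space (interactive:
  # partition -> select slice 4 -> set start/size -> label)
  format -d c1t1d0

  # then use the new slice, e.g. for a pool
  zpool create newpool c1t1d0s4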

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Richard Elling
Mattias Pantzare wrote: > 2008/6/6 Richard Elling <[EMAIL PROTECTED]>: > >> Richard L. Hamilton wrote: >> A single /var/mail doesn't work well for 10,000 users either. When you start getting into that scale of service provisioning, you might look at how the big boy

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mike Mackovitch
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote: > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: > > > > >> clients do not. Without per-filesystem mounts, 'df' on the client > > >> will not report correct data though. > > > > > > I expect that mirror mounts will be

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread eric kustarz
On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote: > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: >> clients do not. Without per-filesystem mounts, 'df' on the client will not report correct data though. >>> >>> I expect that mirror mounts will be coming Linux's way to

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Brian Hechinger
On Fri, Jun 06, 2008 at 04:52:45PM -0500, Nicolas Williams wrote: > > Mirror mounts take care of the NFS problem (with NFSv4). > > NFSv3 automounters could be made more responsive to server-side changes > is share lists, but hey, NFSv4 is the future. So basically it's just a waiting game at this

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Brian Hechinger
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: > > >> clients do not. Without per-filesystem mounts, 'df' on the client will not report correct data though. > > I expect that mirror mounts will be coming Linux's way too. > They should already have them: > http://blogs.su

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote: > > I expect that mirror mounts will be coming Linux's way too. > They should already have them: > http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts Even better.
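To make the mirror-mount behavior concrete, a hypothetical NFSv4 client transcript (the server and user names are made up):

  # mount only the parent filesystem
  mount -F nfs -o vers=4 server:/export/home /home

  # crossing into a child ZFS filesystem triggers a mirror mount,
  # so df now reports that filesystem's own usage
  ls /home/alice
  df -h /home/alice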

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread eric kustarz
On Jun 6, 2008, at 2:50 PM, Nicolas Williams wrote: > On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote: >> On Fri, 6 Jun 2008, Brian Hechinger wrote: >> >>> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote: - as separate filesystems, they have to be separa

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 08:51:13PM +0200, Mattias Pantzare wrote: > 2008/6/6 Richard Elling <[EMAIL PROTECTED]>: > > I was going to post some history of scaling mail, but I blogged it instead. > > http://blogs.sun.com/relling/entry/on_var_mail_and_quotas > > The problem with that argument is that

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 07:37:18AM -0400, Brian Hechinger wrote: > On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote: > > > > - as separate filesystems, they have to be separately NFS mounted > > I think this is the one that gets under my skin. If there would be a > way to "merge"

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote: > On Fri, 6 Jun 2008, Brian Hechinger wrote: > > > On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote: > >> > >> - as separate filesystems, they have to be separately NFS mounted > > > > I think this is the one that get

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Will Murnane
On Fri, Jun 6, 2008 at 16:23, Tom Buskey <[EMAIL PROTECTED]> wrote: > I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives > in a RAIDZ. ... > I get 550 MB/s I doubt this number a lot. That's almost 200 MB/s per disk (550/(N-1) = 550/3 ≈ 183), and drives I've seen are usually more i

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Peter Tribble
On Thu, Jun 5, 2008 at 2:11 PM, Erik Trimble <[EMAIL PROTECTED]> wrote: > > Quotas are great when, for administrative purposes, you want a large > number of users on a single filesystem, but to restrict the amount of > space for each. The primary place I can think of this being useful is > /var/ma

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Tim
On Fri, Jun 6, 2008 at 3:23 PM, Tom Buskey <[EMAIL PROTECTED]> wrote: > > pci or pci-x. Yes, you might see > > *SOME* loss in speed from a pci interface, but > > let's be honest, there aren't a whole lot of > > users on this list that have the infrastructure to > > use greater than 100MB/sec who

Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-06 Thread Paulo Soeiro
Hi Ricardo, I'll try that. Thanks (Obrigado) Paulo Soeiro On 6/5/08, Ricardo M. Correia <[EMAIL PROTECTED]> wrote: > > On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote: > > 6) Removed and attached the USB sticks: > > zpool status > pool: myPool > state: UNAVAIL > status: One or more devices
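Presumably the suggestion is the usual one for removable pools: export before unplugging and import after reattaching, rather than expecting the pool to survive a hot removal. A sketch using the pool name from the thread:

  # quiesce and export the pool before pulling the sticks
  zpool export myPool

  # ...remove and reattach the USB sticks...

  # then bring it back explicitly
  zpool import myPool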

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Tom Buskey
> pci or pci-x. Yes, you might see > *SOME* loss in speed from a pci interface, but > let's be honest, there aren't a whole lot of > users on this list that have the infrastructure to > use greater than 100MB/sec who are asking this sort > of question. A PCI bus should have no issues > pushing t

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mattias Pantzare
2008/6/6 Richard Elling <[EMAIL PROTECTED]>: > Richard L. Hamilton wrote: > >> A single /var/mail doesn't work well for 10,000 users either. When you start getting into that scale of service provisioning, you might look at how the big boys do it... Apple, Verizon, Google, Amazon

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Richard Elling
Richard L. Hamilton wrote: >> A single /var/mail doesn't work well for 10,000 users either. When you start getting into that scale of service provisioning, you might look at how the big boys do it... Apple, Verizon, Google, Amazon, etc. You should also look at e-mail systems des

Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-06 Thread Brandon High
On Fri, Jun 6, 2008 at 9:29 AM, John Kunze <[EMAIL PROTECTED]> wrote: > My organization is considering an RFP for MAID storage and we're > wondering about potential conflicts between MAID and ZFS. I had to look up MAID, first link Google gave me was http://www.closetmaid.com/ which doesn't seem ri

[zfs-discuss] Quotas Locking down a system

2008-06-06 Thread Walter Faleiro
Folks, I am running into an issue with a quota-enabled ZFS system. I tried to check out the ZFS properties but could not figure out a workaround. I have a file system /data/project/software which has a 250G quota set. There are no snapshots enabled for this system. When the quota is reached on this,
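If the symptom is the usual one (even rm failing with "No space left on device" once the quota is hit), the commonly suggested workaround is to truncate a large file in place first, which needs no new blocks under the quota, and only then remove it. A sketch with a hypothetical file name:

  cat /dev/null > /data/project/software/bigfile.log
  rm /data/project/software/bigfile.log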

Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-06 Thread Mark A. Carlson
I think most MAID is sold as a (misguided IMHO) replacement for Tape, not as a Tier 1 kind of storage. YMMV. -- mark John Kunze wrote: > My organization is considering an RFP for MAID storage and we're > wondering about potential conflicts between MAID and ZFS. > > We want MAID's power management

[zfs-discuss] ZFS conflict with MAID?

2008-06-06 Thread John Kunze
My organization is considering an RFP for MAID storage and we're wondering about potential conflicts between MAID and ZFS. We want MAID's power management benefits but are concerned that what we understand to be ZFS's use of dynamic striping across devices with filesystem metadata replication and

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Bob Friesenhahn
On Fri, 6 Jun 2008, Brian Hechinger wrote: > On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote: >> >> - as separate filesystems, they have to be separately NFS mounted > > I think this is the one that gets under my skin. If there would be a > way to "merge" a filesystem into a pare

Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Aubrey Li
On Fri, Jun 6, 2008 at 10:41 PM, Brandon High <[EMAIL PROTECTED]> wrote: > On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li <[EMAIL PROTECTED]> wrote: >> Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept >> a directory as an output. So I use zfs send and zfs receive: > > Really

Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-06 Thread Andy Lubel
That was it!

hpux-is-old.com -> nearline.host   NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com   NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host   NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com   NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host   NFS C GETATTR3 FH=F

Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Brandon High
On Thu, Jun 5, 2008 at 11:37 PM, Albert Lee <[EMAIL PROTECTED]> wrote: > Raw disk images are, uh, nice and all, but I don't think that was what > Aubrey had in mind when asking zfs-discuss about a backup solution. This > is 2008, not 1960. But retro is in! The point that I didn't really make is t

Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Brandon High
On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li <[EMAIL PROTECTED]> wrote: > Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept > a directory as an output. So I use zfs send and zfs receive: Really? zfs send just gives you a byte stream, and the shell redirects it to the file
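To make the distinction concrete, a sketch (the snapshot name backup1 and pool names are made up; note zfs send also wants a snapshot, not a live filesystem): the target of a shell redirect must be a plain file, while a dataset target needs zfs receive.

  # dump the stream into a file on the mounted USB drive
  zfs snapshot tank/root@backup1
  zfs send tank/root@backup1 > /mnt/root.zfs

  # or pipe it straight into a pool on the USB drive
  zfs send tank/root@backup1 | zfs recv usbpool/root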

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Darren J Moffat
Brian Hechinger wrote: > On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote: >> - as separate filesystems, they have to be separately NFS mounted > > I think this is the one that gets under my skin. If there would be a > way to "merge" a filesystem into a parent filesystem for the p

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Richard L. Hamilton
[...] > > That's not to say that there might not be other problems with scaling to thousands of filesystems. But you're certainly not the first one to test it. > > For cases where a single filesystem must contain files owned by multiple users (/var/mail being one example), old

Re: [zfs-discuss] Per-user home filesystems and OS-X Leopard anomaly

2008-06-06 Thread Richard L. Hamilton
> I encountered an issue that people using OS-X systems as NFS clients need to be aware of. While not strictly a ZFS issue, it may be encountered most often by ZFS users since ZFS makes it easy to support and export per-user filesystems. The problem I encountered was when using

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Brian Hechinger
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote: > > - as separate filesystems, they have to be separately NFS mounted I think this is the one that gets under my skin. If there would be a way to "merge" a filesystem into a parent filesystem for the purposes of NFS, that would be
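Short of mirror mounts, the stock partial answer is the automounter: per-user filesystems still mount separately, but on demand and invisibly. A sketch of the classic wildcard map (the server name is hypothetical):

  # /etc/auto_master
  /home   auto_home

  # /etc/auto_home
  *       server:/export/home/&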

Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-06 Thread Brian Hechinger
On Thu, Jun 05, 2008 at 10:45:09PM -0700, Vincent Fox wrote: > Way to drag my post into the mud there. > > Can we just move on? Absolutely not! Just be glad you never had to create a swap file on an NFS mount just to be able to build software on your machine! Yes, I really did have to do that.

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Marc Bevand
Richard L. Hamilton <[EMAIL PROTECTED]> writes: > But I suspect to some extent you get what you pay for; the throughput on the higher-end boards may well be a good bit higher. Not really. Nowadays, even the cheapest controllers, processors & mobos are EASILY capable of handling the platter-speed throug

Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Richard L. Hamilton
If I read the man page right, you might only have to keep a minimum of two on each side (maybe even just one on the receiving side), although I might be tempted to keep an extra just in case; say near current, 24 hours old, and a week old (space permitting for the larger interval of the last one).

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Marc Bevand
Buy a 2-port SATA II PCI-E x1 SiI3132 controller ($20). The solaris driver is very stable. Or, a solution I would personally prefer, don't use a 7th disk. Partition each of your 6 disks with a small ~7-GB slice at the beginning and the rest of the disk for ZFS. Install the OS in one of the sma
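A sketch of that layout, with hypothetical device names: the small s0 slices hold the OS (one active install, the rest spare or mirrored), and the large s7 slices form the pool.

  zpool create tank raidz c0t0d0s7 c0t1d0s7 c0t2d0s7 \
                          c0t3d0s7 c0t4d0s7 c0t5d0s7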

Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Peter Karlsson
Or you could use Tim Foster's ZFS snapshot service: http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with /peter On Jun 6, 2008, at 14:07, Tobias Exner wrote: > Hi, > > I'm thinking about the following situation and I know there are some > things I have to understand: > > I want to use

Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Peter Karlsson
Hi Tobias, I did this for a large lab we had last month, I have it set up something like this:

zfs snapshot [EMAIL PROTECTED]
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh server2 zfs recv rep_pool
ssh zfs destroy [EMAIL PROTECTED]
ssh zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
zfs

Re: [zfs-discuss] Can't rm file when "No space left on device"...

2008-06-06 Thread Richard L. Hamilton
> On Thu, Jun 05, 2008 at 09:13:24PM -0600, Keith Bierman wrote: > > On Jun 5, 2008, at 8:58 PM, Brad Diggs wrote: > > > Hi Keith, > > > Sure you can truncate some files but that effectively corrupts the files in our case and would cause more harm than good. The onl

Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Aubrey Li
Hi Erik, Thanks for your instruction, but let me dig into details. On Thu, Jun 5, 2008 at 10:04 PM, Erik Trimble <[EMAIL PROTECTED]> wrote: > > Thus, you could do this: > > (1) Install system A No problem, :-) > (2) hook USB drive to A, and mount it at /mnt I created a zfs pool, and mount it at
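For reference, a guess at what that pool-on-USB step looks like (the pool and device names are assumed), using -m to put the mountpoint at /mnt:

  zpool create -m /mnt usbpool c2t0d0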

Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Richard L. Hamilton
I don't presently have any working x86 hardware, nor do I routinely work with x86 hardware configurations. But it's not hard to find previous discussion on the subject: http://www.opensolaris.org/jive/thread.jspa?messageID=96790 for example... Also, remember that SAS controllers can usually also

[zfs-discuss] zfs incremental-forever

2008-06-06 Thread Tobias Exner
Hi, I'm thinking about the following situation and I know there are some things I have to understand: I want to use two Sun servers with the same amount of storage capacity on both of them, and I want to replicate the filesystem (ZFS) incrementally twice a day from the first to the second
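For the twice-a-day part, a cron sketch (the hours and the script name are made up); the script would do the snapshot / send -i / rename rotation shown elsewhere in this thread:

  # root's crontab on the first server: replicate at 04:00 and 16:00
  0 4,16 * * * /usr/local/bin/zfs-replicate.sh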