Re: [zfs-discuss] reboot when copying large amounts of data

2009-12-20 Thread arnaud
Hi folks, I was trying to load a large file in /tmp so that a process that parses it wouldn't be limited by a disk throughput bottleneck. My rig here has only 12GB of RAM and the file I copied is about 12GB as well. Before the copy finished, my system restarted. I'm pretty up to date, the sy…
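On OpenSolaris /tmp is swap-backed tmpfs, so a 12GB copy into it competes with the kernel and applications for the same physical memory. A minimal precaution, assuming the stock vfstab layout (the 4g cap is just an example figure):

  # /etc/vfstab: cap how much virtual memory tmpfs may consume
  swap - /tmp tmpfs - yes size=4g

  # before loading a huge file, check the headroom
  swap -s
  df -h /tmp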

Re: [zfs-discuss] How do I determine dedupe effectiveness?

2009-12-20 Thread Nick
> Wait...whoah, hold on. If snapshots reside within the confines of the pool, are you saying that dedup will also count what's contained inside the snapshots? I'm not sure why, but that thought is vaguely disturbing on some level. Then again (not sure how gurus feel on this point) b…
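For the thread's title question, the pool-wide ratio (snapshots included) is visible directly, and zdb can simulate dedup for data written before it was enabled. A quick sketch, assuming a pool named tank:

  zpool get dedupratio tank   # ratio across all datasets and snapshots
  zpool list tank             # the DEDUP column reports the same figure
  zdb -S tank                 # simulate dedup tables for existing data
                              # (walks the whole pool, so it can be slow)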

Re: [zfs-discuss] How do I determine dedupe effectiveness?

2009-12-20 Thread Colin Raven
On Sun, Dec 20, 2009 at 16:23, Nick wrote: > IMHO, snapshots are not a replacement for backups. Backups should definitely reside outside the system, so that if you lose your entire array, SAN, controller, etc., you can recover somewhere else. Snapshots, on the other hand, give you the a…

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-20 Thread Jack Kielsmeier
Ok, dump uploaded! Thanks for your upload. Your file has been stored as "/cores/redshirt-vmdump.0" on the Supportfiles service. Size of the file (in bytes): 1743978496. The file has a cksum of: 2878443682.
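The size and checksum the service quotes line up with what cksum(1) reports locally, which makes for an easy pre/post-upload sanity check (the path below is a guess at a default savecore location):

  cksum /var/crash/redshirt/vmdump.0
  # output format: <checksum> <size in bytes> <filename>
  # here that should read: 2878443682 1743978496 ...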

[zfs-discuss] raidz data loss stories?

2009-12-20 Thread Frank Cusack
The zfs best practices page (and all the experts in general) talk about MTTDL and raidz2 is better than raidz and so on. Has anyone here ever actually experienced data loss in a raidz that has a hot spare? Of course, I mean from disk failure, not from bugs or admin error, etc. -frank
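For reference, the raidz2-plus-spare layout the best-practices page recommends looks like this (device names are placeholders):

  zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 spare c0t6d0
  zpool set autoreplace=on tank   # resilver onto the spare automatically
  zpool status -x tank            # quick health check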

[zfs-discuss] On collecting data from "hangs"

2009-12-20 Thread Tim Haley
There seems to be a rash of posts lately where people are resetting or rebooting without getting any data, so I thought I'd post a quick overview on collecting crash dumps. If you think you've got a hang problem with ZFS and you want to gather data for someone to look at, then here are a few s…
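The usual sequence boils down to a handful of commands; a sketch, assuming a default dump device configuration:

  dumpadm          # verify a dump device and savecore directory are set
  dumpadm -y       # enable running savecore on reboot if it is disabled
  savecore -L      # grab a live dump from a still-running (hung-ish) box
  reboot -d        # or force a panic so a full crash dump gets written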

[zfs-discuss] iSCSI with Deduplication, is there any point?

2009-12-20 Thread Chris Scerbo
I've been using OpenSolaris for my home file server for a little over a year now. For most of that time I have used smb to share files out to my other systems. I also have a Windows Server 2003 DC and all my client systems are joined to the domain. Most of that time was a permissions nightmar…

Re: [zfs-discuss] iSCSI with Deduplication, is there any point?

2009-12-20 Thread Mattias Pantzare
> I have already run into one little snag that I don't see any way of overcoming with my chosen method. I've upgraded to snv_129 with high hopes for getting the most out of deduplication. But using iSCSI volumes I'm not sure how I can gain any benefit from it. The volumes are a set size…
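The set-size concern mostly comes down to whether the zvol carries a reservation; a sparse volume does not, so unwritten (and deduplicated) blocks cost nothing. A sketch, with hypothetical names:

  zfs create -V 100G tank/vol1      # regular zvol: refreservation=100G up front
  zfs create -s -V 100G tank/vol2   # sparse zvol: space consumed only as written
  zfs get volsize,refreservation,used tank/vol1 tank/vol2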

Re: [zfs-discuss] reboot when copying large amounts of data

2009-12-20 Thread Ian Collins
arnaud wrote: Hi folks, I was trying to load a large file in /tmp so that a process that parses it wouldn't be limited by a disk throughput bottleneck. My rig here has only 12GB of RAM and the file I copied is about 12GB as well. Before the copy finished, my system restarted. I'm pretty up…

Re: [zfs-discuss] iSCSI with Deduplication, is there any point?

2009-12-20 Thread Chris Scerbo
Cool thx, sounds like exactly what I'm looking for. I did a bit of reading on the subject and to my understanding I should... Create a volume of a size as large as I could possibly need. So, erring on the optimistic side, "zfs create -s -V 4000G tank/iscsi1". Then in Windows initialize and quick…
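Exporting that sparse zvol over iSCSI with COMSTAR (assumed here, rather than the older shareiscsi path) is then a few more steps; the GUID is whatever sbdadm prints:

  zfs create -s -V 4000G tank/iscsi1
  sbdadm create-lu /dev/zvol/rdsk/tank/iscsi1   # prints the new LU's GUID
  stmfadm add-view <GUID-from-sbdadm>           # expose the LU to initiators
  itadm create-target                           # if no iSCSI target exists yet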

Re: [zfs-discuss] iSCSI with Deduplication, is there any point?

2009-12-20 Thread Cyril Plisko
On Sun, Dec 20, 2009 at 9:24 PM, Chris Scerbo wrote: > Cool thx, sounds like exactly what I'm looking for. I did a bit of reading on the subject and to my understanding I should... Create a volume of a size as large as I could possibly need. So, erring on the optimistic side, "zfs create -s -…

[zfs-discuss] ARC not using all available RAM?

2009-12-20 Thread Tristan Ball
I've got an opensolaris snv_118 machine that does nothing except serve up NFS and ISCSI. The machine has 8G of ram, and I've got an 80G SSD as L2ARC. The ARC on this machine is currently sitting at around 2G, the kernel is using around 5G, and I've got about 1G free. I've pulled this from a co…
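The ARC's current size and ceiling are easy to read off directly, which helps separate "ARC won't grow" from "something else holds the memory". A sketch:

  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
  echo "::arc" | mdb -k       # same counters plus arc_c_min/arc_c_max and p
  echo "::memstat" | mdb -k   # page-level view of where the 8G actually is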

Re: [zfs-discuss] How do I determine dedupe effectiveness?

2009-12-20 Thread Nick Couchman
> The (one and only) point that I was making was that - like backups - snapshots should be kept "elsewhere", whether by using zfs-send, or zipping up the whole shebang and ssh'ing it someplace... "elsewhere" meaning beyond the pool. Rolling 15 minute and hourly snapshots... no, they stay local…
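The send-it-elsewhere part is one pipeline; a sketch with placeholder pool, dataset, and host names:

  zfs snapshot -r tank/home@offsite1
  zfs send -R tank/home@offsite1 | ssh backuphost zfs receive -dF backup
  # later, incrementally, against the previous snapshot:
  zfs snapshot -r tank/home@offsite2
  zfs send -R -i @offsite1 tank/home@offsite2 | ssh backuphost zfs receive -dF backup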

Re: [zfs-discuss] ZFS receive -dFv creates an extra "e" subdirectory..

2009-12-20 Thread Brandon High
On Sat, Dec 19, 2009 at 3:56 AM, Steven Sim wrote:
> r...@sunlight:/root# zfs list -r myplace/Docs
> NAME          USED   AVAIL  REFER  MOUNTPOINT
> myplace/Docs  3.37G  1.05T  3.33G  /export/home/admin/Docs/e/Docs  <--- *** Here is the extra "e/Docs"..
I saw a sim…
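When the dataset name looks right but the mountpoint carries the extra path element, checking where the mountpoint value was inherited or received from, and overriding it, is one way out; the paths below match the listing above:

  zfs get -r mountpoint myplace/Docs   # SOURCE column shows where it came from
  zfs set mountpoint=/export/home/admin/Docs myplace/Docs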

Re: [zfs-discuss] ARC not using all available RAM?

2009-12-20 Thread Richard Elling
On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote: I've got an opensolaris snv_118 machine that does nothing except serve up NFS and ISCSI. The machine has 8G of ram, and I've got an 80G SSD as L2ARC. The ARC on this machine is currently sitting at around 2G, the kernel is using around 5G…

Re: [zfs-discuss] ARC not using all available RAM?

2009-12-20 Thread Bob Friesenhahn
On Sun, 20 Dec 2009, Richard Elling wrote: Given that I don't believe there is any other memory pressure on the system, why isn't the ARC using that last 1G of ram? Simon says, "don't do that"? ;-) Yes, primarily since if there is no more memory immediately available, performance when sta…
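If the goal is the opposite (pin the ARC short of that last gigabyte rather than chase it), the ceiling can be set in /etc/system; it takes effect after a reboot, and the 6G figure is just an example:

  * /etc/system entry: cap the ARC at 6GB (value in bytes)
  set zfs:zfs_arc_max = 6442450944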

Re: [zfs-discuss] ARC not using all available RAM?

2009-12-20 Thread Tristan Ball
Bob Friesenhahn wrote: On Sun, 20 Dec 2009, Richard Elling wrote: Given that I don't believe there is any other memory pressure on the system, why isn't the ARC using that last 1G of ram? Simon says, "don't do that"? ;-) Yes, primarily since if there is no more memory immediately availa…

[zfs-discuss] FW: ARC not using all available RAM?

2009-12-20 Thread Tristan Ball
Oops, should have sent to the list... Richard Elling wrote: > On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote: >> I've got an opensolaris snv_118 machine that does nothing except serve up NFS and ISCSI. The machine has 8G of ram, and I've got an 80G SSD as L2ARC. The ARC on this…