[zfs-discuss] Using a zvol from your rpool as zil for another zpool

2010-07-01 Thread Ray Van Dolson
We have a server with a couple X-25E's and a bunch of larger SATA disks. To save space, we want to install Solaris 10 (our install is only about 1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL attached to a zpool created from the SATA drives. Currently we do this by install
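
For reference, a minimal sketch of that setup (the 8G size and the pool name 'tank' are made up for illustration):

    # Carve a small zvol out of the free space on the SSD root pool
    zfs create -V 8G rpool/zil0
    # Attach it to the SATA pool as a separate log (ZIL) device
    zpool add tank log /dev/zvol/dsk/rpool/zil0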

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-07-01 Thread Neil Perrin
On 07/01/10 22:33, Erik Trimble wrote: On 7/1/2010 9:23 PM, Geoff Nordli wrote: Hi Erik. Are you saying the DDT will automatically look to be stored in an L2ARC device if one exists in the pool, instead of using ARC? Or is there some sort of memory pressure point where the DDT gets moved fr

Re: [zfs-discuss] zfs - filesystem versus directory

2010-07-01 Thread Malachi de Ælfweald
I created a zpool called 'data' from 7 disks. I created zfs filesystems on the zpool for each Xen vm. I can choose to recursively snapshot all of 'data', or I can choose to snapshot the individual 'directories'. If you use mkdir, I don't believe you can snapshot/restore at that level. Malachi de Ælfweal
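
A rough sketch of the layout being described (vm names hypothetical):

    # One filesystem per Xen vm under the 'data' pool
    zfs create data/vm1
    zfs create data/vm2
    # Snapshot everything at once...
    zfs snapshot -r data@nightly
    # ...or just one vm
    zfs snapshot data/vm1@before-upgrade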

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-07-01 Thread Erik Trimble
On 7/1/2010 9:23 PM, Geoff Nordli wrote: Hi Erik. Are you saying the DDT will automatically look to be stored in an L2ARC device if one exists in the pool, instead of using ARC? Or is there some sort of memory pressure point where the DDT gets moved from ARC to L2ARC? Thanks, Geoff Go

Re: [zfs-discuss] Dedup RAM requirements, vs. L2ARC?

2010-07-01 Thread Geoff Nordli
> Actually, I think the rule-of-thumb is 270 bytes/DDT entry. It's 200 bytes of ARC for every L2ARC entry. DDT doesn't count for this ARC space usage. E.g.: I have 1TB of 4k files that are to be deduped, and it turns out that I have about a 5:1 dedup ratio. I'd also lik
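
Worked through, the quoted example comes out roughly as follows (a back-of-envelope sketch only, using the 270 bytes/entry figure):

    # 1 TiB of 4 KiB blocks at a 5:1 dedup ratio, times 270 bytes/DDT entry, in MiB
    echo $(( (1024 * 1024 * 1024 * 1024 / 4096 / 5) * 270 / 1024 / 1024 ))
    # prints 13823, i.e. roughly 13.5 GiB of ARC/L2ARC for the DDT alone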

[zfs-discuss] zfs - filesystem versus directory

2010-07-01 Thread Peter Taps
Folks, While going through a quick tutorial on zfs, I came across a way to create a zfs filesystem within a filesystem. For example: # zfs create mytest/peter where mytest is a zpool filesystem. When done this way, the new filesystem has the mount point as /mytest/peter. When does it make sense
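
A nested filesystem pays off as soon as you want per-dataset properties or snapshots; a small sketch using the names from the example (property choices are arbitrary):

    # Child mounts at /mytest/peter automatically
    zfs create mytest/peter
    # Properties can now differ from the parent
    zfs set compression=on mytest/peter
    zfs set quota=10G mytest/peter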

Re: [zfs-discuss] confused about lun alignment

2010-07-01 Thread Derek Olsen
doh! It turns out the host in question is actually a Solaris 10 update 6 host. It appears that a Solaris 10 update 8 host actually sets the start sector at 256. So to simplify the question: if I'm using ZFS with an EFI label and the full disk, do I even need to worry about lun alignment? I was a
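
One quick way to check where slice 0 actually starts on a given disk (device name hypothetical):

    # The "First Sector" column shows the starting sector of each slice
    prtvtoc /dev/rdsk/c8t1d0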

Re: [zfs-discuss] zpool on raw disk. Do I need to format?

2010-07-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Peter Taps > > I am learning more about zfs storage. It appears a zfs pool can be created on a raw disk. There is no need to create any partitions, etc. on the disk. Does this mean there is

Re: [zfs-discuss] Help destroying phantom clone (zfs filesystem)

2010-07-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Alxen4 > > It looks like I have some leftovers of old clones that I cannot delete: > > Clone name is tank/WinSrv/Latest > > I'm trying: > > zfs destroy -f -R tank/WinSrv/Latest > cannot uns

Re: [zfs-discuss] Checksum errors with SSD.

2010-07-01 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Benjamin Grogg > > When I scrub my pool I got a lot of checksum errors : > > NAME STATE READ WRITE CKSUM > rpool DEGRADED 0 0 5 > c8d0s0 DEGRA

Re: [zfs-discuss] zpool on raw disk. Do I need to format?

2010-07-01 Thread Peter Taps
Awesome. Thank you, Cindy. Regards, Peter -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] confused about lun alignment

2010-07-01 Thread Derek Olsen
Folks. My env is Solaris 10 update 8 amd64. Does LUN alignment matter when I'm creating zpools on disks (LUNs) with EFI labels and providing zpool the entire disk? I recently read some sun/oracle docs and blog posts about adjusting the starting sector for partition 0 (in format -e) to a

Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-07-01 Thread Andrew Jones
Victor, A little more info on the crash, from the messages file is attached here. I have also decompressed the dump with savecore to generate unix.0, vmcore.0, and vmdump.0. Jun 30 19:39:10 HL-SAN unix: [ID 836849 kern.notice] Jun 30 19:39:10 HL-SAN ^Mpanic[cpu3]/thread=ff0017909c60: Jun
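
For anyone following along, the decompression step mentioned is typically (path hypothetical):

    # Expand the compressed crash dump into unix.0 and vmcore.0
    savecore -vf /var/crash/HL-SAN/vmdump.0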

Re: [zfs-discuss] zpool on raw disk. Do I need to format?

2010-07-01 Thread Cindy Swearingen
Even easier, use the zpool create command to create a pool on c8t1d0, using the whole disk. Try this: # zpool create MyData c8t1d0 cs On 07/01/10 16:01, Peter Taps wrote: Folks, I am learning more about zfs storage. It appears a zfs pool can be created on a raw disk. There is no need to cre
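
And to confirm it worked (same names as above):

    # Whole-disk pool; ZFS writes an EFI label itself, no format step needed
    zpool create MyData c8t1d0
    zpool status MyData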

[zfs-discuss] zpool on raw disk. Do I need to format?

2010-07-01 Thread Peter Taps
Folks, I am learning more about zfs storage. It appears a zfs pool can be created on a raw disk. There is no need to create any partitions, etc. on the disk. Does this mean there is no need to run "format" on a raw disk? I have added a new disk to my system. It shows up as /dev/rdsk/c8t1d0s0. Do

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-07-01 Thread Erik Trimble
On 7/1/2010 12:23 PM, Lo Zio wrote: Thanks roy, I read a lot around and also was thinking it was a dedup-related problem. Although I did not find any indication of how much RAM is enough, and never found anything saying "Do not use dedup, it will definitely crash your server". I'm using a Dell

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-07-01 Thread Roy Sigurd Karlsbakk
- Original Message - > Thanks roy, I read a lot around and also was thinking it was a dedup-related problem. Although I did not find any indication of how much RAM is enough, and never found anything saying "Do not use dedup, it will definitely crash your server". I'm using a Dell Xeo

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-07-01 Thread Lo Zio
Thanks roy, I read a lot around and also was thinking it was a dedup-related problem. Although I did not find any indication of how much RAM is enough, and never found anything saying "Do not use dedup, it will definitely crash your server". I'm using a Dell Xeon with 4 Gb of RAM, maybe it is no

Re: [zfs-discuss] NexentaStor Community edition 3.0.3 released

2010-07-01 Thread Oliver Seidel
Hello, this may not apply to your machine. I have two changes to your setup: * OpenSolaris instead of Nexenta * DL585G1 instead of your DL380G4 Here's my problem: reproducible crash after a certain time (1:30h in my case). Explanation: the HP machine has enterprise features (ECC RAM) and perfor

Re: [zfs-discuss] Mix SAS and SATA drives?

2010-07-01 Thread Roy Sigurd Karlsbakk
- Original Message - > > As the 15k drives are faster seek-wise (and possibly faster for linear I/O), you may want to separate them into different VDEVs or even pools, but then, it's quite impossible to give a "correct" answer unless knowing what it's going to be used for.

Re: [zfs-discuss] Mix SAS and SATA drives?

2010-07-01 Thread Ian D
> As the 15k drives are faster seek-wise (and possibly faster for linear I/O), you may want to separate them into different VDEVs or even pools, but then, it's quite impossible to give a "correct" answer unless knowing what it's going to be used for. Mostly database duty. > Also, using 10

Re: [zfs-discuss] Mix SAS and SATA drives?

2010-07-01 Thread Roy Sigurd Karlsbakk
- Original Message - > Another question... We're building a ZFS NAS/SAN out of the following JBODs we already own: 2x 15x 1000GB SATA, 3x 15x 750GB SATA, 2x 12x 600GB SAS 15K, 4x 15x 300GB SAS 15K. That's a lot of spindles we'd like to benefit from, but our assumption

Re: [zfs-discuss] Mix SAS and SATA drives?

2010-07-01 Thread Ian D
Sorry for the formatting, that's: 2x 15x 1000GB SATA, 3x 15x 750GB SATA, 2x 12x 600GB SAS 15K, 4x 15x 300GB SAS 15K ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/list

[zfs-discuss] Mix SAS and SATA drives?

2010-07-01 Thread Ian D
Another question... We're building a ZFS NAS/SAN out of the following JBODs we already own: 2x 15x 1000GB SATA, 3x 15x 750GB SATA, 2x 12x 600GB SAS 15K, 4x 15x 300GB SAS 15K. That's a lot of spindles we'd like to benefit from, but our assumption is that we should split these in two separate pools, on
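
One possible split along those lines (device names and vdev widths are made up; real layouts need more thought):

    # Fast pool from the 15K SAS shelves
    zpool create fast mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    # Capacity pool from the SATA shelves
    zpool create bulk raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0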

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2010-07-01 Thread Roy Sigurd Karlsbakk
> On a slightly different but related topic, anyone have advice on how to connect up my drives? I've got room for 20 pool drives in the case. I'll have two AOC-USAS-L8i cards along with cables to connect 16 SATA2 drives. The motherboard has 6 SATA2 connectors plus 2 SATA3 connectors. I was

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-07-01 Thread Roy Sigurd Karlsbakk
- Original Message - > I also have this problem: with 134, if I delete big snapshots the server hangs, only responding to ping. I also have the ZVOL issue. Any news about having them solved? In my case this is a big problem since I'm using osol as a file server... Are you using ded

Re: [zfs-discuss] Expected throughput

2010-07-01 Thread Roy Sigurd Karlsbakk
Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we get about 80MB/s in sequential read or write. We're running local tests on the server itself (no network involved). Is that what we should be expecting? It seems slow to me. Please read the ZFS best practices guide

Re: [zfs-discuss] ZFS on external iSCSI storage

2010-07-01 Thread Roy Sigurd Karlsbakk
> The best would be to export the drives in JBOD style, one "array" per drive. If you rely on the Promise RAID, you won't be able to recover from "silent" errors. I'm in the progress of moving from a NexSAN RAID to a JBOD-like style just because of that (we had data corruption on t

[zfs-discuss] Expected throughput

2010-07-01 Thread Ian D
Hi! We've put 28x 750GB SATA drives in a RAIDZ2 pool (a single vdev) and we get about 80MB/s in sequential read or write. We're running local tests on the server itself (no network involved). Is that what we should be expecting? It seems slow to me. Thanks
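
As often noted on this list, a single 28-wide raidz2 vdev gives roughly one disk's worth of random IOPS; the usual suggestion is several narrower vdevs. A sketch (device names hypothetical):

    # Four 7-disk raidz2 vdevs instead of one 28-disk vdev
    zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0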

Re: [zfs-discuss] ZFS on external iSCSI storage

2010-07-01 Thread Roy Sigurd Karlsbakk
- Original Message - > I'm new with ZFS, but I have had good success using it with raw physical disks. One of my systems has access to an iSCSI storage target. The underlying physical array is in a proprietary disk storage device from Promise. So the question is, when building an OpenS

[zfs-discuss] Help destroying phantom clone (zfs filesystem)

2010-07-01 Thread Alxen4
It looks like I have some leftovers of old clones that I cannot delete: Clone name is tank/WinSrv/Latest I'm trying: zfs destroy -f -R tank/WinSrv/Latest cannot unshare 'tank/WinSrv/Latest': path doesn't exist: unshare(1M) failed Please help me to get rid of this garbage. Thanks a lot. -- Th
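
One workaround commonly suggested for this error is to clear the share state first (not guaranteed to apply here):

    # Stop ZFS trying to unshare a path that no longer exists
    zfs set sharenfs=off tank/WinSrv/Latest
    zfs destroy -f -R tank/WinSrv/Latest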

Re: [zfs-discuss] Checksum errors with SSD.

2010-07-01 Thread Cindy Swearingen
Hi Benjamin, I'm not familiar with this disk, but you can see from the fmstat output that the disk, system-event, and zfs-related diagnostics are working overtime about something, and it's probably this disk. You can get further details from fmdump -eV and you will probably see lots of checksum errors on this di
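
The diagnostics mentioned, for anyone wanting to follow along:

    # Summarize fault-management module activity
    fmstat
    # Dump the FMA error log in detail; look for checksum errors against the disk
    fmdump -eV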

[zfs-discuss] ZFS on external iSCSI storage

2010-07-01 Thread Mark
I'm new with ZFS, but I have had good success using it with raw physical disks. One of my systems has access to an iSCSI storage target. The underlying physical array is in a proprietary disk storage device from Promise. So the question is, when building an OpenSolaris host to store its data on a
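
If the Promise box can present each drive as its own LUN, the host side looks roughly like this (target address and device names hypothetical):

    # Point the initiator at the target and enable sendtargets discovery
    iscsiadm add discovery-address 192.168.0.10:3260
    iscsiadm modify discovery --sendtargets enable
    # Make the new LUNs appear as disk devices
    devfsadm -i iscsi
    # Build the pool from individual LUNs so ZFS owns the redundancy
    zpool create tank raidz c3t0d0 c3t1d0 c3t2d0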

Re: [zfs-discuss] optimal ZFS filesystem layout on JBOD

2010-07-01 Thread Marty Scholes
Joachim Worringen wrote: > Greetings, we are running a few databases of currently 200GB (growing) in total for data warehousing: - new data via INSERTs for (up to) millions of rows per day; sometimes with UPDATEs - most data in a single table (=> 10 to 100s of millions of rows) - q

[zfs-discuss] Checksum errors with SSD.

2010-07-01 Thread Benjamin Grogg
Dear Forum I use a KINGSTON SNV125-S2/30GB SSD on an ASUS M3A78-CM Motherboard (AMD SB700 Chipset). SATA Type (in BIOS) is SATA. OS: SunOS homesvr 5.11 snv_134 i86pc i386 i86pc When I scrub my pool I got a lot of checksum errors: NAME STATE READ WRITE CKSUM rpool DEGRA

Re: [zfs-discuss] NexentaStor Community edition 3.0.3 released

2010-07-01 Thread David Magda
On Jul 1, 2010, at 10:39, Pasi Kärkkäinen wrote: basically 5-30 seconds after the login prompt shows up on the console the server will reboot due to a kernel crash. the error seems to be about the broadcom nic driver.. Is this a known bug? Please contact Nexenta via their support infrastructure (web

Re: [zfs-discuss] Announce: zfsdump

2010-07-01 Thread Edward Ned Harvey
> From: Asif Iqbal [mailto:vad...@gmail.com] > > Currently, to speed up zfs send | zfs recv I am using mbuffer. It moves the data a lot faster than using netcat (or ssh) as the transport method. Yup, this works because network and disk latency can both be variable. So without buffering, your
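
The pipeline being described looks roughly like this (host, port, and buffer sizes are arbitrary):

    # Receiving side: listen, buffer, feed zfs recv
    mbuffer -s 128k -m 1G -I 9090 | zfs recv tank/backup
    # Sending side: buffer the stream and push it over the network
    zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090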

Re: [zfs-discuss] NexentaStor Community edition 3.0.3 released

2010-07-01 Thread Pasi Kärkkäinen
On Tue, Jun 15, 2010 at 10:57:53PM +0530, Anil Gulecha wrote: > Hi All, > > On behalf of NexentaStor team, I'm happy to announce the release of > NexentaStor Community Edition 3.0.3. This release is the result of the > community efforts of Nexenta Partners and users. > > Changes over 3.0.2 includ

[zfs-discuss] optimal ZFS filesystem layout on JBOD

2010-07-01 Thread Joachim Worringen
Greetings, we are running a few databases of currently 200GB (growing) in total for data warehousing: - new data via INSERTs for (up to) millions of rows per day; sometimes with UPDATEs - most data in a single table (=> 10 to 100s of millions of rows) - queries SELECT subsets of this table via a
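
For a load like this, recordsize is the knob most often discussed; a hedged sketch (names hypothetical, 8k assumes the database page size):

    # Match recordsize to the DB page size before loading data
    zfs create tank/db
    zfs set recordsize=8k tank/db
    # A separate log device helps the synchronous INSERT traffic
    zpool add tank log c5t0d0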

Re: [zfs-discuss] zfs destroy hangs machine if snapshot exists- workaround found

2010-07-01 Thread Lo Zio
I also have this problem: with 134, if I delete big snapshots the server hangs, only responding to ping. I also have the ZVOL issue. Any news about having them solved? In my case this is a big problem since I'm using osol as a file server... Thanks -- This message posted from opensolaris.org __

[zfs-discuss] dedup accounting anomaly / dedup experiments

2010-07-01 Thread Lutz Schumann
Hello list, I wanted to test deduplication a little and did an experiment. My question was: can I dedupe infinitely or is there an upper limit? So for that I did a very basic test. - I created a ramdisk-pool (1GB) - enabled dedup and - wrote zeros to it (in one single file) until an error is r
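
The experiment can be reproduced roughly like this (sizes and names as in the post, otherwise hypothetical):

    # Back a pool with a 1GB ramdisk
    ramdiskadm -a rd1 1g
    zpool create rdpool /dev/ramdisk/rd1
    zfs set dedup=on rdpool
    # One file of zeros, written until the pool errors out
    dd if=/dev/zero of=/rdpool/zeros bs=128k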

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2010-07-01 Thread Jay Heyl
> I plan on removing the second USAS-L8i and connecting all 16 drives to the first USAS-L8i when I need more storage capacity. I have no doubt that it will work as intended. I will report to the list otherwise. I'm a little late to the party here. First, I'd like to thank those pioneers