Re: [zfs-discuss] how to convert zio->io_offset to disk block number?

2009-06-25 Thread zhihui Chen
Thanks. After fixing the following two issues I can get the right value: (1) dividing the offset 0x657800 (6649856) by 512 and using that as the iseek value, and (2) running the dd command on device c2t0d0s0, not c2t0d0. Zhihui 2009/6/26 m...@bruningsystems.com > Hi Zhihui Chen, > > zhihui Chen wrote: > >> Find that zio-
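For reference, a minimal sketch of the dd invocation implied above; the raw slice device node and the count of 8 sectors are illustrative assumptions:

6649856 / 512 = 12988, so:
# dd if=/dev/rdsk/c2t0d0s0 bs=512 iseek=12988 count=8 of=/tmp/block.bin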

Re: [zfs-discuss] how to convert zio->io_offset to disk block number?

2009-06-25 Thread zhihui Chen
I find that zio->io_offset is the absolute offset into the device, not in sector units. And if we need to use zdb -R to dump the block, we should use the offset (zio->io_offset-0x40). 2009/6/25 zhihui Chen > I use the following dtrace script to trace the position of one file on zfs: > > #!/usr/sbin/dtrace -qs

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Erik Trimble
Simon Breden wrote: Miles, thanks for helping clear up the confusion surrounding this subject! My decision is now as above: for my existing NAS to leave the pool as-is, and seek a 2+ SATA port card for the 2-drive mirror for 2 x 30GB SATA boot SSDs that I want to add. For the next NAS build l

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Erik Trimble
Simon Breden wrote: I think the confusion is because the 1068 can do "hardware" RAID, it can and does write its own labels, as well as reserve space for replacements of disks with slightly different sizes. But that is only one mode of operation. So, it sounds like if I use a 1068-based d

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
OK, thanks James.

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread James C. McPherson
On Thu, 25 Jun 2009 16:11:04 -0700 (PDT) Simon Breden wrote: > That sounds even better :) > > So what's the procedure to create a zpool using the 1068? same as any other device: # zpool create poolname vdev vdev vdev > Also, any special 'tricks /tips' / commands required for using a 1068-ba
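A concrete sketch with hypothetical device names: a mirrored pool on two disks presented by a 1068-based controller.

# zpool create tank mirror c2t0d0 c2t1d0
# zpool status tank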

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
That sounds even better :) So what's the procedure to create a zpool using the 1068? Also, any special 'tricks /tips' / commands required for using a 1068-based SAS/SATA device? Simon

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
> Isn't that section of the evil tuning guide you're quoting actually about > checking if the NVRAM/driver connection is working right or not? Miles, yes, you are correct. I just thought it was interesting reading about how syncs and such work within ZFS. Regarding my NFS test, you remind me tha

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Eric D. Mudama
On Fri, Jun 26 at 8:55, James C. McPherson wrote: On Thu, 25 Jun 2009 15:43:17 -0700 (PDT) Simon Breden wrote: > I think the confusion is because the 1068 can do "hardware" RAID, > it can and does write its own labels, as well as reserve space > for replacements of disks with slightly differe

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread James C. McPherson
On Thu, 25 Jun 2009 15:43:17 -0700 (PDT) Simon Breden wrote: > > I think the confusion is because the 1068 can do "hardware" RAID, it > can and does write its own labels, as well as reserve space for replacements > of disks with slightly different sizes. But that is only one mode of > operation.
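One hedged way to confirm a 1068 is not running in that hardware-RAID mode is the Solaris raidctl utility, assuming it supports the card in question; with no volumes configured the disks are passed through as plain targets:

# raidctl -l
(no RAID volumes listed means the controller is acting as a plain HBA)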

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
> I think the confusion is because the 1068 can do "hardware" RAID, it can and does write its own labels, as well as reserve space for replacements of disks with slightly different sizes. But that is only one mode of operation. So, it sounds like if I use a 1068-based device, and I *don't* want i

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
Miles, thanks for helping clear up the confusion surrounding this subject! My decision is now as above: for my existing NAS to leave the pool as-is, and seek a 2+ SATA port card for the 2-drive mirror for 2 x 30GB SATA boot SSDs that I want to add. For the next NAS build later on this summer, I

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
Thanks to all for the efforts, but I was able to import the zpool after disabling the first HBA card. I do not know the reason for this, but now the pool is imported and no disk was lost :-)

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Richard Elling
Miles Nordin wrote: "sb" == Simon Breden writes: sb> The situation regarding lack of open source drivers for these sb> LSI 1068/1078-based cards is quite scary. meh I dunno. The amount of confusion is a little scary, I guess. sb> And did I understand you correctly w

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-25 Thread Nicholas Lee
On Fri, Jun 26, 2009 at 4:11 AM, Eric D. Mudama wrote: > True. In $ per sequential GB/s, rotating rust still wins by far. > However, your comment about all flash being slower than rotating at > sequential writes was mistaken. Even at 10x the price, if you're > working with a dataset that needs r

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Miles Nordin
> "sb" == Simon Breden writes: sb> The situation regarding lack of open source drivers for these sb> LSI 1068/1078-based cards is quite scary. meh I dunno. The amount of confusion is a little scary, I guess. sb> And did I understand you correctly when you say that these LSI

Re: [zfs-discuss] auto snapshots 0.12

2009-06-25 Thread Richard Elling
Thanks Tim! -- richard

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Simon Breden
The situation regarding lack of open source drivers for these LSI 1068/1078-based cards is quite scary. And did I understand you correctly when you say that these LSI 1068/1078 drivers write labels to drives, meaning you can't move drives from an LSI controlled array to another arbitrary array

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
Regarding the path: my other system has the same one and it's working fine; see the output below:

# zpool status
  pool: emcpool1
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
The zpool cache is in /etc/zfs/zpool.cache, or it can be viewed with zdb -C, but in my case it's blank :-(
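A quick sketch of checking both, assuming a standard install:

# ls -l /etc/zfs/zpool.cache
# zdb -C
(zdb -C dumps the cached pool configurations; an empty or missing cache file explains the blank output)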

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Miles Nordin
> "sm" == Scott Meilicke writes: sm> Some storage will flush their caches despite the fact that the sm> NVRAM protection makes those caches as good as stable sm> storage. [...] ZFS also issues a flush every time an sm> application requests a synchronous write (O_DSYNC, fsync,

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Miles Nordin
> "jl" == James Lever writes: jl> I thought they were both closed source yes, both are closed source / proprietary. If you are really confused and not just trying to pick a dictionary fight, I can start saying ``closed source / proprietary'' on Solaris lists from now on. On Linux list

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Daniel J. Priem
Ketan writes:
> thats the problem this system has just 2 LUNs assigned and both are present
> as you can see from format output
>
> 10. emcpower0a
>     /pseudo/e...@0
> 11. emcpower1a
>     /pseudo/e...@1
Ahhh, so the path has changed. Your old path was emcpower0c; now you have emcpower0a and emcpow

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
That's the problem: this system has just 2 LUNs assigned, and both are present, as you can see from the format output:

10. emcpower0a
    /pseudo/e...@0
11. emcpower1a
    /pseudo/e...@1

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Daniel J. Priem
Ketan writes:
> no idea path changed or not .. but following is output from my format .. and
> nothing has changed
>
> AVAILABLE DISK SELECTIONS:
>   0. c1t0d0
>      /p...@0/p...@0/p...@2/s...@0/s...@0,0
>   1. c1t1d0
>      /p...@0/p...@0/p...@2/s...@0/s...@1,0
>

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
No idea whether the path changed or not, but the following is the output from my format, and nothing has changed:

AVAILABLE DISK SELECTIONS:
  0. c1t0d0
     /p...@0/p...@0/p...@2/s...@0/s...@0,0
  1. c1t1d0
     /p...@0/p...@0/p...@2/s...@0/s...@1,0
  2. c3t5006016841E0A08Dd0

Re: [zfs-discuss] unable to import zfs pool

2009-06-25 Thread Daniel J. Priem
Could it be possible that your path changed? Just run "format", press CTRL+D, and check whether emcpower0c is now located somewhere else. Regards, Daniel Ketan writes: > Hi , I had a zfs pool which i exported before our SAN maintenance > and powerpath upgrade but now after the powerpath upgrade and > mainten
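A hedged sketch of that check; piping from echo makes format print its disk list and exit without entering the interactive menu (the grep pattern is only illustrative):

# echo | format | grep emcpower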

[zfs-discuss] unable to import zfs pool

2009-06-25 Thread Ketan
Hi, I had a zfs pool which I exported before our SAN maintenance and powerpath upgrade, but now, after the powerpath upgrade and maintenance, I'm unable to import the pool. It gives the following errors:

# zpool import
  pool: emcpool1
    id: 5596268873059055768
 state: UNAVAIL
status: One or more d
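If the PowerPath upgrade moved the device nodes, one hedged thing to try is pointing the import explicitly at the directory containing the emcpower devices (the directory and pool name below are assumptions):

# zpool import -d /dev/dsk emcpool1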

Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O throughput?

2009-06-25 Thread David Pacheco
Chookiex wrote: thank you ;) I mean that it would be faster in reading compressed data IF the write with compression is faster than non-compressed? Just like lzjb. Do you mean that it would be faster to read compressed data than uncompressed data, or it would be faster to read compressed dat

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-25 Thread Eric D. Mudama
On Wed, Jun 24 at 18:43, Bob Friesenhahn wrote: On Wed, 24 Jun 2009, Eric D. Mudama wrote: The main purpose for using SSDs with ZFS is to reduce latencies for synchronous writes required by network file service and databases. In the "available 5 months ago" category, the Intel X25-E will wr

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-25 Thread Bob Friesenhahn
On Thu, 25 Jun 2009, Ross wrote: But the unintended side effect of this is that ZFS's attempt to optimize writes will cause jerky read and write behaviour any time you have a large amount of writes going on, and when you should be pushing the disks to 100% usage you're never going to reach th

Re: [zfs-discuss] [storage-discuss] Backups

2009-06-25 Thread Greg
I think I am getting closer to ideas as to how to back this up. I will do as you said to back up the OS, take an image or something of that nature. I will take a full backup every one to three months of the virtual machines; however, the data that the VM is working with will be mounted separately

Re: [zfs-discuss] ZFS for iSCSI based SAN

2009-06-25 Thread Scott Meilicke
> if those servers are on physical boxes right now i'd do some perfmon > caps and add up the iops. Using perfmon to get a sense of what is required is a good idea. Use the 95th percentile to be conservative. The counters I have used are in the Physical disk object. Don't ignore the latency counter

Re: [zfs-discuss] Regular panics: BAD TRAP: type=e

2009-06-25 Thread Anton Lundin
I'm having the same problems. Approximately every 1-9 hours it crashes, and the backtrace is exactly the same as the one posted here. The machine ran b98 rock-solid for a long time... Anyone have a clue where to start?

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-25 Thread Bob Friesenhahn
On Wed, 24 Jun 2009, Lejun Zhu wrote: There is a bug in the database about reads blocked by writes which may be related: http://bugs.opensolaris.org/view_bug.do?bug_id=6471212 The symptom is sometimes reducing queue depth makes read perform better. This one certainly sounds promising. Sinc

Re: [zfs-discuss] auto snapshots 0.12

2009-06-25 Thread Tim Foster
Hi Ross, On Thu, 2009-06-25 at 04:24 -0700, Ross wrote: > Thanks Tim, do you know which build this is going to appear in? I've actually no idea - SUNWzfs-auto-snapshot gets delivered by the Desktop consolidation, not me. I'm checking in with them to see what the story is. That said, it probably

Re: [zfs-discuss] Is the PROPERTY compression will increase the ZFS I/O throughput?

2009-06-25 Thread Chookiex
Thank you ;) I mean, would it be faster to read compressed data IF the write with compression is faster than non-compressed? Just like lzjb. But I can't understand why the read performance is generally unaffected by compression. Because decompression (lzjb, gzip) is faster than compr

Re: [zfs-discuss] auto snapshots 0.12

2009-06-25 Thread Ross
Thanks Tim, do you know which build this is going to appear in?

Re: [zfs-discuss] ZFS write I/O stalls

2009-06-25 Thread Ross
> I am not sure how zfs would know the rate of the > underlying disk storage Easy: Is the buffer growing? :-) If the amount of data in the buffer is growing, you need to throttle back a bit until the disks catch up. Don't stop writes until the buffer is empty, just slow them down to match t

[zfs-discuss] how to convert zio->io_offset to disk block number?

2009-06-25 Thread zhihui Chen
I use the following dtrace script to trace the position of one file on zfs:

#!/usr/sbin/dtrace -qs

zio_done:entry
/((zio_t *)(arg0))->io_vd/
{
        zio = (zio_t *)arg0;
        printf("Offset:%x and Size:%x\n", zio->io_offset, zio->io_size);
        printf("vd:%x\n", (unsigned long)(zio->io_vd));
}
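A hedged sketch of driving such a script, assuming it is saved as ziotrace.d and the traced file lives on the pool of interest (the paths are illustrative):

# chmod +x ziotrace.d
# ./ziotrace.d &
# dd if=/testpool/testfile of=/dev/null bs=128k
(the dd read generates zio activity whose offsets the script then prints)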

[zfs-discuss] auto snapshots 0.12

2009-06-25 Thread Tim Foster
Hi all, Just a quick plug: the latest version of the ZFS Automatic Snapshots SMF service hit the hg repository yesterday. If you're using 0.11 or older, it's well worth upgrading to get the few bugfixes (especially if you're using CIFS - we use '_' instead of ':' in snapshot names now) More at: http

Re: [zfs-discuss] "zpoll status -x" output

2009-06-25 Thread Tomasz Kłoczko
> It might be easier to look for the pool status thusly
> zpool get health poolname
Correct me if I'm wrong, but "zpool get" is available only in some of the latest versions of OS and Solaris 10 (we are running some older versions of Solaris 10 on some boxes). Nevertheless, IMO "zpoll status -x" should
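For comparison, a hedged sketch of the two checks being discussed, with a hypothetical pool name; on releases that have it, zpool get health returns a single word that is easy to script against:

# zpool status -x
all pools are healthy
# zpool get health tank
NAME  PROPERTY  VALUE   SOURCE
tank  health    ONLINE  -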

Re: [zfs-discuss] Increase size of ZFS mirror

2009-06-25 Thread Ben
Thanks very much everyone. Victor, I did think about using VirtualBox, but I have a real machine and a supply of hard drives for a short time, so I'll test it out using that if I can. Scott, of course, at work we use three mirrors and it works very well, and has saved us on occasion where we have

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread Carson Gaspar
Miles Nordin wrote: There's also been talk of two tools, MegaCli and lsiutil, which are both binary only and exist for both Linux and Solaris, and I think are used only with the 1078 cards but maybe not. lsiutil works with LSI chips that use the Fusion-MPT interface (SCSI, SAS, and FC), inclu

Re: [zfs-discuss] x4500 resilvering spare taking forever?

2009-06-25 Thread Joe Kearney
> Yep, it also suffers from the bug that restarts resilvers when you take a
> snapshot. This was fixed in b94.
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6343667
> -- richard
Hats off to Richard for saving the day. This was exactly the issue. I shut off my automatic snap
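A hedged sketch of pausing the automatic snapshots while a resilver runs, assuming the SMF instance names commonly used by the zfs-auto-snapshot service (the FMRIs below are assumptions):

# svcadm disable svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
(re-enable them with svcadm enable once zpool status shows the resilver has completed)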

Re: [zfs-discuss] SPARC SATA, please.

2009-06-25 Thread James Lever
On 25/06/2009, at 5:16 AM, Miles Nordin wrote: and mpt is the 1068 driver, proprietary, works on x86 and SPARC. then there is also itmpt, the third-party-downloadable closed-source driver from LSI Logic, dunno much about it but someone here used it. I'm confused. Why do you say the mpt dr