Re: [zfs-discuss] Performance problem suggestions?

2011-05-10 Thread Don
> # dd if=/dev/zero of=/dcpool/nodedup/bigzerofile Ahh- I misunderstood your pool layout earlier. Now I see what you were doing. > People on this forum have seen and reported that adding a 100Mb file tanked their multiterabyte pool's performance, and removing the file boosted it back up. Sadly
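
A minimal sketch of the zero-fill reclaim trick this thread discusses (pool and file names are hypothetical); with compression enabled on the backing volume, the zeroed blocks collapse once the file is removed:

  # fill the pool's free space with zeros, then delete the file
  dd if=/dev/zero of=/dcpool/nodedup/bigzerofile bs=1024k
  rm /dcpool/nodedup/bigzerofile
  sync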

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Ray Van Dolson
On Tue, May 10, 2011 at 03:57:28PM -0700, Brandon High wrote: > On Tue, May 10, 2011 at 9:18 AM, Ray Van Dolson wrote: > > My question is -- is there a way to tune the MPT driver or even ZFS > > itself to be more/less aggressive on what it sees as a "failure" > > scenario? > > You didn't mention

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Brandon High
On Tue, May 10, 2011 at 9:18 AM, Ray Van Dolson wrote: > My question is -- is there a way to tune the MPT driver or even ZFS > itself to be more/less aggressive on what it sees as a "failure" > scenario? You didn't mention what drives you had attached, but I'm guessing they were normal "desktop"
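
Before tuning anything, it helps to see what the OS already recorded about the drive; a quick sketch using the stock FMA and iostat tools (output formats vary by release):

  fmdump -eV | tail -40   # raw error telemetry (ereports) from the disk path
  fmadm faulty            # anything FMA has already diagnosed as faulted
  iostat -En              # per-device soft/hard/transport error counters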

Re: [zfs-discuss] Performance problem suggestions?

2011-05-10 Thread Hung-ShengTsao (Lao Tsao) Ph.D.
It is my understanding that for fast writes you should consider a faster device (SSD) for the ZIL, and for fast reads a faster device (SSD) for the L2ARC. There have been many discussions that for virtualization (V12N) environments mirroring (RAID1) is better than raidz. On 5/10/2011 3:31 PM, Don wrote: I've been going through my iostat, zilstat, and other outputs all to no avail.
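
For reference, attaching dedicated SSDs for the intent log and read cache is a one-liner each (device names are hypothetical):

  zpool add tank log c4t0d0     # separate log (slog) accelerates synchronous writes
  zpool add tank cache c4t1d0   # L2ARC device extends the read cache beyond RAM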

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Ray Van Dolson
On Tue, May 10, 2011 at 02:42:40PM -0700, Jim Klimov wrote: > In a recent post "r-mexico" wrote that they had to parse system > messages and "manually" fail the drives on a similar, though > different, occasion: > > http://opensolaris.org/jive/message.jspa?messageID=515815#515815 Thanks Jim, good

Re: [zfs-discuss] Modify stmf_sbd_lu properties

2011-05-10 Thread Jim Dunham
Don, > Is it possible to modify the GUID associated with a ZFS volume imported into > STMF? > > To clarify- I have a ZFS volume I have imported into STMF and export via > iscsi. I have a number of snapshots of this volume. I need to temporarily go > back to an older snapshot without removing a

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Jim Klimov
In a recent post "r-mexico" wrote that they had to parse system messages and "manually" fail the drives on a similar, though different, occasion: http://opensolaris.org/jive/message.jspa?messageID=515815#515815 -- This message posted from opensolaris.org

Re: [zfs-discuss] Performance problem suggestions?

2011-05-10 Thread Jim Klimov
Well, as I wrote in other threads - I have a pool named "pool" on physical disks, and a compressed volume in this pool which I loopback-mount over iSCSI to make another pool named "dcpool". When files in "dcpool" are deleted, blocks are not zeroed out by current ZFS and they are still allocated
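
A rough sketch of the layering Jim describes — a compressed zvol exported over iSCSI and re-imported as a second pool (names, sizes, and the resulting device name are hypothetical; target/view configuration omitted):

  zfs create -V 500G -o compression=on pool/dcvol   # compressed backing volume
  sbdadm create-lu /dev/zvol/rdsk/pool/dcvol        # register it as a COMSTAR LU
  # ... export via iSCSI, then on the initiator side once the LUN appears:
  zpool create dcpool c5t600144F0XXXXXXXXd0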

[zfs-discuss] Modify stmf_sbd_lu properties

2011-05-10 Thread Don
Is it possible to modify the GUID associated with a ZFS volume imported into STMF? To clarify- I have a ZFS volume I have imported into STMF and export via iscsi. I have a number of snapshots of this volume. I need to temporarily go back to an older snapshot without removing all the more recent
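
One way to attack this, assuming stmfadm on your build accepts a guid property at LU creation time (verify against the stmfadm man page): free the GUID, clone the old snapshot, and register the clone under the same GUID. The GUID and dataset names below are hypothetical.

  stmfadm delete-lu 600144F0DEADBEEF0000000000000001   # frees the GUID; the zvol itself is kept
  zfs clone pool/vol@old pool/vol_rollback             # writable copy of the old snapshot
  stmfadm create-lu -p guid=600144F0DEADBEEF0000000000000001 /dev/zvol/rdsk/pool/vol_rollback
  # initiators now see the snapshot's data while newer snapshots stay intact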

Re: [zfs-discuss] Performance problem suggestions?

2011-05-10 Thread Don
I've been going through my iostat, zilstat, and other outputs all to no avail. None of my disks ever seem to show outrageous service times, the load on the box is never high, and if the darned thing is CPU bound- I'm not even sure where to look. "(traversing DDT blocks even if in memory, etc -
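
If dedup is in play, zdb can show how big the DDT actually is before guessing about memory pressure (a sketch; zdb output changes between builds):

  zdb -DD tank    # DDT histogram plus an estimate of its in-core size
  zdb -D tank     # shorter summary with dedup ratios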

[zfs-discuss] Old posts to zfs-discuss

2011-05-10 Thread Bill Rushmore
Sorry for the old posts that some of you are seeing on zfs-discuss. The link between Jive and mailman was broken, so I fixed that. However, once this was fixed, Jive started sending every single post from the zfs-discuss board on Jive to the mailing list. Quite a few posts were sent before I real

Re: [zfs-discuss] fuser vs. zfs

2011-05-10 Thread Tomas Ögren
On 10 May, 2011 - Tomas Ögren sent me these 0,9K bytes: > On 23 November, 2005 - Benjamin Lewis sent me these 3,0K bytes: > > > Hello, > > > > I'm running Solaris Express build 27a on an amd64 machine and > > fuser(1M) isn't behaving > > as I would expect for zfs filesystems. Various google and

Re: [zfs-discuss] fuser vs. zfs

2011-05-10 Thread Tomas Ögren
On 23 November, 2005 - Benjamin Lewis sent me these 3,0K bytes: > Hello, > > I'm running Solaris Express build 27a on an amd64 machine and > fuser(1M) isn't behaving > as I would expect for zfs filesystems. Various google and ... > #fuser -c / > /:[lots of other PIDs] 20617tm [others] 2041

Re: [zfs-discuss] DTrace IO provider and oracle

2011-05-10 Thread Jim Litchfield
I use this construct to get something better than "<none>": args[2]->fi_pathname != "<none>" ? args[2]->fi_pathname : args[1]->dev_pathname In the latest versions of Solaris 10, you'll see IOs not directly issued by the app show up as being owned by 'zpool-POOLNAME' where POOLNAME is the real name of
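
Jim's construct dropped into a complete one-liner (a sketch; run as root, io provider required):

  dtrace -n 'io:::start {
      @[execname,
        args[2]->fi_pathname != "<none>" ? args[2]->fi_pathname
                                         : args[1]->dev_pathname] = count();
  }'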

Re: [zfs-discuss] raidz DEGRADED state

2011-05-10 Thread Krzys
Ah, did not see your follow up. Thanks. Chris On Thu, 30 Nov 2006, Cindy Swearingen wrote: > Sorry, Bart is correct: > If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed

Re: [zfs-discuss] raidz DEGRADED state

2011-05-10 Thread Thomas Garner
So there is no current way to specify the creation of a 3 disk raid-z array with a known missing disk? On 12/5/06, David Bustos wrote: > Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500: > > I currently have a 400GB disk that is full of data on a linux system. > > If I buy 2 more disk
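
The usual workaround is to let a sparse file stand in for the missing disk and offline it immediately, leaving the pool DEGRADED but usable until the real disk arrives (sizes and device names are hypothetical):

  mkfile -n 400g /var/tmp/fakedisk                          # sparse file, sized like the real disks
  zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fakedisk
  zpool offline tank /var/tmp/fakedisk                      # pool is now DEGRADED but writable
  # later, after copying the data off the old 400GB disk:
  zpool replace tank /var/tmp/fakedisk c0t3d0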

Re: [zfs-discuss] ZFS Performance Question

2011-05-10 Thread Luke Lonergan
Robert, > I believe it's not solved yet but you may want to try with > latest nevada and see if there's a difference. It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express post build 47 I think. - Luke

Re: [zfs-discuss] DTrace IO provider and oracle

2011-05-10 Thread przemol...@poczta.fm
On Tue, Aug 08, 2006 at 11:33:28AM -0500, Tao Chen wrote: > On 8/8/06, przemol...@poczta.fm wrote: > > Hello, > > Solaris 10 GA + latest recommended patches: > > while running dtrace: > > bash-3.00# dtrace -n 'io:::start {@[execname, args[2]->fi_pathname] = count();}' > > ...

Re: [zfs-discuss] ZFS and Storage

2011-05-10 Thread przemol...@poczta.fm
On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote: > Hello przemolicc, > > Thursday, June 29, 2006, 8:01:26 AM, you wrote: > > ppf> On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote: > >> ppf> What I wanted to point out is the Al's example: he wrote about > >> damag

Re: [zfs-discuss] COW question

2011-05-10 Thread Francois Marcoux
przemol...@poczta.fm wrote: >On Fri, Jul 07, 2006 at 11:59:29AM +0800, Raymond Xiong wrote: > > >>It doesn't. Page 11 of the following slides illustrates how COW >>works in ZFS: >> >>http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf >> >>"Blocks containing active data are never over

Re: [zfs-discuss] cluster features

2011-05-10 Thread Joe Little
Well, here's my previous summary off list to different solaris folk (regarding NFS serving via ZFS and iSCSI): I want to use ZFS as a NAS with no bounds on the backing hardware (not restricted to one box's capacity). Thus, there are two options: FC SAN or iSCSI. In my case, I have multi-building c

[zfs-discuss] fuser vs. zfs

2011-05-10 Thread Benjamin Lewis
Hello, I'm running Solaris Express build 27a on an amd64 machine and fuser(1M) isn't behaving as I would expect for zfs filesystems. Various google and opensolaris.org searches didn't turn up anything on the subject, so I thought I'd ask the experts. The specific problem is that "fuser -c /some_
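
For comparison, the expected usage pattern looks like this (mount point and PID are hypothetical):

  fuser -c /some_zfs_fs    # list PIDs with files open under the mount point
  pfiles 20617             # then inspect one reported PID's open files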

Re: [zfs-discuss] primarycache=metadata seems to force behaviour of secondarycache=metadata

2011-05-10 Thread Brandon High
On Mon, May 9, 2011 at 2:54 PM, Tomas Ögren wrote: > Slightly off topic, but we had an IBM RS/6000 43P with a PowerPC 604e > cpu, which had about 60MB/s memory bandwidth (which is kind of bad for a > 332MHz cpu) and its disks could do 70-80MB/s or so.. in some other > machine.. It wasn't that lon

Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Chris Ridd
On 10 May 2011, at 16:44, Hung-Sheng Tsao (LaoTsao) Ph. D. wrote: > IMHO, zfs needs to run on all kinds of HW. T-series CMT servers have had hardware that can help sha calculation since the T1 days, but I did not see any work in ZFS to take advantage of it. That support would be in the crypto framework though, not ZFS per s

[zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Ray Van Dolson
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS arrays (Solaris 10 U9). The disk began throwing errors like this: May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0): May 5 04:33:44 dev-zfs4 mptsas_handle_e
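
The knob most often mentioned for this lives in /etc/system — shortening the sd target driver's 60-second command timeout (this assumes the disks attach through sd; treat the value as an example and test before deploying):

  * /etc/system: give up on a hung command after 10s instead of the default 60s
  set sd:sd_io_time = 10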

Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread C Bergström
On Tue, May 10, 2011 at 10:29 PM, Anatoly wrote: > Good day, > > I think ZFS can take advantage of using GPU for sha256 calculation, > encryption and maybe compression. Modern video cards, like the ATI HD 5xxx > or 6xxx series, can calculate sha256 50-100 times faster than a modern > 4-core CPU.

Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Hung-Sheng Tsao (LaoTsao) Ph. D.
IMHO, zfs needs to run on all kinds of HW. T-series CMT servers have had hardware that can help sha calculation since the T1 days, but I did not see any work in ZFS to take advantage of it. On 5/10/2011 11:29 AM, Anatoly wrote: Good day, I think ZFS can take advantage of using GPU for sha256 calculation, encryption and maybe

Re: [zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Krunal Desai
On Tue, May 10, 2011 at 11:29 AM, Anatoly wrote: > Good day, > > I think ZFS can take advantage of using GPU for sha256 calculation, > encryption and maybe compression. Modern video cards, like the ATI HD 5xxx > or 6xxx series, can calculate sha256 50-100 times faster than a modern > 4-core CPU.

[zfs-discuss] GPU acceleration of ZFS

2011-05-10 Thread Anatoly
Good day, I think ZFS can take advantage of using GPU for sha256 calculation, encryption and maybe compression. Modern video cards, like the ATI HD 5xxx or 6xxx series, can calculate sha256 50-100 times faster than a modern 4-core CPU. The kgpu project for Linux shows nice results. 'zfs scrub'
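
For context, sha256 is only in the I/O path when it is selected as the checksum or implied by dedup; a quick sketch (pool/dataset names are hypothetical):

  zfs set checksum=sha256 tank/data   # default is fletcher4, which is far cheaper
  zfs set dedup=on tank/data          # dedup requires a strong hash, i.e. sha256
  zpool scrub tank                    # scrub recomputes and verifies every checksum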

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-10 Thread Frank Van Damme
On 09-05-11 15:42, Edward Ned Harvey wrote: > > in my previous post my arc_meta_used was bigger than my arc_meta_limit (by about 50%) I have the same thing. But as I sit here and run more and more extensive tests on it ... it seems like arc_meta_limit is sort of a soft limit. Or it
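
The counters under discussion are visible via kstat, and the limit can be raised in /etc/system (tunable names as on Solaris 10 / OpenSolaris-era builds — verify on yours):

  kstat -p zfs:0:arcstats:arc_meta_used zfs:0:arcstats:arc_meta_limit
  # /etc/system equivalent, e.g. a 4 GB metadata limit:
  #   set zfs:zfs_arc_meta_limit = 0x100000000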