[zfs-discuss] RealNeel : ZFS and DB performance

2007-02-09 Thread Roch - PAE

It's just a matter of time before ZFS overtakes UFS/DIO
for DB loads. See Neel's new blog entry:

http://blogs.sun.com/realneel/entry/zfs_and_databases_time_for

-r



Re: [zfs-discuss] Peculiar behavior of snapshot after zfs receive

2007-02-09 Thread Trevor Watson

Thanks Robert, that did the trick for me!

Robert Milkowski wrote:

Hello Wade,

Thursday, February 8, 2007, 8:00:40 PM, you wrote:

TW> Am I using send/recv incorrectly or is there something else
TW> going on here that I am missing?


It's a known bug.

Unmount and roll back the file system on host 2. You should then see
0 used space on the snapshot, and it should work.


WSfc> Bug ID?  Is it related to atime changes?

It has to do with the delete queue being processed when the file system
is mounted.


The bug id is: 6343779
http://bugs.opensolaris.org/view_bug.do?bug_id=6343779
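
For reference, a minimal sketch of the workaround Robert describes,
assuming the received file system is tank/backup on host 2 (the pool,
dataset, and snapshot names here are hypothetical):

    # On host 2: unmount the received file system, then roll it back
    # to the received snapshot so the delete-queue processing is undone.
    zfs umount tank/backup
    zfs rollback tank/backup@snap1
    # The snapshot should now show 0 used space, and a subsequent
    # incremental zfs send | zfs recv should apply cleanly.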







[zfs-discuss] Re: ZFS multi-threading

2007-02-09 Thread Reuven Kaswin
The experiment was on a V240. Throughput wasn't the issue in our test; CPU
utilization seemed to drop by approximately 50% after turning checksums off.
The concern was potentially running out of CPU horsepower to support multiple
parallel sequential writes.
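
For anyone reproducing this, a sketch of the toggle involved (the pool
name tank is hypothetical; checksums are on by default):

    # Disable block checksumming on the pool's datasets for the test run.
    zfs set checksum=off tank
    # ...run the sequential write workload and watch CPU utilization...
    # Re-enable checksums afterwards; running without them trades
    # reliability for CPU cycles, as noted in this thread.
    zfs set checksum=on tank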
 
 


[zfs-discuss] Re: ZFS multi-threading

2007-02-09 Thread Reuven Kaswin
Thanks for that info. I validated this with a simple experiment on a
Niagara machine: 'mpstat' showed that no more than 2-3 threads were
being saturated by my large-block sequential write test.
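
A sketch of that kind of check (the dataset path /tank is hypothetical):

    # Start one large-block sequential write stream in the background...
    dd if=/dev/zero of=/tank/bigfile bs=1024k count=8192 &
    # ...and watch per-hardware-thread utilization while it runs.
    mpstat 5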
 
 


Re: [zfs-discuss] ZFS multi-threading

2007-02-09 Thread Carisdad
I've seen very good performance streaming large files to ZFS on a
T2000.  We have been looking at using the T2000 as a disk storage unit
for backups, and I've been able to push over 500MB/s to the disks.  The
setup is an EMC Clariion CX3 with 84 500GB SATA drives, connected at
4Gbps all the way to the disk shelves.  The 84 drives are presented as
raw LUNs to the T2000 -- no HW RAID enabled on the Clariion.  The
problem we've seen comes when enabling compression, as that is
single-threaded per zpool: it drops our throughput to 12-15MB/s per pool.

This is bug ID 6460622; the fix is apparently set to be put back into
Nevada fairly soon.
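
For reference, a sketch of the comparison (the pool/dataset and file
names are hypothetical):

    # Enable compression on the backup dataset (it is off by default).
    zfs set compression=on backup/dumps
    # Time a representative write; repeat with compression=off to
    # compare throughput. Avoid /dev/zero here, since all-zero data
    # compresses trivially and inflates the numbers.
    time cp /var/tmp/sample-backup.tar /backup/dumps/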


-Andy

Reuven Kaswin wrote:

With the CPU overhead imposed by ZFS's block checksumming, the CPU was
heavily loaded during a large sequential write test that I ran. Turning off
checksums greatly reduced the CPU load. Obviously, this trades reliability
for CPU cycles.

Would the logic behind ZFS take full advantage of a heavily multicored system,
such as the Sun Niagara platform? Would it utilize all 32 concurrent
threads for generating its checksums? Has anyone compared ZFS on a Sun Tx000
to a 2-4 thread x64 machine?
 
 


Re: [zfs-discuss] Re: ZFS multi-threading

2007-02-09 Thread Tomas Ögren
On 09 February, 2007 - Reuven Kaswin sent me these 0,4K bytes:

> Thanks for that info. I validated this with a simple experiment on a
> Niagara machine: 'mpstat' showed that no more than 2-3 threads were
> being saturated by my large-block sequential write test.

And on, say, 32 parallel writes?
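
A quick way to try that (a sketch; the dataset path /tank is hypothetical):

    # Launch 32 concurrent sequential writers, then watch whether the
    # work spreads across the Niagara's hardware threads.
    i=1
    while [ $i -le 32 ]; do
        dd if=/dev/zero of=/tank/f$i bs=1024k count=1024 &
        i=`expr $i + 1`
    done
    mpstat 5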

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] ENOSPC on full FS (was: Meta data corruptions on ZFS.)

2007-02-09 Thread Matthew Ahrens

dudekula mastan wrote:


Hi All,

In my test setup, I have one zpool of size 1000 MB.

On this zpool, my application writes 100 files, each of size 10 MB.

The first 96 files were written successfully without any problem.

But the 97th file was not written completely; only 5 MB were written
(the return value of the write() call).

Since it was a short write, my application tried to truncate the file to
5 MB, but ftruncate() failed with an error saying there is no space on
the device.


Try removing one of the larger files.  Alternatively, upgrade to a more
recent version of Solaris Express / Nevada / OpenSolaris, where this
problem is much less severe.
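
A sketch of that workaround, with hypothetical mount point and file
names; note that 96 x 10 MB = 960 MB already nearly fills a 1000 MB pool
once metadata overhead is counted:

    # Confirm the pool really is out of space.
    zfs list testpool
    # Free a full 10 MB file first; with copy-on-write, space may need
    # to be freed before the truncate of the short file can succeed.
    rm /testpool/file042
    # ...then retry the application's ftruncate()/cleanup.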


--matt

ps. subject changed, not sure what this had to do with corruption.


Re: [zfs-discuss] Read Only Zpool: ZFS and Replication

2007-02-09 Thread Matthew Ahrens

Ben Rockwood wrote:

What I really want is a zpool on node1 open and writable (production
storage) and replicated to node2, where it's open for read-only
access (standby storage).


We intend to solve this problem by using zfs send/recv.  You can script
up a "poor man's" send/recv solution today, but we're working on making
it better.
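
A minimal sketch of such a "poor man's" replication loop, run
periodically from node1 (the host and dataset names are hypothetical,
exact zfs recv flags vary by release, and an initial full send of
tank/data@prev to node2 is assumed):

    #!/bin/sh
    # Take a new snapshot and send the delta since the last run.
    zfs snapshot tank/data@cur
    zfs send -i tank/data@prev tank/data@cur | \
        ssh node2 zfs recv -F tank/data
    # Advance the baseline snapshot for the next incremental.
    zfs destroy tank/data@prev
    zfs rename tank/data@cur tank/data@prev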


--matt