It's just a matter of time before ZFS overtakes UFS/DIO
for DB loads. See Neel's new blog entry:
http://blogs.sun.com/realneel/entry/zfs_and_databases_time_for
-r
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolari
Thanks Robert, that did the trick for me!
Robert Milkowski wrote:
Hello Wade,
Thursday, February 8, 2007, 8:00:40 PM, you wrote:
TW> Am I using send/recv incorrectly or is there something else
TW> going on here that I am missing?
It's a known bug.
umount and rollback file system on host
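A minimal sketch of that workaround, assuming hypothetical dataset and snapshot names (tank/backup, @last-received) that you would replace with your own:

```shell
# On the receiving host: unmount the dataset, then roll it back to the
# last successfully received snapshot before applying the next
# incremental stream. Names below are placeholders.
zfs unmount tank/backup
zfs rollback -r tank/backup@last-received
```

After the rollback, the incremental `zfs recv` should no longer complain that the destination has been modified.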
The experiment was on a V240. Throughput wasn't the issue in our test; CPU
utilization seemed to drop by approx. 50% after turning checksum off. The
concern was in potentially running out of CPU horsepower to support multiple
parallel sequential writes.
Thanks for that info. I validated it with a simple experiment on a Niagara
machine: watching 'mpstat' showed that no more than 2-3 threads were being
saturated by my large-block sequential write test.
I've seen very good performance on streaming large files to ZFS on a
T2000. We have been looking at using the T2000 as a disk storage unit
for backups. I've been able to push over 500MB/s to the disks. Setup is
EMC Clariion CX3 with 84 500GB SATA drives connected w/ 4Gbps all the
way to the d
On 09 February, 2007 - Reuven Kaswin sent me these 0,4K bytes:
> Thanks for that info. I validated with a simple experiment on a
> Niagara machine, by viewing 'mpstat' that no more than 2-3 threads
> were being saturated by my large block sequential write test.
And on, say, 32 parallel writes?
/T
dudekula mastan wrote:
Hi All,
In my test set up, I have one zpool of size 1000M bytes.
On this zpool, my application writes 100 files, each of size 10 MB.
The first 96 files were written successfully without any problem.
But the 97th file was not written successfully; only 5 MB were written (th
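The arithmetic here is suggestive: the message is truncated, but a plausible explanation is simple pool exhaustion, since 100 files of 10 MB would exactly equal the 1000 MB pool, and ZFS metadata consumes some of that space. A quick back-of-the-envelope check:

```shell
# Assumption (not stated in the thread): the failure is plain pool
# exhaustion. 96 full files plus the partial 97th roughly fill the pool
# once metadata overhead is accounted for.
written=$((96 * 10 + 5))   # MB on disk when the write failed
pool=1000                  # MB total pool size
echo "written ${written} MB of ${pool} MB"
```

That leaves only about 35 MB of nominal headroom, which ZFS metadata and copy-on-write overhead can easily consume.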
Ben Rockwood wrote:
What I really want is a Zpool on node1 open and writable (production
storage), replicated to node2 where it's open for read-only
access (standby storage).
We intend to solve this problem by using zfs send/recv. You can script
up a "poor man's" send/recv solution today
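A sketch of such a "poor man's" replication script, under the assumption of placeholder pool, dataset, and host names (tank/prod, tank/standby, node2) and a state file tracking the last replicated snapshot:

```shell
#!/bin/sh
# Hypothetical send/recv replication loop; all names are placeholders.
SRC=tank/prod
DST=tank/standby
HOST=node2
STATE=/var/run/last-repl-snap
NOW=snap-$(date +%Y%m%d%H%M%S)

# Take a new snapshot of the production dataset.
zfs snapshot ${SRC}@${NOW}

if [ -f ${STATE} ]; then
  # Incremental send since the last replicated snapshot.
  LAST=$(cat ${STATE})
  zfs send -i ${SRC}@${LAST} ${SRC}@${NOW} | ssh ${HOST} zfs recv -F ${DST}
else
  # First run: full send.
  zfs send ${SRC}@${NOW} | ssh ${HOST} zfs recv ${DST}
fi

echo ${NOW} > ${STATE}
```

The `-F` on the receive forces a rollback of the standby dataset to the last received snapshot, which keeps a read-only standby consistent even if something touched it between runs. Run the script from cron at whatever replication interval you can tolerate losing.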