The compress on-write behavior is what I expected, but I wanted to validate
that for sure. Thank you.
On the second question, the obvious answer is that I'm doing work where the total size of the files tells me how much of the work has been completed, and I don't have any other feedback that tells me how far along it is.
I'm about to enable compression on my ZFS filesystem, as most of the data I intend to store should be highly compressible.
Before I do so, I'd like to ask a couple of newbie questions.
First: if you were running a ZFS filesystem without compression, wrote some files to it, and then turned compression on, would those existing files be compressed, or only data written from that point forward?
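For reference, compression is a per-dataset property and, per the compress-on-write behavior noted elsewhere in the thread, it only applies to blocks written after it is turned on; existing files stay uncompressed until they are rewritten. The dataset name here is just a placeholder:

    zfs set compression=on tank/data
    zfs get compression,compressratio tank/data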
At this point, ZFS is performing admirably with the Areca card. Also, that card is only 8-port, while the Areca controllers I have are 12-port; my chassis has 24 SATA bays, so being able to cover all the drives with two controllers is preferable.
In addition, the driver for the Areca controllers is being…
I have to come back and face the shame; this was a total newbie mistake on my part.
I followed the "ZFS Shortcuts for Noobs" guide from BigAdmin:
http://wikis.sun.com/display/BigAdmin/ZFS+Shortcuts+for+Noobs
What that had me doing was creating a UFS filesystem on top of a ZFS volume, so I was using UFS rather than a native ZFS filesystem.
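To make the mistake concrete: that recipe amounts to carving a ZFS volume (zvol) out of the pool and putting UFS on top of it, instead of simply creating a ZFS filesystem. Roughly, with pool name, size, and mountpoint only as examples:

    # UFS layered on a zvol (what the guide led to)
    zfs create -V 100g tank/vol
    newfs /dev/zvol/rdsk/tank/vol
    mount /dev/zvol/dsk/tank/vol /export/data

    # A native ZFS filesystem (what was actually wanted)
    zfs create -o mountpoint=/export/data tank/data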
That was part of my testing of the RAID controller settings; turning off the controller cache dropped me to 20 MB/sec read and write under raidz2/ZFS.
--Ross
Okay, after doing some testing, it appears that the issue is on the ZFS side. I fiddled around for a while with options on the Areca card and never got any better performance results than my first test. So, my best out of the raidz2 is 42 MB/s write and 43 MB/s read. I also tried turning off crc's (checksums)…
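For anyone comparing numbers, a crude sequential test along these lines is one common way to get rough read/write MB/s figures; the file location and size are placeholders, and zero-filled data isn't representative if compression is enabled, so treat it as a sanity check rather than a real benchmark:

    # sequential write: 8 GB of zeros in 1 MB records; divide bytes by elapsed time
    ptime dd if=/dev/zero of=/tank/test/bigfile bs=1024k count=8192

    # sequential read of the same file back
    ptime dd if=/tank/test/bigfile of=/dev/null bs=1024k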
Well, I just got in a system that I intend to be a BIG file server. Background: I work for a SAN startup, and we're expecting to collect 30-60 terabytes of Fibre Channel traces in our first year. The purpose of this system is to be a large repository for those traces, with statistical analysis run against them.