Awesome initiative.
One thing ZFS is missing is the ability to select which files to compress.
Even a simple heuristic like "don't compress mp3, avi, zip, tar files" would
make a tremendous difference in which data gets compressed on consumer
computers. I don't know if such a heuristic is planned o
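A minimal sketch of such an extension-based skip heuristic (the extension list mirrors the ones named above and is illustrative; a real implementation would likely also check file magic numbers):

```python
# Formats in this list are typically already compressed, so recompressing
# them wastes CPU for little gain. (.tar itself is not compressed, but it
# appears in the list from the post above.)
SKIP_EXTENSIONS = (".mp3", ".avi", ".zip", ".tar")

def should_compress(filename: str) -> bool:
    """Return False for file types assumed to be already compressed."""
    return not filename.lower().endswith(SKIP_EXTENSIONS)
```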
On Jul 7, 2007, at 06:14, Orvar Korvar wrote:
When I copy that file from ZFS to /dev/null I get this output:
real    0m0.025s
user    0m0.002s
sys     0m0.007s
which can't be correct. Is it wrong of me to use "time cp fil fil2"
when measuring disk performance?
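A 25 ms "copy" of a multi-gigabyte file almost certainly means the read was served from the page cache, or the write only reached cache and was never flushed. A hedged Python sketch of a fairer measurement (function name and buffer size are my own), which forces data to disk before stopping the clock:

```python
import os
import time

def copy_throughput(src, dst, bufsize=1 << 20):
    """Copy src to dst and return (seconds, MB/s).

    fsync() before stopping the clock: without it, the copy may have
    reached only the page cache, making the timing wildly optimistic.
    """
    start = time.monotonic()
    total = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(bufsize)
            if not chunk:
                break
            fout.write(chunk)
            total += len(chunk)
        fout.flush()
        os.fsync(fout.fileno())  # force dirty pages to disk
    elapsed = time.monotonic() - start
    return elapsed, total / elapsed / 1e6
```

Note this still doesn't defeat a cached *read* of the source; for that, use a file larger than RAM or remount/reboot between runs.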
well you're reading and writin
ZFS is a 128 bit file system. The performance on your 32-bit CPU will
not be that good. ZFS was designed for a 64-bit CPU. Another GB of RAM
might help. There are a bunch of posts in the archive about 32-bit CPUs
and performance.
-Sean
Orvar Korvar wrote:
> I am using Solaris Express Communi
one other thing... the checksums for all files to send *could* be checked first
in batch, and known-unique blocks prioritized and sent first; the possibly
duplicative data would be sent afterwards to be verified as a dupe, thereby
decreasing the possible data loss for the backup window to levels equivol
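The batching idea above could look roughly like this in Python. SHA-256 stands in for whatever block checksum the sender actually uses, and duplication is checked within the batch itself as a simple stand-in for the receiver-side lookup:

```python
import hashlib
from collections import Counter

def plan_send_order(blocks):
    """Checksum all blocks up front, then send blocks whose checksum is
    unique first; blocks sharing a checksum (possible dupes that still
    need bitwise verification) are deferred to the end of the stream."""
    sums = [hashlib.sha256(b).digest() for b in blocks]
    counts = Counter(sums)
    unique = [b for b, s in zip(blocks, sums) if counts[s] == 1]
    maybe_dupes = [b for b, s in zip(blocks, sums) if counts[s] > 1]
    return unique + maybe_dupes
```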
agreed.
while a bitwise check is the only assured way to determine whether two blocks
are duplicates, if the check were done in a streaming fashion as you suggest,
the performance, though heavily impacted compared to skipping the check, would
be more than bearable if used within an environment with large known levels o
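The two-step check being discussed here, a cheap checksum filter first and a byte-for-byte comparison only on a match, is easy to sketch (SHA-256 is just an illustrative checksum):

```python
import hashlib

def same_block(a: bytes, b: bytes) -> bool:
    """Checksum filter first; only on a match pay for the byte-for-byte
    comparison that actually proves the two blocks are duplicates."""
    if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
        return False  # checksums differ: definitely not duplicates
    return a == b     # checksums match: confirm bitwise
```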
On 7/7/07, Neil Perrin <[EMAIL PROTECTED]> wrote:
> Cyril,
>
> I wrote this case and implemented the project. My problem was
> that I didn't know what policy (if any) Sun has about publishing
> ARC cases, and a mail log with a gazillion email addresses.
>
> I did receive an answer to this in t
Cyril,
I wrote this case and implemented the project. My problem was
that I didn't know what policy (if any) Sun has about publishing
ARC cases, and a mail log with a gazillion email addresses.
I did receive an answer to this in the form:
http://www.opensolaris.org/os/community/arc/arc-faq/
Scott Lovenberg wrote:
> First Post!
> Sorry, I had to get that out of the way to break the ice...
Welcome!
> I was wondering if it makes sense to zone ZFS pools by disk slice, and if it
> makes a difference with RAIDZ. As I'm sure we're all aware, the end of a
> drive is half as fast as the b
> When tuning recordsize for things like databases, we
> try to recommend
> that the customer's recordsize match the I/O size of
> the database
> record.
On this filesystem I have:
- file links, and they are rather static
- small files (about 8 kB) that keep changing
- big files ( 1MB - 20 MB
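For a mix like this, one option is separate datasets per workload class, each with a recordsize near its typical I/O size. A hedged sketch (pool and dataset names are illustrative):

```shell
# Illustrative layout: one dataset per workload class.
zfs create -o recordsize=8K tank/small    # ~8 kB files that keep changing
zfs create -o recordsize=128K tank/big    # 1-20 MB files, mostly sequential
```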
nice idea! :)
>We plan to start with the development of a fast implementation of a Burrows
>Wheeler Transform based algorithm (BWT).
why not start with LZO first? It's already in zfs-fuse on Linux, and it looks
like it's just "in between lzjb and gzip" in terms of performance and
compressi
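For comparison, both ends of the current trade-off are already selectable per dataset (pool and dataset names here are illustrative):

```shell
# lzjb: fast, modest ratio (the default when compression=on);
# gzip-1 through gzip-9: better ratio for more CPU.
zfs set compression=lzjb tank/fast
zfs set compression=gzip-6 tank/archive
```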
On 7/7/07, Cyril Plisko <[EMAIL PROTECTED]> wrote:
> Hello,
>
> This is a third request to open the materials of the PSARC case
> 2007/171 ZFS Separate Intent Log
> I am not sure why two previous requests were completely ignored
> (even when seconded by another community member).
> In any case that
Hello,
This is a third request to open the materials of the PSARC case
2007/171 ZFS Separate Intent Log
I am not sure why two previous requests were completely ignored
(even when seconded by another community member).
In any case that is absolutely unacceptable practice.
On 6/30/07, Cyril Plisko
I am using Solaris Express Community build 67 installed on a 40GB hard drive
(UFS filesystem on Solaris), dual boot with Windows XP. I have a ZFS raid with
4 Samsung drives. It is a [EMAIL PROTECTED] and 1GB RAM.
When I copy a 1.3G file from ZFS pool to ZFS pool the command "time cp file file2"