I've been saving up a few wishlist items for zfs. Time to share.
1. A verbose (-v) option for the zfs command line.
In particular, zfs sometimes takes a while to return from zfs snapshot -r
tank/<fs>@<snap> in the case where there are a great many iSCSI-shared
volumes underneath. A little pr
Dedicate some CPU to the task. Create a psrset and bind the ftp
daemon to it.
If that works, then add a few of the read threads as well, as many as fit
the requirements.
-r
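A rough sketch of that psrset suggestion (my illustration, not from the thread; the
CPU IDs, the assumption that the new set gets ID 1, and pgrep'ing for in.ftpd are
all placeholders to adjust to the box):

  # Carve two spare CPUs (IDs 2 and 3, picked arbitrarily) into a processor set.
  # psrset reports the ID of the set it creates, e.g. "created processor set 1".
  psrset -c 2 3
  # Bind the running ftp daemon processes to that set so they no longer
  # compete with the read workload for the remaining CPUs.
  for pid in $(pgrep in.ftpd); do
      psrset -b 1 $pid
  done

If the binding helps, the same psrset -b can later be pointed at a few of the
reader processes as well.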
On 25 Jun 07, at 15:00, Paul van der Zwan wrote:
On 25 Jun 2007, at 14:37, [EMAIL PROTECTED] wrote:
On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote:
On June 25, 2007 1:02:38 PM -0700 Erik Trimble <[EMAIL PROTECTED]> wrote:
algorithms. I think (as Casper said) that, should you need to, you use SHA
to weed out the cases where the checksums are different (since that
definitively indicates they are different), then do a bitwise compare on
any that match.
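A minimal sketch of that weed-out-then-verify logic (my illustration only; the
file names are invented, and Solaris digest(1) with sha1 stands in for whatever
hash the dedup code would actually use):

  #!/bin/sh
  A=/tank/candidates/block-a
  B=/tank/candidates/block-b

  # Differing hashes definitively mean differing contents: no compare needed.
  if [ "$(digest -a sha1 $A)" != "$(digest -a sha1 $B)" ]; then
      echo "contents differ (hash mismatch); not a dedup candidate"
      exit 0
  fi

  # Hashes match: confirm byte-for-byte before treating the two as one.
  if cmp -s $A $B; then
      echo "bitwise identical; safe to deduplicate"
  else
      echo "hash collision: hashes match but contents differ"
  fi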
Bill Sommerfeld wrote:
[This is version 2. the first one escaped early by mistake]
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
The most common non-proprietary hash calc for file-level deduplication seems
to be the combination of the SHA1 and MD5 together. Collisions have been
shown to exist in MD5 and theorized to exist in SHA1 by extrapolation, but
[This is version 2. the first one escaped early by mistake]
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
> The most common non-proprietary hash calc for file-level deduplication seems
> to be the combination of the SHA1 and MD5 together. Collisions have been
> shown to exist in MD5 and theorized to exist in SHA1 by extrapolation, but
>I wouldn't de-duplicate without actually verifying that two blocks were
>actually bitwise identical.
Absolutely not, indeed.
But the nice property of hashes is that if the hashes don't match then
the inputs do not either.
I.e., the likelihood of having to do a full bitwise compare is vanishingly small.
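To put a rough number on that (my arithmetic, not from the thread, assuming an
effectively uniform 256-bit checksum such as ZFS's optional sha256): for any one
pair of differing blocks the chance of equal checksums is 2^-256, and a
birthday-style union bound over a pool of n blocks gives

  Pr[any accidental checksum match among n blocks] <= n(n-1)/2 / 2^256 ~ n^2 / 2^257

so even with n = 2^35 blocks (about 4 PiB at 128 KB per block) the bound is 2^-187.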
On Sun, 2007-06-24 at 16:58 -0700, dave johnson wrote:
> The most common non-proprietary hash calc for file-level deduplication seems
> to be the combination of the SHA1 and MD5 together. Collisions have been
> shown to exist in MD5 and theorized to exist in SHA1 by extrapolation, but
> the prob
I've spent some time searching, and I apologize if I've missed this somewhere,
but in testing ZVOL write performance I cannot see any noticeable difference
between opening a ZVOL with or without O_DSYNC.
Does the O_DSYNC flag have any actual influence on ZVOL writes?
For ZVOLs that I have ope
Hello,
I ran across this entry on the following page:
ZFS hotplug - PSARC/2007/197
http://www.opensolaris.org/os/community/on/flag-days/66-70/
Since Build 68 hasn't closed yet, I assume this would currently be in
a nightly build? If so, has anyone had a chance to play with this yet
and see how
I'm seeing some odd behaviour with ZFS and a reasonably heavy workload.
I'm currently on contract to BBC R&D to build what is effectively a
network-based personal video recorder. To that end, I have a rather large
collection of discs, arranged very poorly as it's something of a hack at
present, an
> You've tripped over a variant of:
>
> 6335095 Double-slash on /. pool mount points
>
> - Eric
>
oh well .. no points for originality then I guess :-)
Thanks
You've tripped over a variant of:
6335095 Double-slash on /. pool mount points
- Eric
On Mon, Jun 25, 2007 at 02:11:33AM -0400, Dennis Clarke wrote:
>
> Not sure if this has been reported or not.
>
> This is fairly minor but slightly annoying.
>
> After fresh install of snv_64a I run zpool im
What is the controller setup going to look like for the 30 drives? Is it going
to be Fibre Channel, SAS, etc., and what will be the controller-to-disk ratio?
~Bryan
Thanks, Roch! Much appreciated knowing what the problem is and that a
fix is in a forthcoming release.
Thomas
On 6/25/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
Sorry about that; looks like you've hit this:
6546683 marvell88sx driver misses wakeup for mv_empty_cv
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683
Thanks for the info, Eric and Eric.
Outstanding! Wow, that was a ka-winky-dink in timing. This will clear up a lot
of problems for my customers in HPC environments and in some of the SAN
environments.
Thanks a lot for the info. I'll keep my eyes open.
> FreeBSD plays it safe too. It's just that UFS, and other file systems on
> FreeBSD, understand write caches and flush at appropriate times.
Do you have something to cite w.r.t. UFS here? Because as far as I know,
that is not correct. FreeBSD shipped with write caching turned off by
default for
On Tue, 19 Jun 2007, John Brewer wrote:
> bash-3.00# zpool import
> pool: zones
> id: 4567711835620380868
> state: ONLINE
> status: The pool is formatted using an older on-disk version.
> action: The pool can be imported using its name or numeric identifier, though
> some features will not be available without an explicit 'zpool upgrade'.
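For reference, the usual follow-up to that output (pool name taken from the
listing above; upgrading is optional and one-way):

  zpool import zones     # import by name, or by the numeric id shown
  zpool upgrade -v       # list the on-disk versions this build supports
  zpool upgrade zones    # raise the pool to the newest version, if desired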
> On Mon, Jun 25, 2007 at 02:34:21AM -0400, Dennis Clarke wrote:
note that it was well after 2 AM for me .. half blind asleep
that's my excuse .. I'm sticking to it. :-)
>>
>> > in /usr/src/cmd/zpool/zpool_main.c :
>> >
>>
>> at line 680 forwards we can probably check for this scenario
On 25 Jun 2007, at 14:37, [EMAIL PROTECTED] wrote:
On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote:
I'm testing an X4500 where we need to send over 600MB/s over the
network.
This is no problem, I get about 700MB/s over a single 10G
interface.
Problem is the box also needs to accept
>
>On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote:
>
>>
>>> I'm testing an X4500 where we need to send over 600MB/s over the
>>> network.
>>> This is no problem, I get about 700MB/s over a single 10G interface.
>>> Problem is the box also needs to accept incoming data at 100MB/s.
>>> If I do a
On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote:
I'm testing an X4500 where we need to send over 600MB/s over the
network.
This is no problem, I get about 700MB/s over a single 10G interface.
Problem is the box also needs to accept incoming data at 100MB/s.
If I do a simple test ftp-ing fil
>I'm testing an X4500 where we need to send over 600MB/s over the
>network.
>This is no problem, I get about 700MB/s over a single 10G interface.
>Problem is the box also needs to accept incoming data at 100MB/s.
>If I do a simple test ftp-ing files into the same filesystem I see
>the FTP being
I think the problem is a timing one. Something must be attempting to
use the in-kernel API to /dev/random sooner with ZFS boot than with UFS
boot. We need some boot-time DTrace output to find out who is
attempting to call any of the APIs in misc/kcf - particularly the random
provider ones.
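One way to collect that boot-time trace might be DTrace anonymous tracing (a
sketch only; random_get_bytes()/random_get_pseudo_bytes() are my guess at the
interesting random-provider entry points, and whether anonymous tracing starts
early enough for this case is an open question):

  # Stage an anonymous enabling so the fbt probes fire during early boot,
  # recording the kernel stack of whoever calls into the random provider.
  dtrace -A -n 'fbt::random_get_bytes:entry,
                fbt::random_get_pseudo_bytes:entry
                { stack(); }'
  # Reboot the ZFS-root box, then claim and print the anonymous data:
  dtrace -ae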
Sorry about that; looks like you've hit this:
6546683 marvell88sx driver misses wakeup for mv_empty_cv
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6546683
Fixed in snv_64.
-r
Thomas Garner writes:
> > We have seen this behavior, but it appears to be entirely re
I'm testing an X4500 where we need to send over 600MB/s over the
network.
This is no problem, I get about 700MB/s over a single 10G interface.
Problem is the box also needs to accept incoming data at 100MB/s.
If I do a simple test ftp-ing files into the same filesystem I see
the FTP being limite
On 22 Jun 2007, at 19:35, Victor Latushkin wrote:
Hi,
Recently PC Magazine Russian Edition published an article about ZFS,
titled in Russian
ZFS - Новый взгляд на файловые системы
or in English
ZFS - A New Look at File Systems
http://www.pcmag.ru/solutions/detail.php?ID=9141
There's already so