[zfs-discuss] dedupratio riddle

2010-03-16 Thread Paul van der Zwan
On OpenSolaris build 134, upgraded from older versions, I have an rpool on which I had switched on dedup for a few weeks. After that I switched it back off. Now the dedup ratio seems stuck at a value of 1.68. Even when I copy more than 90 GB of data it still remains at 1.68. Any ideas?
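
If the dedup ratio is computed only over blocks tracked in the dedup table (DDT), data written after dedup was switched off would not move it at all. A quick way to compare what the pool reports with the table itself, assuming the pool is rpool as above:

$ zfs get dedup rpool                 # is dedup still enabled for new writes?
$ zpool get dedupratio rpool          # the ratio the pool reports
$ zdb -DD rpool                       # dump dedup table (DDT) statistics directly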

Re: [zfs-discuss] dedupratio riddle

2010-03-17 Thread Paul van der Zwan
On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: > Someone correct me if I'm wrong, but it could just be a coincidence. That is, > perhaps the data that you copied happens to lead to a dedup ratio relative to > the data that's already on there. You could test this out by copying a few > gig

Re: [zfs-discuss] dedupratio riddle

2010-03-17 Thread Paul van der Zwan
On 17 mrt 2010, at 10:56, zfs ml wrote: > On 3/17/10 1:21 AM, Paul van der Zwan wrote: >> >> On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: >> >>> Someone correct me if I'm wrong, but it could just be a coincidence. That >>> is, perhaps the dat

Re: [zfs-discuss] dedupratio riddle

2010-03-18 Thread Paul van der Zwan
On 18 mrt 2010, at 10:07, Henrik Johansson wrote: > Hello, > > On 17 mar 2010, at 16.22, Paul van der Zwan wrote: > >> >> On 16 mrt 2010, at 19:48, valrh...@gmail.com wrote: >> >>> Someone correct me if I'm wrong, but it could just be a coincide

[zfs-discuss] Block locations in a mirror vdev ?

2010-03-31 Thread Paul van der Zwan
I cannot find the answer in the on-disk specification or anywhere else. Are the devices in a mirror vdev block-by-block copies? I mean, is block 10013223 on one device the same as block 10013223 on the other devices in the mirror vdev? Of course only after that block has ever been used by zfs, I know blocks
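
One way to see where a file's blocks actually land is to dump its block pointers with zdb; the DVA offsets it prints are relative to the top-level vdev, and a mirror vdev passes the same offset down to every child disk. A rough sketch, assuming a pool tank with filesystem tank/fs and a placeholder object number:

$ ls -i /tank/fs/somefile             # on ZFS the inode number is the object number
$ zdb -ddddd tank/fs <object-number>  # dump the object with its block pointers
# A DVA such as 0:2a4001000:20000 means vdev 0, offset 0x2a4001000, allocated
# size 0x20000; on a mirror that offset is used on each device in the vdev.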

[zfs-discuss] Fixing device names after disk shuffle

2012-10-14 Thread Paul van der Zwan
I moved some disks around on my OpenIndiana system and now the names shown by zpool status no longer match the names that format shows: $ zpool status pool: datapool state: ONLINE scan: scrub repaired 0 in 7h58m with 0 errors on Wed Oct 3 01:13:47 2012 config: NAME  STATE
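
What usually straightens this out is letting ZFS re-enumerate the devices on import; a minimal sketch, assuming datapool is not the root pool and can be taken offline briefly:

$ zpool export datapool
$ zpool import -d /dev/dsk datapool   # rescan /dev/dsk and record the current names
$ zpool status datapool               # names should now match what format shows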

Re: [zfs-discuss] Fixing device names after disk shuffle

2012-10-14 Thread Paul van der Zwan
On 14 Oct 2012, at 20:56 , "Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)" wrote: >> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- >> boun...@opensolaris.org] On Behalf Of Paul van der Zwan >> >> What was c5t2 is now

Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Paul van der Zwan
On 11 jun 2009, at 10:48, Jerry K wrote: There is a pretty active apple ZFS sourceforge group that provides RW bits for 10.5. Things are oddly quiet concerning 10.6. I am curious about how this will turn out myself. Jerry Strange thing I noticed in the keynote is that they claim the

Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-11 Thread Paul van der Zwan
On 11 jun 2009, at 11:48, Sami Ketola wrote: On 11 Jun 2009, at 12:44, Paul van der Zwan wrote: Strange thing I noticed in the keynote is that they claim the disk usage of Snow Leopard is 6 GB less than Leopard mostly because of compression. Either they have implemented compressed
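
If the saving really does come from filesystem-level compression, the ZFS equivalent is easy to measure; a small sketch, assuming a placeholder dataset tank/apps:

$ zfs set compression=on tank/apps    # compress data written from now on
$ zfs get compression,compressratio tank/apps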

[zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Paul van der Zwan
I'm testing an X4500 where we need to send over 600MB/s over the network. This is no problem; I get about 700MB/s over a single 10G interface. The problem is that the box also needs to accept incoming data at 100MB/s. If I do a simple test ftp-ing files into the same filesystem I see the FTP being limite
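
To see how the incoming FTP writes and the outgoing reads actually share the disks while such a test runs, per-vdev throughput can be watched live; assuming the pool is called datapool (a placeholder name):

$ zpool iostat -v datapool 1          # read/write bandwidth per vdev, every second
$ iostat -xnz 1                       # per-disk service times, to spot saturated spindles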

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Paul van der Zwan
On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote: I'm testing an X4500 where we need to send over 600MB/s over the network. This is no problem, I get about 700MB/s over a single 10G interface. Problem is the box also needs to accept incoming data at 100MB/s. If I do a simple test ftp-ing fil

Re: [zfs-discuss] Write over read priority possible ?

2007-06-25 Thread Paul van der Zwan
On 25 Jun 2007, at 14:37, [EMAIL PROTECTED] wrote: On 25 Jun 2007, at 14:00, [EMAIL PROTECTED] wrote: I'm testing an X4500 where we need to send over 600MB/s over the network. This is no problem, I get about 700MB/s over a single 10G interface. Problem is the box also needs to accept

Re: [zfs-discuss] ZFS Roadmap - thoughts on expanding raidz / restriping / defrag

2007-12-18 Thread Paul van der Zwan
On 17 Dec 2007, at 11:42, Jeff Bonwick wrote: > In short, yes. The enabling technology for all of this is something > we call bp rewrite -- that is, the ability to rewrite an existing > block pointer (bp) to a new location. Since ZFS is COW, this would > be trivial in the absence of snapshots -

Re: [zfs-discuss] Performance with Sun StorageTek 2540

2008-02-28 Thread Paul Van Der Zwan
> On Wed, 27 Feb 2008, Cyril Plisko wrote: > >> > >> > http://www.simplesystems.org/users/bfriesen/zfs-discuss/2540-zfs-performance.pdf > > > > Nov 26, 2008 ??? May I borrow your time machine ? ;-) > > Are there any stock prices you would like to know about? Perhaps you > > are inter

Re: [zfs-discuss] linux versus sol10

2006-11-08 Thread Paul van der Zwan
On 7 Nov 2006, at 21:02, Michael Schuster wrote: listman wrote: hi, i found a comment comparing linux and solaris but wasn't sure which version of solaris was being referred. can the list confirm that this issue isn't a problem with solaris10/zfs?? "Linux also supports asynchronous director

Re: Re[2]: [zfs-discuss] linux versus sol10

2006-11-08 Thread Paul van der Zwan
On 8 Nov 2006, at 16:16, Robert Milkowski wrote: Hello Paul, Wednesday, November 8, 2006, 3:23:35 PM, you wrote: PvdZ> On 7 Nov 2006, at 21:02, Michael Schuster wrote: listman wrote: hi, i found a comment comparing linux and solaris but wasn't sure which version of solaris was being referr

Re: [zfs-discuss] Trying to replicate ZFS self-heal demo and not seeing fixed error

2006-05-09 Thread Paul van der Zwan
On 9-mei-2006, at 11:35, Joerg Schilling wrote: Darren J Moffat <[EMAIL PROTECTED]> wrote: Jeff Bonwick wrote: I personally hate this device naming semantic (/dev/rdsk/c-t-d not meaning what you'd logically expect it to). (It's a generic Solaris bug, not a ZFS thing.) I'll see