> So recently, I decided to test out some of the ideas I've been toying
> with, and created 50,000 and 100,000 filesystems. The test
> machine was a nice V20Z with dual 1.8GHz Opterons and 4GB RAM,
> connected to a SCSI 3310 RAID array via two SCSI controllers.
I did a similar test a couple of ...
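For anyone wanting to reproduce that, the test boils down to a timed loop
over zfs create; a rough sketch, with "tank" as a placeholder pool name:

    # time the creation of 50,000 filesystems under a pool named "tank"
    ptime sh -c '
        i=0
        while [ $i -lt 50000 ]; do
            zfs create tank/fs$i
            i=`expr $i + 1`
        done'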
Darren said:
> Right, that is a very important issue. Would a
> ZFS "scrub" framework do copy-on-write?
> As you point out if it doesn't then we still need
> to do something about the old clear text blocks
> because strings(1) over the raw disk will show them.
>
> I see the desire to have a knob ...
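To make Darren's point concrete, this is all it takes to spot stale
plaintext on the raw device; the device path and search string here are
made up:

    # scan the raw disk for leftover cleartext blocks
    strings /dev/rdsk/c0t0d0s0 | grep "top secret"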
> Dunno about eSATA jbods, but eSATA host ports have
> appeared on at least two HDTV-capable DVRs for storage
> expansion (looks like one model of the Scientific Atlanta
> cable box DVRs as well as on the shipping-any-day-now
> TiVo Series 3).
>
> It's strange that they didn't go with firewire
Lori said:
> The limitation is mainly about the *number* of disks
> that can be accessed at one time.
> ...
> But with straight mirroring, there's no such problem
> because any disk in the mirror can supply all of the
> disk blocks needed to boot.
Does that mean that these restrictions will go away ...
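For context, a mirrored pool of the kind Lori describes would be set up
along these lines; the device names are illustrative:

    # each disk in a two-way mirror holds a full copy of every block,
    # so either one alone can supply everything needed to boot
    zpool create rpool mirror c0t0d0s0 c0t1d0s0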
Eric said:
> For U3, these are the performance fixes:
> 6424554 full block re-writes need not read data in
> 6440499 zil should avoid txg_wait_synced() and use dmu_sync() to issue
>         parallel IOs when fsyncing
> 6447377 ZFS prefetch is inconsistent
> 6373978 want to take lots of snapshots quickly
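If I understand 6373978 correctly, it is what makes something like the
following cheap; the pool and snapshot names are invented:

    # snapshot a filesystem and everything below it in one operation
    zfs snapshot -r tank@backup-now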
Jeff Bonwick said:
> RAID-Z takes a different approach. We were designing a filesystem
> as well, so we could make the block pointers as semantically rich
> as we wanted. To that end, the block pointers in ZFS contain data
> layout information. One nice side effect of this is that we don't
> need ...
> > I guess that could be made to work, but then the data on
> > the disk becomes much (much much) more difficult to
> > interpret because you have some rows which are effectively
> > one width and others which are another (ad infinitum).
>
> How do rows come into it? I was just assuming that ...
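For reference, a RAID-Z vdev of the sort Jeff describes is created like
this; the disk names are placeholders:

    # four disks in a single-parity RAID-Z vdev; ZFS chooses the stripe
    # width per block and records the layout in the block pointers
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0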
Mike said:
> 3) ZFS ability to recognize duplicate blocks and store only one copy.
> I'm not sure the best way to do this, but my thought was to have ZFS
> remember what the checksums of every block are. As new blocks are
> written, the checksum of the new block is compared to known checksums.
>
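A crude file-level analogue of Mike's idea, just to illustrate the
checksum-matching step; the path is made up:

    # hash every file, then flag any digest seen more than once
    find /tank -type f | while read f; do
        sum=`digest -a sha256 "$f"`
        echo "$sum $f"
    done | sort | awk 'seen[$1]++ { print "duplicate:", $2 }'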
> If you are going to use Veritas NetBackup why not use the
> native Solaris client ?
I don't suppose anyone knows if Networker will become zfs-aware at any
point?
e.g.
backing up properties
backing up an entire pool as a single save set
efficient incrementals (something similar to "zfs send") ...
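The kind of incremental I have in mind, roughly; the dataset, snapshot,
and backup paths are invented:

    # full dump of one snapshot, then only the delta to the next one
    zfs snapshot tank/home@monday
    zfs send tank/home@monday > /backup/home.monday
    zfs snapshot tank/home@tuesday
    zfs send -i tank/home@monday tank/home@tuesday > /backup/home.mon-tue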
A slightly different tack now...
what filesystems is it a good (or bad) idea to put on ZFS?
root - NO (not yet anyway)
home - YES (although the huge number of mounts still scares me a bit)
/usr - possible?
/var - possible?
swap - no?
Is there any advantage in having multiple zpools over just having one ...
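For /usr and /var, one approach that should work today is a legacy
mountpoint, so the filesystem is mounted from /etc/vfstab like any other;
the names here are illustrative:

    zfs create tank/var
    zfs set mountpoint=legacy tank/var
    # then add an /etc/vfstab entry:
    #   tank/var  -  /var  zfs  -  yes  -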
Casper said:
> You can have composite mounts (multiple nested mounts)
> but that is essentially a single automount entry so it
> can't be overly long, I believe.
I've seen that in the man page, but I've never managed to
find a use for it!
What I'd *like* to be able to do is have a map that amount...
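For anyone who hasn't seen one, a composite (multi-mount) automount entry
looks roughly like this; the key, server, and path names are invented:

    # one key, several nested mounts: /tools and /tools/man
    tools   /       server1:/export/tools \
            /man    server2:/export/man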