>>>>> "k" == Khyron <khyron4...@gmail.com> writes:
 k> The RFE is out there.  Just like SLOGs, I happen to think it a
 k> good idea, personally, but that's my personal opinion.  If it
 k> makes dedup more usable, I don't see the harm.

slogs and l2arcs, modulo the current longstanding ``cannot import
pool with attached missing slog'' bug, are disposable: you will lose
either a little data or no data if the device goes away (once the bug
is finally fixed).  This makes them less ponderous, because these days
we are looking for raidz2 or raidz3 amounts of redundancy, so a
separate device that wasn't disposable would need a 3- or 4-way
mirror.  It also makes their separateness more defensible, since they
can go away at any time, so maybe they do deserve to be separate.
The two properties together make the complexity more bearable.

Would an sddt be disposable, or would it be a critical top-level vdev
needed for import?  If it's critical, well, that's kind of annoying,
because now we need 3-way mirrors of the sddt to match the minimum
best-practice redundancy of the rest of the pool, and my first
reaction would be ``can you spread it throughout the normal
raidz{,2,3} vdevs, at least in backup form?''

Once I say a copy should be kept in the main pool even after it
becomes an sddt, well, what does that imply?

 * In the read case it means caching, so it could go in the l2arc.
   How is the DDT different from anything else in the l2arc?

 * In the write case it means sometimes committing it quickly, without
   waiting on the main pool, so we can release some lock or answer
   some RPC and continue.  Why not write it to the slog?  Then, if we
   lose the slog, we can do what we always do without the slog: roll
   back to the last valid txg, losing whatever writes were associated
   with that lost DDT update.

The two cases fit fine with the types of SSDs we're using for each
role and the kind of special error recovery we have if we lose the
device.
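To make the sizing worry concrete, here's a back-of-envelope sketch of
how big the in-core DDT gets (and therefore how big an sddt and its
3-way mirrors would have to be).  The ~320 bytes per in-core DDT entry
is a commonly cited ballpark, not a number from this thread, and the
function name is mine:

```python
# Rough in-core DDT size: one entry per unique block in the pool.
# ASSUMPTION: ~320 bytes per in-core DDT entry (commonly cited
# ballpark for ZFS of this era, not an exact figure).

def ddt_size_bytes(pool_bytes, avg_block_bytes=64 * 1024,
                   dedup_ratio=1.0, entry_bytes=320):
    """Estimate the in-core DDT size for a pool."""
    total_blocks = pool_bytes / avg_block_bytes
    unique_blocks = total_blocks / dedup_ratio
    return int(unique_blocks * entry_bytes)

# A 10 TB pool of 64 KB blocks deduping 2x:
size = ddt_size_bytes(10 * 2**40, dedup_ratio=2.0)
print(size / 2**30, "GiB")   # prints 25.0 GiB
```

Even at this crude level, a deduped multi-terabyte pool implies tens
of gigabytes of DDT, which is why ``just mirror the sddt three ways''
is not a free answer.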
Why litter a landscape already so full of special cases and tricks
(like the ``import pool with missing slog'' bug that is taking so long
to resolve) with yet another kind of vdev, one that will take a year
of discovering special cases and a half-decade of addressing them?

Maybe there is a reason.  Are DDT write patterns different from slog
write patterns?  Is it possible to make a DDT read cache using less
ARC for pointers than the l2arc currently uses?  Is the DDT
particularly hard on the l2arc because of its small block sizes?  Will
the sddt be delivered with a separate offline ``not an fsck!!!'' tool
for slowly regenerating it from pool data if it's lost?  Or, after an
sddt goes bad, could the pool be mounted space-wastingly with an empty
DDT (no dedup done, deletes not freeing space) and the sddt
regenerated by a scrub?  If the performance or recovery behavior is
different from what we're working towards with optional-slog and
persistent-l2arc, then maybe the sddt does deserve to be another vdev
type.

so....i dunno.  On one hand I'm clearly nowhere near informed enough
to weigh in on an architectural decision like this and shouldn't even
be discussing it, and the same applies to you, Khyron, in my view,
since our input seems obvious at best and misinformed at worst.  On
the other hand, another major architectural change (the slog) was
delivered incomplete, in a cripplingly bad and silly, trivial way,
for, AFAICT, nothing but lack of sufficient sysadmin bitching and
moaning, leaving heaps of multi-terabyte naked pools out there for
half a decade with fancy triple redundancy that will be totally lost
if a single SSD + zpool.cache goes bad.  So apparently thinking things
through, even at this trivial level, might have some value to the ones
actually doing the work.
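The ``less ARC for pointers than the l2arc'' question above can also
be sketched with arithmetic: every buffer resident on an l2arc device
keeps a header in RAM, so small DDT blocks make the header-to-data
ratio ugly.  The ~180-byte header is an assumed ballpark for the era's
ZFS, not a measured value, and the function name is mine; only the
ratio matters here:

```python
# RAM consumed by ARC headers for a fully populated l2arc device.
# ASSUMPTION: ~180 bytes of in-RAM header per l2arc-resident buffer
# (ballpark for ZFS of this era); the point is how the overhead
# scales as block size shrinks, as it does for DDT blocks.

def l2arc_header_overhead(cache_bytes, block_bytes, header_bytes=180):
    """Estimate ARC header RAM for a full l2arc of uniform blocks."""
    return (cache_bytes // block_bytes) * header_bytes

cache = 100 * 2**30   # a 100 GiB l2arc device, fully populated
for block in (512, 4096, 128 * 1024):
    ram = l2arc_header_overhead(cache, block)
    print(f"{block:>6}-byte blocks: {ram / 2**20:8.1f} MiB of headers")
```

With 128 KB blocks the headers cost a tolerable ~140 MiB of RAM; with
512-byte blocks the same device costs tens of GiB of RAM in headers,
which is one plausible argument for treating the DDT specially rather
than just shoving it through the l2arc.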
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss