Data: 90% of current computers have less than 9 GB of RAM, and less
than 5% have SSDs.
Let's take a "standard" storage box: 4 TB of capacity... dedupe on, a
dataset with 32 KB blocks..., 2 TB of data in use... that needs 16 GB
of memory just for the DDT... but you will not see it until...
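For anyone who wants to check the arithmetic, here is a quick sketch
(the ~256 bytes per in-core DDT entry is an assumption - commonly
quoted figures run from roughly 250 to 320 bytes - so treat the result
as a ballpark):

# Rough DDT sizing, assuming ~256 bytes of RAM per in-core DDT entry.
data_in_use = 2 * 2**40            # 2 TB actually written
block_size = 32 * 2**10            # 32 KB recordsize
bytes_per_entry = 256              # assumed in-core entry size

entries = data_in_use // block_size        # one entry per unique block
ddt_ram = entries * bytes_per_entry
print(f"{entries:,} entries -> {ddt_ram / 2**30:.0f} GiB for the DDT")
# prints: 67,108,864 entries -> 16 GiB for the DDT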
> Realistically, I think people are overly enamored with dedup as a
> feature - I would generally only consider it worthwhile in cases
> where you get significant savings. And by significant, I'm talking
> an order of magnitude space savings. A 2x savings isn't really
> enough to counter...
> or as a member of the ZFS team (which I'm not).
>
Then you have to be brutally good with Java
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
>
> This may also be accomplished by using snapshots and clones of
> data sets. At least for OS images: user profiles and documents
> could be something else entirely.
Yes... but that needs a manager with access to ZFS itself... with
dedupe you can use a userland manager (much more...
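To make the comparison concrete, here is a minimal sketch of the kind
of manager the clone approach needs. The pool and dataset names are
made up for the example, and it shells out to the real zfs snapshot /
zfs clone commands, so unlike a dedupe-based userland manager it has
to run with ZFS privileges:

import subprocess

POOL = "tank"                        # hypothetical pool name
GOLDEN = f"{POOL}/images/os-golden"  # hypothetical golden OS image

def zfs(*args):
    # Thin wrapper around the zfs(1M) CLI; raises if the command fails.
    subprocess.run(["zfs", *args], check=True)

def provision(user):
    # Writable per-user copy of the golden image via snapshot + clone.
    snap = f"{GOLDEN}@{user}"
    zfs("snapshot", snap)                        # point-in-time, no data copied
    zfs("clone", snap, f"{POOL}/images/{user}")  # clone shares unchanged blocks

provision("alice")

The clone shares every unchanged block with the snapshot, so you get
the space savings without keeping any DDT in RAM.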
>
> I think, with current bits, it's not a simple matter of "ok for
> enterprise, not ok for desktops". With an SSD for either main
> storage or L2ARC, and/or enough memory, and/or a not very
> demanding workload, it seems to be ok.
The main problem is not performance (for a home server...
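On the "enough memory" point: only part of the ARC will hold metadata
(on older builds the arc_meta_limit default was about a quarter of
arc_max), so a rough fit check looks like the sketch below. The 25%
fraction and the 256-byte entry size are assumptions, not measured
values:

def ddt_fits_in_arc(ram_bytes, data_bytes, block_size=32 * 2**10,
                    bytes_per_entry=256, meta_fraction=0.25):
    # meta_fraction mirrors the old arc_meta_limit default (~1/4 of ARC);
    # bytes_per_entry is the same rough in-core DDT entry size as above.
    ddt_bytes = (data_bytes // block_size) * bytes_per_entry
    return ddt_bytes <= ram_bytes * meta_fraction, ddt_bytes

fits, ddt = ddt_fits_in_arc(ram_bytes=8 * 2**30, data_bytes=2 * 2**40)
print(f"DDT ~{ddt / 2**30:.0f} GiB, fits in ARC metadata: {fits}")
# prints: DDT ~16 GiB, fits in ARC metadata: False

Which is why the quoted advice falls back on an SSD for L2ARC on the
90% of machines in the RAM bucket above.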
>
> Does the machine respond to ping?
Yes
>
> If there is a GUI, does the mouse pointer move?
>
There is no GUI (NexentaStor).
> Does the keyboard numlock key respond at all?
Yes
>
> I just find it very hard to believe that such a situation could
> exist as I have done some *abusive* testing...
I had the same experience.
Finally I could remove the dedup dataset (1.7 TB)... I was wrong: it
wasn't 30 hours... it was "only" 21 (the reason for the mistake: first
I tried to delete it with the NexentaStor Enterprise trial 3.02... but
when I saw that there was a new version of NexentaStor Community
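For what it's worth, the hours-long destroy matches a back-of-envelope
estimate: freeing a deduped dataset means a DDT update per block, and
on spinning disks those are mostly random I/Os. Assuming the 32 KB
blocks from earlier in the thread and ~150 random IOPS for a single
7200 rpm disk (both assumptions):

dataset_bytes = 1.7 * 10**12    # the 1.7 TB dataset being destroyed
block_size = 32 * 2**10         # assumed 32 KB blocks
iops = 150                      # rough random-IOPS figure, one disk

blocks = dataset_bytes / block_size     # ~52 million DDT entries to touch
hours = blocks / iops / 3600
print(f"~{blocks / 1e6:.0f}M blocks -> ~{hours:.0f} h if every update is a random I/O")
# prints: ~52M blocks -> ~96 h if every update is a random I/O

Caching and batching amortize a lot of that, which is how it comes in
at 21-30 hours instead of ~96, but the order of magnitude is the point.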