I had the same experience.
Finally I could remove the dedup dataset (1.7 TB)... I was wrong... it wasn't 30
hours... it was "only" 21 (the reason for the mistake: first I tried to delete it
with the NexentaStor Enterprise trial 3.02... but when I saw that there was a new
version of NexentaStor Community
On Wed, Jun 16, 2010 at 3:39 AM, Fco Javier Garcia wrote:
> The main problem is not performance (for a home server it is not a problem)...
> but what really is a BIG PROBLEM is when you try to delete a snapshot that is a
> little big... (try it yourself... create a big random file with 90 GB of
> data... then
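For concreteness, a minimal sketch of that kind of test in Python, driving
the zfs(8) and dd(1) commands through subprocess. The pool and dataset names
are hypothetical, and the steps after "then" are truncated in the archive, so
this only illustrates the general shape of the experiment: write a large
unique file into a dedup-enabled dataset, snapshot it, and time the destroy.

    import subprocess
    import time

    POOL = "tank"                    # hypothetical pool name
    DS = f"{POOL}/deduptest"         # hypothetical dataset name

    def zfs(*args):
        # Run a zfs(8) subcommand, raising if it fails.
        subprocess.run(["zfs", *args], check=True)

    # Dedup-enabled dataset filled with ~90 GiB of random (unique) data.
    zfs("create", "-o", "dedup=on", DS)
    subprocess.run(["dd", "if=/dev/urandom", f"of=/{DS}/bigfile",
                    "bs=1M", "count=92160"], check=True)
    zfs("snapshot", f"{DS}@before")

    # Delete the live copy so the snapshot is the sole owner of the blocks;
    # destroying it then has to free all 90 GiB through the dedup table.
    subprocess.run(["rm", f"/{DS}/bigfile"], check=True)

    # This is where the hours go: freeing each block means a DDT
    # lookup/update, which becomes random disk I/O once the dedup
    # table no longer fits in ARC.
    start = time.time()
    zfs("destroy", f"{DS}@before")
    print(f"destroy took {time.time() - start:.0f} s")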
On Jun 16, 2010, at 9:02 AM, Carlos Varela wrote:
> Does the machine respond to ping?
Yes
> If there is a GUI, does the mouse pointer move?
There is no GUI (NexentaStor)
> Does the keyboard Num Lock key respond at all?
Yes
> I just find it very hard to believe that such a
> situation could exist as I have done some *abusive* tes
On Jun 16, 2010, at 6:46 AM, Dennis Clarke wrote:
>
> I have been lurking in this thread for a while for various reasons and
> only now does a thought cross my mind worth posting: are you saying that
> a reasonably fast computer with 8GB of memory is entirely non-responsive
> due to a ZFS-related
> I think, with current bits, it's not a simple matter of "OK for
> enterprise, not OK for desktops". With an SSD for either main storage
> or L2ARC, and/or enough memory, and/or a not very demanding workload,
> it seems to be OK.
The main problem is not performance (for a home server it is not a problem)...
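A minimal sketch of what "an SSD for L2ARC" means in practice, again in
Python over the command-line tools; the pool and device names here are
hypothetical. zpool add attaches the SSD as a cache device, and zpool
status -D prints the dedup-table histogram, so you can judge how much
DDT there actually is to cache:

    import subprocess

    POOL = "tank"      # hypothetical pool name
    SSD = "c1t2d0"     # hypothetical SSD device

    # Attach the SSD as an L2ARC (cache) device: DDT entries evicted
    # from RAM can then be read back at SSD rather than disk latency.
    subprocess.run(["zpool", "add", POOL, "cache", SSD], check=True)

    # Report dedup-table statistics for the pool.
    subprocess.run(["zpool", "status", "-D", POOL], check=True)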
On 16/06/2010 11:30, Fco Javier Garcia wrote:
> This may also be accomplished by using snapshots and
> clones of data sets. At least for OS images: user
> profiles and documents could be something else entirely.
Yes... but that would need a manager with access to ZFS itself... whereas
with dedup you can use a userland manager (much more
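For reference, the snapshot-and-clone approach being compared against looks
roughly like this (a sketch in Python over the zfs command; the dataset and
user names are hypothetical):

    import subprocess

    def zfs(*args):
        # Run a zfs(8) subcommand, raising if it fails.
        subprocess.run(["zfs", *args], check=True)

    GOLDEN = "tank/vdi/golden"    # hypothetical master OS image

    # Freeze the master image, then hand each user a clone. Clones
    # share every unmodified block with the snapshot, so the space
    # savings arrive without any dedup table -- but creating them
    # needs ZFS-level access, which is the poster's point.
    zfs("snapshot", f"{GOLDEN}@base")
    for user in ["alice", "bob"]:
        zfs("clone", f"{GOLDEN}@base", f"tank/vdi/{user}")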
On Tue, Jun 15, 2010 at 7:28 PM, David Magda wrote:
> On Jun 15, 2010, at 14:20, Fco Javier Garcia wrote:
>
>> I think dedup may have its greatest appeal in VDI environments
>> (think about an environment where 85% of the data that the virtual
>> machines need is in ARC or L2ARC... it is like a dream... almost
>> instantaneous response... and you can boot a new
On 06/15/10 10:52, Erik Trimble wrote:
> Frankly, dedup isn't practical for anything but enterprise-class
> machines. It's certainly not practical for desktops or anything
> remotely low-end.
We're certainly learning a lot about how ZFS dedup behaves in practice.
I've enabled dedup on two desktops
On 6/15/2010 11:53 AM, Fco Javier Garcia wrote:
>> or as a member of the ZFS team (which I'm not).
> Then you have to be brutally good with Java
Thanks, but I do get it wrong every so often (hopefully, rarely). More
importantly, I don't know anything about the internal goings-on of
On 6/15/2010 11:49 AM, Geoff Nordli wrote:
> From: Fco Javier Garcia
> Sent: Tuesday, June 15, 2010 11:21 AM
>
>> Realistically, I think people are overly enamored with dedup as a
>> feature - I would generally only consider it worthwhile in cases
>> where you get significant savings. And by significant, I'm talking
>> an order of magnitude space savings. A 2x savings isn't really
>> enough to counte
On 6/15/2010 10:52 AM, Erik Trimble wrote:
> Frankly, dedup isn't practical for anything but enterprise-class
> machines. It's certainly not practical for desktops or anything
> remotely low-end.
This isn't just a ZFS issue - all implementations I've seen so far
require enterprise-class solutions
On 6/15/2010 9:03 AM, Fco Javier Garcia wrote:
> Data: 90% of current computers have less than 9 GB of RAM, and fewer
> than 5% have SSDs.
> Take a "standard" storage box with a capacity of 4 TB... dedup on, a
> dataset with 32 KB blocks..., 2 TB of data in use... you need 16 GB
> of memory just for the DDT... but you will not see this unti
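The 16 GB figure checks out as back-of-the-envelope arithmetic, assuming
the commonly cited ~250-320 bytes of in-core state per DDT entry (the exact
per-entry size varies by release; 250 is an assumption here):

    data_in_use = 2 * 2**40      # 2 TiB of deduped data
    block_size = 32 * 2**10      # 32 KiB blocks
    bytes_per_entry = 250        # assumed in-core cost per DDT entry

    # Worst case: every block is unique, so one DDT entry per block.
    entries = data_in_use // block_size      # 67,108,864 entries
    ddt_bytes = entries * bytes_per_entry
    print(f"{entries:,} entries -> {ddt_bytes / 1e9:.1f} GB for the DDT")
    # ~16.8 GB, in line with the 16 GB quoted above.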