It finished!
Going to switch off dedup ... if it's possible yet
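For what it's worth, switching dedup off is just a property set, and it only
affects new writes; blocks already in the DDT stay referenced until they are
freed or rewritten. A sketch, with "tank" standing in for the pool name:

    zfs set dedup=off tank    # new writes bypass the dedup table
    zfs get dedup tank        # confirm the setting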
- Original Message -
> I have a 1TB mirrored, deduplicated pool.
> snv_134, running on an x86 i7 PC with 8GB RAM.
> I destroyed a 30GB zfs volume and am now trying to import that pool from a
> LiveUSB-booted OpenSolaris.
> It has been running for >2h already; I'm waiting ...
It may even take longer. I've seen this take a while.
I have a 1TB mirrored, deduplicated pool.
snv_134, running on an x86 i7 PC with 8GB RAM.
I destroyed a 30GB zfs volume and am now trying to import that pool from a
LiveUSB-booted OpenSolaris.
It has been running for >2h already; I'm waiting ...
How can I see a progress bar, or any other sign of the current import job?
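There is no import progress bar as such, but you can at least confirm the
disks are busy from another shell. A hedged sketch (device and pool names
are examples):

    iostat -xn 5          # raw per-device activity; works mid-import
    zpool iostat tank 5   # per-pool I/O, once the import has completed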
> "k" == Khyron writes:
k> The RFE is out there. Just like SLOGs, I happen to think it a
k> good idea, personally, but that's my personal opinion. If it
k> makes dedup more usable, I don't see the harm.
slogs and l2arcs, modulo the current longstanding ``cannot import a pool
with a missing slog'' caveat.
Ugh! If you received a direct response from me instead of via the list,
apologies for that.
Rob:
I'm just reporting the news. The RFE is out there. Just like SLOGs, I happen
to think it a good idea, personally, but that's my personal opinion. If it
makes dedup more usable, I don't see the harm.
> The other thing I've noticed with all of the "destroyed a large dataset
> with dedup enabled and it's taking forever to import/destroy/..." questions
> is that the process runs so so so much faster with 8+ GiB of RAM. Almost to
> a man, everyone who reports these 3, 4, or more day destroys [...]
The system in question has 8GB of RAM. It never paged during the import
(unless I was asleep at that point, but anyway).
It ran for 52 hours, then started showing 47% kernel CPU usage. At this
stage, dtrace stopped responding, and so iopattern died, as did iostat. It
was also increasing RAM usage rapidly.
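If anyone lands in that state again, a few stock tools that don't depend on
dtrace can still show what the kernel is doing. A rough sketch:

    mpstat 5                   # per-CPU usage, including %sys (kernel) time
    echo ::memstat | mdb -k    # kernel vs. user memory breakdown
    kstat -m zfs -n arcstats   # ARC size, hit rates, metadata usage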
> RFE open to allow you to store [DDT] on a separate top level VDEV
hmm, add to this spare, log, and cache vdevs; it's to the point of making
another pool and thinly provisioning volumes to maintain partitioning
flexibility.
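For the archives: "thinly provisioning volumes" here presumably means sparse
zvols. A minimal sketch, assuming a hypothetical second pool named "fast":

    zfs create -s -V 100G fast/vol1   # -s: don't reserve the space up front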
taemun: hey, thanks for closing the loop!
The DDT is stored within the pool, IIRC, but there is an RFE open to allow
you to store it on a separate top-level VDEV, like a SLOG.
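Hedged side note: you can get a rough look at how big a pool's DDT actually
is with zdb, e.g. on a pool named "tank":

    zdb -DD tank    # dedup table stats plus a refcount histogram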
The other thing I've noticed with all of the "destroyed a large dataset
with dedup enabled and it's taking forever to import/destroy/..." questions
is that the process runs so so so much faster with 8+ GiB of RAM. Almost to
a man, everyone who reports these 3, 4, or more day destroys [...]
Just thought I'd chime in for anyone who had read this - the import
operation completed this time, after 60 hours of disk grinding.
:)
After around four days the process appeared to have stalled (no audible
hard drive activity). I restarted with milestone=none, deleted
/etc/zfs/zpool.cache, restarted, and went 'zpool import tank'. (I also
allowed root login over ssh, so I could make new ssh sessions if required.)
Now I can watch the progress.
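For anyone who needs to repeat that recovery path, a rough sketch of the
steps (pool name and details are examples, not gospel):

    # At the GRUB menu, edit the kernel line and append: -m milestone=none
    # so the box boots without starting services. Then, as root:
    rm /etc/zfs/zpool.cache    # so the next boot won't auto-import the pool
    reboot
    # After the normal boot, import by hand:
    zpool import tank

    # Optional: allow root ssh first (PermitRootLogin yes in
    # /etc/ssh/sshd_config, then: svcadm restart ssh) so you can open
    # extra sessions while the import grinds away.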
Do you think that more RAM would help this go faster? We've just hit 48
hours. No visible progress (although that doesn't really mean much).
It is presently in a system with 8GB of RAM; I could try to move the pool
across to a system with 20GB of RAM, if that is likely to expedite the
process.
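A hedged back-of-the-envelope on why RAM matters: destroying a deduped
dataset means updating the DDT for every block it referenced. At the default
8K volblocksize and something like 320 bytes per in-core DDT entry (both
assumptions), a 30GB zvol like the one earlier in this thread works out to
roughly 30 GB / 8 KB = ~3.9M entries, or ~1.2 GB of table; scale that up to
a multi-TB dataset and the DDT alone can outgrow an 8GB machine's ARC, at
which point each entry update can become a random disk read. That seems to
be where the multi-day times come from.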
On 02/11/10 10:33, Lori Alt wrote:
> This bug is closed as a dup of another bug which is not readable from
> the opensolaris site (I'm not clear what makes some bugs readable and
> some not).
The other bug in question was opened yesterday and probably hasn't had
time to propagate.
On 02/11/10 08:15, taemun wrote:
> Can anyone comment about whether the on-boot "Reading ZFS config" is
> any slower/better/whatever than deleting zpool.cache, rebooting and
> manually importing?
> I've been waiting more than 30 hours for this system to come up. There
> is a pool with 13TB of data attached [...]
Can anyone comment about whether the on-boot "Reading ZFS config" is
any slower/better/whatever than deleting zpool.cache, rebooting and
manually importing?
I've been waiting more than 30 hours for this system to come up. There
is a pool with 13TB of data attached. The system locked up whilst
destroying [...]