On Tue, Jul 20, 2010 at 1:40 PM, Ulrich Graef wrote:
> When you are writing to a file and currently dedup is enabled, then the
> Data is entered into the dedup table of the pool.
> (There is one dedup table per pool not per zfs).
>
> Switching off the dedup does not change this data.
Yes, I suppo
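For reference, a rough way to look at what is already in the pool's dedup table, and to stop new writes from being deduplicated (a sketch only; the pool name 'tera' is taken from later in this thread and the zdb output format varies by build):

  # zdb -DD tera               # dump dedup table (DDT) statistics and histogram
  # zfs set dedup=off tera     # new writes skip the DDT; existing entries remain until their blocks are freed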
Hi, thanks for answering,
> How large is your ARC / your main memory?
> Probably too small to hold all metadata (1/1000 of the data amount).
> => metadata has to be read again and again
Main memory is 8GB. ARC (according to arcstat.pl) usually stays at 5-7GB
> A recordsize smaller than 128k
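For reference, a quick way to check how big the ARC actually is and what recordsize the datasets use (a sketch; 'tera' is the pool name used elsewhere in the thread):

  # kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max   # current ARC size and its cap, in bytes
  # echo "::arc" | mdb -k                               # same information via the kernel debugger
  # zfs get -r recordsize tera                          # recordsize per dataset (default 128K)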
Hi, I'm not sure if this is the right place to ask. I'm having a little trouble
deleting old solaris installs:
[EMAIL PROTECTED]:~]# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
--------------------------------------------------------------------------
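For old boot environments, the usual sequence is to list them and then remove the ones no longer needed; a minimal sketch (the BE name below is made up, since the listing above is cut off):

  # lustatus               # list boot environments and their state
  # ludelete old_be_name   # hypothetical name of the old install to remove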
No, weird situation. I unplugged the disks from the controller (I have them
labeled) before upgrading to snv89. After the upgrade, the controller names
changed.
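For what it's worth, when controller numbering changes, an export/import cycle normally rediscovers the disks under their new c#t#d# names (a sketch, using the pool name from later in the thread):

  # zpool export tera
  # zpool import           # scans /dev/dsk and lists importable pools with the new device names
  # zpool import tera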
Well, finally managed to solve my issue, thanks to the invaluable help of
Victor Latushkin, who I can't thank enough.
I'll post a more detailed step-by-step record of what he and I did (well, all
credit to him actually) to solve this. Actually, the problem is still there
(destroying a huge zvol
Thanks for your answer,
> after looking at your posts my suggestion would be to
> try the "OpenSolaris 2008.05 Live CD" and to import
> your pool using the CD. That CD is nv86 + some extra
> fixes.
I upgraded from snv85 to snv89 to see if it helped, but it didn't. I'll try to
download the 2008.05
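For reference, from the live CD environment the import would look roughly like this (a sketch; -f is only needed if the pool is still marked as in use by the old system):

  # zpool import           # list pools the live environment can see
  # zpool import -f tera   # force-import despite the "in use by another system" warning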
Here's the output. Numbers may be a little off because I'm doing a nightly
build and compressing a crashdump with bzip2 at the same time.
extended device statistics
   r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
   3.7   19.4    0.1    0.3  3.3  0.0  142.
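(Output in that format typically comes from extended iostat with throughput in MB/s, e.g. something like the line below; the 5-second interval is a guess.)

  # iostat -xnM 5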
I'll provide you with the results of these commands soon. But for the record,
Solaris does hang (dies out of memory, can't type anything on the console,
etc). What I can do is boot with -k and get to kmdb when it's hung (BREAK over
serial line). I have a crashdump I can upload.
I checked the di
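Once the crash dump is saved, a rough sketch of pulling the memory picture out of it with mdb (file names are the savecore defaults; dcmd output varies by build):

  # cd /var/crash/`uname -n`
  # mdb unix.0 vmcore.0
  > ::status              # basic information about the panic/dump
  > ::memstat             # where physical memory went (kernel, anon, page cache, free, ...)
  > ::stacks -m zfs       # kernel thread stacks, filtered to the zfs module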
fwiw, here are my previous posts:
http://www.opensolaris.org/jive/thread.jspa?threadID=61301&tstart=30
http://www.opensolaris.org/jive/thread.jspa?threadID=62120&tstart=0
Seriously, can anyone help me? I've been asking for a week. No relevant
answers; just a couple of replies, but none solved my problem or even pointed me
in the right direction, and my posts were bumped down into oblivion.
I don't know how to ask. My home server has been offline for over a week now
bec
Hello. I'm still having problems with my array. It's been replaying the ZIL (I
think) for a week now and it hasn't finished. Now I don't know if it will ever
finish: is it starting from scratch every time? I'm dtracing the ZIL and this
is what I get:
  0  46882              dsl_pool_zil_clean:return
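For reference, output in that shape (CPU, probe ID, FUNCTION:NAME) is what a bare fbt one-liner prints; something like this would produce it (a sketch, since the exact script used above isn't shown):

  # dtrace -n 'fbt::dsl_pool_zil_clean:entry, fbt::dsl_pool_zil_clean:return'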
So, anyone have any ideas? I'm obviously hitting a bug here. I'm happy to help
anyone solve this; I DESPERATELY need this data. I can post dtrace results if
you send them to me. I wish I could solve this myself, but I'm not a C
programmer, I don't know how to program filesystems, much less an adv
Hello, thanks for your suggestion. I tried setting zfs_arc_max to 0x30000000
(768MB, out of 3GB). The system ran for almost 45 minutes before it froze.
Here's an interesting piece of arcstat.pl output, which I noticed just as it was
passing by:
Time read miss miss% dmis dm% pmis pm% mmis
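For reference, the usual way to make that cap persistent is via /etc/system (a sketch; the value matches the 768MB above and takes effect after a reboot), or it can be poked into the running kernel with mdb -kw:

  * /etc/system
  set zfs:zfs_arc_max = 0x30000000

  # echo 'zfs_arc_max/Z 0x30000000' | mdb -kw    # change it on the live kernel (use with care)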
So, I think I've narrowed it down to two things:
* ZFS tries to destroy the dataset again every time, because the destroy never
finished the last time
* In this process, ZFS makes the kernel run out of memory and die
So I thought of two options, but I'm not sure if I'm right:
Option 1: "D
No, this is a 64-bit system (athlon64) with 64-bit kernel of course.
Oops, replied too fast.
Ran without -n, and space was added successfully... but it didn't work. It died
out of memory again.
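For reference, the dry run vs. real run would have looked roughly like this (a sketch; the device name is made up, since the actual command isn't shown above):

  # zpool add -n tera c3t0d0    # -n: only print the resulting layout, don't change the pool
  # zpool add tera c3t0d0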
I forgot to post arcstat.pl's output:
    Time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
22:32:37  556K  525K     94  515K   94    9K   98  515K   97     1G    1G
22:32:38    63    63    100    63  100     0    0    63  100     1G    1G
22:32:39    74    74    100    74  100
I let it run while watching top, and this is what I got just before it hung.
Look at free mem. Is this memory allocated to the kernel? Can I allow the
kernel to swap?
last pid:  7126;  load avg:  3.36,  1.78,  1.11;  up 0+01:01:11    21:16:49
88 pr
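On the "is this memory allocated to the kernel?" question: a live breakdown can be had with the kernel debugger (a sketch; and as far as I know kernel memory, including the ARC, is not pageable, so it cannot be pushed out to swap):

  # echo "::memstat" | mdb -k    # physical memory by type: kernel, anon, page cache, free, ...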
I let it run for about 4 hours. When I returned, still the same: I can ping the
machine but I can't SSH to it, or use the console. Please, I need urgent help
with this issue!
I got more info. I can run zpool history and this is what I get:
2008-05-23.00:29:40 zfs destroy tera/[EMAIL PROTECTED]
2008-05-23.00:29:47 [internal destroy_begin_sync txg:3890809] dataset = 152
2008-05-23.01:28:38 [internal destroy_begin_sync txg:3891101] dataset = 152
2008-05-23.07:01:36 zpool
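For reference, those [internal destroy_begin_sync ...] lines are internally logged events; the full list, including internal events, is shown with the -i flag (a sketch):

  # zpool history -i tera | tail -30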
Hello, I'm having a big problem here, disastrous maybe.
I have a zpool consisting of 4x500GB SATA drives, this pool was born on S10U4
and was recently upgraded to snv85 because of iSCSI issues with some initiator.
Last night I was doing housekeeping, deleting old snapshots. One snapshot
failed