Thanks,
Looks like I'll be using raidz3.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
The only other zfs pool in my system is a mirrored rpool (two 500 GB disks). This
is for my own personal use, so it's not like the data is mission-critical in
some sort of production environment.
The advantage I can see in going with raidz2 + a spare over raidz3 with no spare
is I would spend much
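The capacity math behind that tradeoff can be sketched quickly. As a hypothetical illustration (the post does not state a disk count), assume an 8-disk set of 1.5 TB drives:

```shell
# Usable capacity and fault tolerance for two 8-disk layouts (the disk
# count and sizes here are illustrative, not taken from the post).
awk 'BEGIN {
    n = 8; size_tb = 1.5
    # raidz2 + hot spare: a 7-disk vdev with 2 parity disks, 1 disk idle
    printf "raidz2 + spare: %.1f TB usable, 2 concurrent failures (3rd only after resilver)\n", (n - 1 - 2) * size_tb
    # raidz3, no spare: an 8-disk vdev with 3 parity disks
    printf "raidz3:         %.1f TB usable, 3 concurrent failures\n", (n - 3) * size_tb
}'
```

With these numbers the usable space comes out the same, but raidz3 survives a third simultaneous failure without waiting on a resilver.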
I'd like to thank Tim and Cindy at Sun for providing me with a new zfs binary
file that fixed my issue. I was able to get my zpool back! Hurray!
Thank You.
Just curious if anything has happened here.
I had a similar issue that was solved by upgrading from 4GB to 8GB of RAM.
I now have the issue again, and my box hard locks when doing the import after
about 30 minutes (this time not using dedup, but using iscsi). I debated
on upgrading to 16G
I should also mention that once the "lock" starts, the disk activity light on
my case stays busy for a bit (1-2 minutes MAX), then does nothing.
Howdy All,
I made a 1 TB zfs volume within a 4.5 TB zpool called vault for testing iscsi.
Both dedup and compression were off. After my tests, I issued a zfs destroy to
remove the volume.
This command hung. After 5 hours, I hard rebooted into single user mode and
removed my zfs cache file (I h
I'm thinking that the issue is simply with zfs destroy, not with dedup or
compression.
Yesterday I decided to do some iscsi testing, so I created a new 1 TB dataset in
my pool. I did not use compression or dedup.
After copying about 700GB of data from my windows box (NTFS on top of the iscsi
disk
> On Sun, 3 Jan 2010, Jack Kielsmeier wrote:
> >
> > help. It is suggested not to put zil on a device external to the
> > disks in the pool unless you mirror the zil device. This is
> > suggested to prevent data loss if the zil device dies.
>
> Just l2arc. Guess I can always repartition later.
>
> mike
>
>
> On Sun, Jan 3, 2010 at 11:39 AM, Jack Kielsmeier wrote:
> > Are you using the SSD for l2arc or zil or both?
Are you using the SSD for l2arc or zil or both?
> That's the thing, the drive lights aren't blinking,
> but I was thinking maybe the writes are going so slow
> that it's possible they aren't registering. And since
> I can't keep a running iostat, I can't tell if
> anything is going on. I can however get into the
> KMDB. Is there something in th
> Yeah, still no joy on getting my pool back. I think
> I might have to try grabbing another server with a
> lot more memory and slapping the HBA and the drives
> in that. Can ZFS deal with a controller change?
Just some more info that 'may' help.
After I upgraded to 8GB of RAM, I did not limit
I should note that my import command was:
zpool import -f vault
I got my pool back
Did a rig upgrade (new motherboard, processor, and 8 GB of RAM), re-installed
opensolaris 2009.06, did an upgrade to snv_130, and did the import!
The import only took about 4 hours!
I have a hunch that I was running into some sort of issue with not having
enough RAM prev
One thing that bugged me is that I cannot ssh as myself to my box while a zpool
import is running. It just hangs after accepting my password.
I had to convert root from a role to a user and ssh as root to my box.
I now know why this is, when I log in, /usr/sbin/quota gets called. This must
do a
Here is iostat output of my disks being read:
 r/s  w/s  kr/s kw/s wait actv wsvc_t asvc_t  %w  %b device
45.3  0.0  27.6  0.0  0.0  0.6    0.0   13.3   0  60 c3d0
44.3  0.0  27.0  0.0  0.0  0.3    0.0    7.7   0  34 c3d1
43.5  0.0  27.4  0.0  0.0  0.5    0.0   12.6
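As a sanity check, per-device read throughput like this can be totaled with a quick awk pass over the output (kr/s is the third column in Solaris `iostat -xn` output; the two complete device rows reported above are inlined here as sample data):

```shell
# Sum the kr/s (KB read per second) column across the listed disks.
# The two sample rows are the complete device lines reported above.
awk '{ total += $3 } END { printf "total read: %.1f KB/s\n", total }' <<'EOF'
45.3 0.0 27.6 0.0 0.0 0.6 0.0 13.3 0 60 c3d0
44.3 0.0 27.0 0.0 0.0 0.3 0.0 7.7 0 34 c3d1
EOF
```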
It sounds like you have less data on yours, perhaps that is why yours freezes
faster.
Whatever mine is doing during the import, it reads my disks for nearly
24 hours, and then starts writing to the disks.
The reads start out fast, then they just sit, going at something like 20k /
second on
Just wondering,
How much RAM is in your system?
I still haven't given up :)
I moved my Virtual Machines to my main rig (which gets rebooted often, so this
is 'not optimal' to say the least) :)
I have since upgraded to 129. I noticed that even if timeslider/autosnaps are
disabled, a zpool command still gets generated every 15 minutes. Since a
I don't mean to sound ungrateful (because I really do appreciate all the help I
have received here), but I am really missing the use of my server.
Over Christmas, I want to be able to use my laptop (right now, it's acting as a
server for some of the things my OpenSolaris server did). This means
Ok, dump uploaded!
Thanks for your upload
Your file has been stored as "/cores/redshirt-vmdump.0" on the Supportfiles
service.
Size of the file (in bytes): 1743978496.
The file has a cksum of: 2878443682.
Ok, I have started my import after adding -k to my kernel line (I did a test
dump using this method first just to make sure it works ok, and it does).
I have also added the following to my /etc/system file and rebooted:
set snooping=1
According to this page:
http://developers.sun.com/solaris/
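For reference, the two changes described above would look roughly like this. This is a sketch only: the exact GRUB menu.lst kernel line varies by install, and the `-k` flag loads kmdb at boot so a hard hang can be forced into the debugger and dumped.

```
# /boot/grub/menu.lst -- add -k to the kernel line (illustrative path/args):
kernel$ /platform/i86pc/kernel/$ISADIR/unix -k -B $ZFS-BOOTFS

# /etc/system -- enable the deadman timer so a hard hang panics and dumps:
set snooping=1
```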
Ah!
Ok, I will give this a try tonight! Thanks.
Ok, this is the script I am running (as a background process). The script itself
doesn't matter much; it's just here for reference, as I'm running into problems
just running the savecore command while the zpool import is running.
#!/
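The script itself was cut off in the archive, but a loop of that general shape might look like the following. This is a hedged sketch, not the poster's script: it assumes savecore(1M) exits zero once it has written a dump, and the directory, retry count, and interval are made up for illustration.

```shell
#!/bin/sh
# Sketch of a background loop that keeps retrying savecore until a crash
# dump is captured. Directory and retry parameters are illustrative.
capture_dump() {
    dir=${1:-/var/crash}
    tries=${2:-60}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # savecore exits 0 once it has written a dump; retry until then
        if savecore -v "$dir" 2>/dev/null; then
            echo "dump captured in $dir"
            return 0
        fi
        i=$((i + 1))
        sleep "${CAPTURE_INTERVAL:-30}"
    done
    return 1
}
```

Run in the background as `capture_dump /var/crash &` before starting the import attempt.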
Ok, my console is 100% completely hung, so I'm not gonna be able to enter any
commands when it freezes.
I can't even get the numlock light to change its status.
This time I even plugged in a PS/2 keyboard instead of USB thinking maybe it
was USB dying during the hang, but not so.
I have hard reboote
I'll see what I can do. I have a busy couple of days, so it may not be until
Friday until I can spend much time on this.
Thanks
f /var/crash/v120-brm-08/vmdump.0'
>
> It won't impact the running system.
>
> Then, upload the crash dump file by following these
> instructions:
>
> http://wikis.sun.com/display/supportfiles/Sun+Support+Files+-+Help+and+Users+Guide
>
> Let us know when yo
> On Dec 15, 2009, at 5:50, Jack Kielsmeier wrote:
>
> > Thanks.
> >
> > I've decided now to only post when:
> >
> > 1) I have my zfs pool back
> > or
> > 2) I give up
> >
> > I should note that there are periods of time wher
Thanks.
I've decided now to only post when:
1) I have my zfs pool back
or
2) I give up
I should note that there are periods of time when I can ping my server
(rarely), but most of the time not. I have not been able to ssh into it, and
the console is hung (minus the little blinking cursor).
My system was pingable again, but unfortunately I had disabled all services such
as ssh. My console was still hung, but I was wondering if I had hung USB crap
(since I use a USB keyboard and everything had been hung for days).
I force rebooted and the pool was not imported :(. I started the process off
It's been over 72 hours since my last import attempt.
The system is still non-responsive. No idea if it's doing anything.
My import is still going (I hope; I can't confirm, since my system appears to
be totally locked except for the little blinking console cursor), been well
over a day.
I'm less hopeful now, but will still let it "do its thing" for another couple
of days.
Ah that could be it!
This leaves me hopeful, as it looks like that bug says it'll eventually finish!
> > I have disabled all 'non-important' processes (gdm,
> > ssh, vnc, etc). I am now starting this process
> > locally on the server via the console with about 3.4
> > GB free of RAM.
> >
> > I still have my entries in /etc/system for limiting
> > how much RAM zfs can use.
>
> Going on 10 h
> I have disabled all 'non-important' processes (gdm,
> ssh, vnc, etc). I am now starting this process
> locally on the server via the console with about 3.4
> GB free of RAM.
>
> I still have my entries in /etc/system for limiting
> how much RAM zfs can use.
Going on 10 hours now, still importin
I have disabled all 'non-important' processes (gdm, ssh, vnc, etc). I am now
starting this process locally on the server via the console with about 3.4 GB
free of RAM.
I still have my entries in /etc/system for limiting how much RAM zfs can use.
> zpool import done! Back online.
>
> Total downtime for 4TB pool was about 8 hours, don't
> know how much of this was completing the destroy
> transaction.
Lucky You! :)
My box has gone totally unresponsive again :( I cannot even ping it now and I
can't hear the disks thrashing.
> On Tue, Dec 8, 2009 at 6:36 PM, Jack Kielsmeier
> wrote:
> > Ah, good to know! I'm learning all kinds of stuff here :)
> >
> > The command (zpool import) is still running and I'm
> still seeing disk activity.
> >
> > Any rough idea as to
Ok, I have started the zpool import again. Looking at iostat, it looks like I'm
getting comparable read speeds (possibly a little slower):
extended device statistics
r/s  w/s  kr/s kw/s wait actv wsvc_t asvc_t  %w  %b device
0.0  0.0   0.0  0.0
Yikes,
Posted too soon. I don't want to set my ncsize that high!!! (I was thinking the
value was bytes of memory, but it's a number of entries.)
set ncsize = 25
set zfs:zfs_arc_max = 0x1000
Now THIS should hopefully only make it so the process can take around 1GB of
RAM.
Upon further research, it appears I need to limit both the ncsize and the
arc_max. I think I'll use:
set ncsize = 0x3000
set zfs:zfs_arc_max = 0x1000
That should give me a max of 1GB used between both.
If I should be using different values (or other settings), please let me know :)
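When picking values like these it's easy to slip a digit, so one way to double-check the hex arithmetic (the sizes shown are common round numbers for illustration, not necessarily the exact values above):

```shell
# Print common memory sizes in the hex form /etc/system tunables expect.
printf '1 GiB   = 0x%x\n' $((1024 * 1024 * 1024))
printf '512 MiB = 0x%x\n' $((512 * 1024 * 1024))
```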
Ok,
When searching for how to do that, I see that it requires a modification to
/etc/system.
I'm thinking I'll limit it to 1GB, so the entry (which must be in hex) appears
to be:
set zfs:zfs_arc_max = 0x4000
Then I'll reboot the server and try the import again.
Thanks for the continued a
I just hard-rebooted my server. I'm moving my VM off to my laptop so it can
continue to run :)
Then, if it "freezes" again I'll just let it sit, as I did hear the disks
thrashing.
The server just went "almost" totally unresponsive :(
I still hear the disks thrashing. If I press keys on the keyboard, my login
screen will not show up. I had a VNC session hang and can no longer get back in.
I can try to ssh to the server, I get prompted for my username and password,
but it
Ah, good to know! I'm learning all kinds of stuff here :)
The command (zpool import) is still running and I'm still seeing disk activity.
Any rough idea as to how long this command should last? Looks like each disk is
being read at a rate of 1.5-2 megabytes per second.
Going worst case, assumin
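A rough version of that worst case, assuming each of the four 1.5 TB disks has to be read in full at the slower observed rate of 1.5 MB/s (the disks are read in parallel, so one disk bounds the total time):

```shell
# Worst-case full read of one 1.5 TB disk at 1.5 MB/s, expressed in days.
awk 'BEGIN {
    tb = 1.5; rate_mb_s = 1.5
    seconds = tb * 1000 * 1000 / rate_mb_s   # 1 TB = 1,000,000 MB (decimal)
    printf "%.1f days\n", seconds / 86400
}'
```

which comes out to roughly eleven and a half days, a good argument for patience (or for more RAM).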
It's been about 45 minutes now since I started trying to import the pool.
I see disk activity (see below)
What concerns me is my free memory keeps shrinking as time goes on. I now have
185 MB free out of 4 GB (and 2 GB of swap free).
Hope this doesn't exhaust all my memory and freeze my box.
The pool is roughly 4.5 TB (Raidz1, 4 1.5 TB Disks)
I didn't attempt to destroy the pool, only a dataset within the pool. The
dataset is/was about 1.2TB.
System Specs
Intel Q6600 (2.4 Ghz Quad Core)
4GB RAM
2x 500 GB drives in zfs mirror (rpool)
4x 1.5 TB drives in zfs raidz1 array (vault)
The 1
I waited about 20 minutes or so. I'll try your suggestions tonight.
I didn't look at iostat. I just figured it was hung after waiting that long,
but now that I know it can take a very long time, I will watch it and make sure
it's doing something.
Thanks. I'll post my results either tonight or t
Howdy,
I upgraded to snv_128a from snv_125. I wanted to do some dedup testing :).
I have two zfs pools: rpool and vault. I upgraded my vault zpool version and
turned on dedup on dataset vault/shared_storage. I also turned on gzip
compression on this dataset as well.
Before I turned on dedu