Hi Max,
Unhelpful questions about your CPU aside, what else is your box doing?
Can you open a second or third shell (ssh or whatever) and watch whether
the disks / system are doing any work?
Were it Solaris, I'd run:
iostat -x
prstat -a
vmstat
mpstat (Though as discussed, you ha
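(For completeness, a rough sketch of running those with an interval so the numbers are rates rather than since-boot totals; the 5-second sample period is just an example:)
  iostat -xn 5     # per-device I/O; watch %b (busy) and asvc_t (average service time)
  vmstat 5         # memory and paging; the sr column is the page scan rate
  mpstat 5         # per-CPU stats; csw/icsw are (involuntary) context switches
  prstat -a 5      # top-like view, aggregated per user as well as per process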
2011-06-15 0:16, Frank Van Damme wrote:
2011/6/10 Tim Cook:
While your memory may be sufficient, that CPU is sorely lacking. Is it even
64-bit? There's a reason Intel couldn't give those things away in the early
2000s and AMD was eating their lunch.
A Pentium 4 is 32-bit.
Technically, this
On Tue, Jun 14, 2011 at 3:16 PM, Frank Van Damme
wrote:
> 2011/6/10 Tim Cook:
> > While your memory may be sufficient, that CPU is sorely lacking. Is it even
> > 64-bit? There's a reason Intel couldn't give those things away in the early
> > 2000s and AMD was eating their lunch.
>
> A Pentium 4 is 32-bit.
2011/6/10 Tim Cook :
> While your memory may be sufficient, that CPU is sorely lacking. Is it even
> 64-bit? There's a reason Intel couldn't give those things away in the early
> 2000s and AMD was eating their lunch.
A Pentium 4 is 32-bit.
--
Frank Van Damme
Hi,
I am posting here in a tad of desperation. FYI, I am running FreeNAS 8.0.
Anyhow, I created a raidz1 (tank1) with 4 x 2TB WD EARS HDDs.
All was doing OK until I decided to up the RAM to 4 GB, since that is what was
recommended. As soon as I restarted data migration, ZFS issued messages
indicati
On Jun 10, 2011 11:52 AM, "Jim Klimov" wrote:
>
> 2011-06-10 18:00, Steve Gonczi wrote:
>>
>> Hi Jim,
>>
>> I wonder what OS version you are running?
>>
>> There was a problem similar to what you are describing in earlier versions
>> in the 13x kernel series.
>>
>> Should not be present in the 14x kernels.
2011-06-10 18:00, Steve Gonczi wrote:
Hi Jim,
I wonder what OS version you are running?
There was a problem similar to what you are describing in earlier versions
in the 13x kernel series.
Should not be present in the 14x kernels.
It is OpenIndiana oi_148a, and unlike many other details -
t
2011-06-10 13:51, Jim Klimov wrote:
and the system dies in
swapping hell (scan rates for available pages were seen to go
into the millions, CPU context switches reach 200-300k/sec on a
single dual-core P4) after eating the last stably free 1-2 GB
of RAM within a minute. After this the system responds to
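(A hedged sketch of watching for that kind of memory pressure in real time on an OpenSolaris-ish box; mdb -k needs root:)
  echo ::memstat | mdb -k                   # breakdown of kernel vs anon vs page cache vs free memory
  kstat -p unix:0:system_pages:freemem 5    # free pages, sampled every 5 seconds
  vmstat -p 5                               # paging detail: executable/anonymous/filesystem page-ins and -outs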
The subject says it all, more or less: due to some problems
with a pool (i.e. deferred deletes a month ago, possibly
similar now), the "zpool import" hangs any zfs-related
programs, including "zfs", "zpool", "bootadm", sometimes "df".
After several hours of disk-thrashing, all 8 GB of RAM in the
sy
Hi,
I am having trouble with an 8-disk raidz2 pool. Last week I noticed any
commands that were accessing the pool's filesystems would hang (ls, df
etc...). The logs showed some read errors for two of the drives. I had
to power cycle the machine since I could not shut it down cleanly. After
reb
>
> Good. Run 'zpool scrub' to make sure there are no
> other errors.
>
> regards
> victor
>
Yes, scrubbed successfully with no errors. Thanks again for all of your
generous assistance.
/AJ
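(For anyone following along, a minimal sketch of that step, assuming the pool is named tank:)
  zpool scrub tank         # starts the scrub in the background
  zpool status -v tank     # shows scrub progress and lists any files with errors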
On Jul 4, 2010, at 4:58 AM, Andrew Jones wrote:
> Victor,
>
> The zpool import succeeded on the next attempt following the crash that I
> reported to you by private e-mail!
From the threadlist it looked like the system was pretty low on memory, with stacks
of userland stuff swapped out, hence s
>
> - Original Message -
> > Victor,
> >
> > The zpool import succeeded on the next attempt following the crash
> > that I reported to you by private e-mail!
> >
> > For completeness, this is the final status of the pool:
> >
> >
> > pool: tank
> > state: ONLINE
> > scan: resilvere
- Original Message -
> Victor,
>
> The zpool import succeeded on the next attempt following the crash
> that I reported to you by private e-mail!
>
> For completeness, this is the final status of the pool:
>
>
> pool: tank
> state: ONLINE
> scan: resilvered 1.50K in 165h28m with 0 errors on Sat Jul 3 08:02:30 2010
Victor,
The zpool import succeeded on the next attempt following the crash that I
reported to you by private e-mail!
For completeness, this is the final status of the pool:
pool: tank
state: ONLINE
scan: resilvered 1.50K in 165h28m with 0 errors on Sat Jul 3 08:02:30 2010
config:
> Andrew,
>
> Looks like the zpool is telling you the devices are
> still doing work of
> some kind, or that there are locks still held.
>
Agreed; it appears the CSV1 volume is in a fundamentally inconsistent state
following the aborted zfs destroy attempt. See later in this thread where
Vict
On Jul 1, 2010, at 10:28 AM, Andrew Jones wrote:
> Victor,
>
> I've reproduced the crash and have vmdump.0 and dump device files. How do I
> query the stack on crash for your analysis? What other analysis should I
> provide?
Output of 'echo "::threadlist -v" | mdb 0' can be a good start in th
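(A hedged expansion of that, assuming the dump was saved as dump number 0 under /var/crash/<hostname>:)
  cd /var/crash/`hostname`
  echo "::status" | mdb 0              # panic string and basic dump info
  echo "::msgbuf" | mdb 0              # kernel message buffer leading up to the panic
  echo "::threadlist -v" | mdb 0       # stack of every kernel thread, as asked for above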
Victor,
A little more info on the crash, from the messages file is attached here. I
have also decompressed the dump with savecore to generate unix.0, vmcore.0, and
vmdump.0.
Jun 30 19:39:10 HL-SAN unix: [ID 836849 kern.notice]
Jun 30 19:39:10 HL-SAN ^Mpanic[cpu3]/thread=ff0017909c60:
Jun
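(A sketch of the savecore step mentioned above, in case a box only kept the compressed vmdump.0:)
  cd /var/crash/`hostname`
  savecore -vf vmdump.0       # re-expands the compressed dump into unix.0 and vmcore.0
  mdb 0                       # shorthand for opening unix.0/vmcore.0 for the ::threadlist analysis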
Victor,
I've reproduced the crash and have vmdump.0 and dump device files. How do I
query the stack on crash for your analysis? What other analysis should I
provide?
Thanks
>
> On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
>
> > Victor,
> >
> > The 'zpool import -f -F tank' failed at some point
> last night. The box was completely hung this morning;
> no core dump, no ability to SSH into the box to
> diagnose the problem. I had no choice but to reset,
> as I had
On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
> Victor,
>
> The 'zpool import -f -F tank' failed at some point last night. The box was
> completely hung this morning; no core dump, no ability to SSH into the box to
> diagnose the problem. I had no choice but to reset, as I had no diagnostic
Victor,
The 'zpool import -f -F tank' failed at some point last night. The box was
completely hung this morning; no core dump, no ability to SSH into the box to
diagnose the problem. I had no choice but to reset, as I had no diagnostic
ability. I don't know if there would be anything in the log
Andrew,
Looks like the zpool is telling you the devices are still doing work of
some kind, or that there are locks still held.
From the Intro(2) man page, where the error numbers are listed, number 16
looks to be EBUSY.
16 EBUSY    Device busy
An
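(Two quick ways to double-check that mapping on the box itself; paths assume a standard Solaris install:)
  man -s 2 intro | grep -i busy             # the section 2 intro page lists errno names and numbers
  grep -w EBUSY /usr/include/sys/errno.h    # should show EBUSY defined as 16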
Thanks Victor. I will give it another 24 hrs or so and will let you know how it
goes...
You are right, a large 2TB volume (CSV1) was not in the process of being
deleted, as described above. It is showing error 16 on 'zdb -e'
On Jun 28, 2010, at 9:32 PM, Andrew Jones wrote:
> Update: have given up on the zdb write mode repair effort, at least for now.
> Hoping for any guidance / direction anyone's willing to offer...
>
> Re-running 'zpool import -F -f tank' with some stack trace debug, as
> suggested in similar thr
Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error
16:
Could not open tank/CSV1, error 16
Considering that my attempt to delete the CSV1 volume led to the failure in the
first place, I have to think that if I can either 1) complete the deletion of
this volume or 2) ro
- Original Message -
> Dedup had been turned on in the past for some of the volumes, but I
> had turned it off altogether before entering production due to
> performance issues. GZIP compression was turned on for the volume I
> was trying to delete.
Was there a lot of deduped data still on
Malachi,
Thanks for the reply. There were no snapshots for the CSV1 volume that I
recall... very few snapshots on any volume in the tank.
Dedup had been turned on in the past for some of the volumes, but I had turned
it off altogether before entering production due to performance issues. GZIP
compression was turned on for the volume I was trying to delete.
I had a similar issue on boot after upgrade in the past and it was due to
the large number of snapshots I had... don't know if that could be related
or not...
Malachi de Ælfweald
http://www.google.com/profiles/malachid
On Mon, Jun 28, 2010 at 8:59 AM, Andrew Jones wrote:
> Now at 36 hours sin
- Original Message -
> Now at 36 hours since zdb process start and:
>
>
> PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
> 827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
>
> Idling at 0.2% processor for nearly the past 24 hours... feels very
> stuck. Thoughts on how to
Update: have given up on the zdb write mode repair effort, at least for now.
Hoping for any guidance / direction anyone's willing to offer...
Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested
in similar threads elsewhere. Note that this appears hung at near idle.
f
Now at 36 hours since zdb process start and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck.
Thoughts on how to determine where and
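(A few hedged ways to see where such a process is spending, or not spending, its time, using the PID from the prstat line above:)
  pstack 827            # userland stack of every thread in the zdb process
  truss -p 827          # attach and trace system calls, if it is making any at all
  prstat -mLp 827 5     # microstate accounting per LWP: sleep vs run vs lock wait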
I used the GUI to delete all my snapshots, and after that, "zfs list" worked
without hanging. I did a "zpool scrub" and will wait to see what happens with
that. I DID have automatic snapshots enabled before. They are disabled now. I
don't know how the snapshots work to be honest, so maybe I r
Thank you so much for your reply!
Here are the outputs:
>1. Find PID of the hanging 'zpool import', e.g. with 'ps -ef | grep zpool'
r...@mybox:~# ps -ef|grep zpool
root 915 908 0 03:34:46 pts/3 0:00 grep zpool
root 901 874 1 03:34:09 pts/2 0:00 zpool import drownin
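(From there, one hedged way to see where that import is blocked in the kernel, with PID 901 taken from the ps output above; mdb -k needs root:)
  pstack 901                                                          # userland side of the hang
  echo "0t901::pid2proc | ::walk thread | ::findstack -v" | mdb -k    # kernel stacks of that process's threads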
On 21.08.09 14:52, No Guarantees wrote:
Every time I attempt to import a particular RAID-Z pool, my system hangs.
Specifically, if I open up a gnome terminal and input '$ pfexec zpool import
mypool', the process will never complete and I will return to the prompt. If
I open up another terminal,
Okay, I'm trying to do whatever I can NONDESTRUCTIVELY to fix this. I have
almost 5TB of data that I can't afford to lose (baby pictures and videos,
etc.). Since no one has seen this problem before, maybe someone can tell me
what I need to do to make a backup of what I have now so I can try ot
Every time I attempt to import a particular RAID-Z pool, my system hangs.
Specifically, if I open up a gnome terminal and input '$ pfexec zpool import
mypool', the process will never complete and I will return to the prompt. If I
open up another terminal, I can input a 'zpool status' and see
Thanks to the help of a zfs/kernel developer at Sun who volunteered to help me,
it turns out this was a bug in solaris that needs to be fixed. Bug report here
for the curious:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6859446
Just trying to help since no one has responded
Have you tried importing with an alternate root? We don't know your setup,
such as other pools, types of controllers and/or disks, or how your pool was
constructed.
Try importing something like this:
zpool import -R /tank2 -f pool_numeric_identifier
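(Spelled out a little more, with the pool id and altroot as obvious placeholders:)
  zpool import                                # with no arguments, lists importable pools with their names and numeric ids
  zpool import -R /tank2 -f 1234567890123456  # import by numeric id; -R keeps mounts under the alternate root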
I am having trouble with a RAID-Z zpool "bigtank" of 5x 750GB drives that will
not import.
After having some trouble with this pool, I exported it and attempted a
reimport only to discover this issue:
I can see the pool by running zpool import, and the devices are all online;
however,
running "zp
As the subject says, I can't import a seemingly okay raidz pool and I really
need to as it has some information on it that is newer than the last backup
cycle :-( I'm really in a bind; I hope someone can help...
Background: A drive in a four-slice pool failed (I have to use slices due to a
mot
On Wed, Jun 17, 2009 at 11:49 AM, Brad Reese wrote:
> Yes, you may access the system via ssh. Please contact me at bar001 at uark
> dot
> edu and I will reply with details of how to connect.
...and then please tell us what was wrong! :-)
--
Kind regards, BM
Things, that are stupid at the begin
Hi Victor,
Yes, you may access the system via ssh. Please contact me at bar001 at uark dot
edu and I will reply with details of how to connect.
Thanks,
Brad
On 16.06.09 07:57, Brad Reese wrote:
Hi Victor,
'zdb -e -bcsv -t 2435913 tank' ran for about a week with no output. We had yet
another brownout and then the computer shut down (I have a UPS on the way). A few
days before that I started the following commands, which also had no output:
zdb -e -bcsv
Hi Victor,
'zdb -e -bcsv -t 2435913 tank' ran for about a week with no output. We had yet
another brownout and then the computer shut down (I have a UPS on the way). A few
days before that I started the following commands, which also had no output:
zdb -e -bcsv -t 2435911 tank
zdb -e -bcsv -t 243589
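(For anyone trying the same thing, a rough sketch of the txg-stepping idea; the txg numbers are just the ones from this thread:)
  zdb -e -u tank                   # print the active uberblock, including its txg and timestamp
  zdb -e -bcsv -t 2435913 tank     # traverse and verify the pool as of that txg
  zdb -e -bcsv -t 2435911 tank     # if that never completes or fails, step back to older txgs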
Hi Victor,
Sorry it took a while for me to reply, I was traveling and had limited network
access.
'zdb -e -bcsv -t 2435913 tank' has been running for a few days with no
output...want to try something else?
Here's the output of 'zdb -e -u -t 2435913 tank':
Uberblock
magic = 00
Brad Reese wrote:
Hi Victor,
Here's the output of 'zdb -e -bcsvL tank' (similar to above but with -c).
Thanks,
Brad
Traversing all blocks to verify checksums ...
zdb_blkptr_cb: Got error 50 reading <0, 11, 0, 0> [L0 packed nvlist] 4000L/4000P
DVA[0]=<0:2500014000:4000> DVA[1]=<0:4400014000
Hi Victor,
Here's the output of 'zdb -e -bcsvL tank' (similar to above but with -c).
Thanks,
Brad
Traversing all blocks to verify checksums ...
zdb_blkptr_cb: Got error 50 reading <0, 11, 0, 0> [L0 packed nvlist]
4000L/4000P DVA[0]=<0:2500014000:4000> DVA[1]=<0:4400014000:4000> fletcher4
unc
Here's the output of 'zdb -e -bsvL tank' (without -c) in case it helps. I'll
post with -c if it finishes.
Thanks,
Brad
Traversing all blocks ...
block traversal size 431585053184 != alloc 431585209344 (unreachable 156160)
bp count: 4078410
bp logical: 433202894336
Hi Victor,
zdb -e -bcsvL tank
(let this go for a few hours...no output. I will let it go overnight)
zdb -e -u tank
Uberblock
magic = 00bab10c
version = 4
txg = 2435914
guid_sum = 16655261404755214374
timestamp = 1240517036 UTC = Thu Apr 23 15:03:56
Hi Brad,
Brad Reese wrote:
Hello,
I've run into a problem with zpool import that seems very similar to
the following thread as far as I can tell:
http://opensolaris.org/jive/thread.jspa?threadID=70205&tstart=15
The suggested solution was to use a later version of OpenSolaris
(b99 or later) b
Hello,
I've run into a problem with zpool import that seems very similar to the
following thread as far as I can tell:
http://opensolaris.org/jive/thread.jspa?threadID=70205&tstart=15
The suggested solution was to use a later version of OpenSolaris (b99 or
later) but that did not work. I've t
Hi Victor,
ok, not exactly ...
zdb -e -bb share fails on an assertions as follows:
/root 16 # zdb -e -bb share
Traversing all blocks to verify nothing leaked ...
Assertion failed: space_map_load(&msp->ms_map, &zdb_space_map_ops, 0x0,
&msp->ms_smo, spa->spa_meta_objset) == 0, file ../zdb.c,
Hi Jens,
Jens Hamisch wrote:
> Hi Erik,
> hi Victor,
>
>
> I have exactly the same problem as you described in your thread.
Exactly the same problem would mean that only the config object in the pool is
corrupted. Are you 100% sure that you have the exact same problem?
> Could you please explain to me wh
Hi Erik,
hi Victor,
I have exactly the same problem as you described in your thread.
Could you please explain to me what to do to recover the data
on the pool?
Thanks in advance,
Jens Hamisch
To keep everyone updated - Thanks to Victor we have recovered AND
repaired all of the data that was lost in the incident. Victor may be
able to explain in detail what he did to accomplish this, I only know
it involved loading a patched zfs kernel module.
I would like to shout a big thanks to Victo
Victor,
> Well, since we are talking about ZFS, any threads somewhere in the ZFS module are
> interesting, and there should not be too many of them. Though in this case
> it is clear - it is trying to update the config object and waits for the update
> to sync. There should be another thread with stack simi
Victor,
> Was it totally spontaneous? What was the uptime before the panic? System
> messages on your Solaris 10 machine may have some clues.
I actually don't know exactly what happened (this was during my
vacation). Monitoring graphs show that load was very high on this
particular server that day.
Erik Gulliksson wrote:
> Hi Victor,
>
> Thanks for the prompt reply. Here are the results from your suggestions.
>
>> Panic stack would be useful.
> I'm sorry I don't have this available and I don't want to cause another panic
> :)
It should be saved in system messages on your Solaris 10 machin
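(A hedged sketch of digging the saved panic stack out of the logs on Solaris 10:)
  grep panic /var/adm/messages*    # locate the panic entries and which rotated file they are in
  more /var/adm/messages           # the stack frames follow the panic[cpuN]/thread= line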
Erik,
could you please provide a little bit more details.
Erik Gulliksson wrote:
> Hi,
>
> I have a zfs-pool (unfortunately not set up according to the Best
> Practices Guide) that somehow got corrupted after a spontaneous server
> reboot.
Was it totally spontaneous? What was the uptime before p
Hi Victor,
Thanks for the prompt reply. Here are the results from your suggestions.
> Panic stack would be useful.
I'm sorry I don't have this available and I don't want to cause another panic :)
>
> It is apparently blocked somewhere in kernel. Try to do something like this
> from another windo
Hi Erik,
Erik Gulliksson wrote:
> Hi,
>
> I have a zfs-pool (unfortunately not set up according to the Best
> Practices Guide) that somehow got corrupted after a spontaneous server
> reboot. On Solaris 10u4 the machine simply panics when I try to import
> the pool.
Panic stack would be useful.
>
Hi,
I have a zfs-pool (unfortunately not set up according to the Best
Practices Guide) that somehow got corrupted after a spontaneous server
reboot. On Solaris 10u4 the machine simply panics when I try to import
the pool. So what I've done is taken a dd-image of the whole LUN so
that I have somethi