          mirror-3  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
            c9t4d0  ONLINE       0     0     0

errors: No known data errors
On Mar 16, 2012, at 9:21 PM, Jan Hellevik wrote:
> Hours... :-(
>
> Should have used both devices as
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jan Hellevik
> Sent: Friday, March 16, 2012 2:20 PM
> To: zfs-discuss@opensolaris.org
> Subject: [zfs-discuss] Cannot remove slog device
>
> I have a problem with my box. The slog started showing errors, so I decided
> to remove it.
I have a problem with my box. The slog started showing errors, so I decided to
remove it. The remove hangs, and trying to offline it instead gives the same
result. Any ideas?
I have offlined the cache device, which happened immediately, but both
offline and remove of the slog hang and make the box unusable.
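For reference, the attempts were along these lines (pool and device names here
are placeholders, not my actual ones):

  # zpool offline tank c9t5d0    <- hangs
  # zpool remove tank c9t5d0     <- hangs

Both should normally return almost immediately for a log device.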
If I have a
Hi!
You were right. It turns out that the disks were not part of a pool yet. One of
them had previously been used in a pool in another machine, and the other had
been used somewhere else (Ubuntu or OS X), which explains it. After I put
them to use in a pool, 'format' showed what I expected:
On Feb 1, 2012, at 8:07 PM, Bob Friesenhahn wrote:
> On Wed, 1 Feb 2012, Jan Hellevik wrote:
>>>
>>> Are all of the disks the same make and model?
>>
>> They are different makes - I try to make pairs of different brands to
>> minimise risk.
>
Hi!
On Feb 1, 2012, at 7:43 PM, Bob Friesenhahn wrote:
> On Wed, 1 Feb 2012, Jan Hellevik wrote:
>> The disk in question is c6t70d0 - it shows consistently higher %b and asvc_t
>> than the other disks in the pool. The output is from a 'zfs receive' after
>> about
I suspect that something is wrong with one of my disks.
This is the output from iostat:
                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    2.0   18.9   38.1  160.9  0.0  0.1    0.1
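(That listing came from something like the following - extended device
statistics plus error counters, sampled every few seconds:)

  $ iostat -xne 5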
Export did not go very well.
j...@opensolaris:~# zpool export master
internal error: Invalid argument
Abort (core dumped)
So I deleted (renamed) the zpool.cache and rebooted.
After reboot I imported the pool and it seems to have gone well. It is now
scrubbing.
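In concrete terms, the recovery went roughly like this (assuming the default
cache file location):

  # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
  # reboot
  (after reboot)
  # zpool import master
  # zpool scrub master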
Thanks a lot for the help!
j...@
Thanks! I will try later today and report back the result.
I think the 'corruption' is caused by the shuffling and mismatch of the disks.
One 1.5TB disk is now believed to be part of a mirror with a 2TB disk, a 1TB
part of a mirror with a 1.5TB, and so on. It would be better if zfs tried to
find the second disk of each mirror instead of relying on which controller
port each disk is attached to.
Ok, so I did it again... I moved my disks around without doing export first.
I promise - after this I will always export before messing with the disks. :-)
Anyway - the problem. I decided to rearrange the disks due to cable lengths and
case layout. I disconnected the disks and moved them around.
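Next time I will do it properly, something like this (pool name as a
placeholder):

  # zpool export tank
  (power off, move/recable the disks, power on)
  # zpool import tank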
Well, for me it was a cure. Nothing else I tried got the pool back. As far as I
can tell, the way to get it back should be to use symlinks to the fdisk
partitions on my SSD, but that did not work for me. Using -V got the pool back.
What is wrong with that?
If you have a better suggestion as to
I found a thread that mentions an undocumented parameter -V
(http://opensolaris.org/jive/thread.jspa?messageID=444810) and that did the
trick!
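For the archives, the invocation was of this form (-V is undocumented, so it
may change or disappear in later builds - use with care):

  # zpool import -V vault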
The pool is now online and seems to be working well.
Thanks everyone who helped!
Thanks for the reply. The thread on FreeBSD mentions creating symlinks for the
fdisk partitions. So did you earlier in this thread. I tried that but it did
not help - you can see the result in my earlier reply to your previous message
in this thread.
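What I tried was roughly this (the link name is hypothetical - the idea is to
expose the fdisk partition under a device path the pool's labels expect):

  # ln -s /dev/dsk/c10d0p1 /dev/dsk/c10d0s0
  # zpool import vault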
Is this the way to go? Should I try again wi
Hi! Sorry for the late reply - I have been busy at work and this had to wait.
The system has been powered off since my last post.
The computer is new - I built it to use as a file server at home. I have not
seen any strange behaviour (other than this). All parts are brand new (except
for the disks
Ok - this is really strange. I did a test. I wiped my second pool (4 disks like
the other pool), and used them to create a pool similar to the one I have
problems with.
Then I powered off, moved the disks and powered on. Same error message as
before. Moved the disks back to the original controller
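The test was along these lines (disk names illustrative; adding the log the
same way as on the problem pool):

  # zpool create testpool mirror c7t0d0 c7t1d0 mirror c7t2d0 c7t3d0
  # zpool add testpool log /dev/dsk/c10d0p1
  (power off, move the disks to the other controller, power on)
  # zpool import testpool    <- same error as before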
I am making a second backup of my other pool - then I'll use those disks and
recreate the problem pool. The only difference will be the SSD - only have one
of those. I'll use a disk in the same slot, so it will be close.
Backup will be finished in 2 hours' time.
Yes, I can try to do that. I do not have any more of this brand of disk, but I
guess that does not matter. It will have to wait until tomorrow (I have an
appointment in a few minutes, and it is getting late here in Norway), but I
will try first thing tomorrow. I guess a pool on a single drive wi
It did not work. I did not find labels on p1, but I did on p0.
j...@opensolaris:~# zdb -l /dev/dsk/c10d0p1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
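A quick way to check all the fdisk partitions in one go (sketch):

  $ for p in p0 p1 p2 p3 p4; do
      echo "== c10d0$p =="
      zdb -l /dev/dsk/c10d0$p | head -5
    done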
Thanks! Not home right now, but I will try that as soon as I get home.
snv_133 and zpool version 22. At least my rpool is version 22.
I don't think that is the problem (but I am not sure). It seems like the
problem is that the ZIL is missing - it is there, but not recognized.
I used fdisk to create a 4GB partition on an SSD, and then added it to the pool
with the command 'zpool add vault log /dev/dsk/c10d0p1'.
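In other words, roughly this (the fdisk step is interactive, details omitted):

  # fdisk /dev/rdsk/c10d0p0    (create a 4GB primary partition)
  # zpool add vault log /dev/dsk/c10d0p1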
When I try to impo
I cannot import - that is the problem. :-(
I have read the discussions you referred to (and quite a few more), and also
about the logfix program. I also found a discussion where 'zpool import -FX'
solved a similar problem so I tried that but no luck.
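For completeness, that attempt was of this form (-F asks for recovery by
rewinding to an older txg; -X is the undocumented 'extreme rewind' variant):

  # zpool import -FX vault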
Now I have read so many discussions and blog
Thanks for the help, but I cannot get it to work.
j...@opensolaris:~# zpool import
  pool: vault
    id: 8738898173956136656
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http:
...@3,0
Specify disk (enter its number): ^C
j...@opensolaris:~$
On Thu, May 13, 2010 at 7:15 PM, Richard Elling wrote:
> now try "zpool import" to see what it thinks the drives are
> -- richard
>
> On May 13, 2010, at 2:46 AM, Jan Hellevik wrote:
>
> > Short versi
Yes, I turned the system off before I connected the disks to the other
controller. And I turned the system off before moving them back to the original
controller.
Now it seems like the system does not see the pool at all.
The disks are there, and they have not been used so I do not understand w
j...@opensolaris:~$ zpool clear vault
cannot open 'vault': no such pool
j...@opensolaris:~$ pfexec zpool import -D
no pools available to import
Any other ideas?
j...@opensolaris:~$ zpool get version rpool
NAME   PROPERTY  VALUE    SOURCE
rpool  version   22       default
... and this is where I am now.
The zpool contains my digital images and videos and I would be really unhappy
to lose them. What can I do to get back the pool? Is there hope?
Sorry for the long post - tried to assemble