The Linux host can still see the device; the log I sent earlier is from
that host.

I tried fdisk -l and it listed the iSCSI disks.
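
Since fdisk only reads the partition table, I can also dump the first few
sectors of the device to see whether any data is still there. A minimal
sketch, assuming the volume is still attached as /dev/sdb as in the log
below:

# /dev/sdb is taken from the earlier kernel log; adjust if it differs
dd if=/dev/sdb bs=512 count=8 2>/dev/null | od -c | head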

Lars-Gunnar Persson

On 2 March 2009, at 17:02, "O'Shea, Damien" <daos...@revenue.ie> wrote:


I could be wrong, but this looks like an issue on the Linux side.

A zpool status is returning a healthy pool.

What does format/fdisk show you on the Linux side? Can it still see the
iSCSI device that is being shared from the Solaris server?
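
On the Linux side, assuming the open-iscsi initiator, something along
these lines should confirm the session and which /dev/sdX it maps to (the
device name below is only a guess):

# print session details, including the attached SCSI device
iscsiadm -m session -P 3
# then check the partition table on whatever device the session reports
fdisk -l /dev/sdb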



Regards,
Damien O'Shea
Strategy & Unix Systems
Revenue Backup Site
VPN: 35603
daos...@revenue.ie <mailto:daos...@revenue.ie>


-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org]on Behalf Of Blake
Sent: 02 March 2009 15:57
To: Lars-Gunnar Persson
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS volume corrupted?


It looks like you only have one physical device in this pool.  Is that
correct?



On Mon, Mar 2, 2009 at 9:01 AM, Lars-Gunnar Persson
<lars-gunnar.pers...@nersc.no> wrote:
Hello to everyone on this mailing list (this is my first post)!

We have a Sun Fire X4100 M2 server running Solaris 10 u6, and after some
system work this weekend we have a problem with only one ZFS volume.

We have a pool called /Data with many file systems and two volumes. The
status of my zpool is:

-bash-3.00$ zpool status
 pool: Data
 state: ONLINE
 scrub: scrub in progress, 5.99% done, 13h38m to go
config:

       NAME                     STATE     READ WRITE CKSUM
       Data                     ONLINE       0     0     0
         c4t5000402001FC442Cd0  ONLINE       0     0     0

errors: No known data errors


Yesterday I started the scrub because I read that it is a smart thing to
do after a zpool export and zpool import. I did the export because I
wanted to move the pool to another OS installation, but I changed my mind
and imported it again on the same OS I had exported it from.

After checking as much information as I could find on the web, I was
advised to run a zpool scrub after an import.
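
For reference, the sequence was roughly this (pool name Data as above):

zpool export Data
zpool import Data
zpool scrub Data
zpool status Data    # shows the scrub progress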

Well, the problem now is that one volume in this zpool is not working.
I've shared it via iSCSI to a Linux host (all of this was working on
Friday). The Linux host reports that it can't find a partition table.
Here is the log from the Linux host:

Mar  2 11:09:36 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar  2 11:09:36 eva kernel: SCSI device sdb: drive cache: write through
Mar  2 11:09:37 eva kernel: SCSI device sdb: 524288000 512-byte hdwr sectors (268435 MB)
Mar  2 11:09:37 eva kernel: SCSI device sdb: drive cache: write through
Mar  2 11:09:37 eva kernel:  sdb: unknown partition table
Mar  2 11:09:37 eva kernel: Attached scsi disk sdb at scsi28, channel 0, id 0, lun 0
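
For completeness, the iSCSI target on the Solaris side could be checked
with something like this (a sketch, assuming the shareiscsi/iscsitgt
target that Solaris 10 uses, not COMSTAR):

# is the volume still flagged for iSCSI sharing?
zfs get shareiscsi Data/subversion1
# is the target still advertised by the target daemon?
iscsitadm list target -v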


So I checked the status on my Solaris server and I found this information
a bit strange:

-bash-3.00$ zfs list Data/subversion1
NAME               USED  AVAIL  REFER  MOUNTPOINT
Data/subversion1  22.5K   519G  22.5K  -

How can there be 519 GB available on a volume that is 250 GB in size?
Here are more details:

-bash-3.00$ zfs get all Data/subversion1
NAME              PROPERTY       VALUE                  SOURCE
Data/subversion1  type           volume                 -
Data/subversion1  creation       Wed Apr  2  9:06 2008  -
Data/subversion1  used           22.5K                  -
Data/subversion1  available      519G                   -
Data/subversion1  referenced     22.5K                  -
Data/subversion1  compressratio  1.00x                  -
Data/subversion1  reservation    250G                   local
Data/subversion1  volsize        250G                   -
Data/subversion1  volblocksize   8K                     -
Data/subversion1  checksum       on                     default
Data/subversion1  compression    off                    default
Data/subversion1  readonly       off                    default
Data/subversion1  shareiscsi     off                    local


Will this be fixed after the scrub finishes tomorrow, or is this volume
lost forever?

Hoping for some quick answers as the data is quite important for us.

Regards,

Lars-Gunnar Persson



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
