On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith wrote:
> When I tried out Solaris 11, I just exported the pool prior to the install of
> Solaris 11. I was lucky in that I had mirrored the boot drive, so after I had
> installed Solaris 11 I still had the other disk in the mirror with Solaris 10
>
On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote:
> On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
> > # /home/dws# zpool import
> > pool: tank
> > id: 13155614069147461689
> > state: FAULTED
> > status: The pool metadata is corrupted.
> > action: The pool cannot be imported due to damaged devices or data.
On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
> # /home/dws# zpool import
> pool: tank
> id: 13155614069147461689
> state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://www.sun.com/ms
An update:
I had mirrored my boot drive when I installed Solaris 10U9 originally, so I
went ahead and rebooted the system to this disk instead of my Solaris 11
install. After getting the system up, I imported the zpool, and everything
worked normally.
So I guess there is some sort of incompatibility between the two releases.
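The export/reboot/import sequence David describes can be sketched as commands (pool name and id taken from the thread; run with root privileges):

```shell
# Export the pool cleanly before switching boot environments:
zpool export tank

# After booting the other install, list the pools visible for import:
zpool import

# Import by name, or by the numeric pool id if names are ambiguous:
zpool import tank
# zpool import 13155614069147461689
```

Note that if the newer release ever runs `zpool upgrade` on the pool, the older release will no longer be able to import it, so leave the pool version alone until you are committed.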
On Wed, Jun 22, 2011 at 02:01:12PM -0700, Larry Liu wrote:
> You can try
>
> #fdisk /dev/rdsk/c5t0d0p0
Or just dd /dev/zero over the raw device, eject and start from clean.
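A sketch of the wipe Dan suggests (destructive; device name taken from the thread). ZFS keeps two label copies at each end of the device, so a quick wipe of only the start is not always enough, but it clears the partition table and front labels:

```shell
# DESTRUCTIVE: wipes the partition table and the front ZFS labels on the disk.
# Zero the first 100 MB (covers the MBR/EFI label and ZFS labels L0/L1):
dd if=/dev/zero of=/dev/rdsk/c5t0d0p0 bs=1024k count=100
```

To be thorough you would also zero the last few megabytes of the device, where ZFS labels L2/L3 live.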
--
Dan.
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dave U.Random
>
> > My personal preference, assuming 4 disks, since the OS is mostly reads and
> > only a little bit of writes, is to create a 4-way mirrored 100G partition
> > for the OS, and
You can try
#fdisk /dev/rdsk/c5t0d0p0
to delete all the partitions and exit. You don't have to create an EFI
partition in fdisk(1M) because it only creates the PMBR, not the EFI
label. Then try format -e again.
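Larry's steps as a command sketch (interactive; device name from the thread):

```shell
# 1. Delete every fdisk partition, then exit and update the disk:
fdisk /dev/rdsk/c5t0d0p0   # choose "Delete a partition" for each entry, then exit

# 2. Write a fresh EFI label from format's expert mode:
format -e                  # select the disk, then: label -> EFI
```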
The panic sounds like a bug in USB driver to me.
Larry
On 2011/6/22 13:48, Kitty wrote:
I cannot run format -e to change it since it will crash my system or
the server I am trying to attach the disk to.
It is a 2.5TB drive for sure.
On 06/22/11 13:12, Larry Liu wrote:
4. c5t0d0
/pci@0,0/pci108e,5351@1d,7/storage@5/disk@0,0
Specify disk (enter its number):
pfexec fdisk /dev/rdsk/c5
I was recently running Solaris 10 U9 and I decided that I would like to go
to Solaris 11 Express so I exported my zpool, hoping that I would just do
an import once I had the new system installed with Solaris 11.
4. c5t0d0
/pci@0,0/pci108e,5351@1d,7/storage@5/disk@0,0
Specify disk (enter its number):
pfexec fdisk /dev/rdsk/c5t0d0p0
             Total disk size is 60800 cylinders
             Cylinder size is 80325 (512 byte) blocks

                                               Cylinders
      Partition   Status    Type          Start   End   Length    %
      =========   ======    ============  =====   ===   ======   ===
Kitty,
> I am trying to mount a WD 2.5TB external drive (was IFS:NTFS) to my OSS box.
>
> After connecting it to my Ultra24, I ran "pfexec fdisk /dev/rdsk/c5t0d0p0" and
> changed the Type to EFI. Then, "format -e" or "format" showed the disk was
> configured with 291.10GB only.
The following mess
I was recently running Solaris 10 U9 and I decided that I would like to go
to Solaris 11 Express so I exported my zpool, hoping that I would just do
an import once I had the new system installed with Solaris 11. Now when I
try to do an import I'm getting the following:
# /home/dws# zpool import
Hello!
> I don't see the problem. Install the OS onto a mirrored partition, and
> configure all the remaining storage however you like - raid or mirror or
> watever.
I didn't understand your point of view until I read the next paragraph.
> My personal preference, assuming 4 disks, since the OS
On Sat, Jun 18, 2011 at 09:49:44PM +0200, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> I have a few machines set up with OI 148, and I can't make the LEDs on the
> drives work when something goes bad. The chassis are Supermicro ones, and
> work well, normally. Any idea how to make the drive LEDs work with OI?
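Not an OI-specific answer, but on Supermicro chassis behind LSI HBAs the locate LED can often be driven with the sas2ircu utility (the controller number and the enclosure:slot pair below are placeholders; map them first with `display`):

```shell
# Map disks to enclosure/slot numbers on controller 0:
sas2ircu 0 display

# Blink the locate LED for the disk in enclosure 2, slot 5:
sas2ircu 0 locate 2:5 ON

# Turn it off again once the disk is identified:
sas2ircu 0 locate 2:5 OFF
```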
Cindy,
Thanks for the response. You are saying that by re-sharing the 3 descendant
file systems, the parent will pick up the descendant shares?
Could you tell me how best to re-share the 3 descendant file systems? Do you
mean I should just
zfs set sharesmb=off tank/documents/Jan
zfs set shar
Hi Ed,
This is current Solaris SMB sharing behavior. CR 6582165 is filed to
provide this feature.
You will need to re-share your 3 descendant file systems.
NFS sharing does this automatically.
Thanks,
Cindy
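The re-share can be sketched with the zfs(1M) sharesmb property (file system and share names from Ed's setup; toggling the property off and back on is one way to force a re-share):

```shell
# Toggle SMB sharing off and back on for each descendant file system:
zfs set sharesmb=off tank/documents/Jan
zfs set sharesmb=name=Jan tank/documents/Jan

zfs set sharesmb=off tank/documents/Feb
zfs set sharesmb=name=Feb tank/documents/Feb

zfs set sharesmb=off tank/documents/March
zfs set sharesmb=name=March tank/documents/March
```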
On 06/22/11 09:46, Ed Fang wrote:
Need a little help. I set up my zfs storage last
I am trying to mount a WD 2.5TB external drive (was IFS:NTFS) to my OSS box.
After connecting it to my Ultra24, I ran "pfexec fdisk /dev/rdsk/c5t0d0p0" and
changed the Type to EFI. Then, "format -e" or "format" showed the disk was
configured
with 291.10GB only.
Selecting a disk would crash my OSS box.
Need a little help. I set up my zfs storage last year and everything has been
working great. The initial setup was as follows
tank/documents (not shared explicitly)
tank/documents/Jan - shared as Jan
tank/documents/Feb - shared as Feb
tank/documents/March - shared as March
Anyhow, I now
I'll be doing this over the upcoming weekend so I'll see how it goes.
Thanks for all of the suggestions.
Todd
On Jun 22, 2011, at 10:48 AM, Cindy Swearingen wrote:
> Hi Todd,
>
> Yes, I have seen zpool scrub do some miracles but I think it depends
> on the amount of corruption.
>
> A few
Hi Todd,
Yes, I have seen zpool scrub do some miracles but I think it depends
on the amount of corruption.
A few suggestions are:
1. Identify and resolve the corruption problems on the underlying
hardware. No point in trying to clear the pool errors if this
problem continues.
The fmdump command can help identify the underlying hardware problems.
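The diagnose-then-repair cycle Cindy outlines, as commands (pool name from the thread):

```shell
# Inspect FMA error telemetry for underlying device problems:
fmdump -eV | less

# Once the hardware issue is resolved, clear the pool's error counters
# and scrub to re-verify every block's checksum:
zpool clear tank
zpool scrub tank

# Watch scrub progress and any files with unrecoverable errors:
zpool status -v tank
```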