On Tue, 6 Sep 2011, Tyler Benster wrote:
It seems quite likely that all of the data is intact, and that something
different is preventing me from accessing the pool. What can I do to
recover the pool? I have downloaded the Solaris 11 express livecd if
that would be of any use.
Try running zd
Hi Doug,
The "vms" pool was created in a non-redundant way, so there is no way to
get the data off of it unless you can put back the original c0t3d0 disk.
If you can still plug in the disk, you can always do a zpool replace on it
afterwards.
If not, you'll need to restore from backup, pref
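For illustration only, a rough sketch of that recovery path, assuming the original c0t3d0 can be physically reattached; the replacement disk name c0t5d0 is hypothetical:
  zpool import vms                  # should succeed once the original disk is back
  zpool status vms                  # confirm the pool is accessible again
  zpool replace vms c0t3d0 c0t5d0   # later, migrate the data onto a new disk
Since the pool is non-redundant, the replace can only copy data while the old disk is still readable.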
The fix for 6991788 would probably let the 40MB-smaller drive work, but it would
depend on the asize of the pool.
On Fri, 4 Mar 2011, Cindy Swearingen wrote:
Hi Robert,
We integrated some fixes that allowed you to replace disks of equivalent
sizes, but 40 MB is probably beyond that window.
Yes, yo
On Mon, 6 Dec 2010, Curtis Schiewek wrote:
Hi Mark,
I've tried running "zpool attach media ad24 ad12" (ad12 being the new
disk) and I get no response. I tried leaving the command run for an
extended period of time and nothing happens.
What version of Solaris are you running?
y clean up ad24 & ad18
for you.
On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante wrote:
On Fri, 3 Dec 2010, Curtis Schiewek wrote:
NAME        STATE     READ WRITE CKSUM
media       DEGRADED     0     0     0
  raidz1    ONLINE       0     0     0
    ad8     ONLINE       0     0     0
    ad10    ONLINE       0     0     0
On Wed, 10 Nov 2010, Darren J Moffat wrote:
On 10/11/2010 11:18, sridhar surampudi wrote:
I was wondering how zpool split works or how it is implemented.
Or are you really asking about the implementation details? If you want
to know how it is implemented then you need to read the source code.
Also
On Thu, 30 Sep 2010, Darren J Moffat wrote:
* It can be applied recursively down a ZFS hierarchy
True.
* It will unshare the filesystems first
Actually, because we use the zfs command to do the unmount, we end up
doing the unshare on the filesystem first.
See the opensolaris code for de
On Thu, 30 Sep 2010, Linder, Doug wrote:
Michael Schuster [mailto:michael.schus...@oracle.com] wrote:
Mark, I think that wasn't the question, rather, "what's the difference
between 'zfs u[n]mount' and '/usr/bin/umount'?"
Yes, that was the question. Sorry I wasn't more clear.
Oops, ok. Bu
On Thu, 30 Sep 2010, Linder, Doug wrote:
Is there any technical difference between using "zfs unmount" to unmount
a ZFS filesystem versus the standard unix "umount" command? I always
use "zfs unmount" but some of my colleagues still just use umount. Is
there any reason to use one over the ot
On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
Yes, it is mounted
r...@disk-00:/volumes/store# zfs get mounted sas/mail-cts
NAME          PROPERTY  VALUE  SOURCE
sas/mail-cts  mounted   yes    -
OK - so the next question would be where the data is. I assume when you
say you "cannot access" t
On Mon, 20 Sep 2010, Valerio Piancastelli wrote:
After a crash i cannot access one of my datasets anymore.
ls -v cts
brwxrwxrwx+ 2 root root 0, 0 ott 18 2009 cts
zfs list sas/mail-cts
NAME USED AVAIL REFER MOUNTPOINT
sas/mail-cts 149G 250G 149G /sas/mail-cts
a
Did you run installgrub before rebooting?
On Tue, 7 Sep 2010, Piotr Jasiukajtis wrote:
Hi,
After upgrade from snv_138 to snv_142 or snv_145 I'm unable to boot the system.
Here is what I get.
Any idea why it's not able to import rpool?
I saw this issue also on older builds on a different mac
On Thu, 2 Sep 2010, Dominik Hoffmann wrote:
I think I just destroyed the information on the old raidz members by doing
zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2
It should have warned you that two of the disks were already formatted
with a zfs pool. Did it not do
What does 'zpool import' show? If that's empty, what about 'zpool import
-d /dev'?
On Wed, 1 Sep 2010, Benjamin Brumaire wrote:
your point has only a rhetorical meaning.
I'm not sure what you mean by that. I was asking specifically about your
situation. You want to run labelfix on /dev/rdsk/c0d1s4 - what happened
to that slice that requires a labelfix? Is there something
On Mon, 30 Aug 2010, Benjamin Brumaire wrote:
As this feature didn't make it into zfs it would be nice to have it
again.
Better to spend time fixing the problem that requires a 'labelfix' as a
workaround, surely. What's causing the need to fix vdev labels?
On Mon, 30 Aug 2010, Jeff Bacon wrote:
All of this would be ok... except THOSE ARE THE ONLY DEVICES THAT WERE
PART OF THE POOL. How can it be missing a device that didn't exist?
The device(s) in question are probably the logs you refer to here:
I can't obviously use b134 to import the pool
On Fri, 27 Aug 2010, Rainer Orth wrote:
zpool status thinks rpool is on c1t0d0s3, while format (and the kernel)
correctly believe it's c11t0d0(s3) instead.
Any suggestions?
Try removing the symlinks or using 'devfsadm -C' as suggested here:
https://defect.opensolaris.org/bz/show_bug.cgi?id=14
On Mon, 16 Aug 2010, Matthias Appel wrote:
Can anybody tell me how to get rid of c1t3d0 and heal my zpool?
Can you do a "zpool detach performance c1t3d0/o"? If that works, then
"zpool replace performance c1t3d0 c1t0d0" should replace the bad disk with
the new hot spare. Once the resilver c
I keep the pool version information up-to-date here:
http://blogs.sun.com/mmusante/entry/a_zfs_taxonomy
On Sun, 15 Aug 2010, Haudy Kazemi wrote:
Hello,
This is a consolidated list of ZFS pool and filesystem versions, along with
the builds and systems they are found in. It is based on multip
On Wed, 11 Aug 2010, seth keith wrote:
NAME         STATE     READ WRITE CKSUM
brick        DEGRADED     0     0     0
  raidz1     DEGRADED     0     0     0
    c13d0    ONLINE       0     0     0
    c4d0
On Wed, 11 Aug 2010, Seth Keith wrote:
When I do a zdb -l /dev/rdsk/ I get the same output for all my
drives in the pool, but I don't think it looks right:
# zdb -l /dev/rdsk/c4d0
What about /dev/rdsk/c4d0s0?
On Tue, 10 Aug 2010, seth keith wrote:
# zpool status
pool: brick
state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid. There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool fro
On Tue, 10 Aug 2010, seth keith wrote:
first off I don't have the exact failure messages here, and I did not take good
notes of the failures, so I will do the best I can. Please try and give me
advice anyway.
I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all
wit
You can use 'zpool history -l syspool' to show the username of the person
who created the dataset. The history is in a ring buffer, so if too many
pool operations have happened since the dataset was created, the
information is lost.
On Wed, 4 Aug 2010, Peter Taps wrote:
Folks,
In my app
On Wed, 28 Jul 2010, Gary Gendel wrote:
Right now I have a machine with a mirrored boot setup. The SAS drives are 43GB
and the root pool is getting full.
I do a backup of the pool nightly, so I feel confident that I don't need to
mirror the drive and can break the mirror and expand the pool
On Thu, 15 Jul 2010, Tim Castle wrote:
j...@opensolaris:~# zpool import -d /dev
...shows nothing after 20 minutes
OK, then one other thing to try is to create a new directory, e.g. /mydev,
and create in it symbolic links to only those drives that are part of your
pool.
Based on your label
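A hedged sketch of that approach; the device names below are placeholders, only the pool name 'files' is taken from the report:
  mkdir /mydev
  ln -s /dev/dsk/c8t1d0p0 /mydev/    # one symlink per drive that belongs to the pool
  ln -s /dev/dsk/c8t2d0p0 /mydev/
  zpool import -d /mydev files
Limiting the search directory this way keeps the import from scanning every node under /dev.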
What does 'zpool import -d /dev' show?
On Wed, 14 Jul 2010, Tim Castle wrote:
My raidz1 (ZFSv6) had a power failure, and a disk failure. Now:
j...@opensolaris:~# zpool import
pool: files
id: 3459234681059189202
state: UNAVAIL
status: One or mor
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
what I'm saying is that there are several posts in here where the only
solution is to boot onto a live cd and then do an import, due to
metadata corruption. This should be doable from the installed system
Ah, I understand now.
A couple of thing
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote:
Hi all
With several messages in here about troublesome zpools, would there be a
good reason to be able to fsck a pool? As in, check the whole thing
instead of having to boot into live CDs and whatnot?
You can do this with "zpool scrub". It vi
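A quick illustration, with 'tank' standing in for the pool name:
  zpool scrub tank        # walks every allocated block and verifies its checksum
  zpool status -v tank    # shows scrub progress and any errors it has found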
On Fri, 2 Jul 2010, Julie LaMothe wrote:
Cindy - this discusses how to rename the rpool temporarily. Is there a
way to do it permanently and will it break anything? I have to rename a
root pool because of a typo. This is on a Solaris SPARC environment.
Please help!
The only difference
On Mon, 24 May 2010, h wrote:
but... wait... that can't be.
I disconnected the 1TB drives and plugged in the 2TB's before doing the
replace command. No information could be written to the 1TBs at all since
they were physically offline.
Do the labels still exist? What does 'zdb -l /dev/rds
On Mon, 24 May 2010, h wrote:
I had 6 disks in a raidz1 pool that I replaced from 1TB drives to 2TB
drives. I have installed the older 1TB drives in another system and
would like to import the old pool to access some files I accidentally
deleted from the new pool.
Did you use the 'zpool
On Thu, 20 May 2010, Edward Ned Harvey wrote:
Also, since you've got "s0" on there, it means you've got some
partitions on that drive. You could manually wipe all that out via
format, but the above is pretty brainless and reliable.
The "s0" on the old disk is a bug in the way we're formattin
On Wed, 19 May 2010, John Andrunas wrote:
ff001f45e830 unix:die+dd ()
ff001f45e940 unix:trap+177b ()
ff001f45e950 unix:cmntrap+e6 ()
ff001f45ea50 zfs:ddt_phys_decref+c ()
ff001f45ea80 zfs:zio_ddt_free+55 ()
ff001f45eab0 zfs:zio_execute+8d ()
ff001f45eb50 genunix:taskq
Do you have a coredump? Or a stack trace of the panic?
On Wed, 19 May 2010, John Andrunas wrote:
Running ZFS on a Nexenta box, I had a mirror get broken and apparently
the metadata is corrupt now. If I try to mount vol2 it works, but if
I try 'mount -a' or mount vol2/vm2, it instantly kerne
On Sun, 18 Apr 2010, Michelle Bhaal wrote:
zpool lists my pool as having 2 disks which have identical names. One
is offline, the other is online. How do I tell zpool to replace the
offline one?
If you're lucky, the device will be marked as not being present, and then
you can use the GUID.
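Roughly, that looks like the following; the GUID and replacement device below are invented for illustration:
  zpool status tank                              # the missing disk shows up as a bare numeric GUID
  zpool replace tank 9203917349583247610 c1t1d0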
On Wed, 7 Apr 2010, Neil Perrin wrote:
There have previously been suggestions to read slogs periodically. I
don't know if there's a CR raised for this though.
Roch wrote up CR 6938883 "Need to exercise read from slog dynamically"
Regards,
markm
> It would be nice for Oracle/Sun to produce a separate
> script which resets system/devices back to an install-like
> beginning, so if you move an OS disk with its current
> password file and software from one system to
> another, it can rebuild the device tree on the
> new system.
You mean /usr/sbi
On Wed, 31 Mar 2010, Damon Atkins wrote:
Why do we still need the "/etc/zfs/zpool.cache" file?
The cache file contains a list of pools to import, not a list of pools
that exist. If you do a "zpool export foo" and then reboot, we don't want
foo to be imported after boot completes.
Unfortunat
On Mon, 29 Mar 2010, Jim wrote:
Thanks for the suggestion, but I have tried detaching and it refuses,
reporting "no valid replicas". Capture below.
Could you run: zdb -ddd tank | awk '/^Dirty/ {output=1} /^Dataset/ {output=0}
{if (output) {print}}'
This will print the dirty time log of the pool
OK, I see what the problem is: the /etc/zfs/zpool.cache file.
When the pool was split, the zpool.cache file was also split - and the split
happens prior to the config file being updated. So, after booting off the
split side of the mirror, zfs attempts to mount rpool based on the information
in
On Mon, 29 Mar 2010, Victor Latushkin wrote:
On Mar 29, 2010, at 1:57 AM, Jim wrote:
Yes - but it does nothing. The drive remains FAULTED.
Try to detach one of the failed devices:
zpool detach tank 4407623704004485413
As Victor says, the detach should work. This is a known issue and I'
On Sat, 27 Mar 2010, Frank Middleton wrote:
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
Rebooted from c0t1d0s0, only rpool
On Thu, 11 Mar 2010, Lars-Gunnar Persson wrote:
> Is it possible to convert a rz2 array to rz1 array? I have a pool with
> two rz2 arrays. I would like to convert them to rz1. Would that be
> possible?
No, you'll have to create a second pool with raidz1 and do a "send | recv"
operation to copy th
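A minimal sketch of that copy, with hypothetical pool and disk names:
  zpool create newpool raidz1 c2t0d0 c2t1d0 c2t2d0
  zfs snapshot -r oldpool@migrate
  zfs send -R oldpool@migrate | zfs recv -F -d newpool   # -R preserves the dataset hierarchy and properties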
On Mon, 8 Mar 2010, Tim Cook wrote:
Is there a way to manually trigger a hot spare to kick in?
Yes - just use 'zpool replace fserv 12589257915302950264 c3t6d0'. That's
all the fma service does anyway.
If you ever get your drive to come back online, the fma service should
recognize that an
On Sat, 6 Mar 2010, Richard Elling wrote:
On Mar 6, 2010, at 5:38 PM, tomwaters wrote:
My thought is this: I remove the 3rd mirror disk and offsite it as a backup.
To do this either:
1. upgrade to a later version where the "zpool split" command is
available
2. zfs send/receiv
It looks like you're running into a DTL issue. ZFS believes that ad16p2 has
some data on it that hasn't been copied off yet, and it's not considering the
fact that it's part of a raidz group along with ad4p2.
There is a CR on this,
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 bu
On Wed, 24 Feb 2010, Gregory Gee wrote:
files
files/home
files/mail
files/VM
I want to move the files/VM to another zpool, but keep the same mount
point. What would be the right steps to create the new zpool, move the
data and mount in the same spot?
Create the new pool, take a snapshot of
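One common way to finish that procedure, sketched here with an assumed new pool name and disk, and assuming files/VM uses the default /files/VM mount point:
  zpool create tank2 c5t0d0
  zfs snapshot files/VM@move
  zfs send files/VM@move | zfs recv tank2/VM
  zfs destroy -r files/VM                 # only after verifying the copy
  zfs set mountpoint=/files/VM tank2/VM   # reuse the original mount point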
On Tue, 23 Feb 2010, patrik wrote:
I want to import my zpool's from FreeBSD 8.0 in OpenSolaris 2009.06.
secure          UNAVAIL  insufficient replicas
  raidz1        UNAVAIL  insufficient replicas
    c8t1d0p0    ONLINE
    c8t2d0s2    ONLINE
    c8t3d0s8    UNAVAIL  c
On Mon, 22 Feb 2010, tomwaters wrote:
I have just installed OpenSolaris 2009.06 on my server using a 250G
laptop drive (using the entire drive).
So, 2009.06 was based on 111b. There was a fix that went into build 117
that allows you to mirror to smaller disks if the metaslabs in zfs are
sti
On Fri, 12 Feb 2010, Daniel Carosone wrote:
You can use zfs promote to change around which dataset owns the base
snapshot, and which is the dependent clone with a parent, so you can
delete the other - but if you want both datasets you will need to keep the
snapshot they share.
Right. The othe
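As a small illustration of that promote-then-destroy dance, with hypothetical dataset names:
  zfs promote tank/clone       # tank/clone now owns the shared base snapshot
  zfs destroy tank/original    # the former origin is now the dependent clone and can be
                               # removed, provided nothing else depends on it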
On Thu, 11 Feb 2010, Cindy Swearingen wrote:
On 02/11/10 04:01, Marc Friesacher wrote:
fr...@vault:~# zpool import
pool: zedpool
id: 10232199590840258590
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
zedpool     ONLINE
On Thu, 11 Feb 2010, Tony MacDoodle wrote:
I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
mirror and plunk it in another system?
Intact? Or as a new disk in the other system?
If you want to break the mirror, and create a new pool on the disk, you
can just do 'zpool d
On Fri, 5 Feb 2010, Alexander M. Stetsenko wrote:
NAME         STATE     READ WRITE CKSUM
mypool       DEGRADED     0     0     0
  mirror     DEGRADED     0     0     0
    c1t4d0   DEGRADED     0     0    28  too many errors
    c1t5d0   ONLINE       0     0     0
I
On Thu, 4 Feb 2010, Karl Pielorz wrote:
The reason for testing this is because of a weird RAID setup I have
where if 'ad2' fails, and gets replaced - the RAID controller is going
to mirror 'ad1' over to 'ad2' - and cannot be stopped.
Does the raid controller not support a JBOD mode?
Regards
On Thu, 28 Jan 2010, TheJay wrote:
Attached the zpool history.
Did the resilver ever complete on the first c6t1d0? I see a second
replace here:
2010-01-27.20:41:15 zpool replace rzpool2 c6t1d0 c6t16d0
2010-01-28.07:57:27 zpool scrub rzpool2
2010-01-28.20:39:42 zpool clear rzpool2 c6t1d0
2
On Wed, 27 Jan 2010, TheJay wrote:
Guys,
Need your help. My DEV131 OSOL build with my 21TB disk system somehow got
really screwed:
This is what my zpool status looks like:
NAME STATE READ WRITE CKSUM
rzpool2 DEGRADED 0 0 0
raidz2
On Fri, 22 Jan 2010, Tony MacDoodle wrote:
Can I move the below mounts under / ?
rpool/export       /export
rpool/export/home  /export/home
Sure. Just copy the data out of the directory, do a zfs destroy on the
two filesystems, and copy it back.
For example:
# mkdir /save
# cp -r /expo
On Thu, 14 Jan 2010, Josh Morris wrote:
Hello List,
I am porting a block device driver (for a PCIe NAND flash disk) from
OpenSolaris to Solaris 10. On Solaris 10 (10/09) I'm having an issue
creating a zpool with the disk. Apparently I have an 'invalid argument'
somewhere:
% pfexec z
On Fri, 8 Jan 2010, Rob Logan wrote:
This one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descrip
Did you set autoexpand on? Conversely, did you try doing a 'zpool online
bigpool ' for each disk after the replace completed?
On Mon, 7 Dec 2009, Alexandru Pirvulescu wrote:
Hi,
I've read before regarding zpool size increase by replacing the vdevs.
The initial pool was a raidz2 with 4 640
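For reference, a sketch of both approaches mentioned above; the pool and device names are illustrative:
  zpool set autoexpand=on bigpool    # grow automatically once all vdevs in a group are larger
  # or, on builds that have the flag, expand each replaced disk by hand:
  zpool online -e bigpool c0t0d0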
This may be a dup of 6881631.
Regards,
markm
On 1 Dec 2009, at 15:14, Cindy Swearingen
wrote:
I was able to reproduce this problem on the latest Nevada build:
# zpool create tank raidz c1t2d0 c1t3d0 c1t4d0
# zpool add -n tank raidz c1t5d0 c1t6d0 c1t7d0
would update 'tank' to the follow
On 10 Nov, 2009, at 21.02, Ron Mexico wrote:
This didn't occur on a production server, but I thought I'd post
this anyway because it might be interesting.
This is CR 6895446 and a fix for it should be going into build 129.
Regards,
markm
On Mon, 19 Oct 2009, Espen Martinsen wrote:
Let's say I've chosen to live with a zpool without redundancy (SAN
disks, which actually have RAID5 in the disk cabinet).
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works best
On Thu, 24 Sep 2009, Paul Archer wrote:
I may have missed something in the docs, but if I have a file in one FS,
and want to move it to another FS (assuming both filesystems are on the
same ZFS pool), is there a way to do it outside of the standard
mv/cp/rsync commands?
Not yet. CR 6483179
On 23 Sep, 2009, at 21.54, Ray Clark wrote:
My understanding is that if I "zfs set checksum=" to
change the algorithm that this will change the checksum algorithm
for all FUTURE data blocks written, but does not in any way change
the checksum for previously written data blocks.
I need to
On Mon, 14 Sep 2009, Marty Scholes wrote:
I really want to move back to 2009.06 and keep all of my files /
snapshots. Is there a way somehow to zfs send an older stream that
2009.06 will read so that I can import that into 2009.06?
Can I even create an older pool/dataset using 122? Ideall
On Sat, 12 Sep 2009, Jeremy Kister wrote:
scrub: resilver in progress, 0.12% done, 108h42m to go
[...]
raidz1       DEGRADED     0     0     0
  c3t8d0     ONLINE       0     0     0
  c5t8d0     ONLINE       0     0     0
  c3t9d0     ONLINE       0     0     0
The device is listed with s0; did you try using c5t9d0s0 as the name?
On 12 Sep, 2009, at 17.44, Jeremy Kister wrote:
[sorry for the cross post to solarisx86]
One of my disks died that i had in a raidz configuration on a Sun
V40z with Solaris 10u5. I took the bad disk out, replaced the dis
On Fri, 28 Aug 2009, Dave wrote:
Thanks, Trevor. I understand the RFE/CR distinction. What I don't
understand is how this is not a bug that should be fixed in all solaris
versions.
Just to get the terminology right: "CR" means Change Request, and can
refer to Defects ("bugs") or RFE's. Defe
Hi Stephen,
Have you got many zvols (or snapshots of zvols) in your pool? You could
be running into CR 6761786 and/or 6693210.
On Thu, 27 Aug 2009, Stephen Green wrote:
I'm having trouble booting with one of my zpools. It looks like this:
pool: tank
state: ONLINE
scrub: none requested
c
On Wed, 29 Jul 2009, David Magda wrote:
Which makes me wonder: is there a programmatic way to determine if a
path is on ZFS?
Yes, if it's local. Just use df -n $path and it'll spit out the filesystem
type. If it's mounted over NFS, it'll just say something like nfs or
autofs, though.
Reg
On Wed, 29 Jul 2009, Glen Gunselman wrote:
Where would I see CR 6308817? My usual search tools can't find it.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6308817
Regards,
markm
On Tue, 28 Jul 2009, Glen Gunselman wrote:
# zpool list
NAME     SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
zpool1  40.8T   176K  40.8T   0%  ONLINE  -
# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
zpool1  364K  32.1T  28.8K  /zpool1
This is normal, and admitted
On Wed, 8 Jul 2009, Moore, Joe wrote:
The copies code is nice because it tries to put each copy "far away"
from the others. This does have a significant performance impact when
on a single spindle, however, because each logical write will be written
"here" and then a disk seek to write it to
On Tue, 30 Jun 2009, John Hoogerdijk wrote:
I've set up a RAIDZ2 pool with 5 SATA drives and added a 32GB SSD log
device. To see how well it works, I ran bonnie++, but never saw any
I/Os on the log device (using iostat -nxce). Pool status is good - no
issues or errors. Any ideas?
Try usin
On Mon, 29 Jun 2009, Carsten Aulbert wrote:
s11 console login: root
Password:
Last login: Mon Jun 29 10:37:47 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
s11:~# zpool export atlashome
s11:~# ls -l /atlashome
/atlashome: No such file or directory
s11:~# zpool import at
On Mon, 29 Jun 2009, Carsten Aulbert wrote:
Is there any way to force zpool import to re-order that? I could delete
all stuff under BACKUP, however given the size I don't really want to.
Do a zpool export first, and then check to see what's in /atlashome. My
bet is that the BACKUP directory
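Roughly, the check suggested above, using the pool name from the transcript:
  zpool export atlashome
  ls -la /atlashome          # anything left over here will hide or confuse the pool's datasets
  # move or delete the leftovers, then bring the pool back:
  zpool import atlashome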
On Mon, 22 Jun 2009, Ross wrote:
All seemed well, I replaced the faulty drive, imported the pool again, and
kicked off the repair with:
# zpool replace zfspool c1t1d0
What build are you running? Between builds 105 and 113 inclusive there's
a bug in the resilver code which causes it to miss
On Mon, 15 Jun 2009, Todd Stansell wrote:
Any thoughts on how this can be done? I do have other systems I can use
to test this procedure; ideally it would not introduce any downtime,
though that can be arranged if necessary.
I think the only work-around is to re-promote 'data', destroy the
Hi Jim,
See if 'zpool history' gives you what you're looking for.
Regards,
markm
On Fri, 29 May 2009, Rich Teer wrote:
zpool attach dpool c1t0d0 c2t0d0
zpool attach dpool c1t1d0 c2t1d0
zpool attach dpool c1t2d0 c2t2d0
These should all be "zpool add dpool mirror {disk1} {disk2}", but yes. I
recommend trying this out using files instead of disks beforehand so you
get a fe
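A throwaway sandbox for that kind of rehearsal might look like this; all paths and sizes are arbitrary:
  mkfile 100m /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
  zpool create testpool mirror /var/tmp/d0 /var/tmp/d1
  zpool add testpool mirror /var/tmp/d2 /var/tmp/d3
  zpool status testpool      # shows the two two-way mirrors striped together
  zpool destroy testpool     # clean up when done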
On Thu, 21 May 2009, Nandini Mocherla wrote:
Then I booted into failsafe mode of 101a and then tried to run the
following command as given in luactivate output.
Yeah, that's a known bug in the luactivate output. CR 6722845
# mount -F zfs /dev/dsk/c1t2d0s0 /mnt
cannot open '/dev/dsk/c1t2d0s0
On Thu, 21 May 2009, Ian Collins wrote:
I'm trying to use zfs send/receive to replicate the root pool of a system and
I can't think of a way to stop the received copy attempting to mount the
filesystem over the root of the destination pool.
If you're using build 107 or later, there's a hidden
On Thu, 7 May 2009, Mike Gerdts wrote:
Perhaps you have changed the configuration of the array since the last
reconfiguration boot. If you run "devfsadm" then run format, does it
see more disks?
Another thing to check is to see if the controller has a "jbod" mode as
opposed to passthrough.
On Fri, 17 Apr 2009, Mark J Musante wrote:
The dependency is based on the names.
I should clarify what I mean by that. There are actually two dependencies
here: one is based on dataset names, and one is based on snapshots and
clones.
If there are two datasets, pool/foo and pool/foo/bar
The dependency is based on the names. Try renaming
testpool/testfs2/clone1 out of the hierarchy:
zfs rename testpool/testfs2/clone1 testpool/foo
Then it should be possible to destroy testpool/testfs2.
On Fri, 17 Apr 2009, Grant Lowe wrote:
I was wondering if there is a solution for this
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could buffer them
even for 60 seconds, it would make everything much smoother.
ZFS already batches up writes into a transaction group, which currently
happens every 30 seconds. Have you tested
On Thu, 9 Apr 2009, shyamali.chakrava...@sun.com wrote:
Hi All,
I have a corefile where we see a NULL pointer dereference panic, as we
deliberately passed a NULL pointer for the return value.
vdev_disk_io_start()
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
On Fri, 27 Mar 2009, Alec Muffett wrote:
The inability to create more than 1 clone at a time (ie: in separate
TXGs) is something which has hampered me (and several projects on which
I have worked) for some years, now.
Hi Alec,
Does CR 6475257 cover what you're looking for?
Regards,
markm
On 17 Mar, 2009, at 16.21, Bryan Allen wrote:
Then mirror the VTOC from the first (zfsroot) disk to the second:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
# zpool status -v
And then you'll still need to run installgrub to put grub
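For completeness, the usual x86 invocation looks like this, using the second disk from the attach above:
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0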
On Tue, 17 Mar 2009, Neal Pollack wrote:
Can anyone share some instructions for setting up the rpool mirror of
the boot disks during the Solaris Nevada (SXCE) install?
You'll need to use the text-based installer, and in there you choose the
two bootable disks instead of just one. They're
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but nothing in /var/crash:
r...@filer:~# savecore -v
savecore: dump already processed
r...@filer:~# ls /var/crash/filer/
r...@filer:~#
OK, just to ask the dumb questions: is dumpadm configured for
/var/crash/filer? Is the dump zvol b
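A quick way to check, with the savecore directory taken from the prompt above:
  dumpadm                          # prints the configured dump device and savecore directory
  dumpadm -s /var/crash/filer      # point savecore at that directory if it isn't already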
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but it doesn't look like the machine is dumping
core as it should - that is, I don't think it's a panic - I suspect
interrupt handling.
Then when you say you had a machine crash, what did you mean?
Did you look in /var/crash/* to see
Hi Steven,
Try doing 'zfs list -t all'. This is a change that went in late last year
to list only datasets unless snapshots were explicitly requested.
On Fri, 6 Mar 2009, Steven Sim wrote:
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE build 91 (or
On Thu, 5 Mar 2009, Blake wrote:
I had a 2008.11 machine crash while moving a 700GB file from one machine
to another using cp. I looked for an existing bug for this, but found
nothing.
Has anyone else seen behavior like this? I wanted to check before
filing a bug.
Have you got a copy of
On Fri, 13 Feb 2009, Tony Marshall wrote:
How would i obtain the current setting for the vdev_cache from a
production system? We are looking at trying to tune ZFS for better
performance with respect to oracle databases, however before we start
changing settings via the /etc/system file we wou
Handojo wrote:
> hando...@opensolaris:~# zpool add rpool c4d0
>
Two problems: first, the command needed is 'zpool attach', because
you're making a mirror. 'zpool add' is for extending stripes, and
currently stripes are not supported as root pools.
The second problem is that when the drive is
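A hedged sketch of the corrected command, assuming the existing root disk is c3d0s0 (the real name isn't shown above) and that c4d0 has an SMI label with an s0 slice:
  zpool attach rpool c3d0s0 c4d0s0
  # once the resilver completes, make the new disk bootable:
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0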