Hi Shawn,
I have no experience with this configuration, but you might review
the information in this blog:
http://blogs.sun.com/erickustarz/entry/poor_man_s_cluster_end
ZFS is not a cluster file system and yes, possible data corruption
issues exist. Eric mentions this in his blog.
You might al
Hi Laurent,
Yes, you should be able to offline a faulty device in a redundant
configuration as long as enough devices are available to keep
the pool redundant.
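For example, on a hypothetical raidz pool named tank (the pool and
device names here are illustrative, not taken from your setup):
# zpool offline tank c1t3d0
# zpool online tank c1t3d0
The offlined state persists across reboots unless you use zpool
offline -t, which keeps the device offline only until the next reboot.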
On my Solaris Nevada system (latest bits), injecting a fault
into a disk in a RAID-Z configuration and then offlining a disk
works as expec
Hi--
With 40+ drives, you might consider two pools anyway. If you want to
use a ZFS root pool, something like this (sketched below):
- Mirrored ZFS root pool (2 x 500 GB drives)
- Mirrored ZFS non-root pool for everything else
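A rough sketch of the non-root pool side (device names are made up;
the root pool itself is normally created by the installer on
SMI-labeled slices):
# zpool create datapool mirror c1t0d0 c1t1d0
# zpool add datapool mirror c1t2d0 c1t3d0
With 40+ drives you can keep adding mirrored pairs to the same pool
as your space needs grow.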
Mirrored pools are flexible and provide good performance. See this site
for more tips:
h
Hi Dick,
I haven't seen this problem when I've tested these steps.
And it's been a while since I've seen the nobody:nobody problem, but it
sounds like NFSMAPID didn't get set correctly.
I think this question is asked during installation and the value is
generally set to the default DNS domain name.
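If you want to check it, a minimal sketch on Solaris 10 (the file and
service names are from memory, so verify them on your system):
# grep NFSMAPID_DOMAIN /etc/default/nfs
# svcadm restart svc:/network/nfs/mapid
The NFSMAPID_DOMAIN value should match on the client and the server,
and the mapid service needs a restart on both after you change it.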
The dom
Tim,
I sent your subscription problem to the OpenSolaris help list.
We should hear back soon.
Cindy
On 07/27/09 16:15, Tim Cook wrote:
So it is broken then... because I'm on week 4 now, no responses to this thread,
and I'm still not getting any emails.
Anyone from Sun still alive that can a
Tim,
If you could send me your email address privately, the
OpenSolaris list folks have a better chance of resolving
this problem.
I promise I won't sell it to anyone. :-)
Cindy
On 07/27/09 16:25, cindy.swearin...@sun.com wrote:
Tim,
I sent your subscription problem to the OpenSolaris help l
Hi Laurent,
I was able to reproduce it on a Solaris 10 5/09 system.
The problem is fixed in the current Nevada bits and also in
the upcoming Solaris 10 release.
The bug fix that integrated this change might be this one:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6328632
zpool o
Hi Dick,
The Solaris 10 volume management service is volfs.
If you attach the USB hard disk and run volcheck, the disk should
be mounted under the /rmdisk directory.
If the auto-mounting doesn't occur, you can disable volfs and mount
it manually.
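If you do end up mounting it manually, a rough sketch (the device
name and file system type below are guesses; adjust for your disk):
# svcadm disable volfs
# mkdir /mnt/usb
# mount -F pcfs /dev/dsk/c3t0d0p0:c /mnt/usb
The rmformat command can help you identify the USB disk's device
name, and use -F ufs instead of -F pcfs if the disk contains a UFS
file system.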
You can read more about this feature here:
htt
I apologize for replying in the middle of this thread, but I never
saw the initial snapshot syntax of mypool2, which needs to be
recursive (zfs snapshot -r mypool2@snap) to snapshot all the
datasets in mypool2. Then, use zfs send -R to pick up and
restore all the dataset properties.
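A minimal sketch of the recursive approach, reusing the pool name
from above and a hypothetical receiving pool:
# zfs snapshot -r mypool2@snap
# zfs send -R mypool2@snap | zfs receive -d newpool
The -R option includes all the descendant datasets and their
properties in the stream, and -d on the receiving side recreates the
same dataset layout under newpool.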
What was the
Hi Andrew,
The AVAIL column indicates the pool size, not the volsize
in this example.
In your case, the iscsi-pool/log_1_1 volume is 24 GB in size
and the remaining pool space is 33.7G. The 33.7G reflects
your pool space, not your volume size.
The sizing is easier to see if you include the zpoo
Andrew,
Take a look at your zpool list output, which identifies the size of your
iscsi-pool pool.
Regardless of how the volume size was determined, your remaining
pool size is still 33GB and yes, some of it is used for metadata.
cs
On 08/03/09 11:26, andrew.r...@sun.com wrote:
hi cindy,
tnx
Hi Will,
It looks to me like you are running into this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649
This is fixed in Nevada and a fix will also be available in an
upcoming Solaris 10 release.
This doesn't help you now, unfortunately.
I don't think this ghost of a de
Hi Will,
Since no workaround is provided in the CR, I don't know if importing on
a more recent OpenSolaris release and trying to remove it will work.
I will simulate this error, try this approach, and get back to you.
Thanks,
Cindy
On 08/04/09 18:34, Will Murnane wrote:
On Tue, Aug 4, 2009
Hi Nawir,
I haven't tested these steps myself, but the error message
means that you need to set this property:
# zpool set bootfs=rpool/ROOT/BE-name rpool
Cindy
On 08/05/09 03:14, nawir wrote:
Hi,
I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD
These steps below is w
Hi Will,
I simulated this issue on s10u7 and then imported the pool on a
current Nevada release. The original issue remains, which is you
can't remove a spare device that no longer exists.
My sense is that the bug fix prevents the spare from getting messed
up in the first place when the device I
Hi Steffen,
My advice is to go with a mirrored root pool, with all the disk space
in s0 on each disk. Simple is best, and redundant simple is even better.
I'm no write cache expert, but a few simple tests on Solaris 10 5/09
show me that the write cache is enabled on a disk that is labeled with
an SM
Brian,
CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.
In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as t
Hi Andreas,
Good job for using a mirrored configuration. :-)
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Both 1 and 2 would take a bit more time than just replacing the faulted
disk with a spare dis
Andreas,
More comments below.
Cindy
On 08/06/09 14:18, Andreas Höschler wrote:
Hi Cindy,
Good job for using a mirrored configuration. :-)
Thanks!
Your various approaches would work.
My only comment about #2 is that it might take some time for the spare
to kick in for the faulted disk.
Andreas,
I think you can still offline the faulted disk, c1t6d0.
The difference between these two replacements:
zpool replace tank c1t6d0 c1t15d0
zpool replace tank c1t6d0
is that in the second case, you are telling ZFS that c1t6d0
has been physically replaced in the same location. This would
Hi Kyle,
Except that in the case of spares, you can't replace them.
You'll see a message like the one below.
Cindy
# zpool create pool mirror c1t0d0 c1t1d0 spare c1t5d0
# zpool status
pool: pool
state: ONLINE
scrub: none requested
config:
NAME        STATE     READ WRITE CKSUM
Dang. This is a bug we talked about recently that is fixed in Nevada and
an upcoming Solaris 10 release.
Okay, so you can't offline the faulted disk, but you were able to
replace it and detach the spare.
Cool beans...
Cindy
On 08/06/09 15:35, Andreas Höschler wrote:
Hi Cindy,
I think you c
Hi Michael,
I will get this fixed.
Thanks for letting us know.
Cindy
On 08/07/09 09:24, Michael Marburger wrote:
Who do we contact to fix mis-information in the evil tuning guide?
at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#How_to_Tune_Cache_Sync_Handling_Per_St
Hey Richard,
I believe 6844090 would be a candidate for an s10 backport.
The behavior of 6844090 worked nicely when I replaced a disk of the same
physical size even though the disks were not identical.
Another flexible storage feature is George's autoexpand property (Nevada
build 117), where yo
ch is a good thing.
Is there further documentation on this yet?
I just asked Cindy Swearingen, the tech writer for ZFS, about this and
sadly, it appears that there isn't any documentation for this available
outside of Sun yet. The documentation for using flash archives to set
up systems
Hi Chris,
You might repost this query on desktop-discuss to find out
the status of the Access List tab.
Last I heard, it was being reworked.
Cindy
On 08/21/09 10:14, Chris wrote:
How do I get this in OpenSolaris 2009.06?
http://www.alobbs.com/albums/albun26/ZFS_acl_dialog1.jpg
thanks.
Hi Dick,
I'm testing root pool recovery from remotely stored snapshots rather
than from files.
I can send the snapshots to a remote pool easily enough.
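The sending side, as a minimal sketch with made-up host and pool
names:
# zfs snapshot -r rpool@backup
# zfs send -R rpool@backup | ssh remotehost zfs receive -Fd backuppool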
The problem I'm having is getting the snapshots back while the
local system is booted from the miniroot to simulate a root pool
recovery. I don
Hi Grant,
I don't have all my usual resources at the moment, but I would
boot from alternate media and use the format utility to check
the partitioning on the newly added disk, and look for something
like overlapping partitions. Or, possibly, a mismatch between
the actual root slice and the one you
g from DVD but nothing showed up. Thanks for the ideas, though.
Maybe your other sources might have something?
- Original Message ----
From: Cindy Swearingen
To: Grant Lowe
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, August 27, 2009 6:24:00 PM
Subject: Re: [zfs-discuss] Boot error
Hi
Hi Mike,
I reviewed this doc and the only issue I have with it now is that it uses
/var/tmp as an example of storing snapshots in "long-term storage"
elsewhere.
For short-term storage, storing a snapshot as a file is an acceptable
solution as long as you verify that the snapshots as files are valid
Hi Jon,
If the zpool import command shows the old rpool and associated disk
(c1t1d0s0), then you might be able to import it like this:
# zpool import rpool rpool2
This renames the original pool, rpool, to rpool2, upon import.
If the disk c1t1d0s0 was overwritten in any way then I'm not sure
th
Hi Brian,
I'm tracking this issue and expected resolution, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#RAID-Z_Checksum_Errors_in_Nevada_Builds.2C_120-123
Thanks,
Cindy
On 09/10/09 13:21, Brian Hechinger wrote:
I've hit google and it looks like this is still
Hi RB,
We have a draft of the ZFS/flar image support here:
http://opensolaris.org/os/community/zfs/boot/flash/
Make sure you review the Solaris OS requirements.
Thanks,
Cindy
On 09/14/09 11:45, RB wrote:
Is it possible to create flar image of ZFS root filesystem to install it to
other maci
In addition, if you need the flexibility of moving disks around until
the device removal CR integrates, mirrored pools are the better choice.
Detaching disks from a mirror isn't ideal but if you absolutely have
to reuse a disk temporarily then go with mirrors. See the output below.
You can repla
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the disk
that you are booting from.
Are you saying that localtank is your root pool?
I believe the OSOL install creates a root pool called rpool. I don't
remember if it's configurable.
Changing labels or partitions from
:
Cindy Swearingen wrote:
Michael,
ZFS handles EFI labels just fine, but you need an SMI label on the
disk that you are booting from.
Are you saying that localtank is your root pool?
no... (I was on the plane yesterday, I'm still jet-lagged), I should
have realised that that's st
Dave,
I've searched opensolaris.org and our internal bug database.
I don't see that anyone else has reported this problem.
I asked someone from the OSOL install team and this behavior
is a mystery.
If you destroyed the phantom pools before you reinstalled,
then they probably returned from the i
Hi Chris,
Unless we can figure out the best way to provide this info, please ask
about specific features and we'll tell you.
One convoluted way is that a CR that integrates a ZFS feature
identifies the Nevada integration build and the Solaris 10 release,
but not all CRs provide this info. You can
Dustin,
You didn't describe the process that you used to replace the disk so it's
difficult to comment on what happened.
In general, you physically replace the disk and then let ZFS know that
the disk is replaced, like this:
# zpool replace pool-name device-name
This process is described here:
Hi Karl,
Manually cloning the root pool is difficult. We have a root pool
recovery procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with LiveUpgrade
and manually tweaking.
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting
m specific info stored in the root pool?
Thanks
Peter
2009/9/24 Cindy Swearingen :
Hi Karl,
Manually cloning the root pool is difficult. We have a root pool recovery
procedure that you might be able to apply as long as the
systems are identical. I would not attempt this with LiveUpgrade
an
Karl,
I'm not sure I'm following everything. If you can't swap the drives,
then which pool would you import?
If you install the new v210 with snv_115, then you would have a bootable
root pool.
You could then receive the snapshots from the old root pool into the
root pool on the new v210.
I wo
The opensolaris.org site will be transitioning to a wiki-based site
soon, as described here:
http://www.opensolaris.org/os/about/faq/site-transition-faq/
I think it would be best to use the new site to collect this
information because it will be much easier for community members
to contribute.
Hi Donour,
You would use the boot -L syntax to select the ZFS BE to boot from,
like this:
ok boot -L
Rebooting with command: boot -L
Boot device: /p...@8,60/SUNW,q...@4/f...@0,0/d...@w2104cf7fa6c7,0:a
File and args: -L
1 zfs1009BE
2 zfs10092BE
Select environment to boot: [ 1 - 2 ]: 2
Hi David,
All system-related components should remain in the root pool, such as
the components needed for booting and running the OS.
If you have datasets like /export/home or other non-system-related
datasets in the root pool, then feel free to move them out.
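A minimal sketch of moving such a dataset with send/receive (the
dataset and target pool names are hypothetical):
# zfs snapshot -r rpool/export@move
# zfs send -R rpool/export@move | zfs receive -d datapool
# zfs destroy -r rpool/export
Only run the destroy after verifying the copy under datapool/export.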
Moving OS components out of the ro
See the following bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=6280662
Cindy
roland wrote:
is it planned to add some other compression algorithm to zfs ?
lzjb is quite good and especially performing very well, but i`d like to have better compression (bzip2?) - no matter how worse perfo
Hi Peter,
This operation isn't supported yet. See this bug:
http://bugs.opensolaris.org/view_bug.do?bug_id=5008936
Both the zfs man page and the ZFS Admin Guide identify
swap and dump limitations, here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6gl?q=dump&a=view
Cindy
Peter Buckingham
Final for the first draft. :-)
Use the .../community/zfs/docs link to get to the link for this doc at
the bottom of the page. The current version is indeed 0822.
More updates are needed, but the dnode description is still applicable.
Someone will correct me if I'm wrong.
cs
James Blackburn wrote:
Or l
Uwe,
It was also unclear to me that legacy mounts were causing your
troubles. The ZFS Admin Guide describes ZFS mounts and legacy
mounts, here:
http://docs.sun.com/app/docs/doc/819-5461/6n7ht6qs6?a=view
Richard, I think we need some more basic troubleshooting info, such
as this mount failure. I
Matt,
Generally, when a disk needs to be replaced, you replace the disk,
use the zpool replace command, and you're done...
This is only a little more complicated in your scenario below because
of the sharing the disk between ZFS and UFS.
Most disks are hot-pluggable so you generally don't need
Hi Kory,
No, they don't have to be the same size. But, the pool size will be
constrained by the smallest disk and might not be the best
use of your disk space.
See the output below. I'd be better off mirroring the two 136-GB
disks and using the 4-GB disk for something else. :-)
Cindy
c0t0d0 = 4
Hi Peter,
The bugs are filed:
http://bugs.opensolaris.org/view_bug.do?bug_id=6430563
Your coworker might be able to work around this by setting a 10GB quota
on the ZFS file system.
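Something like this, with a made-up file system name:
# zfs set quota=10G tank/home/user1
# zfs get quota tank/home/user1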
cs
Peter Eriksson wrote:
A coworker of mine ran into a large ZFS-related bug the other day. He was
trying to
Malachi,
The section on adding devices to a ZFS storage pool in the ZFS Admin
guide, here, provides an example of adding to a raidz configuration:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6ft?a=view
I think I need to provide a summary of what you can do with
both raidz and mirrored c
Here's the correct link:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
The same example exists on page 52 of the 817-2271 PDF posted on
the opensolaris.../zfs/documentation page.
Cindy
Malachi de Ælfweald wrote:
FYI That page is not publicly viewable. It was the 817-2271 pdf I was
Hi Martin,
Yes, you can do this with the zpool attach command.
See the output below.
An example in the ZFS Admin Guide is here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
Cindy
# zpool create mpool c1t20d0
# zpool status mpool
pool: mpool
state: ONLINE
scrub: none reques
implemented?
Cindy Swearingen wrote:
Hi Mike,
Yes, outside of the hot-spares feature, you can detach, offline, and
replace existing devices in a pool, but you can't remove devices, yet.
This feature work is being tracked under this RFE:
http://bugs.opensolaris.org/bugdatabase/view_b
Chris,
Looks like you're not running a Solaris release that contains
the zfs receive -F option. This option is in the current Solaris community
release, build 48.
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6f1?a=view#gdsup
Otherwise, you'll have to wait until an upcoming Solaris 10 release.
C
Chris,
This option will be available in the upcoming Solaris 10 release, a
few months from now.
We'll send out a listing of the new ZFS features around that time.
Cindy
Krzys wrote:
Ah, ok, not a problem, do you know Cindy when next Solaris Update is
going to be released by SUN? Yes, I am run
Mario,
Until zpool remove is available, you don't have any options to remove a
disk from a non-redundant pool.
Currently, you can (sketched below):
- replace or detach a disk in a ZFS mirrored storage pool
- replace a disk in a ZFS RAID-Z storage pool
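For example, with made-up pool and device names:
# zpool replace tank c1t2d0 c1t5d0     (mirrored or raidz pool)
# zpool detach tank c1t1d0             (mirrored pool only)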
Please see the ZFS best practices site for more info about
Nenad,
I've seen this solution offered before, but I would not recommend this
except as a last resort, unless you didn't care about the health of
the original pool.
Removing a device from an exported pool could be very bad, depending
on the pool's redundancy. You might not get all your data bac
Hi Robert,
I just want to be clear that you can't just remove a disk from an
exported pool without penalty upon import:
- If the underlying redundancy of the original pool doesn't support
it, you lose data
- Some penalty exists even for redundant pools, which is running
in DEGRADED mode until
Hi Rainer,
This is a long thread and I wasn't commenting on your previous
replies regarding mirror manipulation. If I was, I would have done
so directly. :-)
I saw the export-a-pool-to-remove-a-disk solution described in
a Sun doc.
My point (and I agree with your points below) is that making a
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Consider this setup for your other disks, which are:
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive
250GB = disk1
200GB =
Lee,
Yes, the hot spare (disk4) should kick in if another disk in the pool fails
and yes, the data is moved to disk4.
You are correct:
160 GB (the smallest disk) * 3 + raidz parity info
Here's the size of a raidz pool comprised of three 136-GB disks:
# zpool list
NAME    SIZE   USED
Arif,
You need to boot from {net | DVD} in single-user mode, like this:
boot net -s or boot cdrom -s
Then, when you get to a shell prompt, relabel the disk like this:
# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Then, you should be able to repartition howev
Huitzi,
Yes, you are correct. You can add more raidz devices in the future as
your excellent graphic suggests.
A similar zpool add example is described here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6fu?a=view
This new section describes what operations are supported for both raidz
an
Huitzi,
Awesome graphics! Do we have your permission to use them? :-)
I might need to recreate them in another format.
Someone was kind enough to point out the error in this example yesterday
and I fixed it in the opensolaris.../zfs version, found here:
http://opensolaris.org/os/community/zfs/d
Hi Ed,
This BP was added as a lesson learned for not mixing these
models because it's too confusing to administer, and for no other reason.
I'll update the BP to be clear about this.
I'm sure someone else will answer your NFSv3 question. (I'd like
to know too).
Cindy
Ed Ravin wrote:
Looking over t
Jens,
Someone already added it to the ZFS links page, here:
http://opensolaris.org/os/community/zfs/links/
I just added a link to the links page from the zfs docs page
so it is easier to find.
Thanks,
Cindy
Jens Elkner wrote:
On Tue, Jun 19, 2007 at 05:19:05PM +0200, Constantin Gonzalez wro
Hi Young,
I will link these versions on the ZFS community docs page.
Thanks for the reminder. :-)
Cindy
Young Joo Pintaske wrote:
> Hi ZFS Community,
>
> Some time ago I posted a message that ZFS Administration Guide was translated
> (Russian and Brazilian Portuguese). There are several other
Sean,
This scenario is covered in the ZFS Admin Guide, found here:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6fu?a=view#gcfhe
I provided an example below.
Cindy
# zpool create tank02 c0t0d0
# zpool status tank02
pool: tank02
state: ONLINE
scrub: none requested
config:
NA
Marko,
The ZFS Admin Guide has been updated to include the delegated
administration feature.
See Chapter 8, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Cindy
Matthew Ahrens wrote:
> Marko Milisavljevic wrote:
>
>>Hmm.. my b69 installation understands zfs allow, but man z
The OpenSolaris ZFS FAQ is here:
http://www.opensolaris.org/os/community/zfs/faq
Other resources are listed here:
http://www.opensolaris.org/os/community/zfs/links/
Cindy
Brandorr wrote:
> P.S. - Is there a ZFS FAQ somewhere?
>
Paul,
Scroll down a bit in this section to the default passwd/group tables:
http://docs.sun.com/app/docs/doc/819-2379/6n4m1vl99?a=view
Cindy
Paul Kraus wrote:
> On 9/17/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>
>
>>Why not use the already assigned webservd/webserved 80/80 uid/gid pair ?
The log device feature integrated into snv_68.
You can read about them here:
http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on
And starting on page 18 of the ZFS Admin Guide, here:
http://opensolaris.org/os/community/zfs/docs
Albert Chin wrote:
> On Tue, Sep 18, 2007 at 12:59:02PM -
Mike, Grant,
I reported the zoneadm.1m man page problem to the man page group.
I also added some stronger wording to the ZFS Admin Guide and the
ZFS FAQ about not using ZFS for zone root paths for the Solaris 10
release and that upgrading or patching is not supported for either
Solaris 10 or Sola
I think you want zpool iostat:
% zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        54.5M  16.7G      0      0      3      1
users        217M  16.5G      0      0
Hi Stephen,
No, you can't replace a single device with a raidz device, but you can
create a mirror from one device by using zpool attach. See the output
below.
The other choice is to add to an existing raidz configuration. See
the output below.
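A minimal sketch of both operations, with made-up device names:
# zpool attach mypool c1t0d0 c1t1d0            (single disk -> two-way mirror)
# zpool add rzpool raidz c2t0d0 c2t1d0 c2t2d0  (adds another raidz vdev)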
I thought we had an RFE to expand an existing raidz de
Chris,
I agree that your best bet is to replace the 128-mb device with
another device, fix the emcpower2a manually, and then replace it
back. I don't know these drives at all, so I'm unclear about the
'fix it manually' step.
Because your pool isn't redundant, you can't use zpool offline
or detach.
Chris,
You need to use the zpool replace command.
I recently enhanced this section of the admin guide with more explicit
instructions on page 68, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
If these are hot-swappable disks, for example, c0t1d0, then use this syntax:
# zpool
Jonathan,
Thanks for providing the zpool history output. :-)
You probably missed the message after this command:
# zpool add tank c4t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
I provided some guid
Jonathan,
I think I remember seeing this error in an older Solaris release. The
current zpool.1m man page doesn't have this error unless I'm missing it:
http://docs.sun.com/app/docs/doc/819-2240/zpool-1m
In a current Solaris release, this command fails as expected:
# zpool create mirror c0t2d0
Hi Doug,
ZFS uses an EFI label so you need to use format -e to set it back to a
VTOC label, like this:
# format -e
Specify disk (enter its number)[4]: 3
selecting c0t4d0
[disk formatted]
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Warning: This disk has an EFI label. Changi
Shawn,
Using slices for ZFS pools is generally not recommended so I think
we minimized any command examples with slices:
# zpool create tank mirror c1t0d0s0 c1t1d0s0
Keep in mind that using the slices from the same disk for both UFS
and ZFS makes administration more complex. Please see the ZFS B
Hey Kory,
I think you're asking whether you can detach one of the 73GB disks
from moodle, add it to another pool of 146GB disks, and save the
data from the 73GB disk?
You can't do this and save the data. By using zpool detach, you are
removing any knowledge of ZFS from that disk.
If you wa
Hi Kory,
Yes, I get it now. You want to detach one of the disks and then re-add
the same disk, losing the redundancy of the mirror.
Just as long as you realize you're losing the redundancy.
I'm wondering if zpool add will complain. I don't have a system to
try this at the moment.
Cindy
Kory
Hi Kava,
Your questions are hard for me to answer without seeing your syntax.
Also, you don't need to futz with slices if you are using whole disks.
I added some add'l information to the zpool replace section
on page 74, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Note that
Kava,
Because of a recent bug, you need to export and import the pool to see
the expanded space after you use zpool replace.
Also, you don't need to detach first. The process would look like this:
# zpool create test mirror 8gb-1 8gb-2
# zpool replace test 8gb-1 12gb-1
# zpool replace test 8gb-
Sure you can, but it would be something like this:
300GB-1 = c0t0d0
300GB-2 = c0t1d0
500GB = c0t2d0s0 (300 GB slice is created on s0)
# zpool create test raidz c0t0d0 c0t1d0 c0t2d0s0
However, if you are going to use the add'l 200 GB on the 500GB
drive for something else, administration is more
Because of the mirror mount feature, which integrated into Solaris
Express, build 77.
You can read about it here on page 20 of the ZFS Admin Guide:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Cindy
Andrew Tefft wrote:
> Let's say I have a zfs called "pool/backups" and it contains
Chris,
You can replace the disks one at a time with larger disks. No problem.
You can also add another raidz vdev, but you can't add disks to an
existing raidz vdev.
See the sample output below. This might not solve all your problems,
but should give you some ideas...
Cindy
# zpool create rpool
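A minimal sketch of the same two operations, with made-up device
names:
# zpool replace tank c1t0d0 c2t0d0             (c2t0d0 is the larger disk; repeat per disk)
# zpool add tank raidz c3t0d0 c3t1d0 c3t2d0    (adds a second raidz vdev)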
Chris,
You would need to replace all the disks to see the expanded space.
Otherwise, space on the 1-2 larger disks would be wasted. If
you replace all the disks with larger disks, then yes, the
disk space in the raidz config would be expanded.
A ZFS mirrored config would be more flexible but it
David,
Try detaching the spare, like this:
# zpool detach pool-name c10t600A0B80001139967CE145E80D4Dd0
Cindy
David Smith wrote:
> Addtional information:
>
> It looks like perhaps the original drive is in use, and the hot spare is
> assigned but not in use see below about zpool iostat:
>
The file-system-only quotas and reservations feature description
starts here:
http://docs.sun.com/app/docs/doc/817-2271/gfwpz?a=view
cs
Eric Schrock wrote:
> On Thu, Mar 20, 2008 at 06:41:42PM -0500, [EMAIL PROTECTED] wrote:
>
>>There was an change request put in to disable snaps affecting quot
Hi Mertol,
Log devices aren't supported in the Solaris 10 release yet. You would
have to run a Solaris Express version to configure log devices, such
as SXDE 9/07 or SXDE 1/08, described here:
http://docs.sun.com/app/docs/doc/817-2271/gfgaa?a=view
cs
Mertol Ozyoney wrote:
> Hi All ;
>
>
>
>
Jeff,
No easy way exists to convert this configuration to a mirrored
configuration currently.
If you had two more disks, you could use zpool attach to create
a two-way, two disk mirror. See the output below.
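A minimal sketch, assuming the existing pool is a two-disk stripe
named tank and the new disks are c2t0d0 and c2t1d0 (all names made
up):
# zpool attach tank c1t0d0 c2t0d0
# zpool attach tank c1t1d0 c2t1d0
Each attach turns one of the existing top-level disks into a two-way
mirror and resilvers the new side automatically.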
A more complicated solution is to create two files that are the size of
your existing di
Hi Sam,
You might review the ZFS best practice site for maintenance
recommendations, here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Cindy
Sam wrote:
> I have a 10x500 disc file server with ZFS+, do I need to perform any sort of
> periodic maintenance to the files
Hi Ulrich,
The updated lucreate.1m man page integrated accidentally into
build 88.
If you review the build 88 instructions, here:
http://opensolaris.org/os/community/zfs/boot/
You'll see that we're recommending patience until the install/upgrade
support integrates.
If you are running the tran
Simon,
I think you should review the checksum error reports from the fmdump
output (dated 4/30) that you supplied previously.
You can get more details by using fmdump -ev.
Use "zpool status -v" to identify checksum errors as well.
Cindy
Simon Breden wrote:
> Thanks Max,
>
> I have not been a