10GB of memory and 5 days later, the pool was imported.
This file server is a virtual machine. I allocated 2GB of memory and 2 CPU
cores, assuming this was enough to manage 6 TB (6x 1TB disks). The pool I am
trying to recover is only 700 GB, not the 6TB pool I am trying to migrate.
So I decided t
Hi all, is there any procedure to recover a filesystem from an offline pool, or
to bring a pool online quickly?
Here is my issue:
* One 700GB zpool
* 1 filesystem with compression turned on (only using a few MB)
* Trying to migrate another filesystem from a different pool as a dedup stream
with zfs send (a rough sketch follows below)
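For reference, the migration attempt was along these lines (the filesystem and
pool names here are placeholders, not the exact ones I used):

zfs snapshot tank/fs1@migrate
zfs send -D tank/fs1@migrate | zfs receive mpool/fs1

As far as I understand, -D is what makes zfs send emit a deduplicated stream;
without it you get a regular stream even if dedup is enabled on the source.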
You may or may not need to add the log device back.
zpool clear should bring the pool online.
Either way, it shouldn't affect the data.
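Assuming the pool is called tank and the log device is something like c3t0d0
(both names made up here), that would be roughly:

zpool add tank log c3t0d0
zpool clear tank

The first command is only needed if the log device really is missing from the
pool; the clear by itself should not touch the data.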
When I boot up without the disks in the slots, I manually bring the pool
online with
zpool clear
I believe that was what you were missing from your command. However, I did not
try to change controllers.
Hopefully you have only been unplugging disks while the system is turned off. If that's
the case, the
I was expecting
zfs send tank/export/projects/project1...@today
to send everything up to @today. That is the only snapshot, and I am not
using the -i option.
The thing that worries me is that tank/export/projects/project1_nb was the first
file system that I tested with full dedup and compression
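The full pipeline was roughly this (the receive side shown here is
approximate, not the exact command):

zfs send tank/export/projects/project1_nb@today | zfs receive mpool/export/projects/project1_nb

With no -i, that single stream should carry everything referenced by @today.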
size of snapshot?
r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
mpool/export/projects/project1...@today      0      -   407G  -
r...@filearch1:/var/adm# zfs list tank/export/projects/project1...@
Okay, so after some tests with dedup on snv_134, I decided we cannot use the
dedup feature for the time being.
Since I was unable to destroy a dedupped file system, I decided to migrate the file
system to another pool and then destroy the pool. (see below)
http://opensolaris.org/jive/thread.jspa?threadI
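The rough shape of that migration (oldpool, newpool, and the filesystem name
are placeholders) was:

zfs snapshot -r oldpool/projects@move
zfs send -R oldpool/projects@move | zfs receive -d newpool
zpool destroy oldpool

-R sends the filesystem together with its descendants and snapshots, and of
course the destroy should only happen after verifying the data on the new pool.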
Looks like I am hitting the same issue now
as in the earlier post that you responded to.
http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=15
I continued my test migration with dedup=off and synced a couple more file
systems.
I decided to merge two of the file systems together by copyi
> Would your opinion change if the disks you used took
> 7 days to resilver?
>
> Bob
That will only make a stronger case that a hot spare is absolutely needed.
It will also make a strong case for choosing raidz3 over raidz2, as well as for
vdevs with a smaller number of disks.
> Why would you recommend a spare for raidz2 or raidz3?
> -- richard
The spare is to minimize the reconstruction time. Remember, a vdev cannot
start resilvering until there is a spare disk available. And with disks as big
as they are today, resilvering takes many hours. I would rather have
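For example, designating a hot spare up front (device name made up here) so a
resilver can start the moment a disk fails:

zpool add tank spare c5t0d0

The same spare keyword can be used at zpool create time instead.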
> 3 shelves with 2 controllers each. 48 drive per
> shelf. These are Fibrechannel attached. We would like
> all 144 drives added to the same large pool.
I would do either a 12 or 16 disk raidz3 vdev, and spread the disks across
controllers within each vdev. You may also want to leave at least 1 spare
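A rough sketch of what I mean, with made-up device names, one 12-disk raidz3
vdev per line and the members of each vdev pulled from different controllers:

zpool create bigpool \
  raidz3 c1t0d0 c2t0d0 c3t0d0 c1t1d0 c2t1d0 c3t1d0 c1t2d0 c2t2d0 c3t2d0 c1t3d0 c2t3d0 c3t3d0 \
  raidz3 c1t4d0 c2t4d0 c3t4d0 c1t5d0 c2t5d0 c3t5d0 c1t6d0 c2t6d0 c3t6d0 c1t7d0 c2t7d0 c3t7d0 \
  spare c1t8d0 c2t8d0

and keep adding raidz3 vdevs the same way until the 144 drives are used up.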
This is not a performance issue. The rsync hangs hard and one of the child
processes cannot be killed (I assume it's the one running on the zfs). By "the
command gets slower" I am referring to the output of the file system commands
(zpool, zfs, df, du, etc.) from a different shell. I left the
Sorry for the double post, but I think this was better suited for the zfs forum.
I am running OpenSolaris snv_134 as a file server in a test environment,
testing deduplication. I am transferring a large amount of data from our
production server using rsync.
The data pool is on a separate raidz1-0
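The transfer itself is nothing fancy, roughly this (host and paths are made up
here), run against the deduplicated target filesystem:

rsync -aH prodserver:/export/data/ /tank/data/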
I understand your point. However, in most production systems the shelves are added
incrementally, so it makes sense to tie the vdev size to the number of slots per shelf. And in
most cases, withstanding a shelf failure is too much overhead on storage anyway.
For example, in his case he would have to configure 1+0 ra
Sorry, I need to correct myself. Mirroring the LUNs on the Windows side to switch
the storage pool under it is a great idea, and I think you can do this without
downtime.
So, on the point of not needing a migration back:
Even at 144 disks, they won't all be in the same raid group. So figure out what
the best raid group size is for you, since zfs doesn't support changing the number of
disks in a raidz yet. I usually use the number of slots per shelf, or a good
number is 7~10.
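Since a raidz vdev can't be widened later, the pool is grown by adding whole
vdevs of the same shape instead, e.g. an 8-disk raidz2 matching a shelf
(made-up device names):

zpool add tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0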
For this type of migration downtime is required. However, it can be reduced
to only a few hours, or even a few minutes, depending on how much change needs
to be synced.
I have done this many times on a NetApp Filer, but it can be applied to zfs as well.
The first thing to consider is to only do the migration once, so
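On zfs, the way I would keep the final sync window small is roughly this (all
names are placeholders):

zfs snapshot -r oldpool/projects@sync1
zfs send -R oldpool/projects@sync1 | zfs receive -d newpool
zfs snapshot -r oldpool/projects@sync2
zfs send -R -i @sync1 oldpool/projects@sync2 | zfs receive -d -F newpool

The big initial send runs while clients stay online; only the final incremental
(with -F to roll back any stray changes on the receive side) happens during the
outage, so the downtime is proportional to how much changed since @sync1.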
It is unclear what you want to do. What is the goal of this exercise?
If you want to replace the pool with larger disks and the pool is a mirror or
raidz, you just replace one disk at a time and allow the pool to rebuild
itself. Once all the disks have been replaced, it will automatically realize the disk
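A sketch of that, with made-up device names (old disk c1t0d0, new larger disk
c2t0d0):

zpool replace tank c1t0d0 c2t0d0
zpool set autoexpand=on tank

Repeat the replace for each disk, letting the resilver finish in between. If
the build doesn't have the autoexpand property, an export/import of the pool
(or zpool online -e on the devices) should make the extra space show up.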