that fixed the problem, but
unfortunately, typing "zpool status" and "zpool import" finds nothing, even though
"format" and "format -e" display the 1TB volume.
Are there any known problems, or ways to re-import a supposedly lost/confused
zpool on a new host?
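(For reference, the standard things to try here are to point zpool import at the device directory explicitly and to check for destroyed pools; the pool name below is just an example:)
zpool import -d /dev/dsk         # explicitly scan the device links for importable pools
zpool import -D                  # also list pools that were marked destroyed
zpool import -f -d /dev/dsk tank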
Thanks
Andrew
--
This message p
OK,
The fault appears to have occurred regardless of the attempts to move to
vSphere, as we've now moved the host back to the ESX 3.5 server it came from
and the problem still exists.
It looks to me like the fault occurred as a result of a reboot.
Any help and advice would be greatly appreciated.
Unfortunately I can't rely on connecting using an
iSCSI initiator within the OS to attach the volume, so I guess I have to dive
straight into checking the MBR at this stage. I'll no doubt need some help here
so please forgive me if I fall at the first hurdle.
Kind Regards
Andrew
0 and /dev/dsk/c8t4d0 but neither of them is valid.
Kind Regards
Andrew
Hi again,
Out of interest, could this problem have been avoided if the ZFS configuration
didn't rely on a single disk, e.g. RAID-Z, a mirror, etc.?
Thanks
Hi all,
Great news - by attaching an identically sized RDM to the server and then
grabbing the first 128K using the command you specified, Ross:
dd if=/dev/rdsk/c8t4d0p0 of=~/disk.out bs=512 count=256
we then proceeded to inject this into the faulted RDM (see the sketch below)
and, lo and behold, the volume recovered!
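The injection step was presumably just the same dd in reverse - something along these lines, where the target device name is a placeholder for whatever the faulted RDM shows up as:
dd if=~/disk.out of=/dev/rdsk/cXtYdZp0 bs=512 count=256   # cXtYdZp0 = faulted RDM (placeholder)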
This is incorrect. The viral effects of the GPL only take effect at the point
of distribution. If ZFS is distributed separately from the Linux kernel as a
module, then the person doing the combining is the user. It would be different if a Linux
distro wanted to include it on a live CD, for example. GPL
I'm getting the same thing now.
I tried moving my 5-disk RAID-Z and 2-disk mirror over to another machine, but
that machine kept panicking (not ZFS-related panics). When I brought the
array back over, I started getting this as well. My mirror array is unaffected.
snv_111b (2009.06 release)
This is what my /var/adm/messages looks like:
Sep 27 12:46:29 solaria genunix: [ID 403854 kern.notice] assertion failed: ss
== NULL, file: ../../common/fs/zfs/space_map.c, line: 109
Sep 27 12:46:29 solaria unix: [ID 10 kern.notice]
Sep 27 12:46:29 solaria genunix: [ID 655072 kern.notice]
followed by the inevitable reboot.
How can I get this working? I'm using OpenSolaris 2008.05 upgraded to build 93.
Thanks
Andrew.
not bootable! It is either offlined or
> detached or faulted. Please try to boot from another
> device." and a nice kernel panic, followed by the
> inevitable reboot.
>
> How can I get this working? I'm using OpenSolaris
> 2008.05 upgraded to build 93.
>
>
Use the command "disks" to get Solaris to update the disk
links under /dev before you use installgrub.
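In other words, something along these lines (the device path is just the example used elsewhere in this thread; devfsadm is the newer equivalent of disks):
disks                     # or: devfsadm -c disk
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d0s0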
Andrew.
ce 2 to do that.
Cheers
Andrew.
OK, I've put up some screenshots and a copy of my menu.lst to clarify my setup:
http://sites.google.com/site/solarium/zfs-screenshots
Cheers
Andrew.
Sounds like you've got an EFI label on the second disk. Can you run "format",
select the second disk, then enter "fdisk" then "print" and post the output
here?
Thanks
Andrew.
es not set the 2nd disk up correctly, as you've discovered. To write a
new Solaris grub MBR to the second disk, do this:
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3d0s0
The -m flag tells installgrub to put the grub stage1 into the MBR.
Cheers
Andrew.
I've no idea how this is handled on Linux.
Cheers
Andrew.
http://sites.google.com/site/solarium/_/rsrc/1218841252931/zfs-screenshots/paniconboot.gif
Thanks
Andrew.
Hmm... Just tried the same thing on SXCE build 95 and it works fine. Strange.
Anyone know what's up with OpenSolaris (the distro)? I'm using the ISO of
OpenSolaris 2008.11 snv_93, image-updated to build 95, if that makes a difference.
I've not tried this on 2008.05.
Thanks
Andrew.
Perhaps user properties on pools would be useful here? At present only ZFS
filesystems can have user properties - not pools. Not really an immediate
solution to your problem though.
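For reference, user properties on a filesystem just need a name containing a colon, e.g. (property name and value purely illustrative):
zfs set org.example:note="scratch data - safe to destroy" tank/export
zfs get org.example:note tank/export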
Cheers
Andrew.
Just tried with a fresh install from the OpenSolaris 2008.11 snv_95 CD and it
works fine.
Thanks
Andrew.
back?
Hope you can help an insightful dork, Andrew
Inserting the drive does not automatically mount the ZFS filesystem on it. You
need to use the "zpool import" command, which lists any pools available to
import, then "zpool import -f {name of pool}" to force the import if you
haven't exported the pool first.
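For example (pool name illustrative):
zpool import           # lists pools available for import
zpool import tank      # import a pool that was cleanly exported
zpool import -f tank   # force the import if it was never exported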
oblem.
Solaris Express Community Edition (SXCE) is also not affected by this bug.
Cheers
Andrew.
ing zpool.cache, deleting both boot archives,
then doing a "bootadm update-archive" should work.
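A minimal sketch of that sequence on an x86 box with the usual default paths (SPARC keeps its boot archive elsewhere):
rm /etc/zfs/zpool.cache
rm /platform/i86pc/boot_archive
rm /platform/i86pc/amd64/boot_archive
bootadm update-archive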
Cheers
Andrew.
I woke up yesterday morning, only to discover my system kept rebooting.
It had been running fine for a while. I upgraded to snv_98 a couple of weeks
back (from 95), and had upgraded my RAID-Z zpool from version 11 to 13 for
improved scrub performance.
After some research it turned out that, o
Thanks a lot! Google didn't seem to cooperate as well as I had hoped.
Still no dice on the import. I only have shell access from my BlackBerry Pearl
where I am, so it's kind of hard, but I'm managing. I've tried the OP's
exact commands, and even tried importing the array read-only, yet the system s
Do you guys have any more information about this? I've tried the offset
methods, zfs_recover, aok=1, mounting read-only, yada yada, still with no luck.
I have about 3 TB of data on my array, and I would REALLY hate to lose it.
Thanks!
Hey Victor,
Where would I find that? I'm still somewhat getting used to the Solaris
environment. /var/adm/messages doesn't seem to show any panic info. I only
have remote access via SSH, so I hope I can do something with dtrace to pull it.
Thanks,
Andrew
Not too sure if it's much help. I enabled kernel pages and curproc. Let me
know if I need to enable "all" then.
solaria crash # echo "::status" | mdb -k
debugging live kernel (64-bit) on solaria
operating system: 5.11 snv_98 (i86pc)
solaria crash # echo "::stack" | mdb -k
solaria crash # echo ":
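(For reference, widening the dump content to all memory pages is a one-liner with dumpadm; it only affects future dumps - a sketch, run as root:)
dumpadm              # show the current dump configuration
dumpadm -c all       # capture all memory pages in subsequent crash dumps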
So I tried a few more things.
I think the combination of the following in /etc/system made a difference:
set pcplusmp:apic_use_acpi=0
set sata:sata_max_queue_depth = 0x1
set zfs:zfs_recover=1 <<< I had this before
set aok=1 <<< I had this before too
I crossed my fingers, and it
st again - if this works then memory
contention is the cause of the slowdown.
Also, NFS to ZFS filesystems will run slowly under certain conditions,
including with the default configuration. See this link for more information:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
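For reference, the cache-flush workaround described on that page comes down to an /etc/system tunable along these lines - only appropriate when the array has a nonvolatile cache, so treat it as illustrative rather than a recommendation:
set zfs:zfs_nocacheflush = 1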
Cheers
Andrew.
IIRC, uncorrectable bitrot even in a nonessential file detected by ZFS used to
cause a kernel panic.
Bug ID 4924238 was closed with the claim that bitrot-induced panics are not a
bug, but the description did mention an open bug ID 4879357, which suggests
that it is considered a bug after all.
Can
eschrock wrote:
> Unfortunately, there is one exception to this rule. ZFS currently does
> not handle write failure in an unreplicated pool. As part of writing
> out data, it is sometimes necessary to read in space map data. If this
> fails, then we can panic due to write failure. This is a known b
What is the reasoning behind ZFS not enabling the write cache for the root
pool? Is there a way of forcing ZFS to enable the write cache?
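(For what it's worth, the disk write cache can also be toggled by hand from format's expert mode on many drives - a manual workaround rather than a ZFS setting, and the menu entries vary by drive type:)
format -e
# select the disk, then: cache -> write_cache -> enable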
Thanks
Andrew.
What is the current estimated ETA for the integration of install support for
ZFS boot/root into Nevada?
Also, do you have an idea when we can expect the improved ZFS write throttling
to integrate?
Thanks
Andrew.
By my calculations that makes the possible release date for ZFS boot installer
support around the 9th June 2008. Mark that date in your diary!
Cheers
Andrew.
root pool.
This is about 3 weeks from being released as a DVD and CD image.
Also, you might be pleased to learn that in build 87 Solaris moved the root
user's home directory from the root of the filesystem (i.e. the / directory) to
its own directory, namely /root.
Cheers
Andrew.
ounce these two improvements on the announce
list/forum now as well, since they are probably of interest to many users.
Keep up the great work ZFS team!
Cheers
Andrew.
ystem with a ZFS root, it
would be best to wait for the Nevada build 90 DVD/CD images to be released.
Cheers
Andrew.
Apologies for the misinformation. OpenSolaris 2008.05 does *not* put swap on
ZFS, so is *not* susceptible to the bugs that cause lock-ups under certain
situations where the swap is on ZFS.
Cheers
Andrew.
able to boot ZFS on an Intel Mac using Boot Camp.
Cheers
Andrew.
bugs up at bugs.opensolaris.org for more info.
Cheers
Andrew.
Your Solaris 10 system should also have the Sun Update Manager, which will
allow you to install patches in a more automated fashion. Look for it on the
GNOME/CDE menus.
Cheers
Andrew.
With the release of the Nevada build 90 binaries, it is now possible to install
SXCE directly onto a ZFS root filesystem, and also put swap onto a ZFS
volume (zvol) without worrying about having it deadlock. ZFS now also supports
crash dumps!
To install SXCE to a ZFS root, simply use the text-
I've done this successfully on
x86/x64.
Thanks
Andrew.
He means that you can have two types of pool as your root pool:
1. A single physical disk.
2. A ZFS mirror. Usually this means 2 disks.
RAIDZ arrays are not supported as root pools (at the moment).
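For example, an existing single-disk root pool can usually be turned into a mirror by attaching a second, SMI-labelled slice and then making that disk bootable (device names here are illustrative):
zpool attach rpool c0t0d0s0 c0t1d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0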
Cheers
Andrew.
Since ZFS is COW, can I have a read-only pool (on a central file server, or on
a DVD, etc) with a separate block-differential pool on my local hard disk to
store writes?
This way, the pool in use can be read-write, even if the main pool itself is
read-only, without having to make a full local co
Do an automatic pool snapshot (using the recursive atomic snapshot feature that
Matt Ahrens implemented recently, taking time proportional to the number of
filesystems in the pool) upon every txg commit.
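(That recursive snapshot primitive is just the -r flag, e.g., with pool and snapshot names purely illustrative:)
zfs snapshot -r tank@trashcan-20080601T120000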
Management of the trashcan snapshots could be done by some user-configurable
policy such as
For a synchronous write to a pool with mirrored disks, does the write unblock
after just one of the disks' write caches is flushed, or only after all of the
disks' caches are flushed?
I started three new threads recently,
"Feature proposal: differential pools"
"Feature proposal: trashcan via auto-snapshot with every txg commit"
"Flushing synchronous writes to mirrors"
Matthew Ahrens and Henk Langeveld both replied to my first thread by sending
their messages to both me and to
Jeff Bonwick wrote:
>> For a synchronous write to a pool with mirrored disks, does the write
>> unblock after just one of the disks' write caches is flushed,
>> or only after all of the disks' caches are flushed?
> The latter. We don't consider a write to be committed until
> the data is on stable
:
a) spread out the deletion of the snapshots, and
b) create snapshots more often (and, conversely, delete snapshots more
often), so each one accumulates less space to be freed.
--
Andrew
it's because I left the NFSv4 domain setting at the default.
(I'm just using NFSv3, but trying to come up with an explanation. In
any case, using the FQDN works.)
-Andrew
eed barely SATA
controllers at all by today's standards, as I think they always pretend to
be PATA to the host system.
--
Andrew
er to discover this before you reduce the pool
redundancy/resilience, whilst it's still fixable.
--
Andrew
to another RFE/BUG and the
pause/resume requirement got lost. I'll see about reinstating it.
--
Andrew
this or something similar before. Thanks in advance for any
suggestions.
Andrew Kener
3 - community edition
Andrew
On Apr 18, 2010, at 11:15 PM, Richard Elling wrote:
> Nexenta version 2 or 3?
> -- richard
>
> On Apr 18, 2010, at 7:13 PM, Andrew Kener wrote:
>
>> Hullo All:
>>
>> I'm having a problem importing a ZFS pool. When I first
The correct URL is:
http://code.google.com/p/maczfs/
-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Rich Teer
Sent: Sunday, April 25, 2010 7:11 PM
To: Alex Blewitt
Cc: ZFS discuss
Subject: Re: [zfs-discuss] Mac OS X c
900', but it
still said the dataset did not exist.
Finally I exported the pool, and after importing it, the snapshot was
gone, and I could receive the snapshot normally.
Is there a way to clear a "partial" snapshot without an export/import
cycle?
Thanks,
Andrew
Support for thin reclamation depends on the SCSI "WRITE SAME" command; see this
draft of a document from T10:
http://www.t10.org/ftp/t10/document.05/05-270r0.pdf.
I spent some time searching the source code for support for "WRITE SAME", but I
wasn't able to find much. I assume that if it
up on the ARC (memory) anyway. If you don't have enough
RAM for this to help, then you could add more memory, and/or an SSD as an
L2ARC device (a "cache" device in zpool command-line terms).
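Adding an L2ARC device is a one-liner; the pool and device names here are illustrative:
zpool add tank cache c1t5d0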
--
Andrew Gabriel
if NV ZIL. Trouble is that no other operating systems or
filesystems work this well with such relatively tiny amounts of NV
storage, so such a hardware solution is very ZFS-specific.
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
etween 30 minutes and 4
hours into a scrub, and with it scrubs run successfully.
-Andrew
>>> Demian Phillips 5/23/2010 8:01 AM >>>
On Sat, May 22, 2010 at 11:33 AM, Bob Friesenhahn
wrote:
> On Fri, 21 May 2010, Demian Phillips wrote:
>
>> For years I have been run
A few lines above, another test (for a valid bootfs name) does get
bypassed in the case of clearing the property.
I don't know if that alone would fix it.
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracle Pre-Sales
for disks.
(Actually, vanity naming for disks should probably be brought out into
a separate RFE.)
--
Andrew Gabriel |
Solaris Systems Architect
Email: andrew.gabr...@oracle.com
Mobile: +44 7720 598213
Oracle Pre-Sales
Guillemont Park | Minley Road | Camberley | GU17 9QG | United Kingdom
evious installs). It's an amd64 box.
Both OS versions show the same problem.
Do I need to run a scrub? (will take days...)
Other ideas?
It might be interesting to run it under truss, to see which syscall is
returning that error.
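A sketch of that, with the failing command left as a placeholder (truss marks failing syscalls with Err#<errno>):
truss -f -o /tmp/truss.out <the command that reports the error>
grep Err /tmp/truss.out | tail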
--
Andrew Gabriel
Now at 36 hours since zdb process start and:
  PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
  827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck.
Thoughts on how to determine where and
Update: I have given up on the zdb write-mode repair effort, at least for now.
Hoping for any guidance / direction anyone's willing to offer...
Re-running 'zpool import -F -f tank' with some stack trace debug, as suggested
in similar threads elsewhere. Note that this appears hung at near idle.
f
Dedup had been turned on in the past for some of the volumes, but I had turned
it off altogether before entering production due to performance issues. GZIP
compression was turned on for the volume I was trying to delete.
Malachi,
Thanks for the reply. There were no snapshots for the CSV1 volume that I
recall... very few snapshots on any volume in the tank.
Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error
16:
Could not open tank/CSV1, error 16
Considering my attempt to delete the CSV1 volume led to the failure in the
first place, I have to think that if I can either 1) complete the deletion of
this volume or 2) ro
Thanks Victor. I will give it another 24 hrs or so and will let you know how it
goes...
You are right, a large 2TB volume (CSV1) was not in the process of being
deleted, as described above. It is showing error 16 on 'zdb -e'
anything in the logs?
Earlier I ran 'zdb -e -bcsvL tank' in write mode for 36 hours and gave up in
order to try something different. Now the zpool import has hung the box.
Should I try zdb again? Any suggestions?
Thanks,
Andrew
>
> On Jun 29, 2010, at 8:30 PM, Andrew Jones wrote:
>
> > Victor,
> >
> > The 'zpool import -f -F tank' failed at some point
> last night. The box was completely hung this morning;
> no core dump, no ability to SSH into the box to
> diagnose the
Victor,
I've reproduced the crash and have vmdump.0 and dump device files. How do I
query the stack on crash for your analysis? What other analysis should I
provide?
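The usual way to pull a stack out of a saved dump is savecore plus mdb - roughly like this, with the crash directory being wherever you saved the dump:
savecore -f vmdump.0 /var/crash/HL-SAN   # expands the compressed dump into unix.0 / vmcore.0
mdb unix.0 vmcore.0
> ::status    # panic string and dump summary
> ::stack     # stack of the panicking thread
> ::msgbuf    # kernel messages leading up to the panic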
Thanks
Victor,
A little more info on the crash, from the messages file, is attached here. I
have also decompressed the dump with savecore to generate unix.0, vmcore.0, and
vmdump.0.
Jun 30 19:39:10 HL-SAN unix: [ID 836849 kern.notice]
Jun 30 19:39:10 HL-SAN ^Mpanic[cpu3]/thread=ff0017909c60:
Jun
> Andrew,
>
> Looks like the zpool is telling you the devices are
> still doing work of
> some kind, or that there are locks still held.
>
Agreed; it appears the CSV1 volume is in a fundamentally inconsistent state
following the aborted zfs destroy attempt. See later in
e, because the features and functionality in zfs are
otherwise absolutely second to none.
/Andrew
>
> - Original Message -
> > Victor,
> >
> > The zpool import succeeded on the next attempt
> following the crash
> > that I reported to you by private e-mail!
> >
> > For completeness, this is the final status of the
> pool:
> >
> >
> > pool: tank
> > state: ONLINE
> > scan: resilvere
>
> Good. Run 'zpool scrub' to make sure there are no
> other errors.
>
> regards
> victor
>
Yes, scrubbed successfully with no errors. Thanks again for all of your
generous assistance.
/AJ
esn't get recognized. Probably because it was
never part of the original zpool. I also symlinked the new ZIL file into
/dev/dsk but that didn't make any difference either.
Any suggestions?
Andrew Kener
from
a backup source.
On Jul 6, 2010, at 11:48 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Andrew Kener
>>
>> the OS hard drive crashed [and log device]
>
> Here'
n't happen again (I have learned my lesson this time), I have ordered two
small SSD drives to put in a mirrored config for the log device. Thanks again
to everyone, and now I will get some worry-free sleep :)
Andrew
FS, amongst other Solaris features.
--
Andrew Gabriel
I find my home data growth is slightly less than the rate
of disk capacity increase, so every 18 months or so, I simply swap out
the disks for higher capacity ones.
--
Andrew Gabriel
takes nearly 19 hours now, and
hammers the heads quite hard. I keep meaning to reduce the scrub
frequency now that it's taking so long, but haven't got around to
it. What I really want is pause/resume scrub, and the ability to trigger
the pause/resume from the screensaver (or
between the machines due to the CPU limiting on the scp and gunzip
processes.
Also, if you have multiple datasets to send, it might be worth seeing if
sending them in parallel helps.
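A rough sketch of a parallel send, with host, pool and snapshot names purely illustrative:
zfs send tank/a@snap | ssh backuphost zfs receive -d backup &
zfs send tank/b@snap | ssh backuphost zfs receive -d backup &
wait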
--
Andrew Gabriel
tcat.
I haven't figured out where to get netcat nor the syntax for using it yet.
I used a buffering program of my own, but I presume mbuffer would work too.
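For anyone trying the netcat route, it ends up looking roughly like this (hosts, ports and dataset names are illustrative, and the listen syntax varies between netcat builds - some want "nc -l -p 9999"):
nc -l 9999 | zfs receive -d backup         # on the receiving host
zfs send tank/fs@snap | nc recvhost 9999   # on the sending host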
--
Andrew Gabriel
dea
for some other applications though (although Linux ran this way for many
years, seemingly without many complaints). Note that there's no
increased risk of the zpool going bad - it's just that after the reboot,
filesystems with sync=disabled will look like they were rewound
ential
implications before embarking on this route.
(As I said before, the zpool itself is not at any additional risk of
corruption, it's just that you might find the zfs filesystems with
sync=disabled appear to have been rewound by up to 30 seconds.)
If you're unsure, then adding SSD no
Just wondering if anyone has experimented with working out the best
volblocksize for a zvol which is backing a zpool over iSCSI?
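For reference, the block size has to be chosen when the zvol is created, e.g. (size and names illustrative):
zfs create -V 100G -o volblocksize=8K tank/iscsivol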
--
Andrew Gabriel
use for one which is stopped.
However, you haven't given anything like enough detail here of your
situation and what's happening for me to make any worthwhile guesses.
--
Andrew Gabriel
e you do a planned reduction of the pool
redundancy (e.g. if you're going to detach a mirror side in order to
attach a larger disk), most particularly if you are reducing the
redundancy to nothing.
--
Andrew Gabriel
ped out drives, this works well,
and avoids ending up with sprawling lower capacity drives as your pool
grows in size. This is what I do at home. The freed-up drives get used
in other systems and for off-site backups. Over the last 4 years, I've
upgraded from 1/4TB, to 1/2TB, and now on 1TB dri
Tony MacDoodle wrote:
I have 2 ZFS pools, all using the same drive type and size. The
question is: can I have one global hot spare for both of those pools?
Yes. A hot spare disk can be added to more than one pool at the same time.
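A sketch of sharing one spare between two pools (pool and device names are illustrative):
zpool add tank spare c4t0d0
zpool add backup spare c4t0d0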
--
Andrew Gabriel
o see output of: zfs list -t all -r zpool/filesystem
There is a problem - the snapshot is too old - and consequently a question:
can I browse the pre-rollback corrupted branch of the FS? And if I can, how?
--
Andrew Gabriel
undancy on flaky storage is not a good place to be.
--
Andrew Gabriel
different NICs (bge and e1000).
Unless you have some specific reason for thinking this is a ZFS issue,
you probably want to ask on the crossbow-discuss mailing list.
--
Andrew